1. Overview

In this tutorial, we’ll learn about Http4k, an alternative to Ktor for Kotlin server-side development.

Http4k sells itself as the “functional toolkit for Kotlin HTTP applications”. The toolkit takes direct inspiration from the Twitter paper “Your Server as a Function”, which argues that minimal functional abstractions are all we need to build elegant yet complex distributed systems. Http4k builds on that paper’s abstractions and provides a complete yet easy-to-understand framework for building web, API, and serverless applications. It does all of this with the beauty of minimalism and the power of functional programming.

2. Core Abstractions

Http4k’s core module defines the few basic abstractions that are consistently used throughout the toolkit.

As we’ll see, these core abstractions are symmetric for all supported integrations so that our client, server, or serverless applications will ultimately rely on the same functional types.

For developers, working with Http4k is a pleasingly consistent experience, also thanks to the clear path it offers for custom functional extensions. As a plus, the core abstractions depend only on Kotlin’s standard library.

2.1. HttpHandler

HttpHandler is the reference functional type for the business logic of our program. Technically, an HttpHandler is just the alias:

typealias HttpHandler = (request: Request) -> Response

A handler function’s body can express our business logic directly or delegate to dedicated services. Whatever the choice, the handler shapes our business logic as a transformation problem: from an immutable Request to an immutable Response.
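To get a feel for this, here’s a minimal self-contained sketch using simplified stand-in Request and Response types (illustration only; Http4k’s own message types are richer, immutable HTTP messages):

```kotlin
// Simplified stand-ins for Http4k's immutable message types (illustration only).
data class Request(val method: String, val uri: String, val body: String = "")
data class Response(val status: Int, val body: String = "")

// The core abstraction: business logic as a pure Request -> Response function.
typealias HttpHandler = (Request) -> Response

// A handler that upper-cases the request payload.
val shoutHandler: HttpHandler = { req -> Response(200, req.body.uppercase()) }

fun main() {
    // Handlers are plain functions, so we can invoke (and test) them directly.
    println(shoutHandler(Request("POST", "/shout", "hello")).body) // prints HELLO
}
```

Because a handler is just a function, exercising it needs no server, mocks, or framework plumbing at all.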

2.2. RoutingHttpHandler

RoutingHttpHandler is a special HttpHandler with additional routing capabilities. It’s widely used in the toolkit for selecting one out of several handlers using Request properties, such as headers, path, or query parameters.

The routes() utility function, together with the infix bind() and to() extensions, defines a concise routing DSL for creating RoutingHttpHandler instances, as we can see here for a simple ping service:

val app: HttpHandler = routes(
    "/ping" bind POST to {
        Response(OK)
    }
)

We can see how the routing DSL binds the /ping path to the HTTP POST method and then to the corresponding HttpHandler providing the basic Response for this example.

2.3. Optics

Http4k interacts with Request and Response instances via “optics”. Optics is the common name for functional objects that can access and update immutable data structures.

Http4k widely adopts lenses, a basic optic type, to manipulate Request and Response objects. The toolkit’s goal here is twofold: staying consistent with functional programming while avoiding the performance bottlenecks commonly introduced by reflection.

Lenses can access and inject headers, retrieve path and query parameters, consume and set payloads. Since lenses are composable, we can also define them in such a way as to minimize boilerplate code required for traversing and updating Request and Response.
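To make the idea concrete, here’s a minimal conceptual sketch of a lens, deliberately not Http4k’s actual lens API: a getter paired with a copy-on-write setter over an immutable type, composable to reach nested fields:

```kotlin
// A minimal conceptual lens (illustration only; Http4k's lenses also handle
// typing, validation, and missing values).
data class Lens<A, B>(val get: (A) -> B, val set: (A, B) -> A) {
    // Composition lets a lens reach into nested immutable structures.
    fun <C> compose(other: Lens<B, C>): Lens<A, C> = Lens(
        get = { a -> other.get(get(a)) },
        set = { a, c -> set(a, other.set(get(a), c)) }
    )
}

data class Body(val text: String)
data class Request(val uri: String, val body: Body)

val bodyLens = Lens<Request, Body>({ it.body }, { r, b -> r.copy(body = b) })
val textLens = Lens<Body, String>({ it.text }, { b, t -> b.copy(text = t) })
val bodyTextLens = bodyLens.compose(textLens)

fun main() {
    val req = Request("/echo", Body("hi"))
    println(bodyTextLens.get(req))                  // prints hi
    println(bodyTextLens.set(req, "bye").body.text) // prints bye
}
```

The composed bodyTextLens reads and updates the nested payload in one step, which is exactly the boilerplate reduction the toolkit’s lenses provide over Request and Response.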

2.4. Filter

A Filter is a higher-order function supporting a non-functional aspect of our application. Accordingly, a Filter consumes the HttpHandler passed as a parameter and produces a derived HttpHandler, enriched with that non-functional aspect. Technically, a Filter is just a Kotlin functional interface:

fun interface Filter : (HttpHandler) -> HttpHandler {
    companion object
}

The library provides basic Filter extensions for chaining filters together to enable both pre- and post-processing pipelines:

fun Filter.then(next: Filter): Filter = Filter { this(next(it)) }

fun Filter.then(next: HttpHandler): HttpHandler = this(next).let { http -> { http(it) } }

Http4k provides several useful filters that make our life a lot easier, enriching our application with support for basic and OAuth authentication, circuit breaking, bulkheading, and rate limiting, just to name a few. More importantly, we can selectively apply filters to exposed resources. This gives us a high level of control over the non-functional aspects of our application with relatively little code, all through a simple and concise functional approach.
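The chaining above can be sketched end to end with simplified stand-in types (illustration only; the HttpHandler overload of then() is simplified compared to the toolkit’s):

```kotlin
// Simplified stand-in types (illustration only).
data class Request(val uri: String, val headers: Map<String, String> = emptyMap())
data class Response(val status: Int, val headers: Map<String, String> = emptyMap())

typealias HttpHandler = (Request) -> Response

fun interface Filter : (HttpHandler) -> HttpHandler

// Chaining extensions mirroring the toolkit's own.
fun Filter.then(next: Filter): Filter = Filter { this(next(it)) }
fun Filter.then(next: HttpHandler): HttpHandler = this(next)

// Pre-processing: reject requests that carry no Authorization header.
val authFilter = Filter { next ->
    { req: Request ->
        if ("Authorization" in req.headers) next(req) else Response(401)
    }
}

// Post-processing: stamp every outgoing response with a Server header.
val serverHeaderFilter = Filter { next ->
    { req: Request ->
        val res = next(req)
        res.copy(headers = res.headers + ("Server" to "demo"))
    }
}

val endpoint: HttpHandler = { Response(200) }
val app: HttpHandler = authFilter.then(serverHeaderFilter).then(endpoint)

fun main() {
    println(app(Request("/secret")).status)                                    // prints 401
    println(app(Request("/secret", mapOf("Authorization" to "token"))).status) // prints 200
}
```

Note how the unauthenticated request short-circuits in the pre-processing step, while authorized responses flow back out through the post-processing step.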

3. Toolkit Integrations

Within Http4k, the HttpHandler functional type drives server, serverless and client-side logic. Of course, we’ll choose the proper runtime integration according to the handlers’ role in our program.

3.1. Servers

An Http4kServer provides the HTTP server backend for an HttpHandler used as a service. Http4kServer is an AutoCloseable type with basic lifecycle methods:

interface Http4kServer : AutoCloseable {
    fun start(): Http4kServer
    fun stop(): Http4kServer
    fun block() = Thread.currentThread().join()
    override fun close() {
        stop()
    }

    fun port(): Int
}

Http4kServer factories receive runtime parameters via ServerConfig instances:

interface ServerConfig {
    sealed class StopMode {
        object Immediate : StopMode()
        data class Graceful(val timeout: Duration) : StopMode()
    }

    class UnsupportedStopMode(stopMode: StopMode) :
        IllegalArgumentException("Server does not support stop mode $stopMode")

    val stopMode: StopMode get() = StopMode.Immediate

    fun toServer(http: HttpHandler): Http4kServer
}

The asServer() extension finally binds handlers, servers, and runtime configuration into a convenient factory for Http4kServer:

fun HttpHandler.asServer(config: ServerConfig): Http4kServer = config.toServer(this)

A backend implementation controls the lifecycle and runtime configuration of the underlying technology. Http4k provides a number of Http4kServer implementations, each one targeting a mainstream HTTP server, including Apache Tomcat, Jetty, and Undertow.

If app is the HttpHandler for our application, we can then expose it as a service over Jetty with a single line of code:

app.asServer(Jetty(8081)).start()

Moreover, should we need an additional backend integration, we could roll our own by implementing only two interfaces: a dedicated ServerConfig and the related Http4kServer.
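To illustrate how small such an integration can be, here’s a toy backend over the JDK’s built-in com.sun.net.httpserver, with simplified stand-in types rather than Http4k’s actual interfaces:

```kotlin
import com.sun.net.httpserver.HttpServer
import java.net.InetSocketAddress

// Simplified stand-ins for the toolkit's types (illustration only).
data class Request(val method: String, val uri: String, val body: String)
data class Response(val status: Int, val body: String)
typealias HttpHandler = (Request) -> Response

interface Http4kServer : AutoCloseable {
    fun start(): Http4kServer
    fun port(): Int
}

interface ServerConfig {
    fun toServer(http: HttpHandler): Http4kServer
}

// A toy ServerConfig over the JDK's built-in HTTP server; port 0 picks a free port.
class JdkServer(private val requestedPort: Int) : ServerConfig {
    override fun toServer(http: HttpHandler): Http4kServer = object : Http4kServer {
        private val server = HttpServer.create(InetSocketAddress(requestedPort), 0).apply {
            createContext("/") { exchange ->
                // Map the wire-level exchange to our Request, run the handler,
                // and map its Response back onto the exchange.
                val request = Request(
                    exchange.requestMethod,
                    exchange.requestURI.toString(),
                    exchange.requestBody.readBytes().decodeToString()
                )
                val response = http(request)
                val bytes = response.body.toByteArray()
                exchange.sendResponseHeaders(response.status, bytes.size.toLong())
                exchange.responseBody.use { it.write(bytes) }
            }
        }

        override fun start() = apply { server.start() }
        override fun port() = server.address.port
        override fun close() = server.stop(0)
    }
}

fun HttpHandler.asServer(config: ServerConfig): Http4kServer = config.toServer(this)
```

With these stand-ins, handler.asServer(JdkServer(8081)).start() mirrors the toolkit’s one-liner: the backend owns the lifecycle, while the handler stays a plain function.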

3.2. Clients

Http4k considers an HTTP client just as a special HttpHandler integration. In this case, the implementation goal is to map the input Request to the underlying client technology and then map the received HTTP envelope back to its matching Response.

Http4k provides integrations with mainstream HTTP clients, including OkHttp, Jetty, and Apache.

3.3. Serverless

The same core HttpHandler and Filter abstractions also power Http4k’s support for serverless runtime environments. At present, the toolkit can export HttpHandler instances as AWS Lambda functions (both HTTP- and SQS-driven, event-based applications), Google Cloud Functions, and Apache OpenWhisk functions.

4. The Toolkit in Action

To get a taste of the development experience with Http4k, let’s keep our business logic to a minimum and then see how rate limiting can be added to our endpoint through filters.

4.1. Setup

Let’s prepare our environment with minimal Maven dependencies for driving our Http4k example:
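A minimal setup might include the core module plus the Jetty server and Resilience4j integrations. The coordinates below use the org.http4k group as published on Maven Central; the version property is a placeholder to be filled with the latest release:

```xml
<dependency>
    <groupId>org.http4k</groupId>
    <artifactId>http4k-core</artifactId>
    <version>${http4k.version}</version>
</dependency>
<dependency>
    <groupId>org.http4k</groupId>
    <artifactId>http4k-server-jetty</artifactId>
    <version>${http4k.version}</version>
</dependency>
<dependency>
    <groupId>org.http4k</groupId>
    <artifactId>http4k-resilience4j</artifactId>
    <version>${http4k.version}</version>
</dependency>
```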


The latest version of these libraries can be found on Maven Central. As an alternative, we can bootstrap our Http4k application by using the toolbox available online.

4.2. Basic Routing

Let’s start by defining a simple echo service. We’ll first define an HttpHandler for the echo logic:

val echoHandler = { req: Request -> Response(OK).body(req.body) }

Then, we’ll define our application routing to expose the echoHandler under the /echo endpoint:

val app: HttpHandler = routes(
    "/echo" bind POST to echoHandler
)

4.3. Application Testing

Our application simply returns a Response object with a 200 status code, the returned payload being the same as the calling request. We can test these assumptions just using plain Request and Response objects, even without any binding to a server-side integration.

As an example, let’s ensure our echo application really gives back the request payload once invoked:

val testPayload = "hello"
val expectedResponse = Response(OK).body(testPayload)
val appResponse = app(Request(POST, "/echo").body(testPayload))

assertEquals(expectedResponse, appResponse)

This example just scratches the surface of Http4k support for testing. Indeed, the toolkit supports a variety of testing approaches, including integration, chaos, approval, and service virtualization testing.

4.4. Service Exposure

We can now transform our application into an Http4kServer:

val server = app.asServer(Jetty(8081)).start()

Here, we decided to use Jetty to power the application, so we are feeding the asServer() extension with the Jetty server configuration. The start() method triggers the server thread, which is now listening for requests on port 8081. On our machine, we can verify the endpoint is working with a simple curl command:

$ curl -v -X POST http://localhost:8081/echo -d "Hello Http4k"

In the command output, we can check the service responded correctly:

*   Trying 127.0.0.1:8081...
* Connected to localhost (127.0.0.1) port 8081 (#0)
> POST /echo HTTP/1.1
> Host: localhost:8081
> User-Agent: curl/7.68.0
> Accept: */*
> Content-Length: 12
> Content-Type: application/x-www-form-urlencoded
* upload completely sent off: 12 out of 12 bytes
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Tue, 04 Jul 2023 09:56:47 GMT
< Content-Length: 12
< Server: Jetty(11.0.15)
* Connection #0 to host localhost left intact
Hello Http4k

4.5. Service Resilience

For the example’s sake, let’s suppose we need to limit our service’s request rate, which is quite a common requirement in the microservices world. Rate limiting could be a whole topic on its own, but here we’ll simply see how easily this behavior can be added with an Http4k Filter.

Firstly, we import the Http4k resilience module and define a basic configuration for a Resilience4j limiter:

val rateLimitingConfig: RateLimiterConfig = RateLimiterConfig.custom()
    .limitRefreshPeriod(Duration.ofMinutes(1))
    .limitForPeriod(1)
    .timeoutDuration(Duration.ofMillis(10))
    .build()

Secondly, we chain our echo handler to a rate-limiting Filter using the rateLimitingConfig:

val app: HttpHandler = routes(
    "/echo" bind POST to ResilienceFilters.RateLimit(RateLimiter.of("echo-rate-limit", rateLimitingConfig))
        .then(echoHandler)
)

We deliberately configured only one request per minute, just to show the Filter’s effect. If we now run our curl command more than once within a minute, we’ll hit a 429 error:

*   Trying 127.0.0.1:8081...
* Connected to localhost (127.0.0.1) port 8081 (#0)
> POST /echo HTTP/1.1
> Host: localhost:8081
> User-Agent: curl/7.68.0
> Accept: */*
> Content-Length: 12
> Content-Type: application/x-www-form-urlencoded
* upload completely sent off: 12 out of 12 bytes
* Mark bundle as not supporting multiuse
< HTTP/1.1 429 Too Many Requests
< Date: Tue, 04 Jul 2023 10:24:50 GMT
< Content-Length: 0
< Server: Jetty(11.0.15)
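Conceptually, the rate-limiting filter is just pre-processing that short-circuits with a 429 once the per-window budget is spent. Here’s a naive fixed-window sketch with simplified stand-in types (illustration only; the real filter delegates the bookkeeping to Resilience4j):

```kotlin
// Simplified stand-ins (illustration only).
data class Request(val uri: String)
data class Response(val status: Int)
typealias HttpHandler = (Request) -> Response
fun interface Filter : (HttpHandler) -> HttpHandler

// A naive fixed-window limiter: at most `limit` requests per `windowMillis`.
fun rateLimit(
    limit: Int,
    windowMillis: Long,
    clock: () -> Long = System::currentTimeMillis
) = Filter { next ->
    var windowStart = clock()
    var count = 0
    { req: Request ->
        val now = clock()
        if (now - windowStart >= windowMillis) {
            // New window: reset the budget.
            windowStart = now
            count = 0
        }
        if (count < limit) {
            count++
            next(req) // within budget: pass through to the wrapped handler
        } else {
            Response(429) // budget spent: short-circuit, never reaching the handler
        }
    }
}

fun main() {
    val endpoint: HttpHandler = { Response(200) }
    val app = rateLimit(limit = 1, windowMillis = 60_000)(endpoint)
    println(app(Request("/echo")).status) // prints 200
    println(app(Request("/echo")).status) // prints 429
}
```

Because the limiter is just another Filter, it composes with authentication, logging, or any other non-functional concern through the same then() chaining.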

5. Coroutines Support

If we take a closer look at Filter and HttpHandler, we’ll notice that their declarations don’t include the suspend keyword at all. Http4k doesn’t support Kotlin coroutines, and according to this discussion, it’s unlikely it will anytime soon.

If a blocking threading model is truly a concern, we should follow with interest the Http4k integrations built on Project Loom. Virtual threads are arriving in mainline JDK releases, and the current version of the toolkit already provides Loom-based SunHttp and Jetty server integrations.

Soon, it may thus be fair to say that Http4k supports lightweight concurrency, albeit based on virtual threads rather than coroutines.

6. Conclusion

In this article, we looked at Http4k and its elegant, functional approach to Kotlin backend development.

As developers, we can appreciate the toolkit core abstractions, for they are sound, easy to understand, and focused on “getting things done”. Http4k’s pragmatic approach, however, comes with some compromises, the lack of coroutines support being the most notable one.

Whether we should consider the missing coroutines support a limitation really depends on our needs and goals.

Indeed, if we consider the upcoming virtual threads support and how vast the toolkit’s integrations are with existing technologies, there are reasons to say Http4k’s functional approach can really be a good fit for many projects.

As always, the code samples can be found over on GitHub.
