1. Introduction

Docker Model Runner, introduced in Docker Desktop 4.40 for Mac with Apple Silicon (at the time of writing), streamlines local AI development by simplifying the deployment and management of large language models (LLMs). It tackles common challenges such as complex setup processes, high cloud inference costs, and data privacy concerns.

By providing an OpenAI-compatible Inference API, Model Runner enables seamless integration with frameworks like Spring AI, allowing developers to build AI-powered applications locally with ease. In this tutorial, we’ll learn how to set up Docker Model Runner and create a Spring AI application that connects to it. By the end, we’ll have a fully functional local AI application leveraging a powerful LLM.

2. Docker Model Runner

Docker Model Runner is a tool designed to simplify the deployment and execution of LLMs inside Docker containers. It’s an AI Inference Engine offering a wide range of models from various providers.

Let’s see the key features that Docker Model Runner includes:

  • Simplified Model Deployment: Models are distributed as standard Open Container Initiative (OCI) artifacts on Docker Hub under the ai namespace. This makes it easy to pull, run, and manage AI models directly within Docker Desktop.

  • Broad Model Support: Supports a variety of LLMs from multiple providers, such as Mistral, LLaMA, and Phi-4, ensuring flexibility in model selection.

  • Local Inference: Runs models locally, enhancing data privacy and eliminating dependency on cloud-based inference.

  • OpenAI-Compatible API: Provides a standardized API that integrates effortlessly with existing AI frameworks, reducing development overhead.

3. Set Up Environment

This section outlines the prerequisites for using Docker Model Runner and Maven dependencies to create a Spring AI application that uses the Model Runner.

3.1. Prerequisites

To use Docker Model Runner, we’ll need a few things:

  • Docker Desktop 4.40 or later: Installed on a Mac with Apple Silicon.

  • Java 21 or later: Required for Spring AI development.

  • Model Runner-compatible LLM: A model compatible with Docker Model Runner, such as LLaMA or Gemma 3.

3.2. Maven Dependencies

Let’s start by importing the spring-boot-starter-web, spring-ai-openai-spring-boot-starter, spring-ai-spring-boot-testcontainers, and junit-jupiter dependencies to the pom.xml:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-openai-spring-boot-starter</artifactId>
    <version>1.0.0-M6</version>
</dependency>
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-spring-boot-testcontainers</artifactId>
    <version>1.0.0-M6</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>junit-jupiter</artifactId>
    <version>1.19.8</version>
    <scope>test</scope>
</dependency>
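
Since both Spring AI starters share the same milestone version, we can optionally import the Spring AI BOM instead of repeating the version on each artifact. This is a minimal sketch assuming the same 1.0.0-M6 milestone; note that milestone releases also require the Spring Milestones repository to be declared in the pom.xml:

<dependencyManagement>
    <dependencies>
        <!-- keeps all org.springframework.ai artifact versions aligned -->
        <dependency>
            <groupId>org.springframework.ai</groupId>
            <artifactId>spring-ai-bom</artifactId>
            <version>1.0.0-M6</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

With the BOM in place, the <version> tags on the two Spring AI dependencies above can be dropped.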

4. Enabling and Configuring Docker Model Runner

This section outlines the steps to enable Docker Model Runner and pull a specific model using two distinct methods.

4.1. Enable Model Runner With a Specific TCP Port

First, let’s  enable Model Runner and expose it on a specific TCP port (e.g., 12434):

docker desktop enable model-runner --tcp 12434

This configures Model Runner to listen on http://localhost:12434/engines. In our Spring AI application, we need to configure the api-key, model, and base URL to point to the Model Runner endpoint. Since Model Runner itself doesn't validate the key, any placeholder value works for the api-key property:

spring.ai.openai.api-key=${OPENAI_API_KEY}
spring.ai.openai.base-url=http://localhost:12434/engines
spring.ai.openai.chat.options.model=ai/gemma3

4.2. Enable Model Runner With Testcontainers

We can run the following command to enable Model Runner without specifying a port:

docker desktop enable model-runner

This sets up Model Runner to run on the default internal Docker network. Then, we use Testcontainers and set the base-url, api-key, and model as follows:

@TestConfiguration(proxyBeanMethods = false)
class TestcontainersConfiguration {

    // socat-based container that forwards requests to the Model Runner service
    @Bean
    DockerModelRunnerContainer socat() {
        return new DockerModelRunnerContainer("alpine/socat:1.8.0.1");
    }

    // point the Spring AI OpenAI client at the container's endpoint
    @Bean
    DynamicPropertyRegistrar properties(DockerModelRunnerContainer dmr) {
        return registrar -> {
            registrar.add("spring.ai.openai.base-url", dmr::getOpenAIEndpoint);
            registrar.add("spring.ai.openai.api-key", () -> "test-api-key");
            registrar.add("spring.ai.openai.chat.options.model", () -> "ai/gemma3");
        };
    }
}

The TestcontainersConfiguration class is a Spring Boot @TestConfiguration designed for integration testing with Testcontainers. It defines two beans: a DockerModelRunnerContainer that starts a container from the alpine/socat:1.8.0.1 image, and a DynamicPropertyRegistrar that registers the Spring AI properties at runtime. The socat container simply forwards traffic to the internal model-runner.docker.internal service, so the tests reach Model Runner without any external dependencies. The registrar points the AI client at the container's endpoint (via getOpenAIEndpoint()), sets a dummy API key (test-api-key), and selects the model (ai/gemma3). Finally, @TestConfiguration(proxyBeanMethods = false) keeps bean creation lightweight, since no proxying between bean methods is needed.

4.3. Pulling and Verifying the Gemma 3 Model

Now, after enabling Model Runner using one of the options, we pull the Gemma 3 model:

docker model pull ai/gemma3

Then, we can confirm it’s available locally:

docker model list

This command lists all locally available models, including ai/gemma3.
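
Before moving on to Spring AI, we can sanity-check the model through Model Runner's OpenAI-compatible API. This is a minimal sketch that assumes Model Runner was exposed on TCP port 12434 as in Section 4.1, so the chat completions endpoint lives under the /engines base path configured earlier:

curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ai/gemma3",
        "messages": [
          { "role": "user", "content": "Say hello in one short sentence." }
        ]
      }'

If the model is loaded correctly, Model Runner returns a standard OpenAI-style chat completion JSON response.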

5. Integration With Spring AI

Now, let’s create a simple controller to interact with the model:

@RestController
class ModelRunnerController {
    private final ChatClient chatClient;

    public ModelRunnerController(ChatClient.Builder chatClientBuilder) {
        this.chatClient = chatClientBuilder.build();
    }

    @GetMapping("/chat")
    public String chat(@RequestParam("message") String message) {
        return this.chatClient.prompt()
          .user(message)
          .call()
          .content();
    }
}
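
As a small variation, the ChatClient.Builder also lets us attach a default system prompt that's applied to every request sent through the client. Here's a sketch of what the controller's constructor could look like; the prompt text is just an illustrative example:

public ModelRunnerController(ChatClient.Builder chatClientBuilder) {
    // every prompt sent through this client carries the default system instruction
    this.chatClient = chatClientBuilder
      .defaultSystem("You are a concise assistant that answers in at most two sentences.")
      .build();
}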

5.1. Testing Model Runner With a Specific TCP Port

With the properties from Section 4.1 pointing the OpenAI client at the Model Runner endpoint and at the model we pulled earlier, we can start the application and test the /chat endpoint:

curl "http://localhost:8080/chat?prompt=What%20is%20the%20future%20of%20AI%20development?"

The response will be generated by the Gemma 3 model running in Model Runner.

5.2. Testing Model Runner With Testcontainers

Let’s create the ModelRunnerApplicationTest class. It will import the TestcontainersConfiguration class and call the sample controller:

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@Import(TestcontainersConfiguration.class)
class ModelRunnerApplicationTest {
    // ...

    @Test
    void givenMessage_whenCallChatController_thenSuccess() {
        // given
        String userMessage = "Hello, how are you?";

        // when
        ResponseEntity<String> response = restTemplate.getForEntity(
          baseUrl + "/chat?message=" + userMessage, String.class);

        // then
        assertThat(response.getStatusCode().is2xxSuccessful()).isTrue();
        assertThat(response.getBody()).isNotEmpty();
    }
}

The @Import(TestcontainersConfiguration.class) annotation pulls in the configuration defined earlier, which starts the DockerModelRunnerContainer (running alpine/socat:1.8.0.1) and dynamically registers the Spring AI properties (spring.ai.openai.base-url, spring.ai.openai.api-key, and spring.ai.openai.chat.options.model). As a result, the test runs against the Model Runner endpoint exposed through the Testcontainers-managed container, without any manual configuration.
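
With Docker Desktop running, Model Runner enabled, and the ai/gemma3 model pulled, we can run the integration test like any other Maven test, for example:

mvn test -Dtest=ModelRunnerApplicationTest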

6. Conclusion

Docker Model Runner provides a developer-friendly, privacy-focused, and cost-effective solution for running LLMs locally, particularly for those building GenAI applications within the Docker ecosystem. In this article, we explored Docker Model Runner’s capabilities and demonstrated its integration with Spring AI.

As always, the code backing this article is available on GitHub.