Introduction to Jlama
Last updated: February 11, 2026
1. Overview
Jlama is an inference engine, which means it runs a pre-trained AI model to generate outputs without training the model itself. In other words, it enables us to run large language models (LLMs) directly on a local machine, without relying on an external API. This makes it easy to use AI models locally in Java applications.
In this short, hands-on tutorial, we’ll quickly get started with Jlama. Using Java and Maven, we’ll download a model from Hugging Face, configure a prompt, and run it locally against the model.
2. Integrating Jlama Into a Maven Project
There are a few different ways to set up and start using Jlama:
- the jlama-cli module offers a CLI tool for quick experimentation and interactive sessions
- for applications requiring HTTP integration, the jlama-net module lets us deploy Jlama as a REST API service with OpenAI-compatible endpoints
- the jlama-native module lets us embed the inference engine directly into a Maven project and run models straight from code
To get started, let’s add the necessary dependencies to the pom.xml file. Apart from jlama-native, we also import the jlama-core module, which provides the Java API to interact with the embedded inference engine:
<dependency>
    <groupId>com.github.tjake</groupId>
    <artifactId>jlama-native</artifactId>
    <!-- supports linux-x86_64, macos-x86_64/aarch_64, windows-x86_64 -->
    <classifier>${jlama-native.classifier}</classifier>
    <version>0.8.4</version>
</dependency>
<dependency>
    <groupId>com.github.tjake</groupId>
    <artifactId>jlama-core</artifactId>
    <version>0.8.4</version>
</dependency>
Jlama targets Java 21 and relies on preview features, most notably the Vector API (shipped in the jdk.incubator.vector incubator module) for high-performance vector operations. To enable these features, we need to configure the JVM with the appropriate flags. For example, we can add these options to the Maven compiler and surefire plugins, or set them via an environment variable:
export JDK_JAVA_OPTIONS="--add-modules jdk.incubator.vector --enable-preview"
On Windows, we use set instead of export. With these flags in place, Jlama can take advantage of hardware-accelerated vector instructions.
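If we prefer to keep the flags in the build itself, here's a minimal sketch of how they might be wired into the Maven compiler and surefire plugins (the exact plugin configuration is an illustration, not taken from the Jlama documentation):

```xml
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <configuration>
                <release>21</release>
                <compilerArgs>
                    <arg>--add-modules=jdk.incubator.vector</arg>
                    <arg>--enable-preview</arg>
                </compilerArgs>
            </configuration>
        </plugin>
        <plugin>
            <!-- pass the same flags to the test JVM -->
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <configuration>
                <argLine>--add-modules jdk.incubator.vector --enable-preview</argLine>
            </configuration>
        </plugin>
    </plugins>
</build>
```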
3. Running Prompts
Now, let’s download a model and run an initial prompt.
First, we select a model from the options available at huggingface.co/tjake. Loading it is fairly simple, as the Jlama API can load a model from the local filesystem or automatically download it from Hugging Face if it's not already present:
static AbstractModel loadModel(String workingDir, String model) throws IOException {
    File localModelPath = new Downloader(workingDir, model)
      .huggingFaceModel();
    return ModelSupport.loadModel(localModelPath, DType.F32, DType.I8);
}
As we can see, ModelSupport.loadModel() accepts two data type parameters: the working memory type and the working quantization type. DType.F32 tells Jlama to perform computations with 32-bit floating-point numbers for precision, while DType.I8 uses 8-bit integers, trading some accuracy for a much smaller memory footprint.
After that, we can use this model to generate text from a prompt. To that end, the Jlama API provides an elegant builder pattern that lets us configure the generation parameters declaratively. For instance, we can set a session ID to maintain context across multiple prompts, specify the maximum number of tokens for the response, and control the creativity of the output through the temperature parameter:
public static void main(String[] args) throws IOException {
    // available models: https://huggingface.co/tjake
    AbstractModel model = loadModel("./models", "tjake/Llama-3.2-1B-Instruct-JQ4");
    PromptContext prompt = PromptContext.of("Why are llamas so cute?");

    Generator.Response response = model.generateBuilder()
      .session(UUID.randomUUID())
      .promptContext(prompt)
      .ntokens(256)
      .temperature(0.3f)
      .generate();

    System.out.println(response.responseText);
}
And that’s it! This is all we need to download the model and run it locally using Java and Maven.
4. Conclusion
In this short article, we learned how to get started with Jlama by integrating it into a Maven-based Java project.
To begin with, we downloaded a model from Hugging Face, configured a prompt, and ran inference locally using the Jlama Java API. With this foundation, it’s fairly straightforward to begin experimenting with different models, prompts, and generation settings to build AI-powered features directly into Java applications.
As always, the code in this article is available over on GitHub.