1. Overview

With the widespread use of generative AI, and ChatGPT in particular, many languages now provide libraries for interacting with the OpenAI API. Java is no exception.

In this tutorial, we'll look at openai-java, a client that makes communicating with the OpenAI API more convenient. Since reviewing the entire library in a single article isn't feasible, we'll work through a practical example and build a simple console tool connected to ChatGPT.

Note: the openai-java library used in this tutorial was archived on June 6, 2024, and is no longer maintained. You can now use the official Java library from OpenAI instead: https://github.com/openai/openai-java.

2. Dependencies

First, we must import the required dependencies for our project. We can find the libraries in the Maven Central repository. These three modules cover different aspects of the interaction:

<dependency>
    <groupId>com.theokanning.openai-gpt3-java</groupId>
    <artifactId>service</artifactId>
    <version>0.18.2</version>
</dependency>

<dependency>
    <groupId>com.theokanning.openai-gpt3-java</groupId>
    <artifactId>api</artifactId>
    <version>0.18.2</version>
</dependency>

<dependency>
    <groupId>com.theokanning.openai-gpt3-java</groupId>
    <artifactId>client</artifactId>
    <version>0.18.2</version>
</dependency>

Please note that while the group ID explicitly mentions GPT-3, the library works with GPT-4 as well.

3. Baeldung Tutor

In this tutorial, we'll build a tool that helps us create a curriculum based on the articles and tutorials from our favorite learning platform, or at least tries to. While the internet provides virtually unlimited resources and we can find almost anything online, curating that information has become much harder.

Learning new things can be overwhelming, as it's hard to identify the best learning path and filter out material that won't benefit us. To address this problem, we'll build a simple client that interacts with ChatGPT and asks it to guide us through the vast ocean of Baeldung articles.

4. OpenAI API Token

The first step is to connect our application to the OpenAI API. To do so, we need to provide an OpenAI token, which can be generated on the website:

ChatGPT API Key Generation

However, we should handle the token carefully and avoid exposing it. The openai-java examples read it from an environment variable. This might not be the best solution for production, but it works well for small experiments.

When running the application, we don't necessarily need to define the environment variable for our entire machine; we can set it in a run configuration in our IDE. For example, IntelliJ IDEA provides a simple way to do so.

We can generate two types of tokens: personal tokens and service account tokens. The personal token is self-explanatory. Service account tokens are meant for bots or applications connected to OpenAI projects. While both would work, a personal token is good enough for our purpose.
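
As a quick illustration, here's a minimal sketch of reading the token and failing fast if it's missing; the variable name OPENAI_TOKEN is simply the one we use throughout this tutorial:

// Read the API token from the environment and fail fast if it's not set
String token = System.getenv("OPENAI_TOKEN");
if (token == null || token.isEmpty()) {
    throw new IllegalStateException("The OPENAI_TOKEN environment variable is not set");
}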

5. OpenAiService

The entry point to the OpenAI APIs is the class conveniently named OpenAiService. An instance of this class allows us to call the APIs and receive responses from ChatGPT. To create it, we pass the token we generated in the previous step:

String token = System.getenv("OPENAI_TOKEN");
OpenAiService service = new OpenAiService(token);

This is only the first step in our journey; next, we need to gather the required information and populate the request.

5.1. ChatCompletionRequest

We create a request using ChatCompletionRequest. The minimal setup requires us to provide only the messages and a model:

ChatCompletionRequest chatCompletionRequest = ChatCompletionRequest
  .builder()
  .model(GPT_3_5_TURBO_0301.getName())
  .messages(messages)
  .build();

Let’s review these parameters step-by-step.

5.2. Model

It's essential to pick a model that fits our requirements, and the choice also affects the costs, so we need to make it deliberately. For example, there's often no need to use the most advanced model just to clean text or parse it based on some simple format. At the same time, more complex or important tasks may require more advanced models to reach our goals.

While we can pass the model name directly, it’s better to use the ModelEnum:

@Getter
@AllArgsConstructor
public enum ModelEnum {         
    GPT_3_5_TURBO("gpt-3.5-turbo"),
    GPT_3_5_TURBO_0301("gpt-3.5-turbo-0301"),
    GPT_4("gpt-4"),
    GPT_4_0314("gpt-4-0314"),
    GPT_4_32K("gpt-4-32k"),
    GPT_4_32K_0314("gpt-4-32k-0314"),
    GPT_4_1106_preview("gpt-4-1106-preview");
    private String name;
}

It doesn’t contain all the models, but in our case, it’s enough. If we want to use a different model, we can provide its name as a String.
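
For example, here's a small sketch of passing the model identifier as a plain String; the name used below is just an illustrative placeholder for any identifier the API accepts:

ChatCompletionRequest chatCompletionRequest = ChatCompletionRequest
  .builder()
  .model("gpt-4o-mini") // illustrative placeholder for any model identifier the API accepts
  .messages(messages)
  .build();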

5.3. Messages

Next come the messages. We use the ChatMessage class for them, passing only the role and the message content itself:

List<ChatMessage> messages = new ArrayList<>();
ChatMessage systemMessage = new ChatMessage(ChatMessageRole.SYSTEM.value(), PROMPT);
messages.add(systemMessage);

The interesting part is that we send a collection of messages. Although in a usual chat we communicate by sending messages one by one, here it's more similar to an email thread.

The system works on completions and appends the next message to the chain. This way, we can maintain the context of the conversation. We can think of it as a stateless service, which means we must pass the previous messages ourselves to keep the context.

Alternatively, we could create an assistant. With this approach, the messages are stored in threads, so we don't need to send the entire history back and forth.

Passing the content of the messages is self-explanatory, but the purpose of the roles isn't as obvious. Because we send all the messages at once, we need some way to identify which participant each message belongs to, and that's what the roles are for.

5.4. Roles

As was mentioned, roles are crucial for ChatGPT to understand the context of the conversation. We can use them to identify the actors behind the messages and thus help ChatGPT interpret the messages correctly. ChatMessage supports four roles: system, user, assistant, and function:

public enum ChatMessageRole {
    SYSTEM("system"),
    USER("user"),
    ASSISTANT("assistant"),
    FUNCTION("function");

    private final String value;

    ChatMessageRole(final String value) {
        this.value = value;
    }

    public String value() {
        return value;
    }
}

Usually, the SYSTEM role provides the initial context or prompt. The USER role represents the person talking to ChatGPT, and the ASSISTANT role is ChatGPT itself, which means that, technically, we can also write messages from the assistant's standpoint. As its name suggests, the FUNCTION role identifies the functions the assistant can use.
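
To see how the roles and the message history fit together, here's a short sketch that keeps the context by appending both our messages and the model's reply to the same list; the prompt texts are purely illustrative:

List<ChatMessage> messages = new ArrayList<>();
// The SYSTEM message sets the overall behavior of the assistant
messages.add(new ChatMessage(ChatMessageRole.SYSTEM.value(), "You recommend Baeldung articles."));
// The USER message carries our actual question
messages.add(new ChatMessage(ChatMessageRole.USER.value(), "Where should I start with binary trees?"));

ChatCompletionRequest request = ChatCompletionRequest.builder()
  .model(GPT_3_5_TURBO_0301.getName())
  .messages(messages)
  .build();
ChatMessage reply = service.createChatCompletion(request).getChoices().get(0).getMessage();

// Appending the ASSISTANT reply keeps the context for the follow-up question
messages.add(reply);
messages.add(new ChatMessage(ChatMessageRole.USER.value(), "What should I read next?"));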

5.5. Tokens

While we previously talked about access tokens for the API, the word has a different meaning in the context of models and messages. Here, we can think of tokens as the units of information the model processes and the amount we want to get back in the response.

We can restrict the model from generating huge responses by limiting the number of tokens in the response:

ChatCompletionRequest chatCompletionRequest = ChatCompletionRequest
  .builder()
  .model(MODEL)
  .maxTokens(MAX_TOKENS)
  .messages(messages)
  .build();

There's no direct mapping between words and tokens, because each model processes text slightly differently. This parameter caps the answer at a specific number of tokens. Relying on the default might allow excessively long responses and increase our usage bill, so it's good practice to configure it explicitly.

We can print the number of tokens used after each response:

long usedTokens = result.getUsage().getTotalTokens();
System.out.println("Total tokens used: " + usedTokens);

5.6. Tokenization

In the previous example, we displayed the number of tokens used in the response. While this information is valuable, we often need to estimate the size of the request as well. To achieve this, we can use tokenizers provided by OpenAI.

To do this in a more automated way, openai-java provides us with TikTokensUtil, to which we can pass the name of the model and the messages and get the number of tokens as a result.
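
For instance, here's a minimal sketch, assuming the tokens() utility method accepts the model name and our message list as described above:

// Estimate the size of the request before sending it
int promptTokens = TikTokensUtil.tokens(GPT_3_5_TURBO_0301.getName(), messages);
System.out.println("Estimated prompt tokens: " + promptTokens);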

5.7. Options

An additional method we can use to configure our request is mysteriously named n(). It controls how many responses we want to get for each request. Simply put, we can get, for example, two different answers to the same request. By default, we get only one.

This can sometimes be useful for bots and website assistants. However, we're billed for the tokens across all the generated options.
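
Here's a small sketch of requesting two alternative completions and printing each returned choice:

// Ask for two alternative completions of the same conversation
ChatCompletionRequest chatCompletionRequest = ChatCompletionRequest
  .builder()
  .model(GPT_3_5_TURBO_0301.getName())
  .messages(messages)
  .n(2)
  .build();

// Each choice carries its own assistant message
service.createChatCompletion(chatCompletionRequest)
  .getChoices()
  .forEach(choice -> System.out.println(choice.getMessage().getContent()));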

5.8. Bias and Randomization

We can use a couple of additional options to control the randomness and bias of ChatGPT's answers. For example, logitBias() makes it more or less probable that specific tokens appear in the response. Note that we're talking about tokens here, not particular words, and even a strong bias doesn't guarantee that a token will or won't appear.

We can also use topP() and temperature() to tune how random the responses are. While this is useful in some cases, we won't change the defaults for our learning tool.
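
For completeness, here's a sketch of how these options plug into the builder; the concrete values, and the token ID used in the bias map, are purely illustrative:

// Purely illustrative values: a low temperature for more deterministic answers,
// and a bias map keyed by token ID (not by word) that strongly discourages one token
Map<String, Integer> logitBias = Map.of("1234", -100);

ChatCompletionRequest chatCompletionRequest = ChatCompletionRequest
  .builder()
  .model(GPT_3_5_TURBO_0301.getName())
  .messages(messages)
  .temperature(0.2)
  .topP(1.0)
  .logitBias(logitBias)
  .build();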

6. Curriculum

Now, let's see our tool in action. Here's the overall code:

public static void main(String[] args) {
    String token = System.getenv("OPENAI_TOKEN");
    OpenAiService service = new OpenAiService(token);

    List<ChatMessage> messages = new ArrayList<>();
    ChatMessage systemMessage = new ChatMessage(ChatMessageRole.SYSTEM.value(), PROMPT);
    messages.add(systemMessage);

    System.out.print(GREETING);
    Scanner scanner = new Scanner(System.in);
    ChatMessage firstMsg = new ChatMessage(ChatMessageRole.USER.value(), scanner.nextLine());
    messages.add(firstMsg);

    while (true) {
        ChatCompletionRequest chatCompletionRequest = ChatCompletionRequest
          .builder()
          .model(GPT_3_5_TURBO_0301.getName())
          .messages(messages)
          .build();
        ChatCompletionResult result = service.createChatCompletion(chatCompletionRequest);
        long usedTokens = result.getUsage().getTotalTokens();
        ChatMessage response = result.getChoices().get(0).getMessage();

        messages.add(response);

        System.out.println(response.getContent());
        System.out.println("Total tokens used: " + usedTokens);
        System.out.print("Anything else?\n");
        String nextLine = scanner.nextLine();
        if (nextLine.equalsIgnoreCase("exit")) {
            System.exit(0);
        }
        messages.add(new ChatMessage(ChatMessageRole.USER.value(), nextLine));
    }
}

If we run it, we can interact with it via the console:

Hello!
What do you want to learn?

In response, we can write the topics that we’re interested in:

$ I would like to learn about binary trees.

As expected, the tool provides us with some articles we can use to learn about the topic:

Great! Here's a suggested order for Baeldung's articles on binary trees:

1. Introduction to Binary Trees: https://www.baeldung.com/java-binary-tree-intro
2. Implementing a Binary Tree in Java: https://www.baeldung.com/java-binary-tree
3. Depth First Traversal of Binary Tree: https://www.baeldung.com/java-depth-first-binary-tree-traversal
4. Breadth First Traversal of Binary Tree: https://www.baeldung.com/java-breadth-first-binary-tree-traversal
5. Finding the Maximum Element in a Binary Tree: https://www.baeldung.com/java-binary-tree-maximum-element
6. Binary Search Trees in Java: https://www.baeldung.com/java-binary-search-tree
7. Deleting from a Binary Search Tree: https://www.baeldung.com/java-binary-search-tree-delete

I hope this helps you out!
Total tokens used: 288
Anything else?

This way, we seemingly solved our problem of creating a curriculum for learning new things. However, not everything is so bright: only one of the articles is real. For the most part, ChatGPT listed non-existent articles with plausible-looking links. While the names and the links sound reasonable, they won't lead us anywhere.

This is a crucial aspect of any AI tool. Generative models have a hard time checking the validity of the information they produce. Since they work by predicting the most appropriate next word, they can't easily verify facts, so we cannot rely 100% on the information they return.

7. Conclusion

AI tools are great for improving applications and automating daily chores, from processing emails and creating shopping lists to optimizing education. Java provides several ways to interact with the OpenAI APIs, and openai-java is one such library.

However, it's important to remember that generative models, despite being quite convincing, have trouble validating whether the information is true. Thus, it's our responsibility to either recheck crucial information or give the model enough context to provide valid answers.

The code backing this article is available on GitHub. Once you're logged in as a Baeldung Pro Member, start learning and coding on the project.