1. Introduction

In this tutorial, we’ll learn how to import data from a CSV file into Elasticsearch using Spring Boot. Importing data from a CSV file is a common use case when we need to migrate data from legacy systems or external sources, or to prepare test datasets.

2. Setting Up Elasticsearch With Docker

To use Elasticsearch, we'll set it up locally using Docker. First, let's pull the official Elasticsearch image:

docker pull docker.elastic.co/elasticsearch/elasticsearch:8.17.0

Next, we run the container using the command:

docker run -d --name elasticsearch -p 9200:9200 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:8.17.0
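
Note that, starting with Elasticsearch 8, security features are enabled by default, so the HTTP endpoint expects TLS and authentication. Since the examples in this tutorial talk to the cluster over plain HTTP on localhost, we'll assume security is switched off for local testing (never do this in production), for instance by adding one more environment variable to the command above:

docker run -d --name elasticsearch -p 9200:9200 \
  -e "discovery.type=single-node" \
  -e "xpack.security.enabled=false" \
  docker.elastic.co/elasticsearch/elasticsearch:8.17.0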

Let’s create a sample CSV file “products.csv” with the following data:

id,name,category,price,stock
1,Microwave,Appliances,705.77,136
2,Vacuum Cleaner,Appliances,1397.23,92
...

3. Using a Manual for-Loop to Process CSV Data

The first method involves using a manual for-loop to read and index records from a CSV file into Elasticsearch. To implement it, we’ll use the Apache Commons CSV library to parse the CSV file and the Elasticsearch High Level REST Client to communicate with the Elasticsearch cluster.

Let’s start by adding the required dependencies into our pom.xml file:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-csv</artifactId>
    <version>1.12.0</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>7.17.11</version>
</dependency>

After adding the dependencies, we need to set up the Elasticsearch configuration. Let’s create a configuration class to set up the RestHighLevelClient:

@Configuration
public class ElasticsearchConfig {
    @Bean
    public RestHighLevelClient restHighLevelClient() {
        return RestClients.create(ClientConfiguration.builder()
          .connectedTo("localhost:9200")
          .build()).rest();
    }
}
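
The RestClients helper and the ClientConfiguration builder used above come from Spring Data Elasticsearch, which also provides the @Document and @Id annotations we’ll use on the Product class, so spring-boot-starter-data-elasticsearch (or spring-data-elasticsearch) needs to be on the classpath as well. If we’d rather rely only on the elasticsearch-rest-high-level-client artifact we declared earlier, a minimal alternative sketch builds the client straight from the low-level REST client builder:

@Configuration
public class ElasticsearchConfig {

    @Bean
    public RestHighLevelClient restHighLevelClient() {
        // RestClient and HttpHost come from the Elasticsearch client and Apache HttpComponents
        return new RestHighLevelClient(
          RestClient.builder(new HttpHost("localhost", 9200, "http")));
    }
}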

Next, we create a Product class to represent the CSV data:

@Document(indexName = "products")
public class Product {
    @Id
    private String id;
    private String name;
    private String category;
    private double price;
    private int stock;

    // Getters and setters
}

Afterward, we’ll create a service in our Spring Boot application to handle the CSV import process. In the service, we use a for loop to iterate over each record in the CSV file:

@Autowired
private RestHighLevelClient restHighLevelClient;

public void importCSV(File file) {
    try (Reader reader = new FileReader(file)) {
        Iterable<CSVRecord> records = CSVFormat.DEFAULT
          .withHeader("id", "name", "category", "price", "stock")
          .withFirstRecordAsHeader()
          .parse(reader);

        for (CSVRecord record : records) {
            IndexRequest request = new IndexRequest("products")
              .id(record.get("id"))
              .source(Map.of(
                "name", record.get("name"),
                "category", record.get("category"),
                "price", Double.parseDouble(record.get("price")),
                "stock", Integer.parseInt(record.get("stock"))
              ));

            restHighLevelClient.index(request, RequestOptions.DEFAULT);
        }
    } catch (Exception e) {
        // handle exception
    }
}

For each record, we construct an IndexRequest object that describes the document we want to index, and then send it to Elasticsearch through the RestHighLevelClient.

Let’s import the data from a CSV file into an Elasticsearch index:

File csvFile = Paths.get("src", "test", "resources", "products.csv").toFile();
importCSV(csvFile);

Next, in a unit test, let’s capture the first IndexRequest sent to the client and verify its contents against the expected values:

IndexRequest firstRequest = captor.getAllValues().get(0);
assertEquals(Map.of(
  "name", "Microwave",
  "category", "Appliances",
  "price", 705.77,
  "stock", 136
), firstRequest.sourceAsMap());
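
For context, the captor above comes from a test in which we mock the RestHighLevelClient and record every IndexRequest passed to its index() method. A minimal sketch of that setup, assuming a hypothetical CsvImportService wrapper around importCSV() and Mockito’s inline mock maker (the default since Mockito 5, which is needed because the client’s methods are final), could look like this:

// the service class and its constructor are illustrative; the article only shows the importCSV() method
RestHighLevelClient restHighLevelClient = mock(RestHighLevelClient.class);
CsvImportService service = new CsvImportService(restHighLevelClient);

service.importCSV(Paths.get("src", "test", "resources", "products.csv").toFile());

// capture every IndexRequest the service sent to the mocked client
ArgumentCaptor<IndexRequest> captor = ArgumentCaptor.forClass(IndexRequest.class);
verify(restHighLevelClient, atLeastOnce()).index(captor.capture(), eq(RequestOptions.DEFAULT));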

This approach is straightforward and gives us complete control over the process. However, it is more suited for smaller datasets as it can be inefficient and time-consuming for large files.
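
If we do stick with this approach for a somewhat larger file, one straightforward optimization is to group the documents into a single BulkRequest instead of issuing one HTTP call per record; a minimal sketch using the same client:

BulkRequest bulkRequest = new BulkRequest();
for (CSVRecord record : records) {
    bulkRequest.add(new IndexRequest("products")
      .id(record.get("id"))
      .source(Map.of(
        "name", record.get("name"),
        "category", record.get("category"),
        "price", Double.parseDouble(record.get("price")),
        "stock", Integer.parseInt(record.get("stock"))
      )));
}
// a single round trip indexes all the buffered documents
restHighLevelClient.bulk(bulkRequest, RequestOptions.DEFAULT);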

4. Using Spring Batch for Scalable Data Imports

Spring Batch is a powerful framework for batch processing in Java. It’s ideal for handling large-scale data imports by processing data in chunks.

To use Spring Batch, we need to add the Spring Batch dependency to our pom.xml file:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-batch</artifactId>
    <version>3.4.1</version>
</dependency>

4.1. Define the Spring Configuration File

Next, let’s create a configuration class to define the batch job. In this configuration, we use the @EnableBatchProcessing annotation to activate the Spring Batch features that allow us to create and manage batch jobs.

We’ll set up a FlatFileItemReader to read the CSV file and an ItemWriter to write the data to Elasticsearch. For now, we inject the RestHighLevelClient bean we configured earlier:

@Configuration
@EnableBatchProcessing
public class BatchConfig {
    // ...
    @Autowired
    private RestHighLevelClient restHighLevelClient;
}

4.2. Define a Reader

To read data from a CSV file, let’s create a method reader() and define a FlatFileItemReader. We’ll use a FlatFileItemReaderBuilder to configure the reader with various settings:

@Bean
public FlatFileItemReader<Product> reader() {
    return new FlatFileItemReaderBuilder<Product>()
      .name("productReader")
      .resource(new FileSystemResource("products.csv"))
      .delimited()
      .names("id", "name", "category", "price", "stock")
      .fieldSetMapper(new BeanWrapperFieldSetMapper<>() {{
          setTargetType(Product.class);
      }})
      .build();
}

We assign a name to the reader using the name() method, which helps identify it within the batch job. Additionally, the resource() method specifies the location of the CSV file, “products.csv”, by using a FileSystemResource. The file is expected to be delimited (comma-separated), which is specified through the delimited() method.

The names() method lists the column headers from the CSV file and maps them to the fields of the Product class. Finally, the fieldSetMapper() method maps each line of the CSV file into a Product object using the BeanWrapperFieldSetMapper.

4.3. Define a Writer

Next, let’s create a writer() method to handle writing the processed data into Elasticsearch. This method defines an ItemWriter that receives a list of Product objects. It uses a RestHighLevelClient to interact with Elasticsearch:

@Bean
public ItemWriter<Product> writer(RestHighLevelClient restHighLevelClient) {
    return products -> {
        for (Product product : products) {
            IndexRequest request = new IndexRequest("products")
              .id(product.getId())
              .source(Map.of(
                "name", product.getName(),
                "category", product.getCategory(),
                "price", product.getPrice(),
                "stock", product.getStock()
              ));
            restHighLevelClient.index(request, RequestOptions.DEFAULT);
        }
    };
}

For each product in the list, we create an IndexRequest to specify the Elasticsearch index and the document structure. The id() method assigns a unique ID to each document using the Product object’s ID.

The source() method maps the fields of the Product object, such as name, category, price, and stock, into a key-value format that Elasticsearch can store. Once the request is configured, we call restHighLevelClient.index() to send the Product record to Elasticsearch, ensuring that the product is indexed for search and retrieval.

4.4. Define a Spring Batch Job

Finally, let’s create the importJob() method and use Spring Batch’s JobBuilder and StepBuilder to configure the job and its steps:

@Bean
public Job importJob(JobRepository jobRepository, PlatformTransactionManager transactionManager, 
  RestHighLevelClient restHighLevelClient) {
    return new JobBuilder("importJob", jobRepository)
      .start(new StepBuilder("step1", jobRepository)
        .<Product, Product>chunk(10, transactionManager)
        .reader(reader())
        .writer(writer(restHighLevelClient))
        .build())
      .build();
}

In this example, we use JobBuilder to configure the job. It takes the job name “importJob” and JobRepository as arguments. We also configure a step called “step1” and specify that the job will process 10 records at a time. The transactionManager ensures data consistency during the processing of chunks.

The reader() and writer() methods are integrated into the step to handle the data flow from the CSV file to Elasticsearch. Next, we link the job with the step using the start() method, which ensures that the step is executed as part of the job. Once this configuration is done, we can run the job using Spring’s JobLauncher.

4.5. Run the Batch Job

Let’s have a look at the code to run the Spring Batch job using JobLauncher. We’ll create a CommandLineRunner bean to execute the job when the application starts:

@Configuration
public class JobRunnerConfig {
    @Autowired
    private JobLauncher jobLauncher;

    @Autowired
    private Job importJob;

    @Bean
    public CommandLineRunner runJob() {
        return args -> {
            try {
                JobExecution execution = jobLauncher.run(importJob, new JobParameters());
            } catch (Exception e) {
                // handle exception
            }
        };
    }
}
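
One Spring Batch detail worth keeping in mind: a job instance is identified by its JobParameters, and a completed instance won’t run again with the same parameters by default. If we want the import to be re-runnable, for example on every application start, we can pass a unique parameter such as a timestamp; a small sketch:

JobParameters params = new JobParametersBuilder()
  .addLong("startedAt", System.currentTimeMillis()) // a unique value creates a new job instance per run
  .toJobParameters();

JobExecution execution = jobLauncher.run(importJob, params);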

After running the job successfully, we can test the results by making a request using curl:

curl -X GET "http://localhost:9200/products/_search" \
  -H "Content-Type: application/json" \
  -d '{
        "query": {
          "match_all": {}
        }
      }'

Let’s see the expected result:

{
  ...
  "hits": {
    "total": {
      "value": 25,
      "relation": "eq"
    },
    "max_score": 1.0,
    "hits": [
      {
        "_index": "products",
        "_type": "_doc",
        "_id": "1",
        "_score": 1.0,
        "_source": {
          "id": "1",
          "name": "Microwave",
          "category": "Appliances",
          "price": 705.77,
          "stock": 136
        }
      },
      {
        "_index": "products",
        "_type": "_doc",
        "_id": "2",
        "_score": 1.0,
        "_source": {
          "id": "1",
          "name": "Vacuum Cleaner",
          "category": "Appliances",
          "price": 1397.23,
          "stock": 92
        }
      }
      ...
    ]
  }
}

This method is more complex to set up than the previous one, but it provides scalability and flexibility for importing data.

5. Using Logstash to Import CSV Data

Logstash is part of the Elastic stack and is designed for data processing and ingestion.

We can use Docker to set up Logstash quickly. First, let's pull the Logstash image:

docker pull docker.elastic.co/logstash/logstash:8.17.0

After pulling the image, we create a configuration file “csv-to-es.conf” for Logstash. This file defines how Logstash reads the CSV file and sends the data to Elasticsearch:

input {
    file {
        path => "/path/to/your/products.csv"
        start_position => "beginning"
        sincedb_path => "/dev/null"
    }
}

filter {
    csv {
        separator => ","
        columns => ["id", "name", "category", "price", "stock"]
    }

    mutate {
        convert => { "price" => "float" }
        convert => { "stock" => "integer" }
    }
}

output {
    elasticsearch {
        hosts => ["http://localhost:9200"]
        index => "products"
    }

    stdout {
        codec => json_lines
    }
}

In this file, we define the input, filter, and output stages of the data pipeline. The input stage specifies the CSV file to read, while the filter stage processes and transforms the data. Finally, the output stage sends the processed data to Elasticsearch.

After setting up the configuration file, we need to invoke a docker run command to execute the Logstash pipeline:

docker run --rm -v $(pwd)/csv-to-es.conf:/usr/share/logstash/pipeline/logstash.conf \
  -v $(pwd)/products.csv:/usr/share/logstash/products.csv \
  docker.elastic.co/logstash/logstash:8.17.0

This command mounts our configuration and CSV files to the Logstash container and runs the data pipeline to import data into Elasticsearch. After we run the command successfully, we can run the curl query again to verify the result.
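
One assumption worth calling out: inside the Logstash container, localhost refers to the container itself, so the hosts entry in the output section won’t reach an Elasticsearch instance running on the host machine or in a sibling container out of the box. On Linux, one simple workaround is to run Logstash with host networking (alternatively, both containers can share a user-defined Docker network):

docker run --rm --network host \
  -v $(pwd)/csv-to-es.conf:/usr/share/logstash/pipeline/logstash.conf \
  -v $(pwd)/products.csv:/usr/share/logstash/products.csv \
  docker.elastic.co/logstash/logstash:8.17.0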

Logstash efficiently imports CSV data into Elasticsearch without requiring custom code, making it a popular choice for handling large datasets and setting up automated data pipelines.

6. Summary

Now that we’ve explored three methods to import data from a CSV file into Elasticsearch, let’s compare their pros and cons:

Method | Pros | Cons
Manual for-loop | Easy to implement; full control | Not efficient for large files
Spring Batch | Scalable for large datasets | Complex setup for beginners
Logstash | No coding; high performance | Requires Logstash installation

7. Conclusion

In this article, we covered how to import CSV data into Elasticsearch using three methods: a manual for-loop, Spring Batch, and Logstash. Each approach has its strengths and is suited for different use cases.

The code backing this article is available on GitHub.