1. Overview

Apache Tika is a toolkit for extracting content and metadata from various types of documents, such as Word, Excel, and PDF, or even multimedia files like JPEG and MP4.

All text-based and multimedia files can be parsed using a common interface, making Tika a powerful and versatile library for content analysis.

In this article, we’ll introduce Apache Tika, including its parsing API and how it automatically detects the content type of a document. We’ll also provide working examples to illustrate the library’s operations.

2. Getting Started

In order to parse documents using Apache Tika, we need only one Maven dependency:

<dependency>
    <groupId>org.apache.tika</groupId>
    <artifactId>tika-parsers</artifactId>
    <version>1.17</version>
</dependency>

The latest version of this artifact can be found on Maven Central.

3. The Parser API

The Parser API is the heart of Apache Tika, abstracting away the complexity of the parsing operations. This API relies on a single method:

void parse(
  InputStream stream, 
  ContentHandler handler, 
  Metadata metadata, 
  ParseContext context) 
  throws IOException, SAXException, TikaException

The meanings of this method’s parameters are:

  • stream – an InputStream instance created from the document to be parsed
  • handler – a ContentHandler object receiving a sequence of XHTML SAX events parsed from the input document; this handler will then process events and export the result in a particular form
  • metadata – a Metadata object conveying metadata properties in and out of the parser
  • context – a ParseContext instance carrying context-specific information, used to customize the parsing process

The parse method throws an IOException if it fails to read from the input stream, a TikaException if the document taken from the stream cannot be parsed, and a SAXException if the handler is unable to process an event.

When parsing a document, Tika attempts to reuse existing parser libraries such as Apache POI or PDFBox as much as possible. As a result, most of the Parser implementation classes are just adapters to such external libraries.

In section 5, we’ll see how the handler and metadata parameters can be used to extract content and metadata of a document.

For convenience, we can use the facade class Tika to access the functionality of the Parser API.

4. Auto-Detection

Apache Tika can automatically detect the type of a document and its language based on the document itself rather than on additional information.

4.1. Document Type Detection

The detection of document types can be done using an implementation class of the Detector interface, which has a single method:

MediaType detect(java.io.InputStream input, Metadata metadata) 
  throws IOException

This method takes a document and its associated metadata, then returns a MediaType object describing the best guess for the type of the document.

Metadata isn’t the only source of information on which a detector relies. The detector can also make use of magic bytes, which are a special pattern near the beginning of a file, or it can delegate the detection process to a more suitable detector.

In fact, the algorithm used by the detector is implementation dependent.

For instance, the default detector works with magic bytes first, then metadata properties. If the content type hasn’t been found at this point, it will use the service loader to discover all available detectors and try them in turn.
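
If we want to guide this process, we can also pass a file-name hint to the detector through the metadata parameter. Here’s a minimal sketch, where report.pdf is a purely illustrative name; note that such a hint doesn’t override the magic bytes:

Detector detector = new DefaultDetector();
Metadata metadata = new Metadata();

// a hint only; content-based detection still takes precedence
metadata.set(Metadata.RESOURCE_NAME_KEY, "report.pdf");
MediaType mediaType = detector.detect(stream, metadata);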

4.2. Language Detection

In addition to the type of a document, Tika can also identify its language even without help from metadata information.

In previous releases of Tika, the language of a document was detected using a LanguageIdentifier instance.

However, LanguageIdentifier has since been deprecated in favor of web services, which isn’t made clear in the Getting Started docs.

Language detection services are now provided via subtypes of the abstract class LanguageDetector. Using web services, we can also access fully-fledged online translation services, such as Google Translate or Microsoft Translator.

For the sake of brevity, we won’t go over those services in detail.
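
That said, simple offline detection is still available. As a minimal sketch, assuming the separate tika-langdetect artifact is on the classpath, we can use the OptimaizeLangDetector subtype:

public static String detectLanguage(String text) throws IOException {
    // loadModels() reads the bundled language profiles, hence the IOException
    LanguageDetector detector = new OptimaizeLangDetector().loadModels();
    LanguageResult result = detector.detect(text);
    return result.getLanguage(); // an ISO 639-1 code such as "en"
}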

5. Tika in Action

This section illustrates Apache Tika features using working examples.

The illustration methods will be wrapped in a class:

public class TikaAnalysis {
    // illustration methods
}

5.1. Detecting Document Types

Here’s the code we can use to detect the type of a document read from an InputStream:

public static String detectDocTypeUsingDetector(InputStream stream) 
  throws IOException {
    Detector detector = new DefaultDetector();
    Metadata metadata = new Metadata();

    MediaType mediaType = detector.detect(stream, metadata);
    return mediaType.toString();
}

Assume we have a PDF file named tika.txt in the classpath. The extension of this file has been changed to try to trick our analysis tool. The real type of the document can still be found and confirmed by a test:

@Test
public void whenUsingDetector_thenDocumentTypeIsReturned() 
  throws IOException {
    InputStream stream = this.getClass().getClassLoader()
      .getResourceAsStream("tika.txt");
    String mediaType = TikaAnalysis.detectDocTypeUsingDetector(stream);

    assertEquals("application/pdf", mediaType);

    stream.close();
}

It’s clear that a wrong file extension can’t keep Tika from finding the correct media type, thanks to the magic bytes %PDF at the start of the file.

For convenience, we can rewrite the detection code using the Tika facade class, with the same result:

public static String detectDocTypeUsingFacade(InputStream stream) 
  throws IOException {
 
    Tika tika = new Tika();
    String mediaType = tika.detect(stream);
    return mediaType;
}
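
The facade also offers overloads that detect the type from a File or from a file name alone. As a quick sketch, name-based detection relies only on the extension, so it’s exactly the case our mislabeled tika.txt would fool:

Tika tika = new Tika();

// no content to inspect, so the extension wins: "text/plain"
String mediaType = tika.detect("tika.txt");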

5.2. Extracting Content

Let’s now extract the content of a file and return the result as a String, using the Parser API:

public static String extractContentUsingParser(InputStream stream) 
  throws IOException, TikaException, SAXException {
 
    Parser parser = new AutoDetectParser();
    ContentHandler handler = new BodyContentHandler();
    Metadata metadata = new Metadata();
    ParseContext context = new ParseContext();

    parser.parse(stream, handler, metadata, context);
    return handler.toString();
}

Given a Microsoft Word file in the classpath with this content:

Apache Tika - a content analysis toolkit
The Apache Tika™ toolkit detects and extracts metadata and text ...

The content can be extracted and verified:

@Test
public void whenUsingParser_thenContentIsReturned() 
  throws IOException, TikaException, SAXException {
    InputStream stream = this.getClass().getClassLoader()
      .getResourceAsStream("tika.docx");
    String content = TikaAnalysis.extractContentUsingParser(stream);

    assertThat(content, 
      containsString("Apache Tika - a content analysis toolkit"));
    assertThat(content, 
      containsString("detects and extracts metadata and text"));

    stream.close();
}

Again, the Tika class can be used to write the code more conveniently:

public static String extractContentUsingFacade(InputStream stream) 
  throws IOException, TikaException {
 
    Tika tika = new Tika();
    String content = tika.parseToString(stream);
    return content;
}
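
One caveat: parseToString caps the returned text at 100,000 characters by default. If we need the full content of a large document, we can adjust this limit before parsing; a negative value removes it:

Tika tika = new Tika();

// the default limit is 100,000 characters; -1 disables it entirely
tika.setMaxStringLength(-1);
String fullContent = tika.parseToString(stream);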

5.3. Extracting Metadata

In addition to the content of a document, the Parser API can also extract metadata:

public static Metadata extractMetadataUsingParser(InputStream stream) 
  throws IOException, SAXException, TikaException {
 
    Parser parser = new AutoDetectParser();
    ContentHandler handler = new BodyContentHandler();
    Metadata metadata = new Metadata();
    ParseContext context = new ParseContext();

    parser.parse(stream, handler, metadata, context);
    return metadata;
}

When a Microsoft Excel file exists in the classpath, this test case confirms that the extracted metadata is correct:

@Test
public void whenUsingParser_thenMetadataIsReturned() 
  throws IOException, TikaException, SAXException {
    InputStream stream = this.getClass().getClassLoader()
      .getResourceAsStream("tika.xlsx");
    Metadata metadata = TikaAnalysis.extractMetadataUsingParser(stream);

    assertEquals("org.apache.tika.parser.DefaultParser", 
      metadata.get("X-Parsed-By"));
    assertEquals("Microsoft Office User", metadata.get("Author"));

    stream.close();
}

Finally, here’s another version of the extraction method using the Tika facade class:

public static Metadata extractMetadataUsingFacade(InputStream stream) 
  throws IOException, TikaException {
    Tika tika = new Tika();
    Metadata metadata = new Metadata();

    tika.parse(stream, metadata);
    return metadata;
}
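
If we don’t know in advance which properties a document carries, we can enumerate everything the parser extracted via the names() method, as in this small sketch:

// print every extracted property as a name/value pair
for (String name : metadata.names()) {
    System.out.println(name + ": " + metadata.get(name));
}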

6. Conclusion

This tutorial focused on content analysis with Apache Tika. Using the Parser and Detector APIs, we can automatically detect the type of a document, as well as extract its content and metadata.

For advanced use cases, we can create custom Parser and Detector classes to have more control over the parsing process.
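
For instance, a bare-bones custom Detector (with a hypothetical class name, purely for illustration) only needs to implement the single detect method:

// a hypothetical detector that unconditionally reports PDF
public class PdfOnlyDetector implements Detector {

    @Override
    public MediaType detect(InputStream input, Metadata metadata) {
        return MediaType.application("pdf");
    }
}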

The code backing this article is available on GitHub.