1. Overview

Logging events is a critical aspect of software development. While there are many frameworks available in the Java ecosystem, Log4J has been the most popular for decades, due to the flexibility and simplicity it provides.

Log4j 2 is a new and improved version of the classic Log4j framework.

In this article, we’ll introduce the most common appenders, layouts, and filters via practical examples.

In Log4J 2, an appender is simply a destination for log events; it can be as simple as the console or as complex as an RDBMS. Layouts determine how the logs will be presented, and filters filter the data according to various criteria.

2. Setup

In order to understand several logging components and their configuration, let’s set up different test use cases, each consisting of a log4j2.xml configuration file and a JUnit 4 test class.

Two Maven dependencies are common to all examples:

<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.19.0</version>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.19.0</version>
    <type>test-jar</type>
    <scope>test</scope>
</dependency>

Besides the main log4j-core package, we need to include its ‘test jar’ to gain access to a context rule needed for testing configuration files with uncommon names.
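
For instance, a test class that loads one of our custom-named configuration files can declare the LoggerContextRule provided by that test jar. The following is only a minimal sketch; the configuration file name and the assertion are illustrative, not taken from the project:

import static org.junit.Assert.assertNotNull;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.junit.LoggerContextRule;
import org.junit.Rule;
import org.junit.Test;

public class ConsoleAppenderUnitTest {

    // the rule initializes a LoggerContext from the given (non-default) configuration file
    @Rule
    public LoggerContextRule contextRule =
      new LoggerContextRule("log4j2-console-appender.xml");

    @Test
    public void whenCustomConfigLoaded_thenLoggerIsAvailable() {
        assertNotNull(LogManager.getLogger(ConsoleAppenderUnitTest.class));
    }
}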

3. Default Configuration

ConsoleAppender is the default configuration of the Log4J 2 core package. It logs messages to the system console in a simple pattern:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
    <Appenders>
        <Console name="ConsoleAppender" target="SYSTEM_OUT">
            <PatternLayout 
              pattern="%d [%t] %-5level %logger{36} - %msg%n%throwable"/>
        </Console>
    </Appenders>
    <Loggers>
        <Root level="ERROR">
            <AppenderRef ref="ConsoleAppender"/>
        </Root>
    </Loggers>
</Configuration>

Let’s analyze the tags in this simple XML configuration:

  • Configuration: The root element of a Log4J 2 configuration file; its status attribute is the level of the internal Log4J events that we want to log
  • Appenders: This element holds one or more appenders. Here we’ll configure an appender that outputs to the system console at standard out
  • Loggers: This element can consist of multiple configured Logger elements. With the special Root tag, you can configure a nameless standard logger that will receive all log messages from the application. Each logger can be set to a minimum log level
  • AppenderRef: This element defines a reference to an element from the Appenders section. Therefore, the ‘ref‘ attribute is linked with an appender’s ‘name‘ attribute

The corresponding unit test will be similarly simple. We’ll obtain a Logger reference and print two messages:

@Test
public void givenLoggerWithDefaultConfig_whenLogToConsole_thanOK()
  throws Exception {
    Logger logger = LogManager.getLogger(getClass());
    Exception e = new RuntimeException("This is only a test!");

    logger.info("This is a simple message at INFO level. " +
      "It will be hidden.");
    logger.error("This is a simple message at ERROR level. " +
    "This is the minimum visible level.", e);
}

4. ConsoleAppender With PatternLayout

Let’s define a new console appender with a customized color pattern in a separate XML file, and include that in our main configuration:

<?xml version="1.0" encoding="UTF-8"?>
<Console name="ConsoleAppender" target="SYSTEM_OUT">
    <PatternLayout pattern="%style{%date{DEFAULT}}{yellow}
      %highlight{%-5level}{FATAL=bg_red, ERROR=red, WARN=yellow, INFO=green} 
      %message"/>
</Console>

This file uses some pattern variables that get replaced by Log4J 2 at runtime:

  • %style{…}{colorname}: This prints the text inside the first pair of braces in the given color (colorname).
  • %highlight{…}{FATAL=colorname, …}: This is similar to the ‘style’ variable, but a different color can be given for each log level.
  • %date{format}: This gets replaced by the current date in the specified format. Here we’re using the ‘DEFAULT’ DateTime format, ‘yyyy-MM-dd HH:mm:ss,SSS’.
  • %-5level: Prints the level of the log message left-aligned and padded to five characters.
  • %message: Represents the raw log message

There exist many more variables and formatting options in the PatternLayout; you can refer to the Log4J 2 official documentation for them.
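
For illustration, a single message logged at INFO level through the pattern above would render roughly like the following console line (with the date and level colored); the timestamp is, of course, just an example:

2023-01-05 14:23:01,123 INFO  This is a colored message at INFO level.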

Now we’ll include the defined console appender into our main configuration:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN" xmlns:xi="http://www.w3.org/2001/XInclude">
    <Appenders>
        <xi:include href="log4j2-includes/
          console-appender_pattern-layout_colored.xml"/>
    </Appenders>
    <Loggers>
        <Root level="DEBUG">
            <AppenderRef ref="ConsoleAppender"/>
        </Root>
    </Loggers>
</Configuration>

The unit test:

@Test
public void givenLoggerWithConsoleConfig_whenLogToConsoleInColors_thanOK() 
  throws Exception {
    Logger logger = LogManager.getLogger("CONSOLE_PATTERN_APPENDER_MARKER");
    logger.trace("This is a colored message at TRACE level.");
    ...
}

5. Async File Appender With JSONLayout and BurstFilter

Sometimes it’s useful to write log messages in an asynchronous manner, for example, when application performance has priority over the availability of logs.

In such use-cases, we can use an AsyncAppender.

For our example, we’re configuring an asynchronous JSON log file. Furthermore, we’ll include a burst filter that limits the log output at a specified rate:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
    <Appenders>
        ...
        <File name="JSONLogfileAppender" fileName="target/logfile.json">
            <JSONLayout compact="true" eventEol="true"/>
            <BurstFilter level="INFO" rate="2" maxBurst="10"/>
        </File>
        <Async name="AsyncAppender" bufferSize="80">
            <AppenderRef ref="JSONLogfileAppender"/>
        </Async>
    </Appenders>
    <Loggers>
        ...
        <Logger name="ASYNC_JSON_FILE_APPENDER" level="INFO"
          additivity="false">
            <AppenderRef ref="AsyncAppender" />
        </Logger>
        <Root level="INFO">
            <AppenderRef ref="ConsoleAppender"/>
        </Root>
    </Loggers>
</Configuration>

Notice that:

  • The JSONLayout is configured in a way that writes one log event per row
  • The BurstFilter limits ‘INFO’ level (and below) events to an average rate of two per second, allowing bursts of up to 10 events before further messages are dropped
  • The AsyncAppender is set to a buffer of 80 log messages; after that, the buffer is flushed to the log file

Let’s take a look at the corresponding unit test. We fill the appender’s buffer in a loop, let it write to disk, and inspect the line count of the log file:

@Test
public void givenLoggerWithAsyncConfig_whenLogToJsonFile_thanOK() 
  throws Exception {
    Logger logger = LogManager.getLogger("ASYNC_JSON_FILE_APPENDER");

    final int count = 88;
    for (int i = 0; i < count; i++) {
        logger.info("This is async JSON message #{} at INFO level.", count);
    }
    
    long logEventsCount 
      = Files.lines(Paths.get("target/logfile.json")).count();
    assertTrue(logEventsCount > 0 && logEventsCount <= count);
}

6. RollingFile Appender and XMLLayout

Next, we’ll create a rolling log file. Once the log file reaches a configured size, it gets compressed and rotated.

This time we’re using an XML layout:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
    <Appenders>
        <RollingFile name="XMLRollingfileAppender"
          fileName="target/logfile.xml"
          filePattern="target/logfile-%d{yyyy-MM-dd}-%i.log.gz">
            <XMLLayout/>
            <Policies>
                <SizeBasedTriggeringPolicy size="17 kB"/>
            </Policies>
        </RollingFile>
    </Appenders>
    <Loggers>
        <Logger name="XML_ROLLING_FILE_APPENDER" 
       level="INFO" additivity="false">
            <AppenderRef ref="XMLRollingfileAppender" />
        </Logger>
        <Root level="TRACE">
            <AppenderRef ref="ConsoleAppender"/>
        </Root>
    </Loggers>
</Configuration>

Notice that:

  • The RollingFile appender has a ‘filePattern’ attribute, which is used to name rotated log files and can be configured with placeholder variables. In our example, it puts a date and a counter before the file suffix, producing names such as target/logfile-2023-01-05-1.log.gz.
  • The default configuration of XMLLayout will write single log event objects without the root element.
  • We’re using a size-based policy for rotating our log files.

Our unit test class will look like the one from the previous section:

@Test
public void givenLoggerWithRollingFileConfig_whenLogToXMLFile_thanOK()
  throws Exception {
    Logger logger = LogManager.getLogger("XML_ROLLING_FILE_APPENDER");
    final int count = 88;
    for (int i = 0; i < count; i++) {
        logger.info(
          "This is rolling file XML message #{} at INFO level.", i);
    }
}

7. Syslog Appender

Let’s say we need to send logged events to a remote machine over the network. The simplest way to do that with Log4J 2 is to use its Syslog appender:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
    <Appenders>
        ...
        <Syslog name="Syslog" 
          format="RFC5424" host="localhost" port="514" 
          protocol="TCP" facility="local3" connectTimeoutMillis="10000" 
          reconnectionDelayMillis="5000">
        </Syslog>
    </Appenders>
    <Loggers>
        ...
        <Logger name="FAIL_OVER_SYSLOG_APPENDER" 
          level="INFO" 
          additivity="false">
            <AppenderRef ref="FailoverAppender" />
        </Logger>
        <Root level="TRACE">
            <AppenderRef ref="Syslog" />
        </Root>
    </Loggers>
</Configuration>

The attributes in the Syslog tag:

  • name: defines the name of the appender; it must be unique, since we can have multiple Syslog appenders for the same application and configuration
  • format: can be set to either BSD or RFC5424, and the Syslog records will be formatted accordingly
  • host & port: the hostname and port of the remote Syslog server machine
  • protocol: whether to use TCP or UDP
  • facility: to which Syslog facility the event will be written
  • connectTimeoutMillis: the time to wait for the connection to be established; defaults to zero
  • reconnectionDelayMillis: the time to wait before re-attempting the connection

8. FailoverAppender

Now there may be instances where one appender fails to process the log events, and we do not want to lose the data. In such cases, the FailoverAppender comes in handy.

For example, if the Syslog appender fails to send events to the remote machine, instead of losing that data we might temporarily fall back to a FileAppender.

The FailoverAppender takes a primary appender and a number of secondary appenders. If the primary fails, it tries to process the log event with the secondary ones, in order, until one succeeds or there aren’t any left to try:

<Failover name="FailoverAppender" primary="Syslog">
    <Failovers>
        <AppenderRef ref="ConsoleAppender" />
    </Failovers>
</Failover>

Let’s test it:

@Test
public void givenLoggerWithFailoverConfig_whenLog_thanOK()
  throws Exception {
    Logger logger = LogManager.getLogger("FAIL_OVER_SYSLOG_APPENDER");
    Exception e = new RuntimeException("This is only a test!"); 

    logger.trace("This is a syslog message at TRACE level.");
    logger.debug("This is a syslog message at DEBUG level.");
    logger.info("This is a syslog message at INFO level. 
      This is the minimum visible level.");
    logger.warn("This is a syslog message at WARN level.");
    logger.error("This is a syslog message at ERROR level.", e);
    logger.fatal("This is a syslog message at FATAL level.");
}

9. JDBC Appender

The JDBC appender sends log events to an RDBMS, using standard JDBC. The connection can be obtained either from a JNDI DataSource or from a custom connection factory.

The basic configuration consists of a DataSource or ConnectionFactory, ColumnConfigs, and tableName:

<JDBC name="JDBCAppender" tableName="logs">
    <ConnectionFactory 
      class="com.baeldung.logging.log4j2.tests.jdbc.ConnectionFactory" 
      method="getConnection" />
    <Column name="when" isEventTimestamp="true" />
    <Column name="logger" pattern="%logger" />
    <Column name="level" pattern="%level" />
    <Column name="message" pattern="%message" />
    <Column name="throwable" pattern="%ex{full}" />
</JDBC>

Now let’s try it out:

@Test
public void givenLoggerWithJdbcConfig_whenLogToDataSource_thanOK()
  throws Exception {
    Logger logger = LogManager.getLogger("JDBC_APPENDER");
    final int count = 88;
    for (int i = 0; i < count; i++) {
        logger.info("This is JDBC message #{} at INFO level.", count);
    }

    Connection connection = ConnectionFactory.getConnection();
    ResultSet resultSet = connection.createStatement()
      .executeQuery("SELECT COUNT(*) AS ROW_COUNT FROM logs");
    int logCount = 0;
    if (resultSet.next()) {
        logCount = resultSet.getInt("ROW_COUNT");
    }
    assertTrue(logCount == count);
}

10. Dynamic File Name in the Appender

Sometimes, we want the log file name to be dynamic, for example, based on the environment, date, or application instance. In Log4j 2, we can use the ${sys:…} lookup in appender attributes to reference system properties:

<Appenders>
    <File name="DynamicFileAppender" fileName="${sys:logfilename}.log">
        <PatternLayout pattern="%d [%t] %-5level %logger - %msg%n"/>
    </File>
</Appenders>

Before using the appender, set the system property:

System.setProperty("log4j.configurationFile", "src/test/resources/log4j2-dynamic.xml");
System.setProperty("logfilename", "log4j2-dynamic-log");
Logger logger = LogManager.getLogger("DYNAMIC_FILE_APPENDER");
logger.error("This is an ERROR log message to the same file.");

File file = new File("log4j2-dynamic-log");
assertTrue(file.exists());

String content = Files.readString(file.toPath());
assertTrue(content.contains("This is an ERROR log message to the same file."));

In Log4j 2, the ${sys:logfilename} expression reads the value of the system property logfilename. Using this value, the appender creates the log file dynamically at runtime, so the file name doesn’t need to be hardcoded in the configuration.
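
Outside of tests, the property is typically supplied when the application is started, for example via a -D flag on the command line; the property name matches our configuration, while the jar name below is just a placeholder:

java -Dlogfilename=order-service-1 -jar my-application.jar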

11. Conclusion

This article showed simple examples of how to use different logging appenders, filters, and layouts with Log4J 2, as well as ways to configure them.

The code backing this article is available on GitHub.