
1. Overview

In a distributed system, occasional errors are bound to happen when serving requests. A central observability platform helps by capturing application traces/logs and provides an interface to query for a specific request. OpenTelemetry helps standardize the process of capturing and exporting telemetry data.

In this tutorial, we’ll learn how to integrate a Spring Boot Application with OpenTelemetry. Also, we’ll configure OpenTelemetry to capture application traces and send them to a central system to monitor the requests.

First, let’s understand a few basic concepts.

2. Introduction to OpenTelemetry

OpenTelemetry (Otel) is a collection of standardized vendor-agnostic tools, APIs, and SDKs. It’s a CNCF incubating project and is a merger of the OpenTracing and OpenCensus projects.

OpenTracing is a vendor-neutral API for sending telemetry data over to an observability backend. The OpenCensus project provides a set of language-specific libraries that developers can use to instrument their code and send the data to any supported backend. Otel uses the same concepts of trace and span to represent the request flow across microservices as its predecessor projects did.

OpenTelemetry allows us to instrument, generate, and collect telemetry data, which helps in analyzing application behavior or performance. Telemetry data can include logs, metrics, and traces. We can either automatically or manually instrument the code for HTTP, DB calls, and more.

Using the Otel SDK, we can easily override or add more attributes to the trace.
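For example, here's a minimal sketch of adding a custom attribute to the current span with the OpenTelemetry API; the attribute key is an illustrative name, not a standard convention:

// Span comes from io.opentelemetry.api.trace; the attribute key is illustrative
Span.current().setAttribute("app.product.id", productId);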

Let’s take a deep dive into this with an example.

3. Example Application

Let’s imagine we need to build two microservices, where one service interacts with the other. To instrument the applications for telemetry data, we’ll integrate them with Spring Cloud Sleuth and OpenTelemetry.

3.1. Maven Dependencies

The spring-cloud-starter-sleuth, spring-cloud-sleuth-otel-autoconfigure, and opentelemetry-exporter-otlp dependencies will automatically capture and export traces to any supported collector.

First, we’ll start by creating a Spring Boot Web project and include the following Spring and OpenTelemetry dependencies in both applications:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-sleuth-brave</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-sleuth-otel-autoconfigure</artifactId>
</dependency>
<dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-exporter-otlp</artifactId>
    <version>1.23.1</version>
</dependency>

We should note that we’ve excluded the spring-cloud-sleuth-brave dependency in order to replace the default Brave tracing implementation with Otel.

Also, we’ll need to include the dependency management BOMs for Spring Cloud and Spring Cloud Sleuth OTel:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>2021.0.5</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-sleuth-otel-dependencies</artifactId>
            <version>1.1.2</version>
            <scope>import</scope>
            <type>pom</type>
        </dependency>
    </dependencies>
</dependencyManagement>

3.2. Implement the Downstream Application

Our downstream application will have an endpoint to return Price data.

First, let’s model the Price class:

public class Price {
    private long productId;
    private double priceAmount;
    private double discount;
}

Next, let’s implement the PriceController with the Get Price endpoint:

@RestController
@RequestMapping("/price")
public class PriceController {

    private static final Logger LOGGER = LoggerFactory.getLogger(PriceController.class);

    @Autowired
    private PriceRepository priceRepository;

    @GetMapping(path = "/{id}")
    public Price getPrice(@PathVariable("id") long productId) {
        LOGGER.info("Getting Price details for Product Id {}", productId);
        return priceRepository.getPrice(productId);
    }
}

Then, we’ll implement the getPrice method in PriceRepository:

public Price getPrice(Long productId){
    LOGGER.info("Getting Price from Price Repo With Product Id {}", productId);
    if(!priceMap.containsKey(productId)){
        LOGGER.error("Price Not Found for Product Id {}", productId);
        throw new PriceNotFoundException("Price Not Found");
    }
    return priceMap.get(productId);
}
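The priceMap used above isn’t shown in the snippet; here’s a minimal sketch, assuming an in-memory map seeded with test data (the ids, values, and the all-args Price constructor are illustrative assumptions):

@Repository
public class PriceRepository {

    // In-memory store used only for illustration; the seeded entry is an assumption
    private final Map<Long, Price> priceMap = new HashMap<>();

    public PriceRepository() {
        priceMap.put(100003L, new Price(100003L, 12.5, 2.5));
    }

    // getPrice(productId) as shown above
}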

3.3. Implement the Upstream Application

The upstream application will have an endpoint that returns the Product details and calls the above Get Price endpoint.

First, let’s implement the Product class:

public class Product {
    private long id;
    private String name;
    private Price price;
}

Then, let’s implement the ProductController class with an endpoint for getting products:

@RestController
public class ProductController {

    private static final Logger LOGGER = LoggerFactory.getLogger(ProductController.class);

    @Autowired
    private PriceClient priceClient;

    @Autowired
    private ProductRepository productRepository;

    @GetMapping(path = "/product/{id}")
    public Product getProductDetails(@PathVariable("id") long productId){
        LOGGER.info("Getting Product and Price Details with Product Id {}", productId);
        Product product = productRepository.getProduct(productId);
        product.setPrice(priceClient.getPrice(productId));
        return product;
    }
}

Next, we’ll implement the getProduct method in the ProductRepository:

public Product getProduct(Long productId){
    LOGGER.info("Getting Product from Product Repo With Product Id {}", productId);
    if(!productMap.containsKey(productId)){
        LOGGER.error("Product Not Found for Product Id {}", productId);
        throw new ProductNotFoundException("Product Not Found");
    }
    return productMap.get(productId);
}
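The ProductNotFoundException (and the analogous PriceNotFoundException in the downstream service) isn’t shown above; here’s a minimal sketch, assuming a simple runtime exception mapped to an HTTP 404:

// Minimal sketch; mapping the exception to 404 NOT_FOUND is an assumption
@ResponseStatus(HttpStatus.NOT_FOUND)
public class ProductNotFoundException extends RuntimeException {

    public ProductNotFoundException(String message) {
        super(message);
    }
}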

Finally, let’s implement the getPrice method in PriceClient:

public Price getPrice(long productId){
    LOGGER.info("Fetching Price Details With Product Id {}", productId);
    String url = String.format("%s/price/%d", baseUrl, productId);
    ResponseEntity<Price> price = restTemplate.getForEntity(url, Price.class);
    return price.getBody();
}
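The restTemplate and baseUrl fields above aren’t shown; here’s a minimal sketch of how PriceClient could be wired, assuming the downstream base URL comes from a configuration property (the property name price.service.url is illustrative). Registering RestTemplate as a bean lets Sleuth add its interceptor and propagate the trace headers:

@Component
public class PriceClient {

    private static final Logger LOGGER = LoggerFactory.getLogger(PriceClient.class);

    private final RestTemplate restTemplate;

    // The property name is an illustrative assumption
    @Value("${price.service.url}")
    private String baseUrl;

    public PriceClient(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    // getPrice(productId) as shown above
}

@Configuration
class RestTemplateConfig {

    // Declaring RestTemplate as a bean allows Sleuth to instrument it,
    // so the trace id is propagated as HTTP headers on downstream calls
    @Bean
    RestTemplate restTemplate() {
        return new RestTemplate();
    }
}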

4. Configure Spring Boot With OpenTelemetry

OpenTelemetry provides a collector, known as the Otel collector, that processes and exports the telemetry data to observability backends such as Jaeger, Prometheus, and others.

The traces can be exported to an Otel collector using a few Spring Sleuth configurations.

4.1. Configure Spring Sleuth

We’ll need to configure the application with the Otel endpoint to send telemetry data.

Let’s include the Spring Sleuth configuration in application.properties:

spring.sleuth.otel.config.trace-id-ratio-based=1.0
spring.sleuth.otel.exporter.otlp.endpoint=http://collector:4317

The trace-id-ratio-based property defines the sampling ratio for the spans collected. The value 1.0 means that all spans will be exported.
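For instance, in a high-traffic environment we might sample only a fraction of requests; the ratio below is just an illustrative value:

# export roughly 10% of traces instead of all of them
spring.sleuth.otel.config.trace-id-ratio-based=0.1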

4.2. Configure OpenTelemetry Collector

The Otel collector is the engine of OpenTelemetry tracing. It consists of receiver, processor, and exporter components. There are also optional extension components that help with health checks, service discovery, or data forwarding; extensions don’t process telemetry data.

To quickly bootstrap the setup, we’ll use a Jaeger backend, whose collector endpoint is hosted at port 14250.

Let’s configure the otel-config.yml with the Otel pipeline stages:

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  logging:
    loglevel: debug
  jaeger:
    endpoint: jaeger-service:14250
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers:  [ otlp ]
      processors: [ batch ]
      exporters:  [ logging, jaeger ]

We should note that the above processors configuration is optional and isn’t enabled by default. The batch processor compresses the data and reduces the number of outgoing connections required to transmit it.

Also, we should note that the receiver is configured with both the gRPC and HTTP protocols.
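If needed, the batch processor accepts a few tuning options; here’s a short sketch with illustrative values:

processors:
  batch:
    # flush after this many spans or after the timeout, whichever comes first
    send_batch_size: 512
    timeout: 5s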

5. Run the Application

We’ll now configure and run the entire setup: the two applications, the Otel collector, and the Jaeger backend.

5.1. Configure Dockerfile in the Application

Let’s implement the Dockerfile for our Product Service:

FROM adoptopenjdk/openjdk11:alpine
COPY target/spring-cloud-open-telemetry1-1.0.0-SNAPSHOT.jar spring-cloud-open-telemetry.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/spring-cloud-open-telemetry.jar"]

We should note that the Dockerfile for the Price Service is essentially the same.
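For completeness, it might look like this; the jar name is an assumption based on the module name, and the service listens on port 8081:

FROM adoptopenjdk/openjdk11:alpine
COPY target/spring-cloud-open-telemetry2-1.0.0-SNAPSHOT.jar spring-cloud-open-telemetry.jar
EXPOSE 8081
ENTRYPOINT ["java","-jar","/spring-cloud-open-telemetry.jar"]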

5.2. Configure Services With Docker Compose

Now, let’s configure the docker-compose.yml with the entire setup:

version: "3.9"

services:
  product-service:
    build: spring-cloud-open-telemetry1/
    ports:
      - "8080:8080"

  price-service:
    build: spring-cloud-open-telemetry2/
    ports:
      - "8081"

  collector:
    image: otel/opentelemetry-collector:0.72.0
    command: [ "--config=/etc/otel-collector-config.yml" ]
    volumes:
      - ./otel-config.yml:/etc/otel-collector-config.yml
    ports:
      - "4317:4317"
    depends_on:
      - jaeger-service

  jaeger-service:
    image: jaegertracing/all-in-one:latest
    ports:
      - "16686:16686"
      - "14250"

Let’s now run the services via docker-compose:

$ docker-compose up

5.3. Validate the Running Docker Services

Along with product-service and price-service, we’ve added the collector and jaeger-service containers to the setup. The product-service and price-service applications send trace data to the collector service on port 4317. The collector service, in turn, relies on the jaeger-service endpoint to export the tracing data to the Jaeger backend.

For the jaeger-service, we’re using the jaegertracing/all-in-one image, which includes its backend and UI components.

Let’s verify the services’ status using the docker container command:

$ docker container ls --format "table {{.ID}}\t{{.Names}}\t{{.Status}}\t{{.Ports}}"
CONTAINER ID   NAMES                                           STATUS         PORTS
7b874b9ee2e6   spring-cloud-open-telemetry-collector-1         Up 5 minutes   0.0.0.0:4317->4317/tcp, 55678-55679/tcp
29ed09779f98   spring-cloud-open-telemetry-jaeger-service-1    Up 5 minutes   5775/udp, 5778/tcp, 6831-6832/udp, 14268/tcp, 0.0.0.0:16686->16686/tcp, 0.0.0.0:61686->14250/tcp
75bfbf6d3551   spring-cloud-open-telemetry-product-service-1   Up 5 minutes   0.0.0.0:8080->8080/tcp, 8081/tcp
d2ca1457b5ab   spring-cloud-open-telemetry-price-service-1     Up 5 minutes   0.0.0.0:61687->8081/tcp

6. Monitor Traces in the Collector

Tracing backends like Jaeger provide a front-end application to monitor the requests. We can view the request traces in real-time or later on.

Let’s monitor the traces when the request succeeds as well as when it fails.

6.1. Monitor Traces When Request Succeeds

First, let’s call the Product endpoint http://localhost:8080/product/100003.
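For instance, using curl:

$ curl http://localhost:8080/product/100003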

The request produces logs in both services:

spring-cloud-open-telemetry-price-service-1 | 2023-01-06 19:03:03.985 INFO [price-service,825dad4a4a308e6f7c97171daf29041a,346a0590f545bbcf] 1 --- [nio-8081-exec-1] c.b.opentelemetry.PriceRepository : Getting Price from Price With Product Id 100003
spring-cloud-open-telemetry-product-service-1 | 2023-01-06 19:03:04.432 INFO [,825dad4a4a308e6f7c97171daf29041a,fb9c54565b028eb8] 1 --- [nio-8080-exec-1] c.b.opentelemetry.ProductRepository : Getting Product from Product Repo With Product Id 100003
spring-cloud-open-telemetry-collector-1 | Trace ID : 825dad4a4a308e6f7c97171daf29041a

Spring Sleuth automatically configures the ProductService to attach the trace id to the current thread and to propagate it as HTTP headers on downstream API calls. The PriceService will also automatically include the same trace id in its thread context and logs. The Otel service uses this trace id to determine the request flow across the services.
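If we ever need the trace id programmatically, for example to return it to the caller as a correlation id, here’s a minimal sketch using Sleuth’s Tracer abstraction (the helper method is hypothetical):

@Autowired
private Tracer tracer; // org.springframework.cloud.sleuth.Tracer

public String currentTraceId() {
    // currentSpan() may be null if no span is active for the current request
    return tracer.currentSpan() != null ? tracer.currentSpan().context().traceId() : null;
}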

As expected, the trace id ending in …f29041a is the same in both the PriceService and ProductService logs.

Let’s visualize the whole request’s span timeline in the Jaeger UI hosted at port 16686:

Jaeger UI Success Trace

The above shows the timeline of the request flow and contains the metadata representing the request.

6.2. Monitor Traces When Request Fails

Imagine a scenario where the downstream service throws an exception, which results in request failure.

Again, we’ll leverage the same UI to analyze the root cause.

Let’s test the above scenario with a call to the Product endpoint /product/100005, where the Price for the Product is not present in the downstream application.

Now, let’s visualize the failed request spans:

Error-Trace

As seen above, we can trace the request back to the API call where the error originated.

7. Conclusion

In this article, we’ve learned how OpenTelemetry helps in standardizing observability patterns for microservices.

We’ve also seen how to configure a Spring Boot application with OpenTelemetry using an example. Finally, we traced an API request flow in the collector.

As always, the example code can be found over on GitHub.
