
1. Introduction

In this tutorial, we’ll cover CRUD operations on Kubernetes resources using its official Java API.

We’ve already covered the basics of this API usage in previous articles, including basic project setup and various ways in which we can use it to get information about a running cluster.

In general, Kubernetes deployments are mostly static. We create some artifacts (e.g. YAML files) describing what we want to create and submit them to a DevOps pipeline. The pieces of our system then remain the same until we add a new component or upgrade an existing one.

However, there are cases where we need to add resources on the fly. A common one is running a Job in response to a user-initiated request, such as generating a report: the application launches a background job to process the report and makes the result available for later retrieval.

The key point here is that, by using those APIs, we can make better use of the available infrastructure, as we can consume resources only when they’re needed, releasing them afterward.

2. Creating a New Resource

In this example, we’ll create a Job resource in a Kubernetes cluster. A Job is a kind of Kubernetes workload that, unlike other kinds, runs to completion. That is, once the programs running in its pod terminate, the job itself terminates. Its YAML representation is not unlike that of other resources:

apiVersion: batch/v1
kind: Job
metadata:
  namespace: jobs
  name: report-job
  labels:
    app: reports
spec:
  template:
    metadata:
      name: payroll-report
    spec:
      containers:
      - name: main
        image: report-runner
        command:
        - payroll
        args:
        - --date
        - 2021-05-01
      restartPolicy: Never

The Java API offers two ways to create the equivalent Java object:

  • Creating POJOs with new and populating all required properties via setters (sketched briefly below)
  • Using a fluent API to build the Java resource representation
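To give a sense of the first option, here’s a minimal sketch of the setter-based style for the same Job. It’s not taken from the example project; it simply mirrors the YAML above using the generated model classes:

V1Job job = new V1Job();
job.setApiVersion("batch/v1");
job.setKind("Job");

// plain metadata object populated via setters
V1ObjectMeta metadata = new V1ObjectMeta();
metadata.setNamespace("report-jobs");
metadata.setName("payroll-report-job");
job.setMetadata(metadata);

// container definition for the pod template
V1Container container = new V1Container();
container.setName("main");
container.setImage("report-runner");
container.setCommand(Collections.singletonList("payroll"));
container.setArgs(Arrays.asList("--date", "2021-05-01"));

V1PodSpec podSpec = new V1PodSpec();
podSpec.setContainers(Collections.singletonList(container));
podSpec.setRestartPolicy("Never");

V1PodTemplateSpec template = new V1PodTemplateSpec();
template.setSpec(podSpec);

V1JobSpec jobSpec = new V1JobSpec();
jobSpec.setTemplate(template);
job.setSpec(jobSpec);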

Which approach to use is mostly a personal preference. Here, we’ll use the fluent approach to create the V1Job object, as the building process looks very similar to its YAML counterpart:

ApiClient client = Config.defaultClient();
BatchV1Api api = new BatchV1Api(client);
V1Job body = new V1JobBuilder()
  .withNewMetadata()
    .withNamespace("report-jobs")
    .withName("payroll-report-job")
    .endMetadata()
  .withNewSpec()
    .withNewTemplate()
      .withNewMetadata()
        .addToLabels("name", "payroll-report")
        .endMetadata()
      .editOrNewSpec()
        .addNewContainer()
          .withName("main")
          .withImage("report-runner")
          .addNewCommand("payroll")
          .addNewArg("--date")
          .addNewArg("2021-05-01")
          .endContainer()
        .withRestartPolicy("Never")
        .endSpec()
      .endTemplate()
    .endSpec()
  .build(); 
V1Job createdJob = api.createNamespacedJob("report-jobs", body, null, null, null);

We start by creating the ApiClient and then the API stub instance. Job resources are part of the Batch API, so we create a BatchV1Api instance, which we’ll use to invoke the cluster’s API server.

Next, we instantiate a V1JobBuilder instance, which guides us through the process of filling in all the properties. Notice the use of nested builders: to “close” a nested builder, we must call its endXXX() method, which brings us back to its parent builder.

Alternatively, it’s also possible to use a withXXX method to inject a nested object directly. This is useful when we want to reuse a common set of properties, such as metadata, labels, and annotations.
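For instance, assuming we had already built a shared V1ObjectMeta elsewhere, a sketch of this style could look like:

// metadata built once and reused across resources
V1ObjectMeta commonMetadata = new V1ObjectMetaBuilder()
  .withNamespace("report-jobs")
  .withName("payroll-report-job")
  .addToLabels("app", "reports")
  .build();

V1Job jobWithSharedMetadata = new V1JobBuilder()
  .withMetadata(commonMetadata)
  .withNewSpec()
    // ... same template as in the previous example ...
    .endSpec()
  .build();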

The final step is just a call to the API stub. This will serialize our resource object and POST the request to the server. As expected, there are synchronous (used above) and asynchronous versions of the API.
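The asynchronous variant takes an ApiCallback instead of returning the result directly. As a rough sketch, assuming a client version where createNamespacedJobAsync is generated with the same parameters as its synchronous counterpart (the exact signature may vary between versions):

api.createNamespacedJobAsync("report-jobs", body, null, null, null, new ApiCallback<V1Job>() {
    @Override
    public void onSuccess(V1Job result, int statusCode, Map<String, List<String>> responseHeaders) {
        // the Job was accepted by the API server
    }

    @Override
    public void onFailure(ApiException e, int statusCode, Map<String, List<String>> responseHeaders) {
        // the request failed; inspect e and statusCode for details
    }

    @Override
    public void onUploadProgress(long bytesWritten, long contentLength, boolean done) {
        // not used here
    }

    @Override
    public void onDownloadProgress(long bytesRead, long contentLength, boolean done) {
        // not used here
    }
});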

The returned object will contain metadata and status fields related to the created job. In the case of a Job, we can use its status field to check when it is finished. We can also use one of the techniques presented in our article about monitoring resources to receive this notification.
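For example, a simple polling check on the status field (assuming a client version where readNamespacedJob takes just the name, namespace, and pretty parameters) might look like this:

// fetch the current state of the job from the API server
V1Job current = api.readNamespacedJob(
  createdJob.getMetadata().getName(),
  createdJob.getMetadata().getNamespace(),
  null);

// a Job is finished once it reports either succeeded or failed pods
V1JobStatus status = current.getStatus();
boolean finished = status != null
  && ((status.getSucceeded() != null && status.getSucceeded() > 0)
      || (status.getFailed() != null && status.getFailed() > 0));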

3. Updating an Existing Resource

Updating an existing resource consists of sending a PATCH request to the Kubernetes API server, containing which fields we want to modify. As of Kubernetes version 1.16, there are four ways to specify those fields:

  • JSON Patch (RFC 6902)
  • JSON Merge Patch (RFC 7396)
  • Strategic Merge Patch
  • Apply YAML

Of those, the last one is the easiest to use, as it leaves all merging and conflict resolution to the server: all we have to do is send a YAML document containing the fields we want to modify.

Unfortunately, the Java API offers no easy way to build this partial YAML document. Instead, we must resort to the PatchUtils helper class to send a raw YAML or JSON string. However, we can use the built-in JSON serializer available through the ApiClient object to produce it:

V1Job patchedJob = new V1JobBuilder(createdJob)
  .withNewMetadata()
    .withName(createdJob.getMetadata().getName())
    .withNamespace(createdJob.getMetadata().getNamespace())
    .endMetadata()
  .editSpec()
    .withParallelism(2)
  .endSpec()
  .build();

String patchedJobJSON = client.getJSON().serialize(patchedJob);

PatchUtils.patch(
  V1Job.class, 
  () -> api.patchNamespacedJobCall(
    createdJob.getMetadata().getName(), 
    createdJob.getMetadata().getNamespace(), 
    new V1Patch(patchedJobJSON), 
    null, 
    null, 
    "baeldung", 
    true, 
    null),
  V1Patch.PATCH_FORMAT_APPLY_YAML,
  api.getApiClient());

Here, we use the object returned from createNamespacedJob() as a template from which we construct the patched version. In this case, we’re just increasing the parallelism value from one to two, leaving all other fields unchanged. An important point is that, as we build the modified resource, we must use withNewMetadata(). This ensures that we don’t build an object containing managed fields, which are present in the returned object we got after creating the resource. For a full description of managed fields and how they’re used in Kubernetes, please refer to the documentation.

Once we’ve built an object with the modified fields, we convert it to its JSON representation using the serialize method. We then use this serialized version to construct a V1Patch object, which serves as the payload of the PATCH call. The patch method also takes an additional argument where we inform the kind of data present in the request. In our case, this is PATCH_FORMAT_APPLY_YAML, which the library uses as the Content-Type header of the HTTP request.

The “baeldung” value passed to the fieldManager parameter identifies the actor manipulating the resource’s fields. Kubernetes uses this value internally to resolve conflicts when two or more clients try to modify the same resource. We also pass true as the force parameter, meaning that we’ll take ownership of any modified field.

4. Deleting a Resource

Compared to the previous operations, deleting a resource is quite straightforward:

V1Status response = api.deleteNamespacedJob(
  createdJob.getMetadata().getName(), 
  createdJob.getMetadata().getNamespace(), 
  null, 
  null, 
  null, 
  null, 
  null, 
  null);

Here, we’re just using the deleteNamespacedJob method to remove the job using default options for this specific kind of resource. If required, we can use the last parameter to control the details of the deletion process. This takes the form of a V1DeleteOptions object, which we can use to specify a grace period and cascading behavior for any dependent resources.
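For instance, a minimal sketch passing explicit options (the grace period and propagation policy values here are purely illustrative) could be:

// delete immediately and cascade the deletion to dependent pods in the background
V1DeleteOptions options = new V1DeleteOptions()
  .gracePeriodSeconds(0L)
  .propagationPolicy("Background");

V1Status deleteResponse = api.deleteNamespacedJob(
  createdJob.getMetadata().getName(),
  createdJob.getMetadata().getNamespace(),
  null,
  null,
  null,
  null,
  null,
  options);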

5. Conclusion

In this article, we’ve covered how to manipulate Kubernetes resources using the Java Kubernetes API library. As usual, the full source code of the examples can be found over on GitHub.
