
1. Overview

There are many cases in which we need to limit resource usage on the Docker host machine.

In this tutorial, we’ll learn how to set memory and CPU limits for Docker containers.

2. Setting Resource Limits With docker run

We can set the resource limits directly using the docker run command. It’s a simple solution. However, the limit will apply only to one specific execution of the image.

2.1. Memory

For instance, let’s limit the memory that the container can use to 512 megabytes.

To constrain memory, we need to use the -m (or --memory) parameter:

$ docker run -m 512m nginx

We can also set a soft limit called a reservation.

It’s activated when Docker detects low memory on the host machine:

$ docker run -m 512m --memory-reservation=256m nginx

In addition to setting the memory limit, we can define the amount of swap memory available to the container. To do this, set the --memory-swap parameter to a value greater than the --memory limit:

$ docker run -m 512m --memory-swap 1g nginx

If this parameter is set to 0, the value is treated as unset, and the swap configuration is ignored. However, if set to -1, the container can use unlimited swap memory, up to the amount available on the host.
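
For instance, the following command lets the container use as much swap as the host can provide (the nginx image is just an example here):

$ docker run -m 512m --memory-swap -1 nginx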

Finally, if both parameters have the same value, the container cannot use swap. This is because the --memory-swap parameter represents the total memory available, including physical and swap memory (RAM + swap).
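
For example, the following container gets 512 megabytes of RAM and no swap at all, since both values are equal:

$ docker run -m 512m --memory-swap 512m nginx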

To verify these settings, we can use the docker inspect <NAME|ID> command with grep to filter the relevant information.
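
For instance, let’s assume we’ve started a container named limit (the name is just an example) with the 1g swap configuration from above:

$ docker run -d --name limit -m 512m --memory-swap 1g nginx

We can then check the memory-related fields: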

$ docker inspect limit | grep MemorySwap
        "MemorySwap": 1073741824,
        "MemorySwappiness": null,

2.2. CPU

By default, access to the computing power of the host machine is unlimited. We can set the CPU limit using the --cpus parameter.

Let’s constrain our container to use at most two CPUs:

$ docker run --cpus=2 nginx

We can also specify the priority of CPU allocation.

The default value is 1024, and higher numbers indicate higher priority:

$ docker run --cpus=2 --cpu-shares=2000 nginx

Similar to the memory reservation, CPU shares mainly come into play when computing power is scarce and needs to be divided between competing processes.
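
As a rough sketch, assuming two busy containers compete for the same CPUs, the first container below would receive about twice the CPU time of the second, since its share value is twice as large:

$ docker run -d --cpu-shares=2048 nginx
$ docker run -d --cpu-shares=1024 nginx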

Another possible setting is to specify which CPUs or cores the container will have access to:

$ docker run --cpus=.5 --cpuset-cpus=1 nginx

In this case, the container can use up to 50% of CPU 1. If the host has more than one CPU, it’s possible to specify a range, such as 0-2, which lets the container use the first three CPUs. Another option is to define a list, such as 0,2, which restricts the container to just those CPUs.
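
For example, assuming the host has at least three CPUs, we can pass either a range or an explicit list to --cpuset-cpus:

$ docker run --cpuset-cpus="0-2" nginx
$ docker run --cpuset-cpus="0,2" nginx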

In addition, we can configure the CFS period and a quota for a container. We must configure both at the same time. In the command below, we specify that the container can use up to 50% CPU (equivalent to --cpus=.5):

$ docker run --cpu-period=100000 --cpu-quota=50000 nginx
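
Similarly, as a sketch, we can allow one and a half CPUs (the equivalent of --cpus=1.5) by setting the quota to one and a half periods:

$ docker run --cpu-period=100000 --cpu-quota=150000 nginx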

3. Setting Limits With the docker-compose File

We can achieve similar results using docker-compose files. Remember that the format and possibilities will vary between versions of docker-compose.

3.1. Versions 3 and Newer With docker swarm

Let’s give the Nginx service some of the limits mentioned above, including access to half of CPU 1 and 512 megabytes of memory. As a reservation, we’ll set a quarter of a CPU and 128 megabytes of memory.

We need to create deploy and then resources segments in our service configuration:

services:
  service:
    image: nginx
    deploy:
      resources:
        limits:
          cpus: '0.50'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 128M
    cpuset: "1"
    ports:
      - "80:80"

In Docker Compose v3, the memswap_limit parameter isn’t supported directly. Note also that the cpuset-cpus parameter has changed and is now just cpuset.

To take advantage of the deploy segment in a docker-compose file, we need to use the docker stack command.

To deploy a stack to the swarm, we run the deploy command:

$ docker stack deploy --compose-file docker-compose.yml bael_stack
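
After deployment, we can list the services in the stack to confirm it started (bael_stack is the stack name we passed above):

$ docker stack services bael_stack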

3.2. Version 2 With docker-compose

In older versions of docker-compose, we can put resource limits on the same level as the service’s main properties.

They also have slightly different naming:

services:
  service:
    image: nginx
    mem_limit: 512m
    mem_reservation: 128M
    memswap_limit: 1g
    cpus: "0.5"
    cpuset: "1"
    ports:
      - "80:80"

To create the configured containers, we need to run the docker-compose command:

$ docker-compose up
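
We can also add the -d flag to start the containers in the background, which is handy if we want to keep them running while checking their resource usage in the next section:

$ docker-compose up -d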

4. Verifying Resource Usage

After we set the limits, we can verify them using the docker stats command:

$ docker stats
CONTAINER ID        NAME                                             CPU %               MEM USAGE / LIMIT   MEM %               NET I/O             BLOCK I/O           PIDS
8ad2f2c17078        bael_stack_service.1.jz2ks49finy61kiq1r12da73k   0.00%               2.578MiB / 512MiB   0.50%               936B / 0B           0B / 0B             2
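
By default, docker stats streams live updates. If we only need a single snapshot, we can add the --no-stream flag:

$ docker stats --no-stream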

5. Conclusion

In this article, we explored ways of limiting a Docker container’s access to the host’s resources.

We looked at usage with the docker run and docker-compose commands. Finally, we monitored resource consumption with docker stats.