
1. Overview

A system may depend on other systems to provide a service. Typical requirements for applications are database servers and network resources like REST services. Moreover, an application may need its dependencies to be up and running when it starts.

This tutorial will examine how we can make a Docker container wait until another container becomes fully functional before starting.

2. Sample Application

We’ll create a client and a server container for our test case. Our server will use the netcat command and listen on port 1234, while our client will use the Bash shell to create a socket and send a text message to the server. If the server isn’t listening on the port, then the client will fail to send the message. Thus, the client depends on the server.
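By the end of the tutorial, our working directory will contain these files (the name of the top-level directory itself is arbitrary):

```
.
├── docker-compose.yml
├── client
│   ├── Dockerfile
│   └── client-entrypoint.sh
└── server
    ├── Dockerfile
    └── server-entrypoint.sh
```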

2.1. The Server Container

Firstly, let’s create the Dockerfile for the server container. In our working directory, we create a subdirectory with the name server. Inside the server directory, we create the Dockerfile of the server:

FROM ubuntu:latest
RUN apt-get update && apt-get -y install netcat net-tools
COPY ./server-entrypoint.sh /opt/server-entrypoint.sh
ENTRYPOINT /opt/server-entrypoint.sh

Here, we install the netcat and net-tools packages. The net-tools package contains the netstat command, which we’ll need later for the healthcheck.

Next, we set the entrypoint of our container to /opt/server-entrypoint.sh. Furthermore, we create the server-entrypoint.sh Bash script in the same directory as the Dockerfile:

#!/bin/bash
netcat -l 1234

In the script, we use netcat with the -l option to make our service listen on port 1234 of the server container.

2.2. The Client Container

To construct the client container, let’s first create a client subdirectory. In it, we create the client’s Dockerfile:

FROM ubuntu:latest
COPY ./client-entrypoint.sh /opt/client-entrypoint.sh
ENTRYPOINT /opt/client-entrypoint.sh

Here, our client container image is based on Ubuntu. As before, we set the entrypoint of the container to a Bash script (/opt/client-entrypoint.sh), copied from the local client-entrypoint.sh:

#!/bin/bash
exec 3>/dev/tcp/192.168.11.2/1234;
echo -e "Hello from client" >&3

The second line creates a client socket that connects to the host IP 192.168.11.2 on port 1234. This is the IP address that we’ll assign to the server container in the Docker Compose configuration file.

In detail, we create the socket with exec and the redirection operator. The exec command opens file descriptor 3 and connects it to the pseudo-device path under /dev/tcp that corresponds to the address 192.168.11.2:1234. Notably, /dev/tcp isn’t a real filesystem path; Bash handles it internally, enabling us to open a TCP socket that we can write to.

The last line uses the echo command to print a text message to the standard output, which is redirected to the socket descriptor. As a result, the text message is sent to the server container.
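The exec file descriptor mechanics are easier to experiment with against a regular file than a socket; a minimal sketch (the file path /tmp/fd-demo.txt is our choice):

```shell
#!/bin/bash
# The same exec/redirection mechanics, demonstrated with a regular file
# instead of a /dev/tcp socket, so we can try it without a server
exec 3>/tmp/fd-demo.txt       # open file descriptor 3 for writing
echo "Hello via fd 3" >&3     # anything redirected to &3 goes to the file
exec 3>&-                     # close the descriptor
cat /tmp/fd-demo.txt          # prints: Hello via fd 3
```

The client entrypoint does exactly this, except that file descriptor 3 points to a TCP connection instead of a file.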

2.3. The Docker Compose Configuration File

Let’s create the Docker Compose file in our working directory. In this file, we define the client and the server as services. Moreover, we define a network to connect our services:

services:
  client:
    build: ./client
    networks:
      my_network:
        ipv4_address: 192.168.11.3
  server:
    build: ./server
    ports:
      - "1234:1234"
    networks:
      my_network:
        ipv4_address: 192.168.11.2
networks:
  my_network:
    driver: bridge
    external: true

Let’s start with the client service:

  • The build directive builds it from the Dockerfile in the ./client directory
  • We configure the networks directive to use the my_network network, and we assign the service an IP address in the 192.168.11.0/24 range

The server service has a similar configuration to our client service, with the addition that it exposes port 1234. In a similar way to the client, we build this container from the Dockerfile in the ./server directory.

We also use the my_network network, and we define it as external. This means that we must create it with the docker network create command before using it with the docker-compose tool:

$ sudo docker network create --driver=bridge --subnet=192.168.11.0/24 --gateway=192.168.11.1 my_network
2fcd4bfd234fccbb2abee2c7df27b44599ca636f015a7882312d6511dce3d401

The IP range we’ve given to the network is 192.168.11.0/24.

Importantly, we haven’t defined a version with the version tag in the Docker Compose file. The version top-level element is obsolete in the Compose Specification, so recent versions of Docker Compose ignore it, although older versions of the file format required it.

2.4. Test Execution

Next, let’s start our application with the docker-compose build and up commands:

$ sudo docker-compose build
...
$ sudo docker-compose up
Starting article11_client_1 ... done
Starting article11_server_1 ... done
Attaching to article11_client_1, article11_server_1
client_1  | /opt/client-entrypoint.sh: connect: Connection refused
client_1  | /opt/client-entrypoint.sh: line 2: /dev/tcp/192.168.11.2/1234: Connection refused
client_1  | /opt/client-entrypoint.sh: line 2: 3: Bad file descriptor
article11_client_1 exited with code 1

As we can see, with the default start-up behavior, the client failed to connect because the server wasn’t ready yet. We have to delay the client until the server starts listening on port 1234.

3. Defining Dependencies and Healthchecks

With the depends_on directive, we can declare in the Docker Compose file that one service depends on another. In its simple form, depends_on defines the order in which Docker Compose starts services. In our case, the client container depends on the server container, so Docker Compose should start them accordingly:

  1. Start the server
  2. Start the client
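In this simple form, the dependency is just a list of service names in the client’s definition; a sketch of how it would look in our Compose file:

```yaml
  client:
    build: ./client
    depends_on:
      - server
```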

Although the depends_on directive can guarantee the starting order of our services, we still haven’t solved the problem. Imagine that we have an application server that takes some time to start. This means that there’s a period when the container is started but the application server isn’t ready to accept requests. As a result, our client could again fail.

3.1. The healthcheck Directive

To address any delays in the server start, recent versions of Docker Compose (those implementing the Compose Specification) let us combine the depends_on directive with healthcheck. The healthcheck directive defines a command that tests whether the server is up and running. The test runs inside the server container. If the command exits with a zero status, the healthcheck is considered successful.

So, first, we add a healthcheck to the Docker Compose file for our server:

healthcheck:
  test: ["CMD-SHELL", "netstat -an | grep -q 1234"]

Here, we use the CMD-SHELL format for the test:

  • Execute netstat to get all used ports
  • Filter the output of netstat with grep, keeping only lines with our port number 1234
  • Use -q to suppress grep’s output; grep exits with a zero status only when it finds at least one matching line

So, the Docker engine will run this test periodically until the service in the server container starts listening on port 1234.
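The healthcheck directive also accepts options that control how often and for how long the engine probes; a sketch with illustrative values (the numbers are our choice, not requirements):

```yaml
healthcheck:
  test: ["CMD-SHELL", "netstat -an | grep -q 1234"]
  interval: 5s    # run the test every 5 seconds
  timeout: 3s     # consider a single run failed after 3 seconds
  retries: 10     # mark the container unhealthy after 10 failed runs
```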

3.2. The depends_on Directive

The next step is to define the dependency in the client service:

depends_on:
  server:
    condition: service_healthy

So, we’ve defined that our client service depends on the server service. Moreover, the condition: service_healthy line declares that the client service should wait until the server container is healthy, in other words, until the server is ready to accept connections on port 1234.
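For clarity, this is how the client service definition looks with the dependency in place:

```yaml
  client:
    build: ./client
    depends_on:
      server:
        condition: service_healthy
    networks:
      my_network:
        ipv4_address: 192.168.11.3
```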

3.3. Run the Application

Let’s run the application with the latest additions to the Docker Compose file:

$ sudo docker-compose build;sudo docker-compose up
...
Starting article11_server_1 ... done
Starting article11_client_1 ... done
Attaching to article11_server_1, article11_client_1
server_1  | Hello from client
article11_server_1 exited with code 0
article11_client_1 exited with code 0

In the output, we can observe the execution of our application step by step:

  1. The Docker engine starts the server.
  2. There’s a small delay since the engine waits for the server to be ready.
  3. When the server is ready, the engine starts the client.
  4. The client sends a test message to the server.
  5. The server prints the message to the standard output.
  6. Finally, both the server and the client exit with a zero status.

Note that the netcat command in the server’s entrypoint will exit after the first message it receives on port 1234.

4. Healthchecks in entrypoint

In Docker Compose file versions where the combination of the depends_on and healthcheck directives isn’t available, we can implement this functionality in the entrypoint script of the client service.

Let’s modify the client entrypoint script so that the client waits until the server starts listening on port 1234:

#!/bin/bash
test=1
while [ $test -ne 0 ]
do
    echo -n 2>/dev/null > /dev/tcp/192.168.11.2/1234
    test=$?
    echo test result: $test
    sleep 1
done

exec 3>/dev/tcp/192.168.11.2/1234
echo -e "Hello from client" >&3

Here, we’ve created a while loop that exits only when we manage to send a message.

To verify the client entrypoint script works as expected, we also have to make a minor modification to the server entrypoint:

#!/bin/bash
netcat -kl 1234

In this case, the -k option makes netcat keep listening on the port even after the first message it receives. Thus, we can test freely without the server exiting.

So, we’re ready to test:

$ sudo docker-compose build;sudo docker-compose up
...
Starting article11_server_1 ... done
Starting article11_client_1 ... done
Attaching to article11_server_1, article11_client_1
client_1  | test result: 0
server_1  | Hello from client
article11_client_1 exited with code 0

The server has successfully received the test message. Also, since we’ve added the -k option to the server entrypoint, the server keeps running.
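In practice, we may also want the wait to give up after a number of attempts instead of looping forever. A sketch of such a helper (the function name wait_for_port and the default retry limit are our own choices):

```shell
#!/bin/bash
# wait_for_port HOST PORT [RETRIES]: block until HOST:PORT accepts TCP
# connections, or fail after RETRIES attempts, made one second apart
wait_for_port() {
    local host="$1" port="$2" retries="${3:-30}"
    until echo -n 2>/dev/null > "/dev/tcp/$host/$port"
    do
        retries=$((retries - 1))
        if [ "$retries" -le 0 ]; then
            echo "$host:$port did not become ready in time" >&2
            return 1
        fi
        sleep 1
    done
}
```

The client entrypoint could then call wait_for_port 192.168.11.2 1234 before opening the socket, and exit with a clear error if the server never comes up.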

5. Conclusion

This article demonstrated two ways of making a container wait until another container is ready. The first uses the depends_on and healthcheck directives that are available in recent versions of the Docker Compose configuration file. The second uses the entrypoint script to make one container wait until the other is up and running.
