1. Overview

With Docker, we commonly build containers that communicate over isolated networks. Often, though, we need multiple containers to talk to each other on the same host, for example, during integration tests. Docker offers several networking options for this: we can connect containers on the same network using the container name, assign them static IPs, or reach the host on which the containers are running.

In this tutorial, we’ll explore how two containers can connect on the same machine using Docker Compose.

2. Docker Setup

Let’s create a scenario where two containers communicate in a network.

We’ll run the containers from a simple Alpine Docker image. Then, we’ll check if those containers can ping each other over the network.

Let’s look at the Dockerfile. We’ll also install tools that aren’t present in the base Alpine image, such as bash and the iputils version of ping:

FROM alpine:latest
LABEL maintainer="baeldung.com"
RUN apk update && apk add bash iputils

We’ll be using Docker Compose. We might want to run different examples in the same session, so we’ll use the -f option:

docker-compose -f yaml-file up -d

Also, let’s make sure we remove our containers whenever we move on to another example:

docker-compose -f yaml-file down

Before we start testing, let’s make sure there will be no networking conflicts by removing unused networks:

docker network prune

Regarding networking, Docker Compose derives the network name from the name of the directory containing the YAML file, and it appends the _default suffix if we don’t define a network explicitly.
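
For instance, once we bring up the services from the dns directory in the next section, docker network ls will list a dns_default network (output illustrative):

docker network ls
NETWORK ID     NAME          DRIVER    SCOPE
4a10961e5573   dns_default   bridge    local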

3. Using DNS

Docker has a built-in DNS service. It maps aliases, such as the container name, to IP addresses. This keeps containers reachable even though their IP addresses change over time.

Let’s run the compose services and make them communicate on the same network. If we don’t specify any networking, Docker Compose attaches the containers to a shared default bridge network at runtime.

We define the services in the docker-compose.yml file in the dns directory:

services:
  alpine-app-1:
    container_name: alpine-app-1
    image: alpine-app-1
    build:
      context: ..
      dockerfile: Dockerfile
    tty: true
  
  alpine-app-2:
    container_name: alpine-app-2
    image: alpine-app-2
    build:
      context: ..
      dockerfile: Dockerfile
    tty: true

We use the tty option to allocate a pseudo-TTY, which keeps the containers running so we can connect to them interactively.

Once the containers run, they belong to the same network. We can run the docker network inspect dns_default command to verify:

"Containers": {
    "577c6ac4aae4f1e915148ebdc04df9ca997bc919d954ec41334b4a9b14115528": {
        "Name": "alpine-app-1",
        "EndpointID": "247d49a3ccd1590c740b2f4dfc438567219d5edcb6b7d9c1c0ef88c638dba371",
        "MacAddress": "02:42:ac:19:00:03",
        "IPv4Address": "172.25.0.2/16",
        "IPv6Address": ""
    },
    "e16023ac252d73977567a6fb17ce3936413955e135812e9a866d84a3a7a06ef8": {
        "Name": "alpine-app-2",
        "EndpointID": "8bd4907e4fb85e41e2e854bb7c132c31d5ef02a8e7bba3b95065b9c10ec8cbfb",
        "MacAddress": "02:42:ac:19:00:02",
        "IPv4Address": "172.25.0.3/16",
        "IPv6Address": ""
    }
}

The Docker network assigns the containers available IPs, in this case, 172.25.0.2 and 172.25.0.3.

Now, let’s try to ping the second container from the first one. We can start with the container’s IP address:

docker exec -it alpine-app-1 ping 172.25.0.3

More interestingly, we can ping by the container name:

docker exec -it alpine-app-1 ping alpine-app-2

Or, we can even ping by the container ID:

docker exec -it alpine-app-1 ping 577c6ac4aae4

All of them produce a similar ping response:

64 bytes from alpine-app-2.dns_default (172.25.0.3): icmp_seq=1 ttl=64 time=0.152 ms
64 bytes from alpine-app-2.dns_default (172.25.0.3): icmp_seq=2 ttl=64 time=0.192 ms
64 bytes from alpine-app-2.dns_default (172.25.0.3): icmp_seq=3 ttl=64 time=0.159 ms

Pinging in the opposite direction, from the second container to the first, works just as well:

docker exec -it alpine-app-2 ping alpine-app-1
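
We can also query Docker’s embedded DNS service directly. As a quick check, assuming the Alpine image’s busybox nslookup is available:

docker exec -it alpine-app-1 nslookup alpine-app-2

The answer should come from 127.0.0.11, the address where Docker’s embedded DNS server listens inside each container on a user-defined network.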

To understand why, we can examine the network from the container’s perspective. Let’s inspect the alpine-app-2 container:

docker inspect --format='{{json .NetworkSettings.Networks}}' alpine-app-2 | jq .

We can see some network aliases are available:

{
  "dns_default": {
    "IPAMConfig": null,
    "Links": null,
    "Aliases": [
      "alpine-app-2",
      "alpine-app-2",
      "577c6ac4aae4"
    ],
    "NetworkID": "4a10961e55733500114537a9f8b454d256443b8fd50f8a01ef9ee1208c94dac9",
    "EndpointID": "247d49a3ccd1590c740b2f4dfc438567219d5edcb6b7d9c1c0ef88c638dba371",
    "Gateway": "172.25.0.1",
    "IPAddress": "172.25.0.3",
    "IPPrefixLen": 16,
    "IPv6Gateway": "",
    "GlobalIPv6Address": "",
    "GlobalIPv6PrefixLen": 0,
    "MacAddress": "02:42:ac:19:00:03",
    "DriverOpts": null
  }
}

That explains why we can communicate using the name we assigned to the container. Furthermore, name-based discovery is most likely what we need when running, for example, a test suite in which our services simulate a microservice or SOA architecture.
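
As a minimal sketch (the service names, image, and URL below are purely illustrative), an application service could reach a database simply by using the database’s service name as the hostname:

services:
  web-app:
    image: my-web-app # hypothetical application image
    environment:
      # "db" resolves through Docker's built-in DNS
      - DATABASE_URL=postgres://db:5432/testdb
  db:
    image: postgres:15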

If we want to create a network spanning different machines instead, we can look at the Docker overlay network.
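
For reference, assuming Swarm mode is enabled (docker swarm init), we could create an attachable overlay network like this:

docker network create -d overlay --attachable my-overlay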

4. Using a Static IP

As a rule, we don’t need to worry about IP address management: Docker’s built-in networking already handles it, as we saw in the DNS example.

However, we might also want to assign a static IP to a container.

4.1. Static IP With Bridge Network

Let’s look at the docker-compose.yml file in the static_ip_bridge directory:

services:
  alpine-app-1:
    container_name: alpine-app-1
    build:
      context: ..
      dockerfile: Dockerfile
    image: alpine-app-1
    tty: true
    networks:
      network-example:
        ipv4_address: 10.5.0.2

  alpine-app-2:
    container_name: alpine-app-2
    build:
      context: ..
      dockerfile: Dockerfile
    image: alpine-app-2
    tty: true
    networks:
      network-example:
        ipv4_address: 10.5.0.3

networks:
  network-example:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
          gateway: 10.5.0.1

This still creates a subnet, but this time Docker assigns the static IPs we request. We can spot the subnet by running the ifconfig command on our host:

br-2fda6ab68472: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
    inet 10.5.0.1  netmask 255.255.0.0  broadcast 10.5.255.255
    inet6 fe80::42:9dff:fe07:5b59  prefixlen 64  scopeid 0x20<link>
    ether 02:42:9d:07:5b:59  txqueuelen 0  (Ethernet)
    RX packets 0  bytes 0 (0.0 B)
    RX errors 0  dropped 0  overruns 0  frame 0
    TX packets 30  bytes 4426 (4.3 KiB)
    TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

If we inspect the alpine-app-2 container, we can see that this time the IPAM (IP Address Management) configuration carries the static IPv4 address we assigned:

{
  "static_ip_network-example": {
    "IPAMConfig": {
      "IPv4Address": "10.5.0.3"
    },
    "Links": null,
    "Aliases": [
      "alpine-app-2",
      "alpine-app-2",
      "acafb03c009c"
    ],
    "NetworkID": "33ebac5be422f1e8cef8509b69e3a7af55e1da365fe7f9c6fc184159c13bbdee",
    "EndpointID": "ebfd36619104ee368e64042090eb02b2dd3167d0c844b80c40b6ab63eb6afb76",
    "Gateway": "10.5.0.1",
    "IPAddress": "10.5.0.3",
    "IPPrefixLen": 16,
    "IPv6Gateway": "",
    "GlobalIPv6Address": "",
    "GlobalIPv6PrefixLen": 0,
    "MacAddress": "02:42:0a:05:00:05",
    "DriverOpts": null
  }
}

Again, we can successfully ping our alpine-app-2 container at its static IP address:

docker exec -it alpine-app-1 ping 10.5.0.3

However, we can still ping using the name or ID of the container because we are in the same network. Thus, ping alpine-app-2 or ping acafb03c009c are still valid options.

In this case, we can also reach the containers’ addresses, 10.5.0.2 and 10.5.0.3, directly from the Docker host.
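
For example, on a Linux host, where bridge networks are routable from the host, we can ping a container without docker exec:

ping -c 3 10.5.0.3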

4.2. Macvlan

If we want to assign a MAC address to one of our containers, we can look at the macvlan network driver, which connects the container’s network interface directly to the physical network. It works only on Linux hosts; it doesn’t apply to Windows or Mac.

Macvlan networks are special virtual networks that clone the host’s physical network interface and attach containers directly to the LAN.

Let’s look at the docker-compose.yml file in the static_ip_macvlan directory:

services:
  alpine-app-1:
    container_name: alpine-app-1
    build:
      context: ..
      dockerfile: Dockerfile
    image: alpine-app-1
    tty: true
    networks:
      network-example:
        ipv4_address: 192.168.2.2

  alpine-app-2:
    container_name: alpine-app-2
    build:
      context: ..
      dockerfile: Dockerfile
    image: alpine-app-2
    tty: true
    networks:
      network-example:
        ipv4_address: 192.168.2.3

networks:
  network-example:
    driver: macvlan
    driver_opts:
      parent: enp0s3
    ipam:
      config:
        - subnet: 192.168.2.0/24
          gateway: 192.168.2.1

The example is similar to the previous one with a simple bridge network: again, Docker assigns the IPv4 addresses via the IPAM config.

However, it differs in that we specify a parent interface to which the Docker network attaches. Here we use enp0s3, but the same would apply, for instance, to the eth0 interface.
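
For comparison, here’s a sketch of how we could create an equivalent network directly with the Docker CLI, using the same subnet, gateway, and parent interface:

docker network create -d macvlan \
  --subnet=192.168.2.0/24 \
  --gateway=192.168.2.1 \
  -o parent=enp0s3 \
  network-example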

Again, let’s ping the alpine-app-2 container:

docker exec -it alpine-app-1 ping 192.168.2.3

Notably, in this case, we can’t directly access the containers from our host. For the containers to communicate with the host, we need to create a macvlan interface on the Docker host and configure a route to the containers’ macvlan interface.

For example, let’s first create a macvlan interface on our host, macvlan-net, assign it an available IP, and bring the interface up:

sudo ip link add macvlan-net link enp0s3 type macvlan mode bridge
sudo ip addr add 192.168.2.50/32 dev macvlan-net
sudo ip link set macvlan-net up

Finally, we can add an IP route to the Docker macvlan interface:

sudo ip route add 192.168.2.0/24 dev macvlan-net
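
We can then verify the route by pinging one of the containers directly from the host, for example:

ping -c 3 192.168.2.2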

5. Using host.docker.internal

When Docker is installed, it creates a default bridge network, backed by the docker0 interface on the host. Each new Docker container is attached to this network unless we specify otherwise.

We can have a look at its definition by running the command ifconfig docker0:

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
    inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
    ether 02:42:ee:96:b3:94  txqueuelen 0  (Ethernet)
    RX packets 0  bytes 0 (0.0 B)
    RX errors 0  dropped 0  overruns 0  frame 0
    TX packets 0  bytes 0 (0.0 B)
    TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

In this case, the IP is 172.17.0.1. Docker provides a way to resolve this IP address via DNS using the name host.docker.internal.

So, if we add that DNS name to a container’s list of known hosts, the container will be able to communicate with the Docker host.

Although not recommended, this can be useful when we want to run containers locally but still connect to services running on our host.

To demonstrate, this time we’ll set up the containers on different networks, so they’ll be able to communicate only via the Docker host at the host.docker.internal address.

5.1. Creating a Node.js App

So, let’s connect from our Alpine container to a Node.js container called node-app. The Node.js application will expose a “hello world” REST endpoint. Let’s have a look at the package.json:

{
  "name": "host_docker_internal",
  "version": "1.0.0",
  "description": "node js app",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "Baeldung",
  "license": "ISC",
  "dependencies": {
    "express": "^4.18.2"
  }
}

Also, we need to start a server on a port, for example, 8080, and add the “hello world” endpoint:

var express = require('express')
var app = express()

app.get('/', function (req, res) {
    res.send('Hello World!')
})

app.listen(8080, function () {
    console.log('app listening on port 8080!')
})

Finally, we need a Dockerfile:

FROM node:8.16.1-alpine
WORKDIR /app
COPY host_docker_internal/package.json /app
COPY host_docker_internal/index.js /app
RUN npm install
CMD node index.js
EXPOSE 8080
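
Optionally, we can sanity-check the image before wiring it into Compose. Assuming we run the commands from the build context (the parent directory containing Dockerfile.node):

docker build -f Dockerfile.node -t node-app .
docker run --rm -d --name node-app-test -p 8080:8080 node-app
curl http://localhost:8080  # should print: Hello World!
docker stop node-app-test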

5.2. Connecting to host.docker.internal

Let’s create a docker-compose.yml file in the host_docker_internal directory:

services:
  alpine-app-1:
    container_name: alpine-app-1
    extra_hosts: # for linux hosts since version 20.10
      - host.docker.internal:host-gateway
    build:
      context: ..
      dockerfile: Dockerfile
    image: alpine-app-1
    tty: true
    networks:
      - first-network

  node-app:
    container_name: node-app
    build:
      context: ..
      dockerfile: Dockerfile.node
    image: node-app
    ports:
      - 8080:8080
    networks:
      - second-network

networks:
  first-network:
    driver: bridge
  second-network:
    driver: bridge

On Linux hosts, we must add the host-gateway entry to extra_hosts explicitly; Docker Desktop for Windows and Mac resolves host.docker.internal out of the box. Note that the special host-gateway value has only been available since Docker version 20.10.
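
For a plain docker run, the equivalent of the extra_hosts entry is the --add-host flag. As a quick sketch using our Alpine image:

docker run --rm --add-host=host.docker.internal:host-gateway alpine-app-1 \
  ping -c 1 host.docker.internal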

Let’s invoke the endpoint of the node-app container from alpine-app-1. As mentioned, we need to go through host.docker.internal:

docker exec -it alpine-app-1 curl -i -X GET http://host.docker.internal:8080

We’ll get a 200 status with the response:

HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: text/html; charset=utf-8
Content-Length: 12
ETag: W/"c-Lve95gjOVATpfV8EL5X4nxwjKHE"
Date: Tue, 24 Jan 2023 18:33:38 GMT
Connection: keep-alive

Hello World!

Finally, let’s have a look at the /etc/hosts file in the alpine-app-1 container:

127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.1      host.docker.internal
172.26.0.2      81945d220c5e

The host.docker.internal name is now a known host for the container. Although the two containers don’t share a bridge network, they can now communicate through the Docker host, effectively using the docker0 interface as a bridge to reach each other.

6. Conclusion

In this article, we saw a few examples of how Docker containers can communicate on the same machine.

When connecting over the same network, we learned about the built-in DNS service and how to assign static IPs with a bridge or macvlan network. Across different networks, we saw an example of using the host.docker.internal DNS name, which resolves to the Docker host.

As always, we can find working code examples over on GitHub.
