1. Overview

When managing Docker containers, it’s common to run a series of CLI commands manually. While this may be manageable for one or two containers, it can become tedious or error-prone when we’re dealing with multiple services, environments, or configurations. In such a scenario, we can utilize Ansible, an automation tool that helps to execute tasks across systems using human-readable YAML files called playbooks.

Conveniently, Ansible can integrate with other tools such as Docker, enabling us to automate Docker commands, container deployments, or even orchestration with ease.

In this tutorial, we’ll learn how to run Docker commands with Ansible using a practical and easy-to-follow example. In the guide, we’ll utilize Ubuntu Linux with Docker Engine and Ansible.

2. Why Run Docker Commands With Ansible?

So, here are a few reasons why we may consider running Docker or Docker Compose commands with Ansible:

  • To automatically deploy containers as part of CI/CD pipelines
  • To avoid repeating the same command-line steps
  • To simplify the development workflow with automation
  • To configure a local or test environment consistently

With Ansible, we can version control our Docker workflows and reuse them with ease.

3. Problem Statement

In this problem statement, our goal is to run Docker Compose commands with Ansible. As a concrete example, let’s use Ansible to run the following command:

$ docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d

Before we proceed, we need to ensure that Ansible is installed.

Although Ubuntu provides Ansible through the apt package manager, the packaged version often ships with outdated plugin dependencies, which typically surface as Jinja2-related warnings. To avoid such issues and ensure better compatibility, we can install Ansible using pip:

$ sudo apt update && sudo apt install python3-pip -y && pip install --user ansible jinja2

Using the command above, we install Ansible into our user environment (~/.local/bin/), avoiding system-wide conflicts and plugin errors. Thus, we ensure a clean and reproducible setup across Ubuntu systems.
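
If the ansible command isn’t found after installation, the user-level bin directory may not be on the PATH yet. Assuming a Bash shell, we can add it manually:

$ echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc && source ~/.bashrc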

Once installation is complete, we can verify the installation:

$ ansible --version
ansible [core 2.17.13]
...

The display of the Ansible version confirms that the installation was successful.

Notably, we use Docker Engine rather than Docker Desktop, since it’s better suited to consistent deployment workflows and offers broader compatibility with Ansible on Linux.
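
Optionally, we can confirm that Docker Engine and Docker Compose are already present on the system (the playbook we write later can also install them if they’re missing):

$ docker --version && docker-compose --version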

4. Project Setup

Here, we create a project that launches a simple Nginx web server using Docker Compose and use Ansible to automate the process of bringing up the service.

To begin, let’s create the working directory and navigate into it:

$ mkdir ansible-docker-demo && cd ansible-docker-demo

Let’s now proceed to complete the rest of the project.

4.1. Create a Simple Web App

First, let’s create a basic HTML file:

$ mkdir app && touch app/index.html

After creating index.html for Nginx to serve, let’s open the file and add the following content:

<!DOCTYPE html>
<html>
  <head>
    <title>Hello from Ansible</title>
  </head>
  <body>
    <h1>Hello from Docker + Ansible!</h1>
  </body>
</html>

So, we create the HTML file inside the app directory since Nginx, by default, looks for website files in /usr/share/nginx/html. Later, we link our app directory to that path using a Docker volume so that Nginx can serve our HTML file.
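
To illustrate the mapping, the following one-off Docker command would achieve the same read-only mount and port exposure as the Compose setup we define next (a hypothetical manual run, not part of our automated workflow):

$ docker run -d --name web -p 8080:80 -v "$PWD/app:/usr/share/nginx/html:ro" nginx:alpine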

4.2. Create the Docker Compose Files

Next, let’s create the docker-compose.yml file:

version: '3'

services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - ./app:/usr/share/nginx/html:ro

So, the base docker-compose.yml file sets up the core Nginx service, exposing port 8080 and serving our HTML file from the mounted app directory.

Let’s now create the file docker-compose.prod.yml:

version: '3'

services:
  web:
    environment:
      - ENV=production

The override file docker-compose.prod.yml adds an environment variable to the container. Even though it’s a minimal example, it demonstrates how we can use override files to customize deployments for different environments (e.g., dev, staging, production) without modifying the base file.
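
If Docker Compose is already installed locally, we can optionally preview the merged configuration that results from combining the base and override files:

$ docker-compose -f docker-compose.yml -f docker-compose.prod.yml config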

4.3. Create Ansible Inventory File

Ansible uses inventory files to determine which machines to manage. In our case, we want to use Ansible for local development. Thus, let’s create the inventory file inventory.ini with the following content:

[local]
localhost ansible_connection=local ansible_python_interpreter=/usr/bin/python3

The file above does the following things:

  • localhost ansible_connection=local – tells Ansible to execute commands locally instead of over SSH
  • ansible_python_interpreter=/usr/bin/python3 – tells Ansible exactly which Python interpreter to use, so it doesn’t have to guess and emit warnings

If we’re deploying to multiple servers in production, we can expand the inventory:

[webservers]
192.168.10.10 ansible_user=ubuntu
192.168.10.11 ansible_user=ubuntu

Hence, we can reuse the same playbook for local testing and production deployment.
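
To actually target that group, the hosts line of the playbook we write next has to match it. One possible approach is to parameterize the target (a sketch using a hypothetical target_hosts variable):

- name: Run Docker Compose with Ansible
  hosts: "{{ target_hosts | default('local') }}"
  become: yes

We can then run the playbook against the remote group with -e target_hosts=webservers, or keep the default for local testing.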

4.4. Write the Ansible Playbook

In this step, we write the Ansible playbook playbook.yml:

- name: Run Docker Compose with Ansible
  hosts: local
  become: yes
  tasks:

    - name: Ensure Docker is installed
      package:
        name: docker.io
        state: present

    - name: Ensure Docker Compose is installed
      package:
        name: docker-compose
        state: present

    - name: Ensure Docker service is running
      service:
        name: docker
        state: started
        enabled: true

    - name: Pull Docker images
      command: docker-compose -f docker-compose.yml -f docker-compose.prod.yml pull
      args:
        chdir: "{{ playbook_dir }}"

    - name: Run Docker Compose
      command: docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
      args:
        chdir: "{{ playbook_dir }}"

Let’s analyze the playbook playbook.yml:

  • name – helps identify what each part of the playbook is doing in the command output
  • hosts: local – targets the local group defined in our inventory.ini file, instructing Ansible to apply the tasks to the local machine
  • become: yes – executes all tasks with elevated privileges using sudo, which is required for package installation and interacting with the Docker daemon
  • tasks – represents a list of operations that Ansible executes in the specified order

Let’s now break down the task-level parameters:

  • package – makes sure Docker and Docker Compose are installed
  • state – defines the desired condition of a resource: present means the package should be installed, while started ensures the service is running
  • enabled: true – ensures the service starts automatically at boot
  • service – starts the Docker daemon if it’s not already running
  • command – executes our Docker Compose commands exactly as we would type them on the command line
  • chdir – ensures the Docker Compose command runs in the directory where the Compose files are located
  • {{ playbook_dir }} – represents an Ansible variable that resolves to the directory containing the playbook, ensuring file paths are accurate regardless of how we invoke the playbook

So, the playbook installs Docker and Docker Compose if need be, then runs the docker-compose command to bring up the service.
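
As an alternative to the command module, the community.docker collection provides dedicated Compose modules. Below is a minimal sketch assuming the collection is installed (via ansible-galaxy collection install community.docker) and that the Docker Compose v2 plugin is available on the target; exact parameter support may vary between collection versions:

    - name: Run Docker Compose via the community.docker collection
      community.docker.docker_compose_v2:
        project_src: "{{ playbook_dir }}"
        files:
          - docker-compose.yml
          - docker-compose.prod.yml
        state: present

Unlike raw command tasks, such a module typically reports changed status based on what Compose actually did.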

4.5. Run the Playbook

Once everything is set up, we can run the playbook:

$ ansible-playbook -i inventory.ini playbook.yml --ask-become-pass
BECOME password: 

PLAY [Run Docker Compose with Ansible] *******************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************************
ok: [localhost]

TASK [Ensure Docker is installed] ************************************************************************************************************
ok: [localhost]

TASK [Ensure Docker Compose is installed] ****************************************************************************************************
ok: [localhost]

TASK [Ensure Docker service is running] ******************************************************************************************************
ok: [localhost]

TASK [Pull Docker images] ********************************************************************************************************************
changed: [localhost]

TASK [Run Docker Compose] ********************************************************************************************************************
changed: [localhost]

PLAY RECAP ***********************************************************************************************************************************
localhost                  : ok=6    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

The output above shows Ansible executing each task. The --ask-become-pass flag prompts us for the sudo password, which is required because the playbook uses become: yes.

Let’s now confirm everything works as expected:

$ curl http://localhost:8080
<!DOCTYPE html>
<html>
  <head>
    <title>Hello from Ansible</title>
  </head>
  <body>
    <h1>Hello from Docker + Ansible!</h1>
  </body>
</html>

In the output, we can see our Hello from Docker + Ansible! page running inside an Nginx container, all managed through Ansible.
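
Once we’re done testing, we can tear the stack down with the same pair of Compose files:

$ docker-compose -f docker-compose.yml -f docker-compose.prod.yml down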

For troubleshooting, a common error is permission denied while trying to connect to the Docker daemon socket. To resolve it, we ensure our user is in the docker group:

$ sudo usermod -aG docker $USER

After that, we log out and back in (or reboot) so the new group membership takes effect, which resolves the Docker permission issue.

5. Conclusion

In this article, we explored running Docker and Docker Compose commands using Ansible.

We built a simple Nginx app, defined multiple Compose files, and utilized Ansible to automate pulling and running containers with a single command. By combining the power of Ansible with Docker Compose, we can set up consistent environments (reproducibility), add more containers or services easily (scalability), and automate deployment steps.

We can now automate complex infrastructure not only for development, but also across testing and production environments.