
Last updated: August 2, 2025
When managing Docker containers, it’s common to run a series of CLI commands manually. While this may be manageable for one or two containers, it can become tedious or error-prone when we’re dealing with multiple services, environments, or configurations. In such a scenario, we can utilize Ansible, an automation tool that helps to execute tasks across systems using human-readable YAML files called playbooks.
Conveniently, Ansible can integrate with other tools such as Docker, enabling us to automate Docker commands, container deployments, or even orchestration with ease.
In this tutorial, we’ll learn how to run Docker commands with Ansible using a practical and easy-to-follow example. In the guide, we’ll utilize Ubuntu Linux with Docker Engine and Ansible.
So, here are a few reasons why we may consider running Docker or Docker Compose commands with Ansible:
- With Ansible, we can version control our Docker workflows and reuse them with ease.
- Playbooks replace long, error-prone sequences of manual CLI commands with a single repeatable run.
- The same playbook can target local development and remote production hosts, keeping environments consistent.
For the problem statement, we discuss how to run Docker Compose commands with Ansible. To demonstrate, let’s use Ansible to run the following command:
$ docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
Before we proceed, we need to ensure that Ansible is installed.
Although Ubuntu provides Ansible through the apt package manager, that version not uncommonly ships with mismatched plugin dependencies that trigger issues such as Jinja2-related warnings. To avoid such issues and ensure better compatibility, we can install Ansible using pip:
$ sudo apt update && sudo apt install python3-pip -y && pip install --user ansible jinja2
Using the command above, we install Ansible into our user environment (~/.local/bin/), avoiding system-wide conflicts and plugin errors. Thus, we ensure a clean and reproducible setup across Ubuntu systems.
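Since pip install --user places executables under ~/.local/bin, that directory must be on our PATH for the shell to find the ansible binary. If the next verification step fails with a "command not found" error, we can fix the PATH first (a minimal sketch, assuming a bash shell):

```shell
# Ensure pip's user-level bin directory is on PATH for this session
export PATH="$HOME/.local/bin:$PATH"

# Check whether the ansible binary is now resolvable
command -v ansible >/dev/null && echo "ansible found" || echo "ansible not on PATH yet"
```

To make the change permanent, the export line can be appended to ~/.bashrc.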
Once installation is complete, we can verify the installation:
$ ansible --version
ansible [core 2.17.13]
...
The display of the Ansible version confirms that the installation was successful.
Notably, we utilize Docker Engine over Docker Desktop since it’s suitable for consistent deployment workflows and offers broader compatibility with Ansible on Linux.
Here, we create a project that launches a simple Nginx web server using Docker Compose and use Ansible to automate the process of bringing up the service.
To begin, let’s create the working directory and navigate into it to start the project:
$ mkdir ansible-docker-demo && cd ansible-docker-demo
Let’s now proceed to complete the rest of the project.
First, let’s create a basic HTML file:
$ mkdir app && touch app/index.html
After creating index.html for Nginx to serve, let’s open the file and paste:
<!DOCTYPE html>
<html>
<head>
<title>Hello from Ansible</title>
</head>
<body>
<h1>Hello from Docker + Ansible!</h1>
</body>
</html>
So, we create the HTML file inside the app directory since Nginx, by default, looks for website files in /usr/share/nginx/html. Later, we link our app directory to that path using a Docker volume so that Nginx can serve our HTML file.
Next, let’s create the docker-compose.yml file:
version: '3'
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - ./app:/usr/share/nginx/html:ro
So, the base docker-compose.yml file sets up the core Nginx service, exposing port 8080 and serving our HTML file from the mounted app directory.
Let’s now create the file docker-compose.prod.yml:
version: '3'
services:
  web:
    environment:
      - ENV=production
The override file docker-compose.prod.yml adds an environment variable to the container. Even though it’s a minimal example, it demonstrates how we can implement override files to customize deployments for different environments (e.g., dev, staging, production) without modifying the base file.
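When both files are passed with -f, Compose merges the override on top of the base file. For our two files, the effective configuration is roughly the following (what we would expect docker-compose config to print):

```yaml
# Approximate effective configuration after merging the base file
# and the production override
version: '3'
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - ./app:/usr/share/nginx/html:ro
    environment:
      - ENV=production
```

Scalar values in the override replace those in the base, while lists such as environment entries are combined.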
Ansible uses inventory files to determine which machines to manage. In our case, we want to use Ansible for local development. Thus, let’s create the inventory file inventory.ini with the following content:
[local]
localhost ansible_connection=local ansible_python_interpreter=/usr/bin/python3
The file above does the following things:
- defines a group named local containing the single host localhost
- sets ansible_connection=local so Ansible runs tasks directly on the machine instead of over SSH
- points Ansible at the system Python interpreter via ansible_python_interpreter=/usr/bin/python3
If we’re deploying to multiple servers in production, we can expand the inventory:
[webservers]
192.168.10.10 ansible_user=ubuntu
192.168.10.11 ansible_user=ubuntu
Hence, we can reuse the same playbook for local testing and production deployment.
In this step, we write the Ansible playbook playbook.yml:
- name: Run Docker Compose with Ansible
  hosts: local
  become: yes
  tasks:
    - name: Ensure Docker is installed
      package:
        name: docker.io
        state: present

    - name: Ensure Docker Compose is installed
      package:
        name: docker-compose
        state: present

    - name: Ensure Docker service is running
      service:
        name: docker
        state: started
        enabled: true

    - name: Pull Docker images
      command: docker-compose -f docker-compose.yml -f docker-compose.prod.yml pull
      args:
        chdir: "{{ playbook_dir }}"

    - name: Run Docker Compose
      command: docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
      args:
        chdir: "{{ playbook_dir }}"
Let’s analyze the playbook playbook.yml:
- hosts: local targets the local group from our inventory file
- become: yes runs the tasks with elevated (sudo) privileges, which installing packages and managing services requires
- tasks lists the steps Ansible executes in order

Let’s now break down the task-level parameters:
- the package module installs docker.io and docker-compose only if they’re missing, since state: present keeps the tasks idempotent
- the service module starts the Docker daemon and enables it at boot
- the command module runs the docker-compose CLI, with chdir: "{{ playbook_dir }}" resolving the Compose files relative to the playbook’s directory

So, the playbook installs Docker and Docker Compose if need be, then runs the docker-compose command to bring up the service.
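As an alternative to shelling out via the command module, the community.docker collection provides a docker_compose module that reports change status natively. A sketch, assuming the collection has been installed with ansible-galaxy collection install community.docker:

```yaml
# Sketch: replacing the two command tasks with the docker_compose
# module from the community.docker collection (assumed installed)
- name: Run Docker Compose via module
  community.docker.docker_compose:
    project_src: "{{ playbook_dir }}"
    files:
      - docker-compose.yml
      - docker-compose.prod.yml
    pull: yes
    state: present
```

Unlike the raw command tasks, this module lets Ansible detect whether the containers actually changed, so repeated runs report ok instead of changed.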
Once everything is set up, we can run the playbook:
$ ansible-playbook -i inventory.ini playbook.yml --ask-become-pass
BECOME password:
PLAY [Run Docker Compose with Ansible] *******************************************************************************************************
TASK [Gathering Facts] ***********************************************************************************************************************
ok: [localhost]
TASK [Ensure Docker is installed] ************************************************************************************************************
ok: [localhost]
TASK [Ensure Docker Compose is installed] ****************************************************************************************************
ok: [localhost]
TASK [Ensure Docker service is running] ******************************************************************************************************
ok: [localhost]
TASK [Pull Docker images] ********************************************************************************************************************
changed: [localhost]
TASK [Run Docker Compose] ********************************************************************************************************************
changed: [localhost]
PLAY RECAP ***********************************************************************************************************************************
localhost : ok=6 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
The output above shows Ansible executing each task. The --ask-become-pass flag prompts us for the sudo password, since the playbook uses become: yes.
Let’s now confirm everything works as expected:
$ curl http://localhost:8080
<!DOCTYPE html>
<html>
<head>
<title>Hello from Ansible</title>
</head>
<body>
<h1>Hello from Docker + Ansible!</h1>
</body>
</html>
In the output, we can see our Hello from Docker + Ansible! page running inside an Nginx container, all managed through Ansible.
For troubleshooting purposes, let’s explore what we can do in case we encounter the common permission error permission denied while trying to connect to the Docker daemon socket. To resolve the issue, we need to ensure our user is in the docker group:
$ sudo usermod -aG docker $USER
After that, we need to log out and back in (or reboot) so the new group membership takes effect, which resolves the Docker permission issue.
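To confirm that the group change has taken effect in our current session, we can check the active group memberships (a quick sketch):

```shell
# id -nG prints the group names of the current user;
# grep -qw matches the exact word "docker"
if id -nG | grep -qw docker; then
  echo "current session is in the docker group"
else
  echo "docker group not active yet - log out and back in"
fi
```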
In this article, we explored running Docker and Docker Compose commands using Ansible.
We built a simple Nginx app, defined multiple Compose files, and utilized Ansible to automate pulling and running containers with only one command. By combining the power of Ansible with Docker Compose, we can set up consistent environments (reproducibility), add more containers or services easily (scalability), and automate deployment steps.
We can now automate complex infrastructure in testing and production environments, and not only for development.