
Last updated: September 20, 2024
Containers are essential in modern software development, ensuring consistency across environments and simplifying dependency management. Indeed, as a leading containerization platform, Docker enables developers to manage containers with ease. Nevertheless, customization is often necessary, such as adding configuration files or dependencies to existing images, without the need to rebuild from scratch.
In this tutorial, we’ll explore how Docker Compose can be used to add files to standard Docker images, enabling developers to customize and extend images efficiently without modifying the original Dockerfile.
Before diving into customization, it’s essential to first understand the basics of Docker images and Docker Compose.
Docker images are pre-built environments containing everything needed to run an application, such as code, libraries, and settings. In essence, containers are instances of these images.
Docker Compose is a tool that uses a YAML file (docker-compose.yml) to manage multi-container applications, specifying how containers should run, including images, networks, and storage.
With this understanding in mind, we can now explore how to use Docker Compose to add files to existing Docker images.
With the build approach, we bake the necessary files into the Docker image by leveraging Docker Compose to build it. These may include configuration files, SQL scripts, application data, and others. As a result, the files become part of the image itself and travel with it.
To begin with, let’s set up the project structure:
├── docker-compose.yml
└── LocalFolder/
    ├── Dockerfile
    └── configfile/
        └── config.txt
At this point, to verify the included file in the image, we check the contents of the config.txt:
$ cat LocalFolder/configfile/config.txt
This is a demo config file.
Now, we have a skeleton in place. Let’s populate it.
Now, we create a docker-compose.yml file to build a custom image:
services:
  my-config-app:
    build: ./LocalFolder
    image: custom-image
In the file, we define a my-config-app service. The build directive tells Docker Compose to build the image from the LocalFolder directory, while the image directive names the resulting image custom-image.
Then, let’s define the Dockerfile in the LocalFolder:
FROM nginx:alpine
COPY . /LocalFolder
The file extends the nginx:alpine image and copies the build context, i.e., the contents of LocalFolder, into /LocalFolder in the image.
After creating and populating the project structure, we build the image via docker compose:
$ docker compose up --build
At this point, Docker Compose builds the image and bakes the local files into it.
Finally, we verify the files are inside the image by running a container and checking the path within:
$ docker run -it --rm custom-image /bin/sh
/ # cat LocalFolder/configfile/config.txt
This is a demo config file.
From the output, we see that the local files were successfully included in the image.
One of the most efficient ways to add files to a Docker container is through Docker volumes. Instead of rebuilding the image every time a file changes, we can dynamically mount files or directories from the host into the container. This method not only saves time but also ensures that the container can access up-to-date files. Now, let’s walk through the process.
Initially, we determine the local (source) directory and where to place the files inside the container (target).
This is an important step, as the mapping is rarely the same on both sides, i.e., the source and target aren’t the same paths.
After that, let’s define a simple docker-compose.yml file that mounts the files at runtime:
services:
  web:
    image: nginx:alpine
    volumes:
      - ./LocalFolder:/usr/share/LocalFolder
In this case, we use the lightweight nginx:alpine image and mount the local directory ./LocalFolder to /usr/share/LocalFolder inside the container. As a result, any files in ./LocalFolder on the host are accessible in the container.
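Alternatively, the same mapping can be written with Compose’s long volume syntax, which spells out the source and target explicitly. A minimal sketch, assuming the same ./LocalFolder directory:

```yaml
services:
  web:
    image: nginx:alpine
    volumes:
      - type: bind            # bind mount from the host
        source: ./LocalFolder # directory on the host
        target: /usr/share/LocalFolder # path inside the container
        read_only: true       # optional: container can read but not modify
```

The read_only flag is optional; it’s handy when the container should consume the files but never change them.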
Following the configuration, we can now run the container based on the image defined in the YAML file:
$ docker compose up -d
[+] Running 9/9
✔ web Pulled 7.0s
✔ 43c4264eed91 Pull complete 2.9s
✔ 5b19511a843d Pull complete 3.8s
✔ 652d69a25e85 Pull complete 4.0s
✔ 51676974aef5 Pull complete 4.1s
...
Thus, the command above starts the container defined in the YAML configuration, with the volume mounted. Subsequently, let’s check the container ID of the running container:
$ docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS     NAMES
47624ce40bce   nginx:alpine   "/docker-entrypoint.…"   5 seconds ago   Up 4 seconds   80/tcp    docker_compose_test-web-1
As we can observe, the output displays the details of the running container, including the container ID, which in this case is 47624ce40bce.
Finally, we inspect the running container and verify that the files have been mounted:
$ docker exec -it 47624ce40bce /bin/sh
/ # ls
bin docker-entrypoint.d etc lib mnt proc run srv tmp var
dev docker-entrypoint.sh home media opt root sbin sys usr
/ # cd /usr/share/LocalFolder/configfile
/usr/share/LocalFolder/configfile # ls
config.txt
/usr/share/LocalFolder/configfile # cat config.txt
This is a demo config file.
This is a demo config file.
As shown, the local host files are readily available in the container through the Docker volume.
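A key benefit of bind mounts is that changes on the host appear in the container immediately, with no rebuild or restart. As a quick check, reusing the container ID from the earlier output:

```shell
$ echo "An extra line." >> LocalFolder/configfile/config.txt
$ docker exec 47624ce40bce cat /usr/share/LocalFolder/configfile/config.txt
This is a demo config file.
An extra line.
```

The appended line shows up in the container right away, because both sides point at the same files on disk.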
Docker Swarm is a clustering and orchestration tool for Docker containers. It lets us deploy and manage multiple Docker engines as one, ensuring high availability and scalability.
In Swarm mode, we can use bind mounts to map directories from the host into containers, just like we did in the Docker Compose file from earlier. Let’s walk through deploying a stack in Swarm mode.
First, let’s initialize Docker Swarm mode:
$ docker swarm init
This command turns the Docker engine into a Swarm manager, enabling it to manage clusters.
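To confirm the node is now a Swarm manager, we can list the cluster’s nodes (the ID and hostname below are illustrative and vary per machine):

```shell
$ docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS
abc123…                       docker     Ready     Active         Leader
```

The Leader entry under MANAGER STATUS indicates this node manages the Swarm.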
In Docker Swarm, a stack is defined by a docker-compose file, which specifies the services, volumes, and similar elements that make up the application.
So, let’s deploy the stack:
$ docker stack deploy -c docker-compose.yml my_stack
In the above code snippet, the -c flag specifies the Compose file to use, and my_stack is the name of the stack being deployed. After the deployment, we use the docker ps command to check running containers and inspect them to confirm the files are properly mounted.
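In addition to docker ps, Swarm provides stack-aware commands for verification. For instance, assuming the web service from the earlier Compose file, we can list the stack’s services and their tasks:

```shell
$ docker stack services my_stack
$ docker service ps my_stack_web
```

The first command summarizes each service and its replica count, while the second shows where the service’s tasks are running.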
Docker Compose provides different ways to copy files from the host machine into a running container, depending on the version in use. Either method is a fairly easy way to customize a container without rebuilding the image.
Let’s explore how to do this in both Docker Compose V1 and V2.
Docker Compose V1 has no built-in copy command, so we fall back on the standalone docker cp command to copy files directly into a running container:
$ docker cp /path/to/config.txt "$(docker-compose ps -q web)":/usr/share
Here, the config.txt file is copied from the host system to the /usr/share directory inside the web service container. The docker-compose ps -q web command retrieves the container ID of the running service.
Additionally, to verify that the file has been successfully copied, we can inspect the container.
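For instance, assuming the web service and the demo file from earlier, the check is a one-liner:

```shell
$ docker exec "$(docker-compose ps -q web)" cat /usr/share/config.txt
This is a demo config file.
```

If the copy succeeded, the file’s contents are printed from inside the container.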
In Docker Compose V2, the process is slightly different. The docker compose cp command simplifies the task by integrating the copy function directly with Compose:
$ docker compose cp ./config.txt web:/usr/share
This way, we copy config.txt from the local machine to the /usr/share directory inside the web service container. Similar to V1, we can verify the file transfer by inspecting the container.
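As a side note, docker compose cp also works in the opposite direction, copying a file out of the container back to the host:

```shell
$ docker compose cp web:/usr/share/config.txt ./config-copy.txt
```

This is useful for retrieving generated files, such as logs or reports, without attaching a shell to the container.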
In this article, we explored how Docker Compose can be used to customize standard Docker images by adding files and making adjustments without rebuilding from scratch.
Whether through the build feature or using volumes to mount files dynamically, Docker Compose offers a flexible and efficient approach for extending images. Additionally, Docker Swarm enables scaling these customizations across multiple containers.
Overall, Docker Compose simplifies the process of managing, customizing, and deploying containerized applications.