
Last updated: June 21, 2025
Docker popularized containerization, simplifying the process of creating, delivering, and running highly distributed applications. We often manage isolated Docker containers using the docker container command.
Docker initially focused on developers and the development cycle. However, the platform has evolved to offer orchestrated services via a simple and powerful orchestrator.
In this tutorial, we’ll first discuss containerization and why an orchestrator is needed. Then, we’ll explore the differences between Docker containers and orchestrated services, as well as their respective use cases.
Containerizing an application into autonomous units is a crucial step in the software development cycle. However, deploying and managing these containers can quickly become a major challenge. As long as we’re dealing with a few containers, managing them manually via Docker remains feasible. In the event of a failure, we can manually perform a restart or update.
However, in modern architectures, an application is often broken down into several components, each deployed in a separate container. With dozens or even hundreds of containers that must coexist or be distributed across multiple servers, manual management quickly becomes untenable.
This is where orchestration engines come in.
Container orchestration tools automate container management and ensure, in particular:
- automated deployment and scaling of containers across hosts
- load balancing of traffic between instances
- high availability and automatic recovery from failures
- rolling updates without downtime
Among these tools, Kubernetes is undoubtedly the most widespread and comprehensive.
Docker also offers its own integrated orchestrator with Docker Swarm mode. This mode enables the orchestration of Docker containers using the concept of services, which we’ll detail in the following sections.
Running containers in standalone mode involves launching them directly on the local host, without an orchestrator. Each container is isolated and managed manually with the Docker CLI or API.
Let’s consider deploying bael-api, a small REST web service that allows users to discover recent or upcoming courses or tutorials. Let’s also assume that we have a database for storing metadata about courses and articles:
$ docker network create bael-network
$ docker volume create bael-db-data
$ docker container run -d --name db -v bael-db-data:/var/lib/mysql --network bael-network -e MYSQL_ROOT_PASSWORD=secret mysql
$ docker container run -d --name bael-api -p 8080:8080 --network bael-network bael-api
Here we create a custom Docker network to ensure communication between containers, as well as a persistent data volume. Next, we run both containers using docker container run. Note that the mysql image requires a root password (or an explicit opt-out) via an environment variable. The db container remains internal, while bael-api is exposed on port 8080 of the host.
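To confirm that both containers are up, we can list them and call the API through the published port. This is a quick sanity-check sketch; the /courses endpoint is an assumed example path, not part of the original setup:

```shell
# List the running containers attached to our custom network
docker container ls --filter network=bael-network

# Call the API through the port published on the host
# (the /courses endpoint is a hypothetical example)
curl http://localhost:8080/courses
```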
Let’s just say that our REST API is a great success and therefore receives a large volume of traffic. A new challenge arises: The single container can no longer effectively handle the growing load. To cope with this increase in load, we can scale horizontally by manually replicating our bael-api container:
$ docker container run -d --name bael-api_1 -p 8081:8080 --network bael-network bael-api
$ docker container run -d --name bael-api_2 -p 8082:8080 --network bael-network bael-api
We now have three instances of the API running in parallel. However, this introduces another challenge: intelligently load balancing traffic between the different instances.
When we run containers individually using docker container run, there is no built-in mechanism to distribute traffic across multiple containers. To achieve load balancing between containers, we must configure an external load balancer such as Traefik or HAProxy.
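As a sketch, we could run HAProxy itself as a container on the same network and point it at the three instances. The configuration below is a minimal, assumed example, reusing the container names from the previous commands:

```shell
# Minimal HAProxy configuration balancing the three API containers
# (round-robin over the container names resolved on bael-network)
cat > haproxy.cfg <<'EOF'
defaults
  mode http
  timeout connect 5s
  timeout client  30s
  timeout server  30s

frontend api_front
  bind *:80
  default_backend api_back

backend api_back
  balance roundrobin
  server api0 bael-api:8080
  server api1 bael-api_1:8080
  server api2 bael-api_2:8080
EOF

# Run HAProxy on the same network, exposing port 80 on the host
docker container run -d --name lb --network bael-network -p 80:80 \
  -v "$(pwd)/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro" haproxy
```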
When it’s necessary to update the image of our API, the process is equally manual. We stop and then restart the containers with the new version:
$ docker stop bael-api
$ docker rm bael-api
$ docker container run -d --name bael-api -p 8080:8080 --network bael-network bael-api:latest
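With three manually started instances, this stop-and-recreate cycle has to be repeated for each one. A shell loop can sketch the idea, assuming the names and host ports from the earlier manual scaling step:

```shell
# Recreate each instance with the new image, one at a time
# (names and host ports follow the earlier manual scaling step)
port=8080
for name in bael-api bael-api_1 bael-api_2; do
  docker stop "$name"
  docker rm "$name"
  docker container run -d --name "$name" -p "$port:8080" \
    --network bael-network bael-api:latest
  port=$((port + 1))
done
```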
Standalone Docker containers are simple to use, but they quickly reach their limits when it comes to scalability, high availability, and automated management.
To overcome these limitations, Docker introduces a higher level of abstraction with Swarm mode and the concept of services.
To streamline the management of distributed applications in production, Docker has an advanced feature called Docker Swarm mode. Swarm mode is based on the native Docker SwarmKit orchestrator, which is directly integrated into the Docker engine. It allows us to manage a cluster of Docker daemons, called a swarm. We can enable Swarm mode by initializing a Docker Swarm cluster before using the docker service command.
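For example, on a single machine we can turn the local Docker engine into a one-node swarm; the join command printed by docker swarm init can then be run on other hosts to add worker nodes:

```shell
# Initialize a swarm on the current node (it becomes a manager)
docker swarm init

# On another host, join the swarm as a worker using the token
# printed by the init command (token and IP below are placeholders)
# docker swarm join --token <worker-token> <manager-ip>:2377
```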
When moving from standalone containers to swarm mode, the concepts also evolve.
A Service is the main abstraction in Docker Swarm. It defines the desired state of a container. Unlike a Docker container, a Docker service doesn’t create a container but describes a state to be maintained over time:
$ docker network create --driver overlay bael-network
$ docker service create --name bael-api --replicas 3 --update-delay 10s --update-parallelism 1 --publish 80:8080 --network bael-network bael-api:latest
Here, we specify an overlay network to enable communication between different containers located on different nodes. We then ask Docker Swarm to maintain three replicas of our API, accessible via port 80 on the host. Additionally, we apply rolling updates one by one using the --update-parallelism option, with a 10-second delay between each successfully updated instance.
Docker Swarm automatically performs the following actions:
- schedules the three replicas across the available nodes
- distributes incoming traffic on port 80 to the replicas via the routing mesh
- restarts any replica that fails, in order to maintain the desired state
Each replica is represented by a task. A Docker Swarm service, therefore, corresponds to a set of active tasks that Docker automatically orchestrates.
We can check the status of our service using the docker service ls command. We can also check the tasks created using docker service ps.
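For instance, inspecting and rescaling the service from the previous step looks like this:

```shell
# List services and the number of running replicas
docker service ls

# Show the individual tasks (one per replica) and the node they run on
docker service ps bael-api

# Change the desired number of replicas; Swarm reconciles automatically
docker service scale bael-api=5
```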
We can use a Compose YAML file to create our services with the docker stack command. A stack is a collection of services that define and run multi-container applications in Docker Swarm mode.
We can express the deployment of section 3 as follows in a bael-stack.yml file:
version: "3.7"
services:
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
    networks:
      - bael-network
    volumes:
      - bael-db-data:/var/lib/mysql
  bael-api:
    image: bael-api
    networks:
      - bael-network
    ports:
      - "80:8080"
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
volumes:
  bael-db-data:
networks:
  bael-network:
    driver: overlay
This makes it easier to manage multiple services and connect them. The deploy section allows us to customize the behavior of the services.
Next, we can run our stack with the following command:
$ docker stack deploy --compose-file bael-stack.yml bael-stack
We’re therefore deploying a stack called bael-stack with two services: db and bael-api.
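Once deployed, the stack can be inspected and removed with the dedicated docker stack subcommands:

```shell
# List the services created by the stack
docker stack services bael-stack

# Show the tasks of all services in the stack
docker stack ps bael-stack

# Tear the whole stack down when finished
docker stack rm bael-stack
```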
Let’s summarize the fundamental differences between an isolated Docker Container and an orchestrated Docker Service:
| | Docker Container (Standalone) | Docker Service (Swarm) |
|---|---|---|
| Scope | Single Docker host | Cluster of nodes (multi-host) |
| Deployment | Imperative | Declarative |
| High Availability | Not managed | Automatically managed |
| Scaling | Manual | Native |
| Rolling Updates | Manual | Automatically managed |
| Load Balancing | Not natively provided | Built-in, automatic via routing mesh |
| Use Case | Local development, testing, simple workloads | Production environments, scalability, fault tolerance |
If our use case is limited to a single host and we don’t require advanced orchestration features, Docker Container with Compose may be sufficient. For production environments that need scalability, fault tolerance, and orchestration across multiple hosts, we recommend Docker Swarm.
In this article, we explored standalone containers and Docker's native orchestrator. We saw how Docker Service, through Swarm mode, overcomes the limits of standalone Docker Container management.
Understanding this distinction allows us to better structure deployments, optimize resources, and improve the reliability of our applications.