
Last updated: August 2, 2025
We’ve all experienced that frustrating moment when our Jenkins pipeline fails with “Cannot connect to the Docker daemon.” This error often appears suddenly, even though our build was working perfectly the day before.
This connection failure occurs because Jenkins can’t communicate with Docker’s background service. In this tutorial, we’ll explore why this happens and walk through several solutions.
Before we fix the connection issues, we need to understand how Jenkins and Docker communicate. This knowledge enables us to select the appropriate solution and prevent future problems.
The Docker daemon is the heart of Docker’s architecture. Essentially, it’s a background service that does all the heavy lifting—building images, running containers, and managing networks.
To illustrate this concept, think of it as a restaurant kitchen: we place orders (Docker commands) at the counter (Docker client), and the kitchen (daemon) prepares everything behind the scenes.
Furthermore, the daemon typically listens on a Unix socket at /var/run/docker.sock.
Consequently, when we run docker build, our command goes through this socket to reach the daemon. The daemon requires root privileges because it needs to manage kernel-level features, like namespaces and cgroups.
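To see this socket-based communication in action, we can query the daemon’s HTTP API directly through the socket. This is a minimal sketch, assuming curl is installed on the host:
$ curl --unix-socket /var/run/docker.sock http://localhost/version
If the daemon is running and we have access to the socket, this returns a JSON payload with version details. Without access, we get a permission denied error instead, which is exactly the failure mode we’ll diagnose below.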
Jenkins uses the Docker CLI to execute our build commands. The Docker CLI is the standard client tool for interacting with the Docker daemon. For instance, we can build our Docker image in an example pipeline:
stage('Build') {
    steps {
        sh 'docker build -t myapp .'
    }
}
In this scenario, Jenkins executes this command as the jenkins user. This user needs permission to access the Docker socket, just like we need a key to enter a building. Without proper access, the command fails.
Finally, Jenkins doesn’t have any special privileges by default—it’s treated like any other system user trying to access Docker.
We encounter two main error types during Jenkins Docker operations. First, there’s the connection error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
This means Jenkins can’t find or reach the Docker daemon. Typically, it indicates that the Docker service is stopped or that the socket file is missing entirely.
Second, we’ll see permission errors:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock
This tells us Jenkins found the daemon but lacks permission to use it. Specifically, this error points to Unix file permission issues rather than Docker service problems.
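To see exactly what Jenkins sees, we can reproduce the call as the jenkins user. This is a quick diagnostic sketch, assuming we have sudo access on the host:
$ sudo -u jenkins docker ps
If the daemon is stopped, this prints the connection error; if it’s purely a permissions problem, it prints the permission denied message instead.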
Understanding why these errors occur helps us apply the correct fix.
The most common cause is insufficient permissions. Indeed, Docker restricts socket access for security reasons. Let’s check the socket permissions:
$ ls -la /var/run/docker.sock
srw-rw---- 1 root docker 0 Jul 24 08:28 /var/run/docker.sock
The output tells us only the root user and members of the docker group can access this socket. Consequently, if Jenkins isn’t in the docker group, it can’t communicate with Docker.
Furthermore, these permissions are set during Docker installation and remain consistent across most Linux distributions. The 660 permission mode (rw-rw----) ensures that both the owner and group members have read and write access, while others have no access at all.
Sometimes Docker isn’t running or is configured differently than expected. First, we can check the service status:
$ systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; preset: enabled)
Active: active (running) since Thu 2025-07-24 08:30:38 UTC; 11min ago
...
If Docker is inactive or failed, Jenkins can’t connect regardless of permissions. Additionally, the daemon might be configured to listen on a TCP port instead of the Unix socket.
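If the status shows the service is stopped, we can bring it back up and enable it at boot before looking any further (assuming Docker is managed by systemd, as in a standard package installation):
$ sudo systemctl enable --now docker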
We can check if Docker is configured for TCP in the service file:
$ grep ExecStart /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375
If configured for TCP only, we’d connect like this:
$ docker -H tcp://localhost:2375 ps
When Docker listens only on TCP, the Unix socket won’t exist. We’d need to set the DOCKER_HOST environment variable or use the -H flag with every command.
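For instance, exporting DOCKER_HOST once saves us from repeating the flag on every call. This is a small sketch, assuming the daemon listens on port 2375 on the local machine:
$ export DOCKER_HOST=tcp://localhost:2375
$ docker ps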
Some enterprise environments disable the Unix socket entirely for security reasons, forcing all communication through authenticated TCP connections. This setup shows up in the Docker daemon’s startup parameters, which we can inspect in the systemd service file as shown above.
When Jenkins runs inside a container, it exists in an isolated environment. Specifically, the container can’t see the host’s Docker socket unless we explicitly share it.
Container isolation is a security feature, not a bug. Docker deliberately prevents containers from accessing the host system to maintain security boundaries. Therefore, any access to host resources requires explicit configuration. For example, we’d mount the socket like this:
...
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
...
We’ll explore this solution further later.
For Jenkins installed directly on the host, the cleanest solution is granting group permissions. Specifically, this approach maintains security while providing necessary access. Let’s walk through this process.
First, let’s verify the docker group exists:
$ getent group docker
docker:x:986:baeldung
If we don’t see output, we need to create the group:
$ sudo groupadd docker
Most Docker installations create this group automatically, so we’ll likely find it already exists. Additionally, the docker group receives special treatment from the Docker daemon: by default, the daemon assigns group ownership of its Unix socket to a group named docker when it creates the socket.
Now, let’s add jenkins to the docker group:
$ sudo usermod -aG docker jenkins
This command adds the jenkins user to the docker group: the -G flag specifies the group, while -a appends it to the user’s existing groups instead of replacing them. Subsequently, the change grants Jenkins permission to access the Docker socket.
However, it’s important to note that this change doesn’t take effect immediately for running processes: usermod only updates the /etc/group file, and a process picks up its group memberships when it starts. We therefore need to restart the Jenkins service for the new membership to apply.
Let’s confirm our change worked:
$ id jenkins
uid=124(jenkins) gid=125(jenkins) groups=125(jenkins),986(docker)
We should see “docker” in the groups list. If not, our command might have failed, or we need to restart the Jenkins service. Additionally, we can double-check by examining the /etc/group file directly to ensure the jenkins user appears in the docker group line.
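For instance, a quick grep shows the group line; the exact GID and member list will vary from system to system:
$ grep '^docker:' /etc/group
docker:x:986:baeldung,jenkins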
Group changes require a service restart to take effect. Therefore, let’s restart Jenkins:
$ sudo systemctl restart jenkins
Now, we can test Docker access by creating a simple Jenkins job with:
pipeline {
    agent any
    stages {
        stage('Test Docker Access') {
            steps {
                sh 'docker version'
                sh 'docker ps'
            }
        }
    }
}
If successful, we’ll see version information without errors. Our Jenkins can now build Docker images.
Sometimes group permissions aren’t enough, especially when Jenkins and Docker run on different systems. In these cases, TCP exposure lets Docker listen on a network port, enabling remote access.
We need to modify how Docker starts. First, let’s find the service file:
$ systemctl show -p FragmentPath docker
FragmentPath=/usr/lib/systemd/system/docker.service
This command reveals the exact location of Docker’s systemd service file. We need to edit this file with appropriate permissions. We should also create a backup before making changes, as incorrect modifications can prevent Docker from starting.
Let’s edit the service file and find the ExecStart line. Then, we’ll add TCP listener parameters:
...
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://127.0.0.1:2375 --containerd=/run/containerd/containerd.sock
...
The -H tcp://127.0.0.1:2375 option tells Docker to listen on port 2375. Specifically, we use 127.0.0.1 (localhost) for security—this prevents external access. Additionally, the -H fd:// parameter maintains systemd socket activation compatibility, ensuring Docker starts correctly with systemd.
After saving our changes, we need to reload systemd and restart Docker:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
Subsequently, we can verify the TCP listener is active:
$ netstat -tlnp | grep 2375
This verification step ensures our configuration changes are active.
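We can also confirm that the jenkins user can reach the new endpoint. This is a quick sketch, assuming the daemon now listens on 127.0.0.1:2375 as configured above:
$ sudo -u jenkins docker -H tcp://127.0.0.1:2375 ps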
When Jenkins runs in a container, it needs access to the host’s Docker socket. We achieve this through volume mounting—essentially creating a window between the container and host.
Socket mounting shares the host’s /var/run/docker.sock file with the container. This gives the Jenkins container full control over the host’s Docker daemon.
Additionally, mounted sockets maintain the same permission requirements as on the host, so the container’s user still needs appropriate access rights.
Let’s create a Docker Compose configuration that mounts the socket:
version: '3.8'
services:
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - "8080:8080"
    volumes:
      - jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    user: jenkins
volumes:
  jenkins_home:
The crucial line is the socket mount. Specifically, we’re telling Docker to share the host’s socket with the container at the same path.
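To try this out, we can bring the stack up and check that the socket is visible inside the container; this assumes the configuration above is saved as docker-compose.yml in the current directory:
$ docker compose up -d
$ docker compose exec jenkins ls -la /var/run/docker.sock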
For Kubernetes deployments, we configure the pod template differently:
spec:
  containers:
    - name: jenkins-agent
      volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
  volumes:
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock
This configuration tells Kubernetes to mount the host’s Docker socket into our pod. Kubernetes adds another layer of complexity with its security policies, which might block hostPath volumes by default.
Therefore, we might need to adjust PodSecurityPolicies or SecurityContextConstraints depending on our cluster configuration.
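Once the agent pod is running, we can verify the mount from inside it. As a sketch, assuming the pod is named jenkins-agent:
$ kubectl exec jenkins-agent -- ls -la /var/run/docker.sock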
Docker daemon connection issues in Jenkins stem from permissions, socket visibility, or service configuration. Host-based Jenkins typically needs group membership, while containerized deployments require socket mounting.
Security remains paramount—we should choose the solution that matches our environment’s requirements. With a proper understanding of these components, we can confidently resolve any Docker-Jenkins connection issue.