Last updated: June 2, 2025
Docker is a platform that packages applications and their dependencies into lightweight containers. These containers use virtual Ethernet interfaces to communicate over networks, inheriting their MTU (Maximum Transmission Unit) settings from the underlying Docker network or host system. When MTU values are inconsistent, whether between containers, hosts, or external infrastructure, it can lead to issues like dropped packets, failed connections, or degraded performance.
In this tutorial, we’ll look at how to change MTU settings in a Docker container. We’ll cover how to apply global changes through the Docker daemon, set MTU values for custom bridge networks, and adjust them manually for specific containers.
MTU, or Maximum Transmission Unit, refers to the largest packet size that can be transmitted across a network interface without fragmentation. Most Ethernet networks use an MTU of 1500 bytes. However, environments that use tunneling, VPNs, or cloud overlays may enforce smaller MTUs, often around 1400 bytes or less.
Docker uses virtual Ethernet (veth) interfaces paired with bridge networks like docker0. By default, these interfaces inherit the MTU from the host or a value Docker selects during startup. When a container sends packets larger than the configured MTU and they aren’t fragmented properly, the packets may be dropped or cause poor connectivity. This problem becomes more noticeable with latency-sensitive applications or large payloads.
Docker supports multiple methods to configure MTU values. These range from system-wide settings to more granular configurations for specific networks or containers. The most consistent method involves setting the MTU globally through the Docker daemon.
To set a global MTU value that affects containers using the default bridge network, we should modify the Docker daemon's configuration file, typically located at /etc/docker/daemon.json:
{
"mtu": 1400
}
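If daemon.json already holds other settings, the mtu key should be merged in rather than overwriting the whole file. Here's a minimal sketch that performs the merge on a temporary copy (the log-driver key is just a stand-in for existing configuration); pointing it at /etc/docker/daemon.json with root privileges applies it for real:

```shell
# Merge "mtu": 1400 into a daemon.json-style file without discarding
# existing keys. We operate on a temporary copy for the demo.
CONF=$(mktemp)
echo '{"log-driver": "json-file"}' > "$CONF"   # stand-in for existing config

python3 - "$CONF" <<'EOF'
import json, sys

path = sys.argv[1]
with open(path) as f:
    config = json.load(f)
config["mtu"] = 1400          # global MTU for the default bridge
with open(path, "w") as f:
    json.dump(config, f, indent=2)
EOF

cat "$CONF"
```

Merging this way keeps unrelated daemon options, such as logging or storage settings, intact.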
After updating the file, let’s restart the Docker service:
$ sudo systemctl restart docker
New containers launched after this point will use the specified MTU. Existing containers will continue using the MTU they were created with and should be recreated to reflect the update.
This method ensures consistency of the MTU value across all containers connected to the default bridge on the host.
For finer control, Docker allows creating custom bridge networks with specific MTU settings using the --opt flag:
$ docker network create \
--driver=bridge \
--opt com.docker.network.driver.mtu=1400 \
custom-net
Any container connected to custom-net inherits the MTU defined by com.docker.network.driver.mtu. This is useful in environments where containers interact with services across networks with different MTU limits.
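We can also read the option back from Docker's side with docker network inspect and a Go template. As a sketch (the network name custom-net comes from the example above):

```shell
# Query the MTU option recorded on a user-defined bridge network.
# The Go template indexes into the driver options map that was set
# with --opt at creation time.
net_mtu() {
    docker network inspect \
        --format '{{index .Options "com.docker.network.driver.mtu"}}' \
        "$1"
}

# net_mtu custom-net   # prints 1400 once the network exists
```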
Let’s verify the MTU in a container attached to this network:
$ docker run --rm --net custom-net alpine ip link show eth0
3: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP mode DEFAULT group default
This output confirms the MTU setting for eth0, reflecting the custom configuration.
In some situations, it may be necessary to change the MTU value inside a container manually during runtime. While this approach is temporary and requires elevated permissions, it’s valuable for debugging.
Let's start the container with the NET_ADMIN capability:
$ docker run -it --rm --cap-add=NET_ADMIN alpine /bin/sh
Then, we run the following command inside the container:
$ ip link set dev eth0 mtu 1400
It’s important to note that this MTU change applies only to the current session and will reset when the container stops. This approach is best suited for temporary adjustments or when troubleshooting connectivity issues in one-off cases.
Regardless of whether the MTU is set through the Docker daemon or a custom network, confirming that the configuration is in effect remains essential. Verifying MTU settings ensures that containers communicate using the expected packet size, which helps avoid fragmentation or connection issues.
One reliable way to check this is by inspecting the container’s network interface using the ip link command. This approach provides a direct view of the MTU applied inside the container and helps catch discrepancies early.
We can check this, for example, in an Alpine container:
$ docker run --rm alpine ip link show eth0
3: eth0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP mode DEFAULT group default
In this output, the mtu 1400 part shows the actual MTU in use for the container’s eth0 interface. This value should match the one configured earlier. If it doesn’t, it could indicate a misconfiguration or an override from a different Docker network setting. Regularly checking this helps catch such issues early, especially when working across varying environments like local setups, VPNs, or cloud platforms.
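To catch such mismatches systematically, a small helper can compare a running container's eth0 MTU with the MTU of the host's default interface. This is only a sketch; the interface names and the use of /sys/class/net are assumptions that hold on typical Linux hosts:

```shell
# Compare a container's eth0 MTU against the host's default-route
# interface MTU and flag containers that could emit oversized frames.
check_mtu() {
    host_if=$(ip route show default | awk '{print $5; exit}')
    host_mtu=$(cat "/sys/class/net/$host_if/mtu")
    container_mtu=$(docker exec "$1" cat /sys/class/net/eth0/mtu)
    if [ "$container_mtu" -gt "$host_mtu" ]; then
        echo "mismatch: container $container_mtu > host $host_mtu"
    else
        echo "ok: container $container_mtu <= host $host_mtu"
    fi
}

# check_mtu my-container   # e.g. "ok: container 1400 <= host 1500"
```

A container MTU at or below the host MTU is generally safe; a larger one is a red flag worth investigating.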
MTU mismatches between a container and its host or surrounding network can cause subtle issues like slow HTTP responses, TLS failures, or timeouts, especially over tunnels or overlay networks.
To test for these problems, we can use the ping command with the Don’t Fragment flag (-M do), which tells the network to drop packets that exceed the path MTU instead of fragmenting them:
$ ping -c 4 -s 1472 -M do <destination-ip>
The -s 1472 option sets the payload size, and when combined with 28 bytes of IP and ICMP overhead, the total packet size reaches 1500 bytes, the default Ethernet MTU. If the ping fails, the path likely supports a lower MTU. Reducing the size gradually helps identify the maximum allowed MTU.
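Stepping the size down by hand gets tedious, so the probe can be scripted. Below is a rough sketch that lowers the payload until a Don't Fragment ping succeeds and reports the resulting path MTU; the destination and the step size are illustrative:

```shell
# Walk the ICMP payload size down until a ping with the Don't
# Fragment flag gets through, then report payload + 28 bytes of
# IP/ICMP overhead as the usable path MTU.
probe_mtu() {
    dest=$1
    size=1472                        # 1472 + 28 = 1500, standard Ethernet
    while [ "$size" -gt 0 ]; do
        if ping -c 1 -W 1 -s "$size" -M do "$dest" >/dev/null 2>&1; then
            echo $((size + 28))
            return 0
        fi
        size=$((size - 10))          # coarse step; refine as needed
    done
    return 1
}

# probe_mtu 8.8.8.8   # prints e.g. 1500 on an untunneled link
```

A coarse step narrows the range quickly; re-running with a step of 1 around the failure point pins down the exact path MTU.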
For deeper inspection, tools like tcpdump, iftop, or ethtool can help spot dropped packets and retransmissions caused by mismatched values.
Reliable MTU configuration in Docker environments calls for thoughtful planning. Docker offers several ways to set MTU, but applying them carelessly can lead to connectivity problems, so it pays to keep values consistent across containers and hosts and to verify them after every change.
In this article, we covered a range of techniques for modifying the MTU of Docker containers. We began by explaining how to apply MTU settings globally through the Docker daemon to ensure consistency across containers. From there, we explored how to tailor MTU values for custom bridge networks, followed by a look at manual adjustments for testing scenarios. In addition, we examined how to verify these settings using command-line tools and addressed common connectivity issues that can result from mismatched MTU values.
Ultimately, by keeping container MTU settings in sync with the host and the broader network infrastructure, teams can minimize packet loss, avoid fragmentation, and maintain consistent performance, even in environments that involve tunnels, VPNs, or overlay networks.