1. Overview

Docker is a platform that packages applications and their dependencies into lightweight containers. These containers use virtual Ethernet interfaces to communicate over networks, inheriting their MTU (Maximum Transmission Unit) settings from the underlying Docker network or host system. When MTU values are inconsistent, whether between containers, hosts, or external infrastructure, it can lead to issues like dropped packets, failed connections, or degraded performance.

In this tutorial, we’ll look at how to change MTU settings in a Docker container. We’ll cover how to apply global changes through the Docker daemon, set MTU values for custom bridge networks, and adjust them manually for specific containers.

2. Understanding MTU and Docker Networking

MTU, or Maximum Transmission Unit, refers to the largest packet size that can be transmitted across a network interface without fragmentation. Most Ethernet networks use an MTU of 1500 bytes. However, environments that use tunneling, VPNs, or cloud overlays may enforce smaller MTUs, often around 1400 bytes or less.

Docker uses virtual Ethernet (veth) interfaces paired with bridge networks like docker0. By default, these interfaces inherit the MTU from the host or a value Docker selects during startup. When a container sends packets larger than the configured MTU and they aren’t fragmented properly, the packets may be dropped or cause poor connectivity. This problem becomes more noticeable with latency-sensitive applications or large payloads.
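To see this inheritance on a given host, we can compare the MTU reported by the physical interface with that of the docker0 bridge. As a quick sketch (the interface name eth0 is an assumption; it may be ens3 or enp0s3 on other systems):

```shell
# Print the MTU of the host's main interface and Docker's default bridge.
# The interface name eth0 is an assumption -- adjust it for your system.
ip link show eth0 | grep -o 'mtu [0-9]*'
ip link show docker0 | grep -o 'mtu [0-9]*'
```

Both typically report 1500 unless Docker or the host has been configured otherwise.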

3. Changing Docker MTU

Docker supports multiple methods to configure MTU values. These range from system-wide settings to more granular configurations for specific networks or containers. The most consistent method involves setting the MTU globally through the Docker daemon.

3.1. Configuring MTU via the Docker Daemon

To set a global MTU value that affects containers using the default bridge network, we should modify the Docker daemon’s configuration file, which is typically located at /etc/docker/daemon.json:

{
  "mtu": 1400
}

After updating the file, let’s restart the Docker service:

$ sudo systemctl restart docker

New containers launched after this point will use the specified MTU. Existing containers keep the MTU they were created with and must be recreated to pick up the new value.

This method ensures consistency of the MTU value across all containers connected to the default bridge on the host.
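Putting the steps together, a minimal sketch of the whole workflow might look like this (assuming daemon.json contains no other settings, since the first command would overwrite them):

```shell
# Set a global MTU of 1400 for the default bridge network.
# Warning: this overwrites any existing daemon.json -- merge by hand if needed.
echo '{ "mtu": 1400 }' | sudo tee /etc/docker/daemon.json

# Restart the daemon so the setting takes effect
sudo systemctl restart docker

# A freshly started container should now report the new value
docker run --rm alpine ip link show eth0 | grep -o 'mtu [0-9]*'
```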

3.2. Setting MTU on a Custom Docker Network

For finer control, Docker allows creating custom bridge networks with specific MTU settings using the --opt flag:

$ docker network create \
  --driver=bridge \
  --opt com.docker.network.driver.mtu=1400 \
  custom-net

Any container connected to custom-net inherits the MTU defined by com.docker.network.driver.mtu. This is useful in environments where containers interact with services across networks with different MTU limits.

Let’s verify the MTU in a container attached to this network:

$ docker run --rm --net custom-net alpine ip link show eth0
3: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP mode DEFAULT group default

This output confirms the MTU setting for eth0, reflecting the custom configuration.

3.3. Changing MTU at Container Runtime

In some situations, it may be necessary to change the MTU value inside a container manually during runtime. While this approach is temporary and requires elevated permissions, it’s valuable for debugging.

First, let’s start a container with the NET_ADMIN capability:

$ docker run -it --rm --cap-add=NET_ADMIN alpine /bin/sh

Then, we run the following command inside the container:

$ ip link set dev eth0 mtu 1400

It’s important to note that this MTU change applies only to the current session and will reset when the container stops. This approach is best suited for temporary adjustments or when troubleshooting connectivity issues in one-off cases.
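For scripted checks, the same change can be made non-interactively in a single run. This sketch applies the MTU and immediately prints it back:

```shell
# One-shot variant: lower the MTU and print the result in a single run
docker run --rm --cap-add=NET_ADMIN alpine sh -c \
  'ip link set dev eth0 mtu 1400 && ip link show eth0 | grep -o "mtu [0-9]*"'
```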

4. Verifying MTU Settings

Regardless of whether the MTU is set through the Docker daemon or a custom network, confirming that the configuration is in effect remains essential. Verifying MTU settings ensures that containers communicate using the expected packet size, which helps avoid fragmentation or connection issues.

One reliable way to check this is by inspecting the container’s network interface using the ip link command. This approach provides a direct view of the MTU applied inside the container and helps catch discrepancies early.

We can check this, for example, in an Alpine container:

$ docker run --rm alpine ip link show eth0
3: eth0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP mode DEFAULT group default

In this output, the mtu 1400 part shows the actual MTU in use for the container’s eth0 interface. This value should match the one configured earlier. If it doesn’t, it could indicate a misconfiguration or an override from a different Docker network setting. Regularly checking this helps catch such issues early, especially when working across varying environments like local setups, VPNs, or cloud platforms.

5. Detecting MTU Mismatches Between Host and Container

MTU mismatches between a container and its host or surrounding network can cause subtle issues like slow HTTP responses, TLS failures, or timeouts, especially over tunnels or overlay networks.

To test for these problems, we can use the ping command with the Don’t Fragment flag (-M do), which tells the network to drop packets that exceed the path MTU instead of fragmenting them:

$ ping -c 4 -s 1472 -M do <destination-ip>

The -s 1472 option sets the payload size, and when combined with 28 bytes of IP and ICMP overhead, the total packet size reaches 1500 bytes, the default Ethernet MTU. If the ping fails, the path likely supports a lower MTU. Reducing the size gradually helps identify the maximum allowed MTU.
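The same arithmetic applies to any target MTU: subtract the 28 bytes of IP and ICMP headers from the MTU to get the payload size to pass to -s. A small sketch:

```shell
# Derive the ping payload size for a given target MTU
# (20-byte IP header + 8-byte ICMP header = 28 bytes of overhead)
target_mtu=1400
payload=$((target_mtu - 28))
echo "$payload"    # 1372

# Probe the path with Don't Fragment set (destination is a placeholder):
# ping -c 4 -s "$payload" -M do <destination-ip>
```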

For deeper inspection, tools like tcpdump, iftop, or ethtool can help spot dropped packets and retransmissions caused by mismatched values.

6. Best Practices

Reliable MTU configuration in Docker environments calls for thoughtful planning. Docker offers several ways to set MTU, but applying them carelessly can lead to connectivity problems. These best practices help maintain consistent and predictable networking across containers and hosts:

  • Set MTU at the Docker daemon level for consistency: Defining the MTU in the Docker daemon config ensures all containers using the default bridge network start with the correct value. This avoids mismatches and reduces issues like packet drops, especially when working with VPNs or overlay networks.
  • Use custom networks for segmented MTU control: When containers need different MTU settings, for instance, internal traffic versus external endpoints, creating custom Docker networks with specific MTU values gives better control. This keeps each container operating with the right settings without affecting the host or other networks.
  • Avoid one-off runtime changes in production: Manually changing MTU inside a running container might help for testing, but the setting won’t persist after a restart. Using this in production makes the setup harder to manage and less predictable. It’s best reserved for quick debugging or temporary use.
  • Verify MTU values regularly: After configuring MTU, it’s important to confirm that it’s applied. Tools like ip link show, ping -M do, and tcpdump help check the MTU on container interfaces and spot mismatches early. This is especially useful across environments like on-premises, cloud, or VPNs, where network setups can differ.
  • Align container MTU with host and infrastructure limits: MTU mismatches between containers, hosts, or network paths can cause hard-to-diagnose issues. Docker’s MTU should not exceed the host’s interface limits. In setups involving tunnels or overlays, always match MTU to the lowest size across the path to avoid packet loss and maintain performance.
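As one way to follow the verification practice above, a small loop can report the MTU of every running container by reading sysfs, which works even in minimal images. This assumes each container’s primary interface is eth0:

```shell
# Report the eth0 MTU of every running container.
# Reads /sys/class/net directly, so no extra tooling is needed in the image.
for c in $(docker ps --format '{{.Names}}'); do
  mtu=$(docker exec "$c" cat /sys/class/net/eth0/mtu)
  echo "$c: $mtu"
done
```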

7. Conclusion

In this article, we covered a range of techniques for modifying the MTU of Docker containers. We began by explaining how to apply MTU settings globally through the Docker daemon to ensure consistency across containers. From there, we explored how to tailor MTU values for custom bridge networks, followed by a look at manual adjustments for testing scenarios. In addition, we examined how to verify these settings using command-line tools and addressed common connectivity issues that can result from mismatched MTU values.

Ultimately, by keeping container MTU settings in sync with the host and the broader network infrastructure, teams can minimize packet loss, avoid fragmentation, and maintain consistent performance, even in environments that involve tunnels, VPNs, or overlay networks.