1. Overview

We often have access to multiple network connections, like Wi-Fi, Ethernet, and cellular data. However, despite having these options, we can usually only use one connection at a time on Linux. This might leave us wondering: is there a way to merge these connections for a better overall experience?

In this tutorial, we’ll delve into combining multiple connections into one using network bonding. Further, we’ll demonstrate the most popular methods to achieve this.

Notably, most of the commands require root privileges to function, so it’s usually necessary to use sudo.

2. What Is Network Bonding?

Network bonding in Linux means merging multiple network interfaces into a single logical one. It’s also known as link aggregation, NIC teaming, and port trunking, among other names.

In particular, this method offers three main benefits:

  • increased bandwidth: combines multiple network interfaces to increase the total available bandwidth
  • load balancing: distributes network traffic across multiple interfaces to optimize performance and prevent congestion
  • redundancy: ensures fault tolerance by seamlessly rerouting traffic to functional interfaces if one or more fail

One important aspect of network bonding is choosing the bonding mode, which determines how the merged network interfaces work together. There are seven bonding modes, each using a different policy to manage traffic load and offering varying levels of load balancing and fault tolerance:

Mode | Description | Load Balancing | Fault Tolerance
0, balance-rr | Transmits packets sequentially from the first slave interface to the last. | Yes | Yes
1, active-backup | Keeps one interface active at a time; if it fails, another interface takes over. | No | Yes
2, balance-xor | Distributes packets across all interfaces based on a hash of the source and destination addresses. | Yes | Yes
3, broadcast | Transmits packets on all slave interfaces simultaneously. | No | Yes
4, 802.3ad | Creates aggregation groups per the IEEE 802.3ad (LACP) standard; interfaces within a group must share the same speed and duplex settings. | Yes | Yes
5, balance-tlb | Adaptive transmit load balancing: distributes outgoing traffic based on the current load of each slave interface. | Yes | Yes
6, balance-alb | Adaptive load balancing: extends balance-tlb and also balances incoming IPv4 traffic using ARP negotiation. | Yes | Yes

As usual, it’s best to choose the mode that matches the current requirements; otherwise, we risk degraded performance or a non-functional bond. That said, one of the most common bonding modes is balance-rr (round-robin).
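Since a bonding mode can be referred to either by number or by name, a tiny lookup can translate between the two. Below is a hypothetical Bash helper (bond_mode_name is our own name, not a system command) mirroring the table above:

```shell
#!/usr/bin/env bash
# Hypothetical helper: map a numeric bonding mode to its name.
# The pairs mirror the bonding-mode table above.
declare -A BOND_MODES=(
  [0]=balance-rr
  [1]=active-backup
  [2]=balance-xor
  [3]=broadcast
  [4]=802.3ad
  [5]=balance-tlb
  [6]=balance-alb
)

bond_mode_name() {
  # Fall back to "unknown" for numbers outside 0-6.
  echo "${BOND_MODES[$1]:-unknown}"
}

bond_mode_name 0   # prints balance-rr
bond_mode_name 4   # prints 802.3ad
```

This kind of lookup is handy when reading configurations that use the numeric form, such as mode=0.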

3. Loading the Bonding Kernel Module

The very first step in setting up bonding is to ensure the bonding kernel module is loaded. We can use the modprobe command to load it into memory:

$ sudo modprobe bonding

Loading can still fail, for instance when the module isn’t available for the running kernel. That’s why it’s best to confirm the bonding driver is available and check its details with modinfo:

$ modinfo bonding

What’s more, we can specify the bonding mode at this stage, rather than later when creating a bonding interface:

$ sudo modprobe bonding mode=balance-rr miimon=100

Here, the mode parameter sets the bonding mode to balance-rr (mode 0). The second parameter, miimon, specifies the MII (Media Independent Interface) link monitoring frequency: the bonding driver checks the status of each slave interface every 100 milliseconds. If an interface goes down, traffic fails over to the remaining interfaces.
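To make these module options survive a reboot, they can go into a modprobe.d options file. As a sketch, the snippet below writes the file into a temporary directory for illustration; on a real system, the target would be /etc/modprobe.d/bonding.conf, written with sudo:

```shell
#!/usr/bin/env bash
# Sketch: persist the bonding module options in a modprobe.d file.
# We write to a temporary directory here for illustration only;
# the real target would be /etc/modprobe.d/bonding.conf.
conf_dir=$(mktemp -d)

cat > "$conf_dir/bonding.conf" <<'EOF'
options bonding mode=balance-rr miimon=100
EOF

# Show the resulting file.
cat "$conf_dir/bonding.conf"
```

With such a file in place under /etc/modprobe.d/, the kernel applies these options every time the bonding module loads.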

4. Using IPRoute2

There are many tools for bonding multiple network interfaces, but it’s always wise to stay current. For example, the ifenslave command could do the job. However, the tool has been deprecated, so it’s best to avoid it in new setups.

Alternatively, we can rely on the commands available under the IPRoute2 package to manage and aggregate network interfaces.

4.1. Creating a Bonding Interface

First, let’s use the ip link command to create a bonding interface:

$ sudo ip link add bond0 type bond

In this step, we add a device called bond0 and set its type to bond (a virtual bonding device).

If we didn’t specify the bonding mode while loading the bonding module, we can do it here instead. Let’s append the mode option to the command and choose the balance-rr mode:

$ sudo ip link add bond0 type bond mode balance-rr

Thus, we have bond0 ready for use.

4.2. Verifying Interfaces and Relations

Next, we list the available network interfaces on the system:

$ ip link show
...
2: enp0s3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether de:c6:c0:d5:d9:42 brd ff:ff:ff:ff:ff:ff permaddr 08:00:27:11:9e:6a

3: wlx00c0cad5d942: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc noqueue master bond0 state UP mode DORMANT group default qlen 1000
    link/ether de:c6:c0:d5:d9:42 brd ff:ff:ff:ff:ff:ff permaddr 00:c0:ca:97:04:42
...
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether de:c6:c0:d5:d9:42 brd ff:ff:ff:ff:ff:ff

The show option above displays all network interfaces and their attributes. Once we’re sure the new bond0 interface is on the list, we can turn it into a proper master interface by enslaving two or more working interfaces to it: in this case, wlx00c0cad5d942 and enp0s3.

Before we can configure these network interfaces, we should first bring them down. So, let’s use the set subcommand to select the interface and then specify the down action to disable it:

$ sudo ip link set enp0s3 down
$ sudo ip link set wlx00c0cad5d942 down

Now we can continue setting up bond0 as the master interface of enp0s3 and wlx00c0cad5d942:

$ sudo ip link set enp0s3 master bond0
$ sudo ip link set wlx00c0cad5d942 master bond0

Finally, let’s use the up action to bring all the interfaces back up:

$ sudo ip link set enp0s3 up
$ sudo ip link set wlx00c0cad5d942 up
$ sudo ip link set bond0 up

As a result, bond0 manages the network traffic for wlx00c0cad5d942 and enp0s3. This setup distributes network traffic across these slave interfaces based on the balance-rr bonding policy.

It’s worth noting that these changes are only temporary and are lost on reboot.
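Since these settings are lost on reboot, the whole sequence can be bundled into a small script and re-run when needed. The sketch below (bond_up and the DRY_RUN switch are our own inventions) prints the commands by default instead of executing them, so the sequence can be reviewed before running it with root privileges:

```shell
#!/usr/bin/env bash
# Sketch: wrap the ip link sequence in a reusable function.
# With DRY_RUN=1 (the default here) the commands are only printed;
# set DRY_RUN=0 to actually run them via sudo.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = 1 ]; then
    echo "$*"
  else
    sudo "$@"
  fi
}

bond_up() {
  local bond=$1; shift
  run ip link add "$bond" type bond mode balance-rr
  local ifc
  for ifc in "$@"; do
    run ip link set "$ifc" down
    run ip link set "$ifc" master "$bond"
    run ip link set "$ifc" up
  done
  run ip link set "$bond" up
}

# Interface names are the ones used throughout this tutorial;
# adjust them to the local system.
bond_up bond0 enp0s3 wlx00c0cad5d942
```

Running the script with DRY_RUN=1 lists every command in order; once the output looks right, DRY_RUN=0 applies the same sequence for real.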

5. Using nmcli

Most Linux distros use NetworkManager to manage network devices. We can control the manager via various configuration files or tools, but the most common way is with nmcli. Its user-friendliness and wide adoption make it a wise choice for setting up a bonding interface.

Unlike the ip command, changes made with nmcli are persistent.

5.1. Creating a Bonding Interface

Let’s create a new connection for the bonding interface:

$ nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=balance-rr,miimon=100"

In this command, we add a new connection of type bond and name it bond0. We also set the interface name to bond0. Furthermore, we assign the previous bonding mode and MII monitoring interval via bond.options.

We can list the available connections to verify that bond0 has been added and choose which interfaces to enslave:

$ nmcli connection

At this point, we should see the respective interfaces.

Next, let’s create two new connections for enp0s3 and wlx00c0cad5d942, assigning bond0 as their master interface:

$ nmcli connection add type wifi ssid bouhannana-AP con-name slave-wlx00c0cad5d942 ifname wlx00c0cad5d942 master bond0
$ nmcli connection add type ethernet con-name slave-enp0s3 ifname enp0s3 master bond0

In the first command, type wifi indicates that the slave-wlx00c0cad5d942 connection is a Wi-Fi connection, with the SSID set to bouhannana-AP. In the second command, type ethernet specifies that the slave-enp0s3 connection is an Ethernet connection.

To apply the changes, let’s restart NetworkManager or simply toggle networking off and back on:

$ nmcli networking off
$ nmcli networking on

The last step is to activate the bond0 connection:

$ nmcli connection up bond0

Now, bond0 should be up.

5.2. Verifying Interfaces and Relations

Finally, let’s check if bond0 is working correctly by displaying /proc/net/bonding/bond0 with cat:

$ sudo cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v6.5.0-35-generic

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

Slave Interface: enp0s3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:11:9e:6a
Slave queue ID: 0

Slave Interface: wlx00c0cad5d942
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 00:c0:ca:97:04:42
Slave queue ID: 0

This output provides detailed information about bond0, including the bonding mode, MII status, and the state of each slave interface. We can also see that the sections reflect exactly what we configured.
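To pull just the essentials out of this verbose output, a short awk filter can list each slave alongside its MII status. The bond_summary helper below is our own sketch; here we feed it a sample of the output above, but on a live system we’d pipe in /proc/net/bonding/bond0 directly:

```shell
#!/usr/bin/env bash
# Sketch: summarize slave health from /proc/net/bonding/bond0 output.
# bond_summary reads the bonding status format from stdin and prints
# one "interface: status" line per slave.
bond_summary() {
  awk '/^Slave Interface:/ { name = $3 }
       /^MII Status:/ && name { print name ": " $3; name = "" }'
}

# Feed a sample of the status file; on a real system, use:
#   sudo cat /proc/net/bonding/bond0 | bond_summary
bond_summary <<'EOF'
Bonding Mode: load balancing (round-robin)
MII Status: up

Slave Interface: enp0s3
MII Status: up

Slave Interface: wlx00c0cad5d942
MII Status: up
EOF
# → enp0s3: up
#   wlx00c0cad5d942: up
```

The filter skips the bond’s own global MII Status line because no slave name has been seen yet at that point.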

6. Using Network Configuration Files

We already discussed that the ip and nmcli commands offer a modern and efficient way to manage and bond network interfaces. However, configuring interfaces via network configuration files can simplify the process for complex setups. It also ensures persistence across reboots.

By default, NetworkManager stores network configurations using keyfile format in /etc/NetworkManager/system-connections/.

For example, let’s check the network profiles that we’ve created before with nmcli:

$ sudo ls /etc/NetworkManager/system-connections/
bond0.nmconnection  slave-enp0s3.nmconnection  slave-wlx00c0cad5d942.nmconnection

NetworkManager can use other classic plugins like ifupdown or ifcfg-rh, which many users still rely on. However, the choice of plugin depends on the Linux distribution. For instance, Debian-based distributions traditionally use ifupdown, while Red Hat-based distributions use ifcfg-rh.

6.1. In Debian Distributions Like Ubuntu

The ifupdown plugin reads network configurations written in the /etc/network/interfaces file. To ensure this method of interface management is enabled, we must configure it in /etc/NetworkManager/NetworkManager.conf:

$ cat /etc/NetworkManager/NetworkManager.conf
...
[main]
plugins=ifupdown,keyfile

[ifupdown]
managed=true
...

Here, the plugins=ifupdown,keyfile directive instructs NetworkManager to look for ifupdown profiles first, then keyfile profiles.

Next, let’s edit /etc/network/interfaces and add a stanza to define the master interface bond0:

$ cat /etc/network/interfaces
...
# Master interface bond0 
auto bond0
iface bond0 inet static
    address 192.168.1.100/24
    gateway 192.168.1.1
    bond-slaves enp0s3 wlx00c0cad5d942
    bond-mode balance-rr
    bond-miimon 100
...

This part defines the bonding interface bond0 with a static IP configuration. Specifically, it assigns an IP address of 192.168.1.100, a subnet mask of 255.255.255.0, and a gateway of 192.168.1.1. As before, it also specifies enp0s3 and wlx00c0cad5d942 as slave interfaces. Additionally, the configuration uses the balance-rr mode and sets the MII monitoring interval to 100 milliseconds. Lastly, the auto keyword ensures bond0 is automatically brought up during system boot.

Now, let’s configure wlx00c0cad5d942 and enp0s3 to recognize bond0 as their master interface with these stanzas:

# Slave interface enp0s3
auto enp0s3
iface enp0s3 inet manual
    bond-master bond0

# Slave interface wlx00c0cad5d942
auto wlx00c0cad5d942
iface wlx00c0cad5d942 inet manual
    bond-master bond0

Here, we use inet manual to indicate that these interfaces shouldn’t be automatically configured with a static IP address or via DHCP. In other words, they won’t receive IP addresses separate from bond0, as they are part of it now.
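Since the two slave stanzas differ only in the interface name, a small loop can generate them. For illustration, the sketch below appends to a temporary file rather than to /etc/network/interfaces itself:

```shell
#!/usr/bin/env bash
# Sketch: generate identical slave stanzas for each interface.
# We write to a temporary file for illustration; on a real system,
# the target would be /etc/network/interfaces (edited as root).
ifaces_file=$(mktemp)

for ifc in enp0s3 wlx00c0cad5d942; do
  cat >> "$ifaces_file" <<EOF

# Slave interface $ifc
auto $ifc
iface $ifc inet manual
    bond-master bond0
EOF
done

# Show the generated stanzas.
cat "$ifaces_file"
```

The same loop scales to any number of slave interfaces by extending the list after `for ifc in`.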

6.2. In RedHat Distributions Like Fedora and CentOS

Just like before, we should make sure that /etc/NetworkManager/NetworkManager.conf recognizes the ifcfg-rh plugin:

$ cat /etc/NetworkManager/NetworkManager.conf
...
[main]
plugins=ifcfg-rh,keyfile
...

Unlike ifupdown, the ifcfg-rh plugin stores each network profile in a separate ifcfg file within the /etc/sysconfig/network-scripts directory. Additionally, each filename should always start with ifcfg-.

For instance, let’s create a file named ifcfg-bond0 in /etc/sysconfig/network-scripts to define bond0:

$ cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
NAME=bond0
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.100
PREFIX=24
GATEWAY=192.168.1.1
BONDING_OPTS="mode=0 miimon=100"

Most of these parameters are self-explanatory and similar to the previous network configuration.

Using the same naming syntax, let’s create two separate ifcfg files for enp0s3 and wlx00c0cad5d942:

$ cat /etc/sysconfig/network-scripts/ifcfg-enp0s3
DEVICE=enp0s3
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
$
$ cat /etc/sysconfig/network-scripts/ifcfg-wlx00c0cad5d942
DEVICE=wlx00c0cad5d942
TYPE=Wireless
WIRELESS_ESSID=bouhannana-AP
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes

Here, the difference between these two ifcfg files is the TYPE of the connection. We specify that enp0s3 is an Ethernet interface, while wlx00c0cad5d942 is wireless with an ESSID.
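Because the slave ifcfg files share most of their parameters, we can sketch a small helper that generates them. For illustration, it writes into a temporary directory instead of /etc/sysconfig/network-scripts, and write_slave is our own hypothetical function:

```shell
#!/usr/bin/env bash
# Sketch: generate the slave ifcfg files from a shared template.
# Files land in a temporary directory for illustration; the real
# target would be /etc/sysconfig/network-scripts (written as root).
scripts_dir=$(mktemp -d)

write_slave() {
  local dev=$1 type=$2 extra=$3
  {
    echo "DEVICE=$dev"
    echo "TYPE=$type"
    if [ -n "$extra" ]; then
      echo "$extra"
    fi
    echo "BOOTPROTO=none"
    echo "ONBOOT=yes"
    echo "MASTER=bond0"
    echo "SLAVE=yes"
  } > "$scripts_dir/ifcfg-$dev"
}

# Only TYPE (and the wireless ESSID) differ between the two slaves.
write_slave enp0s3 Ethernet ""
write_slave wlx00c0cad5d942 Wireless "WIRELESS_ESSID=bouhannana-AP"

ls "$scripts_dir"
```

This keeps the shared parameters in one place, so changing the master interface name, for example, requires editing a single function.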

Now, all we have to do is restart NetworkManager so that it picks up these new connection profiles.

7. Conclusion

In this article, we covered network bonding, what to expect from it, and the available bonding modes. Additionally, we explained the most common methods to merge multiple interfaces into one, using commands and configuration files in multiple environments.

In conclusion, these network bonding methods are different ways to communicate our network configuration to the kernel. Consequently, managing the network through multiple overlapping configurations may lead to conflicts and unexpected faults. Therefore, it’s wise to stick with one modern, consistent method to ensure stability.
