1. Introduction

Administrators of high-load systems may wonder whether there is a Linux TCP/IP connections limit on their servers. In this tutorial, we’ll discuss the limits on the number of concurrent TCP connections, how they work, and how to tune our servers for specific workloads.

2. Limits and Resource Management

Linux, like all general-purpose operating systems, works with a best-effort approach. That means that, as a rule, as long as there are available resources, applications may request them. On the other hand, if some resource pool gets depleted, this can affect the whole system’s health and responsiveness.

So, even though, in theory, Linux should not restrict resource usage below the hardware limits, in practice it must. Nowadays, it is required to do so: many denial-of-service attacks work by trying to deplete the target’s resources. To avoid major impact, any modern operating system will have resource usage policies in place by default. The administrator may need to tune these limit policies to their use cases; however, the defaults are usually suitable for general use.

There are a lot of security controls in place to ensure system stability and responsiveness. Let’s see some of the limits we may run into.

3. File Descriptors

In Linux, as in other POSIX-based operating systems, processes talk to each other through Inter-Process Communication, or IPC. One of the beauties of this concept is that it applies both to communication between processes on a single host and across a network of computers. This means that both scenarios share a common basis for their underlying APIs.

So, if we have two programs on the same host talking to each other using the Sockets API (the de facto standard for data streaming in POSIX), converting them to run on different servers should require minimal changes. Hence, the kernel exposes the communication endpoints in similar forms.

Notably, the Sockets IPC API used in Linux TCP/IP connections uses file descriptors. Therefore, the number of opened file descriptors is one of the first limits we may face. By the way, this applies to both TCP and UDP sockets.
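We can see this for ourselves: replacing <PID> with the process ID of any network daemon, listing its open file descriptors under /proc shows its sockets, displayed as socket:[inode] links, side by side with regular files:

# ls -l /proc/<PID>/fd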

3.1. Kernel-Level File Descriptors Limits

The kernel has a system-wide limit on the number of open files. We can check it with:

# cat /proc/sys/fs/file-max
2147483647

This huge number is the default in many distributions. To change this limit, we can set it on the fly using the sysctl command:

# sysctl fs.file-max=65536

To make it persistent, we can add an entry to the /etc/sysctl.conf file, where persistent tuning settings belong:

# Limit the number of open file handles to 65536
fs.file-max=65536

Whenever this limit is reached, attempts to open new files fail with a “Too many open files in system” error. We can see the current usage like this:

# cat /proc/sys/fs/file-nr
1952    0       2147483647

The first number is the current number of allocated file handles, the second is the number of allocated but unused handles (always zero, since the kernel releases unused handles), and the third is the maximum (the same as fs.file-max).

Along with the system-wide limit, the Linux kernel imposes a file-descriptor limit on each process. This is controlled using the fs.nr_open kernel tunable. The default is 1048576, which is, again, quite high.
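We can inspect this per-process ceiling, and raise it if a workload really needs it, in the same way as before; the new value below is only an example:

# sysctl fs.nr_open
fs.nr_open = 1048576
# sysctl fs.nr_open=2097152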

3.2. User-Level Descriptors Limits

Now we begin to wonder: if the kernel limits are that high, why do we need to bother? As we can imagine, a 1 GB RAM server probably will not run smoothly with more than 2 billion open files! So these limits can be tuned down as a safeguard measure.

Well, the fact is that the effective limits are imposed at the user level, by the shell. Each shell session starts with a much stricter, more reasonable limit: by default, 1024 open files.

That limit is more than okay for regular users. However, for server applications, it is most likely too low. Large database servers can have thousands of data files and open connections, for instance.
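We can check what the current shell session allows with the ulimit builtin; the -n option prints the open-files limit:

$ ulimit -n
1024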

These limits can be controlled using the ulimit command and persisted by editing the /etc/security/limits.conf file. For instance, to change the limit for the oracle user’s processes to 8192, we would add this line to the file:

#<domain>      <type>  <item>         <value>
oracle          hard    nofile         8192

The keyword hard means that unprivileged users cannot raise the limit beyond that value. A soft limit, on the other hand, defines the value enforced by default, and a non-root user can use the ulimit command to raise it for specific use cases, up to the hard limit.
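For example, assuming a hard limit of 8192 like the one above and the usual default soft limit of 1024, the user could raise the soft limit for the current session only, without any special privileges (raising it beyond the hard limit would fail):

$ ulimit -Sn
1024
$ ulimit -Sn 8192
$ ulimit -Sn
8192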

4. Processes and Threads

Again, there are both kernel and user-space limits on the number of processes and threads. In server applications, we usually assign connections to worker processes or threads, so their limits can restrict the number of connections they can handle. We also have a tutorial on these limits.

For processes, the limiting parameters are:

  • kernel: kernel.pid_max. Defaults to 32768 (so the highest PID is 32767) and controls the system-wide size of the process table
  • user-level: ulimit -u, or the nproc item in limits.conf. Maximum number of user processes
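We can quickly query both levels; the ulimit output below is only illustrative, as it depends on the distribution and on the amount of RAM:

# sysctl kernel.pid_max
kernel.pid_max = 32768
# ulimit -u
15632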

And, for threads:

  • kernel: kernel.threads-max. The system-wide maximum number of threads the fork system call can create. Its default is chosen at boot time so that the thread structures can take at most 1/8th of the system’s RAM
  • user-level: roughly total virtual memory / (stack size in MB * 1024 * 1024), since each thread needs its own stack. The stack size is controlled using ulimit -s or the stack item in limits.conf
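Similarly, for the thread-related values (again, the numbers are only examples; ulimit -s reports the stack size in KB):

# sysctl kernel.threads-max
kernel.threads-max = 30574
# ulimit -s
8192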

5. Network Stack Kernel Parameters

If that were not enough, there are kernel parameters that can indirectly affect the number of TCP connections. TCP has quite a complex finite-state machine: the kernel must track each connection’s state, its timings, and its transitions. To do that, it uses data structures that have bounds. So, even though they may not relate directly to the Linux TCP/IP connections limit, they may affect it.

Along with the TCP control structures, Netfilter, if we use it, has its own tunables. We’ll review some of the more common tunable settings that can influence the limit on TCP connections.

For Netfilter:

  • net.netfilter.nf_conntrack_max: maximum number of connections to track
  • net.netfilter.nf_conntrack_tcp_timeout_*: the timeout for each TCP connection tracking state (SYN sent or received, CLOSE_WAIT, TIME_WAIT, and so on)
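If connection tracking is in use, we can compare the current number of tracked connections against the maximum with sysctl; the values below are merely illustrative:

# sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
net.netfilter.nf_conntrack_count = 1523
net.netfilter.nf_conntrack_max = 262144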

For the TCP stack:

  • net.core.netdev_max_backlog: maximum number of packets queued on the input side when the interface receives packets faster than the kernel can process them
  • net.ipv4.ip_local_port_range: ephemeral port range (ports allocated dynamically on the client side of TCP connections)
  • net.core.somaxconn: limit of the socket listening backlog, i.e., connections waiting for the server application to accept them
  • net.ipv4.tcp_fin_timeout: the time an orphaned connection stays in the FIN_WAIT_2 state before it is aborted
  • net.ipv4.tcp_tw_reuse: allows the reuse of TIME_WAIT sockets for new connections, which saves resources when connections are created and destroyed at high rates
  • net.ipv4.tcp_max_orphans: maximum number of TCP sockets not attached to a file handle
  • net.ipv4.tcp_max_syn_backlog: maximum number of remembered connection requests (in the SYN_RECV state) that have not yet received an acknowledgment from the connecting client
  • net.ipv4.tcp_max_tw_buckets: maximum number of TIME_WAIT sockets held by the system simultaneously

The defaults for these parameters are good for a lot of applications. As with other kernel parameters, we can set them with the sysctl command and persist the changes in the /etc/sysctl.conf file.
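As a sketch, a server handling a high connection rate might raise a few of them at once. The exact values depend heavily on the workload and on the available memory, so the ones below are placeholders only:

# sysctl net.core.somaxconn=4096
# sysctl net.ipv4.tcp_max_syn_backlog=8192
# sysctl net.ipv4.ip_local_port_range="1024 65000"

To keep these across reboots, we would add the same entries to /etc/sysctl.conf, just as we did for fs.file-max.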

6. IP Tables Limits

Last but not least, we may also use iptables to impose connection limits. We can set limits based on source addresses, destination ports, and a lot of other options. This uses the connlimit iptables module. For example, to limit SSH connections to 3 per source host, we can use:

# /sbin/iptables -A INPUT -p tcp --syn --dport 22 -m connlimit --connlimit-above 3 -j REJECT
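The same module can also group clients by network prefix. For instance, this illustrative rule caps the total number of HTTP connections from each /24 subnet at 20:

# /sbin/iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 20 --connlimit-mask 24 -j REJECT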

7. Conclusion

In this tutorial, we explored a lot of variables that control the Linux TCP/IP connections limit, along with others that can influence it. Most of the time, we don’t need to bother changing them. However, there are occasions when we must.

If we have high traffic or a huge number of connections, chances are that we’ll need to increase the system’s ulimit defaults. On the other hand, if the system is a likely target of denial-of-service attacks, some parameters can be tuned to give it even better resilience.
