Linux TCP/IP Connections Limit
Last updated: March 18, 2024
1. Overview
Administrators managing high-load Linux servers often need to handle many concurrent TCP/IP connections efficiently.
In this tutorial, we’ll explore the various limits that affect TCP/IP connections in Linux. We’ll also discuss how they function and how to adjust these settings to optimize server performance for demanding workloads.
2. Understanding Limits and Resource Management
Like other general-purpose operating systems, Linux manages resources using a best-effort approach. As long as resources (such as memory or CPU) are available, applications can use them. However, once resources become scarce, it can impact system responsiveness and stability.
To ensure system integrity and prevent denial-of-service (DoS) attacks, Linux imposes default resource usage policies. These policies are generally suitable for most cases, but administrators may need to fine-tune them based on specific server requirements. Let’s dive into some critical limits and how they affect TCP/IP connections.
3. Use of File Descriptors to Manage TCP/IP
In Linux, inter-process communication (IPC) is fundamental for data exchange between processes, both locally and over a network. The Sockets API, a standard IPC mechanism, uses file descriptors to manage TCP/IP connections. Each active TCP or UDP socket uses a file descriptor, making it essential to manage these limits effectively.
3.1. Kernel-Level File Descriptor Limits
The Linux kernel sets a system-wide file descriptor limit. We can view this limit with:
$ cat /proc/sys/fs/file-max
2147483647
On many distributions, this default value is already very high.
We can use sysctl to adjust this limit temporarily:
$ sudo sysctl -w fs.file-max=65536
Or, to make such a change permanent, we can add the following line to /etc/sysctl.conf:
fs.file-max=65536 # Sets the maximum number of open files to 65,536
We can check the current usage and allocation of file descriptors using:
$ cat /proc/sys/fs/file-nr
1952 0 2147483647
Here, the first number represents the current usage, the second (always 0 since kernel 2.6) counts allocated but unused descriptors, and the third shows the maximum limit.
Additionally, the fs.nr_open kernel parameter controls the maximum number of file descriptors per process, with a default value of 1,048,576.
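To illustrate, we can read this ceiling directly from procfs. Raising it is shown only as a comment, since writing sysctl values requires root:

```shell
# Per-process file descriptor ceiling (default 1,048,576)
cat /proc/sys/fs/nr_open

# Raising it requires root privileges, e.g.:
# sysctl -w fs.nr_open=2097152
```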
3.2. User-Level File Descriptor Limits
While the kernel-level limits are high, each user session has stricter default limits. For example, the default shell limit is typically 1,024 open files. This might be insufficient for server applications, as they may require thousands of open connections.
We can modify these limits using the ulimit command or by editing /etc/security/limits.conf. To increase the limit for an Oracle process to 8,192, we may add:
oracle hard nofile 8192
The hard keyword sets a ceiling that non-root users cannot raise (they may only lower their soft limit up to it), ensuring greater control and stability.
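As a quick sanity check before editing /etc/security/limits.conf, we can query both the soft and hard limits in effect for the current shell:

```shell
# Soft limit: the value currently enforced for this shell (often 1024)
ulimit -Sn

# Hard limit: the ceiling up to which a non-root user may raise the soft limit
ulimit -Hn
```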
4. TCP/IP Reliance on Processes and Threads
TCP connections often rely on processes or threads to manage network traffic. Therefore, understanding the limits on processes and threads is essential.
For processes, the limiting parameters are:
- Kernel-Level: The limit at the kernel level is controlled by kernel.pid_max (default 32,768). This parameter caps the number of process identifiers, and therefore the number of processes, system-wide.
- User-Level: It is managed via ulimit -u or the nproc option in /etc/security/limits.conf.
And, the limiting parameters for threads are:
- Kernel-Level: The limit at the kernel level is controlled by kernel.threads-max. This sets the maximum number of threads the system can create; the kernel computes its default at boot so that thread structures can consume at most roughly 1/8th of system RAM.
- User-Level: The thread limit depends on the total virtual memory and stack size (ulimit -s). We may adjust the stack item in /etc/security/limits.conf to change it.
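We can inspect all of these limits without changing anything; the per-user values reflect the current shell session:

```shell
# System-wide maximum number of PIDs (shared by processes and threads)
cat /proc/sys/kernel/pid_max

# System-wide maximum number of threads
cat /proc/sys/kernel/threads-max

# Per-user process limit and default stack size (KiB) for the current shell
ulimit -u
ulimit -s
```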
5. Network Stack Kernel Parameters Managing TCP/IP
In addition to file descriptors, the Linux network stack involves a variety of kernel parameters that significantly influence how TCP/IP connections are managed and handled. Specifically, these parameters control how TCP connections are initiated, maintained, and terminated. They play a crucial role in defining the overall performance and stability of a networked system, especially under heavy loads.
5.1. Netfilter Parameters
Netfilter is a framework built into the Linux kernel that provides packet filtering, network address translation (NAT), and connection tracking. It helps manage active TCP connections and their state.
- net.netfilter.nf_conntrack_max: This parameter sets the maximum number of connections that the system’s connection tracking table can hold at any time.
- nf_conntrack_tcp_timeout_*: These parameters control the timeout values for different TCP connection states like SYN_SENT, TIME_WAIT, and CLOSE_WAIT. For example, a higher timeout in the TIME_WAIT state means that closed connections remain tracked longer, potentially reducing the number of available slots in the connection table for new connections.
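These entries exist only when the nf_conntrack module is loaded (for example, on systems using firewall rules that track state), so the sketch below guards each read and falls back to a message otherwise:

```shell
# Maximum size of the connection tracking table (if conntrack is loaded)
sysctl net.netfilter.nf_conntrack_max 2>/dev/null \
  || echo "nf_conntrack not loaded"

# Number of connections currently tracked
cat /proc/sys/net/netfilter/nf_conntrack_count 2>/dev/null \
  || echo "nf_conntrack not loaded"
```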
5.2. TCP Stack Parameters
The TCP stack manages the way TCP/IP connections are handled by the kernel, which can affect performance, especially for servers handling thousands or millions of connections:
- net.core.netdev_max_backlog: This parameter defines the maximum number of incoming packets that can be queued on the network interface before they are processed by the kernel. If the backlog is too small, packets may be dropped under high-traffic conditions.
- net.ipv4.ip_local_port_range: Specifies the range of ephemeral (temporary) ports that the system can use for outgoing connections. By default, it might be set to a smaller range (say, 32,768 to 61,000).
- net.core.somaxconn: Sets the maximum length of the accept queue, that is, fully established connections waiting to be accepted by an application
- net.ipv4.tcp_fin_timeout: Determines how long a connection remains in the FIN_WAIT2 state before it is closed
- net.ipv4.tcp_tw_reuse: When enabled (1), this allows sockets in the TIME_WAIT state to be reused for new outgoing connections when it is safe from a protocol standpoint.
- net.ipv4.tcp_max_orphans: Defines the maximum number of TCP sockets not attached to any user process, that is, orphaned sockets
- net.ipv4.tcp_max_syn_backlog: Controls the maximum number of half-open connections, that is, incoming connection requests that have not yet completed the TCP three-way handshake
These network stack kernel parameters have default values suitable for general-purpose systems. However, servers with high traffic or special requirements may need tuning to ensure optimal handling of TCP/IP connections. Modifying these parameters allows administrators to manage connection limits better, reduce dropped connections, and enhance overall server performance under heavy loads. We may use the sysctl command to adjust these settings, and make them persistent by adding entries to /etc/sysctl.conf.
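As a sketch, a tuned /etc/sysctl.conf might contain entries like the following; the values are illustrative starting points, not recommendations, and should be adjusted to the actual workload:

```
# /etc/sysctl.conf -- illustrative values only; tune for your workload
net.core.somaxconn = 4096
net.core.netdev_max_backlog = 5000
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_syn_backlog = 8192
```

After editing the file, running sysctl -p (as root) applies the settings without a reboot.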
6. IP Tables Limiting TCP/IP
IPTables is a powerful firewall utility in Linux that allows administrators to manage network traffic by defining rules that control incoming and outgoing packets. Besides its core function of packet filtering, IPTables can also limit the number of simultaneous TCP/IP connections. This in turn provides an additional layer of security for specific services, IP addresses, or ports.
6.1. Using IPTables for Connection Limits
By setting a connection limit, we can reduce the risk of abuse and ensure that our server's resources aren't overwhelmed by too many connections from a single source. One practical example is restricting the number of concurrent SSH connections from a single IP address:
$ /sbin/iptables -A INPUT -p tcp --syn --dport 22 -m connlimit --connlimit-above 3 -j REJECT
Here’s a breakdown of this command:
- -A INPUT: This option appends the rule to the INPUT chain, meaning it applies to incoming traffic.
- -p tcp: Specifies that this rule is for TCP packets only
- --syn: This flag matches the initial SYN packets, which indicate the start of a TCP connection. It ensures the rule only counts new connections, not existing ones.
- --dport 22: Defines the destination port, which in this case is 22 (the default SSH port)
- -m connlimit: Activates the connection limit (connlimit) module, allowing us to set a limit on concurrent connections
- --connlimit-above 3: Specifies that the rule should match if there are more than three simultaneous connections from the same IP address
- -j REJECT: Instructs IPTables to reject any connection attempts that exceed the specified limit
This is especially useful for preventing brute-force attacks, where malicious users try to guess the login credentials by rapidly attempting multiple SSH connections.
6.2. Why is This Important?
By enforcing such limits, we effectively reduce the potential impact of unauthorized access attempts or denial-of-service (DoS) attacks targeting critical services like SSH. This not only protects against brute-force attacks but also prevents a single IP address from monopolizing the server’s resources.
We can implement similar rules for other services or protocols, such as limiting the number of concurrent HTTP connections to a web server. This ensures the safeguarding of our infrastructure from potential abuse or resource exhaustion.
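As a sketch, the SSH rule above adapts directly to HTTP; the port (80) and the cap of 20 connections here are illustrative values. Since modifying firewall rules requires root, the snippet only assembles and prints the rule rather than applying it:

```shell
# Hypothetical rule capping concurrent HTTP connections per source IP at 20;
# printed rather than applied, since changing iptables rules requires root
rule="-A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 20 -j REJECT"
echo "iptables $rule"
```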
7. Conclusion
In this article, we discussed how tuning Linux TCP/IP connection limits is crucial for optimizing server performance, particularly for high-load systems. We also learned that by adjusting file descriptor limits, process/thread limits, network stack parameters, and IPTables rules, we can significantly increase the number of concurrent connections our server can handle.
While most systems work well with default values, this article showed how raising these limits may be necessary for handling high traffic or improving resilience against DoS attacks. We should always monitor system performance and adjust these parameters to fit the workload requirements.