1. Introduction

On Linux, processes can use many resources during their lifetime. The kernel keeps track of the current user’s limits for most of these resources. This lets the operating system moderate load by checking each process’s usage against those limits.

One of the many resources we can limit is the number of open files. In this tutorial, we’re going to discuss how to determine and change the limits on the number of open file descriptors.

2. File Structures

File descriptors, or file handles, are integer identifiers that refer to kernel data structures. The Linux kernel calls these structures file structs, since they describe open files. A file descriptor is an index into the process’s table of file structs.

In Linux, each process has its own set of open files. Hence, each process also has its own table of file structs that the kernel keeps track of.
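
We can actually observe a process’s descriptor table through the /proc filesystem, where each entry under /proc/&lt;pid&gt;/fd is a symbolic link for one open descriptor. As a quick sketch, let’s list the descriptors of the current shell ($$ expands to the shell’s PID):

```shell
# Each entry under /proc/$$/fd is one open file descriptor of the shell;
# 0, 1, and 2 are stdin, stdout, and stderr
ls /proc/$$/fd
```

On an interactive Bash shell, this usually shows at least descriptors 0, 1, and 2, plus a descriptor that Bash keeps open for its controlling terminal.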

Notably, each file struct is allocated in kernel memory. As such, the kernel can only allocate so many file structs before the system runs out of memory.

The kernel exports the maximum number of file structs that can be allocated in the /proc pseudo-filesystem under /proc/sys/fs/file-max:

$ cat /proc/sys/fs/file-max
9484

Examining this file, we see that our kernel can currently allocate a maximum of 9484 file structs. Let’s explore how we can modify this number.
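
One way to do that is through the sysctl interface. As a sketch, assuming the sysctl tool (from procps) is installed, and using 100000 purely as an example value:

```shell
# Read the system-wide maximum number of file structs
cat /proc/sys/fs/file-max

# With the sysctl tool, the same value appears as fs.file-max:
#   sysctl fs.file-max
# Raising it requires root and lasts until reboot (example value):
#   sudo sysctl -w fs.file-max=100000
```

To make such a change survive reboots, the setting would typically go into /etc/sysctl.conf or a file under /etc/sysctl.d.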

3. ulimit and Soft Limits

We can use the ulimit command to set or get the resource limits of the current user:

$ ulimit
unlimited

We see unlimited in the output when we run ulimit without any arguments. In this case, ulimit reports the soft limit for its default resource, the maximum size of files we can write (the same value ulimit -f shows), and unlimited means there is no restriction on it.

Additionally, we can use the -a option to view our limits for each resource:

$ ulimit -a
core file size (blocks)         (-c) 0
data seg size (kb)              (-d) unlimited
scheduling priority             (-e) 0
file size (blocks)              (-f) unlimited
pending signals                 (-i) 378
max locked memory (kb)          (-l) 64
max memory size (kb)            (-m) unlimited
open files                      (-n) 1024
POSIX message queues (bytes)    (-q) 819200
real-time priority              (-r) 0
stack size (kb)                 (-s) 8192
cpu time (seconds)              (-t) unlimited
max user processes              (-u) 378
virtual memory (kb)             (-v) unlimited
file locks                      (-x) unlimited

This output shows us the limits on each of the listed system resources, along with the option needed to specify that specific resource to the ulimit command.

Actually, these are the current user’s soft limits. A soft limit restricts us, but we can raise it further, up to the corresponding hard limit, by using the ulimit command.

The -n option shows the soft limit on the number of open file descriptors:

$ ulimit -n
1024

Here, we can see that our current soft limit on the number of open file descriptors is 1024.
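
It’s worth noting that a limit set with ulimit applies only to the current shell and the processes it starts. We can sketch this by lowering the soft limit inside a subshell, which leaves the parent shell untouched (assuming the current limits allow a value of 512):

```shell
# Lower the soft limit only inside a subshell
(ulimit -n 512; ulimit -n)   # the subshell reports 512
ulimit -n                    # the parent's limit is unchanged
```

This scoping is why tutorials often tell us to put ulimit calls in a shell startup file or a service definition: the change has to happen in the shell that launches the workload.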

4. Hard Limits

A hard limit is different from a soft limit in that an unprivileged user can lower it, but can’t increase it. The hard limit of a resource is the maximum value that a user can raise their soft limit to.

We can view the hard limit on the number of open file descriptors by adding the -H option to our last command:

$ ulimit -n -H
4096

Thus, we know that this system’s hard limit for the number of open file descriptors is 4096.
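
For symmetry, ulimit also accepts a -S option that explicitly selects the soft limit, so we can query both values side by side:

```shell
# -S selects the soft limit, -H the hard limit
ulimit -S -n   # soft limit on open files
ulimit -H -n   # hard limit on open files
```

Without -S or -H, ulimit reports the soft limit when querying, and setting a value changes both limits at once.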

5. Increasing Limits

Once we know the values above, we can use the ulimit command again to increase our soft limits, as long as they don’t exceed the hard limits.

In practice, we can insert the new soft limit value after the resource we want to change. Since we know the hard limit (4096) for open file descriptors (-n), we’ll try increasing the soft limit up to that:

$ ulimit -n 4096

Now, we can check the soft limit to verify that we have successfully increased it from its previous value of 1024:

$ ulimit -n
4096

Thus, we see that we’re able to increase our soft limit on the number of file descriptors up to our hard limit.

Importantly, attempting to increase the soft limit to a value higher than the hard limit simply results in an error:

$ ulimit -n 5000
sh: error setting limit: Operation not permitted

From this, we see that these limits are actually enforced by the kernel.
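
Notably, a change made with ulimit lasts only for the current session. On systems that use the pam_limits module, limits can be made persistent with entries in /etc/security/limits.conf; the lines below are only a sketch for a hypothetical user named alice:

```
# /etc/security/limits.conf (hypothetical entries)
# <domain>  <type>  <item>   <value>
alice       soft    nofile   4096
alice       hard    nofile   8192
```

The nofile item corresponds to the open files resource that ulimit -n reports, and the new limits take effect at the user’s next login.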

6. Limits in Action

Now that we’ve raised our soft limit to 4096, we can test whether it actually stops us from opening too many files.

First, let’s make a new directory to work in and use a loop to create 4098 files with touch. We start numbering at 3 because descriptors 0, 1, and 2 are already taken by stdin, stdout, and stderr:

$ mkdir temp
$ cd temp
$ for x in {3..4100}
do
  touch tempfile$x
done

Next, we check whether we have more than 4096 files in the current directory via ls and wc:

$ ls | wc -l
4098

Finally, let’s try to open all of these files via eval and stream redirection:

$ for x in {3..4100}
do
  echo $x
  eval 'exec '"$x"'< '"tempfile$x"
done
3
[...]
4098
bash: tempfile4098: Too many open files
4099
bash: tempfile4099: Too many open files
4100
bash: tempfile4100: Too many open files

Thus, we see that the kernel returns an error when we reach our soft limit on this resource. Note that some files are usually already open, so the error can appear even earlier.
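
Finally, descriptors opened with exec can be closed again with the n<&- redirection, which frees up slots under the limit. As a small sketch with a single descriptor, using /dev/null as an arbitrary file:

```shell
exec 3< /dev/null    # open file descriptor 3 for reading
ls -l /proc/$$/fd/3  # the descriptor shows up in the shell's fd table
exec 3<&-            # close descriptor 3 again
```

The same eval pattern we used for opening works for cleanup in a loop, e.g. eval 'exec '"$x"'<&-'.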

7. Conclusion

In conclusion, Linux manages limits on system resources by using per-process resource limits. They come in the form of soft limits and hard limits. We can use the ulimit command to view both the soft limits and hard limits. Additionally, we can use ulimit to change the soft limit of a given resource.
