1. Introduction

In Linux, checking and setting soft and hard resource limits is a way to constrain and organize system usage. However, the main tool for the job, ulimit, works proactively rather than reactively: it applies limits to the current shell and the processes it starts afterwards, offers little granularity, and can't target other already-running processes.

In this tutorial, we explore ways to change the limits of a running process as opposed to those for a user or the system. First, we explain how processes keep track of open files and introduce a way to programmatically open many files in a short time. Next, we look at the standard tool for limiting resource usage for a specific process. After that, we apply the tool to our example of limiting the maximum number of open files and verify the results. Finally, we look at a possible alternative.

We tested the code in this tutorial on Debian 11 (Bullseye) with GNU Bash 5.1.4. It should work in most POSIX-compliant environments.

2. File Handle Generator

File descriptors or handles help the system and its processes keep track of open files, as well as the permitted operations over them. In fact, Linux allows us to limit the number of file descriptors globally.
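
For instance, the kernel exposes the system-wide cap on the number of open file handles via the /proc/sys/fs/file-max tunable (the exact value varies between systems and kernel versions):

$ cat /proc/sys/fs/file-max
9223372036854775807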

In this article, we use a one-liner that gradually accumulates file handles:

$ { for h in {3..63}; do eval 'exec '$h'</dev/null'; sleep 1; done; } &

Above, we run a loop in the background that opens a new file descriptor every second for about a minute (an expanded version follows the list below):

  • eval ensures we interpolate the handle number from the loop counter $h
  • exec opens the file descriptor to /dev/null for reading via stream redirection

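For readability, here's the same generator written out over multiple lines; this is just an expanded sketch of the one-liner above:

for h in {3..63}; do
  # open file descriptor number $h for reading from /dev/null
  eval "exec $h</dev/null"
  # wait a second before opening the next descriptor
  sleep 1
done
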
To use the generator, we need two pieces of information: the process ID (PID), which Bash prints when we send the loop to the background, and the current number of open file handles for that PID, which we can count under /proc/PID/fd. By default, an interactive Bash shell has four open descriptors: three for the standard streams, plus descriptor 255, which Bash keeps open for the controlling terminal.
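
For example, listing the file descriptor directory of the current shell should show just these defaults (details vary by terminal and shell configuration):

$ ls /proc/$$/fd
0  1  2  255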

At this point, we are ready to test our script:

$ { for h in {3..63}; do eval 'exec '$h'</dev/null'; sleep 1; done; } &
[1] 666

Now, we can monitor the file count in /proc/666/fd to verify that it increases by one each second. For instance, we can do that with watch and a simple ls pipe to the wc command:

$ watch 'ls /proc/666/fd | wc --lines'

Now that we have a way to monitor the open file handles of a process, let's explore solutions for limiting their maximum number.

3. prlimit

As part of the util-linux package since version 2.21, the prlimit tool can get and set resource usage limits for specific processes.
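
To confirm the tool is available, we can check the installed version (the output below is from our Debian 11 test system):

$ prlimit --version
prlimit from util-linux 2.36.1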

Let’s run prlimit without any arguments:

$ prlimit
RESOURCE   DESCRIPTION                             SOFT      HARD UNITS
AS         address space limit                unlimited unlimited bytes
CORE       max core file size                         0 unlimited bytes
CPU        CPU time                           unlimited unlimited seconds
DATA       max data size                      unlimited unlimited bytes
FSIZE      max file size                      unlimited unlimited bytes
LOCKS      max number of file locks held      unlimited unlimited locks
MEMLOCK    max locked-in-memory address space     65536     65536 bytes
MSGQUEUE   max bytes in POSIX mqueues            819200    819200 bytes
NICE       max nice prio allowed to raise             0         0
NOFILE     max number of open files                1024     65536 files
NPROC      max number of processes                 7823      7823 processes
RSS        max resident set size              unlimited unlimited bytes
RTPRIO     max real-time priority                     0         0
RTTIME     timeout for real-time tasks        unlimited unlimited microsecs
SIGPENDING max number of pending signals           7823      7823 signals
STACK      max stack size                       8388608   8388608 bytes

Similar to ulimit, the output above shows the soft and hard resource limits of the calling process, inherited from our shell.
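
For comparison, the Bash ulimit builtin reports the same soft (-S) and hard (-H) values for the current shell, for instance, for the maximum number of open files (-n):

$ ulimit -Sn
1024
$ ulimit -Hn
65536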

Unlike ulimit, prlimit allows us to specify a PID in addition to the resource type:

$ prlimit --pid 1
RESOURCE   DESCRIPTION                             SOFT      HARD UNITS
AS         address space limit                unlimited unlimited bytes
CORE       max core file size                         0 unlimited bytes
[...]

Consequently, we can use prlimit to get and set the limit on a specific resource for a given process:

$ prlimit --pid $$ --nproc
RESOURCE DESCRIPTION             SOFT HARD UNITS
NPROC    max number of processes 7823 7823 processes
$ prlimit --pid $$ --nproc=unlimited:unlimited
$ prlimit --pid $$ --nproc
RESOURCE DESCRIPTION                  SOFT      HARD UNITS
NPROC    max number of processes unlimited unlimited processes

First, we get the current limit on the number of processes (-u or --nproc) in the current shell via its PID, $$. After that, we set both the soft and hard limits, separated by a colon (:), to unlimited. To change only one of them, we can omit the other value while keeping the colon on the appropriate side, as shown below.
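
For example, to update only the soft or only the hard value, we keep the colon (the numbers below are purely illustrative):

$ prlimit --pid $$ --nproc=4096:
$ prlimit --pid $$ --nproc=:unlimited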

4. Running Process Limits with prlimit

Combining prlimit with our file handle generator, we can see the process limits in action.

To demonstrate, we use the -n or --nofile flag to check the default maximum number of open files:

$ prlimit --nofile
RESOURCE DESCRIPTION              SOFT  HARD UNITS
NOFILE   max number of open files 1024 65536 files

Since the file handle generator only opens 61 handles on top of the usual 4, the total of 65 stays well below the soft limit of 1024. Let's start our one-liner and, while it's running, lower the maximum file handle limit below the current number of open file descriptors:

$ { for h in {3..63}; do eval 'exec '$h'</dev/null'; sleep 1; done; } &
[1] 666
$ sleep 7
$ prlimit --pid 666 --nofile='5:5'
-bash: line 32: /dev/null: Too many open files
sleep: error while loading shared libraries: libc.so.6: cannot open shared object file: Error 24
[...]

As expected, we get the Too many open files error. Notably, even sleep stops working: it needs to open a shared library, which the new restriction on the number of file handles no longer permits.
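
To recover, we can raise the limits back to the defaults we saw earlier; note that increasing a hard limit usually requires root or the CAP_SYS_RESOURCE capability:

$ sudo prlimit --pid 666 --nofile=1024:65536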

5. Running Process Limits with /proc

Indeed, the source of many kernel options, including limits, is the /proc pseudo-filesystem.

In this case, we can also use /proc/PID/limits to get the limits for a particular process:

$ cat /proc/$$/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             8041                 8041                 processes
Max open files            1024                 4096                 files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       8041                 8041                 signals
Max msgqueue size         819200               819200               bytes
Max nice priority         40                   40
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us

Moreover, on some Linux distributions, we can even set limits by writing to the /proc/PID/limits file directly. Whether and how this works depends on the kernel version and any vendor patches.

Still, for most systems, the /proc/PID/limits file is read-only.
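
To check whether a particular kernel exposes the file as writable, we can look at its permissions; on our test system, it's read-only:

$ stat --format=%A /proc/$$/limits
-r--r--r--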

6. Summary

In this article, we looked at ways to limit resource usage for a given process, with the main example of lowering the maximum number of open file handles.

In conclusion, while there can be non-standard ways to introduce granular resource usage constraints at the level of specific processes, prlimit is the de facto tool for the task.
