1. Overview
Running multiple processes on a Linux system requires them to share resources such as CPU, RAM, and disk space. To keep a resource-hungry process from freezing the whole system, we may want to limit the resources it can consume.
In this tutorial, we’ll learn how to constrain CPU, RAM, and disk usage. We’ll use commands like ulimit, cgroups, systemd-run, cpulimit, ionice, and nice. In some cases, these commands will need superuser permissions.
2. Using systemd-run
systemd-run is part of systemd, the system and service manager available in most Linux distros that deals with starting and running processes, services, and daemons. With systemd-run, we can run a command as a transient unit.
This tool allows us to add limitations directly to a specific process we want to run.
For example, we can launch a process with a RAM limit of roughly 1GB:
$ systemd-run --scope -p MemoryLimit=1000M ./myProcess.sh
Here, myProcess.sh has been launched by systemd-run and can use at most the specified amount of RAM. On newer systemd versions that use cgroup v2, MemoryMax is the preferred name for this property.
We can also use it to force a process to use only a maximum percentage of the CPU:
$ systemd-run --scope -p CPUQuota=10% ./myProcess.sh
systemd-run allows us to combine both a CPU and Memory limit:
$ systemd-run --scope -p MemoryLimit=1000M -p CPUQuota=10% ./myProcess.sh
3. Using ulimit
ulimit allows us to view and set resource limits, such as on RAM and disk space, for the current shell. It’s a built-in command of the Bash shell.
3.1. Hard and Soft Limits
There’s a hard limit for all users in the system, set by the administrator. Then, there’s a soft limit, which each user can set themselves, up to the corresponding hard limit. These limits apply to the current shell session and any process it launches, and they remain in place until changed.
Let’s set a 1GB RAM soft limit for our current user:
$ ulimit -Sv 1000000
Now, any process we run will only be able to use 1GB of RAM. This limitation will affect all future processes we launch, but we can change it before running the next process if we wish.
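Since ulimit is a shell builtin, the limit is scoped to the shell that sets it. A quick way to experiment without affecting our main session is a subshell:

```shell
# Set the virtual memory cap inside a subshell; the parent shell is unaffected
(
  ulimit -Sv 1000000   # soft cap: 1000000 KB, roughly 1GB
  ulimit -v            # prints 1000000 inside the subshell
)
ulimit -v              # the parent shell's limit is unchanged
```

Once the subshell exits, its limits disappear with it.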
With ulimit, we can also add limitations to the maximum file size created by a process, thus limiting disk space used:
$ ulimit -Sf 2000
Now, all processes launched by our user in this shell can only create or extend files up to roughly 2MB: the -f limit is measured in 1024-byte blocks, so 2000 blocks is about 2MB. We should note that this limit applies per file, not to total disk usage.
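We can see the file-size limit in action with a quick sketch in a subshell (the path /tmp/capped.bin is just an illustrative name):

```shell
(
  ulimit -Sf 10                          # cap: 10 blocks of 1024 bytes = 10KB
  dd if=/dev/zero of=/tmp/capped.bin bs=1024 count=20 2>/dev/null
  # dd is stopped by SIGXFSZ once the file reaches the cap
)
stat -c %s /tmp/capped.bin               # the file stops growing at 10240 bytes
```

The process that exceeds the limit is terminated by the SIGXFSZ signal, leaving the file capped at the limit.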
We can also combine limits:
$ ulimit -Sv 1000000 -Sf 2000 && ./myProcess.sh
Here we’ve set the soft limits on RAM and file size and immediately launched myProcess.sh.
3.2. How to Revert the Limits Set With ulimit
Any user can adjust their own soft limits, as long as they stay within the hard limits. However, only the administrator can raise a hard limit.
Usually, we provide the limit to ulimit numerically. However, for resetting ulimit limits, we can use the keyword unlimited. Let’s imagine we’ve set some limits:
$ ulimit -Sv 1000000 -Sf 2000
$ ulimit -a
...
file size (blocks, -f) 2000
...
virtual memory (kbytes, -v) 1000000
...
To remove them, we set an unlimited limit instead:
$ ulimit -Sv unlimited -Sf unlimited
$ ulimit -a
...
file size (blocks, -f) unlimited
...
virtual memory (kbytes, -v) unlimited
...
4. Using cpulimit
cpulimit can limit the CPU usage of new or existing processes. The tool is usually available on Ubuntu; on other Debian-based distros, we may need to install it:
$ sudo apt-get install cpulimit
4.1. How to Use It on a New Process
We can directly launch new processes with a CPU limitation applied:
$ cpulimit -l 20 firefox
Process 5202 detected
Here, we launched a new process – the Firefox browser – with a CPU limitation of 20%.
4.2. How to Use It on an Existing Process
To limit a process that’s already running, we must provide the name of the process for cpulimit to find:
$ firefox &
$ cpulimit -e firefox -l 30
Process 2666 detected
Here, cpulimit detected the firefox process with the PID of 2666, and limited the CPU percentage to 30%.
5. Using ionice
ionice allows us to control when a program may use the system disk IO. Currently, there are four types, or classes, of scheduling for ionice: 0 (none), 1 (realtime), 2 (best-effort), and 3 (idle). To launch an idle process, meaning that it only uses disk IO when no other process has requested it for a grace period, we use class 3:
$ ionice --class 3 ./myProcess.sh
With class 1, the realtime class, we can give a process top priority on the disk IO (this requires superuser privileges):
$ ionice --class 1 ./myProcess.sh
We need to be careful when launching a process with class 1: it doesn’t take into account what is happening on the disk at the moment of launch, so it can starve other processes that are using disk IO at that moment.
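We can also query or change the IO class of an existing process with the -p flag. As a sketch, here we inspect and reschedule our own shell via $$:

```shell
# Query the IO scheduling class and priority of the current shell
ionice -p $$          # e.g. "none: prio 0" or "best-effort: prio 4"
# Move an already-running process (here: our shell) to the idle class
ionice -c 3 -p $$
ionice -p $$          # now reports "idle"
```

Moving a process to the idle class doesn’t require superuser privileges on modern kernels.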
6. Using cgroups
Control groups (also known as cgroups) are kernel-managed groups to which we can attach resource limits and then assign processes. We can think of them as resource limitation filters through which our processes must pass.
When we run a process through a cgroup, all the limitations of that group will be applied.
6.1. How Do We Get It?
It’s possible that some of the cgroups commands are not available in our distro, so we may first need to install them:
$ sudo apt-get install cgroup-tools
6.2. Creating Groups
We’ll now use the cgcreate command to create two cgroups, which we’ll use to run some processes:
$ sudo cgcreate -t $USER:$USER -a $USER:$USER -g cpu:/cpunormalusage
$ sudo cgcreate -t $USER:$USER -a $USER:$USER -g cpu:/cpulowusage
Here, we’ve created two cgroups assigned to our user. For simplicity, we’ve used $USER as both the user and the group, though the second occurrence could be replaced by a group if we prefer.
Now, we need to add limitations to our cgroups:
$ cgset -r cpu.shares=512 cpulowusage
Here, we used cgset to set the CPU shares of the cpulowusage group to 512. When we create a cgroup, it gets 1024 shares by default. The more shares a group has relative to other groups, the bigger its portion of CPU time when the groups compete.
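Shares only matter under contention, and the split is proportional. Assuming the default 1024 shares for cpunormalusage and our 512 for cpulowusage, the arithmetic works out as:

```shell
# Under contention, each group gets shares / total_shares of the CPU:
# cpunormalusage: 1024 / (1024 + 512) ~ 66%
# cpulowusage:     512 / (1024 + 512) ~ 33%
echo $(( 512 * 100 / (1024 + 512) ))   # prints 33
```

When only one group is active, it can still use the whole CPU regardless of its share count.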
6.3. Launching a Process
To launch a process in a cgroup, we use the cgexec command:
$ cgexec -g cpu:cpulowusage firefox &
Running the above command launches firefox in the control group, in the background. Since it doesn’t have any other cgroup to compete with, it still has access to all of the CPU. If we launch another process in a different control group, the two will compete for CPU time in proportion to their shares.
6.4. Limiting RAM
So far, we’ve used cgroups to limit CPU. We can also limit RAM usage. For this, we’ll create a new cgroup and override a property called memory.limit_in_bytes.
Then we’ll use a combination of echo and tee to write a value, for that group specifically, into the property:
$ sudo cgcreate -g memory:/memoryLimitGroup
$ echo $((1000000000)) | sudo tee /sys/fs/cgroup/memory/memoryLimitGroup/memory.limit_in_bytes
In our case, we set the limit to 1GB.
If we want to limit swap usage as well, we override the memory.memsw.limit_in_bytes property:
$ echo $((1000000000)) | sudo tee /sys/fs/cgroup/memory/memoryLimitGroup/memory.memsw.limit_in_bytes
This property caps the combined total of RAM and swap for the group. Since we’ve set it equal to memory.limit_in_bytes, the group gets no additional swap space.
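Since memory.memsw.limit_in_bytes covers RAM and swap combined, granting a group swap on top of its RAM limit means writing the sum of the two allowances. A quick sketch with illustrative figures (1GB of RAM plus 500MB of swap):

```shell
ram_limit=1000000000        # 1GB, the value for memory.limit_in_bytes
swap_allowance=500000000    # 500MB of extra swap we want to allow
# memory.memsw.limit_in_bytes must hold RAM + swap combined
echo $(( ram_limit + swap_allowance ))   # prints 1500000000
```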
Now, we can just launch a process in the background in that control group:
$ cgexec -g memory:memoryLimitGroup firefox &
7. Process Schedule Manipulation
7.1. Using nice
The niceness of a process ranges from -20 (highest scheduling priority) to 19 (lowest) and influences how readily the scheduler picks the process to run. We can set this value before launching a process:
$ nice -n 19 ./myProcess.sh
Here, we set the niceness to 19 and launched myProcess.sh. The higher the niceness, the less the process demands of the scheduler, so it will be given less CPU time when other processes need the CPU.
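We can check the effect directly: running nice with no command prints the current niceness, so wrapping it in nice -n shows the adjusted value:

```shell
nice              # prints the current niceness, typically 0
nice -n 10 nice   # the inner nice inherits the adjustment, printing 10 when starting from 0
```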
7.2. Using renice
To change the niceness value of a process while it’s running, we use a tool called renice:
$ ./myProcess.sh &
$ ps aux | grep -i 'myProcess'
baeldung-reader 20935 0.0 0.0 6376 2420 pts/0 S+ 10:16 0:00 ./myProcess.sh
$ renice -n 19 20935
In the example above, we launched a process called myProcess, got its PID, and changed its niceness value to 19.
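Grepping ps output for a name can accidentally match the grep process itself. A more robust sketch captures the PID with $! when launching in the background (myProcess.sh stands in for any long-running command):

```shell
./myProcess.sh &      # launch in the background
pid=$!                # $! holds the PID of the most recent background job
renice -n 19 "$pid"   # raise its niceness without parsing ps output
```

Note that unprivileged users can only raise a process's niceness; lowering it requires superuser permissions.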
8. Conclusion
In this article, we learned how to apply limits on CPU, RAM, swap space, and disk IO using systemd-run, cpulimit, ulimit, ionice, and cgroups. We also saw that these limits can be applied to a process we’re launching, to one that’s already running, or across a whole shell session.
Finally, we learned how to indirectly limit a process’s resource consumption by changing its niceness.