1. Overview

Linux operating systems have specific ways of managing memory. One of the policies is overcommitment, which allows an application to book in advance as much memory as it wants. However, the promised memory may not be available when the application actually tries to use it. Consequently, the system needs a way to avoid running out of memory.

In this tutorial, we’ll learn about the Out-Of-Memory (OOM) killer, a kernel mechanism that eliminates processes for the sake of system stability.

2. When the OOM Killer Is Called

Let’s note that for the killer to work, the system must allow overcommitting. Then, each process is scored by how much the system would gain from eliminating it.

Finally, when the system reaches a low-memory state, the kernel kills the process with the highest score.

We can find the score of a process by its PID in the /proc/PID/oom_score file. Now, let’s open a terminal and print its score, since the $$ variable holds the shell’s PID:

$ cat /proc/$$/oom_score
0

Next, let’s list all processes together with their PIDs and names, sorted from lowest to highest by oom_score, with the oom_score_reader script:

#!/bin/bash

# print each process' PID, oom_score, and name, sorted ascending by the score
while read -r pid comm
do
    # skip processes that exited since ps listed them
    [ -r "/proc/$pid/oom_score" ] || continue
    printf '%d\t%d\t%s\n' "$pid" "$(cat "/proc/$pid/oom_score")" "$comm"
done < <(ps -e -o pid= -o comm=) | sort -k2 -n

We use process substitution to feed the read command with the results of ps.
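Assuming we’ve saved the script as oom_score_reader, we can make it executable and run it:

$ chmod +x oom_score_reader
$ ./oom_score_reader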

Now let’s check the result:

10      0       rcu_sched
102     0       kswapd0
...
97      0       devfreq_wq
99      0       watchdogd
1051    1       upowerd
1114    1       sddm-helper
1126    1       systemd
...
1147    2       pulseaudio
2005    2       gnome-shell-cal
2172    2       gsd-datetime
...
4329    6       gedit
2186    7       evolution-alarm
5300    9       qterminal
875     9       Xorg
9215    10      Web Content
3527    17      Privileged Cont
6353    19      Web Content
1679    20      gnome-shell
6314    21      Web Content
8625    21      Web Content
4070    22      Web Content
7753    23      Web Content
3170    27      gnome-software
3615    41      WebExtensions
3653    41      Web Content
3160    62      firefox

The score takes values from 0 to 1000. Let’s note that an oom_score of zero means the process is safe from the OOM killer.

3. Protecting the Process From the OOM Killer

Now, let’s make the elimination of a process less likely. This is especially important for long-running processes and services. For such a process, we should set the oom_score_adj parameter.

The parameter takes values in the range from -1000 to 1000 inclusive. A negative value decreases the oom_score, making the process less attractive to the killer. Conversely, positive values cause the score to rise. Finally, a process with oom_score_adj = -1000 is immune to killing.

We can check the parameter in the file /proc/PID/oom_score_adj. So, for our terminal:

$ cat /proc/$$/oom_score_adj
0

We can see that the terminal’s score isn’t adjusted in either direction.

3.1. Setting oom_score_adj by Hand

The simplest way is to write to the oom_score_adj file by hand. First, let’s check the score of the Firefox web browser process. We’ll use pgrep to obtain its PID:

$ cat /proc/$(pgrep firefox)/oom_score
60

Next, let’s make it more prone to being killed:

$ echo 500 > /proc/$(pgrep firefox)/oom_score_adj

$ cat /proc/$(pgrep firefox)/oom_score
562

And finally, let’s increase its safety:

$ echo -30 | sudo tee /proc/$(pgrep firefox)/oom_score_adj
-30

$ cat /proc/$(pgrep firefox)/oom_score
31

Let’s note that we need root privileges to decrease the adjustment factor below zero. In addition, we pipe the value through sudo tee, because a plain output redirection would be performed by our non-privileged shell.

3.2. The choom Command

Alternatively, we can use choom to report the score and modify its adjustment. The command is part of the util-linux package. So, let’s check the Firefox process one more time using the -p switch:

$ choom -p $(pgrep firefox)

pid 3061's current OOM score: 40
pid 3061's current OOM score adjust value: -30

Then, let’s increase its score by providing a new value of oom_score_adj with the -n switch:

$ choom -p $(pgrep firefox) -n 300
pid 3061's OOM score adjust value changed from -30 to 300

$ choom -p $(pgrep firefox)
pid 3061's current OOM score: 371
pid 3061's current OOM score adjust value: 300

Finally, with choom, we can start a process right away with the given oom_score_adj:

$ choom -n 300 firefox

$ choom -p $(pgrep firefox)
pid 3061's current OOM score: 346
pid 3061's current OOM score adjust value: 300

3.3. Configuring Services

In the case of a service, we can permanently adjust the score in the service’s configuration, located or linked under the /etc/systemd folder. So, we need to set the OOMScoreAdjust entry in the [Service] section. As an example, let’s look into the configuration of the snapd service:

[Unit]
Description=Snap Daemon

# some output skipped

[Service]

# some output skipped

OOMScoreAdjust=-900
ExecStart=/usr/lib/snapd/snapd

# more output skipped
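Instead of editing the unit file shipped by the package, we can keep the change in a drop-in override. As a sketch, systemctl edit snapd opens an override file (stored under /etc/systemd/system/snapd.service.d/), where we put only the entry we want to change:

[Service]
OOMScoreAdjust=-900

After saving the file, restarting the service with systemctl restart snapd applies the new value.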

4. How the oom_score Is Calculated

We should be aware that how the score is calculated depends on the kernel version. Besides memory footprint, older releases might account for the running time, nice level, or root ownership.

However, in kernel version 5, only the task’s total memory usage matters. To see the details, we can examine the oom_badness function in the mm/oom_kill.c source file.

First, the function checks if the process is immune, which is usually the case when oom_score_adj = -1000. Such a task obtains a zero score.

Otherwise, the task’s resident memory (RSS), page table memory, and swap space usage are summed up. Then, the result is divided by the total available memory, that is, RAM plus swap. Finally, the function normalizes the ratio to the range of 1000.

At this point, oom_score_adj comes into play: it’s added to the normalized score, so a negative value actually reduces it.
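To illustrate the idea, we can roughly reproduce the calculation in a shell script. It’s only a sketch of the kernel-5.4 arithmetic, built from the fields in /proc/PID/status and /proc/meminfo; the name rough_oom_score is ours:

#!/bin/bash

# rough_oom_score: estimate a process' oom_score from /proc data
# usage: ./rough_oom_score PID
pid=$1

# memory charged to the task: resident set, page tables, and swap (kB)
rss=$(awk '/^VmRSS/  {print $2}' "/proc/$pid/status")
pte=$(awk '/^VmPTE/  {print $2}' "/proc/$pid/status")
swp=$(awk '/^VmSwap/ {print $2}' "/proc/$pid/status")

# total memory the system can hand out: RAM plus swap (kB)
ram=$(awk '/^MemTotal/  {print $2}' /proc/meminfo)
swap=$(awk '/^SwapTotal/ {print $2}' /proc/meminfo)

adj=$(cat "/proc/$pid/oom_score_adj")

# normalize the usage to the 0..1000 range and apply the adjustment
echo $(( (rss + pte + swp) * 1000 / (ram + swap) + adj ))

For an ordinary user process, the result should land close to the value reported in /proc/PID/oom_score.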

Now we should realize that if a process is important to us, we need to look after its survival ourselves. Therefore, we should adjust its oom_score_adj parameter appropriately.

5. System-Wide Parameter to Control Overcommit

We can change the Linux system’s overcommit policy by setting the overcommit_memory parameter. The parameter lives in the /proc/sys/vm/overcommit_memory file and takes one of three values:

  • 0 allows moderate overcommit; however, unreasonable memory allocations fail. It’s the default setting
  • 1 always overcommits
  • 2 doesn’t allow overcommit. A process usually won’t be terminated by the OOM killer, but memory allocation attempts may return an error
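For instance, we can check the current policy and change it with sysctl; changing it needs root, and to make it persistent we’d also put the line vm.overcommit_memory = 2 into /etc/sysctl.conf:

$ cat /proc/sys/vm/overcommit_memory
0

$ sudo sysctl vm.overcommit_memory=2
vm.overcommit_memory = 2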

We should be aware that using policies other than the default one imposes greater requirements on the way the applications deal with memory.

6. Demonstration

Now let’s show the OOM killer in action. We’ll simulate processes with high memory consumption. However, no single process should exhaust the system’s memory on its own. Instead, only starting one more such process should push the system into an out-of-memory state.

So, let’s use the test script to eat the memory:

#!/bin/bash

# keep appending to a string to gradually consume memory
for x in {0..6999999}
do
    y=$x$y
done

Let’s note that the script allocates a lot of memory, but without any sudden spikes. Therefore, its memory usage levels off quickly.

The demonstration was carried out on Ubuntu 20.04 LTS with kernel 5.4.0-58-generic, around 4 GB of RAM, and 4 GB of swap space. Usually, the system could sustain up to three test instances running simultaneously. Then, starting the fourth instance woke the OOM killer up.
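While the test instances are running, we can keep an eye on the shrinking free memory and swap, for example by refreshing the free report every few seconds:

$ watch -n 5 free -h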

6.1. Logging the Process’ oom_score

Because the oom_score of a process doesn’t appear in the kernel’s log, we need to devise a simple logger based on cron. So, let’s use our oom_score_reader to log the five highest-scoring processes every minute:

$ crontab -l
#some output skipped

OOMTEST=/home/joe/prj/oom
*/1 * * * * $OOMTEST/./oom_score_reader | tail -n 5 >> $OOMTEST/oom_score.log && echo "-------------" >> $OOMTEST/oom_score.log

6.2. Tracking Down the OOM Killer

Since we’ve started our tasks in terminals, at some point we’ll see a message in one of them:

$ ./test
Killed

Let’s look for the corresponding event in kern.log:

$ grep -ia "Killed process" kern.log

Jul  7 18:40:56 virtual kernel: [ 7269.971178] Out of memory: Killed process 20257 (test) total-vm:1980996kB, anon-rss:21128kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:3920kB oom_score_adj:0

Then, let’s grep kern.log for PID 20257 to obtain more information about the event:

Jul  7 18:40:56 virtual kernel: [ 7269.971162] oom-kill:constraint=CONSTRAINT_NONE, nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-1000.slice/user@1000.service,task=test,pid=20257,uid=1000
Jul  7 18:40:56 virtual kernel: [ 7269.971178] Out of memory: Killed process 20257 (test) total-vm:1980996kB, anon-rss:21128kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:3920kB oom_score_adj:0
Jul  7 18:40:56 virtual kernel: [ 7270.002859] oom_reaper: reaped process 20257 (test), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB

Here, oom_reaper is the kernel’s helper that reclaims the killed process’ memory. Finally, we can find the process’ last oom_score in our log file:

1537	13	gnome-shell
2361	24	gnome-software
20256	235	test
20257	235	test
24214	236	test

We can see that process 20257’s last logged score of 235 was only the second-largest at the moment of logging. However, that’s most likely due to cron’s granularity of one minute.

7. Conclusion

In this tutorial, we learned about the Linux way of managing memory. First, we looked at the overcommit policy, which allows any reasonable memory allocation. Then we came across the OOM killer, the mechanism that guards system stability in the face of memory shortage.

Next, we looked through the scoring of processes by their memory usage and learned how to secure them from the OOM killer. In addition, we had a look at the system-wide overcommit settings.

Finally, we provided an example of how the OOM killer worked.
