1. Introduction

As computing demands have skyrocketed over the past few decades, so has the complexity of the architectures underpinning our machines. One of these architectural developments, pivotal to optimal system performance in multi-processor systems, is Non-Uniform Memory Access (NUMA). Understanding NUMA is therefore crucial for enhancing system performance and resource utilization in today’s computing landscape.

In this tutorial, we’ll learn how to determine whether our system supports NUMA. 

2. Understanding NUMA

Before diving into the methods of checking for NUMA capabilities, let’s briefly understand what NUMA is and why it matters.

NUMA refers to a memory architecture that allows processors in a multi-socket system to access different memory regions with varying latencies. In contrast, we have the traditional Symmetric Multiprocessing (SMP) approach, where all processors have uniform memory access.

With the NUMA architecture, a processor can access local memory, which is physically closer and therefore faster than memory attached to another node (remote memory). This distinction becomes particularly important in systems with multiple sockets, where varying memory access times can significantly impact application performance.

In a server environment, knowing whether the machine supports NUMA is essential: it lets us ensure our applications effectively utilize the available resources, improving efficiency and responsiveness.

3. Using the System Message Log

To begin our exploration, let’s use the system message log, often accessible through the dmesg command.

This command displays the kernel ring buffer, which contains messages generated by the kernel during the boot process and while the system runs. It provides a way to view system events and log messages, including information about hardware initialization, driver loading, and other kernel-related activities.

In addition, the system message log captures crucial information during the system’s boot process, including NUMA configuration details. We can query this with dmesg:

$ dmesg | grep -i numa
NUMA: Initialized distance table, cnt=8
NUMA: Node 4 [0,80000000) + [100000000,280000000) -> [0,280000000)

Here, we run the dmesg utility to display the kernel ring buffer’s content, which includes various system messages. Then, we use the pipe symbol to pass the output of dmesg as input to grep, which searches for lines containing the text “numa”. The -i flag makes the search case-insensitive, so it matches “NUMA”, “numa”, or any other letter case.

As we can see, our output indicates that NUMA has been initialized on the system. The messages state that the NUMA distance table has been initialized with 8 entries, and there is a description of memory ranges associated with a NUMA node (in this case, Node 4).
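
If we only need a quick yes-or-no answer, we can rely on grep’s exit status instead of reading through the messages. The following one-liner is a minimal sketch; on systems that restrict access to the kernel log, we may need to prefix dmesg with sudo:

$ dmesg | grep -iq numa && echo "NUMA messages found" || echo "No NUMA messages"
NUMA messages found

However, since the kernel ring buffer can rotate and overwrite older messages, the absence of a match here doesn’t by itself prove that NUMA is unsupported.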

4. Exploring the /proc Directory

We can also gain insights into NUMA capabilities by exploring the /proc directory, which contains a wealth of system information. One effective way to uncover NUMA-related data within this directory is using the find command, a versatile tool for locating files and directories.

The find command recursively searches through specified directories and subdirectories, applying various filters to identify files or directories that match specific criteria.

In this context, we can use the find command with superuser permissions (sudo) to locate “numa_maps” files within the /proc directory:

$ sudo find /proc -name numa_maps
/proc/1234/numa_maps
/proc/5678/numa_maps
/proc/9876/task/1357/numa_maps
...

Upon executing this command, find traverses the /proc directory and its subdirectories, searching for numa_maps files, and then displays a list of matching file paths. These files describe the memory allocation and distribution of specific processes and threads across NUMA nodes. The numbers in the paths represent the process or thread IDs.
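
As a quick sketch, we can also peek into one of these files for a running process; PID 1 is used here only because it always exists, and the exact contents vary from process to process:

$ sudo head -n 3 /proc/1/numa_maps

Each line of the output describes a mapped memory range, its NUMA policy, and the nodes that currently back its pages.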

5. Accessing Kernel Configuration Settings

Kernel configuration settings are stored in files within the /boot directory, typically named config-<kernel_version>. These files contain the full set of options used to build the corresponding kernel.

We can use the cat and grep utilities to display and examine the contents of the config-<kernel_version> file for the running kernel, located in the /boot directory. This reveals the current state of the NUMA-related kernel configuration settings:

$ cat /boot/config-$(uname -r) | grep NUMA
CONFIG_NUMA=y
CONFIG_NUMA_BALANCING=y
CONFIG_NUMA_EMU=y

Here, the $(uname -r) part is a command substitution that retrieves the currently running kernel version. Let’s better understand our output:

  • CONFIG_NUMA=y – indicates that NUMA support is enabled in the kernel configuration (y stands for yes)
  • CONFIG_NUMA_BALANCING=y – NUMA balancing is enabled (y), which involves redistributing processes and their associated memory pages across NUMA nodes to ensure optimal memory access and load distribution
  • CONFIG_NUMA_EMU=y – NUMA emulation is enabled (y), allowing systems without a physical NUMA architecture to simulate NUMA behavior for testing and development purposes

In short, our command here and its output provide insights into the present state of NUMA-related kernel configuration settings. They indicate whether NUMA support and balancing are enabled and show additional options related to NUMA emulation and balancing.
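
As a side note, some distributions also expose the running kernel’s configuration through /proc. The following sketch assumes the kernel was built with CONFIG_IKCONFIG_PROC, in which case a compressed copy of the configuration is available as /proc/config.gz:

$ zcat /proc/config.gz | grep NUMA

When this file exists, the output mirrors the one above, so it’s a handy fallback if the config file in /boot is missing.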

6. Using numactl to Determine NUMA Configurations

The numactl utility is a powerful tool for managing NUMA policies and controlling process placement. It gives us a comprehensive overview of NUMA hardware details, helping us understand how our system’s memory is organized and accessed:

$ numactl --hardware
available: 2 nodes (0-1)
node 0 size: 18156 MB
node 0 free: 9053 MB
node 1 size: 18180 MB
node 1 free: 6853 MB
node distances:
node   0   1
  0:  10  20
  1:  20  10

This output indicates the presence of two NUMA nodes, along with their respective sizes and free memory.
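
Besides the hardware summary, the same utility can print the NUMA policy and bindings that apply to the current shell. This is a quick sketch; the exact values depend on the system and on any bindings already in place:

$ numactl --show

The output lists the active memory policy along with the CPU and memory nodes the current process is allowed to use.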

6.1. NUMA Node Distance Table

Returning to the numactl --hardware output, the node distances table provides insights into memory access latencies between nodes. Let’s analyze that output for a better understanding:

  • available: 2 nodes (0-1) – indicates that our system has two NUMA nodes, labeled as Node 0 and Node 1
  • node 0 size: 18156 MB – provides the total size of memory available in Node 0
  • node 0 free: 9053 MB – indicates the amount of free memory in Node 0, meaning that a portion of the memory in Node 0 is currently in use
  • node 1 size: 18180 MB – similar to Node 0, this line provides the total size of memory available in Node 1
  • node 1 free: 6853 MB – indicates the amount of free memory in Node 1

Lastly, we can see a node distances matrix, which represents the relative memory access latency between each pair of NUMA nodes.

6.2. NUMA Node Distance Matrix

From our previous command output, the value 10 at position (0, 0) indicates that the distance (latency) from Node 0 to itself is relatively low. The value 20 at position (0, 1) is the distance from Node 0 to Node 1, which is higher than the intra-node distance. Similarly, the value 20 at position (1, 0) is the distance from Node 1 to Node 0, and the value 10 at position (1, 1) shows that Node 1’s distance to itself is again relatively low.

Generally, lower distances indicate faster memory access within the same node, while higher distances indicate slower access when accessing memory across nodes.
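
These distances are also exposed directly by the kernel. As a quick aside, we can read them for a given node from /sys; on the two-node system above, node 0’s line matches the first row of the matrix:

$ cat /sys/devices/system/node/node0/distance
10 20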

We can utilize this information for optimizing process placement and memory allocation in NUMA systems to minimize latency and improve performance.
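
For instance, once we know the topology, we can pin a workload’s CPUs and memory allocations to a single node with numactl. The my_app name below is just a placeholder for an actual program:

$ numactl --cpunodebind=0 --membind=0 ./my_app

This keeps the process on node 0’s CPUs and forces its allocations into node 0’s memory, avoiding the more expensive cross-node accesses reflected by the distance value of 20.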

7. Getting NUMA Information Using lscpu

In addition to the numactl utility, we can use the lscpu command. The lscpu command is a Linux utility that provides detailed information about a system’s Central Processing Unit (CPU) architecture and configuration.

Furthermore, lscpu provides a straightforward way to determine NUMA-related details on our system by offering a concise overview of NUMA nodes and associated CPUs:

$ lscpu | grep -i numa
NUMA node(s):          2
NUMA node0 CPU(s):     0-19,40-59
NUMA node1 CPU(s):     20-39,60-79

Our output here indicates the presence of two NUMA nodes, along with the specific CPU ranges associated with each node:

  • NUMA node(s): 2 – indicates that the system has a total of 2 NUMA nodes
  • NUMA node0 CPU(s): 0-19,40-59 – specifies the CPUs that belong to NUMA node 0. CPUs 0 to 19 and 40 to 59 share this node’s memory domain and sit physically closer to its memory, resulting in faster local memory access
  • NUMA node1 CPU(s): 20-39,60-79 – similarly, specifies the CPUs that belong to NUMA node 1

Notably, a NUMA configuration like this is essential for applications that require efficient memory access, as understanding the distribution of memory and CPUs across NUMA nodes can help optimize performance.
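
If we need this mapping in a machine-readable form, for example to feed into a script, lscpu can also emit a parseable CPU-to-node listing. As a small sketch, we keep only the first few data lines; on the system above they would look similar to this:

$ lscpu -p=CPU,NODE | grep -v '^#' | head -n 4
0,0
1,0
2,0
3,0

Here, each line pairs a logical CPU number with the NUMA node it belongs to.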

8. Querying NUMA Nodes From the /sys Directory

Another approach to gathering NUMA information is by querying the /sys directory. The /sys directory is a virtual filesystem that exposes real-time information about kernel parameters and system attributes. By navigating through the /sys directory structure, we can access various details about our system’s hardware and configuration, including NUMA nodes.

To do this, we can use the cat command to display the contents of the /sys/devices/system/node/online file.

In the Linux /sys directory, the devices subdirectory hosts information about various hardware devices and components. The system subdirectory further contains system-related information, and within it, the node subdirectory provides details about NUMA nodes. The online file within the node subdirectory lists the NUMA nodes that are currently online, so reading it tells us which nodes are available on our system:

$ cat /sys/devices/system/node/online
0-3

The output will be a range or a list of NUMA node identifiers that are currently online. For this example, our output 0-3 indicates that NUMA nodes 0 through 3 are online and available.
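
Building on this, each online node also has its own subdirectory (node0, node1, and so on) with further details such as its CPU list, memory statistics, and distances to other nodes. As a small sketch, we can turn the node entries into a quick NUMA check; the node[0-9]* glob and the echoed wording are just illustrative:

$ nodes=$(ls -d /sys/devices/system/node/node[0-9]* 2>/dev/null | wc -l)
$ [ "$nodes" -gt 1 ] && echo "NUMA system with $nodes nodes" || echo "Single memory node"
NUMA system with 4 nodes

A count greater than one tells us the kernel sees multiple memory nodes, which matches the 0-3 range we read above.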

9. Conclusion

In this article, we explored NUMA capabilities in Linux-based systems. We discussed the dmesg approach, which provides quick insights from system startup but lacks detailed information. In contrast, the /proc directory and utilities like numactl and lscpu offer in-depth information and flexibility. Furthermore, querying /sys is well suited to quickly checking NUMA status in dynamic environments.

As we wrap up, we should remember that the choice of method depends on our specific use case and the level of detail we require. Whether we’re configuring a high-performance application or monitoring dynamic workloads, understanding NUMA capabilities equips us with the knowledge to make our server architecture work in harmony with our applications.
