1. Introduction

Virtual memory allows processes to use more memory than is physically available on the machine.

In this tutorial, we’ll elaborate on the fundamentals of virtual memory. We’ll describe its two crucial implementations, namely segmented and paged virtual memory. Finally, we’ll compare them.

2. Fundamentals

Virtual memory separates the physical memory (random-access memory, RAM) from the logical one. It’s based on the observation that, at any time, only the part of the process that is actually being used must reside in the computer’s random-access memory.

Systems that use virtual memory improve the utilization of random-access memory in multitasking environments. The core benefits of using virtual memory are:

  • Ability to share memory between processes
  • Improved security thanks to memory isolation
  • Possibility to use more memory than is physically available on the machine

Virtual memory creates an illusion that the process is working in a single, contiguous memory area. Physically, however, that area can be fragmented, noncontiguous, and partially stored on mass storage devices. This is possible thanks to the mapping of logical addresses to physical ones. Moreover, the logical address space can be larger than the physical one. Below, we can see the general concept of virtual memory:

[Figure: the general concept of virtual memory]

Virtual memory can be implemented using two common mechanisms called demand segmentation and demand paging. In the next sections, we’ll discuss them in detail.

3. Paged Virtual Memory

Paged virtual memory is the most commonly used implementation. In general, paging is a technique for storing data in, and retrieving it from, secondary storage in fixed-size blocks. The physical memory is split into fixed-size blocks called frames, while the logical memory is split into fixed-size blocks called pages. Consequently, pages and frames are of the same size.
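
As a quick illustration, here’s a minimal sketch of how a logical address decomposes into a page number and an offset; the page size and the example address are made-up values, not taken from any particular system:

    PAGE_SIZE = 4096  # a hypothetical 4 KiB page size, purely for illustration

    def split_address(logical_address):
        # Split a logical address into a page number and an offset within that page.
        page_number = logical_address // PAGE_SIZE
        offset = logical_address % PAGE_SIZE
        return page_number, offset

    # Example: address 10000 lands on page 2 at offset 1808.
    print(split_address(10000))  # (2, 1808)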

While the process executes, pages are loaded into available frames only when they’re needed. That’s why the mechanism is called demand paging, or paging on demand:

[Figure: demand paging]

Loading a page into random-access memory only when it’s needed brings a lot of benefits. It decreases the number of input/output operations and the RAM consumption, since a single execution of a large process may require only a small part of its code. As a result, the system’s response time improves and more users can be handled.

While the process references memory, the following situations can occur:

  • The reference is invalid
  • The reference is valid and the required page is present
  • The reference is valid and the required page is absent

In the first case, the request is rejected. The second case is the success scenario, and the request is simply handled. In the last case, the system must bring the required page from the disk into memory.

3.1. Validation Bit

A validation bit is a hardware-supported mechanism for verifying a page’s state. The bit is kept for each entry in the page table and can take only two possible values: zero or one.

Initially, all bits are set to zero. It means that the page isn’t present in random-access memory or that the reference is invalid. On the other hand, when the bit is set to one, the reference is valid and the page is present. If the bit is set to zero during address translation, the situation is called a page fault, and the processor triggers the appropriate mechanism, which we’ll describe in the next subsection.
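
A simplified sketch of this check during address translation might look as follows; the page-table layout and names are illustrative, not tied to any real hardware:

    class PageTableEntry:
        def __init__(self):
            self.valid = 0     # validation bit: 0 = absent or invalid reference
            self.frame = None  # frame number in RAM, meaningful only when valid == 1

    class PageFault(Exception):
        # Signals that the referenced page isn't in RAM (or the reference is invalid).
        pass

    def translate_paged(page_table, page_number, offset, page_size=4096):
        entry = page_table[page_number]
        if entry.valid == 0:
            # The hardware alone can't distinguish "invalid reference" from
            # "page not loaded yet"; the operating system decides that while
            # handling the fault.
            raise PageFault(page_number)
        return entry.frame * page_size + offset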

3.2. Page Faults

When a page fault occurs, the processor transfers control to the operating system. Firstly, the system locates the required data on the disk. Secondly, it looks for a free frame in random-access memory. Then, it loads the page into that frame. After that, it updates the page table. Finally, the process continues its execution.

It’s possible that there aren’t any free frames in the RAM, so some need to be freed up. There are special techniques serving that purpose, called page replacement algorithms.
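
Putting the steps together, a hypothetical page-fault handler could be outlined like this; the process, ram, disk, and policy objects and their methods are assumed placeholders, and replace_page is sketched in the next subsection:

    def handle_page_fault(process, page_number, page_table, ram, disk, policy):
        # 1. Locate the required page on the disk.
        page_data = disk.read_page(process, page_number)
        # 2. Look for a free frame; if RAM is full, page replacement
        #    (next subsection) frees one first.
        frame = ram.find_free_frame()
        if frame is None:
            frame = replace_page(page_table, ram, disk, policy)
        # 3. Load the page into that frame.
        ram.write_frame(frame, page_data)
        # 4. Update the page table and mark the entry as valid.
        page_table[page_number].frame = frame
        page_table[page_number].valid = 1
        # 5. The process can now retry the faulting instruction and continue.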

3.3. Page Replacement

The general workflow of the page replacement process consists of the following actions (a sketch follows the list):

  1. Locate the required page on the disk
  2. Identify the victim frame whose page will be removed from random-access memory
  3. Back up the page to be removed (if there’s no up-to-date copy already) by writing it to the disk
  4. Update the page table by marking the reference to the removed page as invalid
  5. Load the required page into the freed-up frame
  6. Update the page table again
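
Here’s a sketch of that workflow under the same assumptions as before; the dirty flag used to decide whether a backup is needed is hypothetical, and victim selection is delegated to a replacement policy discussed next:

    def replace_page(page_table, ram, disk, policy):
        # 2. The replacement policy (see below) picks the victim page.
        victim_page = policy.pick_victim()
        victim_frame = page_table[victim_page].frame
        # 3. Back the victim up only if the disk copy is out of date
        #    (signalled here by a hypothetical dirty flag).
        if page_table[victim_page].dirty:
            disk.write_page(victim_page, ram.read_frame(victim_frame))
        # 4. Mark the victim's page-table entry as invalid.
        page_table[victim_page].valid = 0
        # Steps 1, 5, and 6 (locating, loading, and the final page-table update)
        # are the page-fault handler's job, shown in the previous subsection.
        return victim_frame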

The page replacement algorithms are mainly responsible for the second step. The choice of a specific algorithm can impact the performance of the virtual memory. The optimal page replacement algorithm exists only in theory: in simple words, it swaps out the page that will be needed the furthest in the future.

In a real-world scenario, it’s impossible to implement such an algorithm, as it requires knowledge of future references. However, there are other algorithms that approximate it more or less effectively. Let’s briefly introduce them (a short sketch of the first two follows the list):

  • FIFO (First In First Out) – replaces the oldest page
  • LRU (Least Recently Used) – replaces the page that wasn’t used for the longest time
  • LFU (Least Frequently Used) – replaces the page that was used the least often
  • MFU (Most Frequently Used) – replaces the page that was used the most often
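
As a minimal sketch, FIFO and LRU policies could be modeled like this; the pick_victim interface matches the hypothetical replace_page function above:

    from collections import OrderedDict

    class FIFOPolicy:
        # Evicts the page that was loaded the earliest.
        def __init__(self):
            self.queue = []                # page numbers in loading order
        def record_load(self, page):
            self.queue.append(page)
        def pick_victim(self):
            return self.queue.pop(0)

    class LRUPolicy:
        # Evicts the page that hasn't been referenced for the longest time.
        def __init__(self):
            self.usage = OrderedDict()     # pages ordered from least to most recently used
        def record_use(self, page):
            self.usage.pop(page, None)
            self.usage[page] = None        # move the page to the most-recent end
        def pick_victim(self):
            page, _ = self.usage.popitem(last=False)
            return page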

4. Segmented Virtual Memory

The second popular implementation is segmented virtual memory. Segmentation is the process of splitting the physical memory into contiguous blocks called segments. The segments can be of different sizes. Therefore, a logical address is represented by two values: a segment number and an offset.

The segment number is an index in the segment table. Each record in that table has a base address that refers to the beginning of the segment in physical memory. Moreover, each segment has a limit value that defines where the segment ends. Consequently, the offset is a value between 0 and the limit.
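
For illustration, here’s a hedged sketch of this translation; the segment table contents are made up, and a real system would also check access rights:

    # A toy segment table: segment number -> (base, limit); the values are made up.
    segment_table = {
        0: (1000, 500),   # segment 0 starts at physical address 1000, with limit 500
        1: (4000, 200),
    }

    def translate_segmented(segment_number, offset):
        base, limit = segment_table[segment_number]
        if not 0 <= offset <= limit:      # the offset must stay between 0 and the limit
            raise MemoryError("offset outside the segment")
        return base + offset

    print(translate_segmented(0, 120))    # 1120
    print(translate_segmented(1, 50))     # 4050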

The system creates segments at the application’s request and passes the segments’ indexes back to the application. So, processes refer to memory areas within their segments and don’t know the location of the data in physical memory that those segments map to. Moreover, processes are unable to access the segments of other processes.

4.1. Segment Descriptor

The segment’s parameters are stored in an 8-byte-long record called a segment descriptor:

[Figure: segment descriptor layout]

Descriptors consist of the following fields (a decoding sketch follows the list):

  • BA – base address, the physical address of the segment’s beginning in memory
  • G – granularity, when clear, the limit is expressed in bytes; when set, in 4096-byte pages
  • D – default operand size, when set, the segment uses 32-bit operands; when clear, 16-bit
  • B – big, the same meaning as D, used for data segments
  • L – long, if set, the segment contains 64-bit code
  • AVL – available, reserved for software use
  • P – present, if clear, referencing the segment generates an exception
  • DPL – descriptor privilege level
  • T – type
  • C – conforming, defines whether code within the segment can be executed from less-privileged levels
  • E – expand-down, if set, the segment expands from the maximum offset down to the limit; if clear, it expands from the base address up to base + limit
  • R – readable, if clear, the segment can’t be read, only executed
  • W – writable, if clear, the segment can’t be written, only read
  • A – accessed, set to 1 when the segment is accessed
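
To make the field list a bit more concrete, the sketch below decodes a few of these fields from a descriptor packed into a 64-bit integer; the bit positions follow the common x86 convention and are shown as an approximation, not as something this article specifies:

    def decode_descriptor(d):
        # Decode selected fields of an 8-byte (64-bit) segment descriptor.
        base  = ((d >> 16) & 0xFFFFFF) | (((d >> 56) & 0xFF) << 24)  # BA (32 bits)
        limit = (d & 0xFFFF) | (((d >> 48) & 0xF) << 16)             # 20-bit limit
        return {
            "BA": base,
            "limit": limit,
            "A":   (d >> 40) & 1,     # accessed
            "W/R": (d >> 41) & 1,     # writable (data) or readable (code)
            "E/C": (d >> 42) & 1,     # expand-down (data) or conforming (code)
            "T":   (d >> 43) & 1,     # type (here: the bit distinguishing code from data)
            "DPL": (d >> 45) & 0b11,  # descriptor privilege level
            "P":   (d >> 47) & 1,     # present
            "AVL": (d >> 52) & 1,     # available for software use
            "L":   (d >> 53) & 1,     # long (64-bit code)
            "D/B": (d >> 54) & 1,     # default operand size / big
            "G":   (d >> 55) & 1,     # granularity
        }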

Segment descriptors are stored in two types of tables. The first of them is the GDT (Global Descriptor Table). There is only one GDT in the system, and it stores segment descriptors available to all processes. The second one is the LDT (Local Descriptor Table). There are many local descriptor tables, each describing the segments of an individual process.

4.2. Segment Selector

Each descriptor has a corresponding selector. The segment selector is a reference to the descriptor in a specific table (GDT or LDT). The selector contains the following fields:

  • The segment descriptor’s index in the specific table
  • The table indicator field, which tells in which table the descriptor is located. It can hold one of two values: zero means the descriptor is in the GDT, one means it’s in the LDT
  • The requestor’s privilege level field, which defines security properties, e.g., which instructions the process can execute and whether it can access data it doesn’t own

So, when a process wants to access segmented memory, it firstly points to a segment selector in a segment register. Then, the selector refers to the appropriate descriptor in the GDT or LDT. Finally, the descriptor tells where to find the required segment.
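
A small sketch of how such a selector could be unpacked; the 13-bit index / table indicator / privilege level split follows the usual x86 convention:

    def decode_selector(selector):
        # Unpack a 16-bit segment selector into its three fields.
        index = selector >> 3                  # descriptor index in the GDT or LDT
        table_indicator = (selector >> 2) & 1  # 0 = GDT, 1 = LDT
        rpl = selector & 0b11                  # requestor's privilege level
        return index, table_indicator, rpl

    # Example: selector 0x2B -> index 5 in the GDT, privilege level 3.
    print(decode_selector(0x2B))               # (5, 0, 3)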

5. Paging vs. Segmentation

In short, the two mechanisms differ in a few key aspects:

  • Block size – paging uses fixed-size pages and frames, while segments have variable sizes
  • Logical address – with paging, a single address is split by the hardware into a page number and an offset; with segmentation, it explicitly consists of a segment number and an offset
  • Visibility – paging is transparent to the process, whereas segments are created at the application’s request and reflect its logical structure
  • Fragmentation – paging can waste space inside a process’s last page (internal fragmentation), while segmentation leaves variable-size holes between segments (external fragmentation)
  • Bookkeeping – paging relies on a page table, segmentation on segment descriptor tables such as the GDT and LDT

6. Conclusion

In this article, we described the virtual memory concept in detail.

The two most commonly used implementations are based on paging and segmentation. Both types differ significantly and have their pros and cons. Nowadays, modern operating systems tend to use a combination of both to get the benefits of each.
