
1. Introduction

Managing storage with Logical Volume Management (LVM) in Linux requires a good understanding of how its storage components – physical volumes (PVs), volume groups (VGs), and logical volumes (LVs) – interact. One of our common tasks as system administrators is identifying which PVs store a specific LV. This information becomes critical during maintenance, troubleshooting, or storage layout optimization.

In this tutorial, we’ll explore how to map LVs to their underlying PVs using various LVM commands. We’ll progress from basic commands to advanced mapping techniques, ensuring we can effectively track and manage our storage resources. Let’s get started!

2. Understanding LVM Layout Basics

Before we dive into specific commands, let’s recap the LVM hierarchy to set the context and establish a clear picture of how LVM organizes storage space:

  • Physical volumes: These represent the physical storage devices or partitions.
  • Volume groups: These are aggregates of PVs that act as a storage pool.
  • Logical volumes: These are the logical partitions created from a VG.

LVM implements a flexible abstraction layer that transforms physical storage into manageable, logical units, which follows a hierarchical structure:

Physical Disks → PVs → VGs → LVs → Filesystems

A crucial aspect of this structure is that LVs aren't confined to a single PV. Instead, they can span multiple PVs within the same VG. This flexibility enables some powerful features:

  • Storage aggregation across multiple disks
  • Dynamic volume resizing
  • Seamless storage migration
  • Snapshot capabilities

However, this flexibility also means that tracking which PVs hold specific LVs becomes increasingly important, especially when performing maintenance or troubleshooting.
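
To make the hierarchy concrete, here's a minimal sketch of building it from scratch, assuming /dev/sdb1 and /dev/sdc1 are spare partitions we can dedicate to LVM:

$ pvcreate /dev/sdb1 /dev/sdc1          # initialize the partitions as PVs
$ vgcreate vg_sys /dev/sdb1 /dev/sdc1   # pool them into a VG
$ lvcreate -L 200g -n home vg_sys       # carve an LV from the pool
$ mkfs.ext4 /dev/vg_sys/home            # put a filesystem on top

Because vg_sys pools both partitions, the home LV may end up spanning them – exactly the situation the mapping commands below help us untangle.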

3. Basic Commands for Mapping Logical Volumes to Physical Volumes

Let’s start with the most straightforward approach to identifying PV mappings.

3.1. Using lvs to View Devices

The simplest way to see which PVs an LV uses is the lvs command with the -o +devices option, which appends a Devices column to the default output:

$ lvs -o +devices
LV     VG       Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices            
root   vg_sys   -wi-ao---- 100.00g                                       /dev/sda2(0)                     
home   vg_sys   -wi-ao---- 200.00g                                       /dev/sdb1(0),/dev/sdc1(0)

As we can see, the command outputs a concise list showing each LV and its corresponding PV mapping. The VG column shows the volume group containing the LV, while the Devices column lists the PV(s) backing each LV. In this output, we can observe:

  • The root LV resides entirely on /dev/sda2
  • The home LV spans across two PVs – /dev/sdb1 and /dev/sdc1
  • The ‘w’ and ‘i’ in the Attr field indicate the volume is writeable with an inherited allocation policy
  • The ‘a’ and ‘o’ in the Attr field show the volume is active and open (in use)
  • The dashes (-) represent unset attribute positions

In short, the lvs command is great for getting a quick summary of where our LVs store their data.
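
If we're only interested in one LV, we can name it directly and trim the output to just the columns we need. Here's a quick check against the vg_sys/home volume from the example above (the output is illustrative):

$ lvs -o lv_name,vg_name,devices vg_sys/home
LV   VG     Devices
home vg_sys /dev/sdb1(0),/dev/sdc1(0)

The number in parentheses after each PV is the starting physical extent of that segment on the device.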

3.2. Using lvdisplay for Detailed Logical Volume Mapping

To get more detailed information about an LV, we use the lvdisplay command with the -m (--maps) option. This displays the logical extents (LEs) of the LV and the physical extents (PEs) they map to.

Let’s see a quick example:

$ lvdisplay -m
--- Logical volume ---
LV Path                /dev/vg_sys/home
LV Name                home
VG Name                vg_sys
LV Size                200.00 GiB

--- Segments ---
Logical extents 0 to 255:
  Type                linear
  Physical volume     /dev/sdb1
  Physical extents    0 to 255
Logical extents 256 to 511:
  Type                linear
  Physical volume     /dev/sdc1
  Physical extents    0 to 255

Each segment shows how a range of the LV's extents is distributed across the PVs, and the Physical extents field specifies the exact PE range on the PV that backs those LEs. In this example:

  • The first 256 LEs of the home LV are stored on /dev/sdb1.
  • The next 256 LEs are stored on /dev/sdc1.

This detailed view is helpful when dealing with LVs that span multiple PVs. These basic commands form the foundation for understanding LV-PV relationships. However, in more complex scenarios, we’ll need additional tools and techniques.
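
On systems with many volumes, we can also point lvdisplay at a single LV to avoid scrolling through all of them – for instance, the home LV from above:

$ lvdisplay -m /dev/vg_sys/home

This prints the segment mapping for just that one volume.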

4. Advanced Mapping Techniques

When dealing with complex LVM configurations, we need more sophisticated tools to understand the exact distribution of LVs across PVs.

4.1. Using pvdisplay for Physical Volume Mapping

The pvdisplay command with the -m option lets us inspect how the PEs of a PV are allocated to LVs:

$ pvdisplay -m
--- Physical volume ---
PV Name               /dev/sdb1
VG Name               vg_sys
PV Size               100.00 GiB
Allocatable           yes 
PE Size               4.00 MiB
Total PE              25500
Free PE               1000
Allocated PE          24500

--- Physical Segments ---
Physical extent 0 to 511:
  Logical volume      /dev/vg_sys/home
  Logical extents     0 to 511
Physical extent 512 to 1023:
  Logical volume      /dev/vg_sys/data
  Logical extents     0 to 511

Our output reveals crucial information: Free PE shows the unused extents on the PV, while the Physical Segments section details how PEs are allocated to specific LVs. In this example:

  • The first 512 PEs of /dev/sdb1 are allocated to the home LV.
  • The next 512 PEs are allocated to the data LV.

This is useful when troubleshooting PV utilization or planning volume migrations.
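
For the same PV-to-LV mapping in a compact, script-friendly form, the pvs command with the --segments option is a handy alternative – a sketch based on our /dev/sdb1 example (column availability may vary slightly between LVM versions):

$ pvs --segments -o pv_name,lv_name,pvseg_start,pvseg_size /dev/sdb1
PV        LV   Start SSize
/dev/sdb1 home     0   512
/dev/sdb1 data   512   512

Each row represents one physical segment, with its starting PE and its size in extents.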

4.2. Using lvs --segments for Logical Volume Segmentation

For a segment-wise breakdown of LVs, we can run the lvs command with the --segments option:

$ lvs --segments -o +devices
LV     VG     Attr       #Str Type   SSize   Devices            
data01 vg_test -wi-ao----    1 linear  300m   /dev/sdb1(0)      
data02 vg_test -wi-ao----    1 linear  500m   /dev/sdb1(75)

Let’s better understand our output:

  • #Str – shows the number of stripes for striped volumes
  • Type – indicates the allocation type (e.g., linear, striped, mirrored)
  • SSize – represents the segment size in the LV
  • Devices – displays the PVs and extent ranges allocated to the segment

In our example, data01 is a linear volume stored at the start of /dev/sdb1, while data02 occupies extents on the same PV beginning at PE 75.

This is particularly useful for analyzing striped or mirrored LVs.
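
Because lvs output is column-based, it also lends itself well to scripting. For instance, the --noheadings and --separator options produce easily parsable output – a small sketch using our volumes above:

$ lvs --segments -o lv_name,seg_size,devices --noheadings --separator ','
  data01,300.00m,/dev/sdb1(0)
  data02,500.00m,/dev/sdb1(75)

Piping this into awk or a monitoring script lets us track segment placement automatically.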

5. Mapping Examples for Complex Configurations

Striped LVs distribute data across multiple PVs to enhance I/O performance. Monitoring their stripe distribution is essential for both maintenance and performance optimization.

5.1. Identifying Striped Volumes

Let’s see how to identify striped volumes and their stripe configurations:

$ lvs -o name,segtype,stripes,stripe_size
LV   Type    #Str Stripe
data striped    2  64.00k
logs striped    3 128.00k

This command shows which LVs are striped, along with their stripe counts and stripe sizes. Here, we can see the data LV has 2 stripes, while logs has 3.
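
For context, a striped LV like data above could have been created with lvcreate's -i (stripe count) and -I (stripe size in KiB) options – a sketch, assuming vg_main contains at least two PVs with enough free space:

$ lvcreate -i 2 -I 64 -L 100g -n data vg_main

LVM places the stripes on different PVs automatically, or we can append specific PV names to the command to control placement.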

5.2. Examining Detailed Stripe Distribution

To further examine the detailed distribution of stripes in an LV, we use the lvdisplay command with the -m option:

$ lvdisplay -m /dev/vg_main/data
--- Segments ---
  Logical extents 0 to 511:
    Type                striped
    Stripes            2
    Stripe size        64.00 KiB
    Stripe 0:
      Physical volume  /dev/sdc1
      Physical extents 0 to 255
    Stripe 1:
      Physical volume  /dev/sdd1
      Physical extents 0 to 255

As we can see, Stripe size shows the size of the data chunks written across the stripes (64 KiB here), while Stripe 0 and Stripe 1 describe how each stripe maps to a PV, including the PE ranges used.

Notably, this information is vital when tuning storage performance or troubleshooting issues. Regular monitoring of stripe distribution ensures optimal performance and helps prevent potential I/O bottlenecks during peak usage periods.
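
For a quicker, repeatable check of the same mapping, the segment fields of lvs condense each segment to a single line – a sketch using our data LV:

$ lvs --segments -o lv_name,segtype,stripes,seg_pe_ranges vg_main/data
LV   Type    #Str PE Ranges
data striped    2 /dev/sdc1:0-255 /dev/sdd1:0-255

This is easier to scan, or to feed into a script, than the full lvdisplay output.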

6. Managing Mirrored Volumes

Mirrored LVs add complexity to PV mapping as they maintain synchronized copies across multiple PVs to provide redundancy.

Let’s analyze a mirrored LV configuration using the lvs command with specific options:

$ lvs -a -o +devices,copy_percent
LV              VG      Attr       Devices                            Cpy%Sync
root            vg_main rwi-a-r--- root_rimage_0(0),root_rimage_1(0)    100.00
[root_rimage_0] vg_main iwi-aor--- /dev/sda(0)
[root_rimage_1] vg_main iwi-aor--- /dev/sdb(0)
[root_rmeta_0]  vg_main ewi-aor--- /dev/sda(1024)
[root_rmeta_1]  vg_main ewi-aor--- /dev/sdb(1024)

As we can see, this reveals the mirror structure:

  • root – the top-level LV, whose Devices column points to its two mirror images
  • rimage_0/rimage_1 – the actual mirrored copies of the LV's data on /dev/sda and /dev/sdb
  • rmeta_0/rmeta_1 – the metadata subvolumes that keep the mirrored copies synchronized
  • Cpy%Sync – the synchronization progress of the mirror (100.00 means the copies are fully in sync)

Understanding this structure helps us identify the health of mirrored volumes, assess synchronization status, and locate specific segments.
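
For reference, a RAID1-mirrored LV like this one can be created with lvcreate's --type raid1 option – a sketch, assuming vg_main has two PVs with enough free extents:

$ lvcreate --type raid1 -m 1 -L 100g -n root vg_main

Here, -m 1 requests one additional mirror copy, so the data ends up on two PVs, and we can watch the initial synchronization via the Cpy%Sync column shown above.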

7. Moving Logical Volume Segments

When reorganizing storage or freeing up a specific PV, we may need to move LVs or individual segments between PVs. This requires careful planning and execution, so let's explore the process systematically.

7.1. Identifying Segment Locations

First, we need to identify segment locations and determine where an LV’s segments reside:

$ lvs --segments -o seg_pe_ranges /dev/vg_main/data
PE Ranges               
  /dev/sda1:0-511
  /dev/sdb1:0-511

The PE Ranges column shows the physical extent ranges backing each segment of the LV. In this example, the LV's first 512 extents map to PEs 0 to 511 on /dev/sda1, and the next 512 map to /dev/sdb1.
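
Before moving anything, it's worth confirming that the destination PV has enough free extents – a quick check, assuming /dev/sdc1 is our intended target (the output is illustrative):

$ pvs -o pv_name,vg_name,pv_size,pv_free
PV        VG      PSize   PFree
/dev/sda1 vg_main 100.00g      0
/dev/sdb1 vg_main 100.00g  48.00g
/dev/sdc1 vg_main 100.00g 100.00g

pvmove aborts if the target can't accommodate the extents we're relocating.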

7.2. Initiating the Migration

Once we’ve identified the segment locations, we can move them to a different PV with pvmove.

Let’s see how to migrate segments from /dev/sda1 to /dev/sdc1:

$ pvmove /dev/sda1:0-511 /dev/sdc1

By default, pvmove reports its progress as it runs. We can control how frequently it does so with the --interval option, for example every two seconds:

$ pvmove --interval 2 /dev/sda1:0-511 /dev/sdc1
/dev/sda1: 0.00%
/dev/sda1: 32.15%
/dev/sda1: 67.89%
/dev/sda1: 100.00%

The percentage indicator shows the progress of the data transfer. Once the migration completes, the extents that previously lived on /dev/sda1 reside entirely on /dev/sdc1, freeing /dev/sda1 for removal or reuse.
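
Alternatively, if /dev/sda1 hosts several LVs and we only want to relocate one of them, pvmove's -n option restricts the move to the extents of a single LV – a sketch, assuming an LV named data:

$ pvmove -n data /dev/sda1 /dev/sdc1

This moves only data's extents off /dev/sda1, leaving any other LVs on that PV untouched.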

8. Conclusion

In this article, we discussed commands and techniques for mapping logical volumes to their underlying physical volumes in LVM. Understanding how extents, stripes, and mirrors are distributed across PVs, and knowing how to relocate segments when needed, helps us maintain a robust and efficient storage environment.