Last updated: December 14, 2024
Managing the Logical Volume Manager (LVM) in Linux requires a good understanding of how its storage components interact: physical volumes (PVs), volume groups (VGs), and logical volumes (LVs). One of our common tasks as system administrators is identifying which PVs store specific LVs. This information becomes critical during maintenance, troubleshooting, or storage layout optimization.
In this tutorial, we’ll explore how to map LVs to their underlying PVs using various LVM commands. We’ll progress from basic commands to advanced mapping techniques, ensuring we can effectively track and manage our storage resources. Let’s get started!
Before we dive into specific commands, let’s recap the LVM hierarchy to establish a clear picture of how LVM organizes storage space.
LVM implements a flexible abstraction layer that transforms physical storage into manageable logical units, following a hierarchical structure:
Physical Disks → PVs → VGs → LVs → Filesystems
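For instance, here’s a minimal sketch of how this stack is typically built from the bottom up (the device, VG, and LV names are just examples):
$ pvcreate /dev/sdb1                  # initialize a partition as a PV
$ vgcreate vg_data /dev/sdb1          # group one or more PVs into a VG
$ lvcreate -L 10G -n lv_app vg_data   # carve a 10 GiB LV out of the VG
$ mkfs.ext4 /dev/vg_data/lv_app       # put a filesystem on the LV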
Notably, a crucial aspect of this structure is that LVs aren’t confined to a single PV. Instead, they can span multiple PVs within the same VG. This flexibility enables some powerful features:
- resizing LVs beyond the capacity of any single disk
- striping data across PVs for better I/O performance
- mirroring data across PVs for redundancy
- moving data between PVs while they remain online
However, this flexibility also means that tracking which PVs hold specific LVs becomes increasingly important, especially when performing maintenance or troubleshooting.
Let’s start with the most straightforward approach to identifying PV mappings.
The simplest way to see which PVs an LV uses is to run the lvs command with the -o +devices output option. This provides an overview of the LVs and their associated PVs:
$ lvs -o +devices
  LV   VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  root vg_sys -wi-ao---- 100.00g                                                     /dev/sda2(0)
  home vg_sys -wi-ao---- 200.00g                                                     /dev/sdb1(0),/dev/sdc1(0)
As we can see, the command outputs a concise list showing each LV and its corresponding PV mapping. The VG column shows the volume group containing the LV, while the Devices column lists the PV(s) backing each LV. In this output, we can observe:
- the root LV resides entirely on /dev/sda2
- the home LV spans two PVs, /dev/sdb1 and /dev/sdc1
In short, the lvs command is great for getting a quick summary of the storage backing our LVs.
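If we only care about a single LV, we can pass its VG/LV path and restrict the output to the fields we need. For example, with the home LV from above, the result would look something like this:
$ lvs -o lv_name,vg_name,devices vg_sys/home
  LV   VG     Devices
  home vg_sys /dev/sdb1(0),/dev/sdc1(0)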
To get more detailed information about an LV, we use the lvdisplay command with the -m option. This will display the logical extents (LEs) of the LV and their corresponding physical extents (PEs).
Let’s see a quick example:
$ lvdisplay -m
  --- Logical volume ---
  LV Path                /dev/vg_sys/home
  LV Name                home
  VG Name                vg_sys
  LV Size                200.00 GiB
  --- Segments ---
  Logical extents 0 to 255:
    Type                linear
    Physical volume     /dev/sdb1
    Physical extents    0 to 255
  Logical extents 256 to 511:
    Type                linear
    Physical volume     /dev/sdc1
    Physical extents    0 to 255
Each segment shows how a range of the LV’s extents is distributed across the PVs, and the Physical extents field specifies the exact PE range on that PV corresponding to the LEs of the LV. In this example:
- LEs 0 to 255 of home map to PEs 0 to 255 on /dev/sdb1
- LEs 256 to 511 map to PEs 0 to 255 on /dev/sdc1
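As a quick shell-level shortcut, we can also pull just the PV lines out of this segment map by piping lvdisplay through grep:
$ lvdisplay -m /dev/vg_sys/home | grep 'Physical volume'
  Physical volume     /dev/sdb1
  Physical volume     /dev/sdc1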
This detailed view is helpful when dealing with LVs that span multiple PVs. These basic commands form the foundation for understanding LV-PV relationships. However, in more complex scenarios, we’ll need additional tools and techniques.
When dealing with complex LVM configurations, we need more sophisticated tools to understand the exact distribution of LVs across PVs.
The pvdisplay command with the -m option lets us inspect the allocation of PEs on a PV to LVs:
$ pvdisplay -m
  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               vg_sys
  PV Size               100.00 GiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              25500
  Free PE               1000
  Allocated PE          24500
  --- Physical Segments ---
  Physical extent 0 to 511:
    Logical volume      /dev/vg_sys/home
    Logical extents     0 to 511
  Physical extent 512 to 1023:
    Logical volume      /dev/vg_sys/data
    Logical extents     0 to 511
Our output reveals crucial information: Free PE shows the number of unused extents on the PV, while the Physical Segments section details how the PEs are allocated to specific LVs. In this example:
- PEs 0 to 511 on /dev/sdb1 back LEs 0 to 511 of the home LV
- PEs 512 to 1023 back LEs 0 to 511 of the data LV
This is useful when troubleshooting PV utilization or planning volume migrations.
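To complement this, a quick per-PV utilization summary helps when deciding where data can go. One way to get it (the output below is illustrative, based on the PV above):
$ pvs -o pv_name,vg_name,pv_size,pv_free
  PV         VG     PSize   PFree
  /dev/sdb1  vg_sys 100.00g 3.91g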
For a further segment-wise breakdown of LVs, we can use the lvs command with the --segments option:
$ lvs --segments -o +devices
  LV     VG      Attr       #Str Type   SSize Devices
  data01 vg_test -wi-ao----    1 linear 300m  /dev/sdb1(0)
  data02 vg_test -wi-ao----    1 linear 500m  /dev/sdb1(75)
Let’s better understand our output:
- #Str shows the number of stripes (devices) in the segment
- Type shows the segment type (linear, striped, and so on)
- SSize shows the segment size
- Devices lists the backing PV, with the starting PE in parentheses
In our example, data01 is a linear volume stored on /dev/sdb1 starting at extent 0, while data02 occupies a separate range of extents on /dev/sdb1 starting at extent 75.
This is particularly useful for analyzing striped or mirrored LVs.
Striped LVs distribute data across multiple PVs to enhance I/O performance. Monitoring their stripe distribution is essential for both maintenance and performance optimization.
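For context, such a volume is typically created by specifying the stripe count and stripe size up front; a sketch, assuming a VG named vg_main with at least two PVs that have free space:
$ lvcreate -i 2 -I 64 -L 100G -n data vg_main   # 2 stripes, 64 KiB stripe size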
Let’s see how to identify striped volumes and their stripe configurations:
$ lvs -o name,segtype,stripes,stripe_size
  LV   SegType Stripes Stripe_size
  data striped       2 64K
  logs striped       3 128K
This command shows which LVs are striped, along with their stripe counts and stripe sizes. Here, we can see the data LV has 2 stripes, while logs has 3.
To further examine the detailed distribution of stripes in an LV, we use the lvdisplay command with the -m option:
$ lvdisplay -m /dev/vg_main/data
  --- Segments ---
  Logical extents 0 to 511:
    Type                striped
    Stripes             2
    Stripe size         64.00 KiB
    Stripe 0:
      Physical volume   /dev/sdc1
      Physical extents  0 to 255
    Stripe 1:
      Physical volume   /dev/sdd1
      Physical extents  0 to 255
As we can see, Stripe size shows the size of data chunks written across the stripes (64 KiB here). Then, Stripe 0 and Stripe 1 describe how each stripe maps to a PV, including the PE ranges used.
Notably, this information is vital when tuning storage performance or troubleshooting issues. Regular monitoring of stripe distribution ensures optimal performance and helps prevent potential I/O bottlenecks during peak usage periods.
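One practical way to spot-check that the stripes are actually spreading the load is to watch per-device I/O statistics on the disks backing the PVs, for instance with iostat from the sysstat package (device names taken from the example above):
$ iostat -dx sdc sdd 2
If striping is doing its job, both devices should show roughly similar throughput under load.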
Mirrored LVs add complexity to PV mapping as they maintain synchronized copies across multiple PVs to provide redundancy.
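For reference, a mirrored LV like the one analyzed below is typically created as a RAID1 volume; a sketch, assuming a VG named vg_main with two PVs available:
$ lvcreate --type raid1 -m 1 -L 50G -n root vg_main   # -m 1 keeps one extra copy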
Let’s analyze a mirrored LV configuration using the lvs command with specific options:
$ lvs -a -o +devices,copy_percent
  LV            VG      Attr       Devices   Copy%
  root          vg_main rwi-a-r--- sda(0)    100.00
  root_rimage_0 vg_main iwi-aor--- sda(0)    -
  root_rimage_1 vg_main iwi-aor--- sdb(0)    -
  root_rmeta_0  vg_main ewi-aor--- sda(1024) -
  root_rmeta_1  vg_main ewi-aor--- sdb(1024) -
As we can see, this reveals the mirror structure:
- root is the top-level mirrored (RAID1) LV
- root_rimage_0 and root_rimage_1 are the two data images, located on sda and sdb respectively
- root_rmeta_0 and root_rmeta_1 hold the RAID metadata for each image
- Copy% shows the synchronization status, with 100.00 indicating the copies are fully in sync
Understanding this structure helps us identify the health of mirrored volumes, assess synchronization status, and locate specific segments.
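To keep an eye on synchronization and health over time, we can also query the relevant reporting fields directly; a minimal sketch:
$ lvs -o lv_name,copy_percent,health_status vg_main/root
A fully synchronized, healthy mirror typically reports 100.00 for the copy percentage and an empty health status field.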
When reorganizing storage or freeing up a specific PV, we may need to move LVs or segments between PVs. Moving LV segments between PVs requires careful planning and execution. Let’s explore the process systematically.
First, we need to determine where the LV’s segments reside:
$ lvs --segments /dev/vg_main/data -o +seg_pe_ranges
PE Ranges
/dev/sda1:0-511
/dev/sdb1:0-511
The PE Ranges column shows where the LV’s extents reside. In this example, the first 512 extents of the LV are on /dev/sda1, and the next 512 are on /dev/sdb1.
Once we’ve identified the segment locations, we can move them to a different PV with pvmove.
Let’s see how to migrate segments from /dev/sda1 to /dev/sdc1:
$ pvmove /dev/sda1:0-511 /dev/sdc1
To track the migration’s progress, we can add the --interval option (in seconds) when issuing the command, which prints a progress report at regular intervals:
$ pvmove --interval 2 /dev/sda1:0-511 /dev/sdc1
/dev/sda1: 0.00%
/dev/sda1: 32.15%
/dev/sda1: 67.89%
/dev/sda1: 100.00%
The percentage indicator shows the progress of the data transfer. Once the migration completes, the segments that previously resided on /dev/sda1 live entirely on /dev/sdc1, while the LV’s segments on /dev/sdb1 remain where they were.
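Finally, we can verify the result by re-running the earlier segment query; assuming the move succeeded, the ranges previously shown on /dev/sda1 should now point at /dev/sdc1:
$ lvs --segments /dev/vg_main/data -o +seg_pe_ranges
  PE Ranges
  /dev/sdc1:0-511
  /dev/sdb1:0-511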
In this article, we discussed commands and techniques for mapping LVs to their underlying PVs, and for effectively monitoring, optimizing, and maintaining logical volumes in LVM. Understanding stripe and mirror distributions, relocating segments between PVs, and keeping an eye on PV utilization ensures a robust and efficient storage environment.