1. Overview

In this tutorial, we’ll look at the risks and problems associated with using Logical Volume Management (LVM) and examine its main shortcomings.

Apart from that, we’ll also discuss workarounds and prevention techniques where possible.

2. Introduction to LVM

LVM stands for Logical Volume Manager. It’s a way of managing physical disks so that storage volumes can be easily created, resized, and deleted.

Let’s suppose we have two physical disks of 1 TB each. With LVM, the capacity of these two disks is aggregated, so LVM considers the total storage capacity to be 2 TB. LVM treats these disks as physical volumes, which can be combined into volume groups.

A volume group is then sliced into one or more logical volumes, which LVM treats like regular (traditional) partitions.
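
For instance, here’s a minimal sketch of building this hierarchy from the two disks above; the device and volume names (/dev/sdb, /dev/sdc, data_vg, data_lv) are placeholders for our own setup:

$ pvcreate /dev/sdb /dev/sdc
$ vgcreate data_vg /dev/sdb /dev/sdc
$ lvcreate -n data_lv -L 1.5T data_vg
$ mkfs.ext4 /dev/data_vg/data_lv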

In the subsequent sections, we’ll look at the different issues that are associated with using LVM.

3. Write Caching Vulnerability

LVM is prone to write caching issues that increase the likelihood of losing data, and lost data can be very difficult to recover. However, there are specific measures we can take against this issue.

3.1. Write Barriers

Most modern file systems keep track of changes in a data structure known as a journal. In the event of a crash or power loss, the journal can be replayed to complete any pending writes, thereby reducing the chances of data corruption.

During this process, the write cache may re-order writes to maximize throughput. However, the file system must ensure that the actual file data is written before the corresponding metadata, so that the metadata isn’t out of sync with the data after a crash.

The issue is that many disks have caches of their own, which might also re-order the writes. Therefore, some file systems account for this possibility and flush the disk cache at critical points. This mechanism is known as a write barrier.

Write caching and write re-ordering improve the performance of a system. However, the blocks can fail to be flushed to the disk correctly. This can be caused by VM hypervisors, the hard drive’s built-in write caching, and old Linux kernels (2.6.32 and earlier).

This makes the combination of LVM and disk write caching dangerous.

The kernel can also use Force Unit Access (FUA) operations to write certain blocks directly to the disk without carrying out a full cache flush. Moreover, barriers can be combined with efficient command queuing to let the hard drive perform intelligent write re-ordering without increasing the risk of data loss.

3.2. Virtual Machine Hypervisors

Running LVM in a Linux guest on top of hypervisors such as VirtualBox, VMware, and KVM can create issues similar to running a kernel without write barriers, due to write caching and write re-ordering.

Therefore, we should check our hypervisor’s documentation for flush-to-disk or write-through cache options and test them against our setup.

Additionally, VirtualBox, by default, ignores disk flushes from the guest machine.
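
If we do want the guest’s flushes to reach the host disk, the VirtualBox documentation describes an IgnoreFlush extra-data key that can be set to 0. As a sketch, for a hypothetical VM named lvm-guest whose disk sits on the first IDE slot (the VM name and device path depend on the actual setup):

$ VBoxManage setextradata "lvm-guest" "VBoxInternal/Devices/piix3ide/0/LUN#0/Config/IgnoreFlush" 0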

3.3. Prevention: Battery-backed RAID Controller

For enterprise servers, we should always use a battery-backed RAID controller and disable the hard disk write caching.

Without a battery-backed RAID controller, we can still disable hard drive write caching. Admittedly, this reduces write performance, but it makes LVM safe.

3.4. Prevention: EXT3 and EXT4 File Systems

For EXT3 file systems, we can set the data=journal and barrier=1 options in /etc/fstab for safety. EXT4, on the other hand, enables write barriers by default.
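
As a sketch, an /etc/fstab entry for an EXT3 file system with these options might look like this, where the device and mount point are placeholders:

/dev/mapper/vg0-root  /  ext3  defaults,data=journal,barrier=1  0  1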

For an EXT2 file system, we can simply convert it safely to an EXT3 file system:

$ tune2fs -j /dev/sdx

An EXT3 file system is essentially an EXT2 file system with journaling support. If we want to roll back, we can simply convert it back to EXT2:

$ tune2fs -O ^has_journal /dev/sdx

3.5. Prevention: Disable ATA Write Caching

For hard drive write caching, we should first check whether it’s enabled on our SATA disk:

$ hdparm -i /dev/sda | grep -i writecache
 AdvancedPM=yes: unknown setting WriteCache=enabled

We should check whether we have hdparm installed beforehand. We can use the hdparm utility to disable the write caching on specific drives:

$ hdparm -q -W0 /dev/sda

We’ll need to execute this every time we boot into the system. To make our life easier, we can create a shell script:

#!/bin/sh

# Substitute with correct drive names
hdparm -q -W0 /dev/sdX
hdparm -q -W0 /dev/sdY

Afterward, we can create a systemd unit that executes this script at boot.
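
For example, assuming we saved the script as /usr/local/sbin/disable-write-cache.sh, a minimal unit file could look like this:

# /etc/systemd/system/disable-write-cache.service
[Unit]
Description=Disable hard drive write caching

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/disable-write-cache.sh

[Install]
WantedBy=multi-user.target

We can then enable it with systemctl enable disable-write-cache.service so it runs on every boot.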

3.6. Prevention: Disable SCSI Write Caching

We can also disable caching on SCSI-based disks using sdparm. First, let’s check whether write caching is enabled:

$ sdparm /dev/sdb | grep WCE
WCE 1 [cha: y, def: 1, sav: 0]

Here, a WCE value of 1 means write caching is enabled. If that’s the case, we simply disable it using the -c (--clear) option:

$ sdparm -c WCE /dev/sdb

Similarly, we can create a shell script for this to execute on boot.

3.7. Prevention: Kernel Upgrade

Earlier versions of the Linux kernel (2.6.32 and lower) have incomplete support for write barriers. Using these kernels with LVM can result in data loss after hard crashes.

Therefore, we can either upgrade the kernel or apply appropriate patches to remedy this shortcoming.

In conclusion, we must take care with the file system, RAID, VM hypervisor, kernel, and hard drive setup used with LVM. In addition, we should always keep backups of the LVM setup, including the LVM metadata, physical partition layout, MBR, and volume boot sectors.
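
For example, we can capture the LVM metadata with vgcfgbackup (which writes to /etc/lvm/backup by default) and the MBR with dd; the volume group name, disk, and output path below are placeholders:

$ vgcfgbackup data_vg
$ dd if=/dev/sda of=/root/sda-mbr.img bs=512 count=1

We should then copy these files to another machine, since backups stored on the same disks won’t help if the disks fail.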

4. Data Recovery

In the case of lost data, there are few suitable tools for data recovery. Our best chance is to try to recover the lost data manually.

Tools like testdisk and ext3grep lack support for LVM data recovery. However, we can use file carving tools like PhotoRec, since they re-assemble files from data blocks without relying on the file system structure. The downside is that this doesn’t work well with fragmented files.

LVM keeps basic metadata in /etc/lvm, which can assist in restoring the basic structure of Logical Volumes, Volume Groups, and Physical Volumes. However, it doesn’t include any file system metadata.
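
For instance, if a metadata backup exists, we can rebuild the volume group layout with vgcfgrestore; the backup path and volume group name are placeholders:

$ vgcfgrestore -f /etc/lvm/backup/data_vg data_vg

This only restores the LVM structure; the file system contents still have to be intact or recovered separately.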

5. Problems With Snapshots

LVM snapshots are slow and sometimes buggy. If a snapshot runs out of space, it’s dropped automatically.

A snapshot of a Logical Volume with significant write activity requires a lot of space, because every block changed on the original volume has to be copied into the snapshot. Therefore, it’s safer to allocate as much space for the snapshot as the original Logical Volume has.
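
For example, for a hypothetical 1 TB Logical Volume data_lv in the volume group data_vg, we could allocate an equally sized snapshot:

$ lvcreate -s -n data_snap -L 1T /dev/data_vg/data_lv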

In addition to this, snapshots are considerably slower (three to six times) because of the many synchronous writes involved.

To make working with snapshots easier, we can use the rsnapshot utility.

5.1. Alternatives to LVM Snapshots

If we’re running inside a VM, we can instead use the backup snapshots provided by the hypervisor. These snapshots are more reliable for backup purposes. However, we should freeze our file system beforehand.
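
For example, on a mounted file system we can suspend writes with fsfreeze, take the hypervisor snapshot, and then thaw it again; /mnt/data is a placeholder mount point:

$ fsfreeze -f /mnt/data
# take the VM snapshot here
$ fsfreeze -u /mnt/data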

On bare metal, we can consider file system-level snapshots with ZFS or Btrfs. They are easy to use and better than LVM snapshots.
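
For instance, both offer one-command snapshots; the pool, dataset, and subvolume names here are just placeholders:

$ zfs snapshot tank/data@before-upgrade
$ btrfs subvolume snapshot /data /data/.snapshots/before-upgrade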

6. Difficulty in Configurations

Given the above issues, we can see that it can be quite challenging to configure LVM correctly. There are many factors we need to take into account before using LVM on production and database servers.

An overall sane approach is to avoid LVM entirely and use ZFS or Btrfs. Both are very mature at this point, and we can replace our disk setup with these file systems instead of sticking with LVM.

7. Conclusion

In this article, we discussed the various caveats and issues that can arise when using LVM. Besides that, we also covered several prevention measures and alternatives to remedy these issues.
