1. Overview

RAID stands for Redundant Array of Inexpensive/Independent Disks.

We build our storage with redundancy, the duplication of critical components, so that no single part failing can bring down our whole system. Because data reads and writes are spread out over more than one disk, RAID can also provide performance benefits.

Modern filesystems like ZFS and btrfs have built-in RAID functionality. It’s also important to remember what RAID is not: it’s not a backup. For example, if our database gets wiped or corrupted, a mirrored RAID gives us two copies of our blank or broken database. A separate backup gives us a recovery option.

In this tutorial, we’ll explore ways to use RAID in Linux.

2. Types of RAID

RAID can be implemented with a dedicated hardware controller or entirely in software. Software RAID is more common today.

We refer to different kinds of RAID via a standard numbering system of “RAID levels”. The numbers do not refer to how many disks are used.

RAID’s biggest advantage comes from the replication of data. Our data exists in more than one place on our RAID system, so we can avoid downtime during hardware failure. The replication may be via mirroring (keeping duplicate copies of everything) or parity (checksum calculations of our data).

2.1. Hardware vs. Software

In this guide, we’ll explore the RAID options built into Linux via software. Hardware RAID is beyond the scope of this article; just be aware that it is only useful on Linux in special cases, and we may need to turn it off in our computer’s BIOS.

2.2. Striped And/or Mirrored (RAID 0, 1, or 10)

RAID level 0 has an appropriate number: it has zero redundancy!

RAID Level 0

Instead, in RAID 0, data is written across the drives, or “striped”. This means it can potentially be read from more than one drive concurrently. That can give us a real performance boost.

But at the same time, we now have two drives, either of which could fail and take out all our data. So, RAID 0 is only useful when we want a performance boost and can afford to lose whatever is stored on it.

RAID Level 1

We refer to RAID level 1 as “mirrored” because it keeps identical copies of our data on a pair of equal drives. Each time data is written to a RAID 1 device, it goes to both drives in the pair.

Write performance is thus slightly slower, but read performance can be much faster as data is concurrently read from both disks.

RAID Level 10

These two levels of RAID can be combined or nested, creating what’s called RAID 1+0 or just RAID 10. (There are other permutations, but RAID 10 is the most common.)

We can create a RAID 10 device with four disks: two mirrored pairs (RAID 1), with data striped across the pairs (RAID 0).

This RAID of RAIDs attempts to combine RAID 0’s performance with RAID 1’s redundancy, to be both speedy and reliable.

2.3. Parity (RAID 5 or RAID 6)

Instead of storing complete copies of our data, we can save space by storing parity data. Parity allows our RAIDs to reconstruct data stored on failed drives.

RAID Level 5

RAID 5 requires at least three equal-size drives to function. In practice, we can add several more, though rarely more than ten are used.

RAID 5 sets aside one drive’s worth of space for checksum parity data. It is not all kept on one drive, however; instead, the parity data is striped across all of the devices along with the filesystem data.

This means we usually want to build our RAID out of a set of drives of identical size and speed. Adding a larger drive won’t get us more space, as the RAID will just use the size of the smallest member. Similarly, the RAID’s performance will be limited by its slowest member.

RAID 5 can recover and rebuild with no data loss if one drive dies. If two or more drives crash, we’ll have to restore the whole thing from backups.

RAID 6 is similar to RAID 5 but sets aside two drives’ worth of space for parity data. That means a RAID 6 can recover from two failed members.

RAID 5 gives us more usable storage than mirroring does, but at the price of some performance. A quick way to estimate usable space is the total capacity of our equal-sized drives, minus one drive. For example, if we have 6 drives of 1 terabyte each, our RAID 5 will have 5 terabytes of usable space. That’s 83%, compared to 50% if our drives were mirrored in RAID 1.
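
In general, for N equal drives of size S:

RAID 5 usable space = (N - 1) × S
RAID 6 usable space = (N - 2) × S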

At one time, server manufacturers considered RAID 5 the best practice in storage. It has fallen out of favor to some degree due to the so-called “RAID 5 write hole”, a problem addressed by next-generation filesystems and RAIDZ.

3. Linux Kernel RAID (mdraid)

Let’s create some new RAIDs with the mdadm tool.

3.1. Your Basic RAID

We’ll start with two identical disks or partitions, and create a striped RAID 0 device.

First, let’s make sure we have the correct partitions. We don’t want to destroy something important:

# lsblk -o NAME,SIZE,TYPE
NAME      SIZE TYPE
sdb     931.5G disk
└─sdb1      4G part
sdc     931.5G disk
└─sdc1      4G part

We’ll use the mdadm command (multiple device administration):

# mdadm --verbose --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: chunk size defaults to 512K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

Our first RAID device has been created! Let’s break down the options we use with mdadm:

  • --verbose tells us more about what is happening.
  • --create tells mdadm to create a new RAID device, naming it whatever we want (in this case, md0).
  • --level=0 is our RAID level, as discussed above. Level 0 is just striped, with no redundancy.
  • --raid-devices=2 lets mdadm know to expect two physical disks for this array.
  • /dev/sdb1 and /dev/sdc1 are the two partitions included in our array of independent disks.

So our RAID of partitions has been created, but like any device, it does not yet have a filesystem and it hasn’t been mounted.

We can look at it again with lsblk:

# lsblk -o NAME,SIZE,TYPE
NAME      SIZE TYPE
sdb     931.5G disk
└─sdb1      4G part
  └─md0     8G raid0
sdc     931.5G disk
└─sdc1      4G part
  └─md0     8G raid0

Notice how the md0 device is the size of the two partitions added together, as we’d expect from RAID 0.

3.2. Managing Our RAID

We also find useful information in /proc/mdstat:

# cat /proc/mdstat 
Personalities : [raid0] 
md0 : active raid0 sdc1[1] sdb1[0]
      1952448512 blocks super 1.2 512k chunks
      
unused devices: <none>

To use this new RAID, we need to format it with a filesystem and mount it:

# mkfs /dev/md0 
mke2fs 1.46.2 (28-Feb-2021)
Creating filesystem with 2094592 4k blocks and 524288 inodes
Filesystem UUID: 947484b6-05ff-4d34-a0ed-49ee7c5eebd5
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done                            
Writing inode tables: done                            
Writing superblocks and filesystem accounting information: done 

# mount /dev/md0 /mnt/myraid/
# df -h /mnt/myraid
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0        7.8G   24K  7.4G   1% /mnt/myraid

As with any filesystem other than ZFS, we would add a line to /etc/fstab to make this mount point permanent.
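
For example, using the filesystem UUID that mkfs reported above, the entry might look like this (the plain mkfs command creates an ext2 filesystem by default, so we’d adjust the type field if we chose another):

UUID=947484b6-05ff-4d34-a0ed-49ee7c5eebd5  /mnt/myraid  ext2  defaults  0  2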

If we want to boot from our RAID device (and we may not, to keep things simple), or otherwise allow mdadm to manage the array during startup or shutdown, we can append our array’s info to an optional /etc/mdadm/mdadm.conf file:

# mdadm --detail --scan
ARRAY /dev/md1 metadata=1.2 spares=1 name=salvage:1 UUID=0c32834c:e5491814:94a4aa96:32d87024
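
On a Debian-based system, for instance, we might append that output to the config file and rebuild the initramfs so the array gets assembled at boot (some distributions keep the file at /etc/mdadm.conf instead):

# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# update-initramfs -u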

And if we want to take down our RAID, we can use mdadm again:

# mdadm -S /dev/md0
mdadm: stopped /dev/md0

We can create similar RAIDs with variations of the --level and --raid-devices options.

For example, we could create a 5-disk RAID 5:

# mdadm --verbose --create /dev/md1 --level=5 --raid-devices=5 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 4189184K
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.

Then, we can mkfs and mount our latest RAID.
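
Likewise, a nested RAID 10 only needs different options. Here’s a sketch, assuming four spare, equal-size partitions (the device names below are placeholders):

# mdadm --verbose --create /dev/md2 --level=10 --raid-devices=4 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1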

3.3. Failed Drives and Hot Spares

What would happen to our new RAID 5 if one of the drives failed? Let’s simulate that event with mdadm:

# mdadm /dev/md1 -f /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md1

Now, what does /proc/mdstat tell us? Let’s take a look:

# cat /proc/mdstat 
Personalities : [raid0] [raid6] [raid5] [raid4] 
md1 : active raid5 sdf1[5] sde1[3] sdd1[2] sdb1[1] sdc1[0](F)
      16756736 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [_UUUU]
      
unused devices: <none>

Here, we see the partition we selected, marked (F) for failed.

We can also ask mdadm for more details of our array:

# mdadm --detail /dev/md1 
/dev/md1:
           Version : 1.2
     Creation Time : Tue Aug 10 14:52:59 2021
        Raid Level : raid5
        Array Size : 16756736 (15.98 GiB 17.16 GB)
     Used Dev Size : 4189184 (4.00 GiB 4.29 GB)
      Raid Devices : 5
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Tue Aug 10 14:59:20 2021
             State : clean, degraded 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 1
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : salvage:1  (local to host salvage)
              UUID : 0c32834c:e5491814:94a4aa96:32d87024
            Events : 24

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       34        1      active sync   /dev/sdb1
       2       8       35        2      active sync   /dev/sdd1
       3       8       36        3      active sync   /dev/sde1
       5       8       37        4      active sync   /dev/sdf1

       0       8       33        -      faulty   /dev/sdc1

Our RAID is still going strong. A user should not be able to tell any difference. But we can see it’s in a “degraded” state, so we need to replace that faulty hard drive.

Let’s say we have a replacement for our dead drive. It should be identical to the originals.

We can remove our faulty drive and add a new one. We should remember that the /dev/sd* list of devices will sometimes change if the hardware changes, so double-check with lsblk.

First, we remove our faulty drive from the array:

# mdadm /dev/md1 --remove /dev/sdc1
mdadm: hot removed /dev/sdc1 from /dev/md1

Next, we physically replace our drive and add the new one. (This is where hot-swappable drive hardware saves us a lot of time!)

Once we’ve added the replacement partition back into the array, we can look at /proc/mdstat to watch the RAID automatically rebuild:

# mdadm /dev/md1 --add /dev/sdc1
mdadm: added /dev/sdc1
# cat /proc/mdstat 
Personalities : [raid0] [raid6] [raid5] [raid4] 
md1 : active raid5 sdc1[6] sdf1[5] sde1[3] sdd1[2] sdb1[1]
      16756736 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [_UUUU]
      [==>..................]  recovery = 10.7% (452572/4189184) finish=3.4min speed=18102K/sec

If uptime is really important, we can add a dedicated spare drive that mdadm can automatically switch over to:

# mdadm /dev/md1 --add-spare /dev/sdg1
mdadm: added /dev/sdg1

It might be worth it: we can weigh the cost of a drive that sits idle against the risk of running degraded while we wait for a replacement.

Let’s check on our array again:

# mdadm --detail /dev/md1 | grep spare
       7       8       38        -      spare   /dev/sdg1

Five disks are striped with data and parity. One disk is unused, just waiting to be needed.

4. The Logical Volume Manager

Most modern Linux filesystems are no longer created directly on a drive or a partition, but on a logical volume created with the LVM.

Briefly, LVM combines Physical Volumes (drives or partitions) into Volume Groups. Volume Groups are pools from which we can create Logical Volumes. We can put filesystems onto these Logical Volumes.

RAID comes into play when we create Logical Volumes. These may be linear, striped, mirrored, or a more complex parity configuration.

We should note that an LVM RAID Logical Volume uses the Linux kernel RAID (mdraid) code under the hood. So if we want the convenience of LVM, such as expanding Volume Groups and resizing Logical Volumes, we can have it along with the reliability of plain mdraid.

But if LVM sounds like too much added complexity, we can always stick with mdraid on our physical drives.

Yet another common option is creating our RAID devices with mdadm and then using them as PVs with LVM.
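
For instance, once the /dev/md0 device from section 3 exists, handing it to LVM might look like this sketch (the Volume Group name here is made up):

# pvcreate /dev/md0
# vgcreate md0vg0 /dev/md0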

4.1. Telling LVM to Use Our Volumes in a RAID

LVM RAIDs are created at the logical volume level.

That means we first need to have created partitions, used pvcreate to tag them as LVM Physical Volumes, and used vgcreate to put them into a Volume Group. In this example, we’ve called the volume group raid1vg0.
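
As a quick sketch of that preparation, assuming two fresh, equal-size partitions (the partition names are placeholders):

# pvcreate /dev/sdx1 /dev/sdy1
# vgcreate raid1vg0 /dev/sdx1 /dev/sdy1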

The RAID creation step specifies the type of RAID and how many disks to use for mirroring or striping. We don’t need to specify each physical volume. We can let LVM handle all of that:

# lvcreate --mirrors 1 --type raid1 -l 100%FREE -n raid01v0 raid1vg0
  Logical volume "raid01v0" created.
# mkfs.ext4 /dev/raid1vg0/raid01v0

As usual, we then format and mount our new RAID volume. If we want a system that handles all of that automatically, we have ZFS.

5. Integrated Filesystem RAID with ZFS or btrfs

We won’t cover the details of next-generation filesystems in this article, but many of the concepts from software RAID and LVM translate over.

ZFS uses “vdevs”, virtual devices, much as LVM uses Volume Groups. These vdevs may be physical disks, mirrors, raidz variants (ZFS’s take on RAID 5), or as of OpenZFS 2.1, draid.

For example, we can create a RAID 1 mirror zpool:

# zpool create -f demo mirror /dev/sdc /dev/sdd

ZFS handles everything else for us, formatting and mounting our new storage pool under /demo.
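
We can check the pool’s layout and health at any time; the output lists the mirror vdev and both member disks:

# zpool status demo

And if we want ZFS’s take on RAID 5, a raidz pool is just as easy to create. Here’s a sketch, assuming three spare disks (the pool name and devices are placeholders):

# zpool create -f demoz raidz /dev/sdx /dev/sdy /dev/sdz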

The equivalent of our RAID 1 mirror in btrfs is:

# mkfs.btrfs -L demo -d raid1 /dev/sdc /dev/sdd
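
Unlike ZFS, btrfs leaves the mounting to us. We can mount any one of the member devices, and the whole multi-device filesystem comes up (the mount point here is just an example):

# mkdir -p /mnt/demo
# mount /dev/sdc /mnt/demo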

One major limitation of btrfs is that it does not reliably support RAID 5 or RAID 6. So, we’ll keep those modes far away from production systems.

These next-generation filesystems take care of many of the details of RAID and volume management. In addition, they provide much greater data integrity through block-level checksums.

Although they are a whole other topic, we may solve more of our storage problems by investigating ZFS or btrfs.

6. Conclusion

We use RAID for reliability and to limit downtime.

In this article, we’ve looked at the building blocks of Linux software RAID (md). We’ve also considered some more complex and advanced additions.

There are more details to consider in the day-to-day monitoring and maintenance of our RAID, but this gets us started.
