Setting up a software RAID1

12.07.2017 by Dirk Olmes

I’m helping a friend set up a machine with Gentoo. The machine will host production applications and is equipped with two identical hard drives, so I will set up a software RAID 1. This has been a life saver on a similar machine before: one of the drives failed, and we had no data loss and only minimal downtime while the drive was replaced.

One goal of the new setup is to remain bootable even if one of the drives fails. I had trouble accomplishing this in earlier setups, so this time I tested the process locally on a virtual machine before setting up the real iron.

The first step of the setup is partitioning the drives. The handbook suggests adding a small partition at the beginning of the drive to enable booting from a GPT-partitioned drive. The /boot partition will be formatted with ext4 so that it remains bootable even if one of the drives is missing. The rest of the disk will be formatted with xfs. To recap the layout:

Number  Start    End      Size     File system  Name    Flags
 1      1.00MiB  3.00MiB  2.00MiB               grub    bios_grub
 2      3.00MiB  95.0MiB  92.0MiB               boot
 3      95.0MiB  8191MiB  8096MiB               rootfs  raid

The second drive is partitioned exactly the same.
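
For reference, the parted session to create this layout looks roughly like the following sketch (modeled on the handbook’s instructions; the sizes are the ones from my test VM, adjust them for the real drives):

parted -a optimal /dev/sdb
(parted) mklabel gpt
(parted) unit mib
(parted) mkpart primary 1 3
(parted) name 1 grub
(parted) set 1 bios_grub on
(parted) mkpart primary 3 95
(parted) name 2 boot
(parted) mkpart primary 95 8191
(parted) name 3 rootfs
(parted) set 3 raid on
(parted) quit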

Now let’s create a RAID 1 for the boot partition:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2

and for the rootfs:

mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb3 /dev/sdc3

To keep the RAID device numbering stable across reboots, the RAID configuration has to be saved:

mdadm --detail --scan >> /etc/mdadm.conf
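
While the newly created arrays are doing their initial sync you can watch the progress in /proc/mdstat:

cat /proc/mdstat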

Then create an ext4 filesystem on /dev/md0 and an xfs filesystem on /dev/md1. Nothing noteworthy here.
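
For completeness, that boils down to:

mkfs.ext4 /dev/md0
mkfs.xfs /dev/md1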

The observant reader will have noticed from the device names above that I’m testing my installation from a running system on /dev/sda. To save the hassle of going through the entire stage3 setup process I’m simply copying the running system to the newly created RAID filesystems.
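
Roughly, the copy looks like this (mounting the new filesystems under /mnt/gentoo is just my choice, and the exclude list is a sketch, adjust it to your system):

mkdir -p /mnt/gentoo
mount /dev/md1 /mnt/gentoo
mkdir /mnt/gentoo/boot
mount /dev/md0 /mnt/gentoo/boot
rsync -aHAX --exclude='/dev/*' --exclude='/proc/*' --exclude='/sys/*' --exclude='/run/*' --exclude='/tmp/*' --exclude='/mnt/*' / /mnt/gentoo/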

After chrooting into the new system, a few changes have to be made to the genkernel config in order to produce a RAID-enabled initramfs. In /etc/genkernel.conf set

MDADM="yes"
MDADM_CONFIG="/etc/mdadm.conf"

Now we’re set to build the kernel.
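
With genkernel that is simply:

genkernel all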

While it’s compiling, edit /etc/default/grub (I’m of course using grub2 for booting) and add

GRUB_CMDLINE_LINUX="domdadm"

Set up GRUB on both devices individually:

grub-install /dev/sdb
grub-install /dev/sdc

After the kernel has finished compiling, generate the proper grub config:

grub-mkconfig -o /boot/grub/grub.cfg

Before rebooting, the fstab has to be set up correctly. I prefer to use UUIDs for filesystems instead of device names; this is a bit more fault tolerant in case the device numbering gets mixed up. The main challenge was finding an unambiguous UUID for the two RAID filesystems. There are a number of places to get a UUID from: mdadm --detail, blkid, tune2fs -l, xfs_admin -u, and probably more I forgot. The helpful guys on the Gentoo IRC channel pointed me in the right direction: use lsblk -f /dev/md0 to find a UUID that uniquely identifies the RAID and verify it using findfs UUID=<uuid>.
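
The resulting fstab entries then look roughly like this (the UUIDs below are placeholders for the values lsblk reports on your arrays):

UUID=<uuid-of-md0>   /boot   ext4   noauto,noatime   1 2
UUID=<uuid-of-md1>   /       xfs    noatime          0 1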

After updating the fstab the system is ready for a first boot into the RAID setup.

I tested failing drives by simply removing the first (or the second) drive from the virtual machine. The whole setup still boots off either drive.
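
After such a boot it’s worth confirming that the arrays are really running in degraded mode, e.g. with:

cat /proc/mdstat
mdadm --detail /dev/md1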

