Convert / root filesystem to use lvm over raid

I have outlined below the steps taken to convert a normal single-disk system to raid1 using an additional drive:

Note: /boot cannot be on lvm, and the only raid level it can use is raid1.

Don't forget to make the essential backups beforehand.

Convert / to raid/lvm

  1. fdisk /dev/sdb and clone the partition layout of sda (set the partition type to "Linux raid autodetect")
  2. add /dev/sdb1 -- /boot - 13 blocks (100 MB) - type raid
  3. add /dev/sdb2 -- rest of the disk - type raid
  4. Create raid partitions:
    mdadm -C /dev/md0 --auto=yes -l 1 -n 2 missing /dev/sdb1
    mdadm -C /dev/md1 --auto=yes -l 1 -n 2 missing /dev/sdb2
  5. Create logical volume and copy / partition and /dev files over:
    pvcreate /dev/md1
    vgcreate vg0 /dev/md1
    lvcreate -L 4G -n lv0 vg0
    mke2fs -j /dev/vg0/lv0
    mkdir /mnt/lv0
    mount /dev/vg0/lv0 /mnt/lv0
    find / -xdev | cpio -pvmd /mnt/lv0
    cp -aux /dev /mnt/lv0
  6. Edit /mnt/lv0/etc/fstab to reflect the new root
    /dev/vg0/lv0    /    ext3    defaults        1 1
  7. Chroot to new filesystem and create initrd with raid and lvm support
    mount --bind /dev /mnt/lv0/dev
    chroot /mnt/lv0
    mount -t proc proc /proc
    mount -t sysfs sysfs /sys
    vgchange -ay
    mkinitrd -v /boot/initrd-`uname -r`.lvm.img `uname -r`
    umount /sys
    umount /proc
    exit
    mv /mnt/lv0/boot/initrd-`uname -r`.lvm.img /boot
  8. Edit grub.conf so the kernel line uses root=/dev/vg0/lv0 and the new initrd image
  9. reboot

Convert /boot to raid1

  1. Upon a successful reboot into the new lvm root, continue below
  2. Create a filesystem and copy the /boot partition files over:
    mke2fs -j /dev/md0
    mkdir /mnt/md0
    mount /dev/md0 /mnt/md0
    cp -aux /boot/* /mnt/md0
    umount /mnt/md0
    umount /boot
  3. fdisk /dev/sda and change the type for /dev/sda1 to raid autodetect
  4. Add /dev/sda1 to /dev/md0 raid
    mdadm /dev/md0 -a /dev/sda1
  5. Edit /etc/fstab to reflect new /boot
    /dev/md0 /boot ext3 defaults 1 2
  6. Mount /boot
    mount /boot
  7. Failover setup for /boot: install grub into the MBR of both disks
    # grub
    grub> root (hd1,0)
    grub> setup (hd1)
    grub> root (hd0,0)
    grub> setup (hd0)
    grub> find /boot/grub/stage1    (to confirm both disks are listed)
    grub> quit
  8. Edit /etc/grub.conf
    title CentOS-LVM-HD0 (2.6.18-92.1.6.el5)
            root (hd0,0)
            kernel /vmlinuz-2.6.18-92.1.6.el5 ro root=/dev/vg0/lv0 console=tty0 console=ttyS1,19200n8
            initrd /initrd-2.6.18-92.1.6.el5.lvm.img
    title CentOS-LVM-HD1 (2.6.18-92.1.6.el5)
            root (hd1,0)
            kernel /vmlinuz-2.6.18-92.1.6.el5 ro root=/dev/vg0/lv0 console=tty0 console=ttyS1,19200n8
            initrd /initrd-2.6.18-92.1.6.el5.lvm.img
  9. Reboot
    shutdown -r now
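Before relying on the failover setup, it may be worth confirming that both mirrors are in sync and that grub really is on both MBRs. A sketch of sanity checks, assuming the device names used above (output will vary per system):

```shell
# Sanity checks after the /boot conversion (assumes /dev/md0, sda, sdb).
cat /proc/mdstat                      # the md0 line should end in [UU] once synced
mdadm --detail /dev/md0               # State should read "clean" with 2 active devices
# Look for the GRUB stage1 signature string in each disk's MBR:
dd if=/dev/sda bs=512 count=1 2>/dev/null | strings | grep -i grub
dd if=/dev/sdb bs=512 count=1 2>/dev/null | strings | grep -i grub
```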

Create swap on raid1 and add sda2 to raid1:

  1. Create logical volume swap.
    swapoff -a
    lvcreate -L 4G -n lv-swap vg0
    mkswap -L SWAP-md1  /dev/vg0/lv-swap
  2. Edit /etc/fstab with the new swap and turn swap on:
    LABEL=SWAP-md1   swap   swap    pri=0,defaults        0 0
    swapon -a
  3. fdisk /dev/sda and change the type of /dev/sda2 to raid autodetect, then add it to the /dev/md1 raid:
    mdadm /dev/md1 -a /dev/sda2
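Adding sda2 kicks off a resync of /dev/md1, which can take a while on large disks. A small sketch for watching the rebuild and confirming the new swap, assuming the names used above:

```shell
# Monitor the md1 rebuild and verify the new swap.
watch -n 5 cat /proc/mdstat     # shows a "recovery = ..%" line until the mirror is rebuilt
swapon -s                       # the lvm swap device should be listed
free -m                         # total swap should match the 4G lv-swap
```

It is safe to keep using the system during the resync; the kernel throttles rebuild I/O in favor of normal workload.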


grub on raid1 with metadata 0.90

On CentOS-6.0, I had to set the raid metadata to the old 0.90 version for grub-install to work.

mdadm -C /dev/md0 --auto=yes --metadata=0.90 -l 1 -n 2 /dev/sda1 /dev/sdb1

Also, check "/etc/mdadm.conf" to make sure that the raid device is auto-detected at the next boot. The "AUTO" line should include "+0.90" before "-all".

AUTO +imsm +1.x +0.90 -all
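If /etc/mdadm.conf needs regenerating, a hedged sketch of one way to do it (this overwrites the file, so back up the old one first; the ARRAY lines come from mdadm --detail --scan):

```shell
# Rebuild /etc/mdadm.conf with an AUTO line that allows 0.90-metadata
# arrays to be assembled at boot.
[ -f /etc/mdadm.conf ] && cp /etc/mdadm.conf /etc/mdadm.conf.bak
echo "AUTO +imsm +1.x +0.90 -all" > /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf   # appends ARRAY lines for md0/md1
cat /etc/mdadm.conf
```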

Then use either the label or the UUID of the device in the "/etc/fstab" mounts file.

You can label via:

e2label /dev/md0 /boot

Find the UUID via:

blkid /dev/md0
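Putting the label or UUID to use in /etc/fstab might look like the sketch below (the blkid -s UUID -o value invocation prints just the UUID string; the appended line is an example, not a drop-in):

```shell
# Mount /boot by UUID instead of device name.
UUID=$(blkid -s UUID -o value /dev/md0)
echo "UUID=${UUID}  /boot  ext3  defaults  1 2" >> /etc/fstab
# or, by label (after running e2label /dev/md0 /boot):
# LABEL=/boot  /boot  ext3  defaults  1 2
```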


This was very helpful. I did have one problem though. After rebooting I could not log back in on the new raid partition. Login just quickly returned with failure. ssh'd to the host and saw /bin/bash: permission denied. Googled that and found many references to file system permissions. I couldn't see any problems when I booted back to my non-raid setup and mounted the raid one. I did a clean mkfs on the raid side, and did:

cd /mnt/lv0
dump -0 -f - / | restore -r -f -
cd /
cp -aux /dev /mnt/lv0/

Rebooted and that worked. The cpio should have worked. Maybe I messed it up somewhere else. At any rate, thanks for the instructions.


Thanks too

I had the exact same problem with cpio on selinux enabled Fedora 11

Thanks for the dump/restore hint!

And thanks for this howto. It was so helpful!


Thank you for the step-by-step instructions; they worked for me. This area is not covered well in the documentation for either RAID or LVM.

cheers, Martin