lvm

Resolve partial lvm logical volume

  1. The server rebooted during an lvm snapshot and left the volume group in an inconsistent state.
    # vgs
      VG   #PV #LV #SN Attr   VSize   VFree
      vg0    1   3   0 wz--n- 229.66G 9.66G
      vg1    2   3   0 wz-pn- 279.46G 9.46G
    

    Note the "p" attribute on vg1, which indicates the group is in partial mode (a physical volume is missing).

  2. To resolve this, I first ran vgreduce to drop the missing physical volume, then restored the volume group metadata from a known-good archive:
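
A minimal sketch of that recovery, assuming the missing physical volume belongs to vg1 and a matching archive exists under /etc/lvm/archive/ (the archive filename below is illustrative):

    # vgreduce --removemissing vg1
    # vgcfgrestore --list vg1
    # vgcfgrestore -f /etc/lvm/archive/vg1_00005-1234567890.vg vg1
    # vgchange -ay vg1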

Backup and restore lvm data with dd

Recently I've had to back up and restore data from a failing drive with LVM over raid.

Luckily I had access to the backup of the current metadata configuration located in "/etc/lvm/backup/".

Below is what the volume group looked like:

vg0 {
        id = "xvni1W-24Xu-dVoR-PlXh-gQvQ-62fL-QX64O3"
        seqno = 9
        status = ["RESIZEABLE", "READ", "WRITE"]
        flags = []
        extent_size = 65536             # 32 Megabytes
        max_lv = 0
        max_pv = 0

        physical_volumes {

                pv0 {
                        id = "9gbyhX-Owvj-u4Q4-wR1E-IEf2-gyUA-CJBCJK"
                        device = "/dev/md3"     # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 1928892288   # 919.768 Gigabytes
                        pe_start = 384
                        pe_count = 29432        # 919.75 Gigabytes
                }
        }

        logical_volumes {

                lv0_sites {
                        id = "Sg1fYr-NTzr-8AA2-v29K-tcz5-rUMj-uRoXY1"
                        status = ["READ", "WRITE", "VISIBLE"]
                        flags = []
                        segment_count = 1

                        segment1 {
                                start_extent = 0
                                extent_count = 1280     # 40 Gigabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 0
                                ]
                        }
                }

                lv0_m {
                        id = "scNeN4-4bmg-Y6kq-zKuO-n8B8-s8mw-FTUYqk"
                        status = ["READ", "WRITE", "VISIBLE"]
                        flags = []
                        segment_count = 1

                        segment1 {
                                start_extent = 0
                                extent_count = 12800    # 400 Gigabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 1280
                                ]
                        }
                }
        }
}

Now, to extract the data with dd, use the formula below (this will only work for a linear stripe). Here extent_size and pe_start are in 512-byte sectors, and "stripes" is the starting extent offset taken from the stripes list:

skip=$[extent_size*stripes+pe_start] count=$[extent_size*(extent_count-1)]

So to get the lv0_m data off of the volume:

dd if=/dev/sdb4 of=/opt/bak/lv0_m.iso bs=512 skip=$[65536*1280+384] count=$[65536*(12800-1)] conv=sync,noerror

Once the image is created, it can then be loop mounted via:

mount -o loop -t ext3 /opt/bak/lv0_m.iso /mnt/lv0_m

You should then be able to see all the files under the mount point and use them for data restoration.

vzdump LVM snapshots kernel errors

While running daily lvm snapshot backups via vzdump on OpenVZ servers, I noticed the kernel errors below in logwatch reports.


WARNING:  Kernel Errors Present
    Buffer I/O error on device dm-4,  ...:  22 Time(s)
    EXT3-fs error (device dm-4): e ...:  60 Time(s)
    lost page write due to I/O error on dm-4 ...:  22 Time(s)

This would show up on busy servers only, probably because the lvm snapshot was running out of space.
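
If you want to confirm this, the snapshot's fill level can be watched with lvs while a backup is running; depending on the LVM release the usage is shown in a Snap% or Data% column:

# lvs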

I edited "/usr/bin/vzdump" and increased the snapshot size from 500m to 1000m, which seems to have resolved the issue for now.


run_command (\*LOG, "$lvcreate --size 1000m --snapshot --name vzsnap /dev/$lvmvg/$lvmlv");

Convert / root filesystem to use lvm over raid

I have outlined below the steps taken to convert a normal single-disk system to raid1 with lvm using an additional drive:

Note: /boot cannot be on lvm, and the only raid level it can reside on is a raid1 partition.

Don't forget to make the essential backups beforehand.

Convert / to raid/lvm

  1. fdisk /dev/sdb and clone sda's partition layout (set the partition types to "Linux raid autodetect")
  2. add /dev/sdb1 -- /boot - 13 blocks (100MB) - type raid
  3. add /dev/sdb2 -- rest of the disk - type raid
  4. Create raid partitions:
    partprobe
    mdadm -C /dev/md0 --auto=yes -l 1 -n 2 missing /dev/sdb1
    mdadm -C /dev/md1 --auto=yes -l 1 -n 2 missing /dev/sdb2
  5. Create logical volume and copy / partition and /dev files over:
    pvcreate /dev/md1
    vgcreate vg0 /dev/md1
    lvcreate -L 4G -n lv0 vg0
    mke2fs -j /dev/vg0/lv0
    mkdir /mnt/lv0
    mount /dev/vg0/lv0 /mnt/lv0
    find / -xdev | cpio -pvmd /mnt/lv0
    cp -aux /dev /mnt/lv0
  6. Edit /mnt/lv0/etc/fstab to reflect the new root
    /dev/vg0/lv0            /               ext3    defaults        1 1
  7. Chroot to new filesystem and create initrd with raid and lvm support
    mount --bind /dev /mnt/lv0/dev
    chroot /mnt/lv0
    mount -t proc /proc /proc
    mount -t sysfs /sys /sys
    vgscan
    vgchange -ay
    mkinitrd -v /boot/initrd-`uname -r`.lvm.img `uname -r`
    umount /sys
    umount /proc
    exit
    mv /mnt/lv0/boot/initrd-`uname -r`.lvm.img /boot
  8. Edit grub.conf to use the new initrd and point the kernel's root to /dev/vg0/lv0 (see the sample entry after this list)
  9. reboot
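
A sample grub.conf entry along those lines, assuming grub legacy and a 2.6.18-8.el5 kernel (the kernel version and hd0 mapping are illustrative, keep your own):

title Linux raid/lvm (2.6.18-8.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/vg0/lv0
        initrd /initrd-2.6.18-8.el5.lvm.img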

Convert /boot to raid1
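
A minimal sketch of this step, assuming /boot currently lives on /dev/sda1, the /dev/md0 array created above, and grub legacy (device names are illustrative):

    mkdir /mnt/md0
    mkfs.ext3 /dev/md0
    mount /dev/md0 /mnt/md0
    cp -a /boot/* /mnt/md0/
    umount /mnt/md0

Point /boot at /dev/md0 in /etc/fstab. Once the system is booted off the raid/lvm root, repartition sda to match sdb, add its partitions to the arrays, and reinstall grub on both disks so either drive can boot:

    mdadm /dev/md0 -a /dev/sda1
    mdadm /dev/md1 -a /dev/sda2
    grub-install /dev/sda
    grub-install /dev/sdb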

Reducing logical volume

I've had to reduce the logical volume allotted for mysql data from 8GB to 4GB, which was a breeze with e2fsadm, available for lvm1 on RHEL 3.

Stop the running services using the volume:

# service httpd stop
# service mysqld stop

e2fsadm will reduce the filesystem and then the logical volume.

# umount /mnt/lv-mysql
# e2fsadm -L -4G /dev/hdb2-vg00/lv-mysql
# mount /mnt/lv-mysql

Check with df, which should now show the new volume size:

$ df -h /mnt/lv-mysql
Filesystem              Size  Used Avail Use% Mounted on
/dev/hdb2-vg00/lv-mysql 4.0G  531M  3.3G  14% /mnt/lv-mysql
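
The reduced size can also be confirmed on the lvm side with lvs:

# lvs /dev/hdb2-vg00/lv-mysql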

Start the services using the volume again:

# service httpd start
# service mysqld start

Note: e2fsadm is not available in lvm2, so you will need to reduce in two steps:

1. Reduce the filesystem residing on the logical volume.
2. Reduce the logical volume.

# umount /dev/vg0/lv0
# e2fsck -f /dev/vg0/lv0
# resize2fs /dev/vg0/lv0 4G
# lvreduce -L -4G /dev/vg0/lv0

Convert root filesystem to LVM

I converted the root filesystem to lvm since the root partition was huge and I needed more flexibility in managing the partitions. Besides, lvm would also enable easy backups with lvm snapshots.

Prior to the conversion, I had a sizable 2GB swap partition onto which I transferred my root files, and then rebooted into it.

Please know what you are doing beforehand and make sure to create backups.
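
A minimal sketch of that swap-partition trick, assuming / is on /dev/sda1 and swap on /dev/sda2 (device names are illustrative):

# swapoff /dev/sda2
# mkfs.ext3 /dev/sda2
# mkdir /mnt/tmproot
# mount /dev/sda2 /mnt/tmproot
# find / -xdev | cpio -pvmd /mnt/tmproot

Then edit fstab and grub.conf on the copy to boot from /dev/sda2, reboot into it, and the old root partition is free to be carved up for raid/lvm.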

Extending LVM

Extend the logical volume by 1GB.

# lvresize -L +1G /dev/vg00/lvol0
# e2fsck -f /dev/vg00/lvol0
# resize2fs -pf /dev/vg00/lvol0


Notes:

  • resize2fs has replaced ext2online in FC6.
  • The volume needs to be unmounted prior to running resize2fs.

Extending Logical Volume

I was only using part of my external usb hard drive for backups and needed to extend the partition to accommodate the ever-growing backup files.

There are quite a few ways to do this, which are covered at FedoraNews.org and TLDP.org.

Below is a quick command line reference if you are familiar with the process already.

  1. I added an additional 10GB of space:
    # lvextend -L+10G /dev/vg00/lvol0
    
  2. Unmount the drive:
    # umount /mnt/usbdisk
    
  3. Check the logical volume:
    # e2fsck -f /dev/vg00/lvol0
    
  4. Increase the filesystem size to match:
    # resize2fs -pf /dev/vg00/lvol0
    
  5. Remount:
    # mount /mnt/usbdisk
    
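To verify, df should now show the additional 10GB:

# df -h /mnt/usbdisk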