Resolve partial lvm logical volume
Wed, 03/19/2014 - 10:51 — sandip
The server rebooted during an LVM snapshot, leaving the volume group in an inconsistent state.
# vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  vg0    1   3   0 wz--n- 229.66G 9.66G
  vg1    2   3   0 wz-pn- 279.46G 9.46G
Note the "p" attribute in vg1 which put the group in partial mode.
To resolve this, I first ran vgreduce to drop the missing physical volume and then re-created the volume group metadata from a known-good archive.
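Roughly, that sequence looks like the following sketch (the archive filename and volume group name here are illustrative, not the actual values from this incident):
# vgreduce --removemissing vg1
# vgcfgrestore -f /etc/lvm/archive/vg1_00001.vg vg1
# vgchange -ay vg1
# vgs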
Backup and restore lvm data with dd
Sun, 03/13/2011 - 01:18 — sandip
Recently I had to back up and restore data from a failing drive with LVM over RAID.
Luckily I had access to the backup of the current metadata configuration located in "/etc/lvm/backup/".
Below is what the volume group looked like:
vg0 {
    id = "xvni1W-24Xu-dVoR-PlXh-gQvQ-62fL-QX64O3"
    seqno = 9
    status = ["RESIZEABLE", "READ", "WRITE"]
    flags = []
    extent_size = 65536        # 32 Megabytes
    max_lv = 0
    max_pv = 0

    physical_volumes {
        pv0 {
            id = "9gbyhX-Owvj-u4Q4-wR1E-IEf2-gyUA-CJBCJK"
            device = "/dev/md3"    # Hint only
            status = ["ALLOCATABLE"]
            flags = []
            dev_size = 1928892288  # 919.768 Gigabytes
            pe_start = 384
            pe_count = 29432       # 919.75 Gigabytes
        }
    }

    logical_volumes {
        lv0_sites {
            id = "Sg1fYr-NTzr-8AA2-v29K-tcz5-rUMj-uRoXY1"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 1280    # 40 Gigabytes
                type = "striped"
                stripe_count = 1       # linear
                stripes = [
                    "pv0", 0
                ]
            }
        }

        lv0_m {
            id = "scNeN4-4bmg-Y6kq-zKuO-n8B8-s8mw-FTUYqk"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 12800   # 400 Gigabytes
                type = "striped"
                stripe_count = 1       # linear
                stripes = [
                    "pv0", 1280
                ]
            }
        }
    }
}
Now, to extract the data with dd, use the formula below (this only works for a linear stripe):
skip=$[extent_size*stripes+pe_start] count=$[extent_size*(extent_count-1)]
So to get the lv0_m data off of the volume:
dd if=/dev/sdb4 of=/opt/bak/lv0_m.iso bs=512 skip=$[65536*1280+384] count=$[65536*(12800-1)] conv=sync,noerror
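For reference, those bracket expressions are plain shell arithmetic in units of 512-byte sectors (matching bs=512); expanding them shows what dd is actually being asked to do, using the metadata values above:
echo $[65536*1280+384]        # skip  = 83886464 sectors, i.e. 40GB plus pe_start into the device
echo $[65536*(12800-1)]       # count = 838795264 sectors, just under 400GB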
Once the iso is created, it can then be loop mounted via:
mount -o loop -t ext3 /opt/bak/lv0_m.iso /mnt/lv0_m
You should then be able to see all the files in the mount point, which can be used for data restoration.
vzdump LVM snapshots kernel errors
Tue, 03/31/2009 - 09:49 — sandip
On running daily LVM snapshot backups via vzdump on OpenVZ servers, I noticed the kernel errors below in the logwatch reports.
WARNING: Kernel Errors Present
Buffer I/O error on device dm-4, ...: 22 Time(s)
EXT3-fs error (device dm-4): e ...: 60 Time(s)
lost page write due to I/O error on dm-4 ...: 22 Time(s)
This showed up only on busy servers, probably because the LVM snapshot was running out of space.
I edited "/usr/bin/vzdump" and increased the snapshot size from 500m to 1000m, which seems to have resolved the issue for now.
run_command (\*LOG, "$lvcreate --size 1000m --snapshot --name vzsnap /dev/$lvmvg/$lvmlv");
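To confirm the snapshot really is the one filling up, its allocation can be watched while a backup runs; a minimal sketch, assuming the volume group is named vg0 and the snapshot is vzsnap as in the line above:
watch -n 10 'lvs vg0'
The snapshot usage column (Snap% or Data%, depending on the LVM version) climbing toward 100% means the snapshot is about to overflow and invalidate itself, which typically produces buffer I/O errors like the ones above on the snapshot device.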
Convert / root filesystem to use lvm over raid
Sat, 11/01/2008 - 16:57 — sandip
I have outlined below the steps taken to move from a normal single-disk system to a raid1 setup using an additional drive:
Note: /boot cannot be on LVM, nor on any RAID level other than a raid1 partition.
Don't forget to make the essential backups beforehand.
Convert / to raid/lvm
Run fdisk on /dev/sdb and clone the layout of sda (set the partition type to "Linux raid autodetect"):
- add /dev/sdb1 -- /boot - 13 blocks (100MB) - type raid
- add /dev/sdb2 -- rest of the disk - type raid
Create the raid arrays and the LVM volume on top:
partprobe
mdadm -C /dev/md0 --auto=yes -l 1 -n 2 missing /dev/sdb1
mdadm -C /dev/md1 --auto=yes -l 1 -n 2 missing /dev/sdb2
pvcreate /dev/md1
vgcreate vg0 /dev/md1
lvcreate -L 4G -n lv0 vg0
mke2fs -j /dev/vg0/lv0
mkdir /mnt/lv0
mount /dev/vg0/lv0 /mnt/lv0
find / -xdev | cpio -pvmd /mnt/lv0
cp -aux /dev /mnt/lv0
Edit /mnt/lv0/etc/fstab so that / points at the new logical volume:
/dev/vg0/lv0    /               ext3    defaults        1 1
mount --bind /dev /mnt/lv0/dev
chroot /mnt/lv0
mount -t proc /proc /proc
mount -t sysfs /sys /sys
vgscan
vgchange -ay
mkinitrd -v /boot/initrd-`uname -r`.lvm.img `uname -r`
umount /sys
umount /proc
exit
mv /mnt/lv0/boot/initrd-`uname -r`.lvm.img /boot
reboot
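Once the system boots cleanly from the new LVM root, the original drive can be pulled into the degraded arrays so they resync; a rough sketch of that follow-up, assuming the old disk is /dev/sda and its contents are no longer needed (this is destructive to sda):
fdisk /dev/sda and change its partition types to "Linux raid autodetect"
mdadm /dev/md1 --add /dev/sda2
cat /proc/mdstat
/dev/sda1 would similarly be added to /dev/md0 once /boot has been migrated.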
Convert /boot to raid1
Reducing logical volume
Thu, 07/03/2008 - 09:56 — sandip
I had to reduce the logical volume allotted for mysql data from 8GB to 4GB, which was a breeze with e2fsadm, available for lvm1 on RHEL-3.
Stop the running services using the volume:
# service httpd stop
# service mysqld stop
e2fsadm will reduce the filesystem and then the logical volume.
# umount /mnt/lv-mysql
# e2fsadm -L -4G /dev/hdb2-vg00/lv-mysql
# mount /mnt/lv-mysql
Check with df, which should now show the new volume size:
$ df -h /mnt/lv-mysql
Filesystem Size Used Avail Use% Mounted on
/dev/hdb2-vg00/lv-mysql 4.0G 531M 3.3G 14% /mnt/lv-mysql
Start the services using the volume again:
# service httpd start
# service mysqld start
Note: e2fsadm is not available in lvm2, so the reduction needs to be done in two steps:
1. Reduce the filesystem residing on the logical volume.
2. Reduce the logical volume.
# resize2fs /dev/vg0/lv0 4G
# lvreduce -L -4G /dev/vg0/lv0
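For completeness, a fuller sketch of that lvm2 sequence, assuming an ext3 filesystem on /dev/vg0/lv0 mounted at /mnt/lv0 (the mount point is illustrative); the filesystem has to be unmounted and checked before it can be shrunk:
# umount /mnt/lv0
# e2fsck -f /dev/vg0/lv0
# resize2fs /dev/vg0/lv0 4G
# lvreduce -L 4G /dev/vg0/lv0
# mount /dev/vg0/lv0 /mnt/lv0
Here -L 4G sets the final size explicitly; the -L -4G form above shrinks by 4GB, which lands at the same place only because the volume started at 8GB. Either way, the filesystem must be shrunk to no larger than the new volume size before lvreduce runs.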
Convert root filesystem to LVM
Sun, 10/07/2007 - 15:35 — sandip
I converted the root filesystem to LVM since the root partition was huge and I needed more flexibility in managing the partitions. Besides, LVM also enables easy backups with LVM snapshots.
I had a sizable 2GB swap partition, which I used to hold my root files and rebooted into prior to the conversion.
Please know what you are doing beforehand and make sure to create backups.
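The post doesn't include the commands, but that intermediate move would look roughly like this sketch, assuming the 2GB swap partition is /dev/hda3 (device name and mount point are illustrative):
swapoff /dev/hda3
mke2fs -j /dev/hda3
mkdir /mnt/newroot
mount /dev/hda3 /mnt/newroot
find / -xdev | cpio -pvmd /mnt/newroot
After updating /mnt/newroot/etc/fstab and the boot loader to use /dev/hda3 as /, the machine can be rebooted into the copy, freeing the original root partition for the LVM conversion.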
Extending LVM
Fri, 03/23/2007 - 09:19 — sandip
Extend a partition by 1GB.
# lvresize -L +1G /dev/vg00/lvol0
# e2fsck -f /dev/vg00/lvol0
# resize2fs -pf /dev/vg00/lvol0
Notes:
- resize2fs has replaced ext2online in FC6.
- The volume needs to be unmounted prior to running resize2fs.
Extending Logical Volume
Thu, 08/24/2006 - 22:40 — sandip
I was only using a part of my external USB hard drive to keep backups and needed to extend the partition to accommodate the ever-growing backup files.
There are quite a few ways to do this, which can be referenced at FedoraNews.org and TLDP.org.
Below is a quick command line reference if you are familiar with the process already.
I added an additional 10GB of space:
# lvextend -L+10G /dev/vg00/lvol0
Unmount the drive:
# umount /mnt/usbdisk
Check the logical volume:
# e2fsck -f /dev/vg00/lvol0
Increase the file system size to match:
# resize2fs -pf /dev/vg00/lvol0
Remount:
# mount /mnt/usbdisk