openvz

Updating OpenVZ vzctl on CentOS-5.8

While updating vzctl to latest on CentOS-5.8, I was getting the below error:

# yum update vzctl
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* openvz-kernel-rhel5: mirror.fdcservers.net
* openvz-utils: mirror.fdcservers.net
Excluding Packages in global exclude list
Finished
Setting up Update Process
Resolving Dependencies
--> Running transaction check
---> Package vzctl.x86_64 0:4.1-1 set to be updated
--> Processing Dependency: vzctl-core = 4.1-1 for package: vzctl
--> Processing Dependency: libvzctl-4.1.so()(64bit) for package: vzctl
--> Processing Dependency: libcgroup.so.1()(64bit) for package: vzctl
--> Running transaction check
---> Package libcgroup.x86_64 0:0.37-4 set to be updated
---> Package vzctl-core.x86_64 0:4.1-1 set to be updated
--> Processing Conflict: vzctl conflicts ploop-lib < 1.5-1
--> Restarting Dependency Resolution with new changes.
--> Running transaction check
---> Package ploop-lib.x86_64 0:1.5-1 set to be updated
--> Processing Conflict: ploop-lib conflicts vzkernel < 2.6.32-042stab061.1
--> Processing Conflict: ploop-lib conflicts vzkernel < 2.6.32-042stab061.1
--> Processing Conflict: ploop-lib conflicts vzkernel < 2.6.32-042stab061.1
--> Processing Conflict: ploop-lib conflicts vzkernel < 2.6.32-042stab061.1
--> Finished Dependency Resolution
ploop-lib-1.5-1.x86_64 from openvz-utils has depsolving problems
  --> ploop-lib conflicts with ovzkernel
Error: ploop-lib conflicts with ovzkernel
You could try using --skip-broken to work around the problem
You could try running: package-cleanup --problems
                       package-cleanup --dupes
                       rpm -Va --nofiles --nodigest
The program package-cleanup is found in the yum-utils package.

Turns out that ploop is no longer required for vzctl on CentOS-5.8 and can be removed:

yum update problem on CentOS 5.8 server

"Since you have RHEL5-based kernel that do not require ploop, you can remove ploop when installing vzctl-4.0. I have made vzctl not requiring ploop by dynamically loading it when it's available. Note that vzctl is not requiring ploop anymore, it just conflicts with the old version of it."

The solution was to remove ploop in a single yum transaction, as mentioned:

# yum shell
> update vzctl
> remove ploop\*
> run
> quit
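
To confirm the transaction went through, a quick sanity check is to query the packages afterwards (not part of the original fix, just verification; ploop-lib should report as not installed):

# rpm -q vzctl vzctl-core ploop-lib
# vzctl --version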

OpenVZ tmpfs and dcachesize

When using tmpfs inside OpenVZ containers, make sure to monitor the dcachesize beancounter (memory used for directory entry and inode caches) and increase it appropriately.

tmpfs mounts can be used to speed up applications doing lots of reads/writes to temporary disk space, such as PHP sessions and the MySQL tmp directory.

Mount using "/etc/fstab":

tmpfs   /dev/shm              tmpfs   noexec,nosuid,nodev                                     0 0
tmpfs   /var/lib/php/session  tmpfs   mode=770,gid=48,size=500M,noexec,nosuid,nodev,noatime   0 0

Note: the default permission of "/var/lib/php/session" is 770, with group ownership set to the apache group (GID 48).
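
A simple way to keep an eye on dcachesize is to watch its row (held/maxheld/barrier/limit/failcnt) in the beancounters and raise the barrier/limit when failcnt starts climbing. The numbers below are purely illustrative; pick values that match the container's workload:

# Check dcachesize usage and failures for a container
vzctl exec {vpsid} grep dcachesize /proc/user_beancounters

# Raise barrier:limit (in bytes) and persist it to the container config
vzctl set {vpsid} --dcachesize 33554432:36700160 --save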

Automount shares for vzyum updates

Note: The directions at "http://wiki.openvz.org/Install_OpenVZ_on_a_x86_64_system_Centos-Fedora#STEP_12" did not quite work for me as ".gpgkeyschecked.yum" gets created in the yum-cache directory as well and is not available to the containers. The workaround below worked for me.

To share the vzyum cache directory between containers, edit "/etc/auto.master" to include the following:

/vz/root/{vpsid}/var/cache/yum-cache /etc/auto.vzyum

Include one line for each installed or planned VPS, replacing {vpsid} with the appropriate container ID.

Then, create "/etc/auto.vzyum" file with only this line:

share -bind,ro,nosuid,nodev :/var/cache/yum-cache/share

Restart the automounter daemon.
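
On CentOS 5 the automounter is the autofs service, so this should be enough:

service autofs restart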

Edit "/vz/template/centos/5/x86_64/config/yum.conf" and change cachedir location:

cachedir=/var/cache/yum-cache/share

Create the corresponding cachedir:

mkdir /var/cache/yum-cache/share

Test with:

vzyum {vpsid} clean all

This should recreate the yum cache directories under "/var/cache/yum-cache/share", and they should be available to the OpenVZ containers via the bind mount.
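
A quick way to verify the bind mount from the host (same {vpsid} placeholder as above; the first command triggers the automount, the second confirms it):

ls /vz/root/{vpsid}/var/cache/yum-cache/share
mount | grep yum-cache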

Redirect ports inside OpenVZ containers

For port redirection to work inside OpenVZ containers, the ipt_REDIRECT kernel module needs to be loaded on the host. Edit "/etc/sysconfig/vz" and add it to the IPTABLES list:

IPTABLES="ipt_REJECT ipt_tos ipt_TOS ipt_LOG ip_conntrack ip_conntrack_ftp ip_conntrack_irc ipt_owner ipt_length ipt_limit ipt_multiport iptable_filter iptable_mangle ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_length ipt_state iptable_nat ip_nat_ftp ipt_recent ipt_REDIRECT"

This should then allow ports to be redirected inside the container. For example, if you need to put nginx or lighttpd in front of an existing Apache without changing Apache's default port 80, the rules below redirect traffic to port 81, where the nginx/lighttpd server listens, serving static content and proxying to Apache for dynamic content:

# Redirect external web traffic to port 81
iptables -t nat -A PREROUTING -s ! 127.0.0.1 -p tcp --dport 80 -j REDIRECT --to-ports 81

# Redirect internal port 80 to 81
iptables -t nat -A OUTPUT -s 0/0 -d 192.168.10.2 -p tcp --dport 80 -j REDIRECT --to-ports 81

Here 192.168.10.2 is the container's internal IP address that the domain/hostname resolves to.
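
Note that rules added this way are lost when the container restarts. Assuming the standard iptables init script is present in the container, they can be saved so they are restored on boot:

service iptables save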

vzdump of CentOS

Current versions of vzdump depend on cstream and perl-LockFile-Simple, both available via RPMforge. Below is how I got it installed and running on CentOS-5.5 x86_64:

wget http://packages.sw.be/rpmforge-release/rpmforge-release-0.5.1-1.el5.rf.x86_64.rpm
rpm -ivh rpmforge-release-0.5.1-1.el5.rf.x86_64.rpm
yum --enablerepo=rpmforge install cstream perl-LockFile-Simple
rpm -ivh http://download.openvz.org/contrib/utils/vzdump/vzdump-1.2-4.noarch.rpm

It's necessary to export the location of the PVE libraries that vzdump requires. This can be added to ".bash_profile":

export PERL5LIB=/usr/share/perl5/
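
With that in place, a quick test run against a single container can be done along these lines (the dump directory /opt/bak and {VEID} are just placeholders; --suspend briefly suspends the container during the dump):

source ~/.bash_profile
vzdump --suspend --compress --dumpdir /opt/bak {VEID}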

Upgrade CentOS 5.4 to 5.5 for OpenVZ containers

Edit "/vz/template/centos/5/{ARCH}/config/yum.conf", and change the base and updates repositories as below:

[base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5

#released updates
[updates]
name=CentOS-$releasever - Updates
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
#baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5

Do a `vzyum {VEID} clean all`.

List updates:

vzyum {VEID} list updates

Update:

vzyum {VEID} update

Confirm that all VEs have been updated to 5.5 with:

cat /vz/root/{VEID}/etc/redhat-release

You should see "CentOS release 5.5 (Final)".
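
To check every container in one pass, a small loop over vzlist works (a convenience sketch, not from the original notes; it assumes the containers are running so their root filesystems are mounted):

for VEID in $(vzlist -H -o veid); do
  echo -n "${VEID}: "
  cat /vz/root/${VEID}/etc/redhat-release
done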

Unable to open pty: No such file or directory

Udev is pulled in as a dependency of xorg and other development packages, and it breaks OpenVZ containers if it is installed or upgraded inside them: the container ends up without working tty/pty device nodes.

Re-create the missing devices after an upgrade via:

vzctl exec {VEID} /sbin/MAKEDEV tty
vzctl exec {VEID} /sbin/MAKEDEV pty

For a permanent fix, edit the container's /etc/rc.sysinit to disable udev startup and re-create the devices on boot:

#/sbin/start_udev
/sbin/MAKEDEV tty
/sbin/MAKEDEV pty

vzdump LVM snapshots kernel errors

While running daily LVM snapshot backups via vzdump on OpenVZ servers, I noticed the kernel errors below in Logwatch reports.


WARNING:  Kernel Errors Present
    Buffer I/O error on device dm-4,  ...:  22 Time(s)
    EXT3-fs error (device dm-4): e ...:  60 Time(s)
    lost page write due to I/O error on dm-4 ...:  22 Time(s)

This would show up only on busy servers, probably because the LVM snapshot was running out of space.

I edited "/usr/bin/vzdump" and increased the snapshot size from 500m to 1000m, which seems to have resolved the issue for now.


run_command (\*LOG, "$lvcreate --size 1000m --snapshot --name vzsnap /dev/$lvmvg/$lvmlv");
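
Before bumping the size it is worth confirming that the volume group actually has 1000m free, otherwise the lvcreate call will simply fail (replace vg0 with the volume group that holds the /vz logical volume):

vgs -o vg_name,vg_free vg0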

Loss in network connectivity on OpenVZ host

I was seeing a loss of network connectivity when an OpenVZ container was stopped, and noticed that the bridge was taking on the MAC address of an existing container's virtual network interface instead of the physical interface's.

The solution was to set the bridge MAC address to that of the physical interface:

/sbin/ifconfig br0 hw ether $(ifconfig eth0 | awk '{print $5; exit}')

I also use a "/etc/sysconfig/vz-scripts/vps.umount" hook to remove the routes for a container with a veth-bridge setup from the bridge when the container is unmounted.
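
A minimal sketch of what such a hook can look like is below (illustrative only, not my exact file; br0, eth0 and the veth${VEID}.0 interface name are assumptions about a typical veth-bridge setup):

#!/bin/bash
# /etc/sysconfig/vz-scripts/vps.umount (illustrative sketch)
# vzctl exports $VEID in the environment when it runs this hook.

BRIDGE=br0
PHYS=eth0

# Drop any leftover host routes that pointed at the container's veth device
/sbin/ip route flush dev veth${VEID}.0 2>/dev/null

# Re-pin the bridge MAC to the physical interface so the bridge does not
# keep a stopped container's MAC address
/sbin/ifconfig ${BRIDGE} hw ether $(/sbin/ifconfig ${PHYS} | awk '{print $5; exit}')

exit 0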

Weekly backups of all OpenVZ containers

Here is a simple shell script to run a weekly LVM snapshot dump of all OpenVZ containers using the vzdump utility:

#!/bin/bash
# ve_dumps.sh
# Dump all VEs

# Today's date
DATE=$(date +%d)

# Paths
BAK_PATH=/opt/bak/vz_dumps

# Week of month
BAK_DIR=$(cal | awk -v date="${DATE}" '{ for( i=1; i <= NF ; i++ ) if ($i==date) { print FNR-2 } }')

# Function to check and remove previously failed snapshot.
check_vzsnap() {
  OUTPUT=`/usr/sbin/lvdisplay | grep vzsnap`
  [ -n "$OUTPUT" ] && lvremove -f /dev/vg0/vzsnap
}

# Function to perform backup.
backup() {
  # Check and create the required backup directory
  [ -d "${BAK_PATH}/${BAK_DIR}&quot; ] || mkdir -p ${BAK_PATH}/${BAK_DIR}
  # do dumps
  echo "Starting dump at `date`"
  /usr/bin/vzdump --exclude-path '.+/log/.+' --exclude-path '.+/bak/.+' --exclude-path '/tmp/.+' --exclude-path '/var/tmp/.+' --exclude-path '/var/run/.+pid' --snapshot --dumpdir=${BAK_PATH}/${BAK_DIR} --compress --all
  echo "Completed dump at `date`"
}

# Main ############################
# Remove previously failed snapshot
check_vzsnap

# Run backups
backup

exit 0
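
To schedule it weekly, a cron entry along these lines works (the script path and timing are just examples):

# /etc/cron.d/ve_dumps
30 2 * * 0 root /opt/scripts/ve_dumps.sh >> /var/log/ve_dumps.log 2>&1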
