
Centos High Availability Cluster with OpenVZ, DRBD and Heartbeat – Part 2

For the DRBD installation with OpenVZ, please follow the link below.

Centos High Availability Cluster with OpenVZ, DRBD and Heartbeat – Part 1

1. Install the EPEL repository.

rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

2. Install Heartbeat.

yum install heartbeat

3. Create the Heartbeat configuration file ha.cf and copy it to /etc/ha.d/ha.cf on both nodes.

# vi /etc/ha.d/ha.cf

debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
node masternode
node slavenode
keepalive 1       # seconds between heartbeats
deadtime 10       # seconds of silence before a node is declared dead
warntime 5        # seconds before a late-heartbeat warning is logged
initdead 60       # extra grace period for the first heartbeat after boot
udpport 694
bcast em2         # interface used for heartbeat broadcasts
auto_failback on  # resources move back to masternode once it recovers

4. Create the Heartbeat configuration file haresources and copy it to /etc/ha.d/haresources on both nodes.

# vi /etc/ha.d/haresources

masternode drbddisk::clusterdb Filesystem::/dev/drbd0::/vz::ext4 openvz
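
Heartbeat acquires these resources from left to right on failover and releases them right to left. For reference, a field-by-field breakdown (the annotations are mine, not part of the file):

# masternode                        -> preferred node for this resource group
# drbddisk::clusterdb               -> promote the DRBD resource "clusterdb" to Primary
# Filesystem::/dev/drbd0::/vz::ext4 -> mount /dev/drbd0 on /vz as ext4
# openvz                            -> the custom init script created in step 9 below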

5. Create the Heartbeat configuration file authkeys and copy it to /etc/ha.d/authkeys on both nodes.

# vi /etc/ha.d/authkeys

auth 1
1 crc
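
Note that crc only detects transmission errors and provides no authentication, so it is only suitable for a dedicated, trusted heartbeat link. On a shared network, sha1 with a shared secret is the usual choice; a minimal sketch (the secret value here is just an example, pick your own):

auth 1
1 sha1 MyClusterSecret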

6. Restrict the file permissions. Heartbeat refuses to start if authkeys is readable by anyone but root.

chmod 600 /etc/ha.d/authkeys

7. Start the heartbeat service.

/etc/init.d/heartbeat start

8. Enable heartbeat at system boot.

chkconfig --add heartbeat

chkconfig heartbeat on
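
Once the service is running on both nodes, you can confirm they see each other with the cl_status utility that ships with Heartbeat:

cl_status hbstatus   # is the local heartbeat daemon running?
cl_status listnodes  # should list masternode and slavenode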

9. Create the startup script for OpenVZ.

# vi /etc/init.d/openvz

######Code Start Here ##########

#!/bin/bash
#
# openvz        Startup script for OpenVZ
#

start() {
    /etc/init.d/vz start > /dev/null 2>&1
    RETVAL=$?
    # Resume any containers checkpointed by cluster_freeze.sh (step 11)
    /root/live-switchover/cluster_unfreeze.sh
    return $RETVAL
}
stop() {
    /etc/init.d/vz stop > /dev/null 2>&1
    RETVAL=$?
    return $RETVAL
}
status() {
    /etc/init.d/vz status > /dev/null 2>&1
    RETVAL=$?
    return $RETVAL
}

# See how we were called.
case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    status)
        status
        ;;
    *)
        echo $"Usage: openvz {start|stop|status}"
        exit 1
        ;;
esac

exit $RETVAL

######Code End Here ##########

10. Make openvz script executable.

chmod +x /etc/init.d/openvz
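
Heartbeat will invoke this script with start/stop during failover, but once the live-switchover scripts from steps 11-13 are in place you can also exercise it by hand:

/etc/init.d/openvz status
echo $?   # 0 means the vz service is running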

11. Create the cluster freeze script. It checkpoints the running containers before a switchover.

# vi /root/live-switchover/cluster_freeze.sh

######Code Start Here ##########

#!/bin/bash
#Script by Thomas Kappelmueller
#Version 1.0

LIVESWITCH_PATH='/vz/cluster/liveswitch'

if [ -f $LIVESWITCH_PATH ]
then
    rm -f $LIVESWITCH_PATH
fi

RUNNING_VE=$(vzlist -1)

for I in $RUNNING_VE
do
    BOOTLINE=$(grep -i "^onboot" /etc/sysconfig/vz-scripts/$I.conf)
    if [ $I != 1 -a "$BOOTLINE" = 'ONBOOT="yes"' ]
    then
        # Checkpoint (suspend and dump) the container
        vzctl chkpnt $I

        if [ $? -eq 0 ]
        then
            vzctl set $I --onboot no --save
            echo $I >> $LIVESWITCH_PATH
        fi
    fi
done

exit 0

######Code End Here ##########
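
To see what the script does to a single container, you can checkpoint and restore one manually first (101 is a hypothetical container ID):

vzctl chkpnt 101    # suspend the container and dump its state to disk
vzctl restore 101   # resume it from the dump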

12. Make cluster_freeze.sh executable.

chmod +x /root/live-switchover/cluster_freeze.sh

13. Create the cluster unfreeze script. It restores the checkpointed containers after a switchover.

# vi /root/live-switchover/cluster_unfreeze.sh

######Code Start Here ##########

#!/bin/bash
#Script by Thomas Kappelmueller
#Version 1.0

LIVESWITCH_PATH='/vz/cluster/liveswitch'

if [ -f $LIVESWITCH_PATH ]
then
    FROZEN_VE=$(cat $LIVESWITCH_PATH)
else
    exit 1
fi

for I in $FROZEN_VE
do
    # Restore from the checkpoint; fall back to a cold start if that fails
    vzctl restore $I

    if [ $? != 0 ]
    then
        vzctl start $I
    fi

    vzctl set $I --onboot yes --save
done

rm -f $LIVESWITCH_PATH

exit 0

######Code End Here ##########

14. Make cluster_unfreeze.sh executable.

chmod +x /root/live-switchover/cluster_unfreeze.sh

15. Create the live switchover script. It freezes the containers and hands the resources over to the standby node.

# vi /root/live-switchover/live_switchover.sh

######Code Start Here ##########

#!/bin/bash
#Script by Thomas Kappelmueller
#Version 1.0

# Refuse to switch over while someone is working inside a container
ps -eaf | grep 'vzctl enter' | grep -v 'grep' > /dev/null
if [ $? -eq 0 ]
then
    echo 'vzctl enter is active. please finish before live switchover.'
    exit 1
fi
ps -eaf | grep 'vzctl exec' | grep -v 'grep' > /dev/null
if [ $? -eq 0 ]
then
    echo 'vzctl exec is active. please finish before live switchover.'
    exit 1
fi
echo "Freezing VEs..."
/root/live-switchover/cluster_freeze.sh
echo "Starting Switchover..."
# Ask Heartbeat to hand all resources over to the standby node
/usr/lib64/heartbeat/hb_standby

######Code End Here ##########

16. Make live_switchover.sh executable.

chmod +x /root/live-switchover/live_switchover.sh
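
With all the pieces in place, a planned migration is then a single command on the currently active node:

/root/live-switchover/live_switchover.sh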

17. Start the OpenVZ script.

/etc/init.d/openvz start

18. Do the testing: restart the masternode (your primary server). If /vz is automatically mounted on the secondary node, the failover works.
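
On the secondary node, the takeover can be verified with:

cat /proc/drbd   # ro: should now show Primary/Secondary (or Primary/Unknown while the peer is down)
df -h /vz        # /dev/drbd0 should be mounted on /vz
vzlist           # the containers should be running here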

Centos High Availability Cluster with OpenVZ, DRBD and Heartbeat – Part 1

Install OpenVZ on both machines; follow the link below.

How to install OpenVZ in Centos 6.5

DRBD installation.

1. BOTH MACHINES: Install the ELRepo repository (elrepo-release-6-6.el6.elrepo.noarch.rpm).

rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm

2. BOTH MACHINES: Install DRBD.

yum install drbd83-utils-8.3.13 kmod-drbd83-8.3.13 -y

3. BOTH MACHINES: Load the drbd kernel module manually, or reboot both machines.

/sbin/modprobe drbd

4. BOTH MACHINES: Create the DRBD (Distributed Replicated Block Device) resource file on both machines.
Note: This file must be exactly the same on both machines.

vi /etc/drbd.d/clusterdb.res

resource clusterdb
{
  startup {
    wfc-timeout 30;
    outdated-wfc-timeout 20;
    degr-wfc-timeout 30;
  }
  net {
    cram-hmac-alg sha1;
    shared-secret sync_disk;
  }
  syncer {
    rate 10M;
    al-extents 257;
    on-no-data-accessible io-error;
  }
  on masternode {
    device /dev/drbd0;
    disk /dev/sda2;
    address 192.168.1.100:7788;
    flexible-meta-disk internal;
  }
  on slavenode {
    device /dev/drbd0;
    disk /dev/sda2;
    address 192.168.1.101:7788;
    meta-disk internal;
  }
}
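
Before going further, you can have drbdadm parse the file and echo it back, which catches syntax errors early:

drbdadm dump clusterdb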

5. BOTH MACHINES: Install crontabs and ntpdate, then create a cron job that syncs the server time every 5 minutes.

yum install crontabs ntpdate -y

crontab -e # insert the line below.
*/5 * * * * /usr/sbin/ntpdate 2.asia.pool.ntp.org
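
To check the current offset without actually stepping the clock, use ntpdate's query mode:

/usr/sbin/ntpdate -q 2.asia.pool.ntp.org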

6. BOTH MACHINES: For hostname resolution, add the IP address and hostname of each node to /etc/hosts on both machines.

vi /etc/hosts
192.168.1.100 masternode
192.168.1.101 slavenode
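
A quick resolution check from each node:

ping -c 1 masternode
ping -c 1 slavenode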

7. BOTH MACHINES: Unmount /vz. Note: the device name depends on which /dev/sdXX partition was assigned to /vz during installation; here it is /dev/sda2.

umount /vz

8. BOTH MACHINES: Initialize the DRBD metadata.

drbdadm create-md clusterdb

9. BOTH MACHINES: If create-md in step 8 fails (typically because remnants of an old filesystem are detected), you may wipe the partition with the command below and retry step 8.

dd if=/dev/zero of=/dev/sda2 # This may take very long, hours or even a day, so be patient.
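
Zeroing the whole partition is the thorough option. Often create-md only refuses because remnants of the old filesystem sit near the start of the device, in which case zeroing the first part may be enough; a quicker sketch, at your own risk:

dd if=/dev/zero of=/dev/sda2 bs=1M count=128 # wipe the first 128 MB, then retry step 8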

10. BOTH MACHINES: Comment out the /vz entry in /etc/fstab so that it will not be mounted automatically during boot.

vi /etc/fstab
#UUID=a5cc99aa-76bc-44a5-b898-845bac0135fa /vz                     ext4    defaults        1 2

11. BOTH MACHINES: Start the DRBD service.

/etc/init.d/drbd start

12. Run the command below on the primary node only (mine is the masternode server).

drbdadm -- --overwrite-data-of-peer primary clusterdb
# drbdadm -- --overwrite-data-of-peer primary all

13. Check whether the synchronization is finished.

cat /proc/drbd
--------------
Sample output:
--------------
cat /proc/drbd
version: 8.3.13 (api:88/proto:86-96)
GIT-hash: 83ca112086600faacab2f157bc5a9324f7bd7f77 build by root@sighted, 2012-10-09 12:47:51
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:203296636 nr:291892 dw:203588528 dr:63212661 al:131223 bm:28 lo:0 pe:46 ua:0 ap:45 ep:1 wo:b oos:0
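
The initial sync can take a while; to follow its progress live:

watch -n 2 cat /proc/drbd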

14. Create the desired filesystem on the DRBD device. I use ext4.

/sbin/mkfs.ext4 /dev/drbd0

15. Mount /dev/drbd0 on /vz.

mount -t ext4 /dev/drbd0 /vz

To check the role:
drbdadm role clusterdb

-----------------------------
To promote the second node manually:

1. On the primary node, run:
drbdadm secondary clusterdb

2. On the secondary node, run:
drbdadm primary clusterdb
-----------------------------

Copy the necessary OpenVZ files to the DRBD device

16. Move the original /vz directory to /vz.orig and recreate the /vz directory to serve as a mount point (do this on both nodes):

mv /vz /vz.orig
mkdir /vz

17. Afterwards, move the OpenVZ directories (/etc/vz, /etc/sysconfig/vz-scripts, /var/vzquota) out of the way and replace them with symbolic links (do this on both nodes):

mv /etc/vz /etc/vz.orig
mv /etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts.orig
mv /var/vzquota /var/vzquota.orig
ln -s /vz/cluster/etc/vz /etc/vz
ln -s /vz/cluster/etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts
ln -s /vz/cluster/var/vzquota /var/vzquota
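
A quick check that the links point where they should (the targets will not exist until the files are copied in the next step):

ls -l /etc/vz /etc/sysconfig/vz-scripts /var/vzquota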

18. Currently, masternode is still the Primary for /dev/drbd0. You can now mount it and copy the necessary files to it (only on masternode!):

mount /dev/drbd0 /vz
cp -a /vz.orig/* /vz/
mkdir -p /vz/cluster/etc
mkdir -p /vz/cluster/etc/sysconfig
mkdir -p /vz/cluster/var
cp -a /etc/vz.orig /vz/cluster/etc/vz/
cp -a /etc/sysconfig/vz-scripts.orig /vz/cluster/etc/sysconfig/vz-scripts
cp -a /var/vzquota.orig /vz/cluster/var/vzquota
umount /dev/drbd0


To proceed with the Heartbeat installation, please follow the link below.

Centos High Availability Cluster with OpenVZ, DRBD and Heartbeat – Part 2