
Centos High Availability Cluster with OpenVZ, DRBD and Heartbeat – Part 2

For the DRBD installation with OpenVZ, please follow the link below.

Centos High Availability Cluster with OpenVZ, DRBD and Heartbeat – Part 1

1.  Download repository.

rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

2. Install HeartBeat

yum install heartbeat

3. Create the Heartbeat configuration file ha.cf and copy it to /etc/ha.d/ha.cf on both nodes.

# vi /etc/ha.d/ha.cf

debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
# Cluster nodes (must match uname -n on each machine).
node masternode
node slavenode
# Heartbeat interval, dead/warn thresholds and boot-time grace period, in seconds.
keepalive 1
deadtime 10
warntime 5
initdead 60
udpport 694
# Interface used for the heartbeat broadcast.
bcast em2
auto_failback on

4. Create the Heartbeat configuration file haresources and copy it to /etc/ha.d/haresources on both nodes. The line below names the preferred node (masternode), the DRBD resource to promote (drbddisk::clusterdb), the filesystem to mount (/dev/drbd0 on /vz as ext4), and the openvz init script created in step 9.

# vi /etc/ha.d/haresources

masternode drbddisk::clusterdb Filesystem::/dev/drbd0::/vz::ext4 openvz

5. Create the Heartbeat configuration file authkeys and copy it to /etc/ha.d/authkeys on both nodes.

#vi /etc/ha.d/authkeys

auth 1
1 crc
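Note: crc provides no real authentication. If the heartbeat link is not a dedicated, trusted cable, a common alternative is sha1 with a shared secret (the secret below is just a placeholder):

auth 2
2 sha1 MySecretPassword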

6. Restrict the file permissions (Heartbeat refuses to start unless authkeys is readable only by root).

chmod 600 /etc/ha.d/authkeys

7. Start heartbeat service.

/etc/init.d/heartbeat start
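To confirm Heartbeat came up cleanly, you can watch the log file configured in ha.cf:

tail -f /var/log/ha-log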

8. Enable heartbeat at system boot.

chkconfig --add heartbeat

chkconfig heartbeat on

9. Create the OpenVZ startup script that Heartbeat will manage as the openvz resource.

# vi /etc/init.d/openvz

######Code Start Here ##########

#!/bin/bash
#
# openvz        Startup script for OpenVZ
#

start() {
/etc/init.d/vz start > /dev/null 2>&1
RETVAL=$?
/root/live-switchover/cluster_unfreeze.sh
return $RETVAL
}
stop() {
/etc/init.d/vz stop > /dev/null 2>&1
RETVAL=$?
return $RETVAL
}
status() {
/etc/init.d/vz status > /dev/null 2>&1
RETVAL=$?
return $RETVAL
}

# See how we were called.
case "$1" in
start)
start
;;
stop)
stop
;;
status)
status
;;
*)
echo $"Usage: openvz {start|stop|status}"
exit 1
esac

exit $RETVAL

######Code End Here ##########

10. Make openvz script executable.

chmod +x /etc/init.d/openvz
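Heartbeat will invoke this script itself (it is the openvz resource listed in haresources), but you can sanity-check it by hand first:

/etc/init.d/openvz status; echo $? # 0 means the vz service reports running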

11. Create the cluster freeze script (it checkpoints running containers before a switchover).

mkdir -p /root/live-switchover
vi /root/live-switchover/cluster_freeze.sh

######Code Start Here ##########

#!/bin/bash
#Script by Thomas Kappelmueller
#Version 1.0
LIVESWITCH_PATH='/vz/cluster/liveswitch'

if [ -f "$LIVESWITCH_PATH" ]
then
rm -f "$LIVESWITCH_PATH"
fi

# VEIDs of all currently running containers.
RUNNING_VE=$(vzlist -1)

for I in $RUNNING_VE
do
BOOTLINE=$(grep -i "^onboot" /etc/sysconfig/vz-scripts/$I.conf)
if [ "$I" != 1 -a "$BOOTLINE" = 'ONBOOT="yes"' ]
then
# Checkpoint the container so its live state can be restored on the other node.
vzctl chkpnt $I

if [ $? -eq 0 ]
then
vzctl set $I --onboot no --save
echo $I >> "$LIVESWITCH_PATH"
fi
fi
done

exit 0

######Code End Here ##########

12.  Make cluster_freeze.sh executable.

chmod +x /root/live-switchover/cluster_freeze.sh

13. Create the cluster unfreeze script (it restores the checkpointed containers after a switchover).

#vi /root/live-switchover/cluster_unfreeze.sh

######Code Start Here ##########

#!/bin/bash
#Script by Thomas Kappelmueller
#Version 1.0

LIVESWITCH_PATH='/vz/cluster/liveswitch'

if [ -f "$LIVESWITCH_PATH" ]
then
FROZEN_VE=$(cat "$LIVESWITCH_PATH")
else
exit 1
fi

for I in $FROZEN_VE
do
vzctl restore $I

# Fall back to a normal start if the checkpoint restore fails.
if [ $? -ne 0 ]
then
vzctl start $I
fi

vzctl set $I --onboot yes --save
done

rm -f "$LIVESWITCH_PATH"

exit 0

######Code End Here ##########

14. Make cluster unfreeze script executable.

chmod +x /root/live-switchover/cluster_unfreeze.sh

15. Create the live switchover script (it freezes the containers, then hands the resources over to the other node).

#vi /root/live-switchover/live_switchover.sh

######Code Start Here ##########

#!/bin/bash
#Script by Thomas Kappelmueller
#Version 1.0

# Refuse to switch over while an interactive session is inside a container.
ps -eaf | grep 'vzctl enter' | grep -v 'grep' > /dev/null
if [ $? -eq 0 ]
then
echo 'vzctl enter is active. please finish before live switchover.'
exit 1
fi
ps -eaf | grep 'vzctl exec' | grep -v 'grep' > /dev/null
if [ $? -eq 0 ]
then
echo 'vzctl exec is active. please finish before live switchover.'
exit 1
fi
echo "Freezing VEs..."
/root/live-switchover/cluster_freeze.sh
echo "Starting Switchover..."
/usr/lib64/heartbeat/hb_standby

######Code End Here ##########

16. Make live_switchover.sh executable.

chmod +x /root/live-switchover/live_switchover.sh

17. Start OpenVZ script.

/etc/init.d/openvz start

18. Test the setup: reboot masternode (your primary server). If /vz is automatically mounted on the secondary node, the failover works.
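A quick way to verify from the secondary node, assuming the hostnames and mounts from this guide:

cat /proc/drbd # ro: should now start with Primary on this node
df -h /vz # /dev/drbd0 should be mounted on /vz
vzlist -a # the containers should be listed as running here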

Centos High Availability Cluster with OpenVZ, DRBD and Heartbeat – Part 1

Install OpenVZ on both machines; follow the link below.

How to install OpenVZ in Centos 6.5

DRBD installation.

1. BOTH MACHINES: Install the ELRepo repository package "elrepo-release-6-6.el6.elrepo.noarch.rpm".

rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm

2. BOTH MACHINES: Install DRBD.

yum install drbd83-utils-8.3.13 kmod-drbd83-8.3.13 -y

3. BOTH MACHINES: Load the drbd kernel module manually, or reboot both machines.

/sbin/modprobe drbd
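You can confirm the module is loaded:

lsmod | grep drbd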

4. BOTH MACHINES: Create the Distributed Replicated Block Device (DRBD) resource file on both machines.
Note: This file should be exactly the same on both machines.

vi /etc/drbd.d/clusterdb.res

resource clusterdb
{
  startup {
    wfc-timeout 30;
    outdated-wfc-timeout 20;
    degr-wfc-timeout 30;
  }
  net {
    cram-hmac-alg sha1;
    shared-secret sync_disk;
  }
  syncer {
    rate 10M;
    al-extents 257;
    on-no-data-accessible io-error;
  }
  on masternode {
    device /dev/drbd0;
    disk /dev/sda2;
    address 192.168.1.100:7788;
    flexible-meta-disk internal;
  }
  on slavenode {
    device /dev/drbd0;
    disk /dev/sda2;
    address 192.168.1.101:7788;
    meta-disk internal;
  }
}
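Before continuing, you can ask drbdadm to dump its parsed view of the configuration; a syntax error in clusterdb.res will show up here:

drbdadm dump clusterdb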

5. BOTH MACHINES: Install crontabs and ntpdate, then create a cron job that syncs the server time every 5 minutes.

yum install crontabs ntpdate -y

crontab -e # insert the line below.
*/5 * * * * /usr/sbin/ntpdate 2.asia.pool.ntp.org

6. BOTH MACHINES: For hostname resolution between the nodes, add the IP addresses and hostnames to /etc/hosts on both machines.

vi /etc/hosts
192.168.1.100 masternode
192.168.1.101 slavenode
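A quick sanity check that the names resolve (run the mirror check from slavenode as well):

ping -c 1 slavenode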

7. BOTH MACHINES: Unmount /vz. Note: the device is /dev/sda2 here, but use whichever /dev/sdXX was assigned to /vz during your installation.

umount /vz

8. BOTH MACHINES: Initialize DRBD meta data

drbdadm create-md clusterdb

9. BOTH MACHINES: If create-md in step 8 fails (typically because it detects an existing filesystem on the partition), you may wipe the partition with the command below and then retry step 8.

dd if=/dev/zero of=/dev/sda2 # This wipes the whole partition and may take hours or even a day, so be patient.
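Assuming create-md failed only because it found an existing filesystem signature, zeroing just the start of the partition is usually enough and is much faster; the full wipe above remains the thorough option.

dd if=/dev/zero of=/dev/sda2 bs=1M count=128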

10. BOTH MACHINES: Comment out the /vz entry in /etc/fstab so that it is not automatically mounted during boot (Heartbeat will mount it instead).

vi /etc/fstab
#UUID=a5cc99aa-76bc-44a5-b898-845bac0135fa /vz                     ext4    defaults        1 2

11. BOTH MACHINES: Start DRBD service

/etc/init.d/drbd start

12. Run the command below on the primary node only (mine is the masternode server).

drbdadm -- --overwrite-data-of-peer primary clusterdb
#drbdadm -- --overwrite-data-of-peer primary all

13. Check if the synchronization has finished. cs:Connected and ds:UpToDate/UpToDate mean the disks are in sync.

cat /proc/drbd

Sample output:

version: 8.3.13 (api:88/proto:86-96)
GIT-hash: 83ca112086600faacab2f157bc5a9324f7bd7f77 build by root@sighted, 2012-10-09 12:47:51
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:203296636 nr:291892 dw:203588528 dr:63212661 al:131223 bm:28 lo:0 pe:46 ua:0 ap:45 ep:1 wo:b oos:0
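While the initial sync is still running, you can watch the progress counters update live:

watch -n1 cat /proc/drbd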

14. Create the desired filesystem on the DRBD device. I use ext4.

/sbin/mkfs.ext4 /dev/drbd0

15. Mount /dev/drbd0 to /vz on the primary node to verify the new filesystem, then unmount it again (the next step moves the /vz directory, which must not be a live mount point).

mount -t ext4 /dev/drbd0 /vz
umount /vz

To check the role:
drbdadm role clusterdb

-----------------------------
To promote the second node manually:

1. On the primary node, run:
drbdadm secondary clusterdb

2. On the secondary node, run:
drbdadm primary clusterdb
-----------------------------
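Putting those role-change commands together with the mount steps, a bare manual switchover (outside of Heartbeat) would look roughly like this:

# On the current primary:
umount /vz
drbdadm secondary clusterdb

# On the other node:
drbdadm primary clusterdb
mount -t ext4 /dev/drbd0 /vz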

Copy necessary OpenVZ files to DRBD device

16. Move the original /vz directory to /vz.orig and recreate the /vz directory to serve as a mount point (do this on both nodes):

mv /vz /vz.orig
mkdir /vz

17. Afterwards, move the necessary OpenVZ directories (/etc/vz, /etc/sysconfig/vz-scripts, /var/vzquota) aside and replace them with symbolic links into the DRBD-backed /vz (do this on both nodes):

mv /etc/vz /etc/vz.orig
mv /etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts.orig
mv /var/vzquota /var/vzquota.orig
ln -s /vz/cluster/etc/vz /etc/vz
ln -s /vz/cluster/etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts
ln -s /vz/cluster/var/vzquota /var/vzquota

18. Currently, masternode is still the Primary for /dev/drbd0. You can now mount it and copy the necessary files to it (only on masternode!):

mount /dev/drbd0 /vz
cp -a /vz.orig/* /vz/
mkdir -p /vz/cluster/etc
mkdir -p /vz/cluster/etc/sysconfig
mkdir -p /vz/cluster/var
cp -a /etc/vz.orig /vz/cluster/etc/vz/
cp -a /etc/sysconfig/vz-scripts.orig /vz/cluster/etc/sysconfig/vz-scripts
cp -a /var/vzquota.orig /vz/cluster/var/vzquota
umount /dev/drbd0


To proceed with the Heartbeat installation, follow the link below.

Centos High Availability Cluster with OpenVZ, DRBD and Heartbeat – Part 2

How to install OpenVZ in Centos 6.5

First, install a fresh CentOS 6.5 OS with the partitions below. In this example we use a 1 TB hard disk.

/ - 20000 MB

swap - 5000 MB

/vz - all the remaining space


Below are the steps to install OpenVZ on CentOS 6.5.

1. Edit the config file below and change SELINUX=enforcing to SELINUX=disabled.

vi /etc/selinux/config

2. Turn off the firewall.

chkconfig --level 2345 iptables off
chkconfig --list iptables
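chkconfig only affects the next boot; to stop the running firewall immediately as well:

service iptables stop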

3. Install the wget package if it is not already installed.

yum install wget

4. Download the OpenVZ repository and import its GPG key.

cd /etc/yum.repos.d

wget http://download.openvz.org/openvz.repo

rpm --import http://download.openvz.org/RPM-GPG-Key-OpenVZ

5. Edit the repository file and make sure the [openvz-kernel-rhel6] section is enabled (enabled=1).

vi /etc/yum.repos.d/openvz.repo

[openvz-kernel-rhel6]
name=OpenVZ RHEL6-based kernel
#baseurl=http://download.openvz.org/kernel/branches/rhel6-2.6.32/current/
mirrorlist=http://download.openvz.org/kernel/mirrors-rhel6-2.6.32
enabled=1
gpgcheck=1
gpgkey=http://download.openvz.org/RPM-GPG-Key-OpenVZ
#exclude=vzkernel-firmware

6. Install OpenVZ and its required packages.

yum install openvz-kernel-rhel6 vzctl vzquota bridge-utils -y

7. Make the OpenVZ kernel the first (default) entry in the boot loader.

vi /boot/grub/grub.conf

default=0
timeout=5
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
hiddenmenu
title OpenVZ (2.6.32-042stab072.10)
root (hd0,0)
kernel /boot/vmlinuz-2.6.32-042stab072.10 ro root=UUID=a4bcc31b-4164-4a6b-aa91-a4d3577d9963 rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
initrd /boot/initramfs-2.6.32-042stab072.10.img
title CentOS (2.6.32-279.19.1.el6.x86_64)
root (hd0,0)

8. Reboot the machine.

init 6

9. Edit /etc/sysctl.conf to set the parameters below.

vi /etc/sysctl.conf

net.ipv4.ip_forward = 1
net.ipv4.conf.default.proxy_arp = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 1
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0
kernel.core_uses_pid = 1
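Apply the new settings without rebooting:

sysctl -p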

10. Download a template for the guest OS.

cd /vz/template/cache

wget http://download.openvz.org/template/precreated/centos-6-x86_64.tar.gz

11. Install ploop.

yum install ploop -y

12. It's time to create the guest OS configuration (the container).

vzctl create 1001 --ostemplate centos-6-x86_64
vzctl set 1001 --hostname webserverpage.com --save
vzctl set 1001 --ipadd 192.168.1.10 --save
vzctl set 1001 --nameserver 8.8.8.8 --save
vzctl set 1001 --userpasswd root:'mypassword' --save
vzctl set 1001 --onboot yes --save
vzctl set 1001 --applyconfig unlimited --save

13. To view/start/stop/restart the created VPS, you may use the commands below.

vzlist -a # view

vzctl start 1001 # start

vzctl stop 1001 # stop

vzctl restart 1001 # restart

vzctl enter 1001 # enter to the VPS