CentOS High Availability Cluster with OpenVZ, DRBD and Heartbeat – Part 1


Install OpenVZ on both machines by following the link below.

How to install OpenVZ in CentOS 6.5

DRBD installation.

1. BOTH MACHINES: Install the ELRepo repository package "elrepo-release-6-6.el6.elrepo.noarch.rpm".

rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm

2. BOTH MACHINES: Install DRBD.

yum install drbd83-utils-8.3.13 kmod-drbd83-8.3.13 -y

3. BOTH MACHINES: Load the drbd kernel module manually, or reboot both machines.

/sbin/modprobe drbd
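
To confirm the module is loaded, you can check with lsmod. If you also want it loaded automatically at boot on CentOS 6, one common approach (the file name drbd.modules is my assumption, adjust to taste) is a small script under /etc/sysconfig/modules/:

lsmod | grep drbd
echo -e '#!/bin/sh\n/sbin/modprobe drbd' > /etc/sysconfig/modules/drbd.modules
chmod +x /etc/sysconfig/modules/drbd.modules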

4. BOTH MACHINES: Create the DRBD (Distributed Replicated Block Device) resource file on both machines.
Note: This file must be exactly the same on both machines.

vi /etc/drbd.d/clusterdb.res

resource clusterdb {
    startup {
        wfc-timeout 30;
        outdated-wfc-timeout 20;
        degr-wfc-timeout 30;
    }
    net {
        cram-hmac-alg sha1;
        shared-secret sync_disk;
    }
    syncer {
        rate 10M;
        al-extents 257;
        on-no-data-accessible io-error;
    }
    on masternode {
        device /dev/drbd0;
        disk /dev/sda2;
        address 192.168.1.100:7788;
        meta-disk internal;
    }
    on slavenode {
        device /dev/drbd0;
        disk /dev/sda2;
        address 192.168.1.101:7788;
        meta-disk internal;
    }
}
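
Before going further, you can verify that DRBD parses the resource file cleanly; drbdadm dump prints the configuration back, or an error if the syntax is wrong:

drbdadm dump clusterdb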

5. BOTH MACHINES: Install crontabs and ntpdate, then create a cron job that synchronizes the server time every 5 minutes.

yum install crontabs ntpdate -y

crontab -e # insert the line below.
*/5 * * * * /usr/sbin/ntpdate 2.asia.pool.ntp.org
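
Make sure the cron daemon is running and enabled at boot, and optionally run a one-off sync first so both nodes start out with the same time:

service crond start
chkconfig crond on
/usr/sbin/ntpdate 2.asia.pool.ntp.org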

6. BOTH MACHINES: For hostname resolution, add the IP address and hostname of each node to /etc/hosts on both machines.

vi /etc/hosts
192.168.1.100 masternode
192.168.1.101 slavenode
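
You can verify that the names resolve as expected from each node, for example:

getent hosts masternode slavenode
ping -c 1 slavenode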

7. BOTH MACHINES: Unmount /vz. Note: the device name depends on which /dev/sdXX was assigned to /vz during installation; here it is /dev/sda2.

umount /vz
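
You can confirm that /vz is no longer mounted before continuing:

mount | grep vz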

8. BOTH MACHINES: Initialize the DRBD metadata.

drbdadm create-md clusterdb

9. BOTH MACHINES: If create-md in step 8 fails (usually because an existing filesystem signature is found on the device), you may zero the device with the command below. Warning: this destroys all data on /dev/sda2.

dd if=/dev/zero of=/dev/sda2 # This may take very long, hours or even a day, so be patient.
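
In many cases it is enough to wipe just the start of the device, where the old filesystem signature lives, instead of zeroing the whole disk. This shortcut is my assumption rather than part of the original procedure, so use it at your own risk and rerun create-md afterwards:

dd if=/dev/zero of=/dev/sda2 bs=1M count=128 # wipe only the first 128 MB
drbdadm create-md clusterdb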

10. BOTH MACHINES: Comment out the /vz entry in /etc/fstab so that it is not mounted automatically at boot.

vi /etc/fstab
#UUID=a5cc99aa-76bc-44a5-b898-845bac0135fa /vz                     ext4    defaults        1 2

11. BOTH MACHINES: Start DRBD service

/etc/init.d/drbd start
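
At this point both nodes should be connected but not yet synchronized; the status should report something like Secondary/Secondary and Inconsistent/Inconsistent until one node is promoted in the next step:

service drbd status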

12. Run the command below on the Primary node only (mine is the masternode server) to make it the initial Primary and start the first full synchronization.

drbdadm -- --overwrite-data-of-peer primary clusterdb
#drbdadm -- --overwrite-data-of-peer primary all

13. Check if the synchronization is finished.

cat /proc/drbd
--------------
Sample output:
--------------
cat /proc/drbd
version: 8.3.13 (api:88/proto:86-96)
GIT-hash: 83ca112086600faacab2f157bc5a9324f7bd7f77 build by root@sighted, 2012-10-09 12:47:51
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:203296636 nr:291892 dw:203588528 dr:63212661 al:131223 bm:28 lo:0 pe:46 ua:0 ap:45 ep:1 wo:b oos:0
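
While the initial synchronization is still running, /proc/drbd shows a progress indicator instead of UpToDate/UpToDate; to follow it live you can, for example, use watch:

watch -n2 cat /proc/drbd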

14. Create the desired filesystem on the DRBD device (on the Primary node only, since the device is read-only on the Secondary). I use mkfs.ext4.

/sbin/mkfs.ext4 /dev/drbd0

15. Mount /dev/drbd0 on /vz (on the Primary node).

mount -t ext4 /dev/drbd0 /vz
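
A quick check that the DRBD device is now serving /vz:

df -h /vz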

To check the role:
drbdadm role clusterdb

-----------------------------
To promote the second node manually:

1. On the primary node, run:
drbdadm secondary clusterdb

2. On the secondary node, run:
drbdadm primary clusterdb
-----------------------------
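
Note that the Primary must have /dev/drbd0 unmounted before it can be demoted. Put together, a manual failover of the /vz mount looks roughly like this sketch (using the hostnames from this article; Heartbeat will automate this in Part 2):

# on masternode (current Primary)
umount /vz
drbdadm secondary clusterdb

# on slavenode
drbdadm primary clusterdb
mount -t ext4 /dev/drbd0 /vz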

Copy the necessary OpenVZ files to the DRBD device

16. Move the original /vz directory to /vz.orig and recreate /vz to serve as a mount point (do this on both nodes):

mv /vz /vz.orig
mkdir /vz

17. Afterwards, move the necessary OpenVZ directories (/etc/vz, /etc/sysconfig/vz-scripts, /var/vzquota) aside and replace them with symbolic links (do this on both nodes):

mv /etc/vz /etc/vz.orig
mv /etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts.orig
mv /var/vzquota /var/vzquota.orig
ln -s /vz/cluster/etc/vz /etc/vz
ln -s /vz/cluster/etc/sysconfig/vz-scripts /etc/sysconfig/vz-scripts
ln -s /vz/cluster/var/vzquota /var/vzquota
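
The symlinks will point to nothing until /dev/drbd0 is mounted on /vz; you can still confirm they were created correctly:

ls -l /etc/vz /etc/sysconfig/vz-scripts /var/vzquota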

18. Currently, masternode is still the Primary for /dev/drbd0. You can now mount it and copy the necessary files to it (only on masternode!):

mount /dev/drbd0 /vz
cp -a /vz.orig/* /vz/
mkdir -p /vz/cluster/etc
mkdir -p /vz/cluster/etc/sysconfig
mkdir -p /vz/cluster/var
cp -a /etc/vz.orig /vz/cluster/etc/vz/
cp -a /etc/sysconfig/vz-scripts.orig /vz/cluster/etc/sysconfig/vz-scripts
cp -a /var/vzquota.orig /vz/cluster/var/vzquota
umount /dev/drbd0
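
As an optional sanity check before moving on, you can remount the device on masternode and confirm the copied OpenVZ directories are in place:

mount /dev/drbd0 /vz
ls /vz/cluster/etc/vz /vz/cluster/etc/sysconfig/vz-scripts /vz/cluster/var/vzquota
umount /dev/drbd0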

 

Proceed to the Heartbeat installation by following the link below.

CentOS High Availability Cluster with OpenVZ, DRBD and Heartbeat – Part 2

