21 - 12 - 2024

XenServer 7 HA Cluster with CEPH

This guide describes the deployment of a two-node XenServer cluster without the need for dedicated external storage. The concept is to provide redundancy without additional components by consolidating storage and compute on two twin physical (or, for a proof of concept, virtual) servers.

Initial requirements are:

  • Two servers with two or more hard drives - one for XenServer and the CEPH journal, the second for CEPH data.
  • Pre-installed XenServer 7 on the two nodes
  • Internet Access

I assume that the two servers are already installed without a local storage repository created on the disks - during installation this requires deselecting all disks under the Virtual Machine Storage section.

 

1. (both servers) The first step is to define an HTTP proxy, if required; this step is optional. On both servers, edit /etc/environment and paste the following configuration:

http_proxy=<proxy_address>
https_proxy=<proxy_address>
ftp_proxy=<proxy_address>
no_proxy=localhost,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16
HTTP_PROXY=<proxy_address>
HTTPS_PROXY=<proxy_address>
FTP_PROXY=<proxy_address>
NO_PROXY=localhost,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16
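
To confirm the variables are picked up (they are applied by PAM, so they only take effect at the next login), log in again and check the environment:

# env | grep -i proxy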

2. (both servers) Enable the CentOS base repository by editing /etc/yum.repos.d/CentOS-Base.repo

You need to change the following elements, or paste the configuration from the file attached to this article (CentOS-Base.repo):

  • Change every occurrence of $releasever to 7.2.1511
# sed -i 's/\$releasever/7.2.1511/g' /etc/yum.repos.d/CentOS-Base.repo
  • Enable all repositories
# sed -i 's/enabled=0/enabled=1/g' /etc/yum.repos.d/CentOS-Base.repo
  • Comment out mirrorlist and uncomment baseurl
# sed -i 's/mirrorlist=/#mirrorlist=/g' /etc/yum.repos.d/CentOS-Base.repo
# sed -i 's/#baseurl=/baseurl=/g' /etc/yum.repos.d/CentOS-Base.repo
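
A quick sanity check that the modified repository configuration still resolves (the exact repository list may vary with your XenServer build):

# yum clean all
# yum repolist enabled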

3. (both servers) Add CEPH repository

As in the previous step, you can either paste the configuration manually or download the attached file.

  • Create /etc/yum.repos.d/ceph.repo (note the quoted EOF - it prevents the shell from expanding $basearch inside the file):
# cat <<'EOF' > /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-hammer/el7/x86_64/
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-hammer/el7/noarch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-hammer/el7/SRPMS
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOF
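
If the file is correct, yum should now list the three Ceph repositories:

# yum repolist | grep -i ceph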

4. (both servers) Enable EPEL repository

# yum -y install epel-release 

5. (both servers) Install required packages

# yum -y install lttng-ust lttng-tools fcgi leveldb

6. (both servers) Add CEPH user

# useradd -d /srv/ceph -m ceph -s /bin/bash
# passwd ceph

7. (both servers) Configure sudo

  • Add sudoers.d file
# echo "ceph ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/ceph
# chmod 0440 /etc/sudoers.d/ceph
  • Edit /etc/sudoers and comment out Defaults requiretty
# sed -i 's/Defaults    requiretty/#Defaults    requiretty/g' /etc/sudoers
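
To verify that passwordless sudo works for the ceph user (sudo -n fails instead of prompting when a password would be required):

# sudo -u ceph sudo -n true && echo "passwordless sudo OK"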

8. At this point you should configure the XenServers, starting with creating the pool and adding bonding and other network settings (e.g. when you would like a separate network for management and storage).

9. (both servers) Edit /etc/hosts. XEN-SRV-00 is an alias that resolves to the local node on each server; it is used later as the Storage Repository target address.

  • NODE001
172.20.255.10 XEN-SRV-01 XEN-SRV-00
172.20.255.20 XEN-SRV-02 
  • NODE002
172.20.255.10 XEN-SRV-01
172.20.255.20 XEN-SRV-02 XEN-SRV-00
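
A quick check that name resolution matches the table above - XEN-SRV-00 should resolve to the local node on each server:

# getent hosts XEN-SRV-00
# ping -c 1 XEN-SRV-02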

10. (both servers) Generate SSH keys

# sudo -u ceph ssh-keygen

11. (both servers) Add trust for SSH keys

  • NODE001
# sudo -u ceph ssh-copy-id ceph@XEN-SRV-02
  • NODE002
# sudo -u ceph ssh-copy-id ceph@XEN-SRV-01
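
You can confirm the trust works by running a remote command without a password prompt, e.g. from NODE001:

# sudo -u ceph ssh XEN-SRV-02 hostname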

12. (both servers) Change system release information

# echo "CentOS Linux release 7.2.1511 (Core)" > /etc/redhat-release

13. (both servers) Add SSH config

# su ceph
$ cat <<EOF > ~/.ssh/config
Host XEN-SRV-01
   Hostname XEN-SRV-01
   User ceph

Host XEN-SRV-02
   Hostname XEN-SRV-02
   User ceph
EOF
$ chmod 600 /srv/ceph/.ssh/config

14. (master server) Install ceph-deploy

$ sudo yum install -y ceph-deploy

15. (master server) Initial configuration of CEPH cluster nodes

$ cd ~; ceph-deploy new XEN-SRV-01 XEN-SRV-02
$ cd ~; ceph-deploy install --no-adjust-repos XEN-SRV-01 XEN-SRV-02

16. (master server) Fix for http://tracker.ceph.com/issues/16443

$ sudo sed -i "s/'mds', 'allow \*'/'mds', 'allow'/g" /usr/lib/python2.7/site-packages/ceph_deploy/gatherkeys.py

17. (both servers) Disable and stop iptables

$ sudo systemctl disable iptables
$ sudo systemctl stop iptables

18. (master server) Deploy MONs

$ cd ~ ; ceph-deploy mon create-initial
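
Once the monitors reach quorum, ceph-deploy gathers the cluster keyrings into the working directory; a quick way to confirm the step succeeded:

$ ls -l ~/*.keyring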

19. (both servers) Prepare the CEPH partitions; adjust the sizes to your disks' capacity

$ sudo parted /dev/sda
(parted) mkpart CEPH-JOURNAL ext2 44.6GB 53.7GB
(parted) select /dev/sdb
(parted) mktable gpt
(parted) mkpart CEPH-DATA ext2 1M 100GB
(parted) quit
$ exit
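
Before moving on, print both partition tables and note the number of the new journal partition - the steps below assume it was created as /dev/sda4:

# parted /dev/sda print
# parted /dev/sdb print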

20. (both servers) Add fstab entries for the OSD mount points

  • NODE001
# echo "/dev/sdb1 /var/lib/ceph/osd/ceph-0 xfs defaults 0 0" >> /etc/fstab
  • NODE002
# echo "/dev/sdb1 /var/lib/ceph/osd/ceph-1 xfs defaults 0 0" >> /etc/fstab
 
21. (master server) Deploy OSDs (the host order below determines the OSD IDs: XEN-SRV-01 becomes osd.0 and XEN-SRV-02 becomes osd.1, matching the fstab entries above)

# su ceph
$ cd ~ ; ceph-deploy osd create XEN-SRV-01:/dev/sdb1:/dev/sda4 XEN-SRV-02:/dev/sdb1:/dev/sda4
$ cd ~ ; ceph-deploy osd activate XEN-SRV-01:/dev/sdb1:/dev/sda4 XEN-SRV-02:/dev/sdb1:/dev/sda4

22. (master server) Final deployment commands

$ cd ~ ; ceph-deploy admin XEN-SRV-01 XEN-SRV-02
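
With the admin keyring in place, verify that both OSDs are registered, up and in:

$ sudo ceph osd tree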

23. (master server) Check the MON and OSD status. With only two servers, PGs will be marked as undersized; this is fixed in the next step.

$ sudo ceph -s

24. (master server) Fix PGs for two-node clusters (pg_num must be raised before pgp_num, since pgp_num cannot exceed pg_num)

$ sudo ceph osd pool set rbd pg_num 128
$ sudo ceph osd pool set rbd pgp_num 128
$ sudo ceph osd pool set rbd size 2

$ sudo ceph osd crush rule create-simple xenserver_replica default osd
$ sudo ceph osd crush rule dump "xenserver_replica"
$ sudo ceph osd pool set rbd crush_ruleset 1
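
You can confirm that the settings took effect (the new rule is assumed to have received ruleset id 1 - check the dump output above if in doubt):

$ sudo ceph osd pool get rbd size
$ sudo ceph osd pool get rbd pg_num
$ sudo ceph osd pool get rbd crush_ruleset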

25. (both servers) Install rbdsr driver for XenServer

$ exit
# wget https://github.com/mstarikov/rbdsr/archive/master.zip
# unzip master.zip
# cd rbdsr-master; python ./install_rbdsr.py enable

26. (both servers) Patch rbdsr driver for XenServer HA

  • Create patch file - RBDSR-HA.patch
--- a/RBDSR.py	2016-06-29 11:11:56.000000000 +1000
+++ b/RBDSR.py	2016-06-29 11:10:23.000000000 +1000
@@ -58,6 +58,7 @@
             raise xs_errors.XenError('ConfigTargetMissing')


         self.path = ''
+        self.localIQN = 'rados_sr'
         real_address = ''
         try:
             # For monitors we only need one address, since we get accurate map from the ceph later on in attach.
  • Apply the patch
# patch /opt/xensource/sm/RBDSR.py RBDSR-HA.patch

27. (master server) Add an RBD disk for the XenServer instances; the size is given in megabytes

# rbd create rbd_CEPH-SR --size 614400
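
To double-check that the image was created with the expected size:

# rbd info rbd_CEPH-SR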

28. (master server) Add Storage Repository to XenServer

# xe sr-create type=lvmoiscsi name-label=CEPH_RBD shared=true device-config:target=XEN-SRV-00 device-config:port=6789 device-config:targetIQN=rbd device-config:SCSIid=rbd_CEPH-SR device-config:chapuser=ceph device-config:chappassword=<your user ceph password>
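
If the SR was created successfully, it will be visible on the pool together with its UUID:

# xe sr-list name-label=CEPH_RBD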

29. (both servers) Upload the new systemd services and disable the init.d ceph service

  • NODE001 & NODE002
# systemctl disable ceph
# wget -P /etc/systemd/system/ http://tomz.pl/attachments/article/104/ceph-mon.service
# wget -P /etc/systemd/system/ http://tomz.pl/attachments/article/104/ceph-osd.service
  • NODE002 - Change "-i 0" to "-i 1" to match the ID of the OSD on this node
# sed -i 's/-i 0/-i 1/g' /etc/systemd/system/ceph-osd.service
  • NODE001 & NODE002
# systemctl enable ceph-mon.service ceph-osd.service
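
Since the unit files were dropped into /etc/systemd/system manually, reload systemd if it does not see them and confirm that both units are loaded:

# systemctl daemon-reload
# systemctl status ceph-mon.service ceph-osd.service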

30. (master server) Configure XenServer to use the new RBD storage for HA and DR. Have fun ;]

Attachments:
  • CentOS-Base.repo [base repo for CEPH] (2 kB)
  • ceph-mon.service [systemd mon service] (0.4 kB)
  • ceph-osd.service [systemd osd service] (0.4 kB)
  • ceph.repo [Ceph repo] (0.5 kB)