How To Configure High-Availability Cluster on CentOS 7 / RHEL 7
Environment
Shared Storage
o Install Packages
iSCSI Server
Cluster Nodes
o Setup Disk
o Create Shared Storage
o Discover Shared Storage
Setup Cluster Nodes
o Host Entry
o Shared Storage
o Install Packages
Create a High Availability Cluster
Fencing Devices
Cluster Resources
o Prepare resources
Apache Web Server
o Create Resources
Verify High Availability Cluster
Test High Availability Cluster
Conclusion
A High-Availability cluster, aka a Failover cluster (active-passive cluster), is one of the most widely used cluster types in production environments. This type of
cluster provides continued availability of services even if one of the cluster nodes fails. If the server running the application fails for some reason
(hardware failure), the cluster software (Pacemaker) will restart the application on another node.
High availability is mainly used for databases, custom applications, and also for file sharing. Failover is not just starting an application; it is a series of
operations associated with it, like mounting filesystems, configuring networks, and starting dependent applications.
Environment
CentOS 7 / RHEL 7 supports failover clusters using Pacemaker. Here, we will be looking at configuring the Apache (web) server as a highly available
application.
As I said, failover is a series of operations, so we would need to configure the filesystem and network as cluster resources. For the filesystem, we would use shared storage coming from iSCSI storage.
Host Name              IP Address      Purpose
server.itzgeek.local   192.168.1.20    iSCSI Shared Storage
node1.itzgeek.local and node2.itzgeek.local act as the cluster nodes.
Shared Storage
Shared storage is one of the critical resources in a high-availability cluster, as it holds the data of the running application. All the nodes in a cluster have
access to the shared storage, so the most recent data is available regardless of which node runs the application. SAN storage is the most widely used shared storage in
production environments. For this demo, we will configure the cluster with iSCSI storage.
Install Packages
iSCSI Server
[root@server ~]# yum install targetcli -y
Cluster Nodes
It's time to configure the cluster nodes to make use of the iSCSI storage. Perform the below steps on all of your cluster nodes.
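Install the iSCSI initiator package on all cluster nodes (iscsi-initiator-utils is the stock CentOS 7 / RHEL 7 package that provides iscsiadm):
# yum install iscsi-initiator-utils -y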
Setup Disk
Here, we will create a 10GB LVM disk on the iSCSI server to use as shared storage for our cluster nodes. Let's list the available disks attached to the target
server using the below command.
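# fdisk -l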
Output:
Disk /dev/sda: 107.4 GB, 107374182400 bytes, 209715200 sectors
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 209715199 104344576 8e Linux LVM
Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
From the above output, you can see that my system has a 10GB disk (/dev/sdb). Create an LVM logical volume on /dev/sdb (replace /dev/sdb with your disk name) using the below commands.
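The volume group and logical volume names below (vg_iscsi, lv_iscsi) match the ones used in the targetcli session later in this guide:
# pvcreate /dev/sdb
# vgcreate vg_iscsi /dev/sdb
# lvcreate -l 100%FREE -n lv_iscsi vg_iscsi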
Create Shared Storage
Get the iSCSI initiator name from each of the cluster nodes; the target needs them to create the ACLs.
# cat /etc/iscsi/initiatorname.iscsi
Node 1:
InitiatorName=iqn.1994-05.com.redhat:b11df35b6f75
Node 2:
InitiatorName=iqn.1994-05.com.redhat:119eaf9252a
Now, enter the iSCSI admin console on the server using the below command, and create the block storage, target, ACLs, and LUN as shown.
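# targetcli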
/> cd /backstores/block
/backstores/block> create iscsi_shared_storage /dev/vg_iscsi/lv_iscsi
Created block storage object iscsi_shared_storage using /dev/vg_iscsi/lv_iscsi.
/backstores/block> cd /iscsi
/iscsi> create
Created target iqn.2003-01.org.linux-iscsi.server.x8664:sn.518a1f561ad5.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
/iscsi> cd iqn.2003-01.org.linux-iscsi.server.x8664:sn.518a1f561ad5/tpg1/acls
/iscsi/iqn.20...ad5/tpg1/acls> create iqn.1994-05.com.redhat:b11df35b6f75 << Initiator of Node 1
Created Node ACL for iqn.1994-05.com.redhat:b11df35b6f75
/iscsi/iqn.20...ad5/tpg1/acls> create iqn.1994-05.com.redhat:119eaf9252a << Initiator of Node 2
Created Node ACL for iqn.1994-05.com.redhat:119eaf9252a
/iscsi/iqn.20...ad5/tpg1/acls> cd /iscsi/iqn.2003-01.org.linux-iscsi.server.x8664:sn.518a1f561ad5/tpg1/luns
/iscsi/iqn.20...ad5/tpg1/luns> create /backstores/block/iscsi_shared_storage
Created LUN 0.
Created LUN 0->0 mapping in node ACL iqn.1994-05.com.redhat:119eaf9252a
Created LUN 0->0 mapping in node ACL iqn.1994-05.com.redhat:b11df35b6f75
/iscsi/iqn.20...ad5/tpg1/luns> cd /
/> ls
o- / ...................................................................................................... [...]
o- backstores ........................................................................................... [...]
| o- block ............................................................................... [Storage Objects: 1]
| | o- iscsi_shared_storage ........................... [/dev/vg_iscsi/lv_iscsi (10.0GiB) write-thru activated]
| | o- alua ................................................................................ [ALUA Groups: 1]
| | o- default_tg_pt_gp .................................................... [ALUA state: Active/optimized]
| o- fileio .............................................................................. [Storage Objects: 0]
| o- pscsi ............................................................................... [Storage Objects: 0]
| o- ramdisk ............................................................................. [Storage Objects: 0]
o- iscsi ......................................................................................... [Targets: 1]
| o- iqn.2003-01.org.linux-iscsi.server.x8664:sn.518a1f561ad5 ....................................... [TPGs: 1]
| o- tpg1 ............................................................................ [no-gen-acls, no-auth]
| o- acls ....................................................................................... [ACLs: 2]
| | o- iqn.1994-05.com.redhat:119eaf9252a ................................................ [Mapped LUNs: 1]
| | | o- mapped_lun0 ............................................... [lun0 block/iscsi_shared_storage (rw)]
| | o- iqn.1994-05.com.redhat:b11df35b6f75 ............................................... [Mapped LUNs: 1]
| | o- mapped_lun0 ............................................... [lun0 block/iscsi_shared_storage (rw)]
| o- luns ....................................................................................... [LUNs: 1]
| | o- lun0 ...................... [block/iscsi_shared_storage (/dev/vg_iscsi/lv_iscsi) (default_tg_pt_gp)]
| o- portals ................................................................................. [Portals: 1]
| o- 0.0.0.0:3260 .................................................................................. [OK]
o- loopback ...................................................................................... [Targets: 0]
/> saveconfig
Configuration saved to /etc/target/saveconfig.json
/> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup/.
Configuration saved to /etc/target/saveconfig.json
[root@server ~]#
Enable and restart the target service.
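# systemctl enable target
# systemctl restart target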
Discover Shared Storage
On both cluster nodes, discover the target using the below command (192.168.1.20 is the IP address of the iSCSI server).
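# iscsiadm -m discovery -t st -p 192.168.1.20
Output: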
192.168.1.20:3260,1 iqn.2003-01.org.linux-iscsi.server.x8664:sn.518a1f561ad5
Now, log in to the discovered target on both nodes using the below command.
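The target IQN comes from the discovery output above:
# iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.server.x8664:sn.518a1f561ad5 -p 192.168.1.20 -l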
Setup Cluster Nodes
Host Entry
Make a host entry for each node on all of the nodes; the cluster uses the host names to communicate with each other.
# vi /etc/hosts
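Add entries similar to the below; the node IP addresses here are assumptions for illustration, so replace them with your nodes' actual IPs:
192.168.1.11   node1.itzgeek.local   node1   # example IP
192.168.1.12   node2.itzgeek.local   node2   # example IP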
Shared Storage
Go to all of your nodes and check whether the new disk is visible. On my nodes, /dev/sdb is the disk coming from the iSCSI storage.
# fdisk -l | grep -i sd
On any one of your nodes (e.g., node1), create a filesystem for the Apache web server to hold the website files. We will create the filesystem with LVM.
[root@node1 ~]# pvcreate /dev/sdb
[root@node1 ~]# vgcreate vg_apache /dev/sdb
[root@node1 ~]# lvcreate -n lv_apache -l 100%FREE vg_apache
[root@node1 ~]# mkfs.ext4 /dev/vg_apache/lv_apache
Now, go to the other node and run the below commands to detect the new filesystem.
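# pvscan
# vgscan
# lvscan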
Finally, verify that the LVM we created on node1 is available on the other node (e.g., node2) using the below command.
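# lvdisplay /dev/vg_apache/lv_apache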
If the system doesn’t display the logical volume, consider rebooting the second node.
Install Packages
Install the cluster packages (pacemaker) on all nodes using the below command.
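pcs pulls in pacemaker and corosync as dependencies:
# yum install pcs fence-agents-all -y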
Allow all high-availability applications through the firewall so that the nodes can communicate properly. You can skip this step if the system doesn't have firewalld
installed.
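firewalld ships a predefined high-availability service for this:
# firewall-cmd --permanent --add-service=high-availability
# firewall-cmd --add-service=high-availability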
Use the below command to list the allowed applications in the firewall.
# firewall-cmd --list-services
Set a password for the hacluster user. This user account is a cluster administration account. We suggest you set the same password for all nodes.
# passwd hacluster
Start the pcsd cluster service, and enable it to start automatically on system startup, using the below commands.
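# systemctl start pcsd
# systemctl enable pcsd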
Authorize the nodes using the below command. Run the command on any one of the nodes.
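# pcs cluster auth node1.itzgeek.local node2.itzgeek.local
Output: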
Username: hacluster
Password: << Enter Password
node1.itzgeek.local: Authorized
node2.itzgeek.local: Authorized
Create a cluster.
[root@node1 ~]# pcs cluster setup --start --name itzgeek_cluster node1.itzgeek.local node2.itzgeek.local
Enable the cluster to start automatically at system startup, then check the status of the cluster.
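# pcs cluster enable --all
# pcs cluster status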
Output:
Cluster Status:
Stack: corosync
Current DC: node2.itzgeek.local (version 1.1.19-8.el7_6.4-c3c624ea3d) - partition with quorum
Last updated: Fri Jul 5 09:14:57 2019
Last change: Fri Jul 5 09:13:12 2019 by hacluster via crmd on node2.itzgeek.local
2 nodes configured
0 resources configured
PCSD Status:
node1.itzgeek.local: Online
node2.itzgeek.local: Online
Run the below command to get detailed information about the cluster, including its resources, the pacemaker status, and node details.
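# pcs status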
Output:
WARNINGS:
No stonith devices and stonith-enabled is not false
Stack: corosync
Current DC: node2.itzgeek.local (version 1.1.19-8.el7_6.4-c3c624ea3d) - partition with quorum
Last updated: Fri Jul 5 09:15:37 2019
Last change: Fri Jul 5 09:13:12 2019 by hacluster via crmd on node2.itzgeek.local
2 nodes configured
0 resources configured
Online: [ node1.itzgeek.local node2.itzgeek.local ]
No resources
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
Fencing Devices
A fencing device is a hardware or software device that helps disconnect a problem node, either by resetting it or by cutting off its access to the shared storage.
My demo cluster runs on VMware virtual machines, so I am not showing a fencing device setup here, but you can follow this guide to set up a
fencing device.
Cluster Resources
Prepare resources
Apache Web Server
Install the Apache web server on both nodes using the below command.
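# yum install httpd -y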
Edit the Apache configuration file on both nodes.
# vi /etc/httpd/conf/httpd.conf
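Add a server-status section like the below at the end of the file; the Pacemaker Apache resource agent probes this status URL to monitor the web server (a minimal sketch for Apache 2.4):
<Location /server-status>
    SetHandler server-status
    Require local
</Location>
Since /var/www will be served from the shared storage, mount the logical volume on one node, create a document root with a test page, then unmount it (the page content below is just an example):
# mount /dev/vg_apache/lv_apache /var/www
# mkdir /var/www/html
# echo "Hello from the ITzGeek HA cluster" > /var/www/html/index.html
# umount /var/www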
Create Resources
Create a filesystem resource for the Apache server, using the storage coming from the iSCSI server.
# pcs resource create httpd_fs Filesystem device="/dev/mapper/vg_apache-lv_apache" directory="/var/www" fstype="ext4" --group apache
Create an IP address resource. This IP address will act as a virtual IP address for Apache, and clients will use this IP address to access the web content
instead of an individual node's IP.
# pcs resource create httpd_vip IPaddr2 ip=192.168.1.100 cidr_netmask=24 --group apache
Create an Apache resource, which will monitor the status of the Apache server and move the resource to another node in case of any failure.
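A sketch of the resource creation (httpd_ser is an assumed resource name; configfile and statusurl are standard parameters of the apache resource agent, and the status URL matches the server-status section added earlier):
# pcs resource create httpd_ser apache configfile="/etc/httpd/conf/httpd.conf" statusurl="http://127.0.0.1/server-status" --group apache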
Since we are not using fencing, disable it (STONITH). You must disable it before the cluster resources can start, but disabling STONITH in a production environment
is not recommended.
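# pcs property set stonith-enabled=false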
Check the status of the cluster using the below command; you should now see three resources configured.
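# pcs status
Output: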
2 nodes configured
3 resources configured
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
Conclusion
That's all. In this post, you have learned how to set up a high-availability cluster on CentOS 7 / RHEL 7. Please let us know your thoughts in the comment section.