SUSE Linux Enterprise High Availability 15 SP6
Installation and Setup
Quick Start
Publication Date: March 27, 2025
This document guides you through the setup of a very basic two-node cluster,
using the bootstrap scripts provided by the crm shell. This includes the configu-
ration of a virtual IP address as a cluster resource and the use of SBD on shared
storage as a node fencing mechanism.
Revision History: SUSE Linux Enterprise High Availability Documentation
Copyright © 2006–2025 SUSE LLC and contributors. All rights reserved.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation
License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A
copy of the license version 1.2 is included in the section entitled “GNU Free Documentation License”.
For SUSE trademarks, see https://www.suse.com/company/legal/. All third-party trademarks are the property of their
respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote
third-party trademarks.
All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee
complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors
or the consequences thereof.
1 Usage scenario
The procedures in this document lead to a minimal setup of a two-node cluster
with the following properties:
Two nodes: alice (IP: 192.168.1.1 ) and bob (IP: 192.168.1.2 ),
connected to each other via network.
A floating, virtual IP address ( 192.168.1.10 ) that allows clients to con-
nect to the service no matter which node it is running on. This IP address
is used to connect to the graphical management tool Hawk2.
A shared storage device, used as SBD fencing mechanism. This avoids
split-brain scenarios.
Failover of resources from one node to the other if the active host breaks
down (active/passive setup).
You can use the two-node cluster for testing purposes or as a minimal cluster
configuration that you can extend later on. Before using the cluster in a produc-
tion environment, see Administration Guide to modify the cluster according to
your requirements.
2 System requirements
This section informs you about the key system requirements for the scenario
described in Section 1. To adjust the cluster for use in a production environ-
ment, refer to the full list in Chapter 2, System requirements and
recommendations.
2.1 Hardware requirements
Servers
Two servers with software as specified in Section 2.2, “Software
requirements”.
The servers can be bare metal or virtual machines. They do not require iden-
tical hardware (memory, disk space, etc.), but they must have the same ar-
chitecture. Cross-platform clusters are not supported.
Communication channels
At least two TCP/IP communication media per cluster node. The network
equipment must support the communication means you want to use for
cluster communication: multicast or unicast. The communication media
should support a data rate of 100 Mbit/s or higher. For a supported cluster
setup, two or more redundant communication paths are required. This can
be done via:
Network Device Bonding (preferred)
A second communication channel in Corosync
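As an illustration only, a second Corosync communication channel could later be defined by giving each node an
additional ring address in /etc/corosync/corosync.conf. In the following sketch, the 10.0.0.x addresses are
assumed values for a separate network and are not part of the scenario in Section 1; you can also configure a
second channel later with the YaST cluster module:
nodelist {
    node {
        ring0_addr: 192.168.1.1
        # assumed address on a second, redundant network
        ring1_addr: 10.0.0.1
        nodeid: 1
    }
    node {
        ring0_addr: 192.168.1.2
        ring1_addr: 10.0.0.2
        nodeid: 2
    }
}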
Node fencing/STONITH
A node fencing (STONITH) device to avoid split-brain scenarios. This can be
either a physical device (a power switch) or a mechanism like SBD
(STONITH by disk) in combination with a watchdog. SBD can be used either
with shared storage or in diskless mode. This document describes using
SBD with shared storage. The following requirements must be met:
A shared storage device. For information on setting up shared stor-
age, see Storage Administration Guide for SUSE Linux Enterprise
Server (https://documentation.suse.com/sles/html/SLES-all/book-storage.html). If you only need basic
shared storage for testing purposes, see Appendix A, Basic iSCSI storage for SBD.
The path to the shared storage device must be persistent and consistent across all nodes in the cluster.
Use stable device names such as /dev/disk/by-id/dm-uuid-part1-mpath-abcedf12345 (an example of how to
list such names is shown below).
The SBD device must not use host-based RAID, LVM, or DRBD*.
For more information on STONITH, see Chapter 12, Fencing and STONITH.
For more information on SBD, see Chapter 13, Storage protection and SBD.
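To find a stable device name for your shared storage, you can, for example, list the persistent links that
udev maintains and pick the entry that points to your device:
# ls -l /dev/disk/by-id/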
2.2 Software requirements
All nodes need at least the following modules and extensions:
Basesystem Module 15 SP6
Server Applications Module 15 SP6
SUSE Linux Enterprise High Availability 15 SP6
2.3 Other requirements and recommendations
Time synchronization
Cluster nodes must synchronize to an NTP server outside the cluster. Since
SUSE Linux Enterprise High Availability 15, chrony is the default implemen-
tation of NTP. For more information, see the Administration Guide for SUSE
Linux Enterprise Server 15 SP6 (https://documentation.suse.com/sles-15/html/SLES-all/cha-ntp.html).
The cluster might not work properly if the nodes are not synchronized, or
even if they are synchronized but have different timezones configured. In
addition, log files and cluster reports are very hard to analyze without syn-
chronization. If you use the bootstrap scripts, you will be warned if NTP is
not configured yet.
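For example, to check on a node whether chrony is running and synchronized before you start the
bootstrap scripts:
# systemctl status chronyd
# chronyc sources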
Host name and IP address
Use static IP addresses.
Only the primary IP address is supported.
List all cluster nodes in the /etc/hosts file with their fully qualified
host name and short host name. It is essential that members of the
cluster can find each other by name. If the names are not available,
internal cluster communication will fail.
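For the scenario in this document, the /etc/hosts entries could look like the following (the
example.com domain is only a placeholder):
192.168.1.1   alice.example.com   alice
192.168.1.2   bob.example.com     bob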
SSH
All cluster nodes must be able to access each other via SSH. Tools like
crm report (for troubleshooting) and Hawk2's History Explorer require
passwordless SSH access between the nodes, otherwise they can only col-
lect data from the current node.
If you use the bootstrap scripts for setting up the cluster, the SSH keys are
automatically created and copied.
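Once the cluster is set up, you can verify passwordless SSH access by running a command on the other
node; it should complete without a password prompt. For example, from alice:
# ssh bob hostname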
3 Overview of the bootstrap scripts
The following commands execute bootstrap scripts that require only a mini-
mum of time and manual intervention.
With crm cluster init , define the basic parameters needed for clus-
ter communication. This leaves you with a running one-node cluster.
With crm cluster join , add more nodes to your cluster.
With crm cluster remove , remove nodes from your cluster.
The options set by the bootstrap scripts might not be the same as the
Pacemaker default settings. You can check which settings the bootstrap scripts
changed in /var/log/crmsh/crmsh.log . Any options set during the boot-
strap process can be modified later with the YaST cluster module. See
Chapter 4, Using the YaST cluster module for details.
The bootstrap script crm cluster init checks and configures the following
components:
NTP
Checks if NTP is configured to start at boot time. If not, a message appears.
SSH
Creates SSH keys for passwordless login between cluster nodes.
Csync2
Configures Csync2 to replicate configuration files across all nodes in a
cluster.
Corosync
Configures the cluster communication system.
SBD/watchdog
Checks if a watchdog exists and asks you whether to configure SBD as node
fencing mechanism.
Virtual floating IP
Asks you whether to configure a virtual IP address for cluster administration
with Hawk2.
Firewall
Opens the ports in the firewall that are needed for cluster communication.
Cluster name
Defines a name for the cluster, by default hacluster . This is optional and
mostly useful for Geo clusters. Usually, the cluster name reflects the geo-
graphical location and makes it easier to distinguish a site inside a Geo
cluster.
QDevice/QNetd
Asks you whether to configure QDevice/QNetd to participate in quorum de-
cisions. We recommend using QDevice and QNetd for clusters with an even
number of nodes, and especially for two-node clusters.
This configuration is not covered here, but you can set it up later as de-
scribed in Chapter 14, QDevice and QNetd.
Note: Cluster configuration for different platforms
The crm cluster init script detects the system environment (for exam-
ple, Microsoft Azure) and adjusts certain cluster settings based on the profile
for that environment. For more information, see the file
/etc/crm/profiles.yml .
4 Installing the High Availability packages
The packages for configuring and managing a cluster are included in the
High Availability installation pattern. This pattern is only available after
SUSE Linux Enterprise High Availability is installed.
You can register with the SUSE Customer Center and install SUSE Linux
Enterprise High Availability while installing SUSE Linux Enterprise Server, or af-
ter installation. For more information, see the Deployment Guide
(https://documentation.suse.com/sles/html/SLES-all/cha-register-sle.html)
for SUSE Linux Enterprise Server.
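For example, on an already registered SUSE Linux Enterprise Server 15 SP6 system, you can activate the
extension with SUSEConnect. The product identifier shown here is an assumption; verify it with the
--list-extensions output on your system:
# SUSEConnect --list-extensions
# SUSEConnect -p sle-ha/15.6/x86_64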
PROCEDURE 1: INSTALLING THE HIGH AVAILABILITY PATTERN
1. Install the High Availability pattern from the command line:
# zypper install -t pattern ha_sles
2. Install the High Availability pattern on all machines that will be part of
your cluster.
Note: Installing software packages on all nodes
For an automated installation of SUSE Linux Enterprise Server 15
SP6 and SUSE Linux Enterprise High Availability 15 SP6, use
AutoYaST to clone existing nodes. For more information, see
Section 3.2, “Mass installation and deployment with AutoYaST”.
5 Using SBD for node fencing
Before you can configure SBD with the bootstrap script, you must enable a
watchdog on each node. SUSE Linux Enterprise Server ships with several kernel
modules that provide hardware-specific watchdog drivers. SUSE Linux
Enterprise High Availability uses the SBD daemon as the software component
that “feeds” the watchdog.
The following procedure uses the softdog watchdog.
Important: Softdog Limitations
The softdog driver assumes that at least one CPU is still running. If all CPUs
are stuck, the code in the softdog driver that should reboot the system will
never be executed. In contrast, hardware watchdogs keep working even if all
CPUs are stuck.
Before using the cluster in a production environment, we highly recommend
replacing the softdog module with the hardware module that best fits
your hardware.
However, if no watchdog matches your hardware, softdog can be used as
kernel watchdog module.
PROCEDURE 2: ENABLING THE SOFTDOG WATCHDOG FOR SBD
1. On each node, enable the softdog watchdog:
# echo softdog > /etc/modules-load.d/watchdog.conf
# systemctl restart systemd-modules-load
2. Test if the softdog module is loaded correctly:
# lsmod | grep dog
softdog 16384 1
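You can also check that a watchdog device node is now available:
# ls -l /dev/watchdog*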
6 Setting up the first node
Set up the first node with the crm cluster init script. This requires only a
minimum of time and manual intervention.
PROCEDURE 3: SETTING UP THE FIRST NODE ( alice ) WITH crm cluster init
1. Log in to the first cluster node as root , or as a user with sudo
privileges.
Important: SSH key access
The cluster uses passwordless SSH access for communication be-
tween the nodes. The crm cluster init script checks for SSH
keys and generates them if they do not already exist.
In most cases, the root or sudo user's SSH keys must exist (or be
generated) on the node.
Alternatively, a sudo user's SSH keys can exist on a local machine
and be passed to the node via SSH agent forwarding. This requires
additional configuration that is not described for this minimal setup.
For more information, see Section 5.5.1, “Logging in”.
2. Start the bootstrap script:
# crm cluster init --name CLUSTERNAME
Replace the CLUSTERNAME placeholder with a meaningful name, like
the geographical location of your cluster (for example, amsterdam ).
This is especially helpful to create a Geo cluster later on, as it simplifies
the identification of a site.
If you need to use multicast instead of unicast (the default) for your
cluster communication, use the option --multicast (or -U ).
The script checks for NTP configuration and a hardware watchdog ser-
vice. If required, it generates the public and private SSH keys used for
SSH access and Csync2 synchronization and starts the respective
services.
3. Configure the cluster communication layer (Corosync):
a. Enter a network address to bind to. By default, the script proposes
the network address of eth0 . Alternatively, enter a different net-
work address, for example the address of bond0 .
b. Accept the proposed port ( 5405 ) or enter a different one.
4. Set up SBD as the node fencing mechanism:
a. Confirm with y that you want to use SBD.
b. Enter a persistent path to the partition of your block device that
you want to use for SBD. The path must be consistent across all
nodes in the cluster.
The script creates a small partition on the device to be used for
SBD.
5. Configure a virtual IP address for cluster administration with Hawk2:
a. Confirm with y that you want to configure a virtual IP address.
b. Enter an unused IP address that you want to use as administration
IP for Hawk2: 192.168.1.10
Instead of logging in to an individual cluster node with Hawk2,
you can connect to the virtual IP address.
6. Choose whether to configure QDevice and QNetd. For the minimal
setup described in this document, decline with n for now. You can set
up QDevice and QNetd later, as described in Chapter 14, QDevice and
QNetd.
Finally, the script starts the cluster services to bring the cluster online and en-
able Hawk2. The URL to use for Hawk2 is displayed on the screen.
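To verify from the command line that SBD was initialized on the shared storage device in Step 4, you can
dump the SBD metadata and list the node slots. The device path below is only an example; use the path you
entered in the bootstrap script:
# sbd -d /dev/disk/by-id/dm-uuid-part1-mpath-abcedf12345 dump
# sbd -d /dev/disk/by-id/dm-uuid-part1-mpath-abcedf12345 list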
You now have a running one-node cluster. To view its status, proceed as follows:
PROCEDURE 4: LOGGING IN TO THE HAWK2 WEB INTERFACE
1. On any machine, start a Web browser and make sure that JavaScript and
cookies are enabled.
2. As URL, enter the virtual IP address that you configured with the boot-
strap script:
https://192.168.1.10:7630/
Note: Certificate warning
If a certificate warning appears when you try to access the URL for
the first time, a self-signed certificate is in use. Self-signed certifi-
cates are not considered trustworthy by default.
Ask your cluster operator for the certificate details to verify the
certificate.
To proceed anyway, you can add an exception in the browser to by-
pass the warning.
3. On the Hawk2 login screen, enter the Username and Password of the
user that was created by the bootstrap script (user hacluster , pass-
word linux ).
Important: Secure password
Replace the default password with a secure one as soon as possible:
# passwd hacluster
4. Click Log In. The Hawk2 Web interface shows the Status screen by
default:
FIGURE 1: STATUS OF THE ONE-NODE CLUSTER IN HAWK2
7 Adding the second node
Add a second node to the cluster with the crm cluster join bootstrap
script. The script only needs access to an existing cluster node, and completes
the basic setup on the current machine automatically.
For more information, see the crm cluster join --help command.
PROCEDURE 5: ADDING THE SECOND NODE ( bob ) WITH crm cluster join
1. Log in to the second node as root , or as a user with sudo privileges.
2. Start the bootstrap script:
If you set up the first node as root , you can run this command with no
additional parameters:
# crm cluster join
If you set up the first node as a sudo user, you must specify that user
with the -c option:
> sudo crm cluster join -c USER@alice
If NTP is not configured to start at boot time, a message appears. The
script also checks for a hardware watchdog device. You are warned if
none is present.
3. If you did not already specify alice with -c , you are prompted for the
IP address of the first node.
4. If you did not already configure passwordless SSH access between both
machines, you are prompted for the password of the first node.
After logging in to the specified node, the script copies the Corosync
configuration, configures SSH and Csync2, brings the current machine
online as a new cluster node, and starts the service needed for Hawk2.
Check the cluster status in Hawk2. Under Status › Nodes you should see two
nodes with a green status:
FIGURE 2: STATUS OF THE TWO-NODE CLUSTER
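Alternatively, you can check the cluster status from the command line on either node, for example:
# crm status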
8 Testing the cluster
The following tests can help you identify issues with the cluster setup. However,
a realistic test involves specific use cases and scenarios. Before using the cluster
in a production environment, test it thoroughly according to your use cases.
The command sbd -d DEVICE_NAME list lists all the nodes that are
visible to SBD. For the setup described in this document, the output
should show both alice and bob .
Section 8.1, “Testing resource failover” is a simple test to check if the
cluster moves the virtual IP address to the other node if the node that
currently runs the resource is set to standby .
Section 8.2, “Testing with the crm cluster crash_test command”
simulates cluster failures and reports the results.
8.1 Testing resource failover
As a quick test, the following procedure checks on resource failovers:
PROCEDURE 6: TESTING RESOURCE FAILOVER
1. Open a terminal and ping 192.168.1.10 , your virtual IP address:
# ping 192.168.1.10
2. Log in to Hawk2.
3. Under Status › Resources, check which node the virtual IP address (re-
source admin_addr ) is running on. This procedure assumes the re-
source is running on alice .
4. Put alice into Standby mode:
FIGURE 3: NODE alice IN STANDBY MODE
5. Click Status › Resources. The resource admin_addr has been migrated
to bob .
During the migration, you should see an uninterrupted flow of pings to the vir-
tual IP address. This shows that the cluster setup and the floating IP work cor-
rectly. Cancel the ping command with Ctrl – C .
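If you prefer the command line over Hawk2, the same failover can be triggered with crmsh. For example, put
alice into standby and bring it back online afterwards:
# crm node standby alice
# crm node online alice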
8.2 Testing with the crm cluster crash_test
command
The command crm cluster crash_test triggers cluster failures to find
problems. Before you use your cluster in production, it is recommended to use
this command to make sure everything works as expected.
The command supports the following checks:
--split-brain-iptables
Simulates a split-brain scenario by blocking the Corosync port. Checks
whether one node can be fenced as expected.
--kill-sbd / --kill-corosync / --kill-pacemakerd
Kills the daemons for SBD, Corosync, and Pacemaker. After running one of
these tests, you can find a report in the directory
/var/lib/crmsh/crash_test/ . The report includes a test case descrip-
tion, action logging, and an explanation of possible results.
--fence-node NODE
Fences a specific node passed from the command line.
For more information, see crm cluster crash_test --help .
EXAMPLE 1: TESTING THE CLUSTER: NODE FENCING
# crm_mon -1
Stack: corosync
Current DC: alice (version ...) - partition with quorum
Last updated: Fri Mar 03 14:40:21 2020
Last change: Fri Mar 03 14:35:07 2020 by root via cibadmin on alice
2 nodes configured
1 resource configured
Online: [ alice bob ]
Active resources:
stonith-sbd (stonith:external/sbd): Started alice
# crm cluster crash_test --fence-node bob
==============================================
Testcase: Fence node bob
Fence action: reboot
Fence timeout: 60
!!! WARNING WARNING WARNING !!!
THIS CASE MAY LEAD TO NODE BE FENCED.
TYPE Yes TO CONTINUE, OTHER INPUTS WILL CANCEL THIS CASE [Yes/No](No): Yes
INFO: Trying to fence node "bob"
INFO: Waiting 60s for node "bob" reboot...
INFO: Node "bob" will be fenced by "alice"!
INFO: Node "bob" was successfully fenced by "alice"
To watch bob change status during the test, log in to Hawk2 and navigate to
Status › Nodes.
9 Next steps
The bootstrap scripts provide a quick way to set up a basic High Availability
cluster that can be used for testing purposes. However, to expand this cluster
into a functioning High Availability cluster that can be used in production envi-
ronments, more steps are recommended.
RECOMMENDED STEPS TO COMPLETE THE HIGH AVAILABILITY CLUSTER SETUP
Adding more nodes
Add more nodes to the cluster using one of the following methods:
For individual nodes, use the crm cluster join script as de-
scribed in Section 7, “Adding the second node”.
For mass installation of multiple nodes, use AutoYaST as described in
Section 3.2, “Mass installation and deployment with AutoYaST”.
A regular cluster can contain up to 32 nodes. With the pacemaker_remote
service, High Availability clusters can be extended to include additional
nodes beyond this limit. See Pacemaker Remote Quick Start for more
details.
Enabling a hardware watchdog
Before using the cluster in a production environment, replace the softdog
module with the hardware module that best fits your hardware. For details,
see Section 13.6, “Setting up the watchdog”.
Adding more STONITH devices
For critical workloads, we highly recommend using two or three STONITH
devices:
To continue using SBD, see Chapter 13, Storage protection and SBD.
To use physical STONITH devices instead, see Chapter 12, Fencing
and STONITH.
Configuring QDevice
If the cluster has an even number of nodes, configure QDevice and QNetd
to participate in quorum decisions. QDevice provides a configurable num-
ber of votes, allowing a cluster to sustain more node failures than the stan-
dard quorum rules allow. For details, see Chapter 14, QDevice and QNetd.
10 For more information
More documentation for this product is available at https://documentation.suse.com/sle-ha/. For further
configuration and administration tasks, see the comprehensive Administration Guide
(https://documentation.suse.com/sle-ha/html/SLE-HA-all/book-sleha-guide.html).