SAP HANA High Availability Cluster for the AWS Cloud - Setup Guide (v12)
SUSE® Linux Enterprise Server for SAP Applications is optimized in various
ways for SAP* applications. This guide provides detailed information
about installing and customizing SUSE Linux Enterprise Server for SAP
Applications for SAP HANA system replication in the performance
optimized scenario on the AWS platform. The document focuses on the
steps to integrate an already installed and working SAP HANA with system
replication. This document is based on SUSE Linux Enterprise Server for
SAP Applications 12 SP5.
Disclaimer: This document is part of the SUSE Best Practices series. All
documents published in this series were contributed voluntarily by SUSE
employees and by third parties. If not stated otherwise inside the document,
the articles are intended only to be one example of how a particular action
could be taken. Also, SUSE cannot verify either that the actions described in
the articles do what they claim to do or that they do not have unintended
consequences. All information found in this document has been compiled
with utmost attention to detail. However, this does not guarantee complete
accuracy. Therefore, we need to specifically state that neither SUSE LLC,
its affiliates, the authors, nor the translators may be held liable for possible
errors or the consequences thereof.
Contents
1 About This Guide
7 Setting Up SAP HANA System Replication
11 Administration
13 Examples
14 Reference
15 Appendix: Troubleshooting
16 Legal Notice
1 About This Guide
1.1 Introduction
SUSE® Linux Enterprise Server for SAP Applications is optimized in various ways for SAP*. This
guide provides detailed information about installing and customizing SUSE Linux Enterprise
Server for SAP Applications for SAP HANA system replication in the performance optimized
scenario.
“SAP customers invest in SAP HANA” is the conclusion reached by a recent market study carried
out by Pierre Audoin Consultants (PAC). In Germany, half of the companies expect SAP HANA
to become the dominant database platform in the SAP environment. Often the “SAP Business
Suite* powered by SAP HANA*” scenario is already being discussed in concrete terms.
SUSE is also accommodating this development by providing SUSE Linux Enterprise Server for
SAP Applications – the recommended and supported operating system for SAP HANA. In close
collaboration with SAP and hardware partners, SUSE provides two resource agents for customers
to ensure the high availability of SAP HANA system replications.
1.1.1 Abstract
This guide describes planning, setup, and basic testing of SUSE Linux Enterprise Server for
SAP Applications based on the high availability solution scenario "SAP HANA Scale-Up System
Replication Performance Optimized".
From the application perspective the following variants are covered:
From the infrastructure perspective the following variants are covered:
[Figure: two-node pacemaker cluster, active/active, replicating SAP HANA from node 1 to node 2]
For these scenarios, SUSE developed the scale-up resource agent package SAPHanaSR. System
replication helps replicate the database data from one computer to another to compensate
for database failures (single-box replication).
The second set of scenarios includes the architecture and development of scale-out solutions
(multi-box replication). For these scenarios SUSE developed the scale-out resource agent
package SAPHanaSR-ScaleOut .
[Figure: scale-out system replication - SAP HANA PR1 at site WDF replicating via SR sync to SAP HANA PR1 at site ROT]
With this mode of operation, internal SAP HANA high availability (HA) mechanisms and the
resource agent must work together or be coordinated with each other. SAP HANA system
replication automation for scale-out is described in a separate document available on our
documentation Web page at https://documentation.suse.com/sbp/all/ . The document for scale-
out is named "SAP HANA System Replication Scale-Out High Availability in Amazon Web Services".
SUSE has implemented the scale-up scenario with the SAPHana resource agent (RA), which
performs the actual check of the SAP HANA database instances. This RA is configured as a
master/slave resource. In the scale-up scenario, the master assumes responsibility for the SAP
HANA databases running in primary mode. The slave is responsible for instances that are
operated in synchronous (secondary) status.
To make configuring the cluster as simple as possible, SUSE has also developed the
SAPHanaTopology resource agent. This RA runs on all nodes of a SUSE Linux Enterprise Server
for SAP Applications cluster and gathers information about the statuses and configurations of
SAP HANA system replications. It is designed as a normal (stateless) clone.
SAP HANA system replication for scale-up is supported in the following scenarios or use cases:
Performance optimized (A ⇒ B). This scenario and setup is described in this document.
FIGURE 3: SAP HANA SYSTEM REPLICATION SCALE-UP IN THE CLUSTER - PERFORMANCE OPTIMIZED
In the performance optimized scenario, an SAP HANA RDBMS on site A is synchronizing with
an SAP HANA RDBMS on site B on a second node. As the HANA RDBMS on the second node
is configured to pre-load the tables, the takeover time is typically very short.
One big advantage of the performance optimized scenario of SAP HANA is the possibility to
allow read access on the secondary database site. To support this read enabled scenario,
a second virtual IP address is added to the cluster and bound to the secondary role of the
system replication.
Cost optimized (A ⇒ B, Q). This scenario and setup is described in another document
available from the documentation Web page at https://documentation.suse.com/sbp/all/ .
The document for cost optimized is named "Setting up a SAP HANA SR Cost Optimized
Infrastructure".
FIGURE 4: SAP HANA SYSTEM REPLICATION SCALE-UP IN THE CLUSTER - COST OPTIMIZED
In the cost optimized scenario, the second node is also used for a non-productive SAP HANA
RDBMS system (like QAS or TST). Whenever a takeover is needed, the non-productive
system must be stopped first. As the productive secondary system on this node must be
limited in using system resources, the table preload must be switched off. A possible
takeover takes longer than in the performance optimized use case.
In the cost optimized scenario, the secondary needs to be running in a reduced memory
consumption configuration. This is why read enabled must not be used in this scenario.
FIGURE 5: SAP HANA SYSTEM REPLICATION SCALE-UP IN THE CLUSTER - PERFORMANCE OPTIMIZED CHAIN
A multi-tier system replication has an additional target. In the past, this third site had to be
connected to the secondary (chain topology). With current SAP HANA versions, SAP also allows
a multiple-target topology.
FIGURE 6: SAP HANA SYSTEM REPLICATION SCALE-UP IN THE CLUSTER - PERFORMANCE OPTIMIZED MULTI
TARGET
Multi-tier and multi-target systems are implemented as described in this document. Only
the first replication pair (A and B) is handled by the cluster itself. The main difference to
the plain performance optimized scenario is that the auto registration must be switched off.
Multi-tenancy or MDC.
Multi-tenancy is supported for all above scenarios and use cases. This scenario is supported
since SAP HANA SPS09. The setup and configuration from a cluster point of view is the
same for multi-tenancy and single containers. Thus you can use the above documents for
both kinds of scenarios.
In case of failure of the primary SAP HANA on node 1 (node or database instance), the cluster
first tries to start the takeover process. This allows the cluster to use the already loaded data at
the secondary site. Typically, the takeover is much faster than a local restart.
To automate this resource handling process, use the SAP HANA resource
agents included in SAPHanaSR. System replication of the productive database is automated with
SAPHana and SAPHanaTopology.
The cluster only allows a takeover to the secondary site if the SAP HANA system replication
was in sync until the point when the service of the primary got lost. This ensures that the last
commits processed on the primary site are already available at the secondary site.
SAP has improved the interfaces between SAP HANA and external software such as cluster
frameworks. These improvements also include the implementation of SAP HANA call-outs in
case of special events such as status changes for services or system replication channels. These
call-outs are also named HA/DR providers. This interface can be used by implementing SAP
HANA hooks written in Python. SUSE has improved the SAPHanaSR package to include such SAP
HANA hooks to optimize the cluster interface. Using the SAP HANA hook described in this
document allows the cluster to be informed immediately if the SAP HANA system replication breaks.
In addition to the SAP HANA hook status, the cluster continues to poll the system replication
status on a regular basis.
You can set the level of automation via the parameter AUTOMATED_REGISTER. If
automated registration is activated, the cluster will also automatically register a formerly failed
primary as the new secondary.
Important
The solution is not designed to 'migrate' the primary or secondary instance manually
using HAWK or any other cluster client commands. In the Administration section of this
document we describe how to 'migrate' the primary to the secondary site using SAP and
cluster commands.
1.1.5 Customers Receive Complete Package
Using the SAPHana and SAPHanaTopology resource agents, customers can integrate SAP HANA
system replications in their cluster. This has the advantage of enabling companies to use not
only their business-critical SAP systems but also their SAP HANA databases without interruption
while noticeably reducing needed budgets. SUSE provides the extended solution together with
best practices documentation.
SAP and hardware partners who do not have their own SAP HANA high availability solution
will also benefit from this development from SUSE.
1.3 Errata
To deliver urgent smaller fixes and important information in a timely manner, the Technical
Information Document (TID) for this setup guide will be updated, maintained and published at
a higher frequency:
In addition to this guide, check the SUSE SAP Best Practice Guide Errata for other solutions
(https://www.suse.com/support/kb/doc/?id=7023713 ).
1.4 Feedback
Several feedback channels are available:
Mail
For feedback on the documentation of this product, you can send a mail to
[email protected]. Make sure to include the document title,
the product version and the publication date of the documentation. To report errors or
suggest enhancements, provide a concise description of the problem and refer to the
respective section number and page (or URL).
Two-node cluster.
The AWS EC2 STONITH mechanism supported by SUSE Linux Enterprise High Availability
Extension 12 is supported with SAPHanaSR.
Each cluster node is in a different Availability Zone (AZ) within the same AWS Region.
The Overlay IP address must be an IP outside the Virtual Private Cloud (VPC) CIDR.
Technical users and groups, such as <sid>adm, are defined locally in the Linux system.
Name resolution of the cluster nodes and the virtual IP address must be done locally on
all cluster nodes.
Both SAP HANA instances (primary and secondary) have the same SAP Identifier (SID)
and instance number.
If the cluster nodes are installed in different AWS Availability Zones, the environment
must match the requirements of the SLE HAE cluster product. Of particular concern is
the network latency and recommended maximum distance between the nodes. Review the
product documentation for SUSE Linux Enterprise High Availability Extension regarding
those recommendations.
SAP HANA Replication mode should be set to SYNC or SYNCMEM - ASYNC is not
supported by the cluster.
Automated start of SAP HANA instances during system boot must be switched off.
In MDC configurations the SAP HANA RDBMS is treated as a single system including
all database containers. Therefore, cluster takeover decisions are based on the
complete RDBMS status independent of the status of individual database containers.
For SAP HANA 1.0 you need version SPS10 rev3, SPS11 or newer if you want to stop
tenants during production and if you want the cluster to be able to take over. Older
SAP HANA versions are marking the system replication as failed if you stop a tenant.
Tests on multi-tenancy databases could force a different test procedure if you are
using strong separation of the tenants. As an example, killing the complete SAP HANA
instance using HDB kill does not work, because the tenants are running with different
Linux user UIDs. <sid>adm is not allowed to terminate the processes of the other
tenant users.
You need at least SAPHanaSR version 0.152 and ideally SUSE Linux Enterprise Server for SAP
Applications 12 SP4 or newer. SAP HANA 1.0 is supported since SPS09 (095) for all mentioned
setups. SAP HANA 2.0 is supported with all known SPS versions.
Important
Without a valid STONITH method, the complete cluster is unsupported and will not work
properly.
[Figure: cluster architecture - two nodes in different Availability Zones of one AWS Region, inside one VPC, with SAP HANA system replication between them]
This guide focuses on the manual setup of the cluster to explain the details and to give you the
possibility to create your own automation.
The seven main setup steps are:
Database installation (see Section 6, “Installing the SAP HANA Databases on Both Cluster Nodes”)
SAP HANA system replication setup (see Section 7, “Setting Up SAP HANA System Replication”)
SAP HANA HA/DR provider hooks (see Section 8, “Setting Up SAP HANA HA/DR Providers”)
4 Planning the Installation
Planning the installation is essential for a successful SAP HANA cluster setup.
What you need before you start:
(Optional) Software from SUSE: a valid SUSE subscription, and access to update channels
Note
The preferred method to deploy SAP HANA Scale-Up clusters in AWS is to use the
AWS Launch Wizard for SAP (https://docs.aws.amazon.com/launchwizard/latest/userguide/launch-wizard-sap.html).
However, if you are installing SAP HANA Scale-Up manually, refer to the AWS SAP HANA
Guides (https://docs.aws.amazon.com/sap/latest/sap-hana/welcome.html) for detailed
installation instructions, including recommended storage configuration and file systems.
Select two Availability Zones within an AWS Region for the SAP HANA cluster
implementation.
Use one or more VPC routing tables which are attached to the two subnets being used.
Optionally, host a Route53 private hosted naming zone to manage names in the VPC.
All components of the cluster and AWS services should reside in the same AWS account.
The use of networking components such as a VPC route table in another account (Shared
VPC setup) is not supported. If a multi-account landscape is required, we advise you to reach
out to your AWS representative to have a look at implementing a Transit Gateway for cross
account/VPC access.
The virtual IP address for the SAP HANA will be an AWS Overlay IP address. This is an AWS-
specific routing table entry which will send network traffic to an instance, no matter which AZ
the instance is located in. The SUSE Linux Enterprise High Availability Extension cluster updates
this VPC routing table entry as needed.
The Overlay IP address needs to be outside the VPC CIDR range. All SAP system
components within the VPC can reach an AWS EC2 instance through this Overlay IP address.
On-premises users and clients, like SAP HANA Studio, cannot reach the Overlay IP address
because the AWS Virtual Private Network (VPN) gateway is not able to route traffic to the
Overlay IP address. To overcome this limitation, refer to AWS' Overlay IP documentation and
learn how to use native AWS services with the Overlay IP address for your on-premises clients
and users:
Below are the prerequisites which need to be met before starting the cluster implementation:
Have one or more routing tables which are implicitly or explicitly attached to
the two subnets
Allow network traffic in between the two subnets
Use the checklist in the appendix to note down all information needed before starting the
installation.
Port 5405 for inbound UDP: Required by the cluster’s communication layer (corosync).
Port 7630 for inbound TCP: Used by the SUSE "HAWK" Web GUI.
There are two options for which Amazon Machine Image (AMI) to use:
Use the AWS Marketplace AMI "SUSE Linux Enterprise Server for SAP Applications 12
SP5" which already includes the required SUSE subscription and all High Availability
components for this solution.
Use a "SUSE Linux Enterprise Server for SAP" AMI. Search for "suse-sles-sap-12-sp5-byos" in
the list of AMIs. There are several BYOS (Bring Your Own Subscription) AMIs available.
Use these AMIs if you have a valid SUSE subscription. Register your system with the
Subscription Management Tool (SMT) from SUSE, SUSE Manager or directly with the SUSE
Customer Center.
Launch all EC2 instances into the Availability Zone (AZ) specific subnets. The subnets need to
be able to communicate with each other.
Note
It is not possible to migrate from standard "SUSE Linux Enterprise Server" to "SUSE Linux
Enterprise Server for SAP Applications" in AWS. Therefore, use a "SLES for SAP" AMI which
includes the SUSE Linux Enterprise High Availability Extension.
Make sure that both EC2 instances part of the cluster are tagged.
Note
Use only ASCII characters in any AWS tag assigned to cluster managed resources.
4.4.1 Disabling Source/Destination Check for Cluster Instances
The source/destination check needs to be disabled. This can be done through scripts using the
AWS CLI or by using the AWS console.
The following command needs to be executed one time for both EC2 instances that are part
of the cluster:
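A command like the following disables the check for one instance (a sketch, assuming the AWS CLI profile named cluster that is created later in this guide):
# aws ec2 modify-instance-attribute --profile cluster --instance-id EC2-instance --no-source-dest-check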
Replace the variable EC2-instance with the EC2 instance IDs of the two cluster AWS EC2
instances.
The system on which this command gets executed temporarily needs a role with the following
policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1424870324000",
"Effect": "Allow",
"Action": [ "ec2:ModifyInstanceAttribute" ],
"Resource": [
"arn:aws:ec2:region-name:account-id:instance/instance-a",
"arn:aws:ec2:region-name:account-id:instance/instance-b"
]
}
]
}
account-id : The number of the AWS account in which the policy is used
instance-a and instance-b : The two EC2 instance ids participating in the cluster
The source/destination check can be also disabled from the AWS console. It requires the
following action in the console on both EC2 instances (see below).
FIGURE 9: DISABLE SOURCE/DESTINATION CHECK AT CONSOLE
The following IAM policy grants the permissions needed by the AWS Data Provider for SAP:
{
"Statement": [
{
"Effect": "Allow",
"Action": [
"EC2:DescribeInstances",
"EC2:DescribeVolumes"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "cloudwatch:GetMetricStatistics",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::aws-sap-data-provider/config.properties"
}
]
}
For more details about the permissions required by the AWS Data Provider for SAP, refer to
AWS public documentation:
AWS Data Provider for SAP: https://docs.aws.amazon.com/sap/latest/general/aws-data-provider.html
The EC2 instances part of the cluster must have permission to make start and stop API calls to
the other nodes in the cluster as part of the fencing operation. Create an IAM policy with a name
like EC2-stonith-policy with the following content and attach it to the cluster IAM Role:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1424870324000",
"Effect": "Allow",
"Action": [
"ec2:DescribeInstances",
"ec2:DescribeTags"
],
"Resource": "*"
},
{
"Sid": "Stmt1424870324001",
"Effect": "Allow",
"Action": [
"ec2:RebootInstances",
"ec2:StartInstances",
"ec2:StopInstances"
],
"Resource": [
"arn:aws:ec2:region-name:account-id:instance/instance-a",
"arn:aws:ec2:region-name:account-id:instance/instance-b"
]
}
]
}
This policy allows the EC2 STONITH agent to make the proper API calls to operate correctly.
From the above example, replace the following variables with the appropriate names:
account-id : The number of the AWS account in which the policy is used
instance-a and instance-b : The two EC2 instance IDs participating in the cluster
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "ec2:ReplaceRoute",
"Resource": "arn:aws:ec2:region-name:account-id:route-table/rtb-XYZ"
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": "ec2:DescribeRouteTables",
"Resource": "*"
}
]
}
This policy allows the agent to update the routing table(s) where the Overlay IP address has
been configured. From the above example, replace the following variables with the appropriate
names:
account-id : The number of the AWS account in which the policy is used
rtb-XYZ : The VPC routing table identifier to be configured by the cluster. It is possible to
add more routing table IDs to the resource clause if you need to use multiple routing tables.
Select the route table used by the subnets from one of your SAP EC2 instances and their
application servers.
Click “Edit”.
Scroll to the end of the list and click “Add another route”.
Add the Overlay IP address of the SAP HANA database. Use /32 as netmask (example:
192.168.10.1/32). Add the Elastic Network Interface (ENI) name of one of your existing
instances. The resource agent will modify this later automatically.
Note
The VPC routing table containing the routing entry needs to be associated with all
subnets in the VPC which have consumers or clients of the service. Add more routing
tables if required. Check the AWS VPC documentation at
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Introduction.html for more
details on routing table associations.
This section contains information you should consider during the installation of the operating
system.
For the scope of this document, first SUSE Linux Enterprise Server for SAP Applications is
configured. Then the SAP HANA database including the system replication is set up. Finally the
automation with the cluster is set up and configured.
Set the following parameter in /etc/cloud/cloud.cfg so that the host name is preserved across reboots:
preserve_hostname: true
Note
To learn how to change the default host name for an EC2 instance running SUSE
Linux Enterprise, refer to AWS' public documentation at
https://aws.amazon.com/premiumsupport/knowledge-center/linux-static-hostname-suse/ .
Depending on the installed packages, a conflict may be shown, like in the below example:
It will use an AWS CLI profile which needs to be created for the user root on both instances. The
SUSE resource agents require a profile that creates output in text format.
The name of the AWS CLI profile is arbitrary. The name chosen in this example is cluster. The
region of the instance needs to be added as well. Replace the string region-name with your target
region in the following example.
One way to create such a profile is to create a file /root/.aws/config with the following content:
[default]
region = region-name
[profile cluster]
region = region-name
output = text
The other way is to use the aws configure CLI command in the following way:
# aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]: region-name
Default output format [None]:
Add the following environment variables to the root user's .bashrc file and to the
/etc/sysconfig/pacemaker file:
EXAMPLE 11: ENVIRONMENT VARIABLES FOR PROXY
export HTTP_PROXY=http://a.b.c.d:n
export HTTPS_PROXY=http://a.b.c.d:m
export NO_PROXY=169.254.169.254
Add the following environment variables instead of the ones above if authentication is required:
EXAMPLE 12: ENVIRONMENT VARIABLES FOR PROXY WITH AUTHENTICATION
export HTTP_PROXY=http://username:[email protected]:n
export HTTPS_PROXY=http://username:[email protected]:m
export NO_PROXY=169.254.169.254
There is also the option to configure the proxy system-wide, which is detailed in the following
SUSE Support Knowledgebase article:
2205917 SAP HANA DB: Recommended OS settings for SLES 12 / SLES for SAP Applications 12.
Other related SAP Notes are the following:
- 1275776 Linux: Preparing SLES for SAP environments
- 2382421 Optimizing the Network Configuration on HANA- and OS-Level
5.5 Managing Networking for Cluster Instances
The cluster configuration requires two IP addresses per cluster instance, as corosync requires a
redundant communication ring.
The redundant corosync ring configuration allows the cluster nodes to communicate with each
other using the secondary IP address if there is an issue communicating with each other over
the primary IP address. This avoids unnecessary cluster failovers and split-brain situations.
Refer to the AWS documentation at https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/MultipleIP.html#assignIP-existing
to understand how to assign a secondary IP address.
After the secondary IP address is associated to the cluster instance in AWS, you need to configure
the secondary IP address in the cluster instance. Update the file /etc/sysconfig/network/ifcfg-eth0
as shown below. Replace XX.XX.XX.XX with the new secondary IP address and replace 'XX' with
the two-digit subnet mask.
IPADDR_1="XX.XX.XX.XX/XX"
LABEL_1="1"
The system will read the file and add the secondary IP address after the cluster instance is
rebooted. Additionally, executing the command below as root will add the IP address to the
cluster instance network stack without rebooting.
Replace XX.XX.XX.XX with the new secondary IP address and replace XX with the two digit
subnet mask.
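A minimal sketch of that command, assuming the secondary address is added to interface eth0:
# ip address add XX.XX.XX.XX/XX dev eth0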
SUSE Linux Enterprise Server ships with the cloud-netconfig-ec2 package which contains
scripts to automatically configure network interfaces in an EC2 instance.
This package may remove secondary IP addresses which are managed by the cluster agents
from the network interface. This can cause service interruptions for users of the cluster services.
Perform the following task on all cluster nodes:
Check whether the package cloud-netconfig-ec2 is installed with the following command:
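For example:
# zypper info cloud-netconfig-ec2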
If this package is installed, update the file /etc/sysconfig/network/ifcfg-eth0 and change the
following line to a no setting. If the package is not yet installed, add the following line:
CLOUD_NETCONFIG_MANAGE='no'
Even though this document focuses on the integration of an installed SAP HANA with
system replication already set up into the pacemaker cluster, this chapter summarizes the test
environment. Always use the official documentation from SAP to install SAP HANA and to set
up the system replication.
PREPARATION
Read the SAP Installation and Setup Manuals available at the SAP Marketplace.
ACTIONS
1. Install the SAP HANA Database as described in the SAP HANA Server Installation Guide.
2. Check if the SAP Host Agent is installed on all cluster nodes. If this SAP service is not
installed, install it now.
3. Verify that both databases are up and all processes of these databases are running correctly.
As Linux user <sid>adm, use the command line tool HDB to get an overview of the running
HANA processes. The output of HDB info should be similar to the output shown below:
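A shortened sketch of such output, assuming SID HA1 (column details vary by system):
ha1adm@suse01:/usr/sap/HA1/HDB10> HDB info
USER       PID   ...  COMMAND
ha1adm     6561  ...  hdbnameserver
ha1adm     6635  ...  hdbcompileserver
ha1adm     6637  ...  hdbpreprocessor
ha1adm     6681  ...  hdbindexserver
ha1adm     6712  ...  hdbwebdispatcher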
For more information, read the section Setting Up System Replication of the SAP HANA
Administration Guide.
Procedure
3. Register the secondary database.
EXAMPLE 18: SIMPLE BACKUP FOR A SINGLE CONTAINER (NON MDC) DATABASE
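A minimal sketch of such a backup, assuming SID HA1 and instance number 10, run as <sid>adm on the primary:
suse01:~> hdbsql -u SYSTEM -i 10 "BACKUP DATA USING FILE ('backup')"
0 rows affected (overall time 15.352069 sec; server time 15.347745 sec)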
Important
Without a valid backup, you cannot bring SAP HANA into a system replication
configuration.
Note
Do not use strings like "primary" and "secondary" as site names.
EXAMPLE 19: ENABLE THE PRIMARY
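A minimal sketch, assuming site name WDF and user <sid>adm on node 1:
suse01:~> hdbnsutil -sr_enable --name=WDF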
The mode has changed from “none” to “primary” and the site now has a site name and a site ID.
To stop the secondary you can use the command line tool HDB.
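For example, as user <sid>adm on node 2:
suse02:~> HDB stop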
EXAMPLE 22: COPY THE KEY AND KEY-DATA FILE FROM THE PRIMARY TO THE SECONDARY SITE
Beginning with SAP HANA 2.0, the system replication is running encrypted. This is why
the key files need to be copied over from the primary to the secondary site.
cd /usr/sap/<SID>/SYS/global/security/rsecssfs
rsync -va {,<node1-siteB>:}$PWD/data/SSFS_<SID>.DAT
rsync -va {,<node1-siteB>:}$PWD/key/SSFS_<SID>.KEY
EXAMPLE 23: REGISTER THE SECONDARY
...
suse02:~> hdbnsutil -sr_register --name=ROT \
--remoteHost=suse01 --remoteInstance=10 \
--replicationMode=sync --operationMode=logreplay
adding site ...
checking for inactive nameserver ...
nameserver suse02:30001 not responding.
collecting information ...
updating local ini files ...
done.
The remoteHost is the primary node in our case, the remoteInstance is the database instance
number (here 10).
Now start the database instance again and verify the system replication status. On the secondary
node, the mode should be one of "SYNC" or "SYNCMEM". "ASYNC" is also a possible replication
mode but not supported with automated cluster takeover. The mode depends on the "sync"
option defined during the registration of the secondary.
To start the new secondary, use the command line tool HDB. Then check the SR
configuration using hdbnsutil -sr_stateConfiguration.
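A minimal sketch, as user <sid>adm on node 2:
suse02:~> HDB start
suse02:~> hdbnsutil -sr_stateConfiguration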
To view the replication state of the whole SAP HANA cluster, use the following command as
<sid>adm user on the primary node:
The Python script systemReplicationStatus.py provides details about the current system
replication status.
suse01:~> HDBSettings.sh systemReplicationStatus.py --sapcontrol=1
...
site/2/SITE_NAME=ROT1
site/2/SOURCE_SITE_ID=1
site/2/REPLICATION_MODE=SYNC
site/2/REPLICATION_STATUS=ACTIVE
site/1/REPLICATION_MODE=PRIMARY
site/1/SITE_NAME=WDF1
local_site_id=1
...
8 Setting Up SAP HANA HA/DR Providers
This step is mandatory to inform the cluster immediately if the secondary gets out of sync. The
hook is called by SAP HANA using the HA/DR provider interface at the point in time when the
secondary gets out of sync. This is typically the case when the first commit pending is released.
The hook is called by SAP HANA again when the system replication is back.
Procedure
Integrate the hook into the global.ini file (to do that offline, SAP HANA needs to be stopped).
Use the hook from the SAPHanaSR package (available since version 0.153). Optionally copy it
to your preferred directory like /hana/share/myHooks. The hook must be available on all SAP
HANA cluster nodes.
EXAMPLE 26: STOP SAP HANA
sapcontrol -nr <instanceNumber> -function StopSystem
[ha_dr_provider_SAPHanaSR]
provider = SAPHanaSR
path = /usr/share/SAPHanaSR
execution_order = 1
[trace]
ha_dr_saphanasr = info
delta_datashipping
logreplay
logreplay_readaccess
Until a takeover and re-registration in the opposite direction, the entry for the operation
mode is missing on your primary site. The first operation mode which was available was
delta_datashipping. Today the preferred modes for HA are logreplay or logreplay_readaccess. Using
the operation mode logreplay makes your secondary site in the SAP HANA system replication
a hot standby system. For more details regarding all operation modes check the available SAP
documentation such as "How To Perform System Replication for SAP HANA".
EXAMPLE 28: CHECKING THE OPERATION MODE
Check both global.ini files and, if needed, add the operation mode to the section [system_replication]:
[system_replication]
operation_mode = logreplay
8.3 Allowing <sidadm> to Access the Cluster
The current version of the SAPHanaSR Python hook uses the command sudo to allow the
<sid>adm user to access the cluster attributes. In Linux you can use visudo to start the vi
editor for the /etc/sudoers configuration file.
The user <sid>adm must be able to set the cluster attributes hana_<sid>_site_srHook_*.
The SAP HANA system replication hook needs password-free access. The following example
limits the sudo access to exactly setting the needed attribute.
Replace the <sid> by the lowercase SAP system ID (like ha1).
EXAMPLE 29: ENTRY IN SUDO PERMISSIONS /ETC/SUDOERS FILE
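A sketch of such an entry, assuming SID ha1 (the hook sets the attribute via the crm_attribute command):
# SAPHanaSR Python hook: allow ha1adm to set the srHook attribute
ha1adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_ha1_site_srHook_*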
This chapter describes the configuration of the cluster software SUSE Linux Enterprise High
Availability Extension, which is part of SUSE Linux Enterprise Server for SAP Applications, and
the SAP HANA database integration.
ACTIONS
9.1 Installation
AWS "SLES for SAP" AMIs already have all High Availability Extension packages installed.
It is recommended to update all packages to make sure that the latest revision of the cluster
packages and AWS agents are installed.
EXAMPLE 30: UPDATING SUSE LINUX ENTERPRISE SERVER WITH ALL LATEST PATCHES
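For example:
suse01:~ # zypper update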
By default, the cluster service (pacemaker) is disabled and not set to start during boot. Thus at
this point the cluster should not be running. However, if you previously configured pacemaker
and it is running, proceed with a "stop" by using the following command:
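For example:
suse01:~ # systemctl stop pacemaker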
9.2.2 Creating Keys
On Node 1, generate a corosync secret key used to encrypt all cluster communication:
suse01:~# corosync-keygen
A new key file will be created at /etc/corosync/authkey. This file needs to be copied to the
same location on Node 2. After generating and transferring the key file to the second node,
verify that permissions and ownership on both nodes are the same:
EXAMPLE 34: CHECKING PERMISSIONS AND OWNERSHIP FOR COROSYNC KEY FILE
suse01:~ # ls -l /etc/corosync/authkey
-r-------- 1 root root 128 Oct 23 10:51 /etc/corosync/authkey
Note
When using the following configuration as an example for the file /etc/corosync/
corosync.conf, replace the IP addresses from the file below.
totem {
crypto_hash: sha1
crypto_cipher: aes256
clear_node_high_bit: yes
interface {
ringnumber: 0
bindnetaddr: ip-local-node
mcastport: 5405
ttl: 1
}
transport: udpu
}
logging {
fileline: off
to_logfile: yes
to_syslog: yes
logfile: /var/log/cluster/corosync.log
debug: off
timestamp: on
logger_subsys {
subsys: QUORUM
debug: off
}
}
nodelist {
node {
ring0_addr: ip-node-1-a
# redundant ring
ring1_addr: ip-node-1-b
nodeid: 1
}
node {
ring0_addr: ip-node-2-a
# redundant ring
ring1_addr: ip-node-2-b
nodeid: 2
}
}
quorum {
# Enable and configure quorum subsystem (default: off)
# see also corosync.conf.5 and votequorum.5
provider: corosync_votequorum
expected_votes: 2
two_node: 1
}
Replace the variables ip-node-1-a, ip-node-1-b, ip-node-2-a, ip-node-2-b and ip-local-node from the
above sample file.
ip-local-node: Use the IP address of the node where the file is being configured. This IP
will be different between cluster nodes.
The chosen settings for crypto_cipher and crypto_hash are suitable for clusters in AWS. They may
be modied according to SUSE’s documentation if strong encryption of cluster communication
is desired.
Note
Remember to change the password of the user hacluster.
Check the cluster status with crm_mon. We use the option -r to also see resources which may
be configured but stopped. But at this stage crm_mon is expected to display no services.
EXAMPLE 37: CHECKING CLUSTER STATUS USING CRM_MON
# crm_mon -r
The command will show the "empty" cluster and will print something like the computer output
shown below. The most interesting information for now is that there are two nodes in the status
"online", and the message "partition with quorum".
EXAMPLE 38: CLUSTER STATUS AFTER FIRST START
Stack: corosync
Current DC: prihana (version 1.1.19+20181105.ccd6b5b10-3.19.1-1.1.19+20181105.ccd6b5b10)
- partition with quorum
Last updated: Mon Sep 28 18:36:16 2020
Last change: Mon Sep 28 18:36:09 2020 by root via crm_attribute on suse01
2 nodes configured
0 resources configured
No resources
Corosync's redundant ring configuration can be checked with the following command:
corosync-cfgtool -s
This will display a result like the following one for a cluster node with redundant corosync
rings and IP addresses 172.16.100.179 and 172.16.100.138:
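A sketch of the expected output for such a node:
Printing ring status.
Local node ID 1
RING ID 0
        id      = 172.16.100.179
        status  = ring 0 active with no faults
RING ID 1
        id      = 172.16.100.138
        status  = ring 1 active with no faults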
Note
It is not recommended to automatically rejoin a node to a cluster after a system crash with
a reboot. A full inspection and a root cause analysis of the crash is highly recommended
before rejoining the cluster.
Use the command crm to add the objects to CRM. Copy the following examples to a local file,
edit the file and then load the configuration to the CIB:
suse01:~ # vi crm-fileXX
suse01:~ # crm configure load update crm-fileXX
The first example defines the cluster bootstrap options, the resource and operation defaults.
suse01:~ # vi crm-bs.txt
# enter the following to the file crm-bs.txt
property $id="cib-bootstrap-options" \
stonith-enabled="true" \
stonith-action="off" \
stonith-timeout="600s"
rsc_defaults $id="rsc-options" \
resource-stickiness="1000" \
migration-threshold="5000"
op_defaults $id="op-options" \
timeout="600"
Note
In some older SUSE versions, the parameter stonith-action may require a change to
stonith-action="poweroff" .
The setting poweroff forces the EC2 STONITH agent to shut down the EC2 instance in case of a
fencing operation. This is desirable to avoid split-brain scenarios on the AWS platform.
Now, add the configuration to the cluster:
suse01::~ # vi aws-stonith.txt
# enter the following to the file aws-stonith.txt
primitive res_AWS_STONITH stonith:external/ec2 \
op start interval=0 timeout=180 \
op stop interval=0 timeout=180 \
op monitor interval=300 timeout=60 \
meta target-role=Started \
params tag=pacemaker profile=cluster pcmk_delay_max=15
The "tag=pacemaker" entry needs to match the tag chosen for the EC2 instances. The value for
this tag contains the host name returned by the uname -n command. The name of the profile
("cluster" in this example) needs to match the previously configured profile in the AWS CLI.
Name this file for example aws-stonith.txt and add it to the configuration. The following
command needs to be issued as root user:
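For example:
suse01:~ # crm configure load update aws-stonith.txt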
Note
Make sure to execute the STONITH tests as outlined in section Troubleshooting of this
document to verify STONITH on both nodes.
This step requires the Overlay IP address and the resource IDs of the AWS VPC Route Table(s).
Create a le with the following content:
suse01:~ # vi aws-move-ip.txt
# enter the following to the file aws-move-ip.txt
primitive res_AWS_IP ocf:suse:aws-vpc-move-ip \
params ip=overlay-ip-address routing_table=rtb-table interface=eth0 profile=cluster \
op start interval=0 timeout=180 \
op stop interval=0 timeout=180 \
op monitor interval=60 timeout=60
rtb-table : The AWS VPC Route Table(s) resource IDs - if using more than one VPC Route
Table, use a comma (,) as separator (see below).
interface : The Linux network interface identifier (eth0 in this example)
profile : The name of the profile (cluster in this example) needs to match the previously
configured profile in the AWS CLI.
Load this file into the cluster configuration by issuing the following command as superuser:
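For example:
suse01:~ # crm configure load update aws-move-ip.txt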
If you use more than one routing table, list the route table IDs comma-separated in the routing_table parameter:
suse01:~ # vi aws-move-ip.txt
# enter the following to the file aws-move-ip.txt
primitive res_AWS_IP ocf:suse:aws-vpc-move-ip \
params ip=overlay-ip-address routing_table=rtb-table-1,rtb-table-2,rtb-table-N interface=eth0 profile=cluster \
op start interval=0 timeout=180 \
op stop interval=0 timeout=180 \
op monitor interval=60 timeout=60
Note
Make sure to execute the IP tests as outlined in section Troubleshooting of this document to
verify them on both nodes. Checking the configuration for potential problems at the current
point in time will increase the chances to launch the cluster successfully.
9.3.4 SAPHanaTopology
Next, define the group of resources needed, before the HANA instances can be started. Prepare
the changes in a text file, for example crm-saphanatop.txt, and load it with the command:
crm configure load update crm-saphanatop.txt
# vi crm-saphanatop.txt
# enter the following to crm-saphanatop.txt
primitive rsc_SAPHanaTopology_HA1_HDB10 ocf:suse:SAPHanaTopology \
op monitor interval="10" timeout="600" \
op start interval="0" timeout="600" \
op stop interval="0" timeout="300" \
params SID="HA1" InstanceNumber="10"
clone cln_SAPHanaTopology_HA1_HDB10 rsc_SAPHanaTopology_HA1_HDB10 \
meta clone-node-max="1" interleave="true"
Additional information about all parameters can be found with the command:
man ocf_suse_SAPHanaTopology
The most important parameters here are SID and InstanceNumber, which are quite self-
explanatory in the SAP context. Beside these parameters, the timeout values for the operations
(start, monitor, stop) are typical tunables.
9.3.5 SAPHana
Next, define the group of resources needed, before the HANA instances can be started. Edit the
changes in a text file, for example crm-saphana.txt, and load it with the command:
crm configure load update crm-saphana.txt
Additional information about all parameters can be found with the command:
man ocf_suse_SAPHana
# vi crm-saphana.txt
# enter the following to crm-saphana.txt
primitive rsc_SAPHana_HA1_HDB10 ocf:suse:SAPHana \
op start interval="0" timeout="3600" \
op stop interval="0" timeout="3600" \
op promote interval="0" timeout="3600" \
op monitor interval="60" role="Master" timeout="700" \
op monitor interval="61" role="Slave" timeout="700" \
params SID="HA1" InstanceNumber="10" PREFER_SITE_TAKEOVER="true" \
DUPLICATE_PRIMARY_TIMEOUT="7200" AUTOMATED_REGISTER="false"
ms msl_SAPHana_HA1_HDB10 rsc_SAPHana_HA1_HDB10 \
meta clone-max="2" clone-node-max="1" interleave="true"
The most important parameters here are again SID and InstanceNumber. Beside these
parameters, the timeout values for the operations (start, promote, monitor, stop) are typical
tunables.
9.3.6 Constraints
Two constraints organize the correct placement of the virtual IP address for the
client database access and the start order between the two resource agents SAPHana and
SAPHanaTopology.
The AWS IP agent needs to operate on the same node as the SAP HANA Master database. A
constraint forces it to be on the same node.
# vi crm-cs.txt
# enter the following to crm-cs.txt
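# a sketch of the two constraints, reusing the resource names defined above
# (the colocation binds the IP to the master, the order starts SAPHanaTopology first)
colocation col_saphana_ip_HA1_HDB10 2000: res_AWS_IP:Started \
   msl_SAPHana_HA1_HDB10:Master
order ord_SAPHana_HA1_HDB10 Optional: cln_SAPHanaTopology_HA1_HDB10 \
   msl_SAPHana_HA1_HDB10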
Add this file to the configuration. The following command needs to be issued as superuser. It
uses the file name crm-cs.txt:
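For example:
suse01:~ # crm configure load update crm-cs.txt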
This step is optional. If you have an active/active SAP HANA system replication with a read-
enabled secondary, it is possible to integrate the needed second Overlay IP address into the
cluster. This is done by adding a second Overlay IP address resource and a location constraint
binding the address to the secondary site.
# vi crm-re.txt
# enter the following to crm-re.txt
primitive res_AWS_IP_readenabled ocf:suse:aws-vpc-move-ip \
params ip=readenabled-overlay-ip-address routing_table=rtb-table interface=eth0 profile=cluster \
op start interval=0 timeout=180 \
op stop interval=0 timeout=180 \
op monitor interval=60 timeout=60
colocation col_saphana_ip_HA1_HDB10_readenabled 2000: \
res_AWS_IP_readenabled:Started msl_SAPHana_HA1_HDB10:Slave
Now that the cluster has been configured, it should have two online nodes and six
resources. If you configured a second Overlay IP for the read enabled replica, then the cluster
will display seven resources.
The cluster status can be checked with the crm status command:
2 nodes configured
6 resources configured
The above example shows that the Overlay IP resource (res_AWS_IP) is "Started" on node suse01,
along with the SAPHanaTopology resource (cln_SAPHanaTopology_HA1_HDB10) running on both
cluster nodes, and the Master/Slave SAPHana resource (msl_SAPHana_HA1_HDB10), which in the
above example is Master (primary) on node suse01 and Slave (secondary) on node suse02.
10 Testing the Cluster
The list of tests will be enhanced with future updates of this document.
As with any cluster, testing is crucial. Make sure that all test cases derived from customer
expectations are implemented and fully passed. Otherwise the project is likely to fail in
production.
The test prerequisite, if not described differently, is always that both nodes are booted, normal
members of the cluster, and the HANA RDBMS is running. The system replication is in sync
(SOK).
Note
The following tests are designed to run in a sequence. They depend on the exit state of
the preceding tests.
COMPONENT:
Primary Database
DESCRIPTION:
The primary HANA database is stopped during normal cluster operation.
TEST PROCEDURE:
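For example, stop the primary HANA database as user <sid>adm on node 1:
suse01:~> HDB stop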
RECOVERY PROCEDURE:
1. Manually register the old primary (on node 1) with the new primary after takeover
(on node 2) as <sid>adm.
EXPECTED:
1. The cluster detects the stopped primary HANA database (on node 1) and marks the
resource failed.
2. The cluster promotes the secondary HANA database (on node 2) to take over as
primary.
3. The cluster migrates the IP address to the new primary (on node 2).
4. After some time the cluster shows the sync_state of the stopped primary (on node
1) as SFAIL.
6. After the manual register and resource refresh, the system replication pair is marked
as in sync (SOK).
7. The cluster "failed actions" are cleaned up after following the recovery procedure.
Component:
Primary Database
Description:
The primary HANA database is stopped during normal cluster operation.
TEST PROCEDURE:
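For example, stop the primary HANA database as user <sid>adm on node 2:
suse02:~> HDB stop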
RECOVERY PROCEDURE:
1. Manually register the old primary (on node 2) with the new primary after takeover
(on node 1) as <sid>adm.
EXPECTED:
1. The cluster detects the stopped primary HANA database (on node 2) and marks the
resource failed.
2. The cluster promotes the secondary HANA database (on node 1) to take over as
primary.
3. The cluster migrates the IP address to the new primary (on node 1).
4. After some time, the cluster shows the sync_state of the stopped primary (on node
2) as SFAIL.
6. After the manual register and resource refresh, the system replication pair is marked
as in sync (SOK).
7. The cluster "failed actions" are cleaned up after following the recovery procedure.
10.1.3 Test: Crash Primary Database on Availability Zone A (Node 1)
Component:
Primary Database
Description:
Simulate a complete breakdown of the primary database system.
TEST PROCEDURE:
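For example, kill the database processes as user <sid>adm on node 1:
suse01:~> HDB kill-9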
RECOVERY PROCEDURE:
1. Manually register the old primary (on node 1) with the new primary after takeover
(on node 2) as <sid>adm.
EXPECTED:
1. The cluster detects the stopped primary HANA database (on node 1) and marks the
resource failed.
2. The cluster promotes the secondary HANA database (on node 2) to take over as
primary.
3. The cluster migrates the IP address to the new primary (on node 2).
4. After some time, the cluster shows the sync_state of the stopped primary (on node
1) as SFAIL.
6. After the manual register and resource refresh, the system replication pair is marked
as in sync (SOK).
7. The cluster "failed actions" are cleaned up after following the recovery procedure.
Component:
Primary Database
Description:
Simulate a complete breakdown of the primary database system.
TEST PROCEDURE:
RECOVERY PROCEDURE:
1. Manually register the old primary (on node 2) with the new primary after takeover
(on node 1) as <sid>adm.
EXPECTED:
1. The cluster detects the stopped primary HANA database (on node 2) and marks the
resource failed.
2. The cluster promotes the secondary HANA database (on node 1) to take over as
primary.
3. The cluster migrates the IP address to the new primary (on node 1).
4. After some time, the cluster shows the sync_state of the stopped primary (on node
2) as SFAIL.
6. After the manual register and resource refresh, the system replication pair is marked
as in sync (SOK).
7. The cluster "failed actions" are cleaned up after following the recovery procedure.
Component:
Cluster node of primary site
Description:
Simulate a crash of the primary site node running the primary HANA database.
TEST PROCEDURE:
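For example, as root on node 1, crash the node with an immediate reset via the kernel:
suse01:~ # echo 'b' > /proc/sysrq-trigger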
RECOVERY PROCEDURE:
1. AWS infrastructure has stopped the fenced instance. Restart it with AWS console or
AWS CLI tools. Execute the following command after the instance has booted.
3. Manually register the old primary (on node 1) with the new primary after takeover
(on node 2) as <sid>adm.
suse01# crm resource refresh rsc_SAPHana_HA1_HDB10 suse01
EXPECTED:
1. The cluster detects the failed node (node 1) and declares it UNCLEAN and sets the
secondary node (node 2) to status "partition with quorum".
4. The cluster promotes the secondary HANA database (on node 2) to take over as
primary.
5. The cluster migrates the IP address to the new primary (on node 2).
6. After some time, the cluster shows the sync_state of the stopped primary (on node
2) as SFAIL.
8. After the manual register and resource refresh, the system replication pair is marked
as in sync (SOK).
9. The cluster "failed actions" are cleaned up after following the recovery procedure.
Component:
Cluster node of secondary site
Description:
Simulate a crash of the secondary site node running the primary HANA database.
TEST PROCEDURE:
RECOVERY PROCEDURE:
1. AWS infrastructure has stopped the fenced instance. Restart it with AWS console or
AWS CLI tools. Execute the following command after the instance has booted.
3. Manually register the old primary (on node 2) with the new primary after takeover
(on node 1) as <sid>adm.
EXPECTED:
1. The cluster detects the failed secondary node (node 2) and declares it UNCLEAN and
sets the primary node (node 1) to status "partition with quorum".
4. The cluster promotes the secondary HANA database (on node 1) to take over as
primary.
5. The cluster migrates the IP address to the new primary (on node 1).
6. After some time, the cluster shows the sync_state of the stopped secondary (on node
2) as SFAIL.
8. After the manual register and resource refresh, the system replication pair is marked
as in sync (SOK).
9. The cluster "failed actions" are cleaned up after following the recovery procedure.
10.1.7 Test: Stop Secondary Database on Availability Zone B (Node 2)
Component:
Secondary HANA database
Description:
The secondary HANA database is stopped during normal cluster operation.
TEST PROCEDURE:
RECOVERY PROCEDURE:
1. Refresh the failed resource status of the secondary HANA database (on node 2) as
root.
EXPECTED:
1. The cluster detects the stopped secondary database (on node 2) and marks the
resource failed.
2. The cluster detects the broken system replication and marks it as failed (SFAIL).
3. The cluster restarts the secondary HANA database on the same node (node 2).
4. The cluster detects that the system replication is in sync again and marks it as ok
(SOK).
5. The cluster "failed actions" are cleaned up after following the recovery procedure.
Component:
Secondary HANA database
Description:
Simulate a complete breakdown of the secondary database system.
TEST PROCEDURE:
RECOVERY PROCEDURE:
1. Clean up the failed resource status of the secondary HANA database (on node 2)
as root.
EXPECTED:
1. The cluster detects the stopped secondary database (on node 2) and marks the
resource failed.
2. The cluster detects the broken system replication and marks it as failed (SFAIL).
3. The cluster restarts the secondary HANA database on the same node (node 2).
4. The cluster detects that the system replication is in sync again and marks it as ok
(SOK).
5. The cluster "failed actions" are cleaned up after following the recovery procedure.
Component:
Cluster node of secondary site
Description:
Simulate a crash of the secondary site node running the secondary HANA database.
TEST PROCEDURE:
RECOVERY PROCEDURE:
1. AWS infrastructure has stopped the fenced instance. Restart it with AWS console or
AWS CLI tools. Execute the following command after the instance has booted.
EXPECTED:
1. The cluster detects the failed secondary node (node 2) and declares it UNCLEAN and
sets the primary node (node 1) to status "partition with quorum".
4. After some time, the cluster shows the sync_state of the stopped secondary (on node
2) as SFAIL.
5. When the fenced node (node 2) rejoins the cluster, the former secondary HANA
database is started automatically.
6. The cluster detects that the system replication is in sync again and marks it as ok
(SOK).
Note
The following tests are designed to run in a sequence. They depend on the exit state of
the preceding tests.
COMPONENT:
Primary Database
DESCRIPTION:
TEST PROCEDURE:
RECOVERY PROCEDURE:
EXPECTED:
1. The cluster detects the stopped primary HANA database (on node 1) and marks the
resource failed.
2. The cluster promotes the secondary HANA database (on node 2) to take over as
primary.
3. The cluster migrates the IP address to the new primary (on node 2).
4. After some time, the cluster shows the sync_state of the stopped primary (on node
1) as SFAIL.
6. After the automated register and resource refresh, the system replication pair is
marked as in sync (SOK).
7. The cluster "failed actions" are cleaned up after following the recovery procedure.
COMPONENT:
DESCRIPTION:
Simulate a crash of the site B node running the primary HANA database.
TEST PROCEDURE:
RECOVERY PROCEDURE:
EXPECTED:
1. The cluster detects the failed primary node (node 2), declares it UNCLEAN, and
sets the surviving secondary node (node 1) to status "partition with quorum".
4. The cluster promotes the secondary HANA database (on node 1) to take over as
primary.
5. The cluster migrates the IP address to the new primary (on node 1).
6. After some time, the cluster shows the sync_state of the stopped secondary (on node
2) as SFAIL.
7. When the fenced node (node 2) rejoins the cluster, the former primary becomes a
secondary.
9. The cluster detects that the system replication is in sync again and marks it as ok
(SOK).
11 Administration
In your project, you should:
do intensive testing.
In your project, avoid:
creating a cluster without proper time synchronization or unstable name resolution for
hosts, users and groups.
adding location rules for the clone, master/slave or IP resource. Only location rules
mentioned in this setup guide are allowed.
"migrating" or "moving" resources in crm-shell, HAWK or other tools, because this would
add client-prefer location rules. These activities are therefore completely forbidden.
11.2 Monitoring and Tools
You can use the High Availability Web Console (HAWK), SAP HANA Studio and different
command line tools to check the cluster status.
If you set up the cluster using ha-cluster-init and you have installed all packages as described
above, your system will provide a very useful Web interface. You can use this graphical Web
interface to get an overview of the complete cluster status, perform administrative tasks or
configure resources and cluster bootstrap parameters. Read the product manuals for a complete
documentation of this powerful user interface.
Database-specific administration and checks can be done with SAP HANA Studio.
11.2.3 Cluster Command Line Tools
A simple overview can be obtained by calling crm_mon . Using the option -r also shows
stopped but already configured resources. Option -1 tells crm_mon to output the status once
instead of periodically.
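For example, call:
suse01:~ # crm_mon -r -1
The beginning of the output looks like this: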
Stack: corosync
Current DC: suse01 (version 1.1.19+20181105.ccd6b5b10-3.19.1-1.1.19+20181105.ccd6b5b10) -
partition with quorum
Last updated: Mon Sep 28 18:36:16 2020
Last change: Mon Sep 28 18:36:09 2020 by root via crm_attribute on prihana
2 nodes configured
6 resources configured
suse01:~ # SAPHanaSR-showAttr
Host \ Attr clone_state remoteHost roles ... site srmode sync_state ...
---------------------------------------------------------------------------------
suse01 PROMOTED suse02 4:P:master1:... WDF sync PRIM ...
suse02 DEMOTED suse01 4:S:master1:... ROT sync SOK ...
SAPHanaSR-showAttr also supports other output formats such as script. The script format is
intended to allow running filters. Beginning with version 0.153, the SAPHanaSR package also
provides the filter engine SAPHanaSR-filter . Combining SAPHanaSR-showAttr with
output format script and SAPHanaSR-filter , you can define effective queries:
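For example, a minimal query for the roles attributes (the same form is used in the migration example further below):
suse01:~ # SAPHanaSR-showAttr --format=script | \
SAPHanaSR-filter --search='roles'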
In our example, the administrator killed the primary SAP HANA instance using the command
HDB kill-9 . This happened around 21:10.
In the above example the attributes indicate that at the beginning suse01 was running the
primary (4:P) and suse02 was running the secondary (4:S).
At 21:11 (CET) the primary on suse01 suddenly died; its roles attribute fell to 1:P.
The cluster reacted and initiated a takeover. At 21:12 (CET) the former secondary was
detected as the new running master (changing from 4:S to 4:P).
To check the status of an SAP HANA database and to find out whether the cluster should react,
you can use the script landscapeHostConfiguration , called as Linux user <sid>adm.
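A typical call, sketched here assuming that the HANA helper script HDBSettings.sh is used to set up the Python environment (the return code is what the resource agent evaluates):
suse01:~> HDBSettings.sh landscapeHostConfiguration.py; echo RC: $?
The output looks like the following: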
| Host | Active | ... Config Role | Actual Role | Config Role | Actual Role |
| ------ | ------ | ... ------------ | ----------- | ----------- | ----------- |
| suse01 | yes | ... master 1 | master | worker | master |
Following the SAP HA guideline, the SAPHana resource agent interprets the return codes in the
following way:
Return Code | Interpretation
4 | SAP HANA database is up and OK. The cluster interprets this as a correctly running database.
3 | SAP HANA database is up and in status info. The cluster interprets this as a correctly running database.
2 | SAP HANA database is up and in status warning. The cluster interprets this as a correctly running database.
1 | SAP HANA database is down. If the database should be up and is not down by intention, this could trigger a takeover.
11.3 Maintenance
To receive updates for the operating system or the SUSE Linux Enterprise High Availability
Extension, it is recommended to register your systems to either a local SUSE Manager or
Subscription Management Tool (SMT) or remotely with SUSE Customer Center.
11.4 Reconfiguring the Cluster After a Takeover
The nodes of the HAE Cluster monitor each other. They will shut down unresponsive or
misbehaving nodes prior to any failover actions to prevent data corruption. Setting the AWS
stonith-action to powero will permanently shut down the defect cluster node. This will expedite
a takeover on AWS.
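For example, the property can be set with crmsh; the same value appears in the example configuration in Section 13:
suse01:~ # crm configure property stonith-action=poweroff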
The default setting reboot makes the STONITH agent wait until a reboot has been successfully
completed. This delays the reconfiguration of the SAP HANA database. Re-integrating a
faulty cluster node into the cluster needs to be performed manually, since it requires an
investigation of why the cluster node did not operate as expected.
Restarting the second (faulty) cluster node automatically can be configured as well. However,
this bears the risk that the remaining node gets harmed by an incorrectly acting second
(faulty) node. The reconfiguration of the second (faulty) node happens through the following
steps:
3. Boot SAP HANA manually. Check the instance health. Fix a potential defect. Shut down
SAP HANA.
6. Restart the HAE cluster with the command systemctl start pacemaker as superuser.
This process can take several minutes.
A takeover is now completed. The roles of the two cluster nodes have been flipped. The SAP
HANA database is now protected against future failure events.
For updating SAP HANA database systems in system replication, you need to follow the defined
SAP processes. This section describes the steps to be done before and after the update procedure
to get the system replication automated again.
SUSE has optimized the SAP HANA maintenance process in the cluster. The improved
procedure only sets the master-slave-resource to maintenance and keeps the rest of the cluster
(SAPHanaTopology clones and IPaddr2 vIP resource) still active. Using the updated procedure
allows a seamless SAP HANA maintenance in the cluster, as the virtual IP address can
automatically follow the running primary.
Prepare the cluster not to react to the maintenance work to be done on the SAP HANA database
systems. Set the master-slave resource to be unmanaged and the cluster nodes to maintenance
mode.
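A minimal sketch of these two actions with crmsh, assuming the illustrative resource and node names from the example configuration in Section 13:
suse01:~ # crm resource unmanage msl_SAPHana_HA1_HDB10
suse01:~ # crm node maintenance suse01
suse01:~ # crm node maintenance suse02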
Note
If your maintenance procedure requires a node reboot, the pacemaker service may be
automatically started by systemd when the node comes back online. If HANA System
Replication was disabled during the maintenance activities, pacemaker will fail to start
the SAP HANA cluster resource and will throw an error message. This can be
avoided by disabling the automatic start of the pacemaker service during boot until
the maintenance is complete ( systemctl disable pacemaker ). SAP HANA System
Replication must be configured and functioning normally before the pacemaker service
is started and/or the cluster maintenance mode is released. We strongly recommend
following the SAP guides on HANA update procedures.
Update
Process the SAP Update for both SAP HANA database systems. This procedure is
described by SAP.
After the SAP HANA update is complete on both sites, tell the cluster about the end
of the maintenance process. This allows the cluster to actively control and monitor
the SAP HANA resources again:
crm resource refresh <master-slave-resource>
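A sketch of the complete sequence to end the maintenance, again assuming the illustrative names from Section 13 ( crm node ready reverts the maintenance mode set before):
suse01:~ # crm resource refresh msl_SAPHana_HA1_HDB10
suse01:~ # crm resource manage msl_SAPHana_HA1_HDB10
suse01:~ # crm node ready suse01
suse01:~ # crm node ready suse02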
In the following procedures, we assume the primary to be running on node 1 and the secondary
on node 2. The goal is to "exchange" the roles of the nodes, so that finally the primary runs
on node 2 and the secondary runs on node 1.
There are different methods to exchange the roles. The following procedure
shows how to tell the cluster to "accept" a role change via native HANA commands.
Pre move
Set the <master-slave-resource> to "maintenance". This could be done on any
cluster node.
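A sketch of this step with crmsh, using the illustrative resource name:
suse01:~ # crm resource maintenance msl_SAPHana_HA1_HDB10 on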
Stop the primary SAP HANA database system. Enter the command in our
example on node 1 as user <sid>adm.
HDB stop
Start the takeover process on the secondary SAP HANA database system. Enter
the command in our example on node 2 as user <sid>adm.
hdbnsutil -sr_takeover
Register the former primary to become the new secondary. Enter the command
in our example on node 1 as user <sid>adm.
hdbnsutil -sr_register --remoteHost=suse02 --remoteInstance=10 \
--replicationMode=sync --name=WDF \
--operationMode=logreplay
Start the new secondary SAP HANA database system. Enter the command in
our example on node 1 as user <sid>adm.
HDB start
Post Migrate
Wait some time until SAPHanaSR-showAttr shows both SAP HANA database
systems to be up again (the field roles must start with the digit 4). The new
secondary should have role "S" (for secondary).
Tell the cluster to forget about the former master-slave roles and to re-monitor
the failed master. The command could be submitted on any cluster node as
user root.
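A sketch of this cleanup with crmsh, using the illustrative resource name (the second command reverts the maintenance set in the pre-move step):
suse01:~ # crm resource refresh msl_SAPHana_HA1_HDB10
suse01:~ # crm resource maintenance msl_SAPHana_HA1_HDB10 off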
Now we explain how to use the cluster to partially automate the migration. For the described
attribute query using SAPHanaSR-showAttr and SAPHanaSR-filter , you need at least
SAPHanaSR package version 0.153.
EXAMPLE 53: MOVING AN SAP HANA PRIMARY USING THE CLUSTER TOOLSET
Create a "move away" rule from this node by using the force option.
Because of the "move away" (force) rule, the cluster will stop the current primary
and then run a promote on the secondary site, if the system replication was in sync
before. Do not migrate the primary if the status of the system replication
is not in sync (SFAIL).
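A sketch of the move, using the illustrative master/slave resource name (crmsh accepts the force keyword after the resource name):
suse01:~ # crm resource move msl_SAPHana_HA1_HDB10 force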
Important
Migration without the force option will cause a takeover without the former
primary being stopped. Only the migration with the force option is supported.
Note
The crm resource command move was previously named migrate . The
migrate command is still valid but deprecated.
Wait until the secondary has completely taken over the new primary role. Verify
this with the command line tool SAPHanaSR-showAttr by checking the "roles"
attribute of the new primary: it must start with "4:P".
If you have set up AUTOMATED_REGISTER="true" , you can skip this step. In other
cases you now need to register the old primary. Enter the command in our example
on node 1 as user <sid>adm.
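The register call is the same as in the manual procedure above, with the values from this guide's example landscape:
suse01:~> hdbnsutil -sr_register --remoteHost=suse02 --remoteInstance=10 \
--replicationMode=sync --name=WDF \
--operationMode=logreplay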
Clear the ban rules of the resource to allow the cluster to start the new secondary.
Note
The crm resource command clear was previously named unmigrate . The
unmigrate command is still valid but deprecated.
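A sketch of clearing the ban rules, using the illustrative resource name:
suse01:~ # crm resource clear msl_SAPHana_HA1_HDB10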
Wait until the new secondary has started. Verify this with the command line tool
SAPHanaSR-showAttr by checking the "roles" attribute of the new secondary: it must
start with "4:S".
suse01:~ # SAPHanaSR-showAttr --format=script | \
SAPHanaSR-filter --search='roles'
Mon Nov 11 20:38:50 2019; Hosts/suse01/roles=4:S:master1::worker:
Mon Nov 11 20:38:50 2019; Hosts/suse02/roles=4:P:master1:master:worker:master
Blog in 2014 - Fail-Safe Operation of SAP HANA®: SUSE Extends Its High Availability Solution
http://scn.sap.com/community/hana-in-memory/blog/2014/04/04/fail-safe-operation-of-sap-
hana-suse-extends-its-high-availability-solution
Release Notes
https://www.suse.com/releasenotes
TID Estimate correct multipath timeout
http://www.suse.com/support/kb/doc.php?id=7008216
crm_simulate(8)
cs_clusterstate(8)
ocf_suse_SAPHana(7)
ocf_suse_SAPHanaTopology(7)
SAPHanaSR(7)
SAPHanaSR-showAttr(8)
SAPHanaSR-replay-archive(8)
SAPHanaSR_maintenance_examples(8)
2205917 - SAP HANA DB: Recommended OS settings for SLES 12 / SLES for SAP Applications 12
https://launchpad.support.sap.com/#/notes/2205917
1944799 - SAP HANA Guidelines for SLES Operating System Installation
https://launchpad.support.sap.com/#/notes/1944799
13 Examples
node suse01
node suse02
primitive res_AWS_STONITH stonith:external/ec2 \
params tag=pacemaker profile=cluster
primitive rsc_ip_HA1_HDB10 ocf:suse:aws-vpc-move-ip \
params ip=192.168.10.15 routing_table=rtb-XYZ interface=eth0 profile=cluster \
op start interval=0 timeout=180 \
op stop interval=0 timeout=180 \
op monitor interval=120 timeout=60
ms msl_SAPHana_HA1_HDB10 rsc_SAPHana_HA1_HDB10 \
meta clone-max="2" clone-node-max="1" interleave="true"
clone cln_SAPHanaTopology_HA1_HDB10 rsc_SAPHanaTopology_HA1_HDB10 \
meta clone-node-max="1" interleave="true"
colocation col_saphana_ip_HA1_HDB10 2000: \
rsc_ip_HA1_HDB10:Started msl_SAPHana_HA1_HDB10:Master
order ord_SAPHana_HA1_HDB10 2000: \
cln_SAPHanaTopology_HA1_HDB10 msl_SAPHana_HA1_HDB10
property cib-bootstrap-options: \
have-watchdog=false \
dc-version=1.1.15-21.1-e174ec8 \
cluster-infrastructure=corosync \
stonith-enabled=true \
stonith-action=poweroff \
stonith-timeout=600s \
last-lrm-refresh=1518102942 \
maintenance-mode=false
rsc_defaults $id="rsc_default-options" \
resource-stickiness="1000" \
migration-threshold="5000"
op_defaults $id="op_defaults-options" \
timeout="600"
totem {
version: 2
rrp_mode: passive
token: 30000
consensus: 36000
token_retransmits_before_loss_const: 6
secauth: on
crypto_hash: sha1
crypto_cipher: aes256
clear_node_high_bit: yes
interface {
ringnumber: 0
bindnetaddr: 10.79.254.249
mcastport: 5405
ttl: 1
}
transport: udpu
}
nodelist {
node {
ring0_addr: 10.79.254.249
ring1_addr: 10.79.253.249
nodeid: 1
}
node {
ring0_addr: 10.79.9.213
ring1_addr: 10.79.10.213
nodeid: 2
}
}
logging {
fileline: off
to_logfile: yes
to_syslog: yes
logfile: /var/log/cluster/corosync.log
debug: off
timestamp: on
logger_subsys {
subsys: QUORUM
debug: off
}
}
quorum {
# Enable and configure quorum subsystem (default: off)
# see also corosync.conf.5 and votequorum.5
provider: corosync_votequorum
expected_votes: 2
two_node: 1
}
Item | Status/Value
All systems have been updated to the latest patch level |
VPC ID |
Checklist AWS Cluster Setup
Item | Status/Value
EC2 Instance Id |
ENI ID |
1st IP address |
2nd IP address |
Hostname |
Checklist AWS Cluster Setup
Item | Status/Value
Has the AWS CLI profile cluster been created and its output format set to text? |
EC2 Instance Id |
ENI ID |
1st IP address |
2nd IP address |
Hostname |
Item | Status/Value
IP address |
Checklist AWS Cluster Setup
Item | Status/Value
Internet access |
14 Reference
For more detailed information, have a look at the documents listed below.
14.1 Pacemaker
15 Appendix: Troubleshooting
suse01:~ # OCF_RESKEY_address=<virtual_IPv4_address> \
OCF_RESKEY_routing_table=<AWS_route_table> \
OCF_RESKEY_interface=eth0 OCF_RESKEY_profile=<AWS-profile> \
OCF_ROOT=/usr/lib/ocf /usr/lib/ocf/resource.d/suse/aws-vpc-move-ip monitor
suse01:~ # OCF_RESKEY_address=<virtual_IPv4_address> \
OCF_RESKEY_routing_table=<AWS_route_table> \
OCF_RESKEY_interface=eth0 OCF_RESKEY_profile=<AWS-profile> \
OCF_ROOT=/usr/lib/ocf /usr/lib/ocf/resource.d/suse/aws-vpc-move-ip stop
Check the DEBUG output for errors and verify that the virtual IP address is NOT active on the
current node with the command ip address list dev eth0 . Then start the overlay IP address
on a given node.
As root user, run the following command using the same parameters as in your cluster
configuration:
suse01:~ # OCF_RESKEY_address=<virtual_IPv4_address> \
OCF_RESKEY_routing_table=<AWS_route_table> \
OCF_RESKEY_interface=eth0 OCF_RESKEY_profile=<AWS-profile> \
OCF_ROOT=/usr/lib/ocf /usr/lib/ocf/resource.d/suse/aws-vpc-move-ip start
Check the DEBUG output for error messages and verify that the virtual IP address is active on
the current node with the command ip address show .
15.2 Testing the AWS STONITH Agent
The EC2 STONITH agent will shut down the other node if it detects that the other node has
stopped responding at the corosync layer. The agent can be called manually as root user on
cluster node 1 to shut down cluster node 2 for testing purposes.
The EC2 STONITH agent can be manually tested and validated.
Monitor Operation:
As part of its normal work, the EC2 STONITH agent needs to be able to get all nodes' names
from the EC2 resource tags. This operation can be tested as shown in the following example:
Get Nodes List Operation:
The EC2 STONITH agent should also be able to shut down/stop the other EC2 instance as part
of a fencing operation. The fencing operation can be tested as shown in the following example:
Fencing Operation:
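A sketch of how all three operations could be exercised manually with the stonith(8) command line tool from the cluster-glue package. The agent type external/ec2, the flags and the node name cluster-node2 are assumptions based on stonith(8) and the tag/profile parameters used in this guide; adapt them to your setup:
# monitor operation: query the device status
suse01:~ # stonith -t external/ec2 profile=cluster tag=pacemaker -S
# get nodes list operation: list the hosts the device can fence
suse01:~ # stonith -t external/ec2 profile=cluster tag=pacemaker -l
# fencing operation: power off the peer node (disruptive!)
suse01:~ # stonith -t external/ec2 profile=cluster tag=pacemaker -T off cluster-node2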
Note
The above command should shut down/stop the EC2 instance of the other cluster node. If it
does not work as expected, check the errors reported during execution of the command.
In all of the above examples, the parameters used are:
AWS-profile : the profile which will be used by the AWS CLI. Check the file ~/.aws/config
for the matching one. The AWS CLI command aws configure list will provide the
same information.
cluster-node2 : the host name of the cluster node to be fenced, as stored in the instance's
pacemaker tag.
aws_tag_containing_hostname : the name of the tag of the EC2 instances for the two cluster
nodes. We used the tag name pacemaker in this document.
16 Legal Notice
Copyright © 2006–2021 SUSE LLC and contributors. All rights reserved.
Permission is granted to copy, distribute and/or modify this document under the terms of the
GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant
Section being this copyright notice and license. A copy of the license version 1.2 is included in
the section entitled "GNU Free Documentation License".
SUSE, the SUSE logo and YaST are registered trademarks of SUSE LLC in the United States and
other countries. For SUSE trademarks, see https://www.suse.com/company/legal/ .
Linux is a registered trademark of Linus Torvalds. All other names or trademarks mentioned in
this document may be trademarks or registered trademarks of their respective owners.
This article is part of a series of documents called "SUSE Best Practices". The individual
documents in the series were contributed voluntarily by SUSE’s employees and by third parties.
The articles are intended only to be one example of how a particular action could be taken.
Also, SUSE cannot verify either that the actions described in the articles do what they claim to
do or that they don’t have unintended consequences.
All information found in this article has been compiled with utmost attention to detail. However,
this does not guarantee complete accuracy. Therefore, we need to specifically state that neither
SUSE LLC, its affiliates, the authors, nor the translators may be held liable for possible errors or
the consequences thereof. Below we draw your attention to the license under which the articles
are published.
17 GNU Free Documentation License
Copyright © 2000, 2001, 2002 Free Software Foundation, Inc. 51 Franklin St, Fifth Floor, Boston,
MA 02110-1301 USA. Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
0. PREAMBLE
The purpose of this License is to make a manual, textbook, or other functional and useful
document "free" in the sense of freedom: to assure everyone the eective freedom to copy
and redistribute it, with or without modifying it, either commercially or noncommercially.
Secondarily, this License preserves for the author and publisher a way to get credit for their
work, while not being considered responsible for modifications made by others.
This License is a kind of "copyleft", which means that derivative works of the document must
themselves be free in the same sense. It complements the GNU General Public License, which
is a copyleft license designed for free software.
We have designed this License in order to use it for manuals for free software, because free
software needs free documentation: a free program should come with manuals providing the
same freedoms that the software does. But this License is not limited to software manuals; it
can be used for any textual work, regardless of subject matter or whether it is published as a
printed book. We recommend this License principally for works whose purpose is instruction
or reference.
A "Secondary Section" is a named appendix or a front-matter section of the Document that deals
exclusively with the relationship of the publishers or authors of the Document to the Document’s
overall subject (or to related matters) and contains nothing that could fall directly within that
overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section
may not explain any mathematics.) The relationship could be a matter of historical connection
with the subject or with related matters, or of legal, commercial, philosophical, ethical or
political position regarding them.
The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being
those of Invariant Sections, in the notice that says that the Document is released under this
License. If a section does not fit the above definition of Secondary then it is not allowed to be
designated as Invariant. The Document may contain zero Invariant Sections. If the Document
does not identify any Invariant Sections then there are none.
The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-
Cover Texts, in the notice that says that the Document is released under this License. A Front-
Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.
A "Transparent" copy of the Document means a machine-readable copy, represented in a format
whose specification is available to the general public, that is suitable for revising the document
straightforwardly with generic text editors or (for images composed of pixels) generic paint
programs or (for drawings) some widely available drawing editor, and that is suitable for input
to text formatters or for automatic translation to a variety of formats suitable for input to text
formatters. A copy made in an otherwise Transparent file format whose markup, or absence of
markup, has been arranged to thwart or discourage subsequent modification by readers is not
Transparent. An image format is not Transparent if used for any substantial amount of text. A
copy that is not "Transparent" is called "Opaque".
Examples of suitable formats for Transparent copies include plain ASCII without markup,
Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD,
and standard-conforming simple HTML, PostScript or PDF designed for human modification.
Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include
proprietary formats that can be read and edited only by proprietary word processors, SGML or
XML for which the DTD and/or processing tools are not generally available, and the machine-
generated HTML, PostScript or PDF produced by some word processors for output purposes only.
The "Title Page" means, for a printed book, the title page itself, plus such following pages as are
needed to hold, legibly, the material this License requires to appear in the title page. For works
in formats which do not have any title page as such, "Title Page" means the text near the most
prominent appearance of the work’s title, preceding the beginning of the body of the text.
A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely
XYZ or contains XYZ in parentheses following text that translates XYZ in another language.
(Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements",
"Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you
modify the Document means that it remains a section "Entitled XYZ" according to this definition.
The Document may include Warranty Disclaimers next to the notice which states that this
License applies to the Document. These Warranty Disclaimers are considered to be included by
reference in this License, but only as regards disclaiming warranties: any other implication that
these Warranty Disclaimers may have is void and has no effect on the meaning of this License.
2. VERBATIM COPYING
You may copy and distribute the Document in any medium, either commercially or
noncommercially, provided that this License, the copyright notices, and the license notice saying
this License applies to the Document are reproduced in all copies, and that you add no other
conditions whatsoever to those of this License. You may not use technical measures to obstruct
or control the reading or further copying of the copies you make or distribute. However, you
may accept compensation in exchange for copies. If you distribute a large enough number of
copies you must also follow the conditions in section 3.
You may also lend copies, under the same conditions stated above, and you may publicly display
copies.
3. COPYING IN QUANTITY
If you publish printed copies (or copies in media that commonly have printed covers) of the
Document, numbering more than 100, and the Document’s license notice requires Cover Texts,
you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-
Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also
clearly and legibly identify you as the publisher of these copies. The front cover must present the
full title with all words of the title equally prominent and visible. You may add other material
on the covers in addition. Copying with changes limited to the covers, as long as they preserve
the title of the Document and satisfy these conditions, can be treated as verbatim copying in
other respects.
If the required texts for either cover are too voluminous to fit legibly, you should put the first
ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent
pages.
If you publish or distribute Opaque copies of the Document numbering more than 100, you must
either include a machine-readable Transparent copy along with each Opaque copy, or state in
or with each Opaque copy a computer-network location from which the general network-using
public has access to download using public-standard network protocols a complete Transparent
copy of the Document, free of added material. If you use the latter option, you must take
reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure
that this Transparent copy will remain thus accessible at the stated location until at least one year
after the last time you distribute an Opaque copy (directly or through your agents or retailers)
of that edition to the public.
It is requested, but not required, that you contact the authors of the Document well before
redistributing any large number of copies, to give them a chance to provide you with an updated
version of the Document.
4. MODIFICATIONS
You may copy and distribute a Modified Version of the Document under the conditions of
sections 2 and 3 above, provided that you release the Modified Version under precisely this
License, with the Modified Version filling the role of the Document, thus licensing distribution
and modification of the Modified Version to whoever possesses a copy of it. In addition, you
must do these things in the Modified Version:
A. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document,
and from those of previous versions (which should, if there were any, be listed in the
History section of the Document). You may use the same title as a previous version if the
original publisher of that version gives permission.
B. List on the Title Page, as authors, one or more persons or entities responsible for authorship
of the modifications in the Modified Version, together with at least five of the principal
authors of the Document (all of its principal authors, if it has fewer than five), unless they
release you from this requirement.
C. State on the Title page the name of the publisher of the Modified Version, as the publisher.
E. Add an appropriate copyright notice for your modifications adjacent to the other copyright
notices.
F. Include, immediately after the copyright notices, a license notice giving the public
permission to use the Modified Version under the terms of this License, in the form shown
in the Addendum below.
G. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts
given in the Document’s license notice.
I. Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at
least the title, year, new authors, and publisher of the Modified Version as given on the
Title Page. If there is no section Entitled "History" in the Document, create one stating the
title, year, authors, and publisher of the Document as given on its Title Page, then add an
item describing the Modified Version as stated in the previous sentence.
J. Preserve the network location, if any, given in the Document for public access to a
Transparent copy of the Document, and likewise the network locations given in the
Document for previous versions it was based on. These may be placed in the "History"
section. You may omit a network location for a work that was published at least four years
before the Document itself, or if the original publisher of the version it refers to gives
permission.
K. For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the
section, and preserve in the section all the substance and tone of each of the contributor
acknowledgements and/or dedications given therein.
L. Preserve all the Invariant Sections of the Document, unaltered in their text and in their
titles. Section numbers or the equivalent are not considered part of the section titles.
M. Delete any section Entitled "Endorsements". Such a section may not be included in the
Modified Version.
N. Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with
any Invariant Section.
If the Modified Version includes new front-matter sections or appendices that qualify as
Secondary Sections and contain no material copied from the Document, you may at your option
designate some or all of these sections as invariant. To do this, add their titles to the list of
Invariant Sections in the Modified Version’s license notice. These titles must be distinct from
any other section titles.
You may add a section Entitled "Endorsements", provided it contains nothing but endorsements
of your Modified Version by various parties—for example, statements of peer review or that the
text has been approved by an organization as the authoritative definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25
words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only
one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through
arrangements made by) any one entity. If the Document already includes a cover text for the
same cover, previously added by you or by arrangement made by the same entity you are acting
on behalf of, you may not add another; but you may replace the old one, on explicit permission
from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission to use
their names for publicity for or to assert or imply endorsement of any Modified Version.
5. COMBINING DOCUMENTS
You may combine the Document with other documents released under this License, under
the terms defined in section 4 above for modified versions, provided that you include in the
combination all of the Invariant Sections of all of the original documents, unmodied, and list
them all as Invariant Sections of your combined work in its license notice, and that you preserve
all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical Invariant
Sections may be replaced with a single copy. If there are multiple Invariant Sections with the
same name but different contents, make the title of each such section unique by adding at the
end of it, in parentheses, the name of the original author or publisher of that section if known,
or else a unique number. Make the same adjustment to the section titles in the list of Invariant
Sections in the license notice of the combined work.
In the combination, you must combine any sections Entitled "History" in the various original
documents, forming one section Entitled "History"; likewise combine any sections Entitled
"Acknowledgements", and any sections Entitled "Dedications". You must delete all sections
Entitled "Endorsements".
6. COLLECTIONS OF DOCUMENTS
You may make a collection consisting of the Document and other documents released under
this License, and replace the individual copies of this License in the various documents with a
single copy that is included in the collection, provided that you follow the rules of this License
for verbatim copying of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute it individually under
this License, provided you insert a copy of this License into the extracted document, and follow
this License in all other respects regarding verbatim copying of that document.
8. TRANSLATION
Translation is considered a kind of modification, so you may distribute translations of the
Document under the terms of section 4. Replacing Invariant Sections with translations requires
special permission from their copyright holders, but you may include translations of some or
all Invariant Sections in addition to the original versions of these Invariant Sections. You may
include a translation of this License, and all the license notices in the Document, and any
Warranty Disclaimers, provided that you also include the original English version of this License
and the original versions of those notices and disclaimers. In case of a disagreement between
the translation and the original version of this License or a notice or disclaimer, the original
version will prevail.
If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the
requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual
title.
9. TERMINATION
You may not copy, modify, sublicense, or distribute the Document except as expressly provided
for under this License. Any other attempt to copy, modify, sublicense or distribute the Document
is void, and will automatically terminate your rights under this License. However, parties
who have received copies, or rights, from you under this License will not have their licenses
terminated so long as such parties remain in full compliance.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.2
or any later version published by the Free Software Foundation;
with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
A copy of the license is included in the section entitled “GNU
Free Documentation License”.
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the “ with…
Texts.” line with this:
with the Invariant Sections being LIST THEIR TITLES, with the
Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.
If you have Invariant Sections without Cover Texts, or some other combination of the three,
merge those two alternatives to suit the situation.
If your document contains nontrivial examples of program code, we recommend releasing these
examples in parallel under your choice of free software license, such as the GNU General Public
License, to permit their use in free software.