Red Hat OpenStack Platform 9
Director Installation and Usage
OpenStack Team
Legal Notice
Copyright 2016 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution-Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity
logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other
countries.
Linux is the registered trademark of Linus Torvalds in the United States and other countries.
Java is a registered trademark of Oracle and/or its affiliates.
XFS is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js is an official trademark of Joyent. Red Hat Software Collections is not formally related to
or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other countries
and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or
sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.
Abstract
This guide explains how to install Red Hat OpenStack Platform 9 in an enterprise environment using
the Red Hat OpenStack Platform Director. This includes installing the director, planning your
environment, and creating an OpenStack environment with the director.
Table of Contents

CHAPTER 1. INTRODUCTION
    1.1. UNDERCLOUD
    1.2. OVERCLOUD
    1.3. HIGH AVAILABILITY
    1.4. CEPH STORAGE
CHAPTER 2. REQUIREMENTS
    2.1. ENVIRONMENT REQUIREMENTS
    2.2. UNDERCLOUD REQUIREMENTS
    2.3. NETWORKING REQUIREMENTS
    2.4. OVERCLOUD REQUIREMENTS
    2.5. REPOSITORY REQUIREMENTS
CHAPTER 3. PLANNING YOUR OVERCLOUD
    3.1. PLANNING NODE DEPLOYMENT ROLES
    3.2. PLANNING NETWORKS
    3.3. PLANNING STORAGE
CHAPTER 4. INSTALLING THE UNDERCLOUD
    4.1. CREATING A DIRECTOR INSTALLATION USER
    4.2. CREATING DIRECTORIES FOR TEMPLATES AND IMAGES
    4.3. SETTING THE HOSTNAME FOR THE SYSTEM
CHAPTER 5. CONFIGURING BASIC OVERCLOUD REQUIREMENTS
    5.1. REGISTERING NODES FOR THE OVERCLOUD
    5.2. INSPECTING THE HARDWARE OF NODES
CHAPTER 6. CONFIGURING ADVANCED CUSTOMIZATIONS FOR THE OVERCLOUD
    6.1. UNDERSTANDING HEAT TEMPLATES
    6.2. ISOLATING NETWORKS
    6.3. CONTROLLING NODE PLACEMENT
    6.4. CONFIGURING CONTAINERIZED COMPUTE NODES
    6.5. CONFIGURING EXTERNAL LOAD BALANCING
    6.6. CONFIGURING IPV6 NETWORKING
    6.7. CONFIGURING NFS STORAGE
CHAPTER 7. CREATING THE OVERCLOUD
    7.1. SETTING OVERCLOUD PARAMETERS
    7.2. INCLUDING ENVIRONMENT FILES IN OVERCLOUD CREATION
    7.3. OVERCLOUD CREATION EXAMPLE
    7.4. MONITORING THE OVERCLOUD CREATION
    7.5. ACCESSING THE OVERCLOUD
    7.6. COMPLETING THE OVERCLOUD CREATION
CHAPTER 8. PERFORMING TASKS AFTER OVERCLOUD CREATION
    8.1. CREATING THE OVERCLOUD TENANT NETWORK
    8.2. CREATING THE OVERCLOUD EXTERNAL NETWORK
    8.3. CREATING ADDITIONAL FLOATING IP NETWORKS
    8.4. CREATING THE OVERCLOUD PROVIDER NETWORK
    8.5. VALIDATING THE OVERCLOUD
CHAPTER 9. SCALING THE OVERCLOUD
    9.1. ADDING ADDITIONAL NODES
CHAPTER 10. TROUBLESHOOTING DIRECTOR ISSUES
    10.1. TROUBLESHOOTING NODE REGISTRATION
    10.2. TROUBLESHOOTING HARDWARE INTROSPECTION
APPENDIX A. SSL/TLS CERTIFICATE CONFIGURATION
    A.1. CREATING A CERTIFICATE AUTHORITY
    A.2. ADDING THE CERTIFICATE AUTHORITY TO CLIENTS
APPENDIX B. POWER MANAGEMENT DRIVERS
    B.3. IBOOT
APPENDIX C. AUTOMATIC PROFILE TAGGING
    C.1. POLICY FILE SYNTAX
    C.2. POLICY FILE EXAMPLE
    C.3. IMPORTING POLICY FILES
APPENDIX D. BASE PARAMETERS
APPENDIX E. NETWORK INTERFACE PARAMETERS
    E.1. INTERFACE OPTIONS
    E.2. VLAN OPTIONS
APPENDIX F. NETWORK INTERFACE TEMPLATE EXAMPLES
    F.1. CONFIGURING INTERFACES
    F.2. CONFIGURING ROUTES AND DEFAULT ROUTES
APPENDIX G. NETWORK ENVIRONMENT OPTIONS
APPENDIX H. OPEN VSWITCH BONDING OPTIONS
CHAPTER 1. INTRODUCTION
The Red Hat OpenStack Platform director is a toolset for installing and managing a complete
OpenStack environment. It is based primarily on the OpenStack project TripleO, which is an
abbreviation for "OpenStack-On-OpenStack". This project takes advantage of OpenStack
components to install a fully operational OpenStack environment; this includes new OpenStack
components that provision and control bare metal systems to use as OpenStack nodes. This
provides a simple method for installing a complete Red Hat OpenStack Platform environment that is
both lean and robust.
The Red Hat OpenStack Platform director uses two main concepts: an Undercloud and an
Overcloud. The Undercloud installs and configures the Overcloud. The next few sections outline the
concept of each.
1.1. UNDERCLOUD
The Undercloud is the main director node. It is a single-system OpenStack installation that includes
components for provisioning and managing the OpenStack nodes that form your OpenStack
environment (the Overcloud). The components that form the Undercloud provide the following
functions:
Environment planning - The Undercloud provides planning functions for users to assign Red Hat
OpenStack Platform roles, including Compute, Controller, and various storage roles.
Bare metal system control - The Undercloud uses the Intelligent Platform Management Interface
(IPMI) of each node for power management control and a PXE-based service to discover
hardware attributes and install OpenStack to each node. This provides a method to provision
bare metal systems as OpenStack nodes.
Orchestration - The Undercloud provides and reads a set of YAML templates to create an
OpenStack environment.
The Red Hat OpenStack Platform director performs these Undercloud functions through a terminal-based command-line interface.
The Undercloud consists of the following components:
OpenStack Bare Metal (ironic) and OpenStack Compute (nova) - Manages bare metal nodes.
OpenStack Networking (neutron) and Open vSwitch - Controls networking for bare metal nodes.
OpenStack Image Service (glance) - Stores images that are written to bare metal machines.
OpenStack Orchestration (heat) and Puppet - Provides orchestration of nodes and configuration
of nodes after the director writes the Overcloud image to disk.
OpenStack Telemetry (ceilometer) - Performs monitoring and data collection. This also includes:
OpenStack Telemetry Metrics (gnocchi) - Provides a time series database for metrics.
OpenStack Telemetry Alarming (aodh) - Provides an alarming component for monitoring.
OpenStack Identity (keystone) - Provides authentication and authorization for the director's components.
MariaDB - The database back end for the director.
RabbitMQ - Messaging queue for the director's components.
1.2. OVERCLOUD
The Overcloud is the resulting Red Hat OpenStack Platform environment created using the
Undercloud. This includes one or more of the following node types:
Controller
Nodes that provide administration, networking, and high availability for the OpenStack environment. An ideal OpenStack environment uses three of these nodes together in a high availability cluster.
A default Controller node contains the following components:
OpenStack Dashboard (horizon)
OpenStack Identity (keystone)
OpenStack Compute (nova) API
OpenStack Networking (neutron)
OpenStack Image Service (glance)
OpenStack Block Storage (cinder)
OpenStack Object Storage (swift)
OpenStack Orchestration (heat)
OpenStack Telemetry (ceilometer)
OpenStack Telemetry Metrics (gnocchi)
OpenStack Telemetry Alarming (aodh)
OpenStack Clustering (sahara)
MariaDB
Open vSwitch
CHAPTER 2. REQUIREMENTS
This chapter outlines the main requirements for setting up an environment to provision Red Hat
OpenStack Platform using the director. This includes the requirements for setting up the director,
accessing it, and the hardware requirements for hosts that the director provisions for OpenStack
services.
Important
Ensure the Undercloud's file system contains only root and swap partitions if using
Logical Volume Management (LVM). For more information, see the Red Hat Customer
Portal article "Director node fails to boot after undercloud installation".
Make sure the Provisioning network NIC is not the same NIC used for remote connectivity on the
director machine. The director installation creates a bridge using the Provisioning NIC, which
drops any remote connections. Use the External NIC for remote connections to the director
system.
The Provisioning network requires an IP range that fits your environment size. Use the following
guidelines to determine the total number of IP addresses to include in this range:
Include at least one IP address per node connected to the Provisioning network.
If planning a high availability configuration, include an extra IP address for the virtual IP of the
cluster.
Include additional IP addresses within the range for scaling the environment.
Note
Avoid duplicate IP addresses on the Provisioning network. For more
information, see Section 3.2, Planning Networks.
Note
For more information on planning your IP address usage, for example, for storage,
provider, and tenant networks, see the Networking Guide.
Set all Overcloud systems to PXE boot off the Provisioning NIC, and disable PXE boot on the
External NIC (and any other NICs on the system). Also ensure that the Provisioning NIC has
PXE boot at the top of the boot order, ahead of hard disks and CD/DVD drives.
All Overcloud bare metal systems require a supported power management interface, such as an
Intelligent Platform Management Interface (IPMI). This allows the director to control the power
management of each node.
Make a note of the following details for each Overcloud system: the MAC address of the
Provisioning NIC, the IP address of the IPMI NIC, IPMI username, and IPMI password. This
information will be useful later when setting up the Overcloud nodes.
If an instance needs to be accessible from the external internet, you can allocate a floating IP
address from a public network and associate it with an instance. The instance still retains its
private IP address, but network traffic uses NAT to traverse through to the floating IP address. Note that a
floating IP address can be assigned only to a single instance; it cannot be shared across multiple private IP
addresses. However, the floating IP address is reserved for use by a single tenant, which allows
the tenant to associate or disassociate it from a particular instance as required. This configuration
exposes your infrastructure to the external internet. As a result, you might need to check that you
are following suitable security practices.
Important
Your OpenStack Platform implementation is only as secure as its environment. Follow
good security principles in your networking environment to ensure that network access is
properly controlled. For example:
Use network segmentation to mitigate network movement and isolate sensitive data; a
flat network is much less secure.
Restrict services access and ports to a minimum.
Ensure proper firewall rules and password usage.
Ensure that SELinux is enabled.
For details on securing your system, see:
Red Hat Enterprise Linux 7 Security Guide
Red Hat Enterprise Linux 7 SELinux Users and Administrators Guide
Disk Layout
The recommended Red Hat Ceph Storage node configuration requires a disk layout similar
to the following:
/dev/sda - The root disk. The director copies the main Overcloud image to the disk.
/dev/sdb - The journal disk. This disk divides into partitions for Ceph OSD journals.
For example, /dev/sdb1, /dev/sdb2, /dev/sdb3, and onward. The journal disk is
usually a solid state drive (SSD) to aid with system performance.
/dev/sdc and onward - The OSD disks. Use as many disks as necessary for your
storage requirements.
This guide contains the necessary instructions to map your Ceph Storage disks into the
director.
Network Interface Cards
A minimum of one 1 Gbps Network Interface Card, although it is recommended to use at least two NICs in a production environment. Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic. It is recommended to use a 10 Gbps interface for storage nodes, especially if creating an OpenStack Platform environment that serves a high volume of traffic.
Power Management
Each Controller node requires a supported power management interface, such as Intelligent Platform Management Interface (IPMI) functionality, on the server's motherboard.
Important
The director does not create partitions on the journal disk. You must manually create these journal partitions before the director can deploy the Ceph Storage nodes.
The Ceph Storage OSD and journal partitions require GPT disk labels, which you also configure prior to customization. For example, use the following command on the potential Ceph Storage host to create a GPT disk label for a disk or partition:
# parted [device] mklabel gpt
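As an illustrative sketch only (assuming /dev/sdb is the journal disk serving three OSD disks; the partition boundaries are placeholder values), the GPT label and journal partitions might be created with parted as follows:
$ sudo parted -s /dev/sdb mklabel gpt             # GPT label; removes any existing data on the disk
$ sudo parted -s /dev/sdb mkpart journal1 0% 33%  # one journal partition per OSD
$ sudo parted -s /dev/sdb mkpart journal2 33% 66%
$ sudo parted -s /dev/sdb mkpart journal3 66% 100%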
Repository
rhel-7-server-rpms
rhel-7-server-extras-rpms
rhel-7-server-rh-common-rpms
rhel-7-server-satellite-tools-6.1-rpms
rhel-ha-for-rhel-7-server-rpms
rhel-7-server-openstack-9-director-rpms
rhel-7-server-openstack-9-rpms
rhel-7-server-rhceph-1.3-osd-rpms
rhel-7-server-rhceph-1.3-mon-rpms
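As a sketch of how the core repositories might be enabled on the director host with subscription-manager (assuming the system is already registered and attached to the relevant subscriptions; add the Satellite tools and Ceph repositories where they apply):
$ sudo subscription-manager repos --disable="*"
$ sudo subscription-manager repos --enable=rhel-7-server-rpms \
    --enable=rhel-7-server-extras-rpms \
    --enable=rhel-7-server-rh-common-rpms \
    --enable=rhel-ha-for-rhel-7-server-rpms \
    --enable=rhel-7-server-openstack-9-director-rpms \
    --enable=rhel-7-server-openstack-9-rpms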
Note
To configure repositories for your Red Hat OpenStack Platform environment in an offline
network, see "Configuring Red Hat OpenStack Platform Director in an Offline
Environment" on the Red Hat Customer Portal.
Example scenarios (each sized by Controller, Compute, CephStorage, SwiftStorage, CinderStorage, and Total node counts):

Small Overcloud
Medium Overcloud
Medium Overcloud with additional Object and Block storage
Medium Overcloud with High Availability
Medium Overcloud with High Availability and Ceph Storage
Network Type           Used By
IPMI                   All nodes
Provisioning           All nodes
Internal API
Tenant                 Controller, Compute
Storage                All nodes
Storage Management
External               Controller
Floating IP            Controller
Management             All nodes
In a typical Red Hat OpenStack Platform installation, the number of network types often exceeds the
number of physical network links. In order to connect all the networks to the proper hosts, the
Overcloud uses VLAN tagging to deliver more than one network per interface. Most of the networks
are isolated subnets but some require a Layer 3 gateway to provide routing for Internet access or
infrastructure network connectivity.
Note
It is recommended that you deploy a Tenant VLAN (for tunneling GRE and VXLAN) even
if using neutron VLAN mode with tunneling disabled at deployment time. This requires
minor customization at deployment time and leaves the option available to use tunnel
networks as utility networks or virtualization networks in the future. You still create Tenant
networks using VLANs, but you can also create VXLAN tunnels for special-use networks
without consuming tenant VLANs. It is possible to add VXLAN capability to a deployment
with a Tenant VLAN, but it is not possible to add a Tenant VLAN to an existing Overcloud
without causing disruption.
The director provides a method for mapping six of these traffic types to certain subnets or VLANs.
These traffic types include:
Internal API
Storage
Storage Management
Tenant Networks
External
Management
Any unassigned networks are automatically assigned to the same subnet as the Provisioning
network.
The diagram below provides an example of a network topology where the networks are isolated on
separate VLANs. Each Overcloud node uses two interfaces (nic2 and nic3) in a bond to deliver
these networks over their respective VLANs. Meanwhile, each Overcloud node communicates with
the Undercloud over the Provisioning network through a native VLAN using nic1.
The following table provides examples of network traffic mappings for different network layouts:

Table 3.3. Network Mappings (columns: Mappings, Total Interfaces, Total VLANs)

Layout with a separate external network:
Network 2 - External, Floating IP (mapped after Overcloud creation)

Isolated Networks (Total Interfaces: 3, includes 2 bonded interfaces):
Network 1 - Provisioning
Network 2 - Internal API
Network 3 - Tenant Networks
Network 4 - Storage
Network 5 - Storage Management
Network 6 - Management
Network 7 - External, Floating IP (mapped after Overcloud creation)
to attach volumes to running VMs. OpenStack manages volumes using Cinder services.
You can use Cinder to boot a VM using a copy-on-write clone of an image.
Guest Disks - Guest disks are guest operating system disks. By default, when you boot
a virtual machine with nova, its disk appears as a file on the filesystem of the hypervisor
(usually under /var/lib/nova/instances/<uuid>/). It is possible to boot every
virtual machine inside Ceph directly without using cinder, which is advantageous
because it allows you to perform maintenance operations easily with the live-migration
process. Additionally, if your hypervisor dies it is also convenient to trigger nova
evacuate and run the virtual machine elsewhere almost seamlessly.
Important
Ceph doesn't support QCOW2 for hosting a virtual machine disk. If you want
to boot virtual machines in Ceph (ephemeral backend or boot from volume),
the glance image format must be RAW.
See Red Hat Ceph Storage Architecture Guide for additional information.
Swift Storage Nodes
The director creates an external object storage node. This is useful in situations where you
need to scale or replace controller nodes in your Overcloud environment but need to retain
object storage outside of a high availability cluster.
The director also requires an entry for the system's hostname and base name in /etc/hosts. For example, if the system is named manager.example.com, then /etc/hosts requires an entry like:
127.0.0.1   manager.example.com manager localhost localhost.localdomain localhost4 localhost4.localdomain4
configuration:
$ sudo yum install -y python-tripleoclient
This installs all packages required for the director installation.
obtain this certificate from a trusted certificate authority. Otherwise, generate your own self-signed certificate using the guidelines in Appendix A, SSL/TLS Certificate Configuration.
These guidelines also contain instructions on setting the SELinux context for your certificate,
whether self-signed or from an authority.
local_interface
The chosen interface for the director's Provisioning NIC. This is also the device the director uses for its DHCP and PXE boot services. Change this value to your chosen device. To see which device is connected, use the ip addr command. For example, this is the result of an ip addr command:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
pfifo_fast state UP qlen 1000
link/ether 52:54:00:75:24:09 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.178/24 brd 192.168.122.255 scope global
dynamic eth0
valid_lft 3462sec preferred_lft 3462sec
inet6 fe80::5054:ff:fe75:2409/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noop
state DOWN
link/ether 42:0b:c2:a5:c1:26 brd ff:ff:ff:ff:ff:ff
In this example, the External NIC uses eth0 and the Provisioning NIC uses eth1, which is
currently not configured. In this case, set the local_interface to eth1. The
configuration script attaches this interface to a custom bridge defined with the
inspection_interface parameter.
network_cidr
The network that the director uses to manage Overcloud instances. This is the Provisioning
network. Leave this as the default 192.0.2.0/24 unless you are using a different subnet
for the Provisioning network.
masquerade_network
Defines the network that will masquerade for external access. This provides the
Provisioning network with a degree of network address translation (NAT) so that it has
external access through the director. Leave this as the default (192.0.2.0/24) unless you
are using a different subnet for the Provisioning network.
dhcp_start; dhcp_end
The start and end of the DHCP allocation range for Overcloud nodes. Ensure this range
contains enough IP addresses to allocate your nodes.
inspection_interface
The bridge the director uses for node introspection. This is a custom bridge that the director
configuration creates. The LOCAL_INTERFACE attaches to this bridge. Leave this as the
default br-ctlplane.
inspection_iprange
A range of IP addresses that the director's introspection service uses during the PXE boot and
provisioning process. Use comma-separated values to define the start and end of this
range. For example, 192.0.2.100,192.0.2.120. Make sure this range contains enough
IP addresses for your nodes and does not conflict with the range for dhcp_start and
dhcp_end.
inspection_extras
Defines whether to enable extra hardware collection during the inspection process.
Requires python-hardware or python-hardware-detect package on the
introspection image.
inspection_runbench
Runs a set of benchmarks during node introspection. Set to true to enable. This option is
necessary if you intend to perform benchmark analysis when inspecting the hardware of
registered nodes. See Section 5.2, Inspecting the Hardware of Nodes for more details.
undercloud_debug
Sets the log level of Undercloud services to DEBUG. Set this value to true to enable.
enable_tempest
Defines whether to install the validation tools. The default is set to false, but you can enable it by setting it to true.
ipxe_deploy
Defines whether to use iPXE or standard PXE. The default is true, which enables iPXE.
Set to false to set to standard PXE. For more information, see "Changing from iPXE to
PXE in Red Hat OpenStack Platform director" on the Red Hat Customer Portal.
store_events
Defines whether to store events in Ceilometer on the Undercloud.
undercloud_db_password; undercloud_admin_token; undercloud_admin_password;
undercloud_glance_password; etc
The remaining parameters are the access details for all of the director's services. No
change is required for the values. The director's configuration script automatically generates
these values if blank in undercloud.conf. You can retrieve all values after the
configuration script completes.
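For illustration, a minimal undercloud.conf using the parameters described above might resemble the following sketch (the DHCP range shown is an assumed example, and every value must be adjusted to your own Provisioning network):
[DEFAULT]
local_interface = eth1
network_cidr = 192.0.2.0/24
masquerade_network = 192.0.2.0/24
dhcp_start = 192.0.2.5
dhcp_end = 192.0.2.24
inspection_interface = br-ctlplane
inspection_iprange = 192.0.2.100,192.0.2.120
inspection_extras = true
inspection_runbench = false
undercloud_debug = false
ipxe_deploy = true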
Modify the values for these parameters to suit your network. When complete, save the file and run
the following command:
$ openstack undercloud install
This launches the director's configuration script. The director installs additional packages and
configures its services to suit the settings in the undercloud.conf. This script takes several
minutes to complete.
The configuration script generates two files when complete:
undercloud-passwords.conf - A list of all passwords for the director's services.
stackrc - A set of initialization variables to help you access the director's command line tools.
To initialize the stack user to use the command line tools, run the following command:
$ source ~/stackrc
Important
If you aim to isolate service traffic onto separate networks, the Overcloud nodes use the
DnsServers parameter in your network environment templates. This is covered in the
advanced configuration scenario in Section 6.2, Isolating Networks.
Node Name    IP Address      MAC Address          IPMI IP Address
Director     192.0.2.1       aa:aa:aa:aa:aa:aa    None required
Controller   DHCP defined    bb:bb:bb:bb:bb:bb    192.0.2.205
Compute      DHCP defined    cc:cc:cc:cc:cc:cc    192.0.2.206
All other network types use the Provisioning network for OpenStack services. However, you can
create additional networks for other network traffic types. For more information, see Section 6.2,
Isolating Networks.
mac
(Optional) A list of MAC addresses for the network interfaces on the node. Use only the
MAC address for the Provisioning NIC of each system.
cpu
(Optional) The number of CPUs on the node.
memory
(Optional) The amount of memory in MB.
disk
(Optional) The size of the hard disk in GB.
arch
(Optional) The system architecture.
Note
For more supported power management types and their options, see Appendix B, Power
Management Drivers.
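For illustration only, a registration template describing a single node with these attributes might look like the following sketch; the MAC address, IPMI address, and credentials are placeholder values, and the pm_* fields (power management type and access details) are assumed from the node details gathered earlier:
{
    "nodes": [
        {
            "pm_type": "pxe_ipmitool",
            "pm_user": "admin",
            "pm_password": "p@55w0rd!",
            "pm_addr": "192.0.2.205",
            "mac": ["bb:bb:bb:bb:bb:bb"],
            "cpu": "4",
            "memory": "6144",
            "disk": "40",
            "arch": "x86_64"
        }
    ]
}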
After creating the template, save the file to the stack user's home directory
(/home/stack/instackenv.json), then import it into the director using the following command:
$ openstack baremetal import --json ~/instackenv.json
This imports the template and registers each node from the template into the director.
Assign the kernel and ramdisk images to all nodes:
$ openstack baremetal configure boot
The nodes are now registered and configured in the director. View a list of these nodes in the CLI:
$ ironic node-list
Note
You can also create policy files to automatically tag nodes into profiles immediately after
introspection. For more information on creating policy files and including them in the
introspection process, see Appendix C, Automatic Profile Tagging. Alternatively, you can
manually tag nodes into profiles as per the instructions in Section 5.3, Tagging Nodes
into Profiles.
Run the following command to inspect the hardware attributes of each node:
$ openstack baremetal introspection bulk start
Monitor the progress of the introspection using the following command in a separate terminal
window:
$ sudo journalctl -l -u openstack-ironic-inspector -u openstack-ironic-inspector-dnsmasq -u openstack-ironic-conductor -f
Important
Make sure this process runs to completion. This process usually takes 15 minutes for bare
metal nodes.
Alternatively, perform a single introspection on each node individually. Set the node to management
mode, perform the introspection, then move the node out of management mode:
$ ironic node-set-provision-state [NODE UUID] manage
$ openstack baremetal introspection start [NODE UUID]
$ ironic node-set-provision-state [NODE UUID] provide
This helps the director identify the specific disk to use as the root disk. When you initiate the Overcloud creation, the director provisions this node and writes the Overcloud image to this disk.
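For example, a root device hint might be set with a command along these lines (a sketch; the serial number is a placeholder that would normally come from the disk data collected during introspection):
$ ironic node-update [NODE UUID] add properties/root_device='{"serial": "61866da04f380d001ea4e13c12e36ad6"}'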
Note
Make sure to configure the BIOS of each node to include booting from the chosen root
disk. The recommended boot order is network boot, then root disk boot.
Important
Do not use name to set the root disk as this value can change when the node boots.
  flavor:
    type: string
    description: Instance type for the instance to be created
    default: m1.small
  image:
    type: string
    default: cirros
    description: ID or name of the image to use for the instance

resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      name: My Cirros Instance
      image: { get_param: image }
      flavor: { get_param: flavor }
      key_name: { get_param: key_name }

outputs:
  instance_name:
    description: Get the instance's name
    value: { get_attr: [ my_instance, name ] }
This template uses the resource type type: OS::Nova::Server to create an instance called
my_instance with a particular flavor, image, and key. The stack can return the value of
instance_name, which is called My Cirros Instance.
When Heat processes a template it creates a stack for the template and a set of child stacks for
resource templates. This creates a hierarchy of stacks that descend from the main stack you define
with your template. You can view the stack hierarchy using the following command:
$ heat stack-list --show-nested
Important
It is recommended to use parameter_defaults instead of parameters when creating
custom environment files for your Overcloud. This ensures the parameters apply to all stack
templates for the Overcloud.
An example of a basic environment file:
resource_registry:
  OS::Nova::Server::MyServer: myserver.yaml
parameter_defaults:
  NetworkName: my_network
parameters:
  MyIP: 192.168.0.1
For example, this environment file (my_env.yaml) might be included when creating a stack from a
certain Heat template (my_template.yaml). The my_env.yaml file creates a new resource type
called OS::Nova::Server::MyServer. The myserver.yaml file is a Heat template file that
provides an implementation for this resource type that overrides any built-in ones. You can include
the OS::Nova::Server::MyServer resource in your my_template.yaml file.
The MyIP applies a parameter only to the main Heat template that deploys along with this
environment file. In this example, it only applies to the parameters in my_template.yaml.
The NetworkName applies to both the main Heat template (in this example, my_template.yaml)
and the templates associated with resources included in the main template, such as the
OS::Nova::Server::MyServer resource and its myserver.yaml template in this example.
configures the OpenStack services to use the isolated networks. If no isolated networks are
configured, all services run on the Provisioning network.
This example uses separate networks for all services:
Network 1 - Provisioning
Network 2 - Internal API
Network 3 - Tenant Networks
Network 4 - Storage
Network 5 - Storage Management
Network 6 - Management
Network 7 - External and Floating IP (mapped after Overcloud creation)
In this example, each Overcloud node uses two network interfaces in a bond to serve networks in
tagged VLANs. The following network assignments apply to this bond:
Table 6.1. Network Subnet and VLAN Assignments

Network Type            Subnet           VLAN
Internal API            172.16.0.0/24    201
Tenant                  172.17.0.0/24    202
Storage                 172.18.0.0/24    203
Storage Management      172.19.0.0/24    204
Management              172.20.0.0/24    205
External / Floating IP  10.1.1.0/24      100
For more examples of network configuration, see Section 3.2, Planning Networks.
The Overcloud network configuration requires a set of the network interface templates. You
customize these templates to configure the node interfaces on a per role basis. These templates are
standard Heat templates in YAML format (see Section 6.1.1, Heat Templates). The director
contains a set of example templates to get you started:
/usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans - Directory containing templates for single NIC with VLANs configuration on a per role basis.
/usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans - Directory containing templates for bonded NIC configuration on a per role basis.
/usr/share/openstack-tripleo-heat-templates/network/config/multiple-nics - Directory containing templates for multiple NIC configuration using one NIC per role.
/usr/share/openstack-tripleo-heat-templates/network/config/single-nic-linux-bridge-vlans - Directory containing templates for single NIC with VLANs configuration on a per role basis and using a Linux bridge instead of an Open vSwitch bridge.
For this example, use the default bonded NIC example configuration as a basis. Copy the version located at /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans:
$ cp -r /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans ~/templates/nic-configs
This creates a local set of heat templates that define a bonded network interface configuration for
each role. Each template contains the standard parameters, resources, and output sections.
For this example, you would only edit the resources section. Each resources section begins
with the following:
resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
This creates a request for the os-apply-config command and os-net-config subcommand
to configure the network properties for a node. The network_config section contains your custom
interface configuration arranged in a sequence based on type, which includes the following:
interface
Defines a single network interface. The configuration defines each interface using either the
actual interface name ("eth0", "eth1", "enp0s25") or a set of numbered interfaces ("nic1",
"nic2", "nic3").
- type: interface
  name: nic2
vlan
Defines a VLAN. Use the VLAN ID and subnet passed from the parameters section.
- type: vlan
  vlan_id: {get_param: ExternalNetworkVlanID}
  addresses:
    - ip_netmask: {get_param: ExternalIpSubnet}
ovs_bond
Defines a bond in Open vSwitch to join two or more interfaces together. This helps with
redundancy and increases bandwidth.
- type: ovs_bond
  name: bond1
  members:
    - type: interface
      name: nic2
    - type: interface
      name: nic3
ovs_bridge
Defines a bridge in Open vSwitch, which connects multiple interface, ovs_bond and
vlan objects together.
- type: ovs_bridge
  name: {get_input: bridge_name}
  members:
    - type: ovs_bond
      name: bond1
      members:
        - type: interface
          name: nic2
          primary: true
        - type: interface
          name: nic3
    - type: vlan
      device: bond1
      vlan_id: {get_param: ExternalNetworkVlanID}
      addresses:
        - ip_netmask: {get_param: ExternalIpSubnet}
linux_bond
Defines a Linux bond that joins two or more interfaces together. This helps with
redundancy and increases bandwidth. Make sure to include the kernel-based bonding
options in the bonding_options parameter. For more information on Linux bonding
options, see 4.5.1. Bonding Module Directives in the Red Hat Enterprise Linux 7 Networking
Guide.
- type: linux_bond
  name: bond1
  members:
    - type: interface
      name: nic2
    - type: interface
      name: nic3
  bonding_options: "mode=802.3ad"
linux_bridge
Defines a Linux bridge, which connects multiple interface, linux_bond and vlan
objects together.
- type: linux_bridge
  name: bridge1
  addresses:
    - ip_netmask:
        list_join:
          - '/'
          - - {get_param: ControlPlaneIp}
            - {get_param: ControlPlaneSubnetCidr}
  members:
    - type: interface
      name: nic1
      primary: true
    - type: vlan
      vlan_id: {get_param: ExternalNetworkVlanID}
      device: bridge1
      addresses:
        - ip_netmask: {get_param: ExternalIpSubnet}
      routes:
        - ip_netmask: 0.0.0.0/0
          default: true
          next_hop: {get_param: ExternalInterfaceDefaultRoute}
See Appendix E, Network Interface Parameters for a full list of parameters for each of these items.
For this example, you use the default bonded interface configuration. For example, the
/home/stack/templates/nic-configs/controller.yaml template uses the following
network_config:
resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
            - type: interface
              name: nic1
              use_dhcp: false
              addresses:
                - ip_netmask:
                    list_join:
                      - '/'
                      - - {get_param: ControlPlaneIp}
                        - {get_param: ControlPlaneSubnetCidr}
              routes:
                - ip_netmask: 169.254.169.254/32
                  next_hop: {get_param: EC2MetadataIp}
            - type: ovs_bridge
Note
The Management network section is commented out in the network interface Heat templates.
Uncomment this section to enable the Management network.
This template defines a bridge (usually the external bridge named br-ex) and creates a bonded
interface called bond1 from two numbered interfaces: nic2 and nic3. The bridge also contains a
number of tagged VLAN devices, which use bond1 as a parent device. The template also includes
an interface that connects back to the director (nic1).
For more examples of network interface templates, see Appendix F, Network Interface Template
Examples.
Note that many of these parameters use the get_param function. You would define these in an
environment file you create specifically for your networks.
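As a sketch, such a network environment file might set values like the following under parameter_defaults (the parameter names follow the standard network isolation templates; the subnets and VLAN IDs repeat the examples from Table 6.1, while the allocation pools and default route are assumed placeholder values):
parameter_defaults:
  InternalApiNetCidr: 172.16.0.0/24
  InternalApiAllocationPools: [{'start': '172.16.0.10', 'end': '172.16.0.200'}]
  InternalApiNetworkVlanID: 201
  ExternalNetCidr: 10.1.1.0/24
  ExternalAllocationPools: [{'start': '10.1.1.10', 'end': '10.1.1.50'}]
  ExternalNetworkVlanID: 100
  ExternalInterfaceDefaultRoute: 10.1.1.1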
Important
Unused interfaces can cause unwanted default routes and network loops. For example,
your template might contain a network interface (nic4) that does not use any IP
assignments for OpenStack services but still uses DHCP and/or a default route. To avoid
network conflicts, remove any unused interfaces from ovs_bridge devices and disable the
DHCP and default route settings:
- type: interface
  name: nic4
  use_dhcp: false
  defroute: false
parameter_defaults:
  ...
  ServiceNetMap:
    NeutronTenantNetwork: tenant
    CeilometerApiNetwork: internal_api
    MongoDbNetwork: internal_api
    CinderApiNetwork: internal_api
    CinderIscsiNetwork: storage
    GlanceApiNetwork: storage
    GlanceRegistryNetwork: internal_api
    KeystoneAdminApiNetwork: internal_api
    KeystonePublicApiNetwork: internal_api
    NeutronApiNetwork: internal_api
    HeatApiNetwork: internal_api
    NovaApiNetwork: internal_api
    NovaMetadataNetwork: internal_api
    NovaVncProxyNetwork: internal_api
    SwiftMgmtNetwork: storage # Changed from 'storage_mgmt'
    SwiftProxyNetwork: storage
    HorizonNetwork: internal_api
    MemcachedNetwork: internal_api
    RabbitMqNetwork: internal_api
    RedisNetwork: internal_api
    MysqlNetwork: internal_api
    CephClusterNetwork: storage # Changed from 'storage_mgmt'
    CephPublicNetwork: storage
    # Define which network will be used for hostname resolution
    ControllerHostnameResolveNetwork: internal_api
    ComputeHostnameResolveNetwork: internal_api
    BlockStorageHostnameResolveNetwork: internal_api
    ObjectStorageHostnameResolveNetwork: internal_api
    CephStorageHostnameResolveNetwork: storage
  ...
Changing these parameters to storage places these services on the Storage network instead of
the Storage Management network. This means you only need to define a set of
parameter_defaults for the Storage network and not the Storage Management network.
In order to use isolated networks, the servers must have IP addresses on each network. You can
use neutron in the Undercloud to manage IP addresses on the isolated networks, so you will need to
enable neutron port creation for each network. You can override the resource registry in your
environment file.
First, this is the complete set of networks and ports that can be deployed:
resource_registry:
  # This section is usually not modified, if in doubt stick to the defaults
  # TripleO overcloud networks
  OS::TripleO::Network::External: /usr/share/openstack-tripleo-heat-templates/network/external.yaml
  OS::TripleO::Network::InternalApi: /usr/share/openstack-tripleo-heat-templates/network/internal_api.yaml
  OS::TripleO::Network::StorageMgmt: /usr/share/openstack-tripleo-heat-templates/network/storage_mgmt.yaml
  OS::TripleO::Network::Storage: /usr/share/openstack-tripleo-heat-templates/network/storage.yaml
  OS::TripleO::Network::Tenant: /usr/share/openstack-tripleo-heat-templates/network/tenant.yaml
  OS::TripleO::Network::Management: /usr/share/openstack-tripleo-heat-templates/network/management.yaml

  # Port assignments for the VIPs
  OS::TripleO::Network::Ports::ExternalVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
  OS::TripleO::Network::Ports::InternalApiVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Network::Ports::StorageVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::Network::Ports::StorageMgmtVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_mgmt.yaml
  OS::TripleO::Network::Ports::TenantVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
  OS::TripleO::Network::Ports::ManagementVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/management.yaml
  OS::TripleO::Network::Ports::RedisVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/vip.yaml

  # Port assignments for the controller role
  OS::TripleO::Controller::Ports::ExternalPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
  OS::TripleO::Controller::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Controller::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::Controller::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_mgmt.yaml
  OS::TripleO::Controller::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
  OS::TripleO::Controller::Ports::ManagementPort: /usr/share/openstack-tripleo-heat-templates/network/ports/management.yaml

  # Port assignments for the compute role
  OS::TripleO::Compute::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Compute::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::Compute::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
  OS::TripleO::Compute::Ports::ManagementPort: /usr/share/openstack-tripleo-heat-templates/network/ports/management.yaml

  # Port assignments for the ceph storage role
  OS::TripleO::CephStorage::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::CephStorage::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_mgmt.yaml
  OS::TripleO::CephStorage::Ports::ManagementPort: /usr/share/openstack-tripleo-heat-templates/network/ports/management.yaml

  # Port assignments for the swift storage role
  OS::TripleO::SwiftStorage::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::SwiftStorage::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::SwiftStorage::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_mgmt.yaml
  OS::TripleO::SwiftStorage::Ports::ManagementPort: /usr/share/openstack-tripleo-heat-templates/network/ports/management.yaml

  # Port assignments for the block storage role
  OS::TripleO::BlockStorage::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::BlockStorage::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::BlockStorage::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_mgmt.yaml
  OS::TripleO::BlockStorage::Ports::ManagementPort: /usr/share/openstack-tripleo-heat-templates/network/ports/management.yaml
The first section of this file has the resource registry declaration for the
OS::TripleO::Network::* resources. By default these resources point at a noop.yaml file that
does not create any networks. By pointing these resources at the YAML files for each network, you
enable the creation of these networks.
The next several sections create the IP addresses for the nodes in each role. The controller nodes
have IPs on each network. The compute and storage nodes each have IPs on a subset of the
networks.
To deploy without one of the pre-configured networks, disable the network definition and the
corresponding port definition for the role. For example, all references to storage_mgmt.yaml
could be replaced with noop.yaml:
resource_registry:
  # This section is usually not modified, if in doubt stick to the defaults
  # TripleO overcloud networks
  OS::TripleO::Network::External: /usr/share/openstack-tripleo-heat-templates/network/external.yaml
  OS::TripleO::Network::InternalApi: /usr/share/openstack-tripleo-heat-templates/network/internal_api.yaml
  OS::TripleO::Network::StorageMgmt: /usr/share/openstack-tripleo-heat-templates/network/noop.yaml
  OS::TripleO::Network::Storage: /usr/share/openstack-tripleo-heat-templates/network/storage.yaml
  OS::TripleO::Network::Tenant: /usr/share/openstack-tripleo-heat-templates/network/tenant.yaml

  # Port assignments for the VIPs
  OS::TripleO::Network::Ports::ExternalVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
  OS::TripleO::Network::Ports::InternalApiVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Network::Ports::StorageVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::Network::Ports::StorageMgmtVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Network::Ports::TenantVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
  OS::TripleO::Network::Ports::RedisVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/vip.yaml

  # Port assignments for the controller role
  OS::TripleO::Controller::Ports::ExternalPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
  OS::TripleO::Controller::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Controller::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::Controller::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Controller::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml

  # Port assignments for the compute role
  OS::TripleO::Compute::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::Compute::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::Compute::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml

  # Port assignments for the ceph storage role
  OS::TripleO::CephStorage::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::CephStorage::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml

  # Port assignments for the swift storage role
  OS::TripleO::SwiftStorage::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::SwiftStorage::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::SwiftStorage::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml

  # Port assignments for the block storage role
  OS::TripleO::BlockStorage::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
  OS::TripleO::BlockStorage::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml
  OS::TripleO::BlockStorage::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml

parameter_defaults:
  ServiceNetMap:
    NeutronTenantNetwork: tenant
    CeilometerApiNetwork: internal_api
    AodhApiNetwork: internal_api
    GnocchiApiNetwork: internal_api
    MongoDbNetwork: internal_api
    CinderApiNetwork: internal_api
    CinderIscsiNetwork: storage
    GlanceApiNetwork: storage
    GlanceRegistryNetwork: internal_api
    KeystoneAdminApiNetwork: ctlplane # Admin connection for Undercloud
    KeystonePublicApiNetwork: internal_api
    NeutronApiNetwork: internal_api
    HeatApiNetwork: internal_api
    NovaApiNetwork: internal_api
    NovaMetadataNetwork: internal_api
    NovaVncProxyNetwork: internal_api
    SwiftMgmtNetwork: storage # Changed from storage_mgmt
    SwiftProxyNetwork: storage
    SaharaApiNetwork: internal_api
    HorizonNetwork: internal_api
    MemcachedNetwork: internal_api
    RabbitMqNetwork: internal_api
    RedisNetwork: internal_api
    MysqlNetwork: internal_api
    CephClusterNetwork: storage # Changed from storage_mgmt
    CephPublicNetwork: storage
    ControllerHostnameResolveNetwork: internal_api
    ComputeHostnameResolveNetwork: internal_api
    BlockStorageHostnameResolveNetwork: internal_api
    ObjectStorageHostnameResolveNetwork: internal_api
    CephStorageHostnameResolveNetwork: storage
By using noop.yaml, no network or ports are created, so the services on the Storage Management
network would default to the Provisioning network. This can be changed in the ServiceNetMap in
order to move the Storage Management services to another network, such as the Storage network.
Note
Node placement takes priority over profile matching. To avoid scheduling failures, use the
default baremetal flavor for deployment and not the flavors designed for profile matching
(compute, control, etc). For example:
$ openstack overcloud deploy ... --control-flavor baremetal -compute-flavor baremetal ...
The first is a set of resource_registry references that override the defaults. These tell the
director to use a specific IP for a given port on a node type. Modify each resource to use the
absolute path of its respective template. For example:
OS::TripleO::Controller::Ports::ExternalPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external_from_pool.yaml
OS::TripleO::Controller::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api_from_pool.yaml
OS::TripleO::Controller::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_from_pool.yaml
OS::TripleO::Controller::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_mgmt_from_pool.yaml
OS::TripleO::Controller::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant_from_pool.yaml
The default configuration sets all networks on all node types to use pre-assigned IPs. To allow a
particular network or node type to use default IP assignment instead, simply remove the
resource_registry entries related to that node type or network from the environment file.
The second section is parameter_defaults, where the actual IP addresses are assigned. Each node
type has an associated parameter:
ControllerIPs for Controller nodes.
NovaComputeIPs for Compute nodes.
CephStorageIPs for Ceph Storage nodes.
BlockStorageIPs for Block Storage nodes.
SwiftStorageIPs for Object Storage nodes.
Each parameter is a map of network names to a list of addresses. Each network type must have at
least as many addresses as there will be nodes on that network. The director assigns addresses in
order. The first node of each type receives the first address on each respective list, the second node
receives the second address on each respective list, and so forth.
For example, if an Overcloud will contain three Ceph Storage nodes, the CephStorageIPs parameter
might look like:
CephStorageIPs:
  storage:
    - 172.16.1.100
    - 172.16.1.101
    - 172.16.1.102
  storage_mgmt:
    - 172.16.3.100
    - 172.16.3.101
    - 172.16.3.102
The first Ceph Storage node receives two addresses: 172.16.1.100 and 172.16.3.100. The second
receives 172.16.1.101 and 172.16.3.101, and the third receives 172.16.1.102 and 172.16.3.102. The
same pattern applies to the other node types.
Make sure the chosen IP addresses fall outside the allocation pools for each network defined in your
network environment file (see Section 6.2.2, Creating a Network Environment File). For example,
make sure the internal_api assignments fall outside of the InternalApiAllocationPools
range. This avoids any conflicts with the VIPs chosen for each network. Likewise, make sure the IP
assignments do not conflict with the VIP configuration defined for external load balancing (see
Section 6.5, Configuring External Load Balancing).
To apply this configuration during a deployment, include the environment file with the openstack
overcloud deploy command. If using network isolation (see Section 6.2, Isolating Networks),
include this file after the network-isolation.yaml file. For example:
$ openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e ~/templates/ips-from-pool-all.yaml [OTHER OPTIONS]
StorageNetworkVip: 172.18.0.30
StorageMgmtNetworkVip: 172.19.0.40
ServiceVips:
  redis: 172.16.0.31
Important
You must assign a VIP address for all networks if using this feature.
Select these IPs from their respective allocation pool ranges. For example, select the
InternalApiNetworkVip from the InternalApiAllocationPools range. The exception is
the ControlPlaneIP, which you select from outside the allocation range defined in the undercloud.conf file.
OS::TripleO::NodeUserData
Provides a Heat template that uses custom configuration on first boot. In this case, it installs
the openstack-heat-docker-agents container on the Compute nodes when they first
boot. This container provides a set of initialization scripts to configure the containerized
Compute node and Heat hooks to communicate with the director.
OS::TripleO::ComputePostDeployment
Provides a Heat template with a set of post-configuration resources for Compute nodes.
This includes a software configuration resource that provides a set of tags to Puppet:
ComputePuppetConfig:
  type: OS::Heat::SoftwareConfig
  properties:
    group: puppet
    options:
      enable_hiera: True
      enable_facter: False
      tags: package,file,concat,file_line,nova_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2
    inputs:
      - name: tripleo::packages::enable_install
        type: Boolean
        default: True
    outputs:
      - name: result
    config:
      get_file: ../puppet/manifests/overcloud_compute.pp
These tags define the Puppet modules to pass to the openstack-heat-docker-agents
container.
The docker.yaml file includes a parameter called NovaImage that replaces the standard
overcloud-full image with a different image (atomic-image) when provisioning Compute
nodes. See Section 6.4.2, Uploading the Atomic Host Image for instructions on uploading this
new image.
The docker.yaml file also includes a parameter_defaults section that defines the Docker
registry and images to use for our Compute node services. You can modify this section to use a
local registry instead of the default registry.access.redhat.com. See Section 6.4.3, Using a Local
Registry for instructions on configuring a local registry.
$ glance image-create --name atomic-image --file ~/images/rhel-atomic-cloud-7.2-12.x86_64.qcow2 --disk-format qcow2 --container-format bare
This imports the image alongside the other Overcloud images.
$ glance image-list
+--------------------------------------+------------------------+
| ID                                   | Name                   |
+--------------------------------------+------------------------+
| 27b5bad7-f8b2-4dd8-9f69-32dfe84644cf | atomic-image           |
| 08c116c6-8913-427b-b5b0-b55c18a01888 | bm-deploy-kernel       |
| aec4c104-0146-437b-a10b-8ebc351067b9 | bm-deploy-ramdisk      |
| 9012ce83-4c63-4cd7-a976-0c972be747cd | overcloud-full         |
| 376e95df-c1c1-4f2a-b5f3-93f639eb9972 | overcloud-full-initrd  |
| 0b5773eb-4c64-4086-9298-7f28606b68af | overcloud-full-vmlinuz |
+--------------------------------------+------------------------+
$ sudo docker tag registry.access.redhat.com/rhosp9_tech_preview/openstack-nova-compute:latest localhost:8787/registry.access.redhat.com/openstack-nova-compute:latest
$ sudo docker tag registry.access.redhat.com/rhosp9_tech_preview/openstack-data:latest localhost:8787/registry.access.redhat.com/openstack-data:latest
$ sudo docker tag registry.access.redhat.com/rhosp9_tech_preview/openstack-nova-libvirt:latest localhost:8787/registry.access.redhat.com/openstack-nova-libvirt:latest
$ sudo docker tag registry.access.redhat.com/rhosp9_tech_preview/openstack-neutron-openvswitch-agent:latest localhost:8787/registry.access.redhat.com/openstack-neutron-openvswitch-agent:latest
$ sudo docker tag registry.access.redhat.com/rhosp9_tech_preview/openstack-openvswitch-vswitchd:latest localhost:8787/registry.access.redhat.com/openstack-openvswitch-vswitchd:latest
$ sudo docker tag registry.access.redhat.com/rhosp9_tech_preview/openstack-openvswitch-db-server:latest localhost:8787/registry.access.redhat.com/openstack-openvswitch-db-server:latest
$ sudo docker tag registry.access.redhat.com/rhosp9_tech_preview/openstack-heat-docker-agents:latest localhost:8787/registry.access.redhat.com/openstack-heat-docker-agents:latest
Push them to the registry:
$ sudo docker push localhost:8787/registry.access.redhat.com/openstack-nova-compute:latest
$ sudo docker push localhost:8787/registry.access.redhat.com/openstack-data:latest
$ sudo docker push localhost:8787/registry.access.redhat.com/openstack-nova-libvirt:latest
$ sudo docker push localhost:8787/registry.access.redhat.com/openstack-neutron-openvswitch-agent:latest
$ sudo docker push localhost:8787/registry.access.redhat.com/openstack-openvswitch-vswitchd:latest
$ sudo docker push localhost:8787/registry.access.redhat.com/openstack-openvswitch-db-server:latest
$ sudo docker push localhost:8787/registry.access.redhat.com/openstack-heat-docker-agents:latest
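As an optional check (not part of the original procedure), you can list the locally tagged images to confirm they carry the localhost:8787 prefix before and after the push:
$ sudo docker images | grep localhost:8787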
Create a copy of the main docker.yaml environment file in the templates subdirectory:
$ cp /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml ~/templates/.
Edit the file and modify the resource_registry to use absolute paths:
resource_registry:
  OS::TripleO::ComputePostDeployment: /usr/share/openstack-tripleo-heat-templates/docker/compute-post.yaml
  OS::TripleO::NodeUserData: /usr/share/openstack-tripleo-heat-templates/docker/firstboot/install_docker_agents.yaml
For more information about configuring IPv6 in the Overcloud, see the dedicated IPv6 Networking
for the Overcloud guide for full instructions.
GlanceFilePcmkFstype
Defines the file system type that Pacemaker uses for image storage. Set to nfs.
GlanceFilePcmkDevice
The NFS share to mount for image storage. For example, 192.168.122.1:/export/glance.
GlanceFilePcmkOptions
The NFS mount options for the image storage.
The environment file's options should look similar to the following:
parameter_defaults:
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: false
  CinderEnableNfsBackend: true
  NovaEnableRbdBackend: false
  GlanceBackend: 'file'
  CinderNfsMountOptions: 'rw,sync'
  CinderNfsServers: '192.0.2.230:/cinder'
  GlanceFilePcmkManage: true
  GlanceFilePcmkFstype: 'nfs'
  GlanceFilePcmkDevice: '192.0.2.230:/glance'
  GlanceFilePcmkOptions: 'rw,sync,context=system_u:object_r:glance_var_lib_t:s0'
Important
Include the context=system_u:object_r:glance_var_lib_t:s0 option in the GlanceFilePcmkOptions parameter to allow glance access to the /var/lib directory. Without this SELinux context, glance will fail to write to the mount point.
These parameters are integrated as part of the heat template collection. Setting them as such
creates two NFS mount points for cinder and glance to use.
Save this file for inclusion in the Overcloud creation.
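For example, assuming you saved these settings to ~/templates/storage-environment.yaml, you would later include the file in the deployment with the -e option:
$ openstack overcloud deploy --templates -e ~/templates/storage-environment.yaml [OTHER OPTIONS]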
If you already have an existing Ceph Storage Cluster, you can integrate this during an
Overcloud deployment. This means you manage and scale the cluster outside of the
Overcloud configuration.
For more information about configuring Overcloud Ceph Storage, see the dedicated Red Hat Ceph
Storage for the Overcloud guide for full instructions on both scenarios.
Ensure you have a private key and certificate authority created. See Appendix A, SSL/TLS
Certificate Configuration for more information on creating a valid SSL/TLS key and certificate
authority file.
Important
The certificate authority contents require the same indentation level for all new
lines.
SSLKey
Copy the contents of the private key into the SSLKey parameter. For example:
parameter_defaults:
  ...
  SSLKey: |
    -----BEGIN RSA PRIVATE KEY-----
    MIIEowIBAAKCAQEAqVw8lnQ9RbeI1EdLN5PJP0lVO9hkJZnGP6qb6wtYUoy1bVP7
    ...
    ctlKn3rAAdyumi4JDjESAXHIKFjJNOLrBmpQyES4XpZUC7yhqPaU
    -----END RSA PRIVATE KEY-----
Important
The private key contents require the same indentation level for all new lines.
EndpointMap
The EndpointMap contains a mapping of the services using HTTPS and HTTP
communication. If using DNS for SSL communication, leave this section with the defaults.
However, if using an IP address for the SSL certificate's common name (see Appendix A, SSL/TLS Certificate Configuration), replace all instances of CLOUDNAME with IP_ADDRESS.
Use the following command to accomplish this:
$ sed -i 's/CLOUDNAME/IP_ADDRESS/' ~/templates/enable-tls.yaml
Important
Do not substitute IP_ADDRESS or CLOUDNAME for actual values. Heat replaces
these variables with the appropriate value during the Overcloud creation.
OS::TripleO::NodeTLSData
Change the resource path for OS::TripleO::NodeTLSData: to an absolute path:
resource_registry:
  OS::TripleO::NodeTLSData: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/tls/tls-cert-inject.yaml
Important
The certificate authority contents require the same indentation level for all new
lines.
OS::TripleO::NodeTLSCAData
Gives the hostname of the content delivery server to use to receive updates. The default is
https://cdn.redhat.com. Since Satellite 6 hosts its own content, the URL must be used for
systems registered with Satellite 6. The base URL for content uses the form of
https://hostname:port/prefix.
rhel_reg_org
The organization to use for registration.
rhel_reg_environment
The environment to use within the chosen organization.
rhel_reg_repos
A comma-separated list of repositories to enable. See Section 2.5, Repository
Requirements for repositories to enable.
rhel_reg_activation_key
The activation key to use for registration.
rhel_reg_user; rhel_reg_password
The username and password for registration. If possible, use activation keys for registration.
rhel_reg_machine_name
The machine name. Leave this blank to use the hostname of the node.
rhel_reg_force
Set to true to force your registration options. For example, when re-registering nodes.
rhel_reg_sat_repo
The repository containing Red Hat Satellite 6's management tools, such as katello-agent. For example, rhel-7-server-satellite-tools-6.1-rpms.
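As a sketch, an environment-rhel-registration.yaml file using an activation key might set a subset of these parameters as follows; the organization, key, and repository values are placeholders, and the exact layout should follow the file shipped in the rhel-registration directory:
parameter_defaults:
  rhel_reg_org: "1234567"
  rhel_reg_activation_key: "overcloud-nodes"
  rhel_reg_repos: "rhel-7-server-rpms,rhel-7-server-satellite-tools-6.1-rpms"
  rhel_reg_force: "true"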
The deployment command (openstack overcloud deploy) in Chapter 7, Creating the Overcloud uses the -e option to add environment files. Add both ~/templates/rhel-registration/environment-rhel-registration.yaml and ~/templates/rhel-registration/rhel-registration-resource-registry.yaml. For example:
$ openstack overcloud deploy --templates [...] -e /home/stack/templates/rhel-registration/environment-rhel-registration.yaml -e /home/stack/templates/rhel-registration/rhel-registration-resource-registry.yaml
Important
Registration is set as the OS::TripleO::NodeExtraConfig Heat resource. This
means you can only use this resource for registration. See Section 6.14, Customizing
Overcloud Pre-Configuration for more information.
The director provides a mechanism to perform configuration on all nodes upon the initial creation of
the Overcloud. The director achieves this through cloud-init, which you can call using the
OS::TripleO::NodeUserData resource type.
In this example, you will update the nameserver with a custom IP address on all nodes. You must
first create a basic heat template (/home/stack/templates/nameserver.yaml) that runs a
script to append each node's resolv.conf with a specific nameserver. You can use the OS::Heat::MultipartMime resource type to send the configuration script.
heat_template_version: 2014-10-16

description: >
  Extra hostname configuration

resources:
  userdata:
    type: OS::Heat::MultipartMime
    properties:
      parts:
      - config: {get_resource: nameserver_config}

  nameserver_config:
    type: OS::Heat::SoftwareConfig
    properties:
      config: |
        #!/bin/bash
        echo "nameserver 192.168.1.1" >> /etc/resolv.conf

outputs:
  OS::stack_id:
    value: {get_resource: userdata}
Next, create an environment file (/home/stack/templates/firstboot.yaml) that registers
your heat template as the OS::TripleO::NodeUserData resource type.
resource_registry:
  OS::TripleO::NodeUserData: /home/stack/templates/nameserver.yaml
To add the first boot configuration, add the environment file to the stack when first creating the
Overcloud. For example:
$ openstack overcloud deploy --templates -e
/home/stack/templates/firstboot.yaml
The -e applies the environment file to the Overcloud stack.
This adds the configuration to all nodes when they are first created and boot for the first time.
Subsequent inclusions of these templates, such as when updating the Overcloud stack, do not run these scripts.
Important
You can only register the OS::TripleO::NodeUserData to one heat template.
Subsequent usage overrides the heat template to use.
outputs:
  deploy_stdout:
    description: Deployment reference, used to trigger pre-deploy on changes
    value: {get_attr: [ExtraPreDeployment, deploy_stdout]}
Important
The server parameter is the server on which to apply the configuration, and the parent template provides it. This parameter is mandatory in all pre-configuration templates.
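The fragment above shows only the outputs section of such a template. For orientation, a complete pre-configuration template might look roughly like the following sketch; the ExtraPreConfig resource name is illustrative, while ExtraPreDeployment, the server parameter, and the nameserver_ip parameter come from this example:
heat_template_version: 2014-10-16

description: >
  Extra hostname configuration

parameters:
  server:
    type: string
  nameserver_ip:
    type: string

resources:
  ExtraPreConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template: |
            #!/bin/sh
            echo "nameserver _NAMESERVER_IP_" >> /etc/resolv.conf
          params:
            _NAMESERVER_IP_: {get_param: nameserver_ip}

  ExtraPreDeployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: {get_resource: ExtraPreConfig}
      server: {get_param: server}
      actions: ['CREATE','UPDATE']

outputs:
  deploy_stdout:
    description: Deployment reference, used to trigger pre-deploy on changes
    value: {get_attr: [ExtraPreDeployment, deploy_stdout]}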
Next, create an environment file (/home/stack/templates/pre_config.yaml) that registers
your heat template as the OS::TripleO::NodeExtraConfig resource type.
resource_registry:
  OS::TripleO::NodeExtraConfig: /home/stack/templates/nameserver.yaml

parameter_defaults:
  nameserver_ip: 192.168.1.1
To apply the configuration, add the environment file to the stack when creating or updating the
Overcloud. For example:
$ openstack overcloud deploy --templates -e
/home/stack/templates/pre_config.yaml
This applies the configuration to all nodes before the core configuration begins on either the initial
Overcloud creation or subsequent updates.
Important
You can only register each of these resources to one Heat template. Subsequent usage overrides the Heat template to use per resource.
  servers:
    type: json
  nameserver_ip:
    type: string

resources:
  ExtraConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template: |
            #!/bin/sh
            echo "nameserver _NAMESERVER_IP_" >> /etc/resolv.conf
          params:
            _NAMESERVER_IP_: {get_param: nameserver_ip}

  ExtraDeployments:
    type: OS::Heat::SoftwareDeployments
    properties:
      servers: {get_param: servers}
      config: {get_resource: ExtraConfig}
      actions: ['CREATE','UPDATE']
Important
The servers parameter is the list of servers on which to apply the configuration, and the parent template provides it. This parameter is mandatory in all OS::TripleO::NodeExtraConfigPost templates.
Next, create an environment file (/home/stack/templates/post_config.yaml) that registers your heat template as the OS::TripleO::NodeExtraConfigPost resource type.
resource_registry:
  OS::TripleO::NodeExtraConfigPost: /home/stack/templates/nameserver.yaml

parameter_defaults:
  nameserver_ip: 192.168.1.1
To apply the configuration, add the environment file to the stack when creating or updating the
Overcloud. For example:
$ openstack overcloud deploy --templates -e
/home/stack/templates/post_config.yaml
This applies the configuration to all nodes after the core configuration completes on either initial
Overcloud creation or subsequent updates.
Important
You can only register the OS::TripleO::NodeExtraConfigPost resource to one Heat template. Subsequent usage overrides the Heat template to use.
after the main configuration completes. As a basic example, you might intend to install motd to each
node. The process for accomplishing this is to first create a Heat template (/home/stack/templates/custom_puppet_config.yaml) that launches the Puppet configuration.
heat_template_version: 2014-10-16

description: >
  Run Puppet extra configuration to set new MOTD

parameters:
  servers:
    type: json

resources:
  ExtraPuppetConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      config: {get_file: motd.pp}
      group: puppet
      options:
        enable_hiera: True
        enable_facter: False

  ExtraPuppetDeployments:
    type: OS::Heat::SoftwareDeployments
    properties:
      config: {get_resource: ExtraPuppetConfig}
      servers: {get_param: servers}
This includes the /home/stack/templates/motd.pp within the template and passes it to nodes
for configuration. The motd.pp file itself contains the Puppet classes to install and configure motd.
Next, create an environment file (/home/stack/templates/puppet_post_config.yaml) that
registers your heat template as the OS::TripleO::NodeExtraConfigPost resource type.
resource_registry:
  OS::TripleO::NodeExtraConfigPost: /home/stack/templates/custom_puppet_config.yaml
And finally include this environment file when creating or updating the Overcloud stack:
$ openstack overcloud deploy --templates -e
/home/stack/templates/puppet_post_config.yaml
This applies the configuration from motd.pp to all nodes in the Overcloud.
$ cp -r /usr/share/openstack-tripleo-heat-templates ~/templates/my-overcloud
This creates a clone of the Overcloud Heat templates. When running openstack overcloud deploy, use the --templates option to specify your local template directory. This occurs later in this guide (see Chapter 7, Creating the Overcloud).
Note
The director uses the default template directory (/usr/share/openstack-tripleo-heat-templates) if you specify the --templates option without a directory.
Important
Red Hat provides updates to the heat template collection over subsequent releases. Using
a modified template collection can lead to a divergence between your custom copy and
the original copy in /usr/share/openstack-tripleo-heat-templates. Red Hat
recommends using the methods in the following sections instead of modifying the heat
template collection:
Section 6.14, Customizing Overcloud Pre-Configuration
Section 6.15, Customizing Overcloud Post-Configuration
Section 6.16, Customizing Puppet Configuration Data
Section 6.17, Applying Custom Puppet Configuration
If creating a copy of the heat template collection, you should track changes to the
templates using a version control system such as git.
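For example, if you do keep a modified copy, you might initialize a repository in the copied directory and commit the unmodified templates as a baseline before making changes:
$ cd ~/templates/my-overcloud
$ git init
$ git add .
$ git commit -m "Initial copy of openstack-tripleo-heat-templates"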
Warning
Do not run openstack overcloud deploy as a background process. The Overcloud
creation might hang in mid-deployment if started as a background process.
Parameter / Description / Example

--templates [TEMPLATES]
    Example: ~/templates/my-overcloud
    Default: /usr/share/openstack-tripleo-heat-templates/

--stack STACK
    Example: overcloud

-t [TIMEOUT], --timeout [TIMEOUT]
    Example: 240

--control-scale [CONTROL_SCALE]

--compute-scale [COMPUTE_SCALE]

--ceph-storage-scale [CEPH_STORAGE_SCALE]

--block-storage-scale [BLOCK_STORAGE_SCALE]

--swift-storage-scale [SWIFT_STORAGE_SCALE]

--control-flavor [CONTROL_FLAVOR]
    Example: control

--compute-flavor [COMPUTE_FLAVOR]
    Example: compute

--ceph-storage-flavor [CEPH_STORAGE_FLAVOR]
    Example: ceph-storage

--block-storage-flavor [BLOCK_STORAGE_FLAVOR]
    Example: cinder-storage

--swift-storage-flavor [SWIFT_STORAGE_FLAVOR]
    Example: swift-storage

--neutron-flat-networks [NEUTRON_FLAT_NETWORKS]
    Example: datacentre

--neutron-physical-bridge [NEUTRON_PHYSICAL_BRIDGE]
    (DEPRECATED) An Open vSwitch bridge to create on each hypervisor. This defaults to "br-ex". Typically, this should not need to be changed.
    Example: br-ex

--neutron-bridge-mappings [NEUTRON_BRIDGE_MAPPINGS]
    Example: datacentre:br-ex

--neutron-public-interface [NEUTRON_PUBLIC_INTERFACE]
    Example: nic1, eth0

--neutron-network-type [NEUTRON_NETWORK_TYPE]
    Example: gre or vxlan

--neutron-tunnel-types [NEUTRON_TUNNEL_TYPES]
    Example: vxlan gre,vxlan

--neutron-tunnel-id-ranges [NEUTRON_TUNNEL_ID_RANGES]
    (DEPRECATED) Ranges of GRE tunnel IDs to make available for tenant network allocation.
    Example: 1:1000

--neutron-vni-ranges [NEUTRON_VNI_RANGES]
    (DEPRECATED) Ranges of VXLAN VNI IDs to make available for tenant network allocation.
    Example: 1:1000

--neutron-disable-tunneling
    (DEPRECATED) Disables tunneling in case you aim to use a VLAN segmented network or flat network with Neutron.

--neutron-network-vlan-ranges [NEUTRON_NETWORK_VLAN_RANGES]
    Example: datacentre:1:1000

--neutron-mechanism-drivers [NEUTRON_MECHANISM_DRIVERS]
    (DEPRECATED) The mechanism drivers for the neutron tenant network. Defaults to "openvswitch". To specify multiple values, use a comma-separated string.
    Example: openvswitch,l2population

--libvirt-type [LIBVIRT_TYPE]
    Example: kvm,qemu

--ntp-server [NTP_SERVER]
    Example: pool.ntp.org

--no-proxy [NO_PROXY]

--overcloud-ssh-user OVERCLOUD_SSH_USER
    Example: ocuser

-e
    The order of environment files passed to the openstack overcloud deploy command is important. For example, parameters from each sequential environment file override the same parameters from earlier environment files.
    Example: -e ~/templates/my-config.yaml

--environment-directory
    Example: --environment-directory ~/templates
--validation-errors-fatal

--validation-warnings-fatal

--dry-run

--force-postconfig
    Example: --force-postconfig

--answers-file ANSWERS_FILE
    Example: --answers-file ~/answers.yaml

--rhel-reg

--reg-method

--reg-org [REG_ORG]

--reg-force

--reg-sat-url [REG_SAT_URL]
    If using a Red Hat Satellite 6 server, the Overcloud obtains the katello-ca-consumer-latest.noarch.rpm file, registers with subscription-manager, and installs katello-agent. If a Red Hat Satellite 5 server, the Overcloud obtains the RHN-ORG-TRUSTED-SSL-CERT file and registers with rhnreg_ks.

--reg-activation-key [REG_ACTIVATION_KEY]
Note
Run the following command for a full list of options:
$ openstack help overcloud deploy
Important
Save the original deployment command for later use and modification. For example, save
your deployment command in a script file called deploy-overcloud.sh:
#!/bin/bash
openstack overcloud deploy --templates \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e ~/templates/network-environment.yaml \
-e ~/templates/storage-environment.yaml \
-t 150 \
--control-scale 3 \
--compute-scale 3 \
--ceph-storage-scale 3 \
--swift-storage-scale 0 \
--block-storage-scale 0 \
--compute-flavor compute \
--control-flavor control \
--ceph-storage-flavor ceph-storage \
--swift-storage-flavor swift-storage \
--block-storage-flavor block-storage \
--ntp-server pool.ntp.org \
--neutron-network-type vxlan \
--neutron-tunnel-types vxlan \
--libvirt-type qemu
This retains the Overcloud deployment command's parameters and environment files for
future use, such as Overcloud modifications and scaling. You can then edit and rerun this
script to suit future customizations to the Overcloud.
/usr/share/openstack-tripleo-heat-templates.
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml - The -e option adds an additional environment file to the Overcloud deployment. In this case, it is an environment file that initializes network isolation configuration.
-e ~/templates/network-environment.yaml - The -e option adds an additional
environment file to the Overcloud deployment. In this case, it is the network environment file from
Section 6.2.2, Creating a Network Environment File.
-e ~/templates/storage-environment.yaml - The -e option adds an additional
environment file to the Overcloud deployment. In this case, it is a custom environment file that
initializes our storage configuration.
--control-scale 3 - Scale the Controller nodes to three.
--compute-scale 3 - Scale the Compute nodes to three.
--ceph-storage-scale 3 - Scale the Ceph Storage nodes to three.
--control-flavor control - Use a specific flavor for the Controller nodes.
--compute-flavor compute - Use a specific flavor for the Compute nodes.
--ceph-storage-flavor ceph-storage - Use a specific flavor for the Ceph Storage nodes.
--ntp-server pool.ntp.org - Use an NTP server for time synchronization. This is useful
for keeping the Controller node cluster in synchronization.
--neutron-network-type vxlan - Use Virtual Extensible LAN (VXLAN) for the neutron
networking in the Overcloud.
--neutron-tunnel-types vxlan - Use Virtual Extensible LAN (VXLAN) for neutron
tunneling in the Overcloud.
The heat stack-list --show-nested command shows the current stage of the Overcloud
creation.
88
$ source ~/overcloudrc
This loads the necessary environment variables to interact with your Overcloud from the director host's CLI. To return to interacting with the director's host, run the following command:
$ source ~/stackrc
Each node in the Overcloud also contains a user called heat-admin. The stack user has SSH
access to this user on each node. To access a node over SSH, find the IP address of the desired
node:
$ nova list
Then connect to the node using the heat-admin user and the node's IP address:
$ ssh [email protected]
+----------------------------------+------------------+
| 6226a517204846d1a26d15aae1af208f | swiftoperator    |
| 7c7eb03955e545dd86bbfeb73692738b | heat_stack_owner |
+----------------------------------+------------------+
If the role does not exist, create it:
$ keystone role-create --name heat_stack_owner
Install the Tempest toolset:
$ sudo yum install openstack-tempest
Set up a tempest directory in your stack user's home directory and copy a local version of the Tempest suite:
$ mkdir ~/tempest
$ cd ~/tempest
$ /usr/share/openstack-tempest-10.0.0/tools/configure-tempest-directory
This creates a local version of the Tempest tool set.
After the Overcloud creation process completed, the director created a file named ~/tempest-deployer-input.conf. This file provides a set of Tempest configuration options relevant to your Overcloud. Run the following command to use this file to configure Tempest:
$ tools/config_tempest.py --deployer-input ~/tempest-deployer-input.conf --debug --create identity.uri $OS_AUTH_URL identity.admin_password $OS_PASSWORD --network-id d474fe1f-222d-4e32-9242-cd1fefe9c14b
The $OS_AUTH_URL and $OS_PASSWORD environment variables use values set from the
overcloudrc file sourced previously. The --network-id is the UUID of the external network
created in Section 8.2, Creating the Overcloud External Network.
Important
The configuration script downloads the Cirros image for the Tempest tests. Make sure the
director has access to the Internet or uses a proxy with access to the Internet. Set the
http_proxy environment variable to use a proxy for command line operations.
Run the full suite of Tempest tests with the following command:
$ tools/run-tests.sh
Note
The full Tempest test suite might take hours. Alternatively, run part of the tests using the
'.*smoke' option.
$ tools/run-tests.sh '.*smoke'
Each test runs against the Overcloud, and the subsequent output displays each test and its result.
You can see more information about each test in the tempest.log file generated in the same
directory. For example, the output might show the following failed test:
{2} tempest.api.compute.servers.test_servers.ServersTestJSON.test_create_specify_keypair [18.305114s] ... FAILED
This corresponds to a log entry that contains more information. Search the log for the last two parts
of the test namespace separated with a colon. In this example, search for
ServersTestJSON:test_create_specify_keypair in the log:
$ grep "ServersTestJSON:test_create_specify_keypair" tempest.log -A 4
2016-03-17 14:49:31.123 10999 INFO tempest_lib.common.rest_client [req-a7a29a52-0a52-4232-9b57-c4f953280e2c ] Request (ServersTestJSON:test_create_specify_keypair): 500 POST http://192.168.201.69:8774/v2/2f8bef15b284456ba58d7b149935cbc8/os-keypairs 4.331s
2016-03-17 14:49:31.123 10999 DEBUG tempest_lib.common.rest_client [req-a7a29a52-0a52-4232-9b57-c4f953280e2c ] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}
        Body: {"keypair": {"name": "tempest-key-722237471"}}
    Response - Headers: {'status': '500', 'content-length': '128', 'x-compute-request-id': 'req-a7a29a52-0a52-4232-9b57-c4f953280e2c', 'connection': 'close', 'date': 'Thu, 17 Mar 2016 04:49:31 GMT', 'content-type': 'application/json; charset=UTF-8'}
        Body: {"computeFault": {"message": "The server has either erred or is incapable of performing the requested operation.", "code": 500}} _log_request_full /usr/lib/python2.7/site-packages/tempest_lib/common/rest_client.py:414
Note
The -A 4 option shows the next four lines, which are usually the request header and
body, then the response header and body.
After completing the validation, remove any temporary connections to the Overcloud's Internal API.
In this example, use the following commands to remove the previously created VLAN on the
Undercloud:
$ source ~/stackrc
$ sudo ovs-vsctl del-port vlan201
The director uses Pacemaker to provide a highly available cluster of Controller nodes. Pacemaker
uses a process called STONITH (Shoot-The-Other-Node-In-The-Head) to help fence faulty nodes.
By default, STONITH is disabled on your cluster and requires manual configuration so that
Pacemaker can control the power management of each node in the cluster.
Note
Log in to each node as the heat-admin user from the stack user on the director. The Overcloud creation automatically copies the stack user's SSH key to each node's heat-admin user.
Verify you have a running cluster with pcs status:
$ sudo pcs status
Cluster name: openstackHA
Last updated: Wed Jun 24 12:40:27 2015
Last change: Wed Jun 24 11:36:18 2015
Stack: corosync
Current DC: lb-c1a2 (2) - partition with quorum
Version: 1.1.12-a14efad
3 Nodes configured
141 Resources configured
Verify that stonith is disabled with pcs property show:
$ sudo pcs property show
Cluster Properties:
cluster-infrastructure: corosync
cluster-name: openstackHA
dc-version: 1.1.12-a14efad
have-watchdog: false
stonith-enabled: false
The Controller nodes contain a set of fencing agents for the various power management devices the
director supports. This includes:
Table 8.1. Fence Agents

Device / Type
fence_ipmilan
fence_idrac, fence_drac5
fence_ilo
fence_ucs
fence_xvm, fence_virt
The rest of this section uses the IPMI agent (fence_ipmilan) as an example.
View a full list of IPMI options that Pacemaker supports:
$ sudo pcs stonith describe fence_ipmilan
Each node requires configuration of IPMI devices to control the power management. This involves
adding a stonith device to Pacemaker for each node. Use the following commands for the cluster:
Note
The second command in each example is to prevent the node from asking to fence itself.
For Controller node 0:
$ sudo pcs stonith create my-ipmilan-for-controller-0 fence_ipmilan
pcmk_host_list=overcloud-controller-0 ipaddr=192.0.2.205 login=admin
passwd=p@55w0rd! lanplus=1 cipher=1 op monitor interval=60s
$ sudo pcs constraint location my-ipmilan-for-controller-0 avoids
overcloud-controller-0
For Controller node 1:
$ sudo pcs stonith create my-ipmilan-for-controller-1 fence_ipmilan
pcmk_host_list=overcloud-controller-1 ipaddr=192.0.2.206 login=admin
passwd=p@55w0rd! lanplus=1 cipher=1 op monitor interval=60s
$ sudo pcs constraint location my-ipmilan-for-controller-1 avoids
overcloud-controller-1
For Controller node 2:
$ sudo pcs stonith create my-ipmilan-for-controller-2 fence_ipmilan
pcmk_host_list=overcloud-controller-2 ipaddr=192.0.2.207 login=admin
passwd=p@55w0rd! lanplus=1 cipher=1 op monitor interval=60s
$ sudo pcs constraint location my-ipmilan-for-controller-2 avoids
overcloud-controller-2
Run the following command to see all stonith resources:
$ sudo pcs stonith show
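After you define a fencing device for every node in the cluster, you would typically re-enable STONITH and verify the property; for example, using the same pcs property commands shown elsewhere in this guide:
$ sudo pcs property set stonith-enabled=true
$ sudo pcs property show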
Important
Each VM disk has to be copied from the existing OpenStack environment and into the new
Red Hat OpenStack Platform. Snapshots using QCOW will lose their original layering
system.
Node Type / Scale Up? / Scale Down? / Notes
Controller
Compute
Important
Make sure to leave at least 10 GB free space before scaling the Overcloud. This free
space accommodates image conversion and caching during the node provisioning
process.
{
"mac":[
"dd:dd:dd:dd:dd:dd"
],
"cpu":"4",
"memory":"6144",
"disk":"40",
"arch":"x86_64",
"pm_type":"pxe_ipmitool",
"pm_user":"admin",
"pm_password":"p@55w0rd!",
"pm_addr":"192.0.2.207"
},
{
"mac":[
"ee:ee:ee:ee:ee:ee"
],
"cpu":"4",
"memory":"6144",
"disk":"40",
"arch":"x86_64",
"pm_type":"pxe_ipmitool",
"pm_user":"admin",
"pm_password":"p@55w0rd!",
"pm_addr":"192.0.2.208"
}
]
}
See Section 5.1, Registering Nodes for the Overcloud for an explanation of these parameters.
Run the following command to register these nodes:
$ openstack baremetal import --json newnodes.json
After registering the new nodes, launch the introspection process for them. Use the following
commands for each new node:
$ ironic node-set-provision-state [NODE UUID] manage
$ openstack baremetal introspection start [NODE UUID]
$ ironic node-set-provision-state [NODE UUID] provide
This detects and benchmarks the hardware properties of the nodes.
After the introspection process completes, tag each new node for its desired role. For example, for a
Compute node, use the following command:
$ ironic node-update [NODE UUID] add
properties/capabilities='profile:compute,boot_option:local'
Set the boot images to use during the deployment. Find the UUIDs for the bm-deploy-kernel and
bm-deploy-ramdisk images:
$ glance image-list
+--------------------------------------+------------------------+
| ID                                   | Name                   |
+--------------------------------------+------------------------+
| 09b40e3d-0382-4925-a356-3a4b4f36b514 | bm-deploy-kernel       |
| 765a46af-4417-4592-91e5-a300ead3faf6 | bm-deploy-ramdisk      |
| ef793cd0-e65c-456a-a675-63cd57610bd5 | overcloud-full         |
| 9a51a6cb-4670-40de-b64b-b70f4dd44152 | overcloud-full-initrd  |
| 4f7e33f4-d617-47c1-b36f-cbe90f132e5d | overcloud-full-vmlinuz |
+--------------------------------------+------------------------+
Set these UUIDs for the new node's deploy_kernel and deploy_ramdisk settings:
$ ironic node-update [NODE UUID] add
driver_info/deploy_kernel='09b40e3d-0382-4925-a356-3a4b4f36b514'
$ ironic node-update [NODE UUID] add
driver_info/deploy_ramdisk='765a46af-4417-4592-91e5-a300ead3faf6'
Scaling the Overcloud requires running the openstack overcloud deploy again with the
desired number of nodes for a role. For example, to scale to 5 Compute nodes:
$ openstack overcloud deploy --templates --compute-scale 5
[OTHER_OPTIONS]
This updates the entire Overcloud stack. Note that this only updates the stack. It does not delete the
Overcloud and replace the stack.
Important
Make sure to include all environment files and options from your initial Overcloud creation.
This includes the same scale parameters for non-Compute nodes.
$ source ~/stack/overcloudrc
$ nova service-list
$ nova service-disable [hostname] nova-compute
$ source ~/stack/stackrc
Removing Overcloud nodes requires an update to the overcloud stack in the director using the
local template files. First identify the UUID of the Overcloud stack:
$ heat stack-list
Identify the UUIDs of the nodes to delete:
$ nova list
Run the following command to delete the nodes from the stack and update the plan accordingly:
$ openstack overcloud node delete --stack [STACK_UUID] --templates -e
[ENVIRONMENT_FILE] [NODE1_UUID] [NODE2_UUID] [NODE3_UUID]
Important
If you passed any extra environment files when you created the Overcloud, pass them
here again using the -e or --environment-file option to avoid making undesired
manual changes to the Overcloud.
Important
Make sure the openstack overcloud node delete command runs to completion
before you continue. Use the openstack stack list command and check the
overcloud stack has reached an UPDATE_COMPLETE status.
Finally, remove the node's Compute service:
$ source ~/stack/overcloudrc
$ nova service-delete [service-id]
$ source ~/stack/stackrc
And remove the node's Open vSwitch agent:
$ source ~/stack/overcloudrc
$ neutron agent-list
$ neutron agent-delete [openvswitch-service-id]
$ source ~/stack/stackrc
You are now free to remove the node from the Overcloud and re-provision it for other purposes.
This process ensures that a node can be replaced without affecting the availability of any instances.
The output should show all services running on the existing nodes and stopped on the failed
node.
5. Check the following parameters on each node of the Overcloud's MariaDB cluster:
wsrep_local_state_comment: Synced
wsrep_cluster_size: 2
Use the following command to check these parameters on each running Controller node
(respectively using 192.168.0.47 and 192.168.0.46 for IP addresses):
$ for i in 192.168.0.47 192.168.0.46 ; do echo "*** $i ***" ;
ssh heat-admin@$i "sudo mysql --exec=\"SHOW STATUS LIKE
'wsrep_local_state_comment'\" ; sudo mysql --exec=\"SHOW
STATUS LIKE 'wsrep_cluster_size'\""; done
6. Check the RabbitMQ status. For example, if 192.168.0.47 is the IP address of a running
Controller node, use the following command to get the status
$ ssh [email protected] "sudo rabbitmqctl cluster_status"
The running_nodes key should only show the two available nodes and not the failed
node.
7. Disable fencing, if enabled. For example, if 192.168.0.47 is the IP address of a running Controller node, use the following command to disable fencing:
$ ssh [email protected] "sudo pcs property set stonith-enabled=false"
Check the fencing status with the following command:
$ ssh [email protected] "sudo pcs property show stonith-enabled"
8. Check the nova-compute service on the director node:
$ sudo systemctl status openstack-nova-compute
$ nova hypervisor-list
The output should show all non-maintenance mode nodes as up.
9. Make sure all Undercloud services are running:
$ sudo systemctl -t service
+--------------------------------------+------------------------+
| ID                                   | Name                   |
+--------------------------------------+------------------------+
| 861408be-4027-4f53-87a6-cd3cf206ba7a | overcloud-compute-0    |
| 0966e9ae-f553-447a-9929-c4232432f718 | overcloud-compute-1    |
| 9c08fa65-b38c-4b2e-bd47-33870bff06c7 | overcloud-compute-2    |
| a7f0f5e1-e7ce-4513-ad2b-81146bc8c5af | overcloud-controller-0 |
| cfefaf60-8311-4bc3-9416-6a824a40a9ae | overcloud-controller-1 |
| 97a055d4-aefd-481c-82b7-4a5f384036d2 | overcloud-controller-2 |
+--------------------------------------+------------------------+
In this example, the aim is to remove the overcloud-controller-1 node and replace it with
overcloud-controller-3. First, set the node into maintenance mode so the director does not
reprovision the failed node. Correlate the instance ID from nova list with the node ID from
ironic node-list
[stack@director ~]$ ironic node-list
+--------------------------------------+------+--------------------------------------+
| UUID                                 | Name | Instance UUID                        |
+--------------------------------------+------+--------------------------------------+
| 36404147-7c8a-41e6-8c72-a6e90afc7584 | None | 7bee57cf-4a58-4eaf-b851-2a8bf6620e48 |
| 91eb9ac5-7d52-453c-a017-c0e3d823efd0 | None | None                                 |
| 75b25e9a-948d-424a-9b3b-f0ef70a6eacf | None | None                                 |
| 038727da-6a5c-425f-bd45-fda2f4bd145b | None | 763bfec2-9354-466a-ae65-2401c13e07e5 |
| dc2292e6-4056-46e0-8848-d6e96df1f55d | None | 2017b481-706f-44e1-852a-2ee857c303c4 |
| c7eadcea-e377-4392-9fc3-cf2b02b7ec29 | None | 5f73c7d7-4826-49a5-b6be-8bfd558f3b41 |
| da3a8d19-8a59-4e9d-923a-6a336fe10284 | None | cfefaf60-8311-4bc3-9416-6a824a40a9ae |
| 807cb6ce-6b94-4cd1-9969-5c47560c2eee | None | c07c13e6-a845-4791-9628-260110829c3a |
+--------------------------------------+------+--------------------------------------+
Set the node into maintenance mode:
[stack@director ~]$ ironic node-set-maintenance da3a8d19-8a59-4e9d-923a-6a336fe10284 true
Tag the new node with the control profile.
[stack@director ~]$ ironic node-update 75b25e9a-948d-424a-9b3b-f0ef70a6eacf add properties/capabilities='profile:control,boot_option:local'
Create a YAML file (~/templates/remove-controller.yaml) that defines the node index to
remove:
parameters:
  ControllerRemovalPolicies:
    [{'resource_list': ['1']}]
Important
If replacing the node with index 0, edit the heat templates and change the bootstrap node
index and node validation index before starting replacement. Create a copy of the
director's Heat template collection (see Section 6.18, Using Customized Core Heat Templates) and run the following command on the overcloud.yaml file:
$ sudo sed -i "s/resource\.0/resource.1/g" ~/templates/my-overcloud/overcloud.yaml
This changes the node index for the following resources:
ControllerBootstrapNodeConfig:
  type: OS::TripleO::BootstrapNode::SoftwareConfig
  properties:
    bootstrap_nodeid: {get_attr: [Controller, resource.0.hostname]}
    bootstrap_nodeid_ip: {get_attr: [Controller, resource.0.ip_address]}

And:

AllNodesValidationConfig:
  type: OS::TripleO::AllNodes::Validation
  properties:
    PingTestIps:
      list_join:
      - ' '
      - - {get_attr: [Controller, resource.0.external_ip_address]}
        - {get_attr: [Controller, resource.0.internal_api_ip_address]}
        - {get_attr: [Controller, resource.0.storage_ip_address]}
        - {get_attr: [Controller, resource.0.storage_mgmt_ip_address]}
        - {get_attr: [Controller, resource.0.tenant_ip_address]}
After identifying the node index, redeploy the Overcloud and include the remove-controller.yaml environment file:
[stack@director ~]$ openstack overcloud deploy --templates --control-scale 3 -e ~/templates/remove-controller.yaml [OTHER OPTIONS]
Important
If you passed any extra environment files or options when you created the Overcloud,
pass them again here to avoid making undesired changes to the Overcloud.
However, note that the -e ~/templates/remove-controller.yaml is only required once in
this instance.
The director removes the old node, creates a new one, and updates the Overcloud stack. You can
check the status of the Overcloud stack with the following command:
[stack@director ~]$ heat stack-list --show-nested
...
...
...
...
...
...
...
...
+-------------------------+
| Networks                |
+-------------------------+
| ctlplane=192.168.0.44   |
| ctlplane=192.168.0.47   |
| ctlplane=192.168.0.46   |
| ctlplane=192.168.0.48   |
+-------------------------+
ring0_addr: overcloud-controller-2
nodeid: 3
}
}
Note the nodeid value of the removed node for later. In this example, it is 2.
3. Delete the failed node from the Corosync configuration on each node and restart Corosync.
For this example, log into overcloud-controller-0 and overcloud-controller-2
and run the following commands:
[stack@director] ssh [email protected] "sudo pcs cluster localnode remove overcloud-controller-1"
[stack@director] ssh [email protected] "sudo pcs cluster reload corosync"
[stack@director] ssh [email protected] "sudo pcs cluster localnode remove overcloud-controller-1"
[stack@director] ssh [email protected] "sudo pcs cluster reload corosync"
4. Log into one of the remaining nodes and delete the node from the cluster with the
crm_node command:
[stack@director] ssh [email protected]
[heat-admin@overcloud-controller-0 ~]$ sudo crm_node -R
overcloud-controller-1 --force
Stay logged into this node.
5. Delete the failed node from the RabbitMQ cluster:
[heat-admin@overcloud-controller-0 ~]$ sudo rabbitmqctl
forget_cluster_node rabbit@overcloud-controller-1
6. Delete the failed node from MongoDB. First, find the IP address for the node's Internal API connection.
[heat-admin@overcloud-controller-0 ~]$ sudo netstat -tulnp | grep 27017
tcp   0   0 192.168.0.47:27017   0.0.0.0:*   LISTEN   13415/mongod
Check that the node is the primary replica set:
[root@overcloud-controller-0 ~]# echo "db.isMaster()" | mongo --host 192.168.0.47:27017
MongoDB shell version: 2.6.11
connecting to: 192.168.0.47:27017/echo
{
"setName" : "tripleo",
"setVersion" : 1,
"ismaster" : true,
"secondary" : false,
"hosts" : [
"192.168.0.47:27017",
"192.168.0.46:27017",
"192.168.0.45:27017"
],
"primary" : "192.168.0.47:27017",
"me" : "192.168.0.47:27017",
"electionId" : ObjectId("575919933ea8637676159d28"),
"maxBsonObjectSize" : 16777216,
"maxMessageSizeBytes" : 48000000,
"maxWriteBatchSize" : 1000,
"localTime" : ISODate("2016-06-09T09:02:43.340Z"),
"maxWireVersion" : 2,
"minWireVersion" : 0,
"ok" : 1
}
bye
This should indicate if the current node is the primary. If not, use the IP address of the node
indicated in the primary key.
Connect to MongoDB on the primary node:
[heat-admin@overcloud-controller-0 ~]$ mongo --host 192.168.0.47
MongoDB shell version: 2.6.9
connecting to: 192.168.0.47:27017/test
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
tripleo:PRIMARY>
Check the status of the MongoDB cluster:
tripleo:PRIMARY> rs.status()
Identify the node using the _id key and remove the failed node using the name key. In this
case, we remove Node 1, which has 192.168.0.45:27017 for name:
tripleo:PRIMARY> rs.remove('192.168.0.45:27017')
Important
You must run the command against the PRIMARY replica set. If you see the
following message:
"replSetReconfig command must be sent to the current
replica set primary."
Relog into MongoDB on the node designated as PRIMARY.
Note
The following output is normal when removing the failed node's replica set:
2016-05-07T03:57:19.541+0000 DBClientCursor::init call() failed
2016-05-07T03:57:19.543+0000 Error: error doing query: failed at src/mongo/shell/query.js:81
2016-05-07T03:57:19.545+0000 trying reconnect to 192.168.0.47:27017 (192.168.0.47) failed
2016-05-07T03:57:19.547+0000 reconnect 192.168.0.47:27017 (192.168.0.47) ok
Exit MongoDB:
tripleo:PRIMARY> exit
7. Update the list of nodes in the Galera cluster:
[heat-admin@overcloud-controller-0 ~]$ sudo pcs resource update galera wsrep_cluster_address=gcomm://overcloud-controller-0,overcloud-controller-3,overcloud-controller-2
8. Add the new node to the cluster:
[heat-admin@overcloud-controller-0 ~]$ sudo pcs cluster node add
overcloud-controller-3
9. Check the /etc/corosync/corosync.conf file on each node. If the nodeid of the new
node is the same as the removed node, update the value to a new nodeid value. For
example, the /etc/corosync/corosync.conf file contains an entry for the new node
(overcloud-controller-3):
nodelist {
node {
ring0_addr: overcloud-controller-0
nodeid: 1
}
node {
ring0_addr: overcloud-controller-2
nodeid: 3
}
node {
ring0_addr: overcloud-controller-3
nodeid: 2
}
}
Note that in this example, the new node uses the same nodeid as the removed node. Update this value to an unused node ID value. For example:
node {
ring0_addr: overcloud-controller-3
nodeid: 4
}
Update this nodeid value on each Controller node's /etc/corosync/corosync.conf file, including the new node.
10. Restart the Corosync service on the existing nodes only. For example, on overcloud-controller-0:
[heat-admin@overcloud-controller-0 ~]$ sudo pcs cluster reload
corosync
And on overcloud-controller-2:
[heat-admin@overcloud-controller-2 ~]$ sudo pcs cluster reload
corosync
Do not run this command on the new node.
11. Start the new Controller node:
[heat-admin@overcloud-controller-0 ~]$ sudo pcs cluster start
overcloud-controller-3
12. Enable the keystone service on the new node. Copy the /etc/keystone directory from a
remaining node to the director host:
[heat-admin@overcloud-controller-0 ~]$ sudo -i
[root@overcloud-controller-0 ~]$ scp -r /etc/keystone
[email protected]:~/.
Log in to the new Controller node. Remove the /etc/keystone directory from the new
Controller node and copy the keystone files from the director host:
[heat-admin@overcloud-controller-3 ~]$ sudo -i
[root@overcloud-controller-3 ~]$ rm -rf /etc/keystone
[root@overcloud-controller-3 ~]$ scp -r
[email protected]:~/keystone /etc/.
[root@overcloud-controller-3 ~]$ chown -R keystone: /etc/keystone
[root@overcloud-controller-3 ~]$ chown root
/etc/keystone/logging.conf
/etc/keystone/default_catalog.templates
Edit /etc/keystone/keystone.conf and set the admin_bind_host and public_bind_host parameters to the new Controller node's IP addresses. To find these IP addresses, use the ip addr command and look for the IP address within the following networks:
admin_bind_host - Provisioning network
public_bind_host - Internal API network
Note
These networks might differ if you deployed the Overcloud using a custom
ServiceNetMap parameter.
For example, if the Provisioning network uses the 192.168.0.0/24 subnet and the Internal API uses the 172.17.0.0/24 subnet, use the following commands to find the node's IP addresses on those networks:
[root@overcloud-controller-3 ~]$ ip addr | grep
"192\.168\.0\..*/24"
[root@overcloud-controller-3 ~]$ ip addr | grep
"172\.17\.0\..*/24"
13. Enable and restart some services through Pacemaker. The cluster is currently in
maintenance mode and you will need to temporarily disable it to enable the service. For
example:
[heat-admin@overcloud-controller-3 ~]$ sudo pcs property set
maintenance-mode=false --wait
14. Wait until the Galera service starts on all nodes.
[heat-admin@overcloud-controller-3 ~]$ sudo pcs status | grep
galera -A1
Master/Slave Set: galera-master [galera]
Masters: [ overcloud-controller-0 overcloud-controller-2
overcloud-controller-3 ]
If need be, perform a cleanup on the new node:
[heat-admin@overcloud-controller-3 ~]$ sudo pcs resource cleanup
galera overcloud-controller-3
15. Wait until the Keystone service starts on all nodes.
[heat-admin@overcloud-controller-3 ~]$ sudo pcs status | grep
keystone -A1
Clone Set: openstack-keystone-clone [openstack-keystone]
Started: [ overcloud-controller-0 overcloud-controller-2
overcloud-controller-3 ]
If need be, perform a cleanup on the new node:
[heat-admin@overcloud-controller-3 ~]$ sudo pcs resource cleanup
openstack-keystone-clone overcloud-controller-3
16. Switch the cluster back into maintenance mode:
[heat-admin@overcloud-controller-3 ~]$ sudo pcs property set
maintenance-mode=true --wait
The manual configuration is complete. Re-run the Overcloud deployment command to continue the
stack update:
[stack@director ~]$ openstack overcloud deploy --templates --control-scale 3 [OTHER OPTIONS]
Important
If you passed any extra environment files or options when you created the Overcloud,
pass them again here to avoid making undesired changes to the Overcloud. However,
note that the remove-controller.yaml file is no longer needed.
Note
If any services have failed, use the pcs resource cleanup command to restart them
after resolving them.
Exit to the director
[heat-admin@overcloud-controller-0 ~]$ exit
Identify the UUID for the agents on the new node and the old node. Add the router to the agent on
the new node and remove the router from old node. For example:
[stack@director ~]$ neutron l3-agent-router-add fd6b3d6e-7d8c-4e1a-831a-4ec1c9ebb965 r1
[stack@director ~]$ neutron l3-agent-router-remove b40020af-c6dd-4f7a-b426-eba7bac9dbc2 r1
Perform a final check on the router and make sure all are active:
[stack@director ~]$ neutron l3-agent-list-hosting-router r1
Delete the existing Neutron agents that point to the old Controller node. For example:
[stack@director ~]$ neutron agent-list -F id -F host | grep overcloud-controller-1
| ddae8e46-3e8e-4a1b-a8b3-c87f13c294eb | overcloud-controller-1.localdomain |
[stack@director ~]$ neutron agent-delete ddae8e46-3e8e-4a1b-a8b3-c87f13c294eb
9.4.7. Conclusion
The failed Controller node and its related services are now replaced with a new node.
Important
If you disabled automatic ring building for Object Storage, like in Section 9.6, Replacing
Object Storage Nodes, you need to manually build the Object Storage ring files for the
new node. See Section 9.6, Replacing Object Storage Nodes for more information on
manually building ring files.
Note
Add this file to the end of the environment file list so its parameters supersede previous
environment file parameters.
After redeployment completes, the Overcloud now contains an additional Object Storage node.
However, the node's storage directory has not been created and the ring files for the node's object store are unbuilt. This means you must create the storage directory and build the ring files manually.
Note
Use the following procedure to also build ring files on Controller nodes.
Login to the new node and create the storage directory:
$ sudo mkdir -p /srv/node/d1
$ sudo chown -R swift:swift /srv/node/d1
Note
You can also mount an external storage device at this directory.
Copy the existing ring files to the node. Log into a Controller node as the heat-admin user and
then change to the superuser. For example, given a Controller node with an IP address of
192.168.201.24.
$ ssh [email protected]
$ sudo -i
Copy the /etc/swift/*.builder files from the Controller node to the new Object Storage node's /etc/swift/ directory. If necessary, transfer the files to the director host:
[root@overcloud-controller-0 ~]# scp /etc/swift/*.builder
[email protected]:~/.
Then transfer the files to the new node:
[stack@director ~]$ scp ~/*.builder [email protected]:~/.
Log into the new Object Storage node as the heat-admin user and then change to the superuser. For example, given an Object Storage node with an IP address of 192.168.201.29.
$ ssh [email protected]
$ sudo -i
Copy the files to the /etc/swift directory:
# cp /home/heat-admin/*.builder /etc/swift/.
Add the new Object Storage node to the account, container, and object rings. Run the following
commands for the new node:
# swift-ring-builder /etc/swift/account.builder add zX-IP:6002/d1 weight
# swift-ring-builder /etc/swift/container.builder add zX-IP:6001/d1 weight
# swift-ring-builder /etc/swift/object.builder add zX-IP:6000/d1 weight
Replace the following values in these commands:
zX
Replace X with the corresponding integer of a specified zone (for example, z1 for Zone 1).
IP
The IP that the account, container, and object services use to listen. This should match the IP address of each storage node; specifically, the value of bind_ip in the DEFAULT sections of /etc/swift/object-server.conf, /etc/swift/account-server.conf, and /etc/swift/container-server.conf.
weight
Describes relative weight of the device in comparison to other devices. This is usually 100.
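For example, to add the new node from this procedure (IP address 192.168.201.29) to zone 1 with a weight of 100, the commands might look like the following:
# swift-ring-builder /etc/swift/account.builder add z1-192.168.201.29:6002/d1 100
# swift-ring-builder /etc/swift/container.builder add z1-192.168.201.29:6001/d1 100
# swift-ring-builder /etc/swift/object.builder add z1-192.168.201.29:6000/d1 100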
Note
Check the existing values of the current nodes in the ring file by running swift-ring-builder on the ring files alone:
# swift-ring-builder /etc/swift/account.builder
Remove the node you aim to replace from the account, container, and object rings. Run the
following commands for each node:
# swift-ring-builder /etc/swift/account.builder remove IP
# swift-ring-builder /etc/swift/container.builder remove IP
# swift-ring-builder /etc/swift/object.builder remove IP
Replace IP with the IP address of the node.
Redistribute the partitions across all the nodes:
# swift-ring-builder /etc/swift/account.builder rebalance
# swift-ring-builder /etc/swift/container.builder rebalance
# swift-ring-builder /etc/swift/object.builder rebalance
Change the ownership of all /etc/swift/ contents to the root user and swift group:
# chown -R root:swift /etc/swift
Restart the openstack-swift-proxy service:
# systemctl restart openstack-swift-proxy.service
At this point, the ring files (*.ring.gz and *.builder) should be updated on the new node:
/etc/swift/account.builder
/etc/swift/account.ring.gz
/etc/swift/container.builder
/etc/swift/container.ring.gz
/etc/swift/object.builder
/etc/swift/object.ring.gz
Copy these files to /etc/swift/ on the Controller nodes and the existing Object Storage nodes
(except for the node to remove). If necessary, transfer the files to the director host:
[root@overcloud-objectstorage-2 swift]# scp *.builder
[email protected]:~/
[root@overcloud-objectstorage-2 swift]# scp *.ring.gz
[email protected]:~/
Then copy the files to the /etc/swift/ on each node.
On each node, change the ownership of all /etc/swift/ contents to the root user and swift
group:
# chown -R root:swift /etc/swift
The new node is added and a part of the ring. Before removing the old node from the ring, check that
the new node completes a full data replication pass.
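One way to check the replication status, assuming the swift-recon tool is available, is to query the replication statistics and confirm a recent completed pass on the new node:
$ sudo swift-recon --replication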
To remove the old node from the ring, reduce the ObjectStorageCount to omit the old node. In this case, we reduce from 3 to 2:
parameter_defaults:
  SwiftRingBuild: false
  RingBuild: false
  ObjectStorageCount: 2
Create a new environment file (remove-object-node.yaml) to identify and remove the old
Object Storage node. In this case, we remove overcloud-objectstorage-1:
parameter_defaults:
ObjectStorageRemovalPolicies:
[{'resource_list': ['1']}]
Include both environment files with the deployment command:
$ openstack overcloud deploy --templates -e swift-ring-prevent.yaml -e
remove-object-node.yaml ...
The director deletes the Object Storage node from the Overcloud and updates the rest of the nodes
on the Overcloud to accommodate the node removal.
Normally the introspection process uses the baremetal introspection command, which acts as an umbrella command for ironic's services. However, if running the introspection directly with ironic-inspector, it might fail to discover nodes in the AVAILABLE state, which is meant for deployment and not for discovery. Change the node status to the MANAGEABLE state before discovery:
$ ironic node-set-provision-state [NODE UUID] manage
Then, when discovery completes, change back to AVAILABLE before provisioning:
$ ironic node-set-provision-state [NODE UUID] provide
If the MAC address is not there, the most common cause is a corruption in the ironic-inspector
cache, which is in an SQLite database. To fix it, delete the SQLite file:
$ sudo rm /var/lib/ironic-inspector/inspector.sqlite
And recreate it:
$ sudo ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
$ sudo systemctl restart openstack-ironic-inspector
$ rm /var/lib/ironic-inspector/inspector.sqlite
Resynchronize the ironic-inspector cache:
$ sudo ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
$ sudo systemctl restart openstack-ironic-inspector
The director uses OpenStack Object Storage (swift) to save the hardware data obtained during the
introspection process. If this service is not running, the introspection can fail. Check all services
related to OpenStack Object Storage to ensure the service is running:
$ sudo systemctl list-units openstack-swift*
10.3.1. Orchestration
In most cases, Heat shows the failed Overcloud stack after the Overcloud creation fails:
$ heat stack-list
+-----------------------+------------+---------------+----------------------+
| id                    | stack_name | stack_status  | creation_time        |
+-----------------------+------------+---------------+----------------------+
| 7e88af95-535c-4a55... | overcloud  | CREATE_FAILED | 2015-04-06T17:57:16Z |
+-----------------------+------------+---------------+----------------------+
If the stack list is empty, this indicates an issue with the initial Heat setup. Check your Heat
templates and configuration options, and check for any error messages that presented after running
openstack overcloud deploy.
Step / Description

ControllerLoadBalancerDeployment_Step1
ControllerServicesBaseDeployment_Step2
ControllerRingbuilderDeployment_Step3
ControllerOvercloudServicesDeployment_Step4
ControllerOvercloudServicesDeployment_Step5
ControllerOvercloudServicesDeployment_Step6
127
128
129
resource failure.
The next few sections provide advice to diagnose issues on specific node types.
130
131
innodb_additional_mem_pool_size
The size in bytes of a memory pool the database uses to store data dictionary
information and other internal data structures. The default is usually 8M and an ideal
value is 20M for the Undercloud.
innodb_buffer_pool_size
The size in bytes of the buffer pool, the memory area where the database caches table
and index data. The default is usually 128M and an ideal value is 1000M for the
Undercloud.
innodb_flush_log_at_trx_commit
Controls the balance between strict ACID compliance for commit operations, and higher
performance that is possible when commit-related I/O operations are rearranged and
done in batches. Set to 1.
innodb_lock_wait_timeout
The length of time in seconds a database transaction waits for a row lock before giving
up. Set to 50.
innodb_max_purge_lag
This variable controls how to delay INSERT, UPDATE, and DELETE operations when
purge operations are lagging. Set to 10000.
innodb_thread_concurrency
The limit of concurrent operating system threads. Ideally, provide at least two threads for
each CPU and disk resource. For example, if using a quad-core CPU and a single disk,
use 10 threads.
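As an illustrative sketch only, these settings could be collected in a MariaDB options file on the Undercloud (the exact file, such as /etc/my.cnf.d/server.cnf, is an assumption and depends on your installation), followed by a restart of the database service:
[mysqld]
# Values suggested above for the Undercloud database
innodb_additional_mem_pool_size = 20M
innodb_buffer_pool_size = 1000M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50
innodb_max_purge_lag = 10000
# Quad-core CPU with a single disk: at least two threads per CPU and disk resource
innodb_thread_concurrency = 10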
Ensure that heat has enough workers to perform an Overcloud creation. Usually, this depends
on how many CPUs the Undercloud has. To manually set the number of workers, edit the
/etc/heat/heat.conf file, set the num_engine_workers parameter to the number of
workers you need (ideally 4), and restart the heat engine:
$ sudo systemctl restart openstack-heat-engine
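As a sketch, the resulting setting in the [DEFAULT] section of /etc/heat/heat.conf would look like this before restarting the heat engine (4 workers, as suggested above):
[DEFAULT]
num_engine_workers = 4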
Information | Undercloud or Overcloud | Log Location
General director services | Undercloud | /var/log/nova/*, /var/log/heat/*, /var/log/ironic/*
Introspection | Undercloud | /var/log/ironic/*, /var/log/ironic-inspector/*
Provisioning | Undercloud | /var/log/ironic/*
Cloud-Init Log | Overcloud | /var/log/cloud-init.log
Overcloud Configuration (Summary of Last Puppet Run) | Overcloud | /var/lib/puppet/state/last_run_summary.yaml
Overcloud Configuration (Report of Last Puppet Run) | Overcloud | /var/lib/puppet/state/last_run_report.yaml
Overcloud Configuration (All Puppet Reports) | Overcloud | /var/lib/puppet/reports/overcloud-*/*
General Overcloud services | Overcloud | /var/log/ceilometer/*, /var/log/ceph/*, /var/log/cinder/*, /var/log/glance/*, /var/log/heat/*, /var/log/horizon/*, /var/log/httpd/*, /var/log/keystone/*, /var/log/libvirt/*, /var/log/neutron/*, /var/log/nova/*, /var/log/openvswitch/*, /var/log/rabbitmq/*, /var/log/redis/*, /var/log/swift/*
High availability (Pacemaker) log | Overcloud | /var/log/pacemaker.log
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
[req_distinguished_name]
countryName = Country Name (2 letter code)
countryName_default = AU
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = Queensland
localityName = Locality Name (eg, city)
localityName_default = Brisbane
organizationalUnitName = Organizational Unit Name (eg, section)
organizationalUnitName_default = Red Hat
commonName = Common Name
commonName_default = 192.168.0.1
commonName_max = 64
[ v3_req ]
# Extensions to add to a certificate request
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = 192.168.0.1
DNS.1 = 192.168.0.1
DNS.2 = instack.localdomain
DNS.3 = vip.localdomain
Set the commonName_default to the IP address, or fully qualified domain name if using one, of the
Public API:
For the Undercloud, use the undercloud_public_vip parameter in undercloud.conf. If
using a fully qualified domain name for this IP address, use the domain name instead.
For the Overcloud, use the IP address for the Public API, which is the first address for the
ExternalAllocationPools parameter in your network isolation environment file. If using a
fully qualified domain name for this IP address, use the domain name instead.
Include the same Public API IP address as an IP entry and a DNS entry in the alt_names section. If
also using DNS, include the hostname for the server as DNS entries in the same section. For more
information about openssl.cnf, run man openssl.cnf.
Run the following command to generate a certificate signing request (server.csr.pem):
$ openssl req -config openssl.cnf -key server.key.pem -new -out
server.csr.pem
Make sure to include the SSL/TLS key you created in Section A.3, Creating an SSL/TLS Key, for
the -key option.
Important
The openssl req command asks for several details for the certificate, including the
Common Name. Make sure the Common Name is set to the IP address of the Public API
for the Undercloud or Overcloud (depending on which certificate set you are creating).
The openssl.cnf file should use this IP address as a default value.
Use the server.csr.pem file to create the SSL/TLS certificate in the next section.
$ sudo mkdir /etc/pki/instack-certs
$ sudo cp ~/undercloud.pem /etc/pki/instack-certs/.
$ sudo semanage fcontext -a -t etc_t "/etc/pki/instack-certs(/.*)?"
$ sudo restorecon -R /etc/pki/instack-certs
In addition, make sure to add your certificate authority from Section A.1, Creating a Certificate
Authority, to the Undercloud's list of trusted Certificate Authorities so that different services within the
Undercloud have access to the certificate authority:
$ sudo cp ca.crt.pem /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust extract
Continue installing the Undercloud as per the instructions in Section 4.6, Configuring the Director.
B.3. IBOOT
iBoot from Dataprobe is a power unit that provides remote power management for systems.
pm_type
Set this option to pxe_iboot.
pm_user; pm_password
The iBoot username and password.
pm_addr
The IP address of the iBoot interface.
pm_relay_id (Optional)
The iBoot relay ID for the host. The default is 1.
pm_port (Optional)
The iBoot port. The default is 9100.
Edit the /etc/ironic/ironic.conf file and add pxe_iboot to the
enabled_drivers option to enable this driver.
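As an illustration only, a node registered with this driver might look like the following in an instackenv.json file; all values below are placeholders:
{
    "nodes": [
        {
            "pm_type": "pxe_iboot",
            "pm_addr": "192.0.2.205",
            "pm_user": "admin",
            "pm_password": "p@55w0rd!",
            "pm_relay_id": "1",
            "pm_port": "9100",
            "mac": ["aa:bb:cc:dd:ee:ff"]
        }
    ]
}
The file would then be imported in the same way as in the node registration sections of this guide.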
pm_type
Set this option to pxe_irmc.
pm_user; pm_password
The username and password for the iRMC interface.
pm_addr
The IP address of the iRMC interface.
pm_port (Optional)
The port to use for iRMC operations. The default is 443.
pm_auth_method (Optional)
The authentication method for iRMC operations. Use either basic or digest. The default
is basic.
pm_client_timeout (Optional)
Timeout (in seconds) for iRMC operations. The default is 60 seconds.
pm_sensor_method (Optional)
Sensor data retrieval method. Use either ipmitool or scci. The default is ipmitool.
Edit the /etc/ironic/ironic.conf file and add pxe_irmc to the
enabled_drivers option to enable this driver.
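As a sketch only, the resulting line in /etc/ironic/ironic.conf might look like the following; the exact set of drivers already enabled on your Undercloud will differ:
# /etc/ironic/ironic.conf (excerpt)
# Append pxe_irmc (or pxe_iboot, fake_pxe, and so on) to the existing list
enabled_drivers = pxe_ipmitool,pxe_ssh,pxe_drac,pxe_irmc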
The director also requires an additional set of utilities if you enabled SCCI as the sensor
method. Install the python-scciclient package and restart the openstack-ironic-conductor service:
$ sudo yum install python-scciclient
$ sudo systemctl restart openstack-ironic-conductor.service
Important
This option is available for testing and evaluation purposes only. It is not recommended for
Red Hat OpenStack Platform enterprise environments.
pm_type
Set this option to pxe_ssh.
pm_user; pm_password
The SSH username and contents of the SSH private key. The private key must be on one
line with new lines replaced with escape characters (\n). For example:
-----BEGIN RSA PRIVATE KEY-----\nMIIEogIBAAKCAQEA .... kk+WXt9Y=\n-----END RSA PRIVATE KEY-----
Add the SSH public key to the libvirt server's authorized_keys collection.
pm_addr
The IP address of the virsh host.
The server hosting libvirt requires an SSH key pair with the private key set as the
pm_password attribute.
Ensure the chosen pm_user has full access to the libvirt environment.
pm_type
Set this option to fake_pxe.
This driver does not use any authentication details because it does not control power
management.
Edit the /etc/ironic/ironic.conf file and add fake_pxe to the
enabled_drivers option to enable this driver. Restart the baremetal services after
editing the file:
$ sudo systemctl restart openstack-ironic-api openstack-ironic-conductor
When performing introspection on nodes, manually power on the nodes after running the
openstack baremetal introspection bulk start command.
When performing Overcloud deployment, check the node status with the ironic
node-list command. Wait until the node status changes from deploying to deploy
wait-callback and then manually power on the nodes.
After the Overcloud provisioning process completes, reboot the nodes. To check the
completion of provisioning, check the node status with the ironic node-list
command, wait until the node status changes to active, then manually reboot all
Overcloud nodes.
Conditions
A condition defines an evaluation using the following key-value pattern:
field
Defines the field to evaluate. For field types, see Section C.4, Automatic Profile Tagging Properties.
op
Defines the operation to use for the evaluation. This includes the following:
eq - Equal to
ne - Not equal to
lt - Less than
gt - Greater than
le - Less than or equal to
ge - Greater than or equal to
in-net - Checks that an IP address is in a given network
matches - Requires a full match against a given regular expression
contains - Requires a value to contain a given regular expression.
is-empty - Checks that the field is empty.
invert
Boolean value to define whether to invert the result of the evaluation.
multiple
Defines the evaluation to use if multiple results exist. This includes:
any - Requires any result to match
all - Requires all results to match
first - Requires the first result to match
value
Defines the value in the evaluation. If the field and operation result in the value, the condition
returns a true result. If not, the condition returns false.
Example:
"conditions": [
{
"field": "local_gb",
"op": "ge",
"value": 1024
}
],
Actions
An action is performed if the condition returns as true. It uses the action key and additional keys
depending on the value of action:
fail - Fails the introspection. Requires a message parameter for the failure message.
set-attribute - Sets an attribute on an Ironic node. Requires a path field, which is the path
to an Ironic attribute (e.g. /driver_info/ipmi_address), and a value to set.
set-capability - Sets a capability on an Ironic node. Requires name and value fields, which
are the name and the value for a new capability accordingly. The existing value for this same
capability is replaced. For example, use this to define node profiles.
extend-attribute - The same as set-attribute but treats the existing value as a list and
appends value to it. If the optional unique parameter is set to True, nothing is added if the given
value is already in a list.
Example:
"actions": [
{
"action": "set-capability",
"name": "profile",
"value": "swift-storage"
}
]
"name": "compute_profile",
"value": "1"
},
{
"action": "set-capability",
"name": "control_profile",
"value": "1"
},
{
"action": "set-capability",
"name": "profile",
"value": null
}
]
}
]
This example consists of three rules:
Fail introspection if memory is lower than 4096 MiB. Such rules can be applied to exclude nodes
that should not become part of your cloud.
Nodes with hard drive size 1 TiB and bigger are assigned the swift-storage profile
unconditionally.
Nodes with a hard drive smaller than 1 TiB but larger than 40 GiB can be either Compute or Controller
nodes. We assign two capabilities (compute_profile and control_profile) so that the
openstack overcloud profiles match command can later make the final choice. For that
to work, we remove the existing profile capability; otherwise it would take priority.
Other nodes are not changed.
Note
Using introspection rules to assign the profile capability always overrides the existing
value. However, [PROFILE]_profile capabilities are ignored for nodes with an existing
profile capability.
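Once a rule file is ready, it can be imported into the introspection service. As a hedged example, assuming the rules are saved in a file named rules.json:
$ openstack baremetal introspection rule import rules.json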
Property | Description
memory_mb | The amount of memory for the node in MB
cpus | The total number of CPU cores for the node
cpu_arch | The CPU architecture of the node
local_gb | The total storage of the node's root disk in GB
CeilometerBackend
Type: string
The OpenStack Telemetry backend type. Select either mongodb or mysql.
CeilometerComputeAgent
Type: string
Indicates whether the Compute agent is present and expects nova-compute to be
configured accordingly.
CeilometerMeterDispatcher
Type: string
The OpenStack Telemetry (ceilometer) service includes a new component for a time
series data storage (gnocchi). It is possible in Red Hat OpenStack Platform to switch the
default Ceilometer dispatcher to use this new component instead of the standard database.
You accomplish this with the CeilometerMeterDispatcher, which you set to either:
database - Use the standard database for the Ceilometer dispatcher. This is the
default option.
gnocchi - Use the new time series database for Ceilometer dispatcher.
CeilometerMeteringSecret
Type: string
Secret shared by the OpenStack Telemetry services.
CeilometerPassword
Type: string
The password for the OpenStack Telemetry service account.
CephAdminKey
Type: string
The Ceph admin client key. Can be created with ceph-authtool --gen-print-key.
CephClientKey
Type: string
The Ceph client key. Can be created with ceph-authtool --gen-print-key. Currently
only used for external Ceph deployments to create the OpenStack user keyring.
CephClusterFSID
Type: string
The Ceph cluster FSID. Must be a UUID.
CephExternalMonHost
Type: string
List of externally managed Ceph Monitor host IPs. Only used for external Ceph deployments.
CephMonKey
Type: string
The Ceph Monitors key. Can be created with ceph-authtool --gen-print-key.
CephStorageCount
Type: number
The number of Ceph Storage nodes in your Overcloud.
CephStorageExtraConfig
Type: json
Ceph Storage specific configuration to inject into the cluster.
CephStorageHostnameFormat
Type: string
Format for Ceph Storage node host names.
CephStorageImage
Type: string
The image to use for provisioning Ceph Storage nodes.
CephStorageRemovalPolicies
Type: json
List of resources to be removed from CephStorageResourceGroup when doing an
update that requires removal of specific resources.
CephStorageSchedulerHints
Type: json
Optional scheduler hints to pass to OpenStack Compute.
CinderEnableIscsiBackend
Type: boolean
Whether to enable or not the iSCSI backend for Block Storage.
CinderEnableNfsBackend
Type: boolean
Whether to enable or not the NFS backend for Block Storage.
CinderEnableRbdBackend
Type: boolean
Whether to enable or not the Ceph Storage backend for Block Storage.
CinderISCSIHelper
Type: string
The iSCSI helper to use with Block Storage.
CinderLVMLoopDeviceSize
Type: number
The size of the loopback file used by the Block Storage LVM driver.
CinderNfsMountOptions
Type: string
Mount options for NFS mounts used by Block Storage NFS backend. Effective when
CinderEnableNfsBackend is true.
CinderNfsServers
Type: comma delimited list
NFS servers used by Block Storage NFS backend. Effective when
CinderEnableNfsBackend is true.
CinderPassword
Type: string
The password for the Block Storage service account.
CloudDomain
Type: string
The DNS domain used for the hosts. This should match the dhcp_domain configured in the
Undercloud's networking. Defaults to localdomain.
CloudName
Type: string
The DNS name of this cloud. For example: ci-overcloud.tripleo.org.
ComputeCount
Type: number
The number of Compute nodes in your Overcloud.
ComputeHostnameFormat
Type: string
Format for Compute node host names.
ComputeRemovalPolicies
Type: json
List of resources to be removed from ComputeResourceGroup when doing an update that
requires removal of specific resources.
ControlFixedIPs
Type: json
A list of fixed IP addresses for Controller nodes.
ControlVirtualInterface
Type: string
Interface where virtual IPs are assigned.
ControllerCount
Type: number
The number of Controller nodes in your Overcloud.
ControllerEnableCephStorage
Type: boolean
Whether to deploy Ceph Storage (OSD) on the Controller.
ControllerEnableSwiftStorage
Type: boolean
Whether to enable Object Storage on the Controller.
controllerExtraConfig
Type: json
Controller specific configuration to inject into the cluster.
ControllerHostnameFormat
Type: string
Format for Controller node host names.
controllerImage
Type: string
The image to use for provisioning Controller nodes.
ControllerRemovalPolicies
Type: json
List of resources to be removed from ControllerResourceGroup when doing an update
that requires removal of specific resources.
ControllerSchedulerHints
Type: json
Optional scheduler hints to pass to OpenStack Compute.
CorosyncIPv6
Type: boolean
Enable IPv6 in Corosync.
"manage_fw": true,
"manage_key_file": true,
"key_file": "/etc/fence_xvm.key",
"key_file_password": "abcdef"
}
}
]
}
GlanceBackend
Type: string
The short name of the OpenStack Image backend to use. Should be one of swift, rbd, or
file.
GlanceLogFile
Type: string
The path of the file to use for logging messages from OpenStack Image.
GlanceNotifierStrategy
Type: string
Strategy to use for OpenStack Image notification queue. Defaults to noop.
GlancePassword
Type: string
The password for the OpenStack Image service account, used by the OpenStack Image
services.
GnocchiBackend
Type: string
The short name of the Gnocchi backend to use. Should be one of swift, rbd, or file.
GnocchiIndexerBackend
Type: string
The short name of the Gnocchi indexer backend to use.
GnocchiPassword
Type: string
The password for the Gnocchi service account.
HAProxySyslogAddress
Type: string
Syslog address where HAProxy will send its log.
HeatPassword
Type: string
The password for the OpenStack Orchestration service account.
MysqlMaxConnections
Type: number
Configures MySQL max_connections setting.
NeutronAgentExtensions
Type: comma delimited list
Comma-separated list of extensions enabled for the OpenStack Networking agents.
NeutronAgentMode
Type: string
Agent mode for the neutron-l3-agent on the Controller hosts.
NeutronAllowL3AgentFailover
Type: string
Allow automatic L3 Agent failover.
NeutronBridgeMappings
Type: comma delimited list
The OVS logical-to-physical bridge mappings to use. Defaults to mapping the external
bridge on hosts (br-ex) to a physical name (datacentre), which can be used to create
provider networks (and we use this for the default floating network). If changing this, either
use different post-install network scripts or make sure to keep datacentre as a mapping
network name.
NeutronComputeAgentMode
Type: string
Agent mode for the neutron-l3-agent on the Compute nodes.
NeutronControlPlaneID
Type: string
Neutron ID or name for ctlplane network.
NeutronCorePlugin
Type: string
The core plugin for OpenStack Networking. The value should be the entry point to be loaded
from neutron.core_plugins name space.
NeutronDVR
Type: string
Whether to configure OpenStack Networking Distributed Virtual Routers.
NeutronDhcpAgentsPerNetwork
Type: number
The number of DHCP agents to schedule per network.
NeutronTenantMtu
Type: string
The default MTU for tenant networks. For VXLAN/GRE tunneling, this should be at least 50
bytes smaller than the MTU on the physical network. This value will be used to set the MTU
on the virtual Ethernet device. This value will be used to construct the
NeutronDnsmasqOptions, since that will determine the MTU that is assigned to the VM
host through DHCP.
NeutronTunnelIdRanges
Type: comma delimited list
Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE
tunnel IDs that are available for tenant network allocation.
NeutronTunnelTypes
Type: comma delimited list
The tunnel types for the OpenStack Networking tenant network.
NeutronTypeDrivers
Type: comma delimited list
Comma-separated list of network type driver entry points to be loaded.
NeutronVniRanges
Type: comma delimited list
Comma-separated list of <vni_min>:<vni_max> tuples enumerating ranges of VXLAN
VNI IDs that are available for tenant network allocation.
NovaComputeDriver
Type: string
The OpenStack Compute driver to use for managing instances. Defaults to the
libvirt.LibvirtDriver driver.
NovaComputeExtraConfig
Type: json
Compute node specific configuration to inject into the cluster.
NovaComputeLibvirtType
Type: string
Defines the Libvirt type to use. Defaults to kvm.
NovaComputeLibvirtVifDriver
Type: string
Libvirt VIF driver configuration for the network.
NovaComputeSchedulerHints
Type: json
Optional scheduler hints to pass to OpenStack Compute.
NovaEnableRbdBackend
Type: boolean
Whether to enable or not the Ceph backend for Nova.
NovaIPv6
Type: boolean
Enable IPv6 features in Nova.
NovaImage
Type: string
The image to use for provisioning Compute nodes.
NovaOVSBridge
Type: string
Name of integration bridge used by Open vSwitch.
NovaPassword
Type: string
The password for the OpenStack Compute service account.
NovaSecurityGroupAPI
Type: string
The full class name of the security API class.
NtpServer
Type: comma delimited list
Comma-separated list of NTP servers.
ObjectStorageCount
Type: number
The number of Object Storage nodes in your Overcloud.
ObjectStorageExtraConfig
Type: json
ObjectStorage specific configuration to inject into the cluster.
ObjectStorageHostnameFormat
Type: string
Format for Object Storage node host names.
ObjectStorageRemovalPolicies
Type: json
List of resources to be removed from ObjectStorageResourceGroup when doing an
update that requires removal of specific resources.
ObjectStorageSchedulerHints
Type: json
Optional scheduler hints to pass to OpenStack Compute.
OvercloudBlockStorageFlavor
Type: string
Flavor for Block Storage nodes to request when deploying.
OvercloudCephStorageFlavor
Type: string
Flavor for Ceph Storage nodes to request when deploying.
OvercloudComputeFlavor
Type: string
Flavor for Compute nodes to request when deploying.
OvercloudControlFlavor
Type: string
Flavor for Controller nodes to request when deploying.
OvercloudSwiftStorageFlavor
Type: string
Flavor for Object Storage nodes to request when deploying.
PublicVirtualFixedIPs
Type: json
Control the IP allocation for the PublicVirtualInterface port. For example:
[{'ip_address':'1.2.3.4'}]
PublicVirtualInterface
Type: string
Specifies the interface where the public-facing virtual IP will be assigned. This should be
int_public when a VLAN is being used.
PurgeFirewallRules
Type: boolean
Defines whether to purge firewall rules before setting up new ones.
RabbitClientPort
Type: number
Set RabbitMQ subscriber port.
RabbitClientUseSSL
Type: string
RabbitMQ client subscriber parameter to specify an SSL connection to the RabbitMQ host.
RabbitCookieSalt
Type: string
Salt for the RabbitMQ cookie. Change this to force the randomly generated RabbitMQ
cookie to change.
RabbitFDLimit
Type: string
Configures RabbitMQ file descriptor limit.
RabbitIPv6
Type: boolean
Enable IPv6 in RabbitMQ.
RabbitPassword
Type: string
The password for RabbitMQ.
RabbitUserName
Type: string
The username for RabbitMQ.
RedisPassword
Type: string
The password for Redis.
SaharaPassword
Type: string
The password for the OpenStack Clustering service account.
ServerMetadata
Type: json
Extra properties or metadata passed to OpenStack Compute for the created nodes in the
Overcloud.
ServiceNetMap
Type: json
Mapping of service names to network names. Typically set in the parameter_defaults
of the resource registry.
SnmpdReadonlyUserName
Type: string
The user name for SNMPd with read-only rights running on all Overcloud nodes.
SnmpdReadonlyUserPassword
Type: string
The user password for SNMPd with read-only rights running on all Overcloud nodes.
StorageMgmtVirtualFixedIPs
Type: json
Control the IP allocation for the StorageMgmtVirtualInterface port. For example:
[{'ip_address':'1.2.3.4'}]
StorageVirtualFixedIPs
Type: json
Control the IP allocation for the StorageVirtualInterface port. For example:
[{'ip_address':'1.2.3.4'}]
SwiftHashSuffix
Type: string
A random string to be used as a salt when hashing to determine mappings in the ring.
SwiftMinPartHours
Type: number
The minimum time (in hours) before a partition in a ring can be moved following a
rebalance.
SwiftMountCheck
Type: boolean
Value of mount_check in Object Storage account/container/object-server.conf.
SwiftPartPower
Type: number
Partition power to use when building Object Storage rings.
SwiftPassword
Type: string
The password for the Object Storage service account, used by the Object Storage proxy
services.
SwiftReplicas
Type: number
How many replicas to use in the Object Storage rings.
SwiftStorageImage
Type: string
The image to use for provisioning Object Storage nodes.
TimeZone
Type: string
Sets the time zone of your Overcloud deployment. If you leave the TimeZone parameter
blank, the Overcloud will default to UTC time. Director recognizes the standard timezone
names defined in the timezone database /usr/share/zoneinfo/. For example, if you wanted
to set your time zone to Japan, you would examine the contents of /usr/share/zoneinfo to
locate a suitable entry:
$ ls /usr/share/zoneinfo/
Africa      America     Antarctica  Arctic      Asia         Atlantic    Australia  Brazil     Canada
CET         Chile       CST6CDT     Cuba        EET          Egypt       Eire       EST        EST5EDT
Etc         Europe      GB          GB-Eire     GMT          GMT+0       GMT-0      GMT0       Greenwich
Hongkong    HST         Iceland     Indian      Iran         iso3166.tab Israel     Jamaica    Japan
Kwajalein   Libya       MET         Mexico      MST          MST7MDT     Navajo     NZ         NZ-CHAT
Pacific     Poland      Portugal    posix       posixrules   PRC         PST8PDT    right      ROC
ROK         Singapore   Turkey      UCT         Universal    US          UTC        W-SU       WET
zone.tab    Zulu
The output listed above includes time zone files, and directories containing additional time
zone files. For example, Japan is an individual time zone file in this result, but Africa is a
directory containing additional time zone files:
$ ls /usr/share/zoneinfo/Africa/
Abidjan      Accra        Addis_Ababa  Algiers       Asmara        Asmera       Bamako
Bangui       Banjul       Bissau       Blantyre      Brazzaville   Bujumbura    Cairo
Casablanca   Ceuta        Conakry      Dakar         Dar_es_Salaam Djibouti     Douala
El_Aaiun     Freetown     Gaborone     Harare        Johannesburg  Juba         Kampala
Khartoum     Kigali       Kinshasa     Lagos         Libreville    Lome         Luanda
Lubumbashi   Lusaka       Malabo       Maputo        Maseru        Mbabane      Mogadishu
Monrovia     Nairobi      Ndjamena     Niamey        Nouakchott    Ouagadougou  Porto-Novo
Sao_Tome     Timbuktu     Tripoli      Tunis         Windhoek
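As an illustrative sketch, the chosen time zone could then be passed to the deployment through an environment file (the file name timezone.yaml is arbitrary) included with -e on the openstack overcloud deploy command:
# timezone.yaml (example file name)
parameter_defaults:
  TimeZone: 'Japan'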
UpdateIdentifier
Type: string
Setting to a previously unused value during stack-update triggers a package update on
all nodes.
Option | Default | Description
name | | Name of the interface
use_dhcp | False | Use DHCP to get an IP address
use_dhcpv6 | False | Use DHCP to get a v6 IP address
addresses | | A sequence of IP addresses assigned to the interface
routes | | A sequence of routes assigned to the interface
mtu | 1500 | The maximum transmission unit (MTU) of the connection
primary | False | Defines the interface as the primary interface
defroute | True | Use a default route provided by the DHCP service; only applies when use_dhcp or use_dhcpv6 is enabled
persist_mapping | False | Write the device alias configuration instead of the system name
dhclient_args | None | Arguments to pass to the DHCP client
dns_servers | None | List of DNS servers to use for the interface
Option | Default | Description
vlan_id | | The VLAN ID
device | | The parent device to attach the VLAN, such as a bond or interface
use_dhcp | False | Use DHCP to get an IP address
use_dhcpv6 | False | Use DHCP to get a v6 IP address
addresses | | A sequence of IP addresses assigned to the VLAN
routes | | A sequence of routes assigned to the VLAN
mtu | 1500 | The maximum transmission unit (MTU) of the connection
primary | False | Defines the VLAN as the primary interface
defroute | True | Use a default route provided by the DHCP service; only applies when use_dhcp or use_dhcpv6 is enabled
persist_mapping | False | Write the device alias configuration instead of the system name
dhclient_args | None | Arguments to pass to the DHCP client
dns_servers | None | List of DNS servers to use for the VLAN
Option | Default | Description
name | | Name of the bond
use_dhcp | False | Use DHCP to get an IP address
use_dhcpv6 | False | Use DHCP to get a v6 IP address
addresses | | A sequence of IP addresses assigned to the bond
routes | | A sequence of routes assigned to the bond
mtu | 1500 | The maximum transmission unit (MTU) of the connection
primary | False | Defines the interface as the primary interface
members | | A sequence of interface objects to use in the bond
ovs_options | | A set of options to pass to OVS when creating the bond
ovs_extra | | A set of options to set as the OVS_EXTRA parameter in the bond's network configuration file
defroute | True | Use a default route provided by the DHCP service; only applies when use_dhcp or use_dhcpv6 is enabled
persist_mapping | False | Write the device alias configuration instead of the system name
dhclient_args | None | Arguments to pass to the DHCP client
dns_servers | None | List of DNS servers to use for the bond
Option | Default | Description
name | | Name of the bridge
use_dhcp | False | Use DHCP to get an IP address
use_dhcpv6 | False | Use DHCP to get a v6 IP address
addresses | | A sequence of IP addresses assigned to the bridge
routes | | A sequence of routes assigned to the bridge
mtu | 1500 | The maximum transmission unit (MTU) of the connection
members | | A sequence of interface, VLAN, or bond objects to use in the bridge
ovs_options | | A set of options to pass to OVS when creating the bridge
ovs_extra | | A set of options to set as the OVS_EXTRA parameter in the bridge's network configuration file
defroute | True | Use a default route provided by the DHCP service; only applies when use_dhcp or use_dhcpv6 is enabled
persist_mapping | False | Write the device alias configuration instead of the system name
dhclient_args | None | Arguments to pass to the DHCP client
dns_servers | None | List of DNS servers to use for the bridge
Option | Default | Description
name | | Name of the bond
use_dhcp | False | Use DHCP to get an IP address
use_dhcpv6 | False | Use DHCP to get a v6 IP address
addresses | | A sequence of IP addresses assigned to the bond
routes | | A sequence of routes assigned to the bond
mtu | 1500 | The maximum transmission unit (MTU) of the connection
primary | False | Defines the interface as the primary interface
members | | A sequence of interface objects to use in the bond
bonding_options | | A set of options for the Linux bond, using the kernel bonding module syntax (for example, the bond mode)
defroute | True | Use a default route provided by the DHCP service; only applies when use_dhcp or use_dhcpv6 is enabled
persist_mapping | False | Write the device alias configuration instead of the system name
dhclient_args | None | Arguments to pass to the DHCP client
dns_servers | None | List of DNS servers to use for the bond
Option | Default | Description
name | | Name of the bridge
use_dhcp | False | Use DHCP to get an IP address
use_dhcpv6 | False | Use DHCP to get a v6 IP address
addresses | | A sequence of IP addresses assigned to the bridge
routes | | A sequence of routes assigned to the bridge
mtu | 1500 | The maximum transmission unit (MTU) of the connection
members | | A sequence of interface, VLAN, or bond objects to use in the bridge
defroute | True | Use a default route provided by the DHCP service; only applies when use_dhcp or use_dhcpv6 is enabled
persist_mapping | False | Write the device alias configuration instead of the system name
dhclient_args | None | Arguments to pass to the DHCP client
dns_servers | None | List of DNS servers to use for the bridge
Note
The defroute parameter only applies to routes obtained through DHCP.
To set a static route on an interface with a static IP, specify a route to the subnet. For example, you
can set a route to the 10.1.2.0/24 subnet through the gateway at 172.17.0.1 on the Internal API
network:
- type: vlan
  device: bond1
  vlan_id: {get_param: InternalApiNetworkVlanID}
  addresses:
    - ip_netmask: {get_param: InternalApiIpSubnet}
  routes:
    - ip_netmask: 10.1.2.0/24
      next_hop: 172.17.0.1
parameter_defaults:
  # Set to "br-ex" when using floating IPs on the native VLAN
  NeutronExternalNetworkBridge: "''"
Using only one Floating IP network on the native VLAN of a bridge means you can optionally set the
neutron external bridge. This results in the packets only having to traverse one bridge instead of two,
which might result in slightly lower CPU usage when passing traffic over the Floating IP network.
The next section contains changes to the NIC config to put the External network on the native VLAN.
If the External network is mapped to br-ex, you can use the External network for Floating IPs in
addition to the horizon dashboard, and Public APIs.
Note
When moving the address (and possibly route) statements onto the bridge, remove the
corresponding VLAN interface from the bridge. Make the changes to all applicable roles.
The External network is only on the controllers, so only the controller template requires a
change. The Storage network on the other hand is attached to all roles, so if the Storage
network is on the default VLAN, all roles require modifications.
the configuration of the switch port to support jumbo frames. Most switches support an MTU of at
least 9000, but many are configured for 1500 by default.
The MTU of a VLAN cannot exceed the MTU of the physical interface. Make sure to include the
MTU value on the bond and/or interface.
The Storage, Storage Management, Internal API, and Tenant networking all benefit from jumbo
frames. In testing, Tenant networking throughput was over 300% greater when using jumbo frames
in conjunction with VXLAN tunnels.
Note
It is recommended that the Provisioning interface, External interface, and any floating IP
interfaces be left at the default MTU of 1500. Connectivity problems are likely to occur
otherwise. This is because routers typically cannot forward jumbo frames across Layer 3
boundaries.
- type: ovs_bond
  name: bond1
  mtu: 9000
  ovs_options: {get_param: BondInterfaceOvsOptions}
  members:
    - type: interface
      name: nic3
      mtu: 9000
      primary: true
    - type: interface
      name: nic4
      mtu: 9000

# The external interface should stay at default
- type: vlan
  device: bond1
  vlan_id: {get_param: ExternalNetworkVlanID}
  addresses:
    - ip_netmask: {get_param: ExternalIpSubnet}
  routes:
    - ip_netmask: 0.0.0.0/0
      next_hop: {get_param: ExternalInterfaceDefaultRoute}

# MTU 9000 for Internal API, Storage, and Storage Management
- type: vlan
  device: bond1
  mtu: 9000
  vlan_id: {get_param: InternalApiNetworkVlanID}
  addresses:
    - ip_netmask: {get_param: InternalApiIpSubnet}
Parameter | Description | Example
InternalApiNetCidr | The network and subnet (CIDR) for the Internal API network | 172.17.0.0/24
StorageNetCidr | The network and subnet (CIDR) for the Storage network |
StorageMgmtNetCidr | The network and subnet (CIDR) for the Storage Management network |
TenantNetCidr | The network and subnet (CIDR) for the Tenant network |
ExternalNetCidr | The network and subnet (CIDR) for the External network |
InternalApiAllocationPools | The allocation pool for the Internal API network in a start/end format |
StorageAllocationPools | The allocation pool for the Storage network in a start/end format |
StorageMgmtAllocationPools | The allocation pool for the Storage Management network in a start/end format |
TenantAllocationPools | The allocation pool for the Tenant network in a start/end format |
ExternalAllocationPools | The allocation pool for the External network in a start/end format |
InternalApiNetworkVlanID | The VLAN ID for the Internal API network |
StorageNetworkVlanID | The VLAN ID for the Storage network |
StorageMgmtNetworkVlanID | The VLAN ID for the Storage Management network |
TenantNetworkVlanID | The VLAN ID for the Tenant network |
ExternalNetworkVlanID | The VLAN ID for the External network |
ExternalInterfaceDefaultRoute | The gateway IP address for the External network | 10.1.2.1
ControlPlaneDefaultRoute | The gateway router for the Provisioning network, usually the Undercloud IP | ControlPlaneDefaultRoute: 192.0.2.254
ControlPlaneSubnetCidr | The CIDR netmask length for the Provisioning network | ControlPlaneSubnetCidr: 24
EC2MetadataIp | The IP address of the EC2 metadata server, usually the Undercloud IP | EC2MetadataIp: 192.0.2.1
DnsServers | A list of DNS servers for the Overcloud nodes | DnsServers: ["8.8.8.8","8.8.4.4"]
Parameter | Description | Example
BondInterfaceOvsOptions | The options for bonded interfaces | BondInterfaceOvsOptions: "bond_mode=balance-slb"
NeutronFlatNetworks | Defines the flat networks to configure in the OpenStack Networking plugins | NeutronFlatNetworks: "datacentre"
NeutronExternalNetworkBridge | The bridge to use for external network traffic | NeutronExternalNetworkBridge: "br-ex"
NeutronBridgeMappings | The logical-to-physical bridge mappings to use | NeutronBridgeMappings: "datacentre:br-ex"
NeutronPublicInterface | The interface to bridge onto br-ex for network nodes | NeutronPublicInterface: "eth0"
NeutronNetworkType | The tenant network type for OpenStack Networking | NeutronNetworkType: "vxlan"
NeutronTunnelTypes | The tunnel types for the OpenStack Networking tenant network | NeutronTunnelTypes: gre,vxlan
NeutronTunnelIdRanges | Ranges of GRE tunnel IDs available for tenant network allocation | NeutronTunnelIdRanges: "1:1000"
NeutronVniRanges | Ranges of VXLAN VNI IDs available for tenant network allocation | NeutronVniRanges: "1:1000"
NeutronEnableTunnelling | Defines whether to enable tunneling for the tenant networks |
NeutronNetworkVLANRanges | The VLAN ranges to support for the mapped physical networks | NeutronNetworkVLANRanges: "datacentre:1:1000"
NeutronMechanismDrivers | The mechanism drivers for the OpenStack Networking tenant network | NeutronMechanismDrivers: openvswitch,l2population
bond_mode=balance-slb
Balances flows based on source MAC address and output VLAN, with periodic rebalancing as traffic patterns change.
bond_mode=active-backup
Active/standby failover; the standby interface takes over when the active interface fails.
lacp=[active|passive|off]
Controls the Link Aggregation Control Protocol (LACP) behavior.
other-config:lacp-fallback-ab=true
Falls back to active-backup bonding if LACP negotiation fails.
other_config:lacp-time=[fast|slow]
Sets the LACP heartbeat interval: fast (1 second) or slow (30 seconds).
other_config:bond-detect-mode=[miimon|carrier]
Sets the link detection method to miimon heartbeats or carrier monitoring.
other_config:bond-miimon-interval=100
The miimon heartbeat interval in milliseconds, when using miimon.
other_config:bond_updelay=1000
The number of milliseconds a link must be up before it is activated, to prevent flapping.
other_config:bond-rebalance-interval=10000
The interval in milliseconds between flow rebalances, or zero to disable rebalancing.
Important
If you experience packet drops or performance issues using Linux bonds with Provider
networks, consider disabling Large Receive Offload (LRO) on the standby interfaces.
Avoid adding a Linux bond to an OVS bond, as port-flapping and loss of connectivity can
occur. This is a result of a packet-loop through the standby interface.
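To illustrate how these options are applied, the following is a hedged sketch of an OVS bond in a NIC configuration template; the interface names and the chosen options are placeholders:
- type: ovs_bond
  name: bond1
  ovs_options: "bond_mode=balance-slb lacp=active other_config:lacp-fallback-ab=true"
  members:
    - type: interface
      name: nic2
      primary: true
    - type: interface
      name: nic3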