Red Hat OpenStack Platform 16.0 Director Installation and Usage
OpenStack Team
[email protected]
Legal Notice
Copyright © 2021 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
Install Red Hat OpenStack Platform 16 in an enterprise environment using the Red Hat OpenStack
Platform director. This includes installing the director, planning your environment, and creating an
OpenStack environment with the director.
Table of Contents

CHAPTER 1. INTRODUCTION TO DIRECTOR
1.1. UNDERCLOUD
1.2. UNDERSTANDING THE OVERCLOUD
1.3. UNDERSTANDING HIGH AVAILABILITY IN RED HAT OPENSTACK PLATFORM
1.4. UNDERSTANDING CONTAINERIZATION IN RED HAT OPENSTACK PLATFORM
1.5. WORKING WITH CEPH STORAGE IN RED HAT OPENSTACK PLATFORM

PART I. DIRECTOR INSTALLATION AND CONFIGURATION

CHAPTER 2. PLANNING YOUR UNDERCLOUD
2.1. CONTAINERIZED UNDERCLOUD
2.2. PREPARING YOUR UNDERCLOUD NETWORKING
2.3. DETERMINING ENVIRONMENT SCALE
2.4. UNDERCLOUD DISK SIZING
2.5. VIRTUALIZATION SUPPORT
2.6. CHARACTER ENCODING CONFIGURATION
2.7. CONSIDERATIONS WHEN RUNNING THE UNDERCLOUD WITH A PROXY
2.8. UNDERCLOUD REPOSITORIES

CHAPTER 3. PREPARING FOR DIRECTOR INSTALLATION
3.1. PREPARING THE UNDERCLOUD
3.2. INSTALLING CEPH-ANSIBLE
3.3. PREPARING CONTAINER IMAGES
3.4. CONTAINER IMAGE PREPARATION PARAMETERS
3.5. LAYERING IMAGE PREPARATION ENTRIES
3.6. EXCLUDING CEPH STORAGE CONTAINER IMAGES
3.7. OBTAINING CONTAINER IMAGES FROM PRIVATE REGISTRIES
3.8. MODIFYING IMAGES DURING PREPARATION
3.9. UPDATING EXISTING PACKAGES ON CONTAINER IMAGES
3.10. INSTALLING ADDITIONAL RPM FILES TO CONTAINER IMAGES
3.11. MODIFYING CONTAINER IMAGES WITH A CUSTOM DOCKERFILE
3.12. PREPARING A SATELLITE SERVER FOR CONTAINER IMAGES

CHAPTER 4. INSTALLING DIRECTOR
4.1. CONFIGURING DIRECTOR
4.2. DIRECTOR CONFIGURATION PARAMETERS
4.3. CONFIGURING THE UNDERCLOUD WITH ENVIRONMENT FILES
4.4. COMMON HEAT PARAMETERS FOR UNDERCLOUD CONFIGURATION
4.5. CONFIGURING HIERADATA ON THE UNDERCLOUD
4.6. CONFIGURING THE UNDERCLOUD FOR BARE METAL PROVISIONING OVER IPV6
4.7. INSTALLING DIRECTOR
4.8. OBTAINING IMAGES FOR OVERCLOUD NODES
4.8.1. Single CPU architecture overclouds
4.8.2. Multiple CPU architecture overclouds
4.8.3. Minimal overcloud image
4.9. SETTING A NAMESERVER FOR THE CONTROL PLANE
4.10. UPDATING THE UNDERCLOUD CONFIGURATION
4.11. UNDERCLOUD CONTAINER REGISTRY
4.12. NEXT STEPS

CHAPTER 5. INSTALLING UNDERCLOUD MINIONS
5.1. UNDERCLOUD MINION

PART II. BASIC OVERCLOUD DEPLOYMENT

CHAPTER 6. PLANNING YOUR OVERCLOUD
6.1. NODE ROLES
6.2. OVERCLOUD NETWORKS
6.3. OVERCLOUD STORAGE
6.4. OVERCLOUD SECURITY
6.5. OVERCLOUD HIGH AVAILABILITY
6.6. CONTROLLER NODE REQUIREMENTS
6.7. COMPUTE NODE REQUIREMENTS
6.8. CEPH STORAGE NODE REQUIREMENTS
6.9. OBJECT STORAGE NODE REQUIREMENTS
6.10. OVERCLOUD REPOSITORIES
6.11. PROVISIONING METHODS

CHAPTER 7. CONFIGURING A BASIC OVERCLOUD WITH CLI TOOLS
7.1. REGISTERING NODES FOR THE OVERCLOUD
7.2. VALIDATING THE INTROSPECTION REQUIREMENTS
7.3. INSPECTING THE HARDWARE OF NODES
7.4. TAGGING NODES INTO PROFILES
7.5. SETTING UEFI BOOT MODE
7.6. ENABLING VIRTUAL MEDIA BOOT
7.7. DEFINING THE ROOT DISK FOR MULTI-DISK CLUSTERS
7.8. USING THE OVERCLOUD-MINIMAL IMAGE TO AVOID USING A RED HAT SUBSCRIPTION ENTITLEMENT
7.9. CREATING ARCHITECTURE SPECIFIC ROLES
7.10. ENVIRONMENT FILES
7.11. CREATING AN ENVIRONMENT FILE THAT DEFINES NODE COUNTS AND FLAVORS
7.12. CREATING AN ENVIRONMENT FILE FOR UNDERCLOUD CA TRUST
7.13. DEPLOYMENT COMMAND
7.14. DEPLOYMENT COMMAND OPTIONS
7.15. INCLUDING ENVIRONMENT FILES IN AN OVERCLOUD DEPLOYMENT
7.16. VALIDATING THE DEPLOYMENT REQUIREMENTS
7.17. OVERCLOUD DEPLOYMENT OUTPUT
7.18. ACCESSING THE OVERCLOUD
7.19. VALIDATING THE POST-DEPLOYMENT STATE
7.20. NEXT STEPS

CHAPTER 8. PROVISIONING BARE METAL NODES BEFORE DEPLOYING THE OVERCLOUD
8.1. REGISTERING NODES FOR THE OVERCLOUD
8.2. INSPECTING THE HARDWARE OF NODES
8.3. PROVISIONING BARE METAL NODES
8.4. SCALING UP BARE METAL NODES
8.5. SCALING DOWN BARE METAL NODES

CHAPTER 9. CONFIGURING A BASIC OVERCLOUD WITH PRE-PROVISIONED NODES
9.1. PRE-PROVISIONED NODE REQUIREMENTS
9.2. CREATING A USER ON PRE-PROVISIONED NODES
9.3. REGISTERING THE OPERATING SYSTEM FOR PRE-PROVISIONED NODES
9.4. CONFIGURING SSL/TLS ACCESS TO DIRECTOR
9.5. CONFIGURING NETWORKING FOR THE CONTROL PLANE
9.6. USING A SEPARATE NETWORK FOR PRE-PROVISIONED NODES
9.7. MAPPING PRE-PROVISIONED NODE HOSTNAMES
9.8. CONFIGURING CEPH STORAGE FOR PRE-PROVISIONED NODES
9.9. CREATING THE OVERCLOUD WITH PRE-PROVISIONED NODES
9.10. OVERCLOUD DEPLOYMENT OUTPUT
9.11. ACCESSING THE OVERCLOUD
9.12. SCALING PRE-PROVISIONED NODES
9.13. REMOVING A PRE-PROVISIONED OVERCLOUD
9.14. NEXT STEPS

CHAPTER 10. DEPLOYING MULTIPLE OVERCLOUDS
10.1. DEPLOYING ADDITIONAL OVERCLOUDS
10.2. MANAGING MULTIPLE OVERCLOUDS

PART III. POST DEPLOYMENT OPERATIONS

CHAPTER 11. PERFORMING OVERCLOUD POST-INSTALLATION TASKS
11.1. CHECKING OVERCLOUD DEPLOYMENT STATUS
11.2. CREATING BASIC OVERCLOUD FLAVORS
11.3. CREATING A DEFAULT TENANT NETWORK
11.4. CREATING A DEFAULT FLOATING IP NETWORK
11.5. CREATING A DEFAULT PROVIDER NETWORK
11.6. CREATING ADDITIONAL BRIDGE MAPPINGS
11.7. VALIDATING THE OVERCLOUD
11.8. PROTECTING THE OVERCLOUD FROM REMOVAL

CHAPTER 12. PERFORMING BASIC OVERCLOUD ADMINISTRATION TASKS
12.1. MANAGING CONTAINERIZED SERVICES
12.2. MODIFYING THE OVERCLOUD ENVIRONMENT
12.3. IMPORTING VIRTUAL MACHINES INTO THE OVERCLOUD
12.4. RUNNING THE DYNAMIC INVENTORY SCRIPT
12.5. REMOVING THE OVERCLOUD

CHAPTER 13. CONFIGURING THE OVERCLOUD WITH ANSIBLE
13.1. ANSIBLE-BASED OVERCLOUD CONFIGURATION (CONFIG-DOWNLOAD)
13.2. CONFIG-DOWNLOAD WORKING DIRECTORY
13.3. ENABLING ACCESS TO CONFIG-DOWNLOAD WORKING DIRECTORIES
13.4. CHECKING CONFIG-DOWNLOAD LOG
13.5. SEPARATING THE PROVISIONING AND CONFIGURATION PROCESSES
13.6. RUNNING CONFIG-DOWNLOAD MANUALLY
13.7. PERFORMING GIT OPERATIONS ON THE WORKING DIRECTORY
13.8. CREATING CONFIG-DOWNLOAD FILES MANUALLY
13.9. CONFIG-DOWNLOAD TOP LEVEL FILES

CHAPTER 14. USING THE VALIDATION FRAMEWORK
14.1. ANSIBLE-BASED VALIDATIONS
14.2. LISTING VALIDATIONS
14.3. RUNNING VALIDATIONS
14.4. IN-FLIGHT VALIDATIONS

CHAPTER 15. SCALING OVERCLOUD NODES
15.1. ADDING NODES TO THE OVERCLOUD
15.2. INCREASING NODE COUNTS FOR ROLES
15.3. REMOVING COMPUTE NODES
15.4. REPLACING CEPH STORAGE NODES
15.5. REPLACING OBJECT STORAGE NODES
15.6. BLACKLISTING NODES

CHAPTER 16. REPLACING CONTROLLER NODES
16.1. PREPARING FOR CONTROLLER REPLACEMENT
16.2. REMOVING A CEPH MONITOR DAEMON
16.3. PREPARING THE CLUSTER FOR CONTROLLER NODE REPLACEMENT
16.4. REPLACING A CONTROLLER NODE
16.5. TRIGGERING THE CONTROLLER NODE REPLACEMENT
16.6. CLEANING UP AFTER CONTROLLER NODE REPLACEMENT

CHAPTER 17. REBOOTING NODES
17.1. REBOOTING THE UNDERCLOUD NODE
17.2. REBOOTING CONTROLLER AND COMPOSABLE NODES
17.3. REBOOTING STANDALONE CEPH MON NODES
17.4. REBOOTING A CEPH STORAGE (OSD) CLUSTER
17.5. REBOOTING COMPUTE NODES

PART IV. ADDITIONAL DIRECTOR OPERATIONS AND CONFIGURATION

CHAPTER 18. CONFIGURING CUSTOM SSL/TLS CERTIFICATES
18.1. INITIALIZING THE SIGNING HOST
18.2. CREATING A CERTIFICATE AUTHORITY
18.3. ADDING THE CERTIFICATE AUTHORITY TO CLIENTS
18.4. CREATING AN SSL/TLS KEY
18.5. CREATING AN SSL/TLS CERTIFICATE SIGNING REQUEST
18.6. CREATING THE SSL/TLS CERTIFICATE
18.7. ADDING THE CERTIFICATE TO THE UNDERCLOUD

CHAPTER 19. ADDITIONAL INTROSPECTION OPERATIONS
19.1. PERFORMING INDIVIDUAL NODE INTROSPECTION
19.2. PERFORMING NODE INTROSPECTION AFTER INITIAL INTROSPECTION
19.3. PERFORMING NETWORK INTROSPECTION FOR INTERFACE INFORMATION

CHAPTER 20. AUTOMATICALLY DISCOVERING BARE METAL NODES
20.1. PREREQUISITES
20.2. ENABLING AUTO-DISCOVERY
20.3. TESTING AUTO-DISCOVERY
20.4. USING RULES TO DISCOVER DIFFERENT VENDOR HARDWARE

CHAPTER 21. CONFIGURING AUTOMATIC PROFILE TAGGING

CHAPTER 22. CREATING WHOLE DISK IMAGES
22.1. SECURITY HARDENING MEASURES
22.2. WHOLE DISK IMAGE WORKFLOW
22.3. DOWNLOADING THE BASE CLOUD IMAGE
22.4. DISK IMAGE ENVIRONMENT VARIABLES
22.5. CUSTOMIZING THE DISK LAYOUT
22.6. MODIFYING THE PARTITIONING SCHEMA
22.7. MODIFYING THE IMAGE SIZE
22.8. BUILDING THE WHOLE DISK IMAGE
22.9. UPLOADING THE WHOLE DISK IMAGE

CHAPTER 23. CONFIGURING DIRECT DEPLOY
23.1. CONFIGURING THE DIRECT DEPLOY INTERFACE ON THE UNDERCLOUD

CHAPTER 24. CREATING VIRTUALIZED CONTROL PLANES
24.1. VIRTUALIZED CONTROL PLANE ARCHITECTURE
24.2. BENEFITS AND LIMITATIONS OF VIRTUALIZING YOUR RHOSP OVERCLOUD CONTROL PLANE
24.3. PROVISIONING VIRTUALIZED CONTROLLERS USING THE RED HAT VIRTUALIZATION DRIVER

PART V. TROUBLESHOOTING AND TIPS

CHAPTER 25. TROUBLESHOOTING DIRECTOR ERRORS
25.1. TROUBLESHOOTING NODE REGISTRATION
25.2. TROUBLESHOOTING HARDWARE INTROSPECTION
25.3. TROUBLESHOOTING WORKFLOWS AND EXECUTIONS
25.4. TROUBLESHOOTING OVERCLOUD CREATION AND DEPLOYMENT
25.5. TROUBLESHOOTING NODE PROVISIONING
25.6. TROUBLESHOOTING IP ADDRESS CONFLICTS DURING PROVISIONING
25.7. TROUBLESHOOTING "NO VALID HOST FOUND" ERRORS
25.8. TROUBLESHOOTING OVERCLOUD CONFIGURATION
25.9. TROUBLESHOOTING CONTAINER CONFIGURATION
25.10. TROUBLESHOOTING COMPUTE NODE FAILURES
25.11. CREATING AN SOSREPORT
25.12. LOG LOCATIONS

CHAPTER 26. TIPS FOR UNDERCLOUD AND OVERCLOUD SERVICES
26.1. REVIEW THE DATABASE FLUSH INTERVALS
26.2. TUNING DEPLOYMENT PERFORMANCE
26.3. RUNNING SWIFT-RING-BUILDER IN A CONTAINER
26.4. CHANGING THE SSL/TLS CIPHER RULES FOR HAPROXY

PART VI. APPENDICES

APPENDIX A. POWER MANAGEMENT DRIVERS
A.1. INTELLIGENT PLATFORM MANAGEMENT INTERFACE (IPMI)
A.2. REDFISH
A.3. DELL REMOTE ACCESS CONTROLLER (DRAC)
A.4. INTEGRATED LIGHTS-OUT (ILO)
A.5. FUJITSU INTEGRATED REMOTE MANAGEMENT CONTROLLER (IRMC)
A.6. RED HAT VIRTUALIZATION

APPENDIX B. RED HAT OPENSTACK PLATFORM FOR POWER
B.1. CEPH STORAGE
B.2. COMPOSABLE SERVICES

CHAPTER 1. INTRODUCTION TO DIRECTOR
Director uses two main concepts: an undercloud and an overcloud. First you install the undercloud, and
then use the undercloud as a tool to install and configure the overcloud.
1.1. UNDERCLOUD
The undercloud is the main management node that contains the Red Hat OpenStack Platform director
toolset. It is a single-system OpenStack installation that includes components for provisioning and
managing the OpenStack nodes that form your OpenStack environment (the overcloud). The
components that form the undercloud have multiple functions:
Environment planning
The undercloud includes planning functions that you can use to create and assign certain node roles.
The undercloud includes a default set of nodes: Compute, Controller, and various Storage roles. You
can also design custom roles. Additionally, you can select which OpenStack Platform services to
include on each node role, which provides a method to model new node types or isolate certain
components on their own host.
Bare metal system control
The undercloud uses the out-of-band management interface, usually Intelligent Platform
Management Interface (IPMI), of each node for power management control and a PXE-based
service to discover hardware attributes and install OpenStack on each node. You can use this feature
to provision bare metal systems as OpenStack nodes. For a full list of power management drivers,
see Appendix A, Power management drivers .
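Node power management details are supplied when you register nodes with director (see the chapter on configuring a basic overcloud). The following is a minimal sketch of a node registration template in JSON format; the node name, IPMI address, credentials, and MAC address are all placeholders for your environment:

```shell
# Hypothetical node registration entry; pm_addr, pm_user, pm_password, and
# the port MAC address are placeholders, not working values.
cat > /tmp/nodes-example.json <<'EOF'
{
  "nodes": [
    {
      "name": "node-0",
      "pm_type": "ipmi",
      "pm_addr": "192.168.24.251",
      "pm_user": "admin",
      "pm_password": "changeme",
      "ports": [{"address": "aa:bb:cc:dd:ee:ff"}]
    }
  ]
}
EOF
# Confirm that the template is well-formed JSON before you import it.
python3 -m json.tool /tmp/nodes-example.json > /dev/null && echo "template is valid JSON"
```

Validating the JSON locally catches syntax mistakes before director attempts to contact the power management interface.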
Orchestration
The undercloud contains a set of YAML templates that represent a set of plans for your environment.
The undercloud imports these plans and follows their instructions to create the resulting OpenStack
environment. The plans also include hooks that you can use to incorporate your own customizations
at certain points in the environment creation process.
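Customizations typically enter a plan through Heat environment files, which later chapters cover in detail. The following sketch writes a minimal environment file; the NtpServer and TimeZone entries are illustrative parameter_defaults values, not required settings:

```shell
# Write a minimal, hypothetical Heat environment file. The parameter names
# under parameter_defaults depend on the services that you deploy.
cat > /tmp/custom-env-example.yaml <<'EOF'
parameter_defaults:
  NtpServer: pool.ntp.org
  TimeZone: 'UTC'
EOF
grep -q 'parameter_defaults:' /tmp/custom-env-example.yaml && echo "environment file written"
```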
Undercloud components
The undercloud uses OpenStack components as its base tool set. Each component operates within a
separate container on the undercloud:
OpenStack Identity (keystone) - Provides authentication and authorization for the director
components.
OpenStack Bare Metal (ironic) and OpenStack Compute (nova) - Manages bare metal
nodes.
OpenStack Networking (neutron) and Open vSwitch - Control networking for bare metal
nodes.
OpenStack Image Service (glance) - Stores images that director writes to bare metal
machines.
OpenStack Telemetry Metrics (gnocchi) - Provides a time series database for metrics.
OpenStack Telemetry Event Storage (panko) - Provides event storage for monitoring.
OpenStack Workflow Service (mistral) - Provides a set of workflows for certain director-
specific actions, such as importing and deploying plans.
OpenStack Messaging Service (zaqar) - Provides a messaging service for the OpenStack
Workflow Service.
OpenStack Object Storage (swift) - Provides object storage for various OpenStack Platform components.
Controller
Controller nodes provide administration, networking, and high availability for the OpenStack
environment. A recommended OpenStack environment contains three Controller nodes together in
a high availability cluster.
A default Controller node role supports the following components. Not all of these services are enabled by default, and some require custom or pre-packaged environment files before you can enable them:
MariaDB
Open vSwitch
Compute
Compute nodes provide computing resources for the OpenStack environment. You can add more
Compute nodes to scale out your environment over time. A default Compute node contains the
following components:
KVM/QEMU
Open vSwitch
Storage
Storage nodes provide storage for the OpenStack environment. The following list contains
information about the various types of Storage node in RHOSP:
Ceph Storage nodes - Used to form storage clusters. Each node contains a Ceph Object
Storage Daemon (OSD). Additionally, director installs Ceph Monitor onto the Controller
nodes in situations where you deploy Ceph Storage nodes as part of your environment.
Block storage (cinder) - Used as external block storage for highly available Controller nodes.
This node contains the following components:
Open vSwitch.
Object storage (swift) - These nodes provide an external storage layer for OpenStack Swift.
The Controller nodes access object storage nodes through the Swift proxy. Object storage
nodes contain the following components:
Open vSwitch.
1.3. UNDERSTANDING HIGH AVAILABILITY IN RED HAT OPENSTACK PLATFORM
The OpenStack Platform director uses some key pieces of software to manage components on the Controller node:
Pacemaker - Pacemaker is a cluster resource manager. Pacemaker manages and monitors the
availability of OpenStack components across all nodes in the cluster.
NOTE
In version 13 and later, you can use director to deploy High Availability for Compute Instances (Instance HA). With Instance HA, you can automate the evacuation of instances from a Compute node when that node fails.
1.4. UNDERSTANDING CONTAINERIZATION IN RED HAT OPENSTACK PLATFORM
Red Hat OpenStack Platform 16.0 supports installation on the Red Hat Enterprise Linux 8.1 operating system. Red Hat Enterprise Linux 8.1 no longer includes Docker and instead provides a new set of tools that replace the Docker ecosystem. OpenStack Platform 16.0 uses these new tools for deployment and upgrades.
Podman
Pod Manager (Podman) is a container management tool. It implements almost all Docker CLI commands, excluding commands related to Docker Swarm. Podman manages pods, containers, and container images. One of the major differences between Podman and Docker is that Podman can manage resources without a daemon running in the background.
For more information about Podman, see the Podman website.
Buildah
Buildah specializes in building Open Containers Initiative (OCI) images, which you use in conjunction with Podman. Buildah commands replicate the contents of a Dockerfile. Buildah also provides a lower-level coreutils interface to build container images, so you do not need a Dockerfile to build containers. Buildah can also use other scripting languages to build container images without requiring a daemon.
For more information about Buildah, see the Buildah website.
Skopeo
Skopeo provides operators with a method to inspect remote container images, which helps director
collect data when it pulls images. Additional features include copying container images from one
registry to another and deleting images from registries.
Red Hat supports the following methods for managing container images for your overcloud:
Pulling container images from the Red Hat Container Catalog to the image-serve registry on
the undercloud and then pulling the images from the image-serve registry. When you pull
images to the undercloud first, you avoid multiple overcloud nodes simultaneously pulling
container images over an external connection.
Pulling container images from your Satellite 6 server. You can pull these images directly from
the Satellite because the network traffic is internal.
This guide contains information about configuring your container image registry details and performing
basic container operations.
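Registry details are expressed through the ContainerImagePrepare parameter, which Chapter 3 describes. The following is a hedged sketch of such a parameter file; the namespace and tag values are placeholders for your registry and release:

```shell
# Hypothetical container image preparation parameter file. push_destination: true
# pushes images to the undercloud registry; adjust namespace and tag for your
# environment.
cat > /tmp/containers-prepare-example.yaml <<'EOF'
parameter_defaults:
  ContainerImagePrepare:
  - push_destination: true
    set:
      namespace: registry.redhat.io/rhosp-rhel8
      tag: '16.0'
EOF
grep -q 'ContainerImagePrepare:' /tmp/containers-prepare-example.yaml && echo "parameter file written"
```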
1.5. WORKING WITH CEPH STORAGE IN RED HAT OPENSTACK PLATFORM
In large deployments, there is a practical requirement to virtualize the storage layer with a solution such as Red Hat Ceph Storage so that you can scale the RHOSP storage layer from tens of terabytes to petabytes, or even exabytes of storage. Red Hat Ceph Storage provides this storage virtualization layer with high
availability and high performance while running on commodity hardware. While virtualization might seem
like it comes with a performance penalty, Ceph stripes block device images as objects across the cluster,
meaning that large Ceph Block Device images have better performance than a standalone disk. Ceph
Block devices also support caching, copy-on-write cloning, and copy-on-read cloning for enhanced
performance.
For more information about Red Hat Ceph Storage, see Red Hat Ceph Storage .
NOTE
For multi-architecture clouds, Red Hat supports only pre-installed or external Ceph
implementation. For more information, see Integrating an Overcloud with an Existing Red
Hat Ceph Cluster and Appendix B, Red Hat OpenStack Platform for POWER .
CHAPTER 2. PLANNING YOUR UNDERCLOUD
2.1. CONTAINERIZED UNDERCLOUD
Since both the undercloud and overcloud use containers, both use the same architecture to pull,
configure, and run containers. This architecture is based on the OpenStack Orchestration service (heat)
for provisioning nodes and uses Ansible to configure services and containers. It is useful to have some
familiarity with heat and Ansible to help you troubleshoot issues that you might encounter.
2.2. PREPARING YOUR UNDERCLOUD NETWORKING
The undercloud requires access to two main networks:
The Provisioning or Control Plane network, which is the network that director uses to provision your nodes and access them over SSH when executing Ansible configuration. This network also enables SSH access from the undercloud to overcloud nodes. The undercloud contains DHCP services for introspection and provisioning other nodes on this network, which means that no other DHCP services should exist on this network. Director configures the interface for this network.
The External network, which enables access to OpenStack Platform repositories, container image sources, and other servers such as DNS servers or NTP servers. Use this network for standard access to the undercloud from your workstation. You must manually configure an interface on the undercloud to access the external network.
The undercloud requires a minimum of 2 x 1 Gbps Network Interface Cards: one for the Provisioning or
Control Plane network and one for the External network. However, it is recommended to use a 10
Gbps interface for Provisioning network traffic, especially if you want to provision a large number of
nodes in your overcloud environment.
Note:
Do not use the same Provisioning or Control Plane NIC as the one that you use to access the
director machine from your workstation. The director installation creates a bridge by using the
Provisioning NIC, which drops any remote connections. Use the External NIC for remote
connections to the director system.
The Provisioning network requires an IP range that fits your environment size. Use the following
guidelines to determine the total number of IP addresses to include in this range:
Include at least one temporary IP address for each node that connects to the Provisioning
network during introspection.
Include at least one permanent IP address for each node that connects to the Provisioning
network during deployment.
Include an extra IP address for the virtual IP of the overcloud high availability cluster on the Provisioning network.
Include additional IP addresses within this range for scaling the environment.
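The guidelines above reduce to simple addition. The following sketch sizes the range for a hypothetical 10-node overcloud with headroom for five additional nodes:

```shell
# Worked example of Provisioning network IP range sizing; the node count and
# scaling headroom are hypothetical values.
NODES=10
INTROSPECTION_IPS=$NODES   # one temporary address per node during introspection
DEPLOYMENT_IPS=$NODES      # one permanent address per node during deployment
VIP_IPS=1                  # virtual IP for the overcloud high availability cluster
SCALE_HEADROOM=5           # additional addresses reserved for scaling
TOTAL=$((INTROSPECTION_IPS + DEPLOYMENT_IPS + VIP_IPS + SCALE_HEADROOM))
echo "Plan a Provisioning network range of at least ${TOTAL} IP addresses"
```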
2.3. DETERMINING ENVIRONMENT SCALE
The undercloud has the following minimum CPU and memory requirements:
An 8-thread 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions. This
provides 4 workers for each undercloud service.
A minimum of 24 GB of RAM.
The ceph-ansible playbook consumes 1 GB resident set size (RSS) for every 10 hosts that
the undercloud deploys. If you want to use a new or existing Ceph cluster in your
deployment, you must provision the undercloud RAM accordingly.
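The ceph-ansible overhead can be budgeted with the same rule of thumb; the host count in this sketch is a hypothetical example:

```shell
# ceph-ansible consumes 1 GB resident set size (RSS) for every 10 hosts that
# the undercloud deploys; round up to whole gigabytes.
HOSTS=30
CEPH_ANSIBLE_GB=$(( (HOSTS + 9) / 10 ))
echo "Reserve approximately ${CEPH_ANSIBLE_GB} GB of extra RAM for ceph-ansible"
```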
To use a larger number of workers, increase the vCPUs and memory of your undercloud using the
following recommendations:
Minimum: Use 1.5 GB of memory for each thread. For example, a machine with 48 threads
requires 72 GB of RAM to provide the minimum coverage for 24 heat workers and 12 workers for
other services.
Recommended: Use 3 GB of memory for each thread. For example, a machine with 48 threads
requires 144 GB of RAM to provide the recommended coverage for 24 heat workers and 12
workers for other services.
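The sizing arithmetic above can be sketched as follows for the 48-thread example from the text:

```shell
# Worked example of the undercloud RAM sizing guidance for 48 threads.
threads=48
echo "Minimum RAM (1.5 GB per thread):   $((threads * 3 / 2)) GB"
echo "Recommended RAM (3 GB per thread): $((threads * 3)) GB"
```

This reproduces the 72 GB minimum and 144 GB recommended figures stated above.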
CHAPTER 2. PLANNING YOUR UNDERCLOUD
10 GB to accommodate QCOW2 image conversion and caching during the node provisioning
process
Platform: Kernel-based Virtual Machine (KVM)
Notes: Hosted by Red Hat Enterprise Linux 8, as listed on certified hypervisors.

Platform: VMware ESX and ESXi
Notes: Hosted by versions of ESX and ESXi as listed on the Red Hat Customer Portal Certification Catalogue.
IMPORTANT
Red Hat OpenStack Platform director requires that the latest version of Red Hat
Enterprise Linux 8 is installed as the host operating system. This means your virtualization
platform must also support the underlying Red Hat Enterprise Linux version.
Network Considerations
Note the following network considerations for your virtualized undercloud:
Power Management
The undercloud VM requires access to the overcloud nodes' power management devices. This is the
IP address set for the pm_addr parameter when registering nodes.
Provisioning network
The NIC used for the provisioning (ctlplane) network requires the ability to broadcast and serve
DHCP requests to the NICs of the overcloud’s bare metal nodes. As a recommendation, create a
bridge that connects the VM’s NIC to the same network as the bare metal NICs.
NOTE
A common problem occurs when the hypervisor technology blocks the undercloud from
transmitting traffic from an unknown address. Use the following solutions to prevent this:
If you use Red Hat Enterprise Virtualization, disable anti-mac-spoofing.
If you use VMware ESX or ESXi, allow forged transmits.
You must power off and on the director VM after you apply these settings. Rebooting the
VM is not sufficient.
Use UTF-8 encoding on all nodes. Ensure the LANG environment variable is set to
en_US.UTF-8 on all nodes.
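If a node's locale is not already set, one way to configure it is with localectl; this command is a sketch to run on each node:

```
$ sudo localectl set-locale LANG=en_US.UTF-8
```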
Avoid using non-ASCII characters if you use Red Hat Ansible Tower to automate the creation of
Red Hat OpenStack Platform resources.
http_proxy
The proxy that you want to use for standard HTTP requests.
https_proxy
The proxy that you want to use for HTTPS requests.
no_proxy
A comma-separated list of domains that you want to exclude from proxy communications.
The no_proxy variable primarily uses domain names (www.example.com), domain suffixes
(example.com), and domains with a wildcard (*.example.com). Most Red Hat OpenStack
Platform services interpret IP addresses in no_proxy but certain services, such as container
health checks, do not interpret IP addresses in the no_proxy environment variable due to
limitations with cURL and wget. To use a system-wide proxy with the undercloud, disable
container health checks with the container_healthcheck_disabled parameter in the
undercloud.conf file during installation. For more information, see BZ#1837458 - Container
health checks fail to honor no_proxy CIDR notation.
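For example, a system-wide proxy on the undercloud might be configured in /etc/environment as follows; the proxy host and exclusion list here are placeholder assumptions, not values from this guide:

```
http_proxy=http://proxy.example.com:8080
https_proxy=http://proxy.example.com:8080
no_proxy=127.0.0.1,localhost,192.168.24.1,undercloud.example.com,example.com
```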
Some containers bind and parse the environment variables in /etc/environments incorrectly,
which causes problems when running these services. For more information, see BZ#1916070 -
proxy configuration updates in /etc/environment files are not being picked up in containers
correctly and BZ#1918408 - mistral_executor container fails to properly set no_proxy
environment parameter.
proxy
The URL of the proxy server.
proxy_username
The username that you want to use to connect to the proxy server.
proxy_password
The password that you want to use to connect to the proxy server.
proxy_auth_method
The authentication method used by the proxy server.
The dnf proxy method does not include an option to exclude certain hosts from proxy
communication.
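As a sketch, these dnf parameters belong in the [main] section of /etc/dnf/dnf.conf; all values shown are placeholders:

```
[main]
proxy=http://proxy.example.com:8080
proxy_username=myuser
proxy_password=mypassword
proxy_auth_method=basic
```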
proxy_hostname
Host for the proxy.
proxy_scheme
The scheme for the proxy when writing out the proxy to repo definitions.
proxy_port
The port for the proxy.
proxy_username
The username that you want to use to connect to the proxy server.
proxy_password
The password to use for connecting to the proxy server.
no_proxy
A comma-separated list of hostname suffixes for specific hosts that you want to exclude from proxy
communication.
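A hypothetical /etc/rhsm/rhsm.conf fragment using these parameters might look like the following; the host, port, and credentials are placeholders:

```
proxy_hostname = proxy.example.com
proxy_scheme = http
proxy_port = 8080
proxy_username = myuser
proxy_password = mypassword
no_proxy = .example.com,.internal.example.com
```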
The Red Hat Subscription Manager proxy method has the following limitations:
This method provides proxy support only for Red Hat Subscription Manager.
The values for the Red Hat Subscription Manager proxy configuration override any values set
for the system-wide environment variables.
Transparent proxy
If your network uses a transparent proxy to manage application layer traffic, you do not need to
configure the undercloud itself to interact with the proxy because proxy management occurs
automatically. A transparent proxy can help overcome limitations associated with client-based proxy
configuration in Red Hat OpenStack Platform.
Enable the following repositories for the installation and configuration of the undercloud.
Core repositories
The following table lists core repositories for installing the undercloud.
Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) Extended Update Support (EUS)
Repository: rhel-8-for-x86_64-baseos-eus-rpms
Description: Base operating system repository for x86_64 systems.

Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs) Extended Update Support (EUS)
Repository: rhel-8-for-x86_64-appstream-eus-rpms
Description: Contains Red Hat OpenStack Platform dependencies.

Red Hat Enterprise Linux 8 for x86_64 - High Availability (RPMs) Extended Update Support (EUS)
Repository: rhel-8-for-x86_64-highavailability-eus-rpms
Description: High availability tools for Red Hat Enterprise Linux. Used for Controller node high availability.

Red Hat Ansible Engine 2.8 for RHEL 8 x86_64 (RPMs)
Repository: ansible-2.8-for-rhel-8-x86_64-rpms
Description: Ansible Engine for Red Hat Enterprise Linux. Used to provide the latest version of Ansible.

Red Hat Satellite Tools for RHEL 8 Server RPMs x86_64
Repository: satellite-tools-6.5-for-rhel-8-x86_64-rpms
Description: Tools for managing hosts with Red Hat Satellite 6.

Red Hat OpenStack Platform 16.0 for RHEL 8 (RPMs)
Repository: openstack-16-for-rhel-8-x86_64-rpms
Description: Core Red Hat OpenStack Platform repository, which contains packages for Red Hat OpenStack Platform director.

Red Hat Fast Datapath for RHEL 8 (RPMs)
Repository: fast-datapath-for-rhel-8-x86_64-rpms
Description: Provides Open vSwitch (OVS) packages for OpenStack Platform.
The following table contains a list of repositories for Red Hat OpenStack Platform on IBM POWER
architecture. Use these repositories in place of equivalents in the Core repositories.
Red Hat Enterprise Linux for IBM Power, little endian - BaseOS (RPMs)
Repository: rhel-8-for-ppc64le-baseos-rpms
Description: Base operating system repository for ppc64le systems.

Red Hat Enterprise Linux 8 for IBM Power, little endian - AppStream (RPMs)
Repository: rhel-8-for-ppc64le-appstream-rpms
Description: Contains Red Hat OpenStack Platform dependencies.

Red Hat Enterprise Linux 8 for IBM Power, little endian - High Availability (RPMs)
Repository: rhel-8-for-ppc64le-highavailability-rpms
Description: High availability tools for Red Hat Enterprise Linux. Used for Controller node high availability.

Red Hat Ansible Engine 2.8 for RHEL 8 IBM Power, little endian (RPMs)
Repository: ansible-2.8-for-rhel-8-ppc64le-rpms
Description: Ansible Engine for Red Hat Enterprise Linux. Provides the latest version of Ansible.

Red Hat OpenStack Platform 16.0 for RHEL 8 (RPMs)
Repository: openstack-16-for-rhel-8-ppc64le-rpms
Description: Core Red Hat OpenStack Platform repository for ppc64le systems.
[1] In this instance, thread count refers to the number of CPU cores multiplied by the hyper-threading value
A resolvable hostname.
The command line tools for image preparation and director installation.
Procedure
Director uses system images and heat templates to create the overcloud environment. Red Hat
recommends creating these directories to help you organize your local file system.
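A minimal sketch of creating these directories as the stack user; the images and templates directory names are a common convention assumed here:

```
[stack@director ~]$ mkdir ~/images
[stack@director ~]$ mkdir ~/templates
```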
If either of the previous commands does not report the correct fully-qualified hostname or reports
an error, use hostnamectl to set a hostname:
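For example, assuming the manager.example.com hostname used later in this procedure:

```
[stack@director ~]$ sudo hostnamectl set-hostname manager.example.com
[stack@director ~]$ sudo hostnamectl set-hostname --transient manager.example.com
```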
CHAPTER 3. PREPARING FOR DIRECTOR INSTALLATION
8. Edit the /etc/hosts file and include an entry for the system hostname. The IP address in /etc/hosts
must match the address that you plan to use for your undercloud public API. For example, if the
system is named manager.example.com and uses 10.0.0.1 for its IP address, add the following
line to the /etc/hosts file:
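Using the example values above, the /etc/hosts entry would be:

```
10.0.0.1 manager.example.com manager
```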
9. Register your system either with the Red Hat Content Delivery Network or with a Red Hat
Satellite. For example, run the following command to register the system to the Content
Delivery Network. Enter your Customer Portal user name and password when prompted:
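A sketch of the registration command; it prompts for your Customer Portal credentials:

```
[stack@director ~]$ sudo subscription-manager register
```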
10. Find the entitlement pool ID for Red Hat OpenStack Platform (RHOSP) director:
11. Locate the Pool ID value and attach the Red Hat OpenStack Platform 16.0 entitlement:
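The following commands sketch finding and attaching the entitlement; pool_id is a placeholder for the Pool ID value from the output of the first command:

```
[stack@director ~]$ sudo subscription-manager list --available --all --matches="Red Hat OpenStack"
[stack@director ~]$ sudo subscription-manager attach --pool=pool_id
```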
13. Disable all default repositories, and then enable the required Red Hat Enterprise Linux
repositories:
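The commands below are a sketch using the repository names from the Core repositories table; adjust the list if your environment uses different repositories:

```
[stack@director ~]$ sudo subscription-manager repos --disable=*
[stack@director ~]$ sudo subscription-manager repos \
  --enable=rhel-8-for-x86_64-baseos-eus-rpms \
  --enable=rhel-8-for-x86_64-appstream-eus-rpms \
  --enable=rhel-8-for-x86_64-highavailability-eus-rpms \
  --enable=ansible-2.8-for-rhel-8-x86_64-rpms \
  --enable=satellite-tools-6.5-for-rhel-8-x86_64-rpms \
  --enable=openstack-16-for-rhel-8-x86_64-rpms \
  --enable=fast-datapath-for-rhel-8-x86_64-rpms
```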
14. Perform an update on your system to ensure that you have the latest base system packages:
15. Install the command line tools for director installation and configuration:
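As a sketch, these two steps typically amount to the following commands; python3-tripleoclient is the package that provides the director command line tools, and a reboot after the update is a common precaution assumed here:

```
[stack@director ~]$ sudo dnf update -y
[stack@director ~]$ sudo reboot
[stack@director ~]$ sudo dnf install -y python3-tripleoclient
```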
If you use Red Hat Ceph Storage, or if your deployment uses an external Ceph Storage cluster, install
the ceph-ansible package. For more information about integrating with an existing Ceph Storage
cluster, see Integrating an Overcloud with an Existing Red Hat Ceph Cluster .
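A sketch of installing the package:

```
[stack@director ~]$ sudo dnf install -y ceph-ansible
```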
Procedure
Procedure
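The command below is a sketch of generating the default container image preparation file, using the options that the following list explains:

```
[stack@director ~]$ openstack tripleo container image prepare default \
  --local-push-destination \
  --output-env-file containers-prepare-parameter.yaml
```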
--local-push-destination sets the registry on the undercloud as the location for container
images. This means the director pulls the necessary images from the Red Hat Container
Catalog and pushes them to the registry on the undercloud. The director uses this registry
as the container image source. To pull directly from the Red Hat Container Catalog, omit
this option.
--output-env-file is an environment file name. The contents of this file include the
parameters for preparing your container images. In this case, the name of the file is
containers-prepare-parameter.yaml.
NOTE
parameter_defaults:
ContainerImagePrepare:
- (strategy one)
- (strategy two)
- (strategy three)
...
Each strategy accepts a set of sub-parameters that defines which images to use and what to do with the
images. The following table contains information about the sub-parameters you can use with each
ContainerImagePrepare strategy:
tag
Sets the specific tag for all images from the source. If you use this option without specifying a
tag_from_label value, director pulls all container images that use this tag. However, if you use this
option in combination with a tag_from_label value, director uses the tag as a source image to
identify a specific version tag based on labels. Keep this key set to the default value, which is the
Red Hat OpenStack Platform version number.
IMPORTANT
The Red Hat Container Registry uses a specific version format to tag all Red Hat
OpenStack Platform container images. This version format is {version}-{release}, which
each container image stores as labels in the container metadata. This version format
helps facilitate updates from one {release} to the next. For this reason, you must always
use the tag_from_label: {version}-{release} parameter with the
ContainerImagePrepare heat parameter. Do not use tag on its own to pull
container images. For example, using tag by itself causes problems when performing
updates because director requires a change in tag to update a container image.
IMPORTANT
The container images use multi-stream tags based on Red Hat OpenStack Platform
version. This means there is no longer a latest tag.
ContainerImagePrepare:
- push_destination: true
set:
namespace: registry.redhat.io/...
...
ContainerImageRegistryCredentials:
registry.redhat.io:
my_username: my_password
In the example, replace my_username and my_password with your authentication credentials. Instead
of using your individual user credentials, Red Hat recommends creating a registry service account and
using those credentials to access registry.redhat.io content. For more information, see "Red Hat
Container Registry Authentication".
The ContainerImageRegistryLogin parameter is used to control the registry login on the systems
being deployed. This must be set to true if push_destination is set to false or not used.
ContainerImagePrepare:
- set:
namespace: registry.redhat.io/...
...
ContainerImageRegistryCredentials:
registry.redhat.io:
my_username: my_password
ContainerImageRegistryLogin: true
ContainerImagePrepare:
- tag_from_label: "{version}-{release}"
push_destination: true
excludes:
- nova-api
set:
namespace: registry.redhat.io/rhosp-rhel8
name_prefix: openstack-
name_suffix: ''
tag: 16.0
- push_destination: true
includes:
- nova-api
set:
namespace: registry.redhat.io/rhosp-rhel8
tag: 16.0-44
The includes and excludes parameters use regular expressions to control image filtering for each
entry. Images that match an includes entry take precedence over excludes matches. The
image name must match the includes or excludes regular expression value to be considered a match.
If your overcloud does not require Ceph Storage containers, you can configure director to not pull the
Ceph Storage container images from the Red Hat Container Registry.
Procedure
parameter_defaults:
ContainerImagePrepare:
- push_destination: true
excludes:
- ceph
- prometheus
set:
…
The excludes parameter uses regular expressions to exclude any container images that contain
the ceph or prometheus strings.
parameter_defaults:
ContainerImagePrepare:
- (strategy one)
- (strategy two)
- (strategy three)
ContainerImageRegistryCredentials:
registry.example.com:
username: "p@55w0rd!"
IMPORTANT
Private registries require push_destination set to true for their respective strategy in
the ContainerImagePrepare.
The ContainerImageRegistryCredentials parameter uses a set of keys based on the private registry
URL. Each private registry URL uses its own key and value pair to define the username (key) and
password (value). This provides a method to specify credentials for multiple private registries.
parameter_defaults:
...
ContainerImageRegistryCredentials:
registry.redhat.io:
myuser: 'p@55w0rd!'
registry.internalsite.com:
myuser2: '0th3rp@55w0rd!'
'192.0.2.1:8787':
myuser3: '@n0th3rp@55w0rd!'
IMPORTANT
The ContainerImageRegistryLogin parameter is used to control whether the system needs to log in to
the remote registry to fetch the containers.
parameter_defaults:
...
ContainerImageRegistryLogin: true
IMPORTANT
You must set this value to true if push_destination is not configured for a given strategy.
If push_destination is configured in a ContainerImagePrepare strategy and the
ContainerImageRegistryCredentials parameter is configured, the system logs in to
fetch the containers and pushes them to the remote system.
As part of a continuous integration pipeline where images are modified with the changes being
tested before deployment.
As part of a development workflow where local changes must be deployed for testing and
development.
When changes must be deployed but are not available through an image build pipeline. For
example, adding proprietary add-ons or emergency fixes.
To modify an image during preparation, invoke an Ansible role on each image that you want to modify.
The role takes a source image, makes the requested changes, and tags the result. The prepare
command can push the image to the destination registry and set the heat parameters to refer to the
modified image.
The Ansible role tripleo-modify-image conforms with the required role interface and provides the
behaviour necessary for the modify use cases. Control the modification with the modify-specific keys in
the ContainerImagePrepare parameter:
modify_role specifies the Ansible role to invoke for each image to modify.
modify_append_tag appends a string to the end of the source image tag. This makes it obvious
that the resulting image has been modified. Use this parameter to skip modification if the
push_destination registry already contains the modified image. Change modify_append_tag
whenever you modify the image.
To select a use case that the tripleo-modify-image role handles, set the tasks_from variable to the
required file in that role.
While developing and testing the ContainerImagePrepare entries that modify images, run the image
prepare command without any additional options to confirm that the image is modified as you expect:
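As a sketch, assuming the containers-prepare-parameter.yaml environment file generated earlier in this chapter:

```
[stack@director ~]$ sudo openstack tripleo container image prepare \
  -e ~/containers-prepare-parameter.yaml
```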
ContainerImagePrepare:
- push_destination: true
...
modify_role: tripleo-modify-image
modify_append_tag: "-updated"
modify_vars:
tasks_from: yum_update.yml
compare_host_packages: true
yum_repos_dir_path: /etc/yum.repos.d
...
ContainerImagePrepare:
- push_destination: true
...
includes:
- nova-compute
modify_role: tripleo-modify-image
modify_append_tag: "-hotfix"
modify_vars:
tasks_from: rpm_install.yml
rpms_path: /home/stack/nova-hotfix-pkgs
...
ContainerImagePrepare:
- push_destination: true
...
includes:
- nova-compute
modify_role: tripleo-modify-image
modify_append_tag: "-hotfix"
modify_vars:
tasks_from: modify_image.yml
modify_dir_path: /home/stack/nova-custom
...
The following example shows the /home/stack/nova-custom/Dockerfile file. After you run any USER
root directives, you must switch back to the original image default user:
FROM registry.redhat.io/rhosp-rhel8/openstack-nova-compute:latest
USER "root"
USER "nova"
The examples in this procedure use the hammer command line tool for Red Hat Satellite 6 and an
example organization called ACME. Substitute this organization for your own Satellite 6 organization.
NOTE
Procedure
$ sudo podman search --limit 1000 "registry.redhat.io/rhosp" | grep rhosp-rhel8 | awk '{ print $2 }' | grep -v beta | sed "s/registry.redhat.io\///g" | tail -n+2 > satellite_images
2. Copy the satellite_images file to a system that contains the Satellite 6 hammer tool.
Alternatively, use the instructions in the Hammer CLI Guide to install the hammer tool to the
undercloud.
3. Run the following hammer command to create a new product (OSP16 Containers) in your
Satellite organization:
--upstream-username USERNAME \
--upstream-password PASSWORD \
--name rhceph-4-rhel8
NOTE
Depending on your configuration, hammer might prompt you for your Satellite
server username and password. You can configure hammer to log in
automatically using a configuration file. For more information, see the
"Authentication" section in the Hammer CLI Guide .
8. If your Satellite 6 server uses content views, create a new content view version to incorporate
the images and promote it along environments in your application life cycle. This largely
depends on how you structure your application lifecycle. For example, if you have an
environment called production in your lifecycle and you want the container images to be
available in that environment, create a content view that includes the container images and
promote that content view to the production environment. For more information, see
"Managing Content Views".
This command displays tags for the OpenStack Platform container images within a content view
for a particular environment.
10. Return to the undercloud and generate a default environment file that prepares images using
your Satellite server as a source. Run the following example command to generate the
environment file:
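A sketch of the generation command; as before, the output file name is containers-prepare-parameter.yaml:

```
[stack@director ~]$ openstack tripleo container image prepare default \
  --output-env-file containers-prepare-parameter.yaml
```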
--output-env-file is an environment file name. The contents of this file include the
parameters for preparing your container images for the undercloud. In this case, the name
of the file is containers-prepare-parameter.yaml.
11. Edit the containers-prepare-parameter.yaml file and modify the following parameters:
push_destination - Set this to true or false depending on your chosen container image
management strategy. If you set this parameter to false, the overcloud nodes pull images
directly from the Satellite. If you set this parameter to true, the director pulls the images
from the Satellite to the undercloud registry and the overcloud pulls the images from the
undercloud registry.
namespace - The URL and port of the registry on the Satellite server. The default registry
port on Red Hat Satellite is 5000.
If you do not use content views, the structure is [org]-[product]-. For example: acme-
osp16_containers-.
parameter_defaults:
ContainerImagePrepare:
- push_destination: false
set:
ceph_image: acme-production-myosp16-osp16_containers-rhceph-4
ceph_namespace: satellite.example.com:5000
ceph_tag: latest
name_prefix: acme-production-myosp16-osp16_containers-
name_suffix: ''
namespace: satellite.example.com:5000
neutron_driver: null
tag: 16.0
...
tag_from_label: '{version}-{release}'
container_images_file = /home/stack/containers-prepare-parameter.yaml
Procedure
1. Copy the default template to the stack user's home directory:
[stack@director ~]$ cp \
/usr/share/python-tripleoclient/undercloud.conf.sample \
~/undercloud.conf
2. Edit the undercloud.conf file. This file contains settings to configure your undercloud. If you
omit or comment out a parameter, the undercloud installation uses the default value.
Defaults
The following parameters are defined in the [DEFAULT] section of the undercloud.conf file:
additional_architectures
A list of additional (kernel) architectures that an overcloud supports. Currently the overcloud
supports ppc64le architecture.
NOTE
When you enable support for ppc64le, you must also set ipxe_enabled to False.
certificate_generation_ca
The certmonger nickname of the CA that signs the requested certificate. Use this option only if you
have set the generate_service_certificate parameter. If you select the local CA, certmonger
extracts the local CA certificate to /etc/pki/ca-trust/source/anchors/cm-local-ca.pem and adds the
certificate to the trust chain.
clean_nodes
Defines whether to wipe the hard drive between deployments and after introspection.
cleanup
Cleanup temporary files. Set this to False to leave the temporary files used during deployment in
place after you run the deployment command. This is useful for debugging the generated files or if
errors occur.
container_cli
The CLI tool for container management. Leave this parameter set to podman. Red Hat Enterprise
Linux 8.1 only supports podman.
CHAPTER 4. INSTALLING DIRECTOR
container_healthcheck_disabled
Disables containerized service health checks. Red Hat recommends that you enable health checks
and leave this option set to false.
container_images_file
Heat environment file with container image information. This file can contain the following entries:
container_insecure_registries
A list of insecure registries for podman to use. Use this parameter if you want to pull images from
another source, such as a private container registry. In most cases, podman has the certificates to
pull container images from either the Red Hat Container Catalog or from your Satellite server if the
undercloud is registered to Satellite.
container_registry_mirror
An optional registry mirror that podman uses.
custom_env_files
Additional environment files that you want to add to the undercloud installation.
deployment_user
The user who installs the undercloud. Leave this parameter unset to use the current default user
stack.
discovery_default_driver
Sets the default driver for automatically enrolled nodes. Requires the enable_node_discovery
parameter to be enabled and you must include the driver in the enabled_hardware_types list.
enable_ironic; enable_ironic_inspector; enable_mistral; enable_nova; enable_tempest;
enable_validations; enable_zaqar
Defines the core services that you want to enable for director. Leave these parameters set to true.
enable_node_discovery
Automatically enroll any unknown node that PXE-boots the introspection ramdisk. New nodes use
the fake_pxe driver as a default but you can set discovery_default_driver to override. You can also
use introspection rules to specify driver information for newly enrolled nodes.
enable_novajoin
Defines whether to install the novajoin metadata service in the undercloud.
enable_routed_networks
Defines whether to enable support for routed control plane networks.
enable_swift_encryption
Defines whether to enable Swift encryption at-rest.
enable_telemetry
Defines whether to install OpenStack Telemetry services (gnocchi, aodh, panko) in the undercloud.
Set the enable_telemetry parameter to true if you want to install and configure telemetry services
automatically. The default value is false, which disables telemetry on the undercloud. This parameter
is required if you use other products that consume metrics data, such as Red Hat CloudForms.
enabled_hardware_types
A list of hardware types that you want to enable for the undercloud.
generate_service_certificate
Defines whether to generate an SSL/TLS certificate during the undercloud installation, which is used
for the undercloud_service_certificate parameter. The undercloud installation saves the resulting
certificate to /etc/pki/tls/certs/undercloud-[undercloud_public_vip].pem. The CA defined in the
certificate_generation_ca parameter signs this certificate.
heat_container_image
URL for the heat container image to use. Leave unset.
heat_native
Run host-based undercloud configuration using heat-all. Leave as true.
hieradata_override
Path to hieradata override file that configures Puppet hieradata on the director, providing custom
configuration to services beyond the undercloud.conf parameters. If set, the undercloud installation
copies this file to the /etc/puppet/hieradata directory and sets it as the first file in the hierarchy. For
more information about using this feature, see Configuring hieradata on the undercloud.
inspection_extras
Defines whether to enable extra hardware collection during the inspection process. This parameter
requires the python-hardware or python-hardware-detect packages on the introspection image.
inspection_interface
The bridge that director uses for node introspection. This is a custom bridge that the director
configuration creates. The LOCAL_INTERFACE attaches to this bridge. Leave this as the default
br-ctlplane.
inspection_runbench
Runs a set of benchmarks during node introspection. Set this parameter to true to enable the
benchmarks. This option is necessary if you intend to perform benchmark analysis when inspecting
the hardware of registered nodes.
ipa_otp
Defines the one-time password to register the undercloud node to an IPA server. This is required
when enable_novajoin is enabled.
ipv6_address_mode
IPv6 address configuration mode for the undercloud provisioning network. The following list contains
the possible values for this parameter:
ipxe_enabled
Defines whether to use iPXE or standard PXE. The default is true, which enables iPXE. Set this
parameter to false to use standard PXE.
local_interface
The chosen interface for the director Provisioning NIC. This is also the device that director uses for
DHCP and PXE boot services. Change this value to your chosen device. To see which device is
connected, use the ip addr command. For example, this is the result of an ip addr command:
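The listing below is an illustrative, abbreviated sketch of such output; the device names em0 and em1 and the 192.0.2.10 address are placeholders that the next paragraph refers to:

```
2: em0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.0.2.10/24 brd 192.0.2.255 scope global em0
3: em1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
```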
In this example, the External NIC uses em0 and the Provisioning NIC uses em1, which is currently not
configured. In this case, set the local_interface to em1. The configuration script attaches this
interface to a custom bridge defined with the inspection_interface parameter.
local_ip
The IP address defined for the director Provisioning NIC. This is also the IP address that director
uses for DHCP and PXE boot services. Leave this value as the default 192.168.24.1/24 unless you
use a different subnet for the Provisioning network, for example, if this IP address conflicts with an
existing IP address or subnet in your environment.
local_mtu
The maximum transmission unit (MTU) that you want to use for the local_interface. Do not exceed
1500 for the undercloud.
local_subnet
The local subnet that you want to use for PXE boot and DHCP interfaces. The local_ip address
should reside in this subnet. The default is ctlplane-subnet.
net_config_override
Path to network configuration override template. If you set this parameter, the undercloud uses a
JSON format template to configure the networking with os-net-config and ignores the network
parameters set in undercloud.conf. Use this parameter when you want to configure bonding or add
an option to the interface. See /usr/share/python-tripleoclient/undercloud.conf.sample for an
example.
networks_file
Networks file to override for heat.
output_dir
Directory to output state, processed heat templates, and Ansible deployment files.
overcloud_domain_name
The DNS domain name that you want to use when you deploy the overcloud.
NOTE
When you configure the overcloud, you must set the CloudDomain parameter to a
matching value. Set this parameter in an environment file when you configure your
overcloud.
roles_file
The roles file that you want to use to override the default roles file for undercloud installation. It is
highly recommended to leave this parameter unset so that the director installation uses the default
roles file.
scheduler_max_attempts
The maximum number of times that the scheduler attempts to deploy an instance. This value must
be greater or equal to the number of bare metal nodes that you expect to deploy at once to avoid
potential race conditions when scheduling.
service_principal
The Kerberos principal for the service using the certificate. Use this parameter only if your CA
requires a Kerberos principal, such as in FreeIPA.
subnets
List of routed network subnets for provisioning and introspection. The default value includes only the
ctlplane-subnet subnet. For more information, see Subnets.
templates
Heat templates file to override.
undercloud_admin_host
The IP address or hostname defined for director Admin API endpoints over SSL/TLS. The director
configuration attaches the IP address to the director software bridge as a routed IP address, which
uses the /32 netmask.
undercloud_debug
Sets the log level of undercloud services to DEBUG. Set this value to true to enable DEBUG log
level.
undercloud_enable_selinux
Enable or disable SELinux during the deployment. It is highly recommended to leave this value set to
true unless you are debugging an issue.
undercloud_hostname
Defines the fully qualified host name for the undercloud. If set, the undercloud installation configures
all system host name settings. If left unset, the undercloud uses the current host name, but you must
configure all system host name settings appropriately.
undercloud_log_file
The path to a log file to store the undercloud install and upgrade logs. By default, the log file is
install-undercloud.log in the home directory. For example, /home/stack/install-undercloud.log.
undercloud_nameservers
A list of DNS nameservers to use for the undercloud hostname resolution.
undercloud_ntp_servers
A list of network time protocol servers to help synchronize the undercloud date and time.
undercloud_public_host
The IP address or hostname defined for director Public API endpoints over SSL/TLS. The director
configuration attaches the IP address to the director software bridge as a routed IP address, which
uses the /32 netmask.
undercloud_service_certificate
The location and filename of the certificate for OpenStack SSL/TLS communication. Ideally, you
obtain this certificate from a trusted certificate authority. Otherwise, generate your own self-signed
certificate.
undercloud_timezone
Host timezone for the undercloud. If you do not specify a timezone, director uses the existing
timezone configuration.
undercloud_update_packages
Defines whether to update packages during the undercloud installation.
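Many of these options come together in the [DEFAULT] section of the undercloud.conf file. The following sketch is illustrative only; the hostname, DNS, and NTP values are example assumptions, not defaults:

```ini
[DEFAULT]
# Example values only - substitute your own hostname, DNS, and NTP servers
undercloud_hostname = director.example.com
undercloud_nameservers = 10.10.10.10,10.10.10.11
undercloud_ntp_servers = 0.rhel.pool.ntp.org
undercloud_timezone = UTC
undercloud_debug = false
undercloud_update_packages = true
```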
Subnets
Each provisioning subnet is a named section in the undercloud.conf file. For example, to create a
subnet called ctlplane-subnet, use the following sample in your undercloud.conf file:
CHAPTER 4. INSTALLING DIRECTOR
[ctlplane-subnet]
cidr = 192.168.24.0/24
dhcp_start = 192.168.24.5
dhcp_end = 192.168.24.24
inspection_iprange = 192.168.24.100,192.168.24.120
gateway = 192.168.24.1
masquerade = true
You can specify as many provisioning networks as necessary to suit your environment.
cidr
The network that director uses to manage overcloud instances. This is the Provisioning network,
which the undercloud neutron service manages. Leave this as the default 192.168.24.0/24 unless you
use a different subnet for the Provisioning network.
masquerade
Defines whether to masquerade the network defined in the cidr for external access. This provides
the Provisioning network with a degree of network address translation (NAT) so that the
Provisioning network has external access through director.
NOTE
The director configuration also enables IP forwarding automatically using the relevant
sysctl kernel parameter.
dhcp_start; dhcp_end
The start and end of the DHCP allocation range for overcloud nodes. Ensure that this range contains
enough IP addresses to allocate your nodes.
dhcp_exclude
IP addresses to exclude in the DHCP allocation range.
dns_nameservers
DNS nameservers specific to the subnet. If no nameservers are defined for the subnet, the subnet
uses nameservers defined in the undercloud_nameservers parameter.
gateway
The gateway for the overcloud instances. This is the undercloud host, which forwards traffic to the
External network. Leave this as the default 192.168.24.1 unless you use a different IP address for
director or want to use an external gateway directly.
host_routes
Host routes for the Neutron-managed subnet for the overcloud instances on this network. This also
configures the host routes for the local_subnet on the undercloud.
inspection_iprange
Temporary IP range for nodes on this network to use during the inspection process. This range must
not overlap with the range defined by dhcp_start and dhcp_end but must be in the same IP subnet.
Modify the values of these parameters to suit your configuration. When complete, save the file.
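The overlap rule for inspection_iprange can be checked mechanically before you save the file. The following Python sketch is not part of the director tooling; it simply validates that the DHCP and inspection ranges sit inside the subnet and do not overlap, using the sample values above:

```python
import ipaddress

def validate_subnet(cidr, dhcp_start, dhcp_end, inspection_start, inspection_end):
    """Check that both ranges sit inside the subnet and do not overlap."""
    net = ipaddress.ip_network(cidr)
    d0, d1 = ipaddress.ip_address(dhcp_start), ipaddress.ip_address(dhcp_end)
    i0, i1 = ipaddress.ip_address(inspection_start), ipaddress.ip_address(inspection_end)
    if not all(a in net for a in (d0, d1, i0, i1)):
        raise ValueError("a range endpoint falls outside the subnet")
    # The ranges are disjoint only if one ends before the other begins
    if not (d1 < i0 or i1 < d0):
        raise ValueError("inspection range overlaps the DHCP range")
    return True

# Values from the ctlplane-subnet example above
validate_subnet("192.168.24.0/24",
                "192.168.24.5", "192.168.24.24",
                "192.168.24.100", "192.168.24.120")
```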
Procedure
2. Edit this file and include your heat parameters. For example, to enable debugging for certain OpenStack Platform services, include the following snippet in the custom-undercloud-params.yaml file:
parameter_defaults:
Debug: True
3. Edit your undercloud.conf file and scroll to the custom_env_files parameter. Edit the
parameter to point to your custom-undercloud-params.yaml environment file:
custom_env_files = /home/stack/templates/custom-undercloud-params.yaml
NOTE
The director installation includes this environment file during the next undercloud installation or
upgrade operation.
Parameter Description
Set these parameters in your custom environment file under the parameter_defaults section:
parameter_defaults:
Debug: True
AdminPassword: "myp@ssw0rd!"
AdminEmail: "[email protected]"
Procedure
2. Add the customized hieradata to the file. For example, add the following snippet to modify the
Compute (nova) service parameter force_raw_images from the default value of True to False:
nova::compute::force_raw_images: False
If there is no Puppet implementation for the parameter you want to set, then use the following
method to configure the parameter:
nova::config::nova_config:
DEFAULT/<parameter_name>:
value: <parameter_value>
For example:
nova::config::nova_config:
DEFAULT/network_allocate_retries:
value: 20
ironic/serial_console_state_timeout:
value: 15
3. Set the hieradata_override parameter in the undercloud.conf file to the path of the new
/home/stack/hieradata.yaml file:
hieradata_override = /home/stack/hieradata.yaml
IMPORTANT
This feature is available in this release as a Technology Preview, and therefore is not fully
supported by Red Hat. It should only be used for testing, and should not be deployed in a
production environment. For more information about Technology Preview features, see
Scope of Coverage Details.
If you have IPv6 nodes and infrastructure, you can configure the undercloud and the provisioning
network to use IPv6 instead of IPv4 so that director can provision and deploy Red Hat OpenStack
Platform onto IPv6 nodes. However, there are some considerations:
Stateful DHCPv6 is available only with a limited set of UEFI firmware. For more information, see
Bugzilla #1575026.
Modify the undercloud.conf file to enable IPv6 provisioning in Red Hat OpenStack Platform.
Prerequisites
An IPv6 address on the undercloud. For more information, see Configuring an IPv6 address on
the undercloud in the IPv6 Networking for the Overcloud guide.
Procedure
1. Copy the sample undercloud.conf file, or modify your existing undercloud.conf file.
b. Set enable_routed_networks to true if you do not want the undercloud to create a router
on the provisioning network. In this case, the data center router must provide router
advertisements. Otherwise, set this value to false.
In the [ctlplane-subnet] section, set IPv6 addresses in the following parameters:
cidr
dhcp_start
dhcp_end
gateway
inspection_iprange
f. In the [ctlplane-subnet] section, set an IPv6 nameserver for the subnet in the
dns_nameservers parameter.
ipv6_address_mode = dhcpv6-stateless
enable_routed_networks = false
local_ip = <ipv6-address>
undercloud_admin_host = <ipv6-address>
undercloud_public_host = <ipv6-address>
[ctlplane-subnet]
cidr = <ipv6-address>::<ipv6-mask>
dhcp_start = <ipv6-address>
dhcp_end = <ipv6-address>
dns_nameservers = <ipv6-dns>
gateway = <ipv6-address>
inspection_iprange = <ipv6-address>,<ipv6-address>
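For illustration, the placeholder template above might be filled in as follows. The fd12:3456:789a:1::/64 prefix and every address in it are example assumptions; substitute addressing from your own environment:

```ini
ipv6_address_mode = dhcpv6-stateless
enable_routed_networks = false
local_ip = fd12:3456:789a:1::1/64
undercloud_admin_host = fd12:3456:789a:1::2
undercloud_public_host = fd12:3456:789a:1::3

[ctlplane-subnet]
cidr = fd12:3456:789a:1::/64
dhcp_start = fd12:3456:789a:1::10
dhcp_end = fd12:3456:789a:1::fff
dns_nameservers = fd12:3456:789a:1::5
gateway = fd12:3456:789a:1::1
inspection_iprange = fd12:3456:789a:1::1000,fd12:3456:789a:1::1fff
```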
Procedure
1. Run the following command to install director:
[stack@director ~]$ openstack undercloud install
This command launches the director configuration script. Director installs additional packages and configures its services according to the configuration in the undercloud.conf file. This script takes several minutes to complete.
stackrc - A set of initialization variables to help you access the director command line tools.
2. The script also starts all OpenStack Platform service containers automatically. You can check the enabled containers with the following command:
[stack@director ~]$ sudo podman ps
3. To initialize the stack user to use the command line tools, run the following command:
[stack@director ~]$ source ~/stackrc
The prompt now indicates that OpenStack commands authenticate and execute against the undercloud:
(undercloud) [stack@director ~]$
The director installation is complete. You can now use the director command line tools.
An introspection kernel and ramdisk for bare metal system introspection over PXE boot.
An overcloud kernel, ramdisk, and full image, which form a base overcloud system that is written to the hard disk of the node.
The following procedure shows how to obtain and install these images.
Procedure
1. Source the stackrc file to enable the director command line tools:
[stack@director ~]$ source ~/stackrc
3. Extract the images archives to the images directory in the home directory of the stack user
(/home/stack/images):
overcloud-full
overcloud-full-initrd
overcloud-full-vmlinuz
The script also installs the introspection images on the director PXE server.
This list does not show the introspection PXE images. Director copies these files to
/var/lib/ironic/httpboot.
These are the images and procedures that are necessary to deploy the overcloud to enable support of
additional CPU architectures.
Procedure
1. Source the stackrc file to enable the director command line tools:
[stack@director ~]$ source ~/stackrc
3. Extract the archives to an architecture-specific directory in the images directory in the home directory of the stack user (/home/stack/images):
overcloud-full
overcloud-full-initrd
overcloud-full-vmlinuz
ppc64le-bm-deploy-kernel
ppc64le-bm-deploy-ramdisk
ppc64le-overcloud-full
The script also installs the introspection images on the director PXE server.
This list does not show the introspection PXE images. Director copies these files to /var/lib/ironic/tftpboot.
/var/lib/ironic/tftpboot/ppc64le/:
total 457204
-rwxr-xr-x. 1 root root 19858896 Aug 8 19:34 agent.kernel
-rw-r--r--. 1 root root 448311235 Aug 8 19:34 agent.ramdisk
-rw-r--r--. 1 ironic-inspector ironic-inspector 336 Aug 8 02:06 default
Procedure
1. Source the stackrc file to enable the director command line tools:
[stack@director ~]$ source ~/stackrc
3. Extract the images archives to the images directory in the home directory of the stack user
(/home/stack/images):
overcloud-minimal
overcloud-minimal-initrd
overcloud-minimal-vmlinuz
NOTE
The default overcloud-full.qcow2 image is a flat partition image. However, you can also
import and use whole disk images. For more information, see Chapter 22, Creating whole
disk images.
Procedure
1. Source the stackrc file to enable the director command line tools:
[stack@director ~]$ source ~/stackrc
+-------------------+-----------------------------------------------+
| Field | Value |
+-------------------+-----------------------------------------------+
| ... | |
| dns_nameservers | 8.8.8.8 |
| ... | |
+-------------------+-----------------------------------------------+
IMPORTANT
If you isolate service traffic onto separate networks, the overcloud nodes use the DnsServers parameter in your network environment files.
Procedure
1. Modify the undercloud configuration files. For example, edit the undercloud.conf file and add
the idrac hardware type to the list of enabled hardware types:
enabled_hardware_types = ipmi,redfish,idrac
2. Run the openstack undercloud install command to refresh your undercloud with the new
changes:
The prompt now indicates that OpenStack commands authenticate and execute against the
undercloud:
4. Verify that director has applied the new configuration. For this example, check the list of enabled hardware types:
(undercloud) [stack@director ~]$ openstack baremetal driver list
You can find the container registry logs in the following locations:
/var/log/httpd/image_serve_access.log
/var/log/httpd/image_serve_error.log
The image content is served from /var/lib/image-serve. This location uses a specific directory layout and Apache configuration to implement the pull function of the registry REST API.
The Apache-based registry does not support the podman push or buildah push commands, which means that you cannot push container images using traditional methods. To modify images during deployment, use the container preparation workflow, such as the ContainerImagePrepare parameter. To manage container images, use the container management commands:
NOTE
You must run all container image management commands with sudo level permissions.
Perform basic overcloud configuration, including registering nodes, inspecting them, and then
tagging them into various node roles. For more information, see Chapter 7, Configuring a basic
overcloud with CLI tools.
IMPORTANT
This feature is available in this release as a Technology Preview, and therefore is not fully
supported by Red Hat. It should only be used for testing, and should not be deployed in a
production environment. For more information about Technology Preview features, see
Scope of Coverage Details.
Service             Workers (8-thread minimum)   Workers (48 threads)
heat-engine         4                            24
ironic-conductor    2                            12
An undercloud minion has the following minimum CPU and memory requirements:
An 8-thread 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions. This
processor provides 4 workers for each undercloud service.
A minimum of 16 GB of RAM.
To use a larger number of workers, increase the vCPUs and memory count on the undercloud using a
ratio of 2 GB of RAM for each CPU thread. For example, a machine with 48 threads must have 96 GB of
RAM. This provides coverage for 24 heat-engine workers and 12 ironic-conductor workers.
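The sizing rule above can be expressed as a small calculation. This Python sketch assumes the ratios implied by the text: 2 GB of RAM per CPU thread, one heat-engine worker per 2 threads, and one ironic-conductor worker per 4 threads. It is an illustration, not product tooling:

```python
def minion_sizing(cpu_threads):
    """Return (ram_gb, heat_engine_workers, ironic_conductor_workers)
    using the ratios described above."""
    ram_gb = 2 * cpu_threads          # 2 GB of RAM for each CPU thread
    heat_workers = cpu_threads // 2   # e.g. 48 threads -> 24 heat-engine workers
    ironic_workers = cpu_threads // 4 # e.g. 48 threads -> 12 ironic-conductor workers
    return ram_gb, heat_workers, ironic_workers

print(minion_sizing(48))  # (96, 24, 12), matching the example in the text
print(minion_sizing(8))   # (16, 4, 2), the documented minimum
```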
A resolvable hostname
CHAPTER 5. INSTALLING UNDERCLOUD MINIONS
The command line tools for image preparation and minion installation
Procedure
If either of the previous commands does not report the correct fully-qualified hostname or reports an error, use hostnamectl to set a hostname:
[stack@minion ~]$ sudo hostnamectl set-hostname minion.example.com
[stack@minion ~]$ sudo hostnamectl set-hostname --transient minion.example.com
7. Edit the /etc/hosts file and include an entry for the system hostname. For example, if the system is named minion.example.com and uses the IP address 10.0.0.1, add the following line to the /etc/hosts file:
10.0.0.1  minion.example.com  minion
8. Register your system either with the Red Hat Content Delivery Network or with Red Hat Satellite. For example, run the following command to register the system to the Content Delivery Network. Enter your Customer Portal user name and password when prompted:
[stack@minion ~]$ sudo subscription-manager register
9. Find the entitlement pool ID for Red Hat OpenStack Platform (RHOSP) director:
[stack@minion ~]$ sudo subscription-manager list --available --all --matches="Red Hat OpenStack"
10. Locate the Pool ID value and attach the Red Hat OpenStack Platform 16.0 entitlement:
11. Disable all default repositories, and then enable the required Red Hat Enterprise Linux
repositories:
12. Perform an update on your system to ensure that you have the latest base system packages:
[stack@minion ~]$ sudo dnf update -y
13. Install the command line tools for minion installation and configuration:
[stack@minion ~]$ sudo dnf install -y python3-tripleoclient
tripleo-undercloud-outputs.yaml
tripleo-undercloud-passwords.yaml
Procedure
An external certificate authority whose certificate is preloaded on the minion host. No action is
required.
A custom self-signed certificate authority, which you create with OpenSSL. Examples in this
document refer to this file as ca.crt.pem. Copy this file to the minion host and include the file as
a part of the trusted certificate authorities for the minion host.
Procedure
2. Copy the certificate authority file from the undercloud to the minion:
Procedure
2. Copy the default template to the home directory of the stack user:
[stack@minion ~]$ cp \
/usr/share/python-tripleoclient/minion.conf.sample \
~/minion.conf
3. Edit the minion.conf file. This file contains settings to configure your minion. If you omit or
comment out a parameter, the minion installation uses the default value. Review the following
recommended parameters:
minion_local_interface, which you set to the interface that connects to the undercloud
through the Provisioning Network.
minion_nameservers, which you set to the DNS nameservers so that the minion can
resolve hostnames.
NOTE
The default minion.conf file enables only the heat-engine service on the minion. To
enable the ironic-conductor service, set the enable_ironic_conductor parameter to
true.
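As a sketch, a minimal minion.conf might combine the recommended parameters like this. All values are example assumptions that must match your environment:

```ini
[DEFAULT]
# Example values only - match these to your undercloud Provisioning network
minion_hostname = minion.example.com
minion_local_interface = eth1
minion_local_ip = 192.168.24.50/24
minion_nameservers = 10.10.10.10
enable_heat_engine = true
enable_ironic_conductor = false
```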
Defaults
The following parameters are defined in the [DEFAULT] section of the minion.conf file:
cleanup
Cleans up temporary files. Set this parameter to False to leave the temporary files used during deployment in place after the command is run. This is useful for debugging the generated files or if errors occur.
container_cli
The CLI tool for container management. Leave this parameter set to podman. Red Hat Enterprise
Linux 8.1 only supports podman.
container_healthcheck_disabled
Disables containerized service health checks. Red Hat recommends that you enable health checks
and leave this option set to false.
container_images_file
Heat environment file with container image information. This file can contain the following entries:
container_insecure_registries
A list of insecure registries for podman to use. Use this parameter if you want to pull images from
another source, such as a private container registry. In most cases, podman has the certificates to
pull container images from either the Red Hat Container Catalog or from your Satellite server if the
minion is registered to Satellite.
container_registry_mirror
An optional registry mirror that podman uses.
custom_env_files
Additional environment files that you want to add to the minion installation.
deployment_user
The user who installs the minion. Leave this parameter unset to use the current default user stack.
enable_heat_engine
Defines whether to install the heat engine on the minion. The default is true.
enable_ironic_conductor
Defines whether to install the ironic conductor service on the minion. The default value is false. Set
this value to true to enable the ironic conductor service.
heat_container_image
URL for the heat container image that you want to use. Leave unset.
heat_native
Use native heat templates. Leave as true.
hieradata_override
Path to hieradata override file that configures Puppet hieradata on the director, providing custom
configuration to services beyond the minion.conf parameters. If set, the minion installation copies
this file to the /etc/puppet/hieradata directory and sets it as the first file in the hierarchy.
minion_debug
Set this value to true to enable the DEBUG log level for minion services.
minion_enable_selinux
Enable or disable SELinux during the deployment. It is highly recommended to leave this value set to
true unless you are debugging an issue.
minion_enable_validations
Enable validation services on the minion.
minion_hostname
Defines the fully qualified host name for the minion. If set, the minion installation configures all
system host name settings. If left unset, the minion uses the current host name, but you must
configure all system host name settings appropriately.
minion_local_interface
The chosen interface for the Provisioning NIC on the minion. This is also the device that the
minion uses for DHCP and PXE boot services. Change this value to your chosen device. To see which
device is connected, use the ip addr command. For example, this is the result of an ip addr
command:
In this example, the External NIC uses eth0 and the Provisioning NIC uses eth1, which is currently not
configured. In this case, set the minion_local_interface to eth1. The configuration script attaches this
interface to a custom bridge defined with the inspection_interface parameter.
minion_local_ip
The IP address defined for the Provisioning NIC on the undercloud. This is also the IP address that
the minion uses for DHCP and PXE boot services. Leave this value as the default 192.168.24.1/24
unless you use a different subnet for the Provisioning network, for example, if the default IP address
conflicts with an existing IP address or subnet in your environment.
minion_local_mtu
The maximum transmission unit (MTU) that you want to use for the local_interface. Do not exceed
1500 for the minion.
minion_log_file
The path to a log file where you want to store the minion install and upgrade logs. By default, the log
file is install-minion.log in the home directory. For example, /home/stack/install-minion.log.
minion_nameservers
A list of DNS nameservers to use for the minion hostname resolution.
minion_ntp_servers
A list of network time protocol servers to help synchronize the minion date and time.
minion_password_file
The file that contains the passwords for the minion to connect to undercloud services. Leave this
parameter set to the tripleo-undercloud-passwords.yaml file copied from the undercloud.
minion_service_certificate
The location and filename of the certificate for OpenStack SSL/TLS communication. Ideally, you
obtain this certificate from a trusted certificate authority. Otherwise, generate your own self-signed
certificate.
minion_timezone
Host timezone for the minion. If you do not specify a timezone, the minion uses the existing timezone
configuration.
minion_undercloud_output_file
The file that contains undercloud configuration information that the minion can use to connect to
undercloud services. Leave this parameter set to the tripleo-undercloud-outputs.yaml file copied
from the undercloud.
net_config_override
The path to a network configuration override template. If you set this parameter, the minion uses a
JSON format template to configure the networking with os-net-config and ignores the network
parameters set in minion.conf. See /usr/share/python-tripleoclient/minion.conf.sample for an
example.
networks_file
Networks file to override for heat.
output_dir
Directory to output state, processed heat templates, and Ansible deployment files.
roles_file
The roles file that you want to use to override the default roles file for minion installation. It is highly
recommended to leave this parameter unset so that the minion installation uses the default roles file.
templates
Heat templates file to override.
Procedure
1. Run the following command to install the minion:
[stack@minion ~]$ openstack undercloud minion install
This command launches the configuration script for the minion, installs additional packages, and configures minion services according to the configuration in the minion.conf file. This script takes several minutes to complete.
Procedure
3. If you enabled the heat engine service on the minion, verify that the heat-engine service from the minion appears on the undercloud service list:
(undercloud) [stack@director ~]$ openstack orchestration service list
The command output displays a table with heat-engine workers for both the undercloud and any minions.
4. If you enabled the ironic conductor service on the minion, verify that the ironic-conductor service from the minion appears on the undercloud service list:
(undercloud) [stack@director ~]$ openstack baremetal conductor list
The command output displays a table with ironic-conductor services for both the undercloud and any minions.
PART II. BASIC OVERCLOUD DEPLOYMENT
CHAPTER 6. PLANNING YOUR OVERCLOUD
Controller
Provides key services for controlling your environment. This includes the dashboard (horizon),
authentication (keystone), image storage (glance), networking (neutron), orchestration (heat), and
high availability services. A Red Hat OpenStack Platform (RHOSP) environment requires three
Controller nodes for a highly available production-level environment.
NOTE
Use environments with one Controller node only for testing purposes, not for
production. Environments with two Controller nodes or more than three Controller
nodes are not supported.
Compute
A physical server that acts as a hypervisor and contains the processing capabilities required to run
virtual machines in the environment. A basic RHOSP environment requires at least one Compute
node.
Ceph Storage
A host that provides Red Hat Ceph Storage. Additional Ceph Storage hosts scale into a cluster. This
deployment role is optional.
Swift Storage
A host that provides external object storage to the OpenStack Object Storage (swift) service. This
deployment role is optional.
The following table contains some examples of different overclouds and defines the node types for
each scenario.
Scenario                                          Controller  Compute  Ceph Storage  Swift Storage  Total
Small overcloud                                   3           1        -             -              4
Medium overcloud                                  3           3        -             -              6
Medium overcloud with additional object storage   3           3        -             3              9
Medium overcloud with Ceph Storage cluster        3           3        3             -              9
In addition, consider whether to split individual services into custom roles. For more information about
the composable roles architecture, see "Composable Services and Custom Roles" in the Advanced
Overcloud Customization guide.
By default, director configures nodes to use the Provisioning / Control Plane network for connectivity. However, you can isolate network traffic into a series of composable networks that you can customize and assign services to.
In a typical RHOSP installation, the number of network types often exceeds the number of physical
network links. To connect all the networks to the proper hosts, the overcloud uses VLAN tagging to
deliver more than one network on each interface. Most of the networks are isolated subnets but some
networks require a Layer 3 gateway to provide routing for Internet access or infrastructure network
connectivity. If you use VLANs to isolate your network traffic types, you must use a switch that supports
802.1Q standards to provide tagged VLANs.
NOTE
It is recommended that you deploy a project network (tunneled with GRE or VXLAN)
even if you intend to use a neutron VLAN mode with tunneling disabled at deployment
time. This requires minor customization at deployment time and leaves the option
available to use tunnel networks as utility networks or virtualization networks in the future.
You still create Tenant networks using VLANs, but you can also create VXLAN tunnels for
special-use networks without consuming tenant VLANs. It is possible to add VXLAN
capability to a deployment with a Tenant VLAN, but it is not possible to add a Tenant
VLAN to an existing overcloud without causing disruption.
Director also includes a set of templates that you can use to configure NICs with isolated composable
networks. The following configurations are the default configurations:
Single NIC configuration - One NIC for the Provisioning network on the native VLAN and
tagged VLANs that use subnets for the different overcloud network types.
Bonded NIC configuration - One NIC for the Provisioning network on the native VLAN and two
NICs in a bond for tagged VLANs for the different overcloud network types.
Multiple NIC configuration - Each NIC uses a subnet for a different overcloud network type.
You can also create your own templates to map a specific NIC configuration.
The following details are also important when you consider your network configuration:
During the overcloud creation, you refer to NICs using a single name across all overcloud
machines. Ideally, you should use the same NIC on each overcloud node for each respective
network to avoid confusion. For example, use the primary NIC for the Provisioning network and
the secondary NIC for the OpenStack services.
Set all overcloud systems to PXE boot off the Provisioning NIC, and disable PXE boot on the
External NIC and any other NICs on the system. Also ensure that the Provisioning NIC has PXE
boot at the top of the boot order, ahead of hard disks and CD/DVD drives.
All overcloud bare metal systems require a supported power management interface, such as an
Intelligent Platform Management Interface (IPMI), so that director can control the power
management of each node.
Make a note of the following details for each overcloud system: the MAC address of the
Provisioning NIC, the IP address of the IPMI NIC, IPMI username, and IPMI password. This
information is useful later when you configure the overcloud nodes.
If an instance must be accessible from the external internet, you can allocate a floating IP
address from a public network and associate the floating IP with an instance. The instance
retains its private IP but network traffic uses NAT to traverse through to the floating IP address.
Note that you can assign a floating IP address to only a single instance at a time, not to multiple private IP addresses. However, the floating IP address is reserved for use only by a single tenant, which means that the tenant can associate or disassociate the floating IP address with a particular instance as required. This configuration exposes your infrastructure to the external internet and you must follow suitable security practices.
To mitigate the risk of network loops in Open vSwitch, only a single interface or a single bond
can be a member of a given bridge. If you require multiple bonds or interfaces, you can configure
multiple bridges.
Red Hat recommends using DNS hostname resolution so that your overcloud nodes can
connect to external services, such as the Red Hat Content Delivery Network and network time
servers.
NOTE
You can virtualize the overcloud control plane if you are using Red Hat Virtualization
(RHV). For more information, see Creating virtualized control planes .
NOTE
Using LVM on a guest instance that uses a back end cinder-volume of any driver or back-
end type results in issues with performance, volume visibility and availability, and data
corruption. Use an LVM filter to mitigate these issues. For more information, see section
2.1 Back Ends in the Storage Guide and KCS article 3213311, "Using LVM on a cinder
volume exposes the data to the compute host."
Images - The Image service (glance) manages images for virtual machines. Images are
immutable. OpenStack treats images as binary blobs and downloads them accordingly. You
can use the Image service (glance) to store images in a Ceph Block Device.
Volumes - OpenStack manages volumes with the Block Storage service (cinder). The Block
Storage service (cinder) volumes are block devices. OpenStack uses volumes to boot virtual
machines, or to attach volumes to running virtual machines. You can use the Block Storage service to boot a virtual machine using a copy-on-write clone of an image.
File Systems - OpenStack manages shared file systems with the Shared File Systems
service (manila). Shares are backed by file systems. You can use manila to manage shares
backed by a CephFS file system with data on the Ceph Storage nodes.
Guest Disks - Guest disks are guest operating system disks. By default, when you boot a
virtual machine with the Compute service (nova), the virtual machine disk appears as a file
on the filesystem of the hypervisor (usually under /var/lib/nova/instances/<uuid>/). Every
virtual machine inside Ceph can be booted without using the Block Storage service (cinder).
As a result, you can perform maintenance operations easily with the live-migration process.
Additionally, if your hypervisor fails, it is also convenient to trigger nova evacuate and run
the virtual machine elsewhere.
IMPORTANT
For information about supported image formats, see the Image Service
chapter in the Instances and Images Guide .
For more information about Ceph Storage, see the Red Hat Ceph Storage Architecture
Guide.
Use network segmentation to restrict lateral movement across the network and isolate sensitive data. A flat network is much less secure.
For more information about securing your system, see the following Red Hat guides:
NOTE
Deploying a highly available overcloud without STONITH is not supported. You must
configure a STONITH device for each node that is a part of the Pacemaker cluster in a
highly available overcloud. For more information on STONITH and Pacemaker, see
Fencing in a Red Hat High Availability Cluster and Support Policies for RHEL High
Availability Clusters.
You can also configure high availability for Compute instances with director (Instance HA). This high
availability mechanism automates evacuation and re-spawning of instances on Compute nodes in case
of node failure. The requirements for Instance HA are the same as the general overcloud requirements,
but you must perform a few additional steps to prepare your environment for the deployment. For more
information about Instance HA and installation instructions, see the High Availability for Compute
Instances guide.
Processor
64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.
Memory
The minimum amount of memory is 32 GB. However, the amount of recommended memory depends
on the number of vCPUs, which is based on the number of CPU cores multiplied by the hyper-threading
value. Use the following calculations to determine your RAM requirements:
Use 1.5 GB of memory for each vCPU. For example, a machine with 48 vCPUs should
have 72 GB of RAM.
Use 3 GB of memory for each vCPU. For example, a machine with 48 vCPUs should have
144 GB of RAM.
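The two guidelines above are simple multiplications. As a sketch, you can check the arithmetic in shell; the core count and hyper-threading value here are illustrative, not a recommendation:

```shell
# Illustrative hardware: 24 physical cores, hyper-threading value of 2.
cores=24
threads_per_core=2
vcpus=$(( cores * threads_per_core ))   # 48 vCPUs

# 1.5 GB of memory per vCPU (written as *3/2 to stay in integer arithmetic).
min_ram_gb=$(( vcpus * 3 / 2 ))         # 72 GB
# 3 GB of memory per vCPU.
rec_ram_gb=$(( vcpus * 3 ))             # 144 GB

echo "vCPUs: ${vcpus}, 1.5 GB/vCPU: ${min_ram_gb} GB, 3 GB/vCPU: ${rec_ram_gb} GB"
```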
For more information about measuring memory requirements, see "Red Hat OpenStack Platform
Hardware Requirements for Highly Available Controllers" on the Red Hat Customer Portal.
CHAPTER 6. PLANNING YOUR OVERCLOUD
small overclouds built on commodity hardware. These environments are typical of proof-of-concept
and test environments. You can use these defaults to deploy overclouds with minimal planning, but
they offer little in terms of workload capacity and performance.
In an enterprise environment, however, the defaults could cause a significant bottleneck because
Telemetry accesses storage constantly. This results in heavy disk I/O usage, which severely impacts
the performance of all other Controller services. In this type of environment, you must plan your
overcloud and configure it accordingly.
Red Hat provides several configuration recommendations for both Telemetry and Object Storage.
For more information, see Deployment Recommendations for Specific Red Hat OpenStack Platform
Services.
Processor
64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions, and the
AMD-V or Intel VT hardware virtualization extensions enabled. It is recommended that this
processor have a minimum of 4 cores.
Memory
A minimum of 6 GB of RAM. Add additional RAM to this requirement based on the amount of
memory that you intend to make available to virtual machine instances.
Disk space
A minimum of 40 GB of available disk space.
Network Interface Cards
A minimum of one 1 Gbps Network Interface Card, although it is recommended to use at least two
NICs in a production environment. Use additional network interface cards for bonded interfaces or to
delegate tagged VLAN traffic.
Power management
Each Compute node requires a supported power management interface, such as Intelligent
Platform Management Interface (IPMI) functionality, on the server motherboard.
Ceph Storage nodes are responsible for providing object storage in a Red Hat OpenStack Platform
environment.
/dev/sda - The root disk. The director copies the main overcloud image to the disk. Ensure
that the disk has a minimum of 40 GB of available disk space.
/dev/sdb - The journal disk. This disk divides into partitions for Ceph OSD journals. For
example, /dev/sdb1, /dev/sdb2, and /dev/sdb3. The journal disk is usually a solid state drive
(SSD) to aid with system performance.
/dev/sdc and onward - The OSD disks. Use as many disks as necessary for your storage
requirements.
NOTE
Red Hat OpenStack Platform director uses ceph-ansible, which does not
support installing the OSD on the root disk of Ceph Storage nodes. This
means that you need at least two disks for a supported Ceph Storage node.
For more information about installing an overcloud with a Ceph Storage cluster, see the Deploying an
Overcloud with Containerized Red Hat Ceph guide.
Object Storage nodes provide an object storage layer for the overcloud. The Object Storage proxy is
installed on Controller nodes. The storage layer requires bare metal nodes with multiple disks on each
node.
Processor
64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.
Memory
Memory requirements depend on the amount of storage space. Use a minimum of 1 GB of memory for
each 1 TB of hard disk space. For optimal performance, it is recommended to use 2 GB for each 1 TB
of hard disk space, especially for workloads with files smaller than 100 GB.
Disk space
Storage requirements depend on the capacity needed for the workload. It is recommended to use
SSD drives to store the account and container data. The capacity ratio of account and container
data to objects is approximately 1 percent. For example, for every 100 TB of hard drive capacity,
provide 1 TB of SSD capacity for account and container data.
However, this depends on the type of stored data. If you want to store mostly small objects, provide
more SSD space. For large objects (videos, backups), use less SSD space.
Disk layout
The recommended node configuration requires a disk layout similar to the following example:
/dev/sda - The root disk. Director copies the main overcloud image to the disk.
/dev/sdd and onward - The object server disks. Use as many disks as necessary for your
storage requirements.
You must enable the following repositories to install and configure the overcloud.
Core repositories
The following table lists core repositories for installing the overcloud.
Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) Extended Update Support (EUS)
Repository ID: rhel-8-for-x86_64-baseos-eus-rpms. Base operating system repository for x86_64 systems.
Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs) Extended Update Support (EUS)
Repository ID: rhel-8-for-x86_64-appstream-eus-rpms. Contains Red Hat OpenStack Platform dependencies.
Red Hat Enterprise Linux 8 for x86_64 - High Availability (RPMs) Extended Update Support (EUS)
Repository ID: rhel-8-for-x86_64-highavailability-eus-rpms. High availability tools for Red Hat Enterprise Linux. Used for Controller node high availability.
Red Hat Ansible Engine 2.8 for RHEL 8 x86_64 (RPMs)
Repository ID: ansible-2.8-for-rhel-8-x86_64-rpms. Ansible Engine for Red Hat Enterprise Linux. Used to provide the latest version of Ansible.
Red Hat Satellite Tools for RHEL 8 Server RPMs x86_64
Repository ID: satellite-tools-6.5-for-rhel-8-x86_64-rpms. Tools for managing hosts with Red Hat Satellite 6.
Red Hat OpenStack Platform 16.0 for RHEL 8 (RPMs)
Repository ID: openstack-16-for-rhel-8-x86_64-rpms. Core Red Hat OpenStack Platform repository.
Red Hat Fast Datapath for RHEL 8 (RPMs)
Repository ID: fast-datapath-for-rhel-8-x86_64-rpms. Provides Open vSwitch (OVS) packages for OpenStack Platform.
Ceph repositories
The following table lists Ceph Storage related repositories for the overcloud.
Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs)
Repository ID: rhel-8-for-x86_64-baseos-rpms. Base operating system repository for x86_64 systems.
Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs)
Repository ID: rhel-8-for-x86_64-appstream-eus-rpms. Contains Red Hat OpenStack Platform dependencies.
Red Hat Enterprise Linux 8 for x86_64 - High Availability (RPMs)
Repository ID: rhel-8-for-x86_64-highavailability-rpms. High availability tools for Red Hat Enterprise Linux.
Red Hat Ansible Engine 2.8 for RHEL 8 x86_64 (RPMs)
Repository ID: ansible-2.8-for-rhel-8-x86_64-rpms. Ansible Engine for Red Hat Enterprise Linux. Used to provide the latest version of Ansible.
Red Hat Ceph Storage OSD 4 for RHEL 8 x86_64 (RPMs)
Repository ID: rhceph-4-osd-for-rhel-8-x86_64-rpms. (For Ceph Storage nodes) Repository for the Ceph Storage Object Storage daemon. Installed on Ceph Storage nodes.
Red Hat Ceph Storage MON 4 for RHEL 8 x86_64 (RPMs)
Repository ID: rhceph-4-mon-for-rhel-8-x86_64-rpms. (For Ceph Storage nodes) Repository for the Ceph Storage Monitor daemon. Installed on Controller nodes in OpenStack environments using Ceph Storage nodes.
Red Hat Ceph Storage Tools 4 for RHEL 8 x86_64 (RPMs)
Repository ID: rhceph-4-tools-for-rhel-8-x86_64-rpms. Provides tools for nodes to communicate with the Ceph Storage cluster. Enable this repository for all nodes when you deploy an overcloud with a Ceph Storage cluster or when you integrate your overcloud with an existing Ceph Storage cluster.
Red Hat Enterprise Linux 8 for x86_64 - Real Time (RPMs)
Repository ID: rhel-8-for-x86_64-rt-rpms. Repository for Real Time KVM (RT-KVM). Contains packages to enable the real time kernel. Enable this repository for all Compute nodes targeted for RT-KVM. NOTE: You need a separate subscription to a Red Hat OpenStack Platform for Real Time SKU to access this repository.
Red Hat Enterprise Linux 8 for x86_64 - Real Time for NFV (RPMs)
Repository ID: rhel-8-for-x86_64-nfv-rpms. Repository for Real Time KVM (RT-KVM) for NFV. Contains packages to enable the real time kernel. Enable this repository for all NFV Compute nodes targeted for RT-KVM. NOTE: You need a separate subscription to a Red Hat OpenStack Platform for Real Time SKU to access this repository.
Red Hat Enterprise Linux for IBM Power, little endian - BaseOS (RPMs)
Repository ID: rhel-8-for-ppc64le-baseos-rpms. Base operating system repository for ppc64le systems.
Red Hat Enterprise Linux 8 for IBM Power, little endian - AppStream (RPMs)
Repository ID: rhel-8-for-ppc64le-appstream-rpms. Contains Red Hat OpenStack Platform dependencies.
Red Hat Enterprise Linux 8 for IBM Power, little endian - High Availability (RPMs)
Repository ID: rhel-8-for-ppc64le-highavailability-rpms. High availability tools for Red Hat Enterprise Linux. Used for Controller node high availability.
Red Hat Ansible Engine 2.8 for RHEL 8 IBM Power, little endian (RPMs)
Repository ID: ansible-2.8-for-rhel-8-ppc64le-rpms. Ansible Engine for Red Hat Enterprise Linux. Used to provide the latest version of Ansible.
Red Hat OpenStack Platform 16.0 for RHEL 8 (RPMs)
Repository ID: openstack-16-for-rhel-8-ppc64le-rpms. Core Red Hat OpenStack Platform repository for ppc64le systems.
IMPORTANT
This feature is available in this release as a Technology Preview, and therefore is not
fully supported by Red Hat. It should only be used for testing, and should not be
deployed in a production environment. For more information about Technology
Preview features, see Scope of Coverage Details.
Procedure
1. Create a template that lists your nodes. Use the following JSON and YAML template examples
to understand how to structure your node definition template:
{
"nodes":[
{
"mac":[
"bb:bb:bb:bb:bb:bb"
],
"name":"node01",
"cpu":"4",
"memory":"6144",
"disk":"40",
"arch":"x86_64",
"pm_type":"ipmi",
"pm_user":"admin",
"pm_password":"p@55w0rd!",
"pm_addr":"192.168.24.205"
},
{
"mac":[
"cc:cc:cc:cc:cc:cc"
],
"name":"node02",
"cpu":"4",
"memory":"6144",
"disk":"40",
"arch":"x86_64",
"pm_type":"ipmi",
"pm_user":"admin",
"pm_password":"p@55w0rd!",
"pm_addr":"192.168.24.206"
}
]
}
CHAPTER 7. CONFIGURING A BASIC OVERCLOUD WITH CLI TOOLS
nodes:
- mac:
- "bb:bb:bb:bb:bb:bb"
name: "node01"
cpu: 4
memory: 6144
disk: 40
arch: "x86_64"
pm_type: "ipmi"
pm_user: "admin"
pm_password: "p@55w0rd!"
pm_addr: "192.168.24.205"
- mac:
- cc:cc:cc:cc:cc:cc
name: "node02"
cpu: 4
memory: 6144
disk: 40
arch: "x86_64"
pm_type: "ipmi"
pm_user: "admin"
pm_password: "p@55w0rd!"
pm_addr: "192.168.24.206"
name
The logical name for the node.
pm_type
The power management driver that you want to use. This example uses the IPMI driver
(ipmi).
pm_user, pm_password
The IPMI username and password.
pm_addr
The IP address of the IPMI device.
pm_port (Optional)
The port to access the specific IPMI device.
mac
(Optional) A list of MAC addresses for the network interfaces on the node. Use only the
MAC address for the Provisioning NIC of each system.
cpu
(Optional) The number of CPUs on the node.
memory
(Optional) The amount of memory in MB.
disk
(Optional) The size of the hard disk in GB.
arch
(Optional) The system architecture.
2. After you create the template, run the following commands to verify the formatting and syntax:
$ source ~/stackrc
(undercloud) $ openstack overcloud node import --validate-only ~/nodes.json
3. Save the file to the home directory of the stack user (/home/stack/nodes.json), then run the
following commands to import the template to director:
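The import command itself does not appear above. Based on the validate-only form shown earlier, it typically looks like the following, assuming the template is saved as /home/stack/nodes.json:

```
(undercloud) $ openstack overcloud node import ~/nodes.json
```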
This command registers each node from the template into director.
4. Wait for the node registration and configuration to complete. When complete, confirm that
director has successfully registered the nodes:
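The confirmation command is not shown above; a typical way to list the registered nodes and their current state is:

```
(undercloud) $ openstack baremetal node list
```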
IMPORTANT
This feature is available in this release as a Technology Preview, and therefore is not fully
supported by Red Hat. It should only be used for testing, and should not be deployed in a
production environment. For more information about Technology Preview features, see
Scope of Coverage Details.
Procedure
$ source ~/stackrc
2. Run the openstack tripleo validator run command with the --group pre-introspection option:
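Putting the command and option named in this step together:

```
(undercloud) $ openstack tripleo validator run --group pre-introspection
```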
IMPORTANT
A FAILED validation does not prevent you from deploying or running Red Hat OpenStack
Platform. However, a FAILED validation can indicate a potential issue with a production
environment.
Procedure
1. Run the following command to inspect the hardware attributes of each node:
Use the --all-manageable option to introspect only the nodes that are in a managed state.
In this example, all nodes are in a managed state.
Use the --provide option to reset all nodes to an available state after introspection.
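The introspection command itself is not shown above; combining the two options described, it typically takes the following form:

```
(undercloud) $ openstack overcloud node introspect --all-manageable --provide
```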
IMPORTANT
Ensure that this process runs to completion. This process usually takes 15
minutes for bare metal nodes.
Default profile flavors compute, control, swift-storage, ceph-storage, and block-storage are created
during undercloud installation and are usable without modification in most environments.
Procedure
1. To tag a node into a specific profile, add a profile option to the properties/capabilities
parameter for each node. For example, to tag your nodes to use Controller and Compute
profiles respectively, use the following commands:
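The tagging commands are not shown above; a typical form, with placeholder node UUIDs that you replace with the IDs from your own node list, is:

```
(undercloud) $ openstack baremetal node set 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 \
    --property capabilities='profile:control,boot_option:local'
(undercloud) $ openstack baremetal node set 6faba1a9-e2d8-4b7c-95a2-c7fbdc12129a \
    --property capabilities='profile:compute,boot_option:local'
```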
The addition of the profile:control and profile:compute options tags the two nodes into their
respective profiles.
These commands also set the boot_option:local parameter, which defines how each node
boots.
2. After you complete node tagging, check the assigned profiles or possible profiles:
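A typical command to display the profile assignments is:

```
(undercloud) $ openstack overcloud profiles list
```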
Procedure
1. Set the following parameters in your undercloud.conf file:
ipxe_enabled = True
inspection_enable_uefi = True
2. Save the undercloud.conf file and run the undercloud installation:
$ openstack undercloud install
3. Set the boot mode to uefi for each registered node. For example, to add or replace the existing
boot_mode parameters in the capabilities property, run the following command:
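The command is not shown above; a minimal sketch, with <node> as a placeholder for the node name or UUID, is:

```
(undercloud) $ openstack baremetal node set --property capabilities="boot_mode:uefi,boot_option:local" <node>
```

Note that this sketch replaces the entire capabilities value, so include any existing capabilities, such as the profile, in the new value.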
NOTE
Check that you have retained the profile and boot_option capabilities:
IMPORTANT
This feature is available in this release as a Technology Preview, and therefore is not fully
supported by Red Hat. It should only be used for testing, and should not be deployed in a
production environment. For more information about Technology Preview features, see
Scope of Coverage Details.
You can use Redfish virtual media boot to supply a boot image to the Baseboard Management
Controller (BMC) of a node so that the BMC can insert the image into one of the virtual drives. The
node can then boot from the virtual drive into the operating system that exists in the image.
Redfish hardware types support booting deploy, rescue, and user images over virtual media. The Bare
Metal service (ironic) uses kernel and ramdisk images associated with a node to build bootable ISO
images for UEFI or BIOS boot modes at the moment of node deployment. The major advantage of
virtual media boot is that you can eliminate the TFTP image transfer phase of PXE and use HTTP GET,
or other methods, instead.
To boot a node with the redfish hardware type over virtual media, set the boot interface to redfish-
virtual-media and, for UEFI nodes, define the EFI System Partition (ESP) image. Then configure an
enrolled node to use Redfish virtual media boot.
Prerequisites
For UEFI nodes, you must also have an EFI system partition image (ESP) available in the Image
Service (glance).
Procedure
3. For UEFI nodes, define the EFI System Partition (ESP) image:
Replace $ESP with the glance image UUID or URL for the ESP image, and replace
$NODE_NAME with the name of the node.
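The command for this step is not shown above. A sketch using the variables described, and assuming the ESP image is stored in the node's bootloader driver_info field, is:

```
(undercloud) $ openstack baremetal node set --driver-info bootloader=$ESP $NODE_NAME
```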
4. Create a port on the bare metal node and associate the port with the MAC address of the NIC
on the bare metal node:
Replace $UUID with the UUID of the bare metal node, and replace $MAC_ADDRESS with the
MAC address of the NIC on the bare metal node.
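A typical form of the port creation command, using the variables described above, is:

```
(undercloud) $ openstack baremetal port create --node $UUID $MAC_ADDRESS
```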
There are several properties that you can define to help the director identify the root disk:
wwn_with_extension (String): Unique storage identifier with the vendor extension appended.
rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD).
IMPORTANT
Use the name property only for devices with persistent names. Do not use name to set
the root disk for any other devices because this value can change when the node boots.
Complete the following steps to specify the root device using its serial number.
Procedure
1. Check the disk information from the hardware introspection of each node. Run the following
command to display the disk information of a node:
For example, the data for one node might show three disks:
[
{
"size": 299439751168,
"rotational": true,
"vendor": "DELL",
"name": "/dev/sda",
"wwn_vendor_extension": "0x1ea4dcc412a9632b",
"wwn_with_extension": "0x61866da04f3807001ea4dcc412a9632b",
"model": "PERC H330 Mini",
"wwn": "0x61866da04f380700",
"serial": "61866da04f3807001ea4dcc412a9632b"
},
{
"size": 299439751168,
"rotational": true,
"vendor": "DELL",
"name": "/dev/sdb",
"wwn_vendor_extension": "0x1ea4e13c12e36ad6",
"wwn_with_extension": "0x61866da04f380d001ea4e13c12e36ad6",
"model": "PERC H330 Mini",
"wwn": "0x61866da04f380d00",
"serial": "61866da04f380d001ea4e13c12e36ad6"
},
{
"size": 299439751168,
"rotational": true,
"vendor": "DELL",
"name": "/dev/sdc",
"wwn_vendor_extension": "0x1ea4e31e121cfb45",
"wwn_with_extension": "0x61866da04f37fc001ea4e31e121cfb45",
"model": "PERC H330 Mini",
"wwn": "0x61866da04f37fc00",
"serial": "61866da04f37fc001ea4e31e121cfb45"
}
]
2. Run the openstack baremetal node set --property root_device= command to set the root
disk for a node. Include the most appropriate hardware attribute value to define the root disk.
For example, to set the root device to disk 2, which has the serial number
61866da04f380d001ea4e13c12e36ad6, run the following command:
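The command is not shown above; a sketch, with <node> as a placeholder for the node name or UUID, is:

```
(undercloud) $ openstack baremetal node set --property \
    root_device='{"serial": "61866da04f380d001ea4e13c12e36ad6"}' <node>
```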
NOTE
Ensure that you configure the BIOS of each node to include booting from the root disk
that you choose. Configure the boot order to boot from the network first, then to boot
from the root disk.
Director identifies the specific disk to use as the root disk. When you run the openstack overcloud
deploy command, director provisions and writes the overcloud image to the root disk.
process. The overcloud-full image uses a valid Red Hat subscription. However, you can also use the
overcloud-minimal image, for example, to provision a bare OS where you do not want to run any other
OpenStack services and consume your subscription entitlements.
A common use case for this occurs when you want to provision nodes with only Ceph daemons. For this
and similar use cases, you can use the overcloud-minimal image option to avoid reaching the limit of
your paid Red Hat subscriptions. For information about how to obtain the overcloud-minimal image,
see Obtaining images for overcloud nodes.
NOTE
A Red Hat OpenStack Platform subscription contains Open vSwitch (OVS), but core
services, such as OVS, are not available when you use the overcloud-minimal image.
OVS is not required to deploy Ceph Storage nodes. Instead of using ovs_bond to define
bonds, use linux_bond. For more information about linux_bond, see Linux bonding
options.
Procedure
1. To configure director to use the overcloud-minimal image, create an environment file that
contains the following image definition:
parameter_defaults:
<roleName>Image: overcloud-minimal
2. Replace <roleName> with the name of the role and append Image to the name of the role. The
following example shows an overcloud-minimal image for Ceph storage nodes:
parameter_defaults:
CephStorageImage: overcloud-minimal
NOTE
The overcloud-minimal image supports only standard Linux bridges and not OVS
because OVS is an OpenStack service that requires a Red Hat OpenStack Platform
subscription entitlement.
The undercloud includes a set of heat templates that form the plan for your overcloud creation. You can
customize aspects of the overcloud with environment files, which are YAML-formatted files that
override parameters and resources in the core heat template collection. You can include as many
environment files as necessary. However, the order of the environment files is important because the
parameters and resources that you define in subsequent environment files take precedence. Use the
following list as an example of the environment file order:
The number of nodes and the flavors for each role. It is vital to include this information for
overcloud creation.
Any network isolation files, starting with the initialization file (environments/network-
isolation.yaml) from the heat template collection, then your custom NIC configuration file, and
finally any additional network configurations. For more information, see the following chapters
in the Advanced Overcloud Customization guide:
Any external load balancing environment files if you are using an external load balancer. For
more information, see External Load Balancing for the Overcloud.
Red Hat recommends that you organize your custom environment files in a separate directory, such as
the templates directory.
For more information about customizing advanced features for your overcloud, see the Advanced
Overcloud Customization guide.
IMPORTANT
A basic overcloud uses local LVM storage for block storage, which is not a supported
configuration. It is recommended to use an external storage solution, such as Red Hat
Ceph Storage, for block storage.
NOTE
The environment file extension must be .yaml or .template, or it will not be treated as a
custom template resource.
The next few sections contain information about creating some environment files necessary for your
overcloud.
By default, director deploys an overcloud with 1 Controller node and 1 Compute node using the
baremetal flavor. However, this is only suitable for a proof-of-concept deployment. You can override
the default configuration by specifying different node counts and flavors. For a small-scale production
environment, deploy at least 3 Controller nodes and 3 Compute nodes, and assign specific flavors to
ensure that the nodes have the appropriate resource specifications. Complete the following steps to
create an environment file named node-info.yaml that stores the node counts and flavor assignments.
Procedure
2. Edit the file to include the node counts and flavors that you need. This example contains 3
Controller nodes and 3 Compute nodes:
parameter_defaults:
OvercloudControllerFlavor: control
OvercloudComputeFlavor: compute
ControllerCount: 3
ComputeCount: 3
NOTE
For this approach to work, your overcloud nodes must have a network route to the public
endpoint on the undercloud. It is likely that you must apply this configuration for
deployments that rely on spine-leaf networking.
There are two types of custom certificates you can use in the undercloud:
User-provided certificates - This definition applies when you have provided your own
certificate. This can be from your own CA, or it can be self-signed. This is passed using the
undercloud_service_certificate option. In this case, you must either trust the self-signed
certificate, or the CA (depending on your deployment).
Auto-generated certificates - This definition applies when you use certmonger to generate
the certificate using its own local CA. Enable auto-generated certificates with the
generate_service_certificate option in the undercloud.conf file. In this case, director
generates a CA certificate at /etc/pki/ca-trust/source/anchors/cm-local-ca.pem and
configures the undercloud’s HAProxy instance to use a server certificate. Add the CA
certificate to the inject-trust-anchor-hiera.yaml file to present the certificate to OpenStack
Platform.
This example uses a self-signed certificate located in /home/stack/ca.crt.pem. If you use auto-
generated certificates, use /etc/pki/ca-trust/source/anchors/cm-local-ca.pem instead.
Procedure
1. Open the certificate file and copy only the certificate portion. Do not include the key:
$ vi /home/stack/ca.crt.pem
The certificate portion you need looks similar to this shortened example:
-----BEGIN CERTIFICATE-----
MIIDlTCCAn2gAwIBAgIJAOnPtx2hHEhrMA0GCSqGSIb3DQEBCwUAMGExCzAJBgNV
BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH
UmVkIEhhdDELMAkGA1UECwwCUUUxFDASBgNVBAMMCzE5Mi4xNjguMC4yMB4XDTE3
-----END CERTIFICATE-----
parameter_defaults:
CAMap:
undercloud-ca:
content: |
-----BEGIN CERTIFICATE-----
MIIDlTCCAn2gAwIBAgIJAOnPtx2hHEhrMA0GCSqGSIb3DQEBCwUAMGExCzAJBgNV
BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH
UmVkIEhhdDELMAkGA1UECwwCUUUxFDASBgNVBAMMCzE5Mi4xNjguMC4yMB4XDTE3
-----END CERTIFICATE-----
Director copies the CA certificate to each overcloud node during the overcloud deployment. As a result,
each node trusts the encryption presented by the undercloud’s SSL endpoints. For more information
about environment files, see Section 7.15, “Including environment files in an overcloud deployment”.
--templates [TEMPLATES]
The directory that contains the heat templates that you want to deploy. If blank, the deployment command uses the default template location at /usr/share/openstack-tripleo-heat-templates/
--stack STACK
The name of the stack that you want to create or update
--libvirt-type [LIBVIRT_TYPE]
The virtualization type that you want to use for hypervisors
--ntp-server [NTP_SERVER]
The Network Time Protocol (NTP) server that you want to use to synchronize time. You can also specify multiple NTP servers in a comma-separated list, for example: --ntp-server 0.centos.pool.org,1.centos.pool.org. For a high availability cluster deployment, it is essential that your Controller nodes are consistently referring to the same time source. Note that a typical environment might already have a designated NTP time source with established practices.
--overcloud-ssh-key OVERCLOUD_SSH_KEY
Defines the key path for SSH access to overcloud nodes.
--overcloud-ssh-network OVERCLOUD_SSH_NETWORK
Defines the network name that you want to use for SSH access to overcloud nodes.
-e [EXTRA HEAT TEMPLATE], --extra-template [EXTRA HEAT TEMPLATE]
Extra environment files that you want to pass to the overcloud deployment. You can specify this option more than once. Note that the order of environment files that you pass to the openstack overcloud deploy command is important. For example, parameters from each sequential environment file override the same parameters from earlier environment files.
--update-plan-only
Use this option if you want to update the plan without performing the actual deployment.
--no-config-download, --stack-only
Use this option if you want to disable the config-download workflow and create only the stack and associated OpenStack resources. This command applies no software configuration to the overcloud.
--output-dir OUTPUT_DIR
The directory that you want to use for saved config-download output. The directory must be writeable by the mistral user. When not specified, director uses the default, which is /var/lib/mistral/overcloud.
Some command line parameters are outdated or deprecated in favor of using heat template
parameters, which you include in the parameter_defaults section in an environment file. The following
table maps deprecated parameters to their heat template equivalents.
These parameters are scheduled for removal in a future version of Red Hat OpenStack Platform.
The number of nodes and the flavors for each role. It is vital to include this information for
overcloud creation.
Any network isolation files, starting with the initialization file (environments/network-
isolation.yaml) from the heat template collection, then your custom NIC configuration file, and
finally any additional network configurations. For more information, see the following chapters
in the Advanced Overcloud Customization guide:
Any external load balancing environment files if you are using an external load balancer. For
more information, see External Load Balancing for the Overcloud.
Any environment files that you add to the overcloud using the -e option become part of the stack
definition of the overcloud.
The following command is an example of how to start the overcloud creation using environment files
defined earlier in this scenario:
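The deployment command itself does not appear above. Assembled from the options explained below, it typically looks like the following:

```
(undercloud) $ openstack overcloud deploy --templates \
    -e /home/stack/templates/node-info.yaml \
    -e /home/stack/containers-prepare-parameter.yaml \
    -e /home/stack/inject-trust-anchor-hiera.yaml \
    -r /home/stack/templates/roles_data.yaml
```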
--templates
Creates the overcloud using the heat template collection in /usr/share/openstack-tripleo-heat-
templates as a foundation.
-e /home/stack/templates/node-info.yaml
Adds an environment file to define how many nodes and which flavors to use for each role.
-e /home/stack/containers-prepare-parameter.yaml
Adds the container image preparation environment file. You generated this file during the
undercloud installation and can use the same file for your overcloud creation.
-e /home/stack/inject-trust-anchor-hiera.yaml
Adds an environment file to install a custom certificate in the undercloud.
-r /home/stack/templates/roles_data.yaml
(Optional) The generated roles data if you use custom roles or want to enable a multi architecture
cloud. For more information, see Section 7.9, “Creating architecture specific roles”.
Director requires these environment files for re-deployment and post-deployment functions. Failure to
include these files can result in damage to your overcloud.
To modify the overcloud configuration at a later stage, perform the following actions:
1. Modify the parameters in your custom environment files and heat templates.
2. Run the openstack overcloud deploy command again with the same environment files.
Do not edit the overcloud configuration directly because director overrides any manual configuration
when you update the overcloud stack.
IMPORTANT
This feature is available in this release as a Technology Preview, and therefore is not fully
supported by Red Hat. It should only be used for testing, and should not be deployed in a
production environment. For more information about Technology Preview features, see
Scope of Coverage Details.
Procedure
$ source ~/stackrc
2. This validation requires a copy of your overcloud plan. Upload your overcloud plan with all
necessary environment files. To upload your plan only, run the openstack overcloud deploy
command with the --update-plan-only option:
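A sketch of the plan-only upload; the environment file names here are placeholders for your actual files:

```
(undercloud) $ openstack overcloud deploy --templates \
    -e environment-file1.yaml \
    -e environment-file2.yaml \
    --update-plan-only
```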
3. Run the openstack tripleo validator run command with the --group pre-deployment option:
4. If the overcloud uses a plan name that is different to the default overcloud name, set the plan
name with the --plan option:
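Putting the command, group, and optional plan name together (myovercloud is a placeholder plan name):

```
(undercloud) $ openstack tripleo validator run --group pre-deployment
(undercloud) $ openstack tripleo validator run --group pre-deployment --plan myovercloud
```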
IMPORTANT
A FAILED validation does not prevent you from deploying or running Red Hat OpenStack
Platform. However, a FAILED validation can indicate a potential issue with a production
environment.
CHAPTER 7. CONFIGURING A BASIC OVERCLOUD WITH CLI TOOLS
Ansible passed.
Overcloud configuration completed.
Overcloud Endpoint: http://192.168.24.113:5000
Overcloud Horizon Dashboard URL: http://192.168.24.113:80/dashboard
Overcloud rc file: /home/stack/overcloudrc
Overcloud Deployed
This command loads the environment variables that are necessary to interact with your overcloud from
the undercloud CLI. The command prompt changes to indicate this:
(overcloud) $
Each node in the overcloud also contains a heat-admin user. The stack user has SSH access to this user
on each node. To access a node over SSH, find the IP address of the node that you want to access:
Then connect to the node using the heat-admin user and the IP address of the node:
IMPORTANT
This feature is available in this release as a Technology Preview, and therefore is not fully
supported by Red Hat. It should only be used for testing, and should not be deployed in a
production environment. For more information about Technology Preview features, see
Scope of Coverage Details.
Procedure
1. Source the stackrc file:
$ source ~/stackrc
2. Run the openstack tripleo validator run command with the --group post-deployment option:
3. If the overcloud uses a plan name that is different from the default overcloud name, set the plan
name with the --plan option:
IMPORTANT
A FAILED validation does not prevent you from deploying or running Red Hat OpenStack
Platform. However, a FAILED validation can indicate a potential issue with a production
environment.
CHAPTER 8. PROVISIONING BARE METAL NODES BEFORE DEPLOYING THE OVERCLOUD
IMPORTANT
This feature is available in this release as a Technology Preview, and therefore is not fully
supported by Red Hat. It should only be used for testing, and should not be deployed in a
production environment. For more information about Technology Preview features, see
Scope of Coverage Details.
Provisioning nodes
You can mitigate some of the risk involved with this process and identify points of failure more
efficiently if you separate these operations into distinct processes:
a. Run the deployment command, including the heat environment file that the provisioning
command generates.
The provisioning process provisions your nodes and generates a heat environment file that contains
various node specifications, including node count, predictive node placement, custom images, and
custom NICs. When you deploy your overcloud, include this file in the deployment command.
Procedure
1. Create a template that lists your nodes. Use the following JSON and YAML template examples
to understand how to structure your node definition template:
{
"nodes":[
{
"mac":[
"bb:bb:bb:bb:bb:bb"
],
"name":"node01",
"cpu":"4",
"memory":"6144",
"disk":"40",
"arch":"x86_64",
"pm_type":"ipmi",
"pm_user":"admin",
"pm_password":"p@55w0rd!",
"pm_addr":"192.168.24.205"
},
{
"mac":[
"cc:cc:cc:cc:cc:cc"
],
"name":"node02",
"cpu":"4",
"memory":"6144",
"disk":"40",
"arch":"x86_64",
"pm_type":"ipmi",
"pm_user":"admin",
"pm_password":"p@55w0rd!",
"pm_addr":"192.168.24.206"
}
]
}
nodes:
- mac:
- "bb:bb:bb:bb:bb:bb"
name: "node01"
cpu: 4
memory: 6144
disk: 40
arch: "x86_64"
pm_type: "ipmi"
pm_user: "admin"
pm_password: "p@55w0rd!"
pm_addr: "192.168.24.205"
- mac:
- cc:cc:cc:cc:cc:cc
name: "node02"
cpu: 4
memory: 6144
disk: 40
arch: "x86_64"
pm_type: "ipmi"
pm_user: "admin"
pm_password: "p@55w0rd!"
pm_addr: "192.168.24.206"
name
The logical name for the node.
pm_type
The power management driver that you want to use. This example uses the IPMI driver
(ipmi).
pm_user, pm_password
The IPMI username and password.
pm_addr
The IP address of the IPMI device.
pm_port
(Optional) The port to access the specific IPMI device.
mac
(Optional) A list of MAC addresses for the network interfaces on the node. Use only the
MAC address for the Provisioning NIC of each system.
cpu
(Optional) The number of CPUs on the node.
memory
(Optional) The amount of memory in MB.
disk
(Optional) The size of the hard disk in GB.
arch
(Optional) The system architecture.
2. After you create the template, run the following commands to verify the formatting and syntax:
$ source ~/stackrc
(undercloud) $ openstack overcloud node import --validate-only ~/nodes.json
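Before you run the director validation, you can also sanity-check the file locally. The following standalone Python sketch is not a director tool, and the required-key list is an assumption based on the fields described in this section:

```python
import json

# Power-management fields that each node entry needs for the IPMI driver
# (an assumption based on the template fields described in this section).
REQUIRED_KEYS = {"name", "pm_type", "pm_user", "pm_password", "pm_addr"}

def check_nodes(document: str) -> list:
    """Return a list of problems; an empty list means the file looks sane."""
    data = json.loads(document)  # raises ValueError on malformed JSON
    problems = []
    for index, node in enumerate(data.get("nodes", [])):
        missing = REQUIRED_KEYS - node.keys()
        if missing:
            problems.append(f"node {index}: missing {sorted(missing)}")
    return problems

# Example: the second entry omits its power-management credentials.
sample = """{"nodes": [
  {"name": "node01", "pm_type": "ipmi", "pm_user": "admin",
   "pm_password": "p@55w0rd!", "pm_addr": "192.168.24.205"},
  {"name": "node02", "pm_type": "ipmi"}
]}"""
print(check_nodes(sample))
```

In a real workflow you would read /home/stack/nodes.json instead of the inline sample; the --validate-only option of the import command remains the authoritative check.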
3. Save the file to the home directory of the stack user (/home/stack/nodes.json), then run the
following commands to import the template to director:
(undercloud) $ openstack overcloud node import ~/nodes.json
This command registers each node from the template into director.
4. Wait for the node registration and configuration to complete. When complete, confirm that
director has successfully registered the nodes:
(undercloud) $ openstack baremetal node list
Procedure
1. Run the following command to inspect the hardware attributes of each node:
(undercloud) $ openstack overcloud node introspect --all-manageable --provide
Use the --all-manageable option to introspect only the nodes that are in a manageable state.
In this example, all nodes are in a manageable state.
Use the --provide option to reset all nodes to an available state after introspection.
IMPORTANT
Ensure that this process runs to completion. This process usually takes 15
minutes for bare metal nodes.
Prerequisites
A successful undercloud installation. For more information, see Section 4.7, “Installing director” .
Bare metal nodes introspected and available for provisioning and deployment. For more
information, see Section 8.1, “Registering nodes for the overcloud” and Section 8.2, “Inspecting
the hardware of nodes”.
Procedure
1. Source the stackrc file:
$ source ~/stackrc
2. Create a new ~/overcloud-baremetal-deploy.yaml file and define the node count for each role
that you want to provision. For example, to provision three Controller nodes and three Compute
nodes, use the following syntax:
- name: Controller
count: 3
- name: Compute
count: 3
- name: Controller
count: 3
instances:
- hostname: overcloud-controller-0
name: node00
- hostname: overcloud-controller-1
name: node01
- hostname: overcloud-controller-2
name: node02
- name: Compute
count: 3
instances:
- hostname: overcloud-novacompute-0
name: node04
- hostname: overcloud-novacompute-1
name: node05
- hostname: overcloud-novacompute-2
name: node06
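As a quick consistency check on such a definition before provisioning, the following standalone Python sketch (not a director tool) verifies that no role lists more instances than its count and that hostnames are unique:

```python
def check_roles(roles: list) -> list:
    """Return problems found in a parsed node-count definition."""
    problems = []
    seen_hostnames = set()
    for role in roles:
        instances = role.get("instances", [])
        if len(instances) > role["count"]:
            problems.append(
                f"{role['name']}: {len(instances)} instances exceed count {role['count']}")
        for instance in instances:
            hostname = instance["hostname"]
            if hostname in seen_hostnames:
                problems.append(f"duplicate hostname: {hostname}")
            seen_hostnames.add(hostname)
    return problems

# Mirrors the Controller/Compute example above, as parsed from the YAML.
roles = [
    {"name": "Controller", "count": 3, "instances": [
        {"hostname": "overcloud-controller-0", "name": "node00"},
        {"hostname": "overcloud-controller-1", "name": "node01"},
        {"hostname": "overcloud-controller-2", "name": "node02"},
    ]},
    {"name": "Compute", "count": 3, "instances": [
        {"hostname": "overcloud-novacompute-0", "name": "node04"},
    ]},
]
print(check_roles(roles))  # [] means the definition is consistent
```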
By default, the provisioning process uses the overcloud-full image. You can use the image
attribute in the instances parameter to define a custom image:
- name: Controller
count: 3
instances:
- hostname: overcloud-controller-0
name: node00
image:
href: overcloud-custom
You can also override the default parameter values with the defaults parameter to avoid
manual node definitions for each node entry:
- name: Controller
count: 3
defaults:
image:
href: overcloud-custom
instances:
- hostname: overcloud-controller-0
name: node00
- hostname: overcloud-controller-1
name: node01
- hostname: overcloud-controller-2
name: node02
For more information about the parameters, attributes, and values that you can use in your node
definition file, see Section 8.6, “Bare metal node provisioning attributes” .
The provisioning process generates a heat environment file with the name that you specify in
the --output option. This file contains your node definitions. When you deploy the overcloud,
include this file in the deployment command.
5. In a separate terminal, monitor your nodes to verify that they provision successfully. The
provisioning process changes the node state from available to active:
Use the metalsmith tool to obtain a unified view of your nodes, including allocations and
neutron ports:
You can also use the openstack baremetal allocation command to verify association of nodes
to hostnames, and to obtain IP addresses for the provisioned nodes:
When your nodes are provisioned successfully, you can deploy the overcloud. For more information, see
Chapter 9, Configuring a basic overcloud with pre-provisioned nodes .
Prerequisites
A successful undercloud installation. For more information, see Section 4.7, “Installing director” .
A successful overcloud deployment. For more information, see Chapter 9, Configuring a basic
overcloud with pre-provisioned nodes.
Bare metal nodes introspected and available for provisioning and deployment. For more
information, see Section 8.1, “Registering nodes for the overcloud” and Section 8.2, “Inspecting
the hardware of nodes”.
Procedure
1. Source the stackrc file:
$ source ~/stackrc
2. Edit the ~/overcloud-baremetal-deploy.yaml file that you used to provision your bare metal
nodes, and increment the count parameter for the roles that you want to scale up. For example,
if your overcloud contains three Compute nodes, use the following snippet to increase the
Compute node count to 10:
- name: Controller
count: 3
- name: Compute
count: 10
You can also add predictive node placement with the instances parameter. For more
information about the parameters and attributes that are available, see Section 8.6, “Bare metal
node provisioning attributes”.
4. Monitor the provisioning progress with the openstack baremetal node list command.
Prerequisites
A successful undercloud installation. For more information, see Section 4.7, “Installing director” .
A successful overcloud deployment. For more information, see Chapter 9, Configuring a basic
overcloud with pre-provisioned nodes.
At least one bare metal node that you want to remove from the stack.
Procedure
1. Source the stackrc file:
$ source ~/stackrc
2. Edit the ~/overcloud-baremetal-deploy.yaml file that you used to provision your bare metal
nodes, and decrement the count parameter for the roles that you want to scale down. You must
also define the following attributes for each node that you want to remove from the stack:
- name: Controller
count: 2
instances:
- hostname: overcloud-controller-0
name: node00
- hostname: overcloud-controller-1
name: node01
# Removed from cluster due to disk failure
provisioned: false
- hostname: overcloud-controller-2
name: node02
4. Redeploy the overcloud and include the ~/overcloud-baremetal-deployed.yaml file that the
provisioning command generates, along with any other environment files relevant to your
deployment:
After you redeploy the overcloud, the nodes that you define with the provisioned: false
attribute are no longer present in the stack. However, these nodes are still running in a
provisioned state.
NOTE
If you want to remove a node from the stack temporarily, you can deploy the
overcloud with the attribute provisioned: false and then redeploy the overcloud
with the attribute provisioned: true to return the node to the stack.
5. Run the openstack overcloud node delete command, including the ~/overcloud-baremetal-
deploy.yaml file with the --baremetal-deployment option.
NOTE
Do not include the nodes that you want to remove from the stack as command
arguments in the openstack overcloud node delete command.
Example syntax
In the following example, the name refers to the logical name of the node, and the hostname refers to
the generated hostname which is derived from the overcloud stack name, the role, and an incrementing
index. All Controller servers use a default custom image overcloud-full-custom and are on predictive
nodes. One of the Compute servers is placed predictively on node04 with custom host name
overcloud-compute-special, and the other 99 Compute servers are on nodes allocated automatically
from the pool of available nodes:
- name: Controller
count: 3
defaults:
image:
href: file:///var/lib/ironic/images/overcloud-full-custom.qcow2
instances:
- hostname: overcloud-controller-0
name: node00
- hostname: overcloud-controller-1
name: node01
- hostname: overcloud-controller-2
name: node02
- name: Compute
count: 100
instances:
- hostname: overcloud-compute-special
name: node04
Example syntax
In the following example, all Controller servers use a custom default overcloud image overcloud-full-
custom. The Controller server overcloud-controller-0 is placed predictively on node00 and has custom
root and swap sizes. The other two Controller servers are on nodes allocated automatically from the
pool of available nodes, and have default root and swap sizes:
- name: Controller
count: 3
defaults:
image:
href: file:///var/lib/ironic/images/overcloud-full-custom.qcow2
instances:
- hostname: overcloud-controller-0
name: node00
root_size_gb: 140
swap_size_mb: 600
Example syntax
In the following example, all three Controller servers are on nodes allocated automatically from the pool
of available nodes. All Controller servers in this environment use a default custom image overcloud-full-
custom:
- name: Controller
count: 3
defaults:
image:
href: file:///var/lib/ironic/images/overcloud-full-custom.qcow2
checksum: 1582054665
kernel: file:///var/lib/ironic/images/overcloud-full-custom.vmlinuz
ramdisk: file:///var/lib/ironic/images/overcloud-full-custom.initrd
fixed_ip
The specific IP address that you want to use for this NIC.
Example syntax
In the following example, all three Controller servers are on nodes allocated automatically from the pool
of available nodes. All Controller servers in this environment use a default custom image overcloud-full-
custom and have specific networking requirements:
- name: Controller
count: 3
defaults:
image:
href: file:///var/lib/ironic/images/overcloud-full-custom.qcow2
nics:
network: custom-network
subnet: custom-subnet
You can provision nodes with an external tool and let the director control the overcloud
configuration only.
You can use nodes without relying on the director provisioning methods. This is useful if you
want to create an overcloud without power management control, or use networks with
DHCP/PXE boot restrictions.
The director does not use OpenStack Compute (nova), OpenStack Bare Metal (ironic), or
OpenStack Image (glance) to manage nodes.
Pre-provisioned nodes can use a custom partitioning layout that does not rely on the QCOW2
overcloud-full image.
This scenario includes only basic configuration with no custom features. However, you can add advanced
configuration options to this basic overcloud and customize it to your specifications with the instructions
in the Advanced Overcloud Customization guide.
A set of bare metal machines for your nodes. The number of nodes required depends on the
type of overcloud you intend to create. These machines must comply with the requirements set
for each node type. These nodes require Red Hat Enterprise Linux 8.1 or later installed as the
host operating system. Red Hat recommends using the latest version available.
One network connection for managing the pre-provisioned nodes. This scenario requires
uninterrupted SSH access to the nodes for orchestration agent configuration.
One network connection for the Control Plane network. There are two main scenarios for this
network:
Using the Provisioning Network as the Control Plane, which is the default scenario. This
network is usually a layer-3 (L3) routable network connection from the pre-provisioned
nodes to director. The examples for this scenario use the following IP address assignments:
CHAPTER 9. CONFIGURING A BASIC OVERCLOUD WITH PRE-PROVISIONED NODES
Director: 192.168.24.1
Controller 0: 192.168.24.2
Compute 0: 192.168.24.3
Using a separate network. In situations where the director’s Provisioning network is a private
non-routable network, you can define IP addresses for nodes from any subnet and
communicate with director over the Public API endpoint. For more information about the
requirements for this scenario, see Section 9.6, “Using a separate network for pre-
provisioned nodes”.
All other network types in this example also use the Control Plane network for OpenStack
services. However, you can create additional networks for other network traffic types.
If any nodes use Pacemaker resources, the service user hacluster and the service group
haclient must have a UID/GID of 189. This is due to CVE-2018-16877. If you installed
Pacemaker together with the operating system, the installation creates these IDs automatically.
If the ID values are set incorrectly, follow the steps in the article OpenStack minor update / fast-
forward upgrade can fail on the controller nodes at pacemaker step with "Could not evaluate:
backup_cib" to change the ID values.
To prevent some services from binding to an incorrect IP address and causing deployment
failures, make sure that the /etc/hosts file does not include the node-name=127.0.0.1 mapping.
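A minimal sketch of that check in Python (a standalone helper, not one of director's validations); it flags any uncommented /etc/hosts line that maps the node name to 127.0.0.1:

```python
def loopback_mappings(hosts_text: str, node_name: str) -> list:
    """Return /etc/hosts lines that map node_name to 127.0.0.1."""
    flagged = []
    for line in hosts_text.splitlines():
        fields = line.split("#")[0].split()  # drop comments, then tokenize
        if fields and fields[0] == "127.0.0.1" and node_name in fields[1:]:
            flagged.append(line)
    return flagged

# The first line here is the mapping that causes deployment failures.
sample = (
    "127.0.0.1 localhost overcloud-controller-0\n"
    "192.168.24.2 overcloud-controller-0\n"
)
print(loopback_mappings(sample, "overcloud-controller-0"))
```

On a node, you would pass the contents of /etc/hosts and the node's own hostname; correct any flagged line before deployment.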
Procedure
1. On each overcloud node, create the stack user and set a password. For example, run the
following commands on the Controller node:
2. Disable password requirements for the stack user when using sudo.
3. After you create and configure the stack user on all pre-provisioned nodes, copy the stack
user’s public SSH key from the director node to each overcloud node. For example, to copy the
director’s public SSH key to the Controller node, run the following command:
Procedure
1. Run the registration command and enter your Customer Portal user name and password when
prompted:
2. Find the entitlement pool for Red Hat OpenStack Platform 16:
3. Use the pool ID that you located in the previous step to attach the Red Hat OpenStack
Platform 16 entitlements:
IMPORTANT
Enable only the listed repositories. Do not enable any additional repositories because they can
cause package and software conflicts.
6. Update your system to ensure you have the latest base system packages:
Procedure
These steps ensure that the overcloud nodes can access the director’s Public API over SSL/TLS.
The director Control Plane network, which is the subnet that you define with the network_cidr
parameter in your undercloud.conf file. The overcloud nodes require either direct access to this
subnet or routable access to the subnet.
The director Public API endpoint, which you specify with the undercloud_public_host
parameter in your undercloud.conf file. This option is available if you do not have an L3 route to
the Control Plane or if you want to use SSL/TLS communication. For more information about
configuring your overcloud nodes to use the Public API endpoint, see Section 9.6, “Using a
separate network for pre-provisioned nodes”.
Director uses the Control Plane network to manage and configure a standard overcloud. For an
overcloud with pre-provisioned nodes, your network configuration might require some modification to
accommodate communication between the director and the pre-provisioned nodes.
NOTE
If you use network isolation, ensure that your NIC templates do not include the NIC used
for undercloud access. These templates can reconfigure the NIC, which introduces
connectivity and configuration problems during deployment.
Assigning IP addresses
If you do not use network isolation, you can use a single Control Plane network to manage all services.
This requires manual configuration of the Control Plane NIC on each node to use an IP address within
the Control Plane network range. If you are using the director Provisioning network as the Control Plane,
ensure that the overcloud IP addresses that you choose are outside of the DHCP ranges for both
provisioning (dhcp_start and dhcp_end) and introspection (inspection_iprange).
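You can express that range check with the Python stdlib ipaddress module. The range values below are the director defaults from undercloud.conf; substitute the values from your own configuration:

```python
import ipaddress

def outside_ranges(candidate: str, ranges: list) -> bool:
    """True if candidate lies outside every (start, end) address range."""
    address = ipaddress.ip_address(candidate)
    return not any(
        ipaddress.ip_address(start) <= address <= ipaddress.ip_address(end)
        for start, end in ranges
    )

# Default undercloud.conf ranges; replace with your deployment's values.
reserved = [
    ("192.168.24.5", "192.168.24.24"),     # dhcp_start, dhcp_end
    ("192.168.24.100", "192.168.24.120"),  # inspection_iprange
]
print(outside_ranges("192.168.24.50", reserved))  # True: safe to assign manually
print(outside_ranges("192.168.24.10", reserved))  # False: inside the DHCP range
```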
During standard overcloud creation, director creates OpenStack Networking (neutron) ports and
automatically assigns IP addresses to the overcloud nodes on the Provisioning / Control Plane network.
However, this can cause director to assign IP addresses that differ from the ones that you configure
manually for each node. In this situation, use a predictable IP address strategy to force director to use
the pre-provisioned IP assignments on the Control Plane.
For example, you can use an environment file ctlplane-assignments.yaml with the following IP
assignments to implement a predictable IP strategy:
resource_registry:
OS::TripleO::DeployedServer::ControlPlanePort: /usr/share/openstack-tripleo-heat-
templates/deployed-server/deployed-neutron-port.yaml
parameter_defaults:
DeployedServerPortMap:
controller-0-ctlplane:
fixed_ips:
- ip_address: 192.168.24.2
subnets:
- cidr: 192.168.24.0/24
network:
tags:
- 192.168.24.0/24
compute-0-ctlplane:
fixed_ips:
- ip_address: 192.168.24.3
subnets:
- cidr: 192.168.24.0/24
network:
tags:
- 192.168.24.0/24
1. The name of the assignment, which follows the format <node_hostname>-<network> where
the <node_hostname> value matches the short host name for the node, and <network>
matches the lowercase name of the network. For example: controller-0-ctlplane for controller-
0.example.com and compute-0-ctlplane for compute-0.example.com.
fixed_ips/ip_address - Defines the fixed IP addresses for the control plane. Use multiple
ip_address parameters in a list to define multiple IP addresses.
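The naming convention can be expressed as a small helper. This is only an illustration of the <node_hostname>-<network> rule described above, not a director API:

```python
def port_map_key(node_fqdn: str, network: str) -> str:
    """Build a DeployedServerPortMap key: <short_hostname>-<lowercase_network>."""
    short_hostname = node_fqdn.split(".")[0]
    return f"{short_hostname}-{network.lower()}"

print(port_map_key("controller-0.example.com", "ctlplane"))  # controller-0-ctlplane
print(port_map_key("compute-0.example.com", "ctlplane"))     # compute-0-ctlplane
```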
A later section in this chapter uses the resulting environment file (ctlplane-assignments.yaml) as part
of the openstack overcloud deploy command.
The overcloud nodes must accommodate the basic network configuration from Section 9.5,
“Configuring networking for the control plane”.
You must enable SSL/TLS on the director for Public API endpoint usage. For more information,
see Section 4.2, “Director configuration parameters” and Chapter 18, Configuring custom
SSL/TLS certificates.
You must define an accessible fully qualified domain name (FQDN) for director. This FQDN
must resolve to a routable IP address for the director. Use the undercloud_public_host
parameter in the undercloud.conf file to set this FQDN.
The examples in this section use IP address assignments that differ from the main scenario:
Controller 0: 192.168.100.2
Compute 0: 192.168.100.3
The following sections provide additional configuration for situations that require a separate network for
overcloud nodes.
IP address assignments
The method for IP assignments is similar to Section 9.5, “Configuring networking for the control plane” .
However, since the Control Plane is not routable from the deployed servers, you must use the
DeployedServerPortMap parameter to assign IP addresses from your chosen overcloud node subnet,
including the virtual IP address to access the Control Plane. The following example is a modified version
of the ctlplane-assignments.yaml environment file from Section 9.5, “Configuring networking for the
control plane” that accommodates this network architecture:
resource_registry:
OS::TripleO::DeployedServer::ControlPlanePort: /usr/share/openstack-tripleo-heat-
templates/deployed-server/deployed-neutron-port.yaml
OS::TripleO::Network::Ports::ControlPlaneVipPort: /usr/share/openstack-tripleo-heat-
templates/deployed-server/deployed-neutron-port.yaml
OS::TripleO::Network::Ports::RedisVipPort: /usr/share/openstack-tripleo-heat-
templates/network/ports/noop.yaml 1
parameter_defaults:
NeutronPublicInterface: eth1
EC2MetadataIp: 192.168.100.1 2
ControlPlaneDefaultRoute: 192.168.100.1
DeployedServerPortMap:
control_virtual_ip:
fixed_ips:
- ip_address: 192.168.100.1
subnets:
- cidr: 24
controller-0-ctlplane:
fixed_ips:
- ip_address: 192.168.100.2
subnets:
- cidr: 24
compute-0-ctlplane:
fixed_ips:
- ip_address: 192.168.100.3
subnets:
- cidr: 24
2 The EC2MetadataIp and ControlPlaneDefaultRoute parameters are set to the value of the
Control Plane virtual IP address. The default NIC configuration templates require these parameters
and you must set them to use a pingable IP address to pass the validations performed during
deployment. Alternatively, customize the NIC configuration so that they do not require these
parameters.
Procedure
1. Create an environment file, for example hostname-map.yaml, and include the HostnameMap
parameter and the hostname mappings. Use the following syntax:
parameter_defaults:
HostnameMap:
[HEAT HOSTNAME]: [ACTUAL HOSTNAME]
[HEAT HOSTNAME]: [ACTUAL HOSTNAME]
The [HEAT HOSTNAME] usually conforms to the following convention: [STACK NAME]-
[ROLE]-[INDEX]:
parameter_defaults:
HostnameMap:
overcloud-controller-0: controller-00-rack01
overcloud-controller-1: controller-01-rack02
overcloud-controller-2: controller-02-rack03
overcloud-novacompute-0: compute-00-rack01
overcloud-novacompute-1: compute-01-rack01
overcloud-novacompute-2: compute-02-rack01
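The left-hand side of each mapping follows the default [STACK NAME]-[ROLE]-[INDEX] convention, which you can also generate programmatically. A sketch, using the role spellings from the example above:

```python
def heat_hostnames(stack: str, role: str, count: int) -> list:
    """Default heat hostnames: <stack>-<role>-<index>, zero-indexed."""
    return [f"{stack}-{role}-{index}" for index in range(count)]

print(heat_hostnames("overcloud", "controller", 3))
print(heat_hostnames("overcloud", "novacompute", 3))
```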
Procedure
1. On the undercloud host, create an environment variable, OVERCLOUD_HOSTS, and set the
variable to a space-separated list of IP addresses of the overcloud hosts that you want to use as
Ceph clients:
$ export OVERCLOUD_HOSTS="192.168.24.2 192.168.24.3"
2. Run the enable-ssh-admin.sh script to configure a user on the overcloud nodes that Ansible
can use to configure Ceph clients:
bash /usr/share/openstack-tripleo-heat-templates/deployed-server/scripts/enable-ssh-
admin.sh
When you run the openstack overcloud deploy command, Ansible configures the hosts that you define
in the OVERCLOUD_HOSTS variable as Ceph clients.
--disable-validations - Use this option to disable basic CLI validations for services not used
with pre-provisioned infrastructure. If you do not disable these validations, the deployment fails.
The following command is an example overcloud deployment command with the environment files
specific to the pre-provisioned architecture:
$ source ~/stackrc
(undercloud) $ openstack overcloud deploy \
[other arguments] \
--disable-validations \
-e /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-environment.yaml \
-e /home/stack/templates/hostname-map.yaml \
--overcloud-ssh-user stack \
--overcloud-ssh-key ~/.ssh/id_rsa \
[OTHER OPTIONS]
The --overcloud-ssh-user and --overcloud-ssh-key options are used to SSH into each overcloud node
during the configuration stage, create an initial tripleo-admin user, and inject an SSH key into
/home/tripleo-admin/.ssh/authorized_keys. To inject the SSH key, specify the credentials for the
initial SSH connection with --overcloud-ssh-user and --overcloud-ssh-key (defaults to ~/.ssh/id_rsa).
To limit exposure to the private key that you specify with the --overcloud-ssh-key option, director
never passes this key to any API service, such as heat or the Workflow service (mistral), and only the
director openstack overcloud deploy command uses this key to enable access for the tripleo-admin
user.
Ansible passed.
Overcloud configuration completed.
Overcloud Endpoint: http://192.168.24.113:5000
Overcloud Horizon Dashboard URL: http://192.168.24.113:80/dashboard
Overcloud rc file: /home/stack/overcloudrc
Overcloud Deployed
This command loads the environment variables that are necessary to interact with your overcloud from
the undercloud CLI. The command prompt changes to indicate this:
(overcloud) $
Each node in the overcloud also contains a heat-admin user. The stack user has SSH access to this user
on each node. To access a node over SSH, find the IP address of the node that you want to access:
Then connect to the node using the heat-admin user and the IP address of the node:
1. Prepare the new pre-provisioned nodes according to Section 9.1, “Pre-provisioned node
requirements”.
2. Scale up the nodes. For more information, see Chapter 15, Scaling overcloud nodes.
3. After you execute the deployment command, wait until the director creates the new node
resources and launches the configuration.
In most scaling operations, you must obtain the UUID value of the node that you want to remove and
pass this value to the openstack overcloud node delete command. To obtain this UUID, list the
resources for the specific role:
Replace <RoleName> with the name of the role that you want to scale down. For example, for the
ComputeDeployedServer role, run the following command:
Use the stack_name column in the command output to identify the UUID associated with each node.
The stack_name includes the integer value of the index of the node in the heat resource group:
+------------------------------------+----------------------------------+
| physical_resource_id | stack_name |
+------------------------------------+----------------------------------+
| 294d4e4d-66a6-4e4e-9a8b- | overcloud-ComputeDeployedServer- |
| 03ec80beda41 | no7yfgnh3z7e-1-ytfqdeclwvcg |
| d8de016d- | overcloud-ComputeDeployedServer- |
| 8ff9-4f29-bc63-21884619abe5 | no7yfgnh3z7e-0-p4vb3meacxwn |
| 8c59f7b1-2675-42a9-ae2c- | overcloud-ComputeDeployedServer- |
| 2de4a066f2a9 | no7yfgnh3z7e-2-mmmaayxqnf3o |
+------------------------------------+----------------------------------+
The indices 0, 1, or 2 in the stack_name column correspond to the node order in the heat resource
group. Pass the corresponding UUID value from the physical_resource_id column to the openstack
overcloud node delete command.
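Because the index is always the second-to-last dash-separated field of the stack_name, you can recover it mechanically. A sketch that relies only on the layout shown in the output above:

```python
def node_index(stack_name: str) -> int:
    """Extract the resource-group index from a heat stack_name value."""
    return int(stack_name.split("-")[-2])

print(node_index("overcloud-ComputeDeployedServer-no7yfgnh3z7e-1-ytfqdeclwvcg"))  # 1
print(node_index("overcloud-ComputeDeployedServer-no7yfgnh3z7e-0-p4vb3meacxwn"))  # 0
```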
After you remove overcloud nodes from the stack, power off these nodes. In a standard deployment, the
bare metal services on the director control this function. However, with pre-provisioned nodes, you must
either manually shut down these nodes or use the power management control for each physical system.
If you do not power off the nodes after removing them from the stack, they might remain operational
and reconnect as part of the overcloud environment.
After you power off the removed nodes, reprovision them to a base operating system configuration so
that they do not unintentionally join the overcloud in the future.
NOTE
Do not attempt to reuse nodes previously removed from the overcloud without first
reprovisioning them with a fresh base operating system. The scale down process only
removes the node from the overcloud stack and does not uninstall any packages.
This concludes the creation of the overcloud using pre-provisioned nodes. For post-creation functions,
see Chapter 11, Performing overcloud post-installation tasks.
IMPORTANT
This feature is available in this release as a Technology Preview, and therefore is not fully
supported by Red Hat. It should only be used for testing, and should not be deployed in a
production environment. For more information about Technology Preview features, see
Scope of Coverage Details.
You can use a single undercloud node to deploy and manage multiple overclouds. Each overcloud is a
unique heat stack that does not share stack resources. This can be useful for environments where a 1:1
ratio of underclouds to overclouds creates an unmanageable amount of overhead, for example, edge,
multi-site, and multi-product environments.
The overcloud environments in the multi-overcloud scenario are completely separate, and you can use
the source command to switch between the environments. If you use Ironic for bare metal provisioning,
all overclouds must be on the same provisioning network. If it is not possible to use the same
provisioning network, you can use the deployed servers method to deploy multiple overclouds with
routed networks. In this scenario, you must ensure that the value in the HostnameMap parameter
matches the stack name for each overcloud.
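For example, a hypothetical HostnameMap fragment for a stack named overcloud-two might look like the following. The mapped hostnames on the right are placeholders; the important point is that the keys begin with the stack name:

```yaml
parameter_defaults:
  HostnameMap:
    overcloud-two-controller-0: overcloud-two-controller-prod-0
    overcloud-two-novacompute-0: overcloud-two-compute-prod-0
```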
Prerequisites
Before you begin to deploy additional overclouds, ensure that your environment contains the following
configurations:
Custom networks for additional overclouds so that each overcloud has a unique network in the
resulting stack.
Procedure
1. Create a new directory for the additional overcloud that you want to deploy:
$ mkdir ~/overcloud-two
2. In the new directory, create new environment files specific to the requirements of the additional
overcloud, and copy any relevant environment files from the existing overcloud:
$ cp network-data.yaml ~/overcloud-two/network-data.yaml
$ cp network-environment.yaml ~/overcloud-two/network-environment.yaml
3. Modify the environment files according to the specification of the new overcloud. For example,
the existing overcloud has the name overcloud-one and uses the VLANs that you define in the
network-data.yaml environment file:
- name: InternalApi
name_lower: internal_api_cloud_1
service_net_map_replace: internal_api
vip: true
vlan: 20
ip_subnet: '172.17.0.0/24'
allocation_pools: [{'start': '172.17.0.4', 'end': '172.17.0.250'}]
ipv6_subnet: 'fd00:fd00:fd00:2000::/64'
ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:2000::10', 'end':
'fd00:fd00:fd00:2000:ffff:ffff:ffff:fffe'}]
mtu: 1500
- name: Storage
...
The new overcloud has the name overcloud-two and uses different VLANs. Edit the
~/overcloud-two/network-data.yaml environment file and include the new VLAN IDs for each
subnet. You must also define a unique name_lower value, and set the
service_net_map_replace attribute to the name of the network that you want to replace:
- name: InternalApi
name_lower: internal_api_cloud_2
service_net_map_replace: internal_api
vip: true
vlan: 21
ip_subnet: '172.21.0.0/24'
allocation_pools: [{'start': '172.21.0.4', 'end': '172.21.0.250'}]
ipv6_subnet: 'fd00:fd00:fd00:2001::/64'
ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:2001::10', 'end':
'fd00:fd00:fd00:2001:ffff:ffff:ffff:fffe'}]
mtu: 1500
- name: Storage
...
4. Set the ExternalInterfaceDefaultRoute parameter to the IP address of the gateway for the
external network so that the overcloud has external access, and set the DnsServers parameter
to the IP address of your DNS server so that the overcloud can reach the DNS server:
parameter_defaults:
...
ExternalNetValueSpecs: {'provider:physical_network': 'external_2',
'provider:network_type': 'flat'}
ExternalInterfaceDefaultRoute: 10.0.10.1
DnsServers:
- 10.0.10.2
...
5. Run the openstack overcloud deploy command. Specify the core heat template collection
with the --templates option, a new stack name with the --stack option, and any new
environment files from the ~/overcloud-two directory:
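The deployment command itself is not shown here. Based on the options that this step describes, it presumably resembles the following; the file names are the examples from this procedure, and the -n (networks file) flag is an assumption about the client of this release:

```
$ source ~/stackrc
(undercloud) $ openstack overcloud deploy --templates \
  --stack overcloud-two \
  -n ~/overcloud-two/network-data.yaml \
  -e ~/overcloud-two/network-environment.yaml
```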
Each overcloud has a unique credential file. In this example, the deployment process creates overcloud-
onerc for overcloud-one, and overcloud-tworc for overcloud-two. To interact with either overcloud,
you must source the appropriate credential file. For example, to source the credential for the first
overcloud, run the following command:
$ source overcloud-onerc
For ease of management when you deploy or maintain multiple overclouds, create separate
directories of environment files specific to each cloud. When you run the deploy command for each
cloud, include the core heat templates together with the cloud-specific environment files that you
create separately. For example, create the following directories for the undercloud and two overclouds:
~stack/undercloud
Contains the environment files specific to the undercloud.
~stack/overcloud-one
Contains the environment files specific to the first overcloud.
~stack/overcloud-two
Contains the environment files specific to the second overcloud.
When you deploy or redeploy overcloud-one or overcloud-two, include the core heat templates in the
deploy command with the --templates option, and then specify any additional environment files from
the cloud-specific environment file directories.
Alternatively, create a repository in a version control system and use branches for each deployment. For
more information, see the Using Customized Core Heat Templates section of the Advanced Overcloud
Customization guide.
Use the following command to view a list of overcloud plans that are available:
Use the following command to view a list of overclouds that are currently deployed:
CHAPTER 11. PERFORMING OVERCLOUD POST-INSTALLATION TASKS
Procedure
$ source ~/stackrc
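The command that produces the following status table is not shown here; it is presumably the openstack overcloud status command:

```
(undercloud) $ openstack overcloud status
```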
+-----------+---------------------+---------------------+-------------------+
| Plan Name | Created | Updated | Deployment Status |
+-----------+---------------------+---------------------+-------------------+
| overcloud | 2018-05-03 21:24:50 | 2018-05-03 21:27:59 | DEPLOY_SUCCESS |
+-----------+---------------------+---------------------+-------------------+
If your overcloud uses a different name, use the --plan argument to select an overcloud with a
different name:
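For example, assuming a stack named my-deployment:

```
(undercloud) $ openstack overcloud status --plan my-deployment
```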
Procedure
$ source ~/overcloudrc
2. Run the openstack flavor create command to create a flavor. Use the following options to
specify the hardware requirements for each flavor:
--disk
Defines the hard disk space for a virtual machine volume.
--ram
Defines the RAM required for a virtual machine.
--vcpus
Defines the quantity of virtual CPUs for a virtual machine.
NOTE
Use $ openstack flavor create --help to learn more about the openstack flavor create
command.
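For example, a hypothetical flavor with a 40 GB disk, 4 GB of RAM (the --ram value is in megabytes), and 2 vCPUs might be created as follows:

```
(overcloud) $ openstack flavor create --disk 40 --ram 4096 --vcpus 2 m1.custom
```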
Procedure
$ source ~/overcloudrc
These commands create a basic Networking service (neutron) network named default. The overcloud
automatically assigns IP addresses from this network to virtual machines using an internal DHCP
mechanism.
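The commands that this section refers to are not shown here. A minimal sketch, assuming a hypothetical internal subnet range of 172.20.1.0/24:

```
$ source ~/overcloudrc
(overcloud) $ openstack network create default
(overcloud) $ openstack subnet create default --network default \
  --gateway 172.20.1.1 --subnet-range 172.20.1.0/24
```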
This procedure contains two examples. Use the example that best suits your environment:
Both of these examples involve creating a network with the name public. The overcloud requires this
specific name for the default floating IP pool. This name is also important for the validation tests in
Section 11.7, “Validating the overcloud”.
By default, OpenStack Networking (neutron) maps a physical network name called datacentre to
the br-ex bridge on your host nodes. You connect the public overcloud network to the physical
datacentre network, which provides a gateway through the br-ex bridge.
Prerequisites
Procedure
$ source ~/overcloudrc
Use the --provider-segment option to define the VLAN that you want to use. In this
example, the VLAN is 201.
3. Create a subnet with an allocation pool for floating IP addresses. In this example, the IP range is
10.1.1.51 to 10.1.1.250:
Ensure that this range does not conflict with other IP addresses in your external network.
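The network and subnet commands for this example are missing from the text above. Based on the analogous commands later in this chapter, they likely resemble the following; VLAN 201 and the 10.1.1.51 to 10.1.1.250 range come from this procedure, while the gateway address and subnet range are assumptions:

```
(overcloud) $ openstack network create public --external \
  --provider-network-type vlan --provider-physical-network datacentre \
  --provider-segment 201
(overcloud) $ openstack subnet create public --network public --no-dhcp \
  --allocation-pool start=10.1.1.51,end=10.1.1.250 \
  --gateway 10.1.1.1 --subnet-range 10.1.1.0/24
```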
A provider network is another type of external network connection that routes traffic from private
tenant networks to an external infrastructure network. The provider network is similar to a floating IP
network, but the provider network uses a logical router to connect private networks to the provider
network.
This procedure contains two examples. Use the example that best suits your environment:
By default, OpenStack Networking (neutron) maps a physical network name called datacentre to
the br-ex bridge on your host nodes. You connect the public overcloud network to the physical
datacentre network, which provides a gateway through the br-ex bridge.
Procedure
$ source ~/overcloudrc
Use the --provider-segment option to define the VLAN that you want to use. In this
example, the VLAN is 201.
These example commands create a shared network. It is also possible to specify a tenant instead
of specifying --share so that only the tenant has access to the new network.
If you mark a provider network as external, only the operator may create ports on that
network.
4. Create a router so that other networks can route traffic through the provider network:
5. Set the external gateway for the router to the provider network:
6. Attach other networks to this router. For example, run the following command to attach a
subnet subnet1 to the router:
This command adds subnet1 to the routing table and allows traffic from virtual machines using
subnet1 to route to the provider network.
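The commands for steps 2 to 6 are not shown above. A hedged sketch, reusing VLAN 201 from this procedure with illustrative addresses and names:

```
(overcloud) $ openstack network create provider --external --share \
  --provider-network-type vlan --provider-physical-network datacentre \
  --provider-segment 201
(overcloud) $ openstack subnet create provider-subnet --network provider --dhcp \
  --allocation-pool start=10.9.101.50,end=10.9.101.100 \
  --gateway 10.9.101.254 --subnet-range 10.9.101.0/24
(overcloud) $ openstack router create external
(overcloud) $ openstack router set --external-gateway provider external
(overcloud) $ openstack router add subnet external subnet1
```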
Map the additional bridge during deployment. For example, to map a new bridge called br-
floating to the floating physical network, include the NeutronBridgeMappings parameter in an
environment file:
parameter_defaults:
NeutronBridgeMappings: "datacentre:br-ex,floating:br-floating"
With this method, you can create separate external networks after creating the overcloud. For example,
to create a floating IP network that maps to the floating physical network, run the following commands:
$ source ~/overcloudrc
(overcloud) $ openstack network create public --external \
  --provider-physical-network floating --provider-network-type vlan \
  --provider-segment 105
(overcloud) $ openstack subnet create public --network public --dhcp \
  --allocation-pool start=10.1.2.51,end=10.1.2.250 --gateway 10.1.2.1 \
  --subnet-range 10.1.2.0/24
The Integration Test Suite requires a few post-installation steps to ensure successful tests.
Procedure
1. If you run this test from the undercloud, ensure that the undercloud host has access to the
Internal API network on the overcloud. For example, add a temporary VLAN on the undercloud
host to access the Internal API network (ID: 201) using the 172.16.0.201/24 address:
$ source ~/stackrc
(undercloud) $ sudo ovs-vsctl add-port br-ctlplane vlan201 tag=201 -- set interface vlan201 type=internal
(undercloud) $ sudo ip l set dev vlan201 up; sudo ip addr add 172.16.0.201/24 dev vlan201
2. Before you run the OpenStack Integration Test Suite, ensure that the heat_stack_owner role
exists in your overcloud:
$ source ~/overcloudrc
(overcloud) $ openstack role list
+----------------------------------+------------------+
| ID | Name |
+----------------------------------+------------------+
| 6226a517204846d1a26d15aae1af208f | swiftoperator |
| 7c7eb03955e545dd86bbfeb73692738b | heat_stack_owner |
+----------------------------------+------------------+
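The list jumps from step 2 to step 4; the missing step presumably creates the role when it does not already exist, for example:

```
(overcloud) $ openstack role create heat_stack_owner
```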
4. Run the integration tests as described in the OpenStack Integration Test Suite Guide .
5. After completing the validation, remove any temporary connections to the overcloud Internal
API. In this example, use the following commands to remove the previously created VLAN on
the undercloud:
$ source ~/stackrc
(undercloud) $ sudo ovs-vsctl del-port vlan201
{"stacks:delete": "rule:deny_everybody"}
This prevents removal of the overcloud with the heat client. To allow removal of the overcloud, delete
the custom policy and save /var/lib/config-data/puppet-generated/heat/etc/heat/policy.json.
CHAPTER 12. PERFORMING BASIC OVERCLOUD ADMINISTRATION TASKS
$ sudo podman ps
To include stopped or failed containers in the command output, add the --all option to the command:
NOTE
It is not recommended to use the Podman CLI to stop, start, and restart containers
because Systemd applies a restart policy. Use Systemd service commands instead.
Because no daemon monitors the container status, Systemd automatically restarts most containers in
these situations:
Unclean exit code, such as the podman container crashing after a start.
Unclean signals.
For more information about Systemd services, see the systemd.service documentation.
NOTE
Any changes to the service configuration files within the container revert after restarting
the container. This is because the container regenerates the service configuration based
on files on the local file system of the node in /var/lib/config-data/puppet-generated/.
For example, if you edit /etc/keystone/keystone.conf within the keystone container and
restart the container, the container regenerates the configuration using /var/lib/config-
data/puppet-generated/keystone/etc/keystone/keystone.conf on the local file system
of the node, which overwrites any changes that were made within the container
before the restart.
To list all OpenStack Platform container timers, run the systemctl list-timers command and limit the
output to lines containing tripleo:
tripleo_keystone_healthcheck.timer tripleo_keystone_healthcheck.service
Mon 2019-02-18 20:18:35 UTC 6s left Mon 2019-02-18 20:17:13 UTC 1min 15s ago
tripleo_memcached_healthcheck.timer tripleo_memcached_healthcheck.service
(...)
To check the status of a specific container timer, run the systemctl status command for the
healthcheck service:
To stop, start, restart, and show the status of a container timer, run the relevant systemctl command
against the .timer Systemd resource. For example, to check the status of the
tripleo_keystone_healthcheck.timer resource, run the following command:
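For example (the keystone timer name is taken from the listing above):

```
$ sudo systemctl status tripleo_keystone_healthcheck.timer
```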
If the healthcheck service is disabled but the timer for that service is present and enabled, it means that
the check is currently timed out but will run according to the timer. You can also start the check manually.
NOTE
The podman ps command does not show the container health status.
Paunch and the container-puppet.py script configure podman containers to push their outputs to the
/var/log/containers/stdout directory, which creates a collection of all logs, even for the deleted
containers, such as container-puppet-* containers.
The host also applies log rotation to this directory, which prevents huge files and disk space issues.
If a container is replaced, the new container outputs to the same log file, because podman uses
the container name rather than the container ID.
You can also check the logs for a containerized service with the podman logs command. For example,
to view the logs for the keystone container, run the following command:
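The command itself is missing here; it is presumably:

```
$ sudo podman logs keystone
```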
Accessing containers
To enter the shell for a containerized service, use the podman exec command to launch /bin/bash. For
example, to enter the shell for the keystone container, run the following command:
To enter the shell for the keystone container as the root user, run the following command:
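The exec commands are not shown above. They presumably take the following form; the --user 0 flag for the root variant is an assumption:

```
$ sudo podman exec -it keystone /bin/bash
$ sudo podman exec --user 0 -it keystone /bin/bash
```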
# exit
$ source ~/stackrc
(undercloud) $ openstack overcloud deploy --templates \
-e ~/templates/node-info.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e ~/templates/network-environment.yaml \
-e ~/templates/storage-environment.yaml \
--ntp-server pool.ntp.org
Director checks the overcloud stack in heat, and then updates each item in the stack with the
environment files and heat templates. Director does not recreate the overcloud, but rather changes the
existing overcloud.
IMPORTANT
Removing parameters from custom environment files does not revert the parameter
value to the default configuration. You must identify the default value from the core heat
template collection in /usr/share/openstack-tripleo-heat-templates and set the value in
your custom environment file manually.
If you want to include a new environment file, add it to the openstack overcloud deploy command with
the -e option. For example:
$ source ~/stackrc
(undercloud) $ openstack overcloud deploy --templates \
-e ~/templates/new-environment.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e ~/templates/network-environment.yaml \
-e ~/templates/storage-environment.yaml \
-e ~/templates/node-info.yaml \
--ntp-server pool.ntp.org
This command includes the new parameters and resources from the environment file into the stack.
Procedure
1. On the existing OpenStack environment, create a new image by taking a snapshot of a running
server and download the image:
$ source ~/overcloudrc
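The snapshot and download commands are missing above. A sketch using standard openstack client commands, with placeholder instance and image names:

```
(overcloud) $ openstack server image create --name instance_snapshot INSTANCE_NAME
(overcloud) $ openstack image save --file instance_snapshot.qcow2 instance_snapshot
```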
IMPORTANT
These commands copy each virtual machine disk from the existing OpenStack
environment to the new Red Hat OpenStack Platform. QCOW snapshots lose their
original layering system.
This process migrates all instances from a Compute node. You can now perform maintenance on the
node without any instance downtime. To return the Compute node to an enabled state, run the
following command:
$ source ~/overcloudrc
(overcloud) $ openstack compute service set [hostname] nova-compute --enable
Procedure
$ source ~/stackrc
(undercloud) $ tripleo-ansible-inventory --list
Use the --list option to return details about all hosts. This command outputs the dynamic
inventory in a JSON format:
2. To execute Ansible playbooks on your environment, run the ansible command and include the
full path of the dynamic inventory tool using the -i option. For example:
Replace [HOSTS] with the type of hosts that you want to use:
overcloud for all overcloud child nodes, for example, Controller and Compute nodes
Use the -u [USER] option to change the SSH user that executes the Ansible
automation. The default SSH user for the overcloud is automatically defined using the
ansible_ssh_user parameter in the dynamic inventory. The -u option overrides this
parameter.
Use the -m [MODULE] option to use a specific Ansible module. The default is
command, which executes Linux commands.
Use the -a [MODULE_ARGS] option to define arguments for the chosen module.
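An example invocation, assuming the dynamic inventory tool is installed at /bin/tripleo-ansible-inventory:

```
(undercloud) $ ansible [HOSTS] -i /bin/tripleo-ansible-inventory -m command -a "date"
```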
IMPORTANT
Custom Ansible automation on the overcloud is not part of the standard overcloud stack.
Subsequent execution of the openstack overcloud deploy command might override
Ansible-based configuration for OpenStack Platform services on overcloud nodes.
$ source ~/stackrc
(undercloud) $ openstack overcloud delete overcloud
2. Confirm that the overcloud is no longer present in the output of the openstack stack list
command:
3. When the deletion completes, follow the standard steps in the deployment scenarios to recreate
your overcloud.
Although director generates the Ansible playbooks automatically, it is a good idea to familiarize yourself
with Ansible syntax. For more information about using Ansible, see https://docs.ansible.com/.
NOTE
Ansible also uses the concept of roles, which are different to OpenStack Platform
director roles. Ansible roles form reusable components of playbooks, whereas director
roles contain mappings of OpenStack services to node types.
As a result, when you run the openstack overcloud deploy command, the following process occurs:
Director uses heat to interpret the deployment plan and create the overcloud stack and all
descendant resources. This includes provisioning nodes with the OpenStack Bare Metal service
(ironic).
Heat also creates the software configuration from the deployment plan. Director compiles the
Ansible playbooks from this software configuration.
Director generates a temporary user (tripleo-admin) on the overcloud nodes specifically for
Ansible SSH access.
Director downloads the heat software configuration and generates a set of Ansible playbooks
using heat outputs.
Director applies the Ansible playbooks to the overcloud nodes using ansible-playbook.
The working directory contains a set of sub-directories named after each overcloud role. These sub-
directories contain all tasks relevant to the configuration of the nodes in the overcloud role. These sub-
directories also contain additional sub-directories named after each specific node. These sub-
directories contain node-specific variables to apply to the overcloud role tasks. As a result, the
overcloud roles within the working directory use the following structure:
─ /var/lib/mistral/overcloud
|
├── Controller
│   ├── overcloud-controller-0
│   ├── overcloud-controller-1
│   └── overcloud-controller-2
├── Compute
│   ├── overcloud-compute-0
│   ├── overcloud-compute-1
│   └── overcloud-compute-2
...
Each working directory is a local Git repository that records changes after each deployment operation.
Use the local Git repositories to track configuration changes between each deployment.
Procedure
1. Use the setfacl command to grant the stack user on the undercloud access to the files in the
/var/lib/mistral directory:
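The setfacl command itself is not shown here; it presumably resembles:

```
$ sudo setfacl -R -m u:stack:rwx /var/lib/mistral
```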
Procedure
1. View the log with the less command within the config-download working directory. The
following example uses the overcloud working directory:
$ less /var/lib/mistral/overcloud/ansible.log
Procedure
$ source ~/stackrc
2. Run the deployment command with the --stack-only option. Include any environment files
required for your overcloud:
4. Enable SSH access from the undercloud to the overcloud for the tripleo-admin user. The
config-download process uses the tripleo-admin user to perform the Ansible-based
configuration:
5. Run the deployment command with the --config-download-only option. Include any
environment files required for your overcloud:
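The deployment commands for these steps are not shown. Hedged sketches, with [ENV_FILES] as a placeholder for your -e environment file arguments:

```
(undercloud) $ openstack overcloud deploy --templates --stack-only [ENV_FILES]
(undercloud) $ openstack overcloud admin authorize
(undercloud) $ openstack overcloud deploy --templates --config-download-only [ENV_FILES]
```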
Procedure
$ cd /var/lib/mistral/overcloud/
$ ./ansible-playbook-command.sh
You can pass additional Ansible arguments to this script, which are then passed unchanged to
the ansible-playbook command. This means that you can use other Ansible features, such as
check mode (--check), limiting hosts (--limit), or overriding variables (-e). For example:
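For example, to limit the run to Controller nodes:

```
$ ./ansible-playbook-command.sh --limit Controller
```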
3. The working directory contains a playbook called deploy_steps_playbook.yaml, which runs the
overcloud configuration. To view this playbook, run the following command:
$ less deploy_steps_playbook.yaml
The playbook uses various task files contained in the working directory. Some task files are
common to all OpenStack Platform roles and some are specific to certain OpenStack Platform
roles and servers.
4. The working directory also contains sub-directories that correspond to each role that you define
in your overcloud roles_data file. For example:
$ ls Controller/
Each OpenStack Platform role directory also contains sub-directories for individual servers of
that role type. The directories use the composable role hostname format:
$ ls Controller/overcloud-controller-0
5. The Ansible tasks are tagged. To see the full list of tags, use the CLI argument --list-tags for
ansible-playbook:
Then apply tagged configuration using the --tags, --skip-tags, or --start-at-task option with the
ansible-playbook-command.sh script:
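For example (the inventory file name is the one listed later in this chapter):

```
$ ansible-playbook -i tripleo-ansible-inventory.yaml --list-tags deploy_steps_playbook.yaml
$ ./ansible-playbook-command.sh --tags overcloud
```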
6. When config-download configures Ceph, Ansible executes ceph-ansible from within the
config-download external_deploy_steps_tasks playbook. When you run config-download
manually, the second Ansible execution does not inherit the ssh_args argument. To pass
Ansible environment variables to this execution, use a heat environment file. For example:
parameter_defaults:
CephAnsibleEnvironmentVariables:
ANSIBLE_HOST_KEY_CHECKING: 'False'
ANSIBLE_PRIVATE_KEY_FILE: '/home/stack/.ssh/id_rsa'
WARNING
Be aware of the limitations of the working directory. For example, if you use Git to revert to a previous
version of the config-download working directory, this action affects only the configuration in the
working directory. It does not affect the following configurations:
The overcloud data schema: Applying a previous version of the working directory software
configuration does not undo data migration and schema changes.
The hardware layout of the overcloud: Reverting to a previous software configuration does not
undo changes related to overcloud hardware, such as scaling up or down.
The heat stack: Reverting to earlier revisions of the working directory has no effect on the
configuration stored in the heat stack. The heat stack creates a new version of the software
configuration that applies to the overcloud. To make permanent changes to the overcloud,
modify the environment files applied to the overcloud stack before you rerun the openstack
overcloud deploy command.
Complete the following steps to compare different commits of the config-download working directory.
Procedure
1. Change to the config-download working directory for your overcloud. In this example, the
working directory is for the overcloud named overcloud:
$ cd /var/lib/mistral/overcloud
2. Run the git log command to list the commits in your working directory. You can also format the
log output to show the date:
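For example, to show each commit hash with its date and subject:

```
$ git log --format=format:"%h%x09%cd%x09%s"
```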
3. Run the git diff command against two commit hashes to see all changes between the
deployments:
Procedure
--name is the name of the overcloud that you want to use for the Ansible file export.
--config-dir is the location where you want to save the config-download files.
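The export command that these options belong to is not shown; it is presumably openstack overcloud config download:

```
$ openstack overcloud config download --name overcloud --config-dir ~/config-download
```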
$ cd ~/config-download
$ tripleo-ansible-inventory \
--ansible_ssh_user heat-admin \
--static-yaml-inventory inventory.yaml
Use the config-download files and the static inventory file to perform a configuration. To execute the
deployment playbook, run the ansible-playbook command:
$ ansible-playbook \
-i inventory.yaml \
--private-key ~/.ssh/id_rsa \
--become \
~/config-download/deploy_steps_playbook.yaml
To generate an overcloudrc file manually from this configuration, run the following command:
ansible.cfg
Configuration file used when running ansible-playbook.
ansible.log
Log file from the last run of ansible-playbook.
ansible-errors.json
JSON structured file that contains any deployment errors.
ansible-playbook-command.sh
Executable script to rerun the ansible-playbook command from the last deployment operation.
ssh_private_key
Private SSH key that Ansible uses to access the overcloud nodes.
tripleo-ansible-inventory.yaml
Ansible inventory file that contains hosts and variables for all the overcloud nodes.
overcloud-config.tar.gz
Archive of the working directory.
Playbooks
The following files are playbooks within the config-download working directory.
deploy_steps_playbook.yaml
Main deployment steps. This playbook performs the main configuration operations for your
overcloud.
pre_upgrade_rolling_steps_playbook.yaml
Pre-upgrade steps for a major upgrade.
upgrade_steps_playbook.yaml
Major upgrade steps.
post_upgrade_steps_playbook.yaml
Post-upgrade steps for a major upgrade.
update_steps_playbook.yaml
Minor update steps.
fast_forward_upgrade_playbook.yaml
Fast forward upgrade tasks. Use this playbook only when you want to upgrade from one long-life
version of Red Hat OpenStack Platform to the next.
facts
Fact gathering operations.
common_roles
Ansible roles common to all nodes.
overcloud
All plays for overcloud deployment.
pre_deploy_steps
Deployments that happen before the deploy_steps operations.
host_prep_steps
Host preparation steps.
deploy_steps
Deployment steps.
post_deploy_steps
Steps that happen after the deploy_steps operations.
external
All external deployment tasks.
external_deploy_steps
External deployment tasks that run on the undercloud only.
This section contains a summary of the different Ansible plays used within this playbook. The play names
in this section are the same names that are used within the playbook and that are displayed in the
ansible-playbook output. This section also contains information about the Ansible tags that are set on
each play.
Server deployments
Applies server-specific heat deployments for configuration such as networking and hieradata.
Includes NetworkDeployment, <Role>Deployment, <Role>AllNodesDeployment, etc.
Tags: overcloud, pre_deploy_steps
CHAPTER 14. USING THE VALIDATION FRAMEWORK
IMPORTANT
This feature is available in this release as a Technology Preview, and therefore is not fully
supported by Red Hat. It should only be used for testing, and should not be deployed in a
production environment. For more information about Technology Preview features, see
Scope of Coverage Details.
Red Hat OpenStack Platform includes a validation framework that you can use to verify the
requirements and functionality of the undercloud and overcloud. The framework includes two types of
validations:
Manual Ansible-based validations, which you execute through the openstack tripleo validator
command set.
Automatic in-flight validations, which execute as part of the deployment process.
no-op
Validations that run a no-op (no operation) task to verify that the workflow functions correctly. These
validations run on both the undercloud and overcloud.
prep
Validations that check the hardware configuration of the undercloud node. Run these validations
before you run the openstack undercloud install command.
openshift-on-openstack
Validations that check that the environment meets the requirements to be able to deploy OpenShift
on OpenStack.
pre-introspection
Validations to run before node introspection using Ironic Inspector.
pre-deployment
Validations to run before the openstack overcloud deploy command.
post-deployment
Validations to run after the overcloud deployment has finished.
pre-upgrade
Validations to validate your OpenStack deployment before an upgrade.
post-upgrade
Validations to validate your OpenStack deployment after an upgrade.
Procedure
$ source ~/stackrc
To list validations in a group, run the command with the --group option:
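The listing commands are not shown above; they presumably take this form:

```
$ openstack tripleo validator list
$ openstack tripleo validator list --group prep
```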
NOTE
For a full list of options, run openstack tripleo validator list --help.
Procedure
$ source ~/stackrc
To run a single validation, enter the command with the --validation option and the name of
the validation. For example, to check the undercloud memory requirements, enter --
validation undercloud-ram:
To run all validations in a group, enter the command with the --group option:
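The run commands are not shown above; presumably:

```
$ openstack tripleo validator run --validation undercloud-ram
$ openstack tripleo validator run --group prep
```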
In-flight validations run automatically as part of the deployment process. Some in-flight validations also
use the roles from the openstack-tripleo-validations package.
CHAPTER 15. SCALING OVERCLOUD NODES
WARNING
Do not use openstack server delete to remove nodes from the overcloud. Follow
the procedures in this section to remove and replace nodes correctly.
If you want to add or remove nodes after the creation of the overcloud, you must update the overcloud.
Use the following table to determine support for scaling each node type:
Node type | Scale up | Scale down
Compute   | Y        | Y
IMPORTANT
Ensure that you have at least 10 GB free space before you scale the overcloud. This free
space accommodates image conversion and caching during the node provisioning
process.
Procedure
1. Create a new JSON file (newnodes.json) that contains details of the new node that you want
to register:
{
"nodes":[
{
"mac":[
"dd:dd:dd:dd:dd:dd"
],
"cpu":"4",
"memory":"6144",
"disk":"40",
"arch":"x86_64",
"pm_type":"ipmi",
"pm_user":"admin",
"pm_password":"p@55w0rd!",
"pm_addr":"192.168.24.207"
},
{
"mac":[
"ee:ee:ee:ee:ee:ee"
],
"cpu":"4",
"memory":"6144",
"disk":"40",
"arch":"x86_64",
"pm_type":"ipmi",
"pm_user":"admin",
"pm_password":"p@55w0rd!",
"pm_addr":"192.168.24.208"
}
]
}
$ source ~/stackrc
(undercloud) $ openstack overcloud node import newnodes.json
3. After you register the new nodes, run the following commands to launch the introspection
process for each new node:
This process detects and benchmarks the hardware properties of the nodes.
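A typical command sequence for the new nodes, assuming the standard manageable-state workflow used elsewhere in this guide, is:

```shell
(undercloud) $ openstack baremetal node manage <node-uuid>
(undercloud) $ openstack overcloud node introspect --all-manageable --provide
```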
Procedure
1. Tag each new node with the role you want. For example, to tag a node with the Compute role,
run the following command:
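A common way to tag a node with the Compute role is to set a profile capability; the capability string shown here is a sketch based on the standard profile-matching workflow, and the node UUID is a placeholder:

```shell
(undercloud) $ openstack baremetal node set <node-uuid> \
    --property capabilities='profile:compute,boot_option:local'
```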
2. To scale the overcloud, you must edit the environment file that contains your node counts and
re-deploy the overcloud. For example, to scale your overcloud to 5 Compute nodes, edit the
ComputeCount parameter:
parameter_defaults:
...
ComputeCount: 5
...
3. Rerun the deployment command with the updated file, which in this example is called node-
info.yaml:
Ensure that you include all environment files and options from your initial overcloud creation.
This includes the same scale parameters for non-Compute nodes.
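For example, with a deployment that uses node-info.yaml, the rerun might look like the following; replace the bracketed placeholder with the environment files from your initial deployment:

```shell
$ source ~/stackrc
(undercloud) $ openstack overcloud deploy --templates \
    -e node-info.yaml \
    [OTHER_ENVIRONMENT_FILES]
```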
IMPORTANT
Before you remove a Compute node from the overcloud, migrate the workload from the
node to other Compute nodes. For more information, see Migrating virtual machine
instances between Compute nodes.
Prerequisites
Procedure
$ source ~/overcloudrc
2. Disable the Compute service on the outgoing node on the overcloud to prevent the node from
scheduling new instances:
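For example, using the service commands shown later in this guide, disabling the node might look like the following; the hostname is a placeholder:

```shell
(overcloud) $ openstack compute service list
(overcloud) $ openstack compute service set <hostname> nova-compute --disable
```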
TIP
Use the --disable-reason option to add a short explanation on why the service is being
disabled. This is useful if you intend to redeploy the Compute service at a later point.
5. Identify the UUIDs or hostnames of the nodes that you want to delete:
6. Redeploy the overcloud with the --update-plan-only option, including all of the environment
files that are relevant to your deployment:
IMPORTANT
Do not use a mix of UUIDs and hostnames. Use either only UUIDs or only
hostnames.
8. Ensure that the openstack overcloud node delete command runs to completion:
The status of the overcloud stack shows UPDATE_COMPLETE when the delete operation is
complete.
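For example, assuming the default stack name overcloud, the deletion and the status check might look like the following; the node name is a placeholder:

```shell
(undercloud) $ openstack overcloud node delete --stack overcloud <node-or-uuid>
(undercloud) $ openstack stack list
```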
IMPORTANT
If you intend to redeploy the Compute service with the same host name, you
must use the existing service records for the redeployed node. If this is the case,
skip the remaining steps in this procedure, and proceed with the instructions
detailed in Redeploying the Compute service using the same host name .
11. Remove the deleted Compute service as a resource provider from the Placement service:
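For example, using the placement plugin for the openstack client, you can look up and then delete the resource provider; the UUID is a placeholder:

```shell
(undercloud) $ openstack resource provider list
(undercloud) $ openstack resource provider delete <provider-uuid>
```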
12. Decrease the ComputeCount parameter in the environment file that contains your node
counts. This file is usually named node-info.yaml. For example, decrease the node count from
five nodes to three nodes if you removed two nodes:
parameter_defaults:
...
ComputeCount: 3
...
Decreasing the node count ensures that director does not provision any new nodes when you run
openstack overcloud deploy.
You can remove the node from the overcloud and re-provision it for other purposes.
Procedure
1. Remove the deleted Compute service as a resource provider from the Placement service:
3. When the service state of the redeployed Compute node changes to up, re-enable the service:
Procedure
1. Increase the Object Storage count using the ObjectStorageCount parameter. This parameter is
usually located in node-info.yaml, which is the environment file that contains your node counts:
parameter_defaults:
ObjectStorageCount: 4
The ObjectStorageCount parameter defines the quantity of Object Storage nodes in your
environment. In this example, scale the quantity of Object Storage nodes from 3 to 4.
$ source ~/stackrc
(undercloud) $ openstack overcloud deploy --templates -e node-info.yaml
ENVIRONMENT_FILES
3. After the deployment command completes, the overcloud contains an additional Object Storage
node.
4. Replicate data to the new node. Before you remove a node, in this case, overcloud-
objectstorage-1, wait for a replication pass to finish on the new node. Check the replication
pass progress in the /var/log/swift/swift.log file. When the pass finishes, the Object Storage
service should log entries similar to the following example:
5. To remove the old node from the ring, reduce the ObjectStorageCount parameter to omit the
old node. In this example, reduce the ObjectStorageCount parameter to 3:
parameter_defaults:
ObjectStorageCount: 3
6. Create a new environment file named remove-object-node.yaml. This file identifies and
removes the specified Object Storage node. The following content specifies the removal of
overcloud-objectstorage-1:
parameter_defaults:
ObjectStorageRemovalPolicies:
[{'resource_list': ['1']}]
Director deletes the Object Storage node from the overcloud and updates the rest of the nodes on the
overcloud to accommodate the node removal.
IMPORTANT
Include all environment files and options from your initial overcloud creation. This includes
the same scale parameters for non-Compute nodes.
parameter_defaults:
DeploymentServerBlacklist:
- overcloud-compute-0
- overcloud-compute-1
- overcloud-compute-2
NOTE
The server names in the parameter value are the names according to OpenStack
Orchestration (heat), not the actual server hostnames.
Include this environment file with your openstack overcloud deploy command:
$ source ~/stackrc
(undercloud) $ openstack overcloud deploy --templates \
-e server-blacklist.yaml \
[OTHER OPTIONS]
Heat blacklists any servers in the list from receiving updated heat deployments. After the stack
operation completes, any blacklisted servers remain unchanged. You can also power off or stop the os-
collect-config agents during the operation.
WARNING
Exercise caution when you blacklist nodes. Only use a blacklist if you fully
understand how to apply the requested change with a blacklist in effect. It is
possible to create a hung stack or configure the overcloud incorrectly when
you use the blacklist feature. For example, if cluster configuration changes
apply to all members of a Pacemaker cluster, blacklisting a Pacemaker
cluster member during this change can cause the cluster to fail.
When you add servers to the blacklist, further changes to those nodes are
not supported until you remove the server from the blacklist. This includes
updates, upgrades, scale up, scale down, and node replacement. For
example, when you blacklist existing Compute nodes while scaling out the
overcloud with new Compute nodes, the blacklisted nodes miss the
information added to /etc/hosts and /etc/ssh/ssh_known_hosts. This can
cause live migration to fail, depending on the destination host. The
Compute nodes are updated with the information added to /etc/hosts and
/etc/ssh/ssh_known_hosts during the next overcloud deployment where
they are no longer blacklisted.
parameter_defaults:
DeploymentServerBlacklist: []
CHAPTER 16. REPLACING CONTROLLER NODES
Complete the steps in this section to replace a Controller node. The Controller node replacement
process involves running the openstack overcloud deploy command to update the overcloud with a
request to replace a Controller node.
IMPORTANT
The following procedure applies only to high availability environments. Do not use this
procedure if you are using only one Controller node.
Procedure
$ source stackrc
(undercloud) $ openstack stack list --nested
The overcloud stack and its child stacks should have a status of either CREATE_COMPLETE
or UPDATE_COMPLETE.
5. Check that your undercloud contains 10 GB free storage to accommodate for image caching
and conversion when you provision the new node:
(undercloud) $ df -h
6. Check the status of Pacemaker on the running Controller nodes. For example, if 192.168.0.47 is
the IP address of a running Controller node, use the following command to view the Pacemaker
status:
The output shows all services that are running on the existing nodes and that are stopped on
the failed node.
7. Check the following parameters on each node of the overcloud MariaDB cluster:
wsrep_local_state_comment: Synced
wsrep_cluster_size: 2
Use the following command to check these parameters on each running Controller node. In
this example, the Controller node IP addresses are 192.168.0.47 and 192.168.0.46:
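Assuming the database runs in a podman-managed galera-bundle container, which is the default for this release, the check might look like this:

```shell
(undercloud) $ for i in 192.168.0.47 192.168.0.46 ; do
    echo "*** $i ***"
    ssh heat-admin@$i "sudo podman exec \$(sudo podman ps --filter name=galera-bundle -q) \
        mysql -e \"SHOW STATUS LIKE 'wsrep_local_state_comment'; SHOW STATUS LIKE 'wsrep_cluster_size';\""
done
```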
8. Check the RabbitMQ status. For example, if 192.168.0.47 is the IP address of a running
Controller node, use the following command to view the RabbitMQ status:
The running_nodes key should show only the two available nodes and not the failed node.
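Assuming the containerized rabbitmq-bundle service, the status check might look like this:

```shell
(undercloud) $ ssh [email protected] \
    "sudo podman exec \$(sudo podman ps -f name=rabbitmq-bundle -q) rabbitmqctl cluster_status"
```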
9. If fencing is enabled, disable it. For example, if 192.168.0.47 is the IP address of a running
Controller node, use the following command to check the status of fencing:
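Assuming Pacemaker-managed fencing, you can check and then disable the stonith-enabled property:

```shell
(undercloud) $ ssh [email protected] "sudo pcs property show stonith-enabled"
(undercloud) $ ssh [email protected] "sudo pcs property set stonith-enabled=false"
```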
10. Check that the Compute services are active on the director node:
NOTE
Adding a new Controller node to the cluster also adds a new Ceph monitor daemon
automatically.
Procedure
1. Connect to the Controller node that you want to replace and become the root user:
# ssh [email protected]
# sudo su -
NOTE
If the Controller node is unreachable, skip steps 1 and 2 and continue the
procedure at step 3 on any working Controller node.
For example:
# ssh [email protected]
# sudo su -
6. On all Controller nodes, remove the v1 and v2 monitor entries from /etc/ceph/ceph.conf. For
example, if you remove controller-1, then remove the IPs and hostname for controller-1.
Before:
After:
NOTE
Director updates the ceph.conf file on the relevant overcloud nodes when you
add the replacement Controller node. Normally, director manages this
configuration file exclusively and you should not edit the file manually. However,
you can edit the file manually if you want to ensure consistency in case the other
nodes restart before you add the new node.
7. (Optional) Archive the monitor data and save the archive on another server:
# mv /var/lib/ceph/mon/<cluster>-<daemon_id> /var/lib/ceph/mon/removed-<cluster>-
<daemon_id>
Procedure
1. To view the list of IP addresses for the Controller nodes, run the following command:
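One way to view the Controller node addresses, matching the server listing shown later in this chapter, is:

```shell
(undercloud) $ openstack server list -c Name -c Networks | grep controller
```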
2. If the old node is still reachable, log in to one of the remaining nodes and stop pacemaker on the
old node. For this example, stop pacemaker on overcloud-controller-1:
NOTE
3. After you stop Pacemaker on the old node, delete the old node from the pacemaker cluster.
The following example command logs in to overcloud-controller-0 to remove overcloud-
controller-1:
If the node that you want to replace is unreachable (for example, due to a hardware failure),
run the pcs command with additional --skip-offline and --force options to forcibly remove the
node from the cluster:
4. After you remove the old node from the pacemaker cluster, remove the node from the list of
known hosts in pacemaker:
You can run this command whether the node is reachable or not.
5. The overcloud database must continue to run during the replacement procedure. To ensure
that Pacemaker does not stop Galera during this procedure, select a running Controller node
and run the following command on the undercloud with the IP address of the Controller node:
If the node is a virtual node, identify the node that contains the failed disk and restore the disk
from a backup. Ensure that the MAC address of the NIC used for PXE boot on the failed server
remains the same after disk replacement.
If the node is a bare metal node, replace the disk, prepare the new disk with your overcloud
configuration, and perform a node introspection on the new hardware.
If the node is a part of a high availability cluster with fencing, you might need to recover the Galera
nodes separately. For more information, see the article How Galera works and how to rescue
Galera clusters in the context of Red Hat OpenStack Platform.
Complete the following example steps to replace the overcloud-controller-1 node with the
overcloud-controller-3 node. The overcloud-controller-3 node has the ID 75b25e9a-948d-424a-9b3b-
f0ef70a6eacf.
IMPORTANT
To replace the node with an existing bare metal node, enable maintenance mode on the
outgoing node so that the director does not automatically reprovision the node.
...
| 3fab687e-99c2-4e66-805f-3106fb41d868 | controller-1 | ACTIVE | - | Running |
ctlplane=192.168.24.17 |
| a87276ea-8682-4f27-9426-6b272955b486 | controller-2 | ACTIVE | - | Running |
ctlplane=192.168.24.38 |
| a000b156-9adc-4d37-8169-c1af7800788b | controller-3 | ACTIVE | - | Running |
ctlplane=192.168.24.35 |
...
Procedure
$ source ~/stackrc
$ NODE=$(openstack baremetal node list -f csv --quote minimal | grep $INSTANCE | cut -f1
-d,)
5. If the Controller node is a virtual node, run the following command on the Controller host to
replace the virtual disk from a backup:
$ cp <VIRTUAL_DISK_BACKUP> /var/lib/libvirt/images/<VIRTUAL_DISK>
Replace <VIRTUAL_DISK_BACKUP> with the path to the backup of the failed virtual disk, and
replace <VIRTUAL_DISK> with the name of the virtual disk that you want to replace.
If you do not have a backup of the outgoing node, you must use a new virtualized node.
If the Controller node is a bare metal node, complete the following steps to replace the disk with
a new bare metal disk:
b. Prepare the node with the same configuration as the failed node.
Procedure
1. Determine the UUID of the node that you want to remove and store it in the NODEID variable.
Ensure that you replace NODE_NAME with the name of the node that you want to remove:
parameters:
ControllerRemovalPolicies:
[{'resource_list': ['NODE_INDEX']}]
NOTE
5. Director removes the old node, creates a new node, and updates the overcloud stack. You can
check the status of the overcloud stack with the following command:
6. When the deployment command completes, director shows that the old node is replaced with
the new node:
| Name | Networks |
+------------------------+-----------------------+
| overcloud-compute-0 | ctlplane=192.168.0.44 |
| overcloud-controller-0 | ctlplane=192.168.0.47 |
| overcloud-controller-2 | ctlplane=192.168.0.46 |
| overcloud-controller-3 | ctlplane=192.168.0.48 |
+------------------------+-----------------------+
Procedure
2. Enable Pacemaker management of the Galera cluster and start Galera on the new node:
3. Perform a final status check to ensure that the services are running correctly:
NOTE
If any services have failed, use the pcs resource refresh command to resolve
and restart the failed services.
4. Exit to director:
5. Source the overcloudrc file so that you can interact with the overcloud:
$ source ~/overcloudrc
8. If necessary, add your router to the L3 agent host on the new node. Use the following example
command to add a router named r1 to the L3 agent using the UUID 2d1c1dc1-d9d4-4fa9-b2c8-
f29cd1a649d4:
9. Because compute services for the removed node still exist in the overcloud, you must remove
them. First, check the compute services for the removed node:
If you reboot all nodes in one role, it is advisable to reboot each node individually. If you reboot
all nodes in a role simultaneously, service downtime can occur during the reboot operation.
If you reboot all nodes in your OpenStack Platform environment, reboot the nodes in the
following sequential order:
Procedure
$ sudo reboot
Procedure
CHAPTER 17. REBOOTING NODES
a. If the node uses Pacemaker services, check that the node has rejoined the cluster:
b. If the node uses Systemd services, check that all services are enabled:
c. If the node uses containerized services, check that all containers on the node are active:
Procedure
$ sudo reboot
3. Wait until the node boots and rejoins the MON cluster.
Procedure
1. Log in to a Ceph MON or Controller node and disable Ceph Storage cluster rebalancing
temporarily:
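Assuming the monitor runs in a container named after the host, which is the default naming in this release, the commands might look like this:

```shell
$ sudo podman exec -it ceph-mon-controller-0 ceph osd set noout
$ sudo podman exec -it ceph-mon-controller-0 ceph osd set norebalance
```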
2. Select the first Ceph Storage node that you want to reboot and log in to the node.
$ sudo reboot
6. Log out of the node, reboot the next node, and check its status. Repeat this process until you
have rebooted all Ceph storage nodes.
7. When complete, log into a Ceph MON or Controller node and re-enable cluster rebalancing:
8. Perform a final status check to verify that the cluster reports HEALTH_OK:
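Assuming the same containerized monitor naming, re-enabling rebalancing and checking cluster health might look like this:

```shell
$ sudo podman exec -it ceph-mon-controller-0 ceph osd unset noout
$ sudo podman exec -it ceph-mon-controller-0 ceph osd unset norebalance
$ sudo podman exec -it ceph-mon-controller-0 ceph status
```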
Decide whether to migrate instances to another Compute node before rebooting the node.
Select and disable the Compute node you want to reboot so that it does not provision new
instances.
Prerequisites
Before you reboot the Compute node, you must decide whether to migrate instances to another
Compute node while the node is rebooting.
If for some reason you cannot or do not want to migrate the instances, you can set the following core
template parameters to control the state of the instances after the Compute node reboots:
NovaResumeGuestsStateOnHostBoot
Determines whether to return instances to the same state on the Compute node after reboot. When
set to False, the instances remain down and you must start them manually. The default value is False.
NovaResumeGuestsShutdownTimeout
Number of seconds to wait for an instance to shut down before rebooting. It is not recommended to
set this value to 0. The default value is 300.
For more information about overcloud parameters and their usage, see Overcloud Parameters.
Procedure
$ source ~/stackrc
(undercloud) $ openstack server list --name compute
Identify the UUID of the Compute node that you want to reboot.
$ source ~/overcloudrc
(overcloud) $ openstack compute service list
(overcloud) $ openstack compute service set [hostname] nova-compute --disable
6. If you decide to migrate the instances to another Compute node, use one of the following
commands:
NOTE
The nova command might cause some deprecation warnings, which are safe
to ignore.
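For example, one option for live migrating all instances off the disabled node is the nova helper command; the hostname is a placeholder:

```shell
(overcloud) $ nova host-evacuate-live <hostname>
```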
9. Continue to migrate instances until none remain on the chosen Compute node.
$ source ~/overcloudrc
(overcloud) $ openstack compute service set [hostname] nova-compute --enable
PART IV. ADDITIONAL DIRECTOR OPERATIONS AND CONFIGURATION
Red Hat OpenStack Platform 16.0 Director Installation and Usage
Procedure
1. The /etc/pki/CA/index.txt file contains records of all signed certificates. Check if this file exists.
If it does not exist, create an empty file:
2. The /etc/pki/CA/serial file identifies the next serial number to use for the next certificate to
sign. Check if this file exists. If the file does not exist, create a new file with a new starting value:
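For example, the two files can be created as follows; the starting serial value 1000 is an arbitrary choice:

```shell
$ sudo touch /etc/pki/CA/index.txt
$ echo '1000' | sudo tee /etc/pki/CA/serial
```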
Procedure
1. The openssl req command requests certain details about your authority. Enter these details at
the prompt.
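A minimal sketch of creating the authority follows; the -subj option supplies the authority details non-interactively instead of answering prompts, and the subject values are illustrative placeholders:

```shell
# Generate the certificate authority private key (4096-bit RSA).
openssl genrsa -out ca.key.pem 4096

# Create a self-signed CA certificate valid for 20 years. The subject
# values here are placeholders; replace them with your own details.
openssl req -key ca.key.pem -new -x509 -days 7300 \
  -subj "/C=AU/ST=Queensland/L=Brisbane/O=Red Hat/CN=ca.example.com" \
  -out ca.crt.pem
```

This produces ca.key.pem and ca.crt.pem in the current directory.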
Procedure
CHAPTER 18. CONFIGURING CUSTOM SSL/TLS CERTIFICATES
2. After you copy the certificate authority file to each client, run the following command on each
client to add the certificate to the certificate authority trust bundle:
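On Red Hat Enterprise Linux based clients, the trust bundle is updated as follows:

```shell
$ sudo cp ca.crt.pem /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust extract
```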
Procedure
Procedure
$ cp /etc/pki/tls/openssl.cnf .
2. Edit the new openssl.cnf file and configure the SSL parameters that you want to use for
director. Examples of the parameters to modify include:
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
[req_distinguished_name]
countryName = Country Name (2 letter code)
countryName_default = AU
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = Queensland
localityName = Locality Name (eg, city)
localityName_default = Brisbane
organizationalUnitName = Organizational Unit Name (eg, section)
organizationalUnitName_default = Red Hat
commonName = Common Name
commonName_default = 192.168.0.1
commonName_max = 64
[ v3_req ]
# Extensions to add to a certificate request
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = 192.168.0.1
DNS.1 = instack.localdomain
DNS.2 = vip.localdomain
DNS.3 = 192.168.0.1
If you are using an IP address to access director over SSL/TLS, use the
undercloud_public_host parameter in the undercloud.conf file.
If you are using a fully qualified domain name to access director over SSL/TLS, use the
domain name.
Add subjectAltName = @alt_names to the v3_req section.
DNS - A list of domain names that clients use to access director over SSL. Also include the
Public API IP address as a DNS entry at the end of the alt_names section.
NOTE
For more information about openssl.cnf, run the man openssl.cnf command.
Ensure that you include your OpenStack SSL/TLS key with the -key option.
This command generates a server.csr.pem file, which is the certificate signing request. Use this file to
create your OpenStack SSL/TLS certificate.
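The request step can be sketched as follows; the heredoc stands in for the customized openssl.cnf from the previous step, abbreviated to the essential sections, and the key filename is a placeholder:

```shell
# Minimal stand-in for the customized openssl.cnf described above.
cat > openssl.cnf <<'EOF'
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
[req_distinguished_name]
commonName = Common Name
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = 192.168.0.1
DNS.1 = instack.localdomain
EOF

# Generate the SSL/TLS private key for the certificate.
openssl genrsa -out server.key.pem 2048

# Create the certificate signing request; -key supplies the key and
# -subj avoids the interactive prompts.
openssl req -config openssl.cnf -key server.key.pem -new \
  -subj "/CN=192.168.0.1" -out server.csr.pem
```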
openssl.cnf
The customized configuration file that specifies the v3 extensions.
server.csr.pem
The certificate signing request to generate and sign the certificate with a certificate authority.
ca.crt.pem
The certificate authority, which signs the certificate.
ca.key.pem
The certificate authority private key.
Procedure
1. Run the following command to create a certificate for your undercloud or overcloud:
$ sudo openssl ca -config openssl.cnf -extensions v3_req -days 3650 -in server.csr.pem -out
server.crt.pem -cert ca.crt.pem -keyfile ca.key.pem
-config
Use a custom configuration file, which is the openssl.cnf file with v3 extensions.
-extensions v3_req
Enables v3 extensions.
-days
Defines how long in days until the certificate expires.
-in
The certificate signing request.
-out
The resulting signed certificate.
-cert
The certificate authority file.
-keyfile
The certificate authority private key.
This command creates a new certificate named server.crt.pem. Use this certificate in conjunction with
your OpenStack SSL/TLS key.
Procedure
2. Copy the undercloud.pem file to a location within your /etc/pki directory and set the necessary
SELinux context so that HAProxy can read it:
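For example, the copy and SELinux steps might look like the following; the directory name matches the conventional location referenced in undercloud.conf:

```shell
$ sudo mkdir /etc/pki/undercloud-certs
$ sudo cp ~/undercloud.pem /etc/pki/undercloud-certs/.
$ sudo semanage fcontext -a -t etc_t "/etc/pki/undercloud-certs(/.*)?"
$ sudo restorecon -R /etc/pki/undercloud-certs
```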
undercloud_service_certificate = /etc/pki/undercloud-certs/undercloud.pem
4. Add the certificate authority that signed the certificate to the list of trusted Certificate
Authorities on the undercloud so that different services within the undercloud have access to
the certificate authority:
CHAPTER 19. ADDITIONAL INTROSPECTION OPERATIONS
(undercloud) $ for node in $(openstack baremetal node list --fields uuid -f value) ; do openstack
baremetal node manage $node ; done
(undercloud) $ openstack overcloud node introspect --all-manageable --provide
For example:
To view interface data and switch port information, run the following command:
For example:
| switch_port_vlans          | [{u'name': u'RHOS13-PXE', u'id': 101}] |
| switch_protocol_identities | None                                   |
| switch_system_name         | rhos-compute-node-sw1                  |
+----------------------------+----------------------------------------+
For example, the numa_topology collector is part of the hardware-inspection extras and includes the
following information for each NUMA node:
To retrieve this information, replace <UUID> with the UUID of the bare-metal node and run the
following command:
The following example shows the retrieved NUMA information for a bare-metal node:
{
"cpus": [
{
"cpu": 1,
"thread_siblings": [
1,
17
],
"numa_node": 0
},
{
"cpu": 2,
"thread_siblings": [
10,
26
],
"numa_node": 1
},
{
"cpu": 0,
"thread_siblings": [
0,
16
],
"numa_node": 0
},
{
"cpu": 5,
"thread_siblings": [
13,
29
],
"numa_node": 1
},
{
"cpu": 7,
"thread_siblings": [
15,
31
],
"numa_node": 1
},
{
"cpu": 7,
"thread_siblings": [
7,
23
],
"numa_node": 0
},
{
"cpu": 1,
"thread_siblings": [
9,
25
],
"numa_node": 1
},
{
"cpu": 6,
"thread_siblings": [
6,
22
],
"numa_node": 0
},
{
"cpu": 3,
"thread_siblings": [
11,
27
],
"numa_node": 1
},
{
"cpu": 5,
"thread_siblings": [
5,
21
],
"numa_node": 0
},
{
"cpu": 4,
"thread_siblings": [
12,
28
],
"numa_node": 1
},
{
"cpu": 4,
"thread_siblings": [
4,
20
],
"numa_node": 0
},
{
"cpu": 0,
"thread_siblings": [
8,
24
],
"numa_node": 1
},
{
"cpu": 6,
"thread_siblings": [
14,
30
],
"numa_node": 1
},
{
"cpu": 3,
"thread_siblings": [
3,
19
],
"numa_node": 0
},
{
"cpu": 2,
"thread_siblings": [
2,
18
],
"numa_node": 0
}
],
"ram": [
{
"size_kb": 66980172,
"numa_node": 0
},
{
"size_kb": 67108864,
"numa_node": 1
}
],
"nics": [
{
"name": "ens3f1",
"numa_node": 1
},
{
"name": "ens3f0",
"numa_node": 1
},
{
"name": "ens2f0",
"numa_node": 0
},
{
"name": "ens2f1",
"numa_node": 0
},
{
"name": "ens1f1",
"numa_node": 0
},
{
"name": "ens1f0",
"numa_node": 0
},
{
"name": "eno4",
"numa_node": 0
},
{
"name": "eno1",
"numa_node": 0
},
{
"name": "eno3",
"numa_node": 0
},
{
"name": "eno2",
"numa_node": 0
}
]
}
CHAPTER 20. AUTOMATICALLY DISCOVERING BARE METAL NODES
20.1. PREREQUISITES
You have configured the BMCs of all overcloud nodes to be accessible to director through IPMI.
You have configured all overcloud nodes to PXE boot from the NIC that is connected to the
undercloud control plane network.
enable_node_discovery = True
discovery_default_driver = ipmi
enable_node_discovery - When enabled, any node that boots the introspection ramdisk
using PXE is enrolled in the Bare Metal service (ironic) automatically.
discovery_default_driver - Sets the driver to use for discovered nodes. For example, ipmi.
[
{
"description": "Set default IPMI credentials",
"conditions": [
{"op": "eq", "field": "data://auto_discovered", "value": true}
],
"actions": [
{"action": "set-attribute", "path": "driver_info/ipmi_username",
"value": "SampleUsername"},
{"action": "set-attribute", "path": "driver_info/ipmi_password",
"value": "RedactedSecurePassword"},
{"action": "set-attribute", "path": "driver_info/ipmi_address",
"value": "{data[inventory][bmc_address]}"}
]
}
]
2. Run the openstack baremetal node list command. You should see the new nodes listed in an
enrolled state:
$ for NODE in `openstack baremetal node list -c UUID -f value` ; do openstack baremetal
node set $NODE --resource-class baremetal ; done
$ for NODE in `openstack baremetal node list -c UUID -f value` ; do openstack baremetal
node manage $NODE ; done
$ openstack overcloud node configure --all-manageable
$ for NODE in `openstack baremetal node list -c UUID -f value` ; do openstack baremetal
node provide $NODE ; done
[
{
"description": "Set default IPMI credentials",
"conditions": [
{"op": "eq", "field": "data://auto_discovered", "value": true},
{"op": "ne", "field": "data://inventory.system_vendor.manufacturer",
"value": "Dell Inc."}
],
"actions": [
Replace the user name and password values in this example to suit your environment:
The policies can identify underperforming or unstable nodes and isolate these nodes from use
in the overcloud.
The policies can define whether to tag nodes into specific profiles automatically.
Example:
Conditions
A condition defines an evaluation using the following key-value pattern:
field
Defines the field to evaluate:
op
Defines the operation to use for the evaluation. This includes the following attributes:
eq - Equal to
ne - Not equal to
lt - Less than
gt - Greater than
CHAPTER 21. CONFIGURING AUTOMATIC PROFILE TAGGING
invert
Boolean value to define whether to invert the result of the evaluation.
multiple
Defines the evaluation to use if multiple results exist. This parameter includes the following
attributes:
value
Defines the value in the evaluation. If the field and operation result in the value, the condition returns
a true result. Otherwise, the condition returns a false result.
Example:
"conditions": [
{
"field": "local_gb",
"op": "ge",
"value": 1024
}
],
Actions
If a condition is true, the policy performs an action. The action uses the action key and additional keys
depending on the value of action:
fail - Fails the introspection. Requires a message parameter for the failure message.
set-attribute - Sets an attribute on an ironic node. Requires a path field, which is the path to an
ironic attribute (for example, /driver_info/ipmi_address), and a value to set.
set-capability - Sets a capability on an ironic node. Requires name and value fields, which are
the name and the value for a new capability. This replaces the existing value for this capability.
For example, use this to define node profiles.
extend-attribute - The same as set-attribute but treats the existing value as a list and appends
value to it. If the optional unique parameter is set to True, nothing is added if the given value is
already in a list.
Example:
"actions": [
{
"action": "set-capability",
"name": "profile",
"value": "swift-storage"
}
]
[
{
"description": "Fail introspection for unexpected nodes",
"conditions": [
{
"op": "lt",
"field": "memory_mb",
"value": 4096
}
],
"actions": [
{
"action": "fail",
"message": "Memory too low, expected at least 4 GiB"
}
]
},
{
"description": "Assign profile for object storage",
"conditions": [
{
"op": "ge",
"field": "local_gb",
"value": 1024
}
],
"actions": [
{
"action": "set-capability",
"name": "profile",
"value": "swift-storage"
}
]
},
{
"description": "Assign possible profiles for compute and controller",
"conditions": [
{
"op": "lt",
"field": "local_gb",
"value": 1024
},
{
"op": "ge",
"field": "local_gb",
"value": 40
}
],
"actions": [
{
"action": "set-capability",
"name": "compute_profile",
"value": "1"
},
{
"action": "set-capability",
"name": "control_profile",
"value": "1"
},
{
"action": "set-capability",
"name": "profile",
"value": null
}
]
}
]
Fail introspection if memory is lower than 4096 MiB. You can apply these types of rules if you
want to exclude certain nodes from your cloud.
Nodes with a hard drive size of 1 TiB or larger are assigned the swift-storage profile
unconditionally.
Nodes with a hard drive less than 1 TiB but more than 40 GiB can be either Compute or
Controller nodes. You can assign two capabilities (compute_profile and control_profile) so
that the openstack overcloud profiles match command can later make the final choice. For
this process to succeed, you must remove the existing profile capability, otherwise the existing
profile capability has priority.
NOTE
Using introspection rules to assign the profile capability always overrides the existing
value. However, [PROFILE]_profile capabilities are ignored for nodes that already have a
profile capability.
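The condition and action semantics above can be modeled in a few lines of Python. This is an illustrative sketch only, not the ironic-inspector implementation; the operator names and the sample rule mirror the examples in this chapter.

```python
# Minimal model of introspection-rule evaluation (illustrative only;
# not the ironic-inspector implementation).
OPS = {
    "lt": lambda field, value: field < value,
    "ge": lambda field, value: field >= value,
    "eq": lambda field, value: field == value,
}

def rule_matches(rule, node):
    # A rule matches only if every condition evaluates to true.
    return all(
        OPS[c["op"]](node[c["field"]], c["value"])
        for c in rule["conditions"]
    )

# Sample rule from this chapter: disks of 1 TiB or larger get swift-storage.
rule = {
    "conditions": [
        {"op": "ge", "field": "local_gb", "value": 1024},
    ],
    "actions": [
        {"action": "set-capability", "name": "profile", "value": "swift-storage"},
    ],
}

node = {"memory_mb": 8192, "local_gb": 2048}
capabilities = {}
if rule_matches(rule, node):
    # Apply the set-capability actions to the node.
    capabilities = {a["name"]: a["value"]
                    for a in rule["actions"] if a["action"] == "set-capability"}
print(capabilities)  # {'profile': 'swift-storage'}
```

A node with a 512 GiB disk would fail the `ge` condition and keep its existing capabilities.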
Procedure
3. After introspection completes, check the nodes and their assigned profiles:
4. If you made a mistake in introspection rules, run the following command to delete all rules:
CHAPTER 22. CREATING WHOLE DISK IMAGES
IMPORTANT
The following process uses the director image building feature. Red Hat only supports
images that use the guidelines contained in this section. Custom images built outside of
these specifications are not supported.
The /tmp directory is mounted on a separate volume or partition and has the rw, nosuid, nodev,
noexec, and relatime flags.
The /var, /var/log and the /var/log/audit directories are mounted on separate volumes or
partitions, with the rw and relatime flags.
The /home directory is mounted on a separate partition or volume and has the rw, nodev, and
relatime flags.
To disable kernel support for USB, add nousb to the boot loader configuration.
Blacklist insecure modules (usb-storage, cramfs, freevxfs, jffs2, hfs, hfsplus, squashfs, udf,
vfat) and prevent these modules from loading.
Remove any insecure packages (kdump installed by kexec-tools and telnet) from the image
because they are installed by default.
3. Customize the image by modifying the partition schema and the size.
Procedure
https://access.redhat.com/
NOTE
4. Select the KVM Guest Image that you want to download. For example, the KVM Guest Image
for the latest Red Hat Enterprise Linux is available on the following page:
NOTE
The image building process temporarily registers the image with a Red Hat subscription
and unregisters the system when the image building process completes.
To build a disk image, set Linux environment variables that suit your environment and requirements:
DIB_LOCAL_IMAGE
Sets the local image that you want to use as the basis for your whole disk image.
REG_ACTIVATION_KEY
Use an activation key instead of login details as part of the registration process.
REG_AUTO_ATTACH
Defines whether to attach the most compatible subscription automatically.
REG_BASE_URL
The base URL of the content delivery server that contains packages for the image. The default
Customer Portal Subscription Management process uses https://cdn.redhat.com. If you use a Red
Hat Satellite 6 server, set this parameter to the base URL of your Satellite server.
REG_ENVIRONMENT
Registers to an environment within an organization.
REG_METHOD
Sets the method of registration. Use portal to register a system to the Red Hat Customer Portal. Use
satellite to register a system with Red Hat Satellite 6.
REG_ORG
The organization where you want to register the images.
REG_POOL_ID
The pool ID of the product subscription information.
REG_PASSWORD
Sets the password for the user account that registers the image.
REG_REPOS
A comma-separated string of repository names. Each repository in this string is enabled through
subscription-manager.
Use the following repositories for a security hardened whole disk image:
rhel-8-for-x86_64-baseos-eus-rpms
rhel-8-for-x86_64-appstream-eus-rpms
rhel-8-for-x86_64-highavailability-eus-rpms
ansible-2.8-for-rhel-8-x86_64-rpms
openstack-16-for-rhel-8-x86_64-rpms
REG_SAT_URL
The base URL of the Satellite server to register overcloud nodes. Use the Satellite HTTP URL and
not the HTTPS URL for this parameter. For example, use http://satellite.example.com and not
https://satellite.example.com.
REG_SERVER_URL
Sets the host name of the subscription service to use. The default host name is for the Red Hat
Customer Portal at subscription.rhn.redhat.com. If you use a Red Hat Satellite 6 server, set this
parameter to the host name of your Satellite server.
REG_USER
Sets the user name for the account that registers the image.
Use the following set of example commands to export a set of environment variables and temporarily
register a local QCOW2 image to the Red Hat Customer Portal:
$ export DIB_LOCAL_IMAGE=./rhel-8.0-x86_64-kvm.qcow2
$ export REG_METHOD=portal
$ export REG_USER="[your username]"
$ export REG_PASSWORD="[your password]"
$ export REG_REPOS="rhel-8-for-x86_64-baseos-eus-rpms \
rhel-8-for-x86_64-appstream-eus-rpms \
rhel-8-for-x86_64-highavailability-eus-rpms \
ansible-2.8-for-rhel-8-x86_64-rpms \
openstack-16-for-rhel-8-x86_64-rpms"
The default security hardened image size is 20G and uses predefined partitioning sizes. However, you
must modify the partitioning layout to accommodate overcloud container images. Complete the steps in
the following sections to increase the image size to 40G. You can modify the partitioning layout and
disk size to further suit your needs.
To modify the partitioning layout and disk size, perform the following steps:
Modify the global size of the image by updating the DIB_IMAGE_SIZE environment variable.
$ export DIB_BLOCK_DEVICE_CONFIG='<yaml_schema_with_partitions>'
The following YAML structure represents the modified logical volume partitioning layout to
accommodate enough space to pull overcloud container images:
export DIB_BLOCK_DEVICE_CONFIG='''
- local_loop:
name: image0
- partitioning:
base: image0
label: mbr
partitions:
- name: root
flags: [ boot,primary ]
size: 40G
- lvm:
name: lvm
base: [ root ]
pvs:
- name: pv
base: root
options: [ "--force" ]
vgs:
- name: vg
base: [ "pv" ]
options: [ "--force" ]
lvs:
- name: lv_root
base: vg
extents: 23%VG
- name: lv_tmp
base: vg
extents: 4%VG
- name: lv_var
base: vg
extents: 45%VG
- name: lv_log
base: vg
extents: 23%VG
- name: lv_audit
base: vg
extents: 4%VG
- name: lv_home
base: vg
extents: 1%VG
- mkfs:
name: fs_root
base: lv_root
type: xfs
label: "img-rootfs"
mount:
mount_point: /
fstab:
options: "rw,relatime"
fsck-passno: 1
- mkfs:
name: fs_tmp
base: lv_tmp
type: xfs
mount:
mount_point: /tmp
fstab:
options: "rw,nosuid,nodev,noexec,relatime"
fsck-passno: 2
- mkfs:
name: fs_var
base: lv_var
type: xfs
mount:
mount_point: /var
fstab:
options: "rw,relatime"
fsck-passno: 2
- mkfs:
name: fs_log
base: lv_log
type: xfs
mount:
mount_point: /var/log
fstab:
options: "rw,relatime"
fsck-passno: 3
- mkfs:
name: fs_audit
base: lv_audit
type: xfs
mount:
mount_point: /var/log/audit
fstab:
options: "rw,relatime"
fsck-passno: 4
- mkfs:
name: fs_home
base: lv_home
type: xfs
mount:
mount_point: /home
fstab:
options: "rw,nodev,relatime"
fsck-passno: 2
'''
Use this sample YAML content as a basis for the partition schema of your image. Modify the partition
sizes and layout to suit your needs.
NOTE
You must define the correct partition sizes for the image because you cannot resize them
after the deployment.
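When you change the extents values in the schema above, make sure the percentages you allocate from the volume group do not exceed 100%. The following sketch checks the sample layout; the values mirror the YAML above.

```python
# Sanity-check that the logical volume extents in the sample layout
# do not oversubscribe the volume group (values from the schema above).
lvs = {
    "lv_root": 23,
    "lv_tmp": 4,
    "lv_var": 45,
    "lv_log": 23,
    "lv_audit": 4,
    "lv_home": 1,
}

total = sum(lvs.values())
print(f"Total allocated: {total}%VG")  # Total allocated: 100%VG
assert total <= 100, "logical volumes oversubscribe the volume group"
```

Run a check like this after any edit to the extents values, before you rebuild the image.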
Procedure
# cp /usr/share/openstack-tripleo-common/image-yaml/overcloud-hardened-images-
python3.yaml \
/home/stack/overcloud-hardened-images-python3-custom.yaml
NOTE
2. Edit the DIB_IMAGE_SIZE in the configuration file and adjust the values as necessary:
...
environment:
DIB_PYTHON_VERSION: '3'
DIB_MODPROBE_BLACKLIST: 'usb-storage cramfs freevxfs jffs2 hfs hfsplus squashfs udf
vfat bluetooth'
DIB_BOOTLOADER_DEFAULT_CMDLINE: 'nofb nomodeset vga=normal console=tty0
console=ttyS0,115200 audit=1 nousb'
DIB_IMAGE_SIZE: '40' 1
COMPRESS_IMAGE: '1'
IMPORTANT
When you deploy the overcloud, the director creates a RAW version of the overcloud
image. This means your undercloud must have enough free space to accommodate the
RAW image. For example, if you set the security hardened image size to 40G, you must
have 40G of space available on the undercloud hard disk.
IMPORTANT
When director writes the image to the physical disk, it creates a 64MB configuration drive
primary partition at the end of the disk. When you create your whole disk image, ensure
that the size of the physical disk accommodates this extra partition.
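Because of this extra partition, a target disk needs room for the image size plus 64 MB. The following sketch illustrates the arithmetic; the disk size used here is a hypothetical example.

```python
# Check that a physical disk can hold the whole disk image plus the
# 64 MB configuration drive that director appends (illustrative sketch).
MiB = 1024 ** 2
GiB = 1024 ** 3

image_size = 40 * GiB      # DIB_IMAGE_SIZE: '40'
config_drive = 64 * MiB    # partition created at the end of the disk
disk_size = 41 * GiB       # hypothetical physical disk on the target node

required = image_size + config_drive
print(disk_size >= required)  # True
```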
Procedure
1. Run the openstack overcloud image build command with all necessary configuration files.
1 This is the custom configuration file that contains the new disk size. If you are not using a
different custom disk size, use the original /usr/share/openstack-tripleo-common/image-
yaml/overcloud-hardened-images-python3.yaml file instead. For standard UEFI whole
disk images, use overcloud-hardened-images-uefi-python3.yaml.
This command creates an image called overcloud-hardened-full.qcow2, which contains all the
necessary security features.
1. Rename the newly generated image and move the image to your images directory:
# mv overcloud-hardened-full.qcow2 ~/images/overcloud-full.qcow2
If you want to replace an existing image with the security hardened image, use the --update-existing
flag. This flag overwrites the original overcloud-full image with a new security hardened image.
CHAPTER 23. CONFIGURING DIRECT DEPLOY
NOTE
Your overcloud node memory tmpfs must have at least 8GB of RAM.
Procedure
parameter_defaults:
IronicDefaultDeployInterface: direct
2. By default, the Bare Metal service (ironic) agent on each node obtains the image stored in the
Object Storage service (swift) through an HTTP link. Alternatively, ironic can stream this image
directly to the node through the ironic-conductor HTTP server. To change the service that
provides the image, set the IronicImageDownloadSource parameter to http in the
/home/stack/undercloud_custom_env.yaml file:
parameter_defaults:
IronicDefaultDeployInterface: direct
IronicImageDownloadSource: http
3. Include the custom environment file in the DEFAULT section of the undercloud.conf file.
custom_env_files = /home/stack/undercloud_custom_env.yaml
This chapter explains how to virtualize your Red Hat OpenStack Platform (RHOSP) control plane for the
overcloud using RHOSP and Red Hat Virtualization.
NOTE
The following architecture diagram illustrates how to deploy a virtualized control plane. Distribute the
overcloud with the Controller nodes running on VMs on Red Hat Virtualization and run the Compute and
Storage nodes on bare metal.
NOTE
The OpenStack Bare Metal Provisioning service (ironic) includes a driver for Red Hat Virtualization VMs,
staging-ovirt. You can use this driver to manage virtual nodes within a Red Hat Virtualization
environment. You can also use it to deploy overcloud controllers as virtual machines within a Red Hat
Virtualization environment.
CHAPTER 24. CREATING VIRTUALIZED CONTROL PLANES
Although there are a number of benefits to virtualizing your RHOSP overcloud control plane, this is not
an option in every configuration.
Benefits
Virtualizing the overcloud control plane has a number of benefits that prevent downtime and improve
performance.
You can allocate resources to the virtualized controllers dynamically, using hot add and hot
remove to scale CPU and memory as required. This prevents downtime and facilitates increased
capacity as the platform grows.
You can deploy additional infrastructure VMs on the same Red Hat Virtualization cluster. This
minimizes the server footprint in the data center and maximizes the efficiency of the physical
nodes.
You can use composable roles to define more complex RHOSP control planes and allocate
resources to specific components of the control plane.
You can maintain systems without service interruption with the VM live migration feature.
You can integrate third-party or custom tools that Red Hat Virtualization supports.
Limitations
Virtualized control planes limit the types of configurations that you can use.
Virtualized Ceph Storage nodes and Compute nodes are not supported.
Block Storage (cinder) image-to-volume is not supported for back ends that use Fibre Channel.
Red Hat Virtualization does not support N_Port ID Virtualization (NPIV). Therefore, Block
Storage (cinder) drivers that need to map LUNs from a storage back end to the controllers,
where cinder-volume runs by default, do not work. You must create a dedicated role for cinder-
volume instead of including it on the virtualized controllers. For more information, see
Composable Services and Custom Roles.
Prerequisites
You must have a 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.
You must have the following software already installed and configured:
Red Hat Virtualization. For more information, see Red Hat Virtualization Documentation
Suite.
Red Hat OpenStack Platform (RHOSP). For more information, see Director Installation and
Usage.
You must have the virtualized Controller nodes prepared in advance. These requirements are
the same as for bare metal Controller nodes. For more information, see Controller Node
Requirements.
You must have the bare metal nodes being used as overcloud Compute nodes, and the storage
nodes, prepared in advance. For hardware specifications, see the Compute Node Requirements
and Ceph Storage Node Requirements . To deploy overcloud Compute nodes on POWER
(ppc64le) hardware, see Red Hat OpenStack Platform for POWER .
You must have the logical networks created, and your cluster of host networks ready to use
network isolation with multiple networks. For more information, see Logical Networks.
You must have the internal BIOS clock of each node set to UTC to prevent issues with future-
dated file timestamps when hwclock synchronizes the BIOS clock before applying the timezone
offset.
TIP
To avoid performance bottlenecks, use composable roles and keep the data plane services on the bare
metal Controller nodes.
Procedure
1. To enable the staging-ovirt driver in director, add the driver to the enabled_hardware_types
parameter in the undercloud.conf configuration file:
enabled_hardware_types = ipmi,redfish,ilo,idrac,staging-ovirt
If you have configured the undercloud correctly, this command returns the following result:
+---------------------+-----------------------+
| Supported driver(s) | Active host(s) |
+---------------------+-----------------------+
| idrac | localhost.localdomain |
| ilo | localhost.localdomain |
| ipmi | localhost.localdomain |
| pxe_drac | localhost.localdomain |
| pxe_ilo | localhost.localdomain |
| pxe_ipmitool | localhost.localdomain |
| redfish | localhost.localdomain |
| staging-ovirt | localhost.localdomain |
+---------------------+-----------------------+
3. Update the overcloud node definition template, for example, nodes.json, to register the VMs
hosted on Red Hat Virtualization with director. For more information, see Registering Nodes for
the Overcloud. Use the following key:value pairs to define aspects of the VMs that you want to
deploy with your overcloud:
For example:
{
"nodes": [
{
"name":"osp13-controller-0",
"pm_type":"staging-ovirt",
"mac":[
"00:1a:4a:16:01:56"
],
"cpu":"2",
"memory":"4096",
"disk":"40",
"arch":"x86_64",
"pm_user":"admin@internal",
"pm_password":"password",
"pm_addr":"rhvm.example.com",
"pm_vm_name":"{vernum}-controller-0",
"capabilities": "profile:control,boot_option:local"
},
...
]
}
4. Configure an affinity group in Red Hat Virtualization with "soft negative affinity" to ensure high
availability is implemented for your controller VMs. For more information, see Affinity Groups.
5. Open the Red Hat Virtualization Manager interface, and use it to map each VLAN to a separate
logical vNIC in the controller VMs. For more information, see Logical Networks.
6. Set no_filter in the vNIC of the director and controller VMs, and restart the VMs, to disable the
MAC spoofing filter on the networks attached to the controller VMs. For more information, see
Virtual Network Interface Cards .
7. Deploy the overcloud to include the new virtualized controller nodes in your environment:
PART V. TROUBLESHOOTING AND TIPS
Procedure
$ source ~/stackrc
2. Run the node import command with the --validate-only option. This option validates your node
template without performing an import:
3. To fix incorrect details with imported nodes, run the openstack baremetal commands to
update node details. The following example shows how to change networking details:
$ source ~/stackrc
(undercloud) $ openstack baremetal port list --node [NODE UUID]
To diagnose and resolve common environment misconfiguration issues, complete the following steps:
Procedure
CHAPTER 25. TROUBLESHOOTING DIRECTOR ERRORS
$ source ~/stackrc
2. Director uses OpenStack Object Storage (swift) to save the hardware data that it obtains
during the introspection process. If this service is not running, the introspection can fail. Check
all services related to OpenStack Object Storage to ensure that the service is running:
3. Ensure that your nodes are in a manageable state. The introspection does not inspect nodes in
an available state, which is meant for deployment. If you want to inspect nodes that are in an
available state, change the node status to manageable state before introspection:
4. Configure temporary access to the introspection ramdisk. You can provide either a temporary
password or an SSH key to access the node during introspection debugging. Complete the
following procedure to configure ramdisk access:
a. Run the openssl passwd -1 command with a temporary password to generate an MD5
hash:
b. Edit the /var/lib/ironic/httpboot/inspector.ipxe file, find the line starting with kernel, and
append the rootpwd parameter and the MD5 hash:
NOTE
Include quotation marks for both the rootpwd and sshkey parameters.
Use the --provide option to change the node state to available after the introspection
completes.
7. If an error occurs, access the node using the root user and temporary access details:
$ ssh [email protected]
Access the node during introspection to run diagnostic commands and troubleshoot the
introspection failure.
NOTE
Red Hat OpenStack Platform director retries introspection three times after the
initial abort. Run the openstack baremetal introspection abort command at
each attempt to abort the introspection completely.
For example, when you run the openstack overcloud deploy command, the OpenStack Workflow
service executes two workflows. The first workflow uploads the deployment plan:
The OpenStack Workflow service uses the following objects to track the workflow:
Actions
A particular instruction that OpenStack performs when an associated task runs. Examples include
running shell scripts or performing HTTP requests. Some OpenStack components have in-built
actions that OpenStack Workflow uses.
Tasks
Defines the action to run and the result of running the action. These tasks usually have actions or
other workflows associated with them. When a task completes, the workflow directs to another task,
usually depending on whether the task succeeded or failed.
Workflows
A set of tasks grouped together and executed in a specific order.
Executions
Defines a particular action, task, or workflow running.
OpenStack Workflow also provides robust logging of executions, which helps to identify issues with
certain command failures. For example, if a workflow execution fails, you can identify the point of failure.
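The way a task result selects the next task can be sketched as a small state machine. This is an illustrative model only, not the OpenStack Workflow (Mistral) engine; the task names here are hypothetical.

```python
# Toy model of workflow execution: each task runs an action, and the
# result selects the next task (illustrative; not the Mistral engine).
def run_workflow(tasks, start):
    name = start
    trace = []
    while name is not None:
        task = tasks[name]
        ok = task["action"]()  # run the task's action
        trace.append((name, "SUCCESS" if ok else "ERROR"))
        # The result of the task directs the workflow to the next task.
        name = task["on-success"] if ok else task["on-error"]
    return trace

tasks = {
    "upload_plan": {"action": lambda: True,
                    "on-success": "deploy", "on-error": "report_failure"},
    "deploy": {"action": lambda: False,  # simulate a failed deployment task
               "on-success": None, "on-error": "report_failure"},
    "report_failure": {"action": lambda: True,
                       "on-success": None, "on-error": None},
}

print(run_workflow(tasks, "upload_plan"))
# [('upload_plan', 'SUCCESS'), ('deploy', 'ERROR'), ('report_failure', 'SUCCESS')]
```

The trace that such an engine records is what the execution logging described above lets you inspect after a failure.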
Procedure
$ source ~/stackrc
2. List the workflow executions that have the failed state ERROR:
3. Get the UUID of the failed workflow execution (for example, dffa96b0-f679-4cd2-a490-
4769a3825262) and view the execution and output:
4. These commands return information about the failed task in the execution. The openstack
workflow execution show command also displays the workflow that was used for the execution
(for example, tripleo.plan_management.v1.publish_ui_logs_to_swift). You can view the full
workflow definition with the following command:
This is useful for identifying where in the workflow a particular task occurs.
5. View action executions and their results using a similar command syntax:
Procedure
$ source ~/stackrc
Procedure
$ source ~/stackrc
2. Check the bare metal service to see all registered nodes and their current status:
+----------+------+---------------+-------------+-----------------+-------------+
| UUID | Name | Instance UUID | Power State | Provision State | Maintenance |
+----------+------+---------------+-------------+-----------------+-------------+
| f1e261...| None | None | power off | available | False |
| f0b8c1...| None | None | power off | available | False |
+----------+------+---------------+-------------+-----------------+-------------+
All nodes available for provisioning should have the following states set:
Problem: Maintenance sets itself to True automatically.
Cause: The director cannot access the power management for the nodes.
Action: Check the credentials for node power management.

Problem: Provision State is set to available but nodes do not provision.
Cause: The problem occurred before bare metal deployment started.
Action: Check the node details, including the profile and flavor mapping. Check that the node
hardware details are within the requirements for the flavor.

Problem: Provision State is set to wait call-back for a node.
Cause: The node provisioning process has not yet finished for this node.
Action: Wait until this status changes. Otherwise, connect to the virtual console of the node and
check the output.

Problem: Provision State is active and Power State is power on but the nodes do not respond.
Cause: The node provisioning has finished successfully and there is a problem during the
post-deployment configuration step.
Action: Diagnose the node configuration process. Connect to the virtual console of the node and
check the output.

Problem: Provision State is error or deploy failed.
Cause: Node provisioning has failed.
Action: View the bare metal node details with the openstack baremetal node show command and
check the last_error field, which contains the error description.
Procedure
1. Install nmap:
2. Use nmap to scan the IP address range for active addresses. This example scans the
192.168.24.0/24 range; replace this with the IP subnet of the Provisioning network, using CIDR
bitmask notation:
3. Review the output of the nmap scan. For example, you should see the IP address of the
undercloud, and any other hosts that are present on the subnet:
If any of the active IP addresses conflict with the IP ranges in undercloud.conf, you must either
change the IP address ranges or release the IP addresses before you introspect or deploy the
overcloud nodes.
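Comparing the scan results against your ranges can be scripted with the Python standard library. The range values below are placeholders; substitute the dhcp_start and dhcp_end values from your own undercloud.conf.

```python
# Flag active addresses that fall inside the undercloud DHCP range
# (illustrative sketch; substitute your own undercloud.conf values).
import ipaddress

dhcp_start = ipaddress.ip_address("192.168.24.5")
dhcp_end = ipaddress.ip_address("192.168.24.24")

# Addresses reported as up by the nmap scan:
active = ["192.168.24.1", "192.168.24.10", "192.168.24.50"]

# IPv4Address objects compare numerically, so a range check is direct.
conflicts = [ip for ip in active
             if dhcp_start <= ipaddress.ip_address(ip) <= dhcp_end]
print(conflicts)  # ['192.168.24.10']
```

Any address the check flags must be released, or the range in undercloud.conf adjusted, before introspection or deployment.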
NoValidHost: No valid host was found. There are not enough hosts available.
This error occurs when the Compute Scheduler cannot find a bare metal node that is suitable for
booting the new instance. This usually means that there is a mismatch between resources that the
Compute service expects to find and resources that the Bare Metal service advertised to Compute. To
check that there is a mismatch error, complete the following steps:
Procedure
$ source ~/stackrc
2. Check that the introspection succeeded on the node. If the introspection fails, check that each
node contains the required ironic node properties:
Check that the properties JSON field has valid values for keys cpus, cpu_arch, memory_mb
and local_gb.
3. Ensure that the Compute flavor that is mapped to the node does not exceed the node
properties for the required number of nodes:
4. Run the openstack baremetal node list command to ensure that there are sufficient nodes in
the available state. Nodes in manageable state usually signify a failed introspection.
5. Run the openstack baremetal node list command and ensure that the nodes are not in
maintenance mode. If a node changes to maintenance mode automatically, the likely cause is an
issue with incorrect power management credentials. Check the power management credentials
and then remove maintenance mode:
6. If you are using automatic profile tagging, check that you have enough nodes that correspond
to each flavor and profile. Run the openstack baremetal node show command on a node and
check the capabilities key in the properties field. For example, a node tagged for the Compute
role contains the profile:compute value.
7. You must wait for node information to propagate from Bare Metal to Compute after
introspection. However, if you performed some steps manually, there might be a short period of
time when nodes are not available to the Compute service (nova). Use the following command
to check the total resources in your system:
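The flavor check in step 3 amounts to comparing three flavor properties against the ironic node properties. The following sketch shows that comparison; the property values are illustrative, not taken from a real deployment.

```python
# Check whether a bare metal node satisfies a Compute flavor
# (illustrative; values come from `openstack baremetal node show`
# and `openstack flavor show` in a real deployment).
def node_fits_flavor(node, flavor):
    return (node["cpus"] >= flavor["vcpus"]
            and node["memory_mb"] >= flavor["ram"]
            and node["local_gb"] >= flavor["disk"]
            # If the flavor does not pin an architecture, accept the node's.
            and node["cpu_arch"] == flavor.get("cpu_arch", node["cpu_arch"]))

node = {"cpus": 4, "cpu_arch": "x86_64", "memory_mb": 8192, "local_gb": 80}
flavor = {"vcpus": 4, "ram": 8192, "disk": 40}

print(node_fits_flavor(node, flavor))  # True
```

A node whose memory_mb or local_gb falls below the flavor values is exactly the mismatch that produces the NoValidHost error.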
Procedure
1. Ensure that the stack user has access to the files in the /var/lib/mistral directory on the
undercloud:
2. Change to the working directory for the config-download files. This is usually
/var/lib/mistral/overcloud/.
$ cd /var/lib/mistral/overcloud/
$ less ansible.log
4. Find the step that failed in the config-download playbooks within the working directory to
identify the action that occurred.
$ source ~/stackrc
$ sudo -i
$ podman ps --all
Identify the failed container. The failed container usually exits with a non-zero status.
1. Each container retains standard output from its main process. Use this output as a log to help
determine what actually occurs during a container run. For example, to view the log for the
keystone container, run the following command:
In most cases, this log contains information about the cause of a container failure.
2. The host also retains the stdout log for the failed service. You can find the stdout logs in
/var/log/containers/stdouts/. For example, to view the log for a failed keystone container, run
the following command:
$ cat /var/log/containers/stdouts/keystone.log
Inspecting containers
In some situations, you might need to verify information about a container. For example, use the
following command to view keystone container data:
This command returns a JSON object containing low-level configuration data. You can pipe the output
to the jq command to parse specific data. For example, to view the container mounts for the keystone
container, run the following command:
You can also use the --format option to parse data to a single line, which is useful for running commands
against sets of container data. For example, to recreate the options used to run the keystone container,
use the following inspect command with the --format option:
NOTE
Use these options in conjunction with the podman run command to recreate the container for
troubleshooting purposes:
NOTE
Replace <COMMAND> with the command you want to run. For example, each container has a health
check script to verify the service connection. You can run the health check script for keystone with the
following command:
To access the container shell, run podman exec using /bin/bash as the command you want to run inside
the container:
1. To view the file system for the failed container, run the podman mount command. For
example, to view the file system for a failed keystone container, run the following command:
/var/lib/containers/storage/overlay/78946a109085aeb8b3a350fc20bd8049a08918d74f573396d
7358270e711c610/merged
This is useful for viewing the Puppet reports within the container. You can find these reports in
the var/lib/puppet/ directory within the container mount.
Exporting a container
When a container fails, you might need to investigate the full contents of the file. In this case, you can
export the full file system of a container as a tar archive. For example, to export the keystone container
file system, run the following command:
This command creates the keystone.tar archive, which you can extract and explore.
Procedure
$ source ~/stackrc
2. Get the IP address of the Compute node that contains the failure:
$ sudo -i
7. If you perform maintenance on the Compute node, migrate the existing instances from the host
to an operational Compute node, then disable the node.
"How to collect all required logs for Red Hat Support to investigate an OpenStack issue"
The default flush periods for each service are as follows:

OpenStack Orchestration (heat): deletes template data that has expired and is older than 30 days.
Flush period: every day.

OpenStack Compute (nova): flushes archived data that is older than 14 days. Flush period: every day.
The following tables outline the parameters that you can use to control these cron jobs.
CHAPTER 26. TIPS FOR UNDERCLOUD AND OVERCLOUD SERVICES
To adjust these intervals, create an environment file that contains your token flush interval for the
respective services and add this file to the custom_env_files parameter in your undercloud.conf file.
For example, to change the OpenStack Identity (keystone) token flush interval to 30 minutes, use the
following snippets:
keystone-cron.yaml
parameter_defaults:
KeystoneCronTokenFlushMinute: '0/30'
undercloud.yaml
custom_env_files: keystone-cron.yaml
NOTE
You can also use these parameters for your overcloud. For more information, see the
Overcloud Parameters guide.
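The '0/30' value uses cron step syntax: starting at minute 0, run every 30 minutes. The following sketch expands such a minute field; it is an illustration of the syntax, not the cron implementation.

```python
# Expand a cron minute field that uses start/step syntax, e.g. '0/30'
# (illustrative; real cron supports a richer grammar than this).
def expand_minute_field(field):
    if "/" in field:
        start, step = field.split("/")
        return list(range(int(start), 60, int(step)))
    return [int(field)]

print(expand_minute_field("0/30"))  # [0, 30] -> the flush runs twice an hour
```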
However, you can set the number of workers manually with the HeatWorkers parameter in an
environment file:
heat-workers.yaml
parameter_defaults:
HeatWorkers: 16
undercloud.yaml
custom_env_files: heat-workers.yaml
swift_object_server
swift_container_server
swift_account_server
For example, to view information about your swift object rings, run the following command:
You can run this command on both the undercloud and overcloud nodes.
Set the following hieradata using the hieradata_override undercloud configuration option:
tripleo::haproxy::ssl_cipher_suite
The cipher suite to use in HAProxy.
tripleo::haproxy::ssl_options
The SSL/TLS rules to use in HAProxy.
For example, you might want to use the following cipher suite and rules:
Cipher: ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
Rules: no-sslv3 no-tls-tickets
Create a hieradata override file, for example haproxy-hiera-overrides.yaml, with the following content:
tripleo::haproxy::ssl_cipher_suite: ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
tripleo::haproxy::ssl_options: no-sslv3 no-tls-tickets
NOTE
Set the hieradata_override parameter in the undercloud.conf file to the path of the hieradata
override file before you run openstack undercloud install:
[DEFAULT]
...
hieradata_override = haproxy-hiera-overrides.yaml
...
[2] In this instance, thread count refers to the number of CPU cores multiplied by the hyper-threading value.
PART VI. APPENDICES
A.1. INTELLIGENT PLATFORM MANAGEMENT INTERFACE (IPMI)
pm_type
Set this option to ipmi.
pm_user; pm_password
The IPMI username and password.
pm_addr
The IP address of the IPMI controller.
pm_port (Optional)
The port to connect to the IPMI controller.
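For illustration, a node entry in instackenv.json that uses these IPMI options might look like the following; the node name, address, credentials, port, and MAC address are placeholder values:

```json
{
  "nodes": [
    {
      "name": "node-1",
      "pm_type": "ipmi",
      "pm_user": "admin",
      "pm_password": "p@55w0rd!",
      "pm_addr": "192.168.24.205",
      "pm_port": "623",
      "mac": ["aa:aa:aa:aa:aa:11"]
    }
  ]
}
```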
A.2. REDFISH
A standard RESTful API for IT infrastructure, developed by the Distributed Management Task Force
(DMTF).
pm_type
Set this option to redfish.
pm_user; pm_password
The Redfish username and password.
pm_addr
The IP address of the Redfish controller.
pm_system_id
The canonical path to the system resource. This path must include the root service, version, and the
path/unique ID for the system. For example: /redfish/v1/Systems/CX34R87.
redfish_verify_ca
If the Redfish service in your baseboard management controller (BMC) is not configured to use a
valid TLS certificate signed by a recognized certificate authority (CA), the Redfish client in ironic fails
to connect to the BMC. Set the redfish_verify_ca option to false to mute the error. However, be
aware that disabling certificate verification compromises the access security of your BMC.
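For illustration, a Redfish node entry in instackenv.json might look like the following; the pm_system_id path matches the example above, and the node name, address, credentials, and MAC address are placeholder values (set redfish_verify_ca only in environments without a valid BMC certificate):

```json
{
  "nodes": [
    {
      "name": "node-2",
      "pm_type": "redfish",
      "pm_user": "admin",
      "pm_password": "p@55w0rd!",
      "pm_addr": "10.0.0.50",
      "pm_system_id": "/redfish/v1/Systems/CX34R87",
      "redfish_verify_ca": "false",
      "mac": ["aa:aa:aa:aa:aa:22"]
    }
  ]
}
```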
pm_type
Set this option to idrac.
pm_user; pm_password
The iDRAC username and password.
APPENDIX A. POWER MANAGEMENT DRIVERS
pm_type
Set this option to ilo.
pm_user; pm_password
The iLO username and password.
pm_addr
The IP address of the iLO interface.
Director also requires an additional set of utilities for iLO. Install the python3-proliantutils
package and restart the openstack-ironic-conductor service:
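Assuming the undercloud host uses dnf and the systemd unit name given above, the installation and restart steps can be sketched as:

```shell
# Install the iLO utilities and restart the bare metal conductor so that
# it picks up the new driver dependency.
sudo dnf install -y python3-proliantutils
sudo systemctl restart openstack-ironic-conductor.service
```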
IMPORTANT
HP nodes must have a minimum iLO firmware version of 1.85 (May 13 2015) for successful
introspection. Director has been successfully tested with nodes using this iLO firmware
version.
pm_type
Set this option to irmc.
pm_user; pm_password
The username and password for the iRMC interface.
pm_addr
The IP address of the iRMC interface.
pm_port (Optional)
The port to use for iRMC operations. The default is 443.
pm_auth_method (Optional)
The authentication method for iRMC operations. Use either basic or digest. The default is basic.
pm_client_timeout (Optional)
Timeout (in seconds) for iRMC operations. The default is 60 seconds.
pm_sensor_method (Optional)
Sensor data retrieval method. Use either ipmitool or scci. The default is ipmitool.
If you enable SCCI as the sensor method, you must also install an additional set of utilities.
Install the python3-scciclient package and restart the openstack-ironic-conductor
service:
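Assuming the undercloud host uses dnf and the systemd unit name given above, these steps can be sketched as:

```shell
# Install the SCCI client utilities and restart the bare metal conductor
# so that it picks up the new driver dependency.
sudo dnf install -y python3-scciclient
sudo systemctl restart openstack-ironic-conductor.service
```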
pm_type
Set this option to staging-ovirt.
pm_user; pm_password
The username and password for your RHV environment. The username also includes the
authentication provider. For example: admin@internal.
pm_addr
The IP address of the RHV REST API.
pm_vm_name
The name of the virtual machine to control.
mac
A list of MAC addresses for the network interfaces on the node. Use only the MAC address for the
Provisioning NIC of each system.
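For illustration, a staging-ovirt node entry in instackenv.json might look like the following; the username format matches the admin@internal example above, and the node name, address, password, virtual machine name, and MAC address are placeholder values:

```json
{
  "nodes": [
    {
      "name": "node-3",
      "pm_type": "staging-ovirt",
      "pm_user": "admin@internal",
      "pm_password": "p@55w0rd!",
      "pm_addr": "10.0.0.20",
      "pm_vm_name": "overcloud-node-3",
      "mac": ["aa:aa:aa:aa:aa:33"]
    }
  ]
}
```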
IMPORTANT
This option is available only for testing and evaluation purposes. It is not recommended
for Red Hat OpenStack Platform enterprise environments.
pm_type
Set this option to manual-management.
This driver does not use any authentication details because it does not control power
management.
In your instackenv.json node inventory file, set the pm_type to manual-management for
the nodes that you want to manage manually.
When performing introspection on nodes, manually start the nodes after running the
openstack overcloud node introspect command.
When performing overcloud deployment, check the node status with the openstack
baremetal node list command. Wait until the node status changes from deploying to
deploy wait-callback and then manually start the nodes.
After the overcloud provisioning process completes, check the node status with the openstack
baremetal node list command. When the node status changes to active, manually reboot all
overcloud nodes.
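The inventory entry described in the first step above can be sketched as follows; the node name and MAC address are placeholder values, and no pm_user, pm_password, or pm_addr values are needed because the driver does not control power management:

```json
{
  "nodes": [
    {
      "name": "node-4",
      "pm_type": "manual-management",
      "mac": ["aa:aa:aa:aa:aa:44"]
    }
  ]
}
```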
For example:
parameter_defaults:
CephAnsiblePlaybook: /usr/share/ceph-ansible/site.yml.sample
CephClientKey: AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ==
CephClusterFSID: 4b5c8c0a-ff60-454b-a1b4-9747aa737d19
CephExternalMonHost: 172.16.1.7, 172.16.1.8
NOTE
For more information about composable services, see Composable services and custom roles in the
Advanced Overcloud Customization guide. Use the following example to understand how to move the
listed services from the Controller node to a dedicated ppc64le node:
APPENDIX B. RED HAT OPENSTACK PLATFORM FOR POWER
- OS::TripleO::Services::NeutronLbaasv2Agent
- OS::TripleO::Services::NeutronLbaasv2Api
- OS::TripleO::Services::NeutronLinuxbridgeAgent
- OS::TripleO::Services::NeutronMetadataAgent
- OS::TripleO::Services::NeutronML2FujitsuCfab
- OS::TripleO::Services::NeutronML2FujitsuFossw
- OS::TripleO::Services::NeutronOvsAgent
- OS::TripleO::Services::NeutronVppAgent
- OS::TripleO::Services::Ntp
- OS::TripleO::Services::ContainersLogrotateCrond
- OS::TripleO::Services::OpenDaylightOvs
- OS::TripleO::Services::Rhsm
- OS::TripleO::Services::RsyslogSidecar
- OS::TripleO::Services::Securetty
- OS::TripleO::Services::SensuClient
- OS::TripleO::Services::SkydiveAgent
- OS::TripleO::Services::Snmp
- OS::TripleO::Services::Sshd
- OS::TripleO::Services::SwiftProxy
- OS::TripleO::Services::SwiftDispersion
- OS::TripleO::Services::SwiftRingBuilder
- OS::TripleO::Services::SwiftStorage
- OS::TripleO::Services::Timezone
- OS::TripleO::Services::TripleoFirewall
- OS::TripleO::Services::TripleoPackages
- OS::TripleO::Services::Tuned
- OS::TripleO::Services::Vpp
- OS::TripleO::Services::OVNController
- OS::TripleO::Services::OVNMetadataAgent
- OS::TripleO::Services::Ptp
EO_TEMPLATE
(undercloud) [stack@director roles]$ sed -i~ -e '/OS::TripleO::Services::\
(Cinder\|Glance\|Swift\|Keystone\|Neutron\)/d' Controller.yaml
(undercloud) [stack@director roles]$ cd ../
(undercloud) [stack@director templates]$ openstack overcloud roles generate \
--roles-path roles -o roles_data.yaml \
Controller Compute ComputePPC64LE ControllerPPC64LE BlockStorage ObjectStorage CephStorage