Multi Region
The concept of Multi Region in OpenStack
Overview
OpenStack has several concepts that are easy to confuse, such as Domain, Region, and Multi Site. This section clarifies each of them.
Domain
An OpenStack cloud can serve many organizations, businesses, individuals, and so on, and OpenStack allows projects with the same name to coexist. For example, if the technical department of Nhan Hoa uses a project named Phong_ky_thuat and the technical department of Cloud365 also names its project Phong_ky_thuat, the system ends up with two projects called "phong_ky_thuat" owned by two different companies, which confuses administrators.
To make the system more transparent, Keystone added the concept of a "Domain" to isolate projects between organizations. Each business, organization, or individual gets its own domain, and within that domain it sees only the projects and users it owns, independent of other domains.
Returning to the example above, Nhan Hoa owns the domain 'nhanhoa' and Cloud365 owns the domain 'cloud365'. The Nhan Hoa administrator then sees only the projects and users belonging to 'nhanhoa', and likewise for Cloud365.
Region
When an OpenStack cloud grows past a certain point, a single cluster is no longer enough; businesses consider deploying several OpenStack clusters in different locations to put resources closer to users geographically.
For example, Nhan Hoa has an OpenStack cluster in Hanoi and decides to deploy a second cluster in Da Nang and a third in Ho Chi Minh City to serve customers in each city best, while requiring that all three clusters share the same authentication system for user administration.
The concept of a Region (or Multi Region) applies to this example: we deploy several separate OpenStack clusters that share a single Keystone.
Multi Site
The concept of Multi Site is similar to Multi Region, with one difference: we have several OpenStack clusters, but they do not share the Keystone service (each cluster runs independently).
Implementing Multi Region in OpenStack
Using the example again, Nhan Hoa deploys a total of three Regions, in Hanoi, Da Nang, and Ho Chi Minh City, sharing the same authentication system. To implement this with OpenStack, we deploy three OpenStack clusters that share a single Keystone. Note that the three clusters sit in three different data centers.
To share Keystone between the three OpenStack clusters we have three choices:
1. Centralized Keystone DB: a single central Keystone database that every Region connects to over the WAN (the other Regions do not run a Keystone DB of their own).
2. Asynchronous Keystone DB: each Region has its own database, but one is the Master (read and write) and the rest are Slaves (read only); data is replicated asynchronously between the Regions.
3. Synchronous (clustered) Keystone DB: using MySQL/MariaDB Galera Cluster, the Keystone databases in the three Regions are kept synchronized.
Each option has trade-offs: the centralized DB is simplest but makes every Region depend on WAN latency to a single point of failure; asynchronous replication tolerates WAN problems but only the Master Region can write; Galera allows reads and writes everywhere at the cost of operating a more complex cluster.
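As a minimal sketch of option 3 (assuming MariaDB with the Galera wsrep provider; the node IPs and cluster name below are placeholders, not addresses from this deployment), the settings that keep the Keystone databases synchronously replicated look like this. The file is written locally here; on a real controller it belongs in /etc/my.cnf.d/galera.cnf.

```shell
# Minimal Galera settings for a synchronously replicated Keystone DB.
# Node IPs and cluster name are placeholders (assumptions, not from this lab).
cat > galera.cnf << 'EOF'
[galera]
wsrep_on = ON
wsrep_provider = /usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name = keystone_cluster
wsrep_cluster_address = gcomm://10.0.0.1,10.0.0.2,10.0.0.3
binlog_format = ROW
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
EOF
# Show the replication-related lines that were written
grep '^wsrep' galera.cnf
```

With this in place on each Region's DB node, any Region can read and write Keystone data locally, at the cost of WAN round-trips on every commit.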
Preparation
Deploy two OpenStack clusters following the manual installation docs.
Deployment
Note:
Following the installation docs, the default Region is RegionOne, so the RegionOne controller 192.168.80.83 and compute 192.168.80.84 need no reconfiguration.
This article only configures controller 192.168.80.85 and compute 192.168.80.86 into RegionTwo, sharing Keystone and Horizon with RegionOne.
The Keystone shared between the two OpenStack clusters runs on controller 192.168.80.83.
Step 1: Create RegionTwo on Controller 192.168.80.83
Performed on controller 192.168.80.83
Create a new Region
Note: Do not source admin-openrc when running keystone-manage
[root@mult-ctl1 ~]$ keystone-manage bootstrap --bootstrap-password passla123 \
--bootstrap-admin-url http://192.168.80.83:5000/v3/ \
--bootstrap-internal-url http://192.168.80.83:5000/v3/ \
--bootstrap-public-url http://192.168.80.83:5000/v3/ \
--bootstrap-region-id RegionTwo
Check the Regions on the system
[root@mult-ctl1 ~]$ source admin-openrc
[root@mult-ctl1 ~(admin-openrc)]$ openstack region list
+-----------+---------------+-------------+
| Region | Parent Region | Description |
+-----------+---------------+-------------+
| RegionOne | None | |
| RegionTwo | None | |
+-----------+---------------+-------------+
After bootstrapping Keystone with RegionTwo, Keystone automatically creates identity endpoints for the new region.
[root@mult-ctl1 ~(admin-openrc)]$ openstack endpoint list --service identity
+----------------------------------+-----------+--------------+--------------+---------+-----------+-------------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                           |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-------------------------------+
| 009af663df69409d8c86bb9125217b7c | RegionTwo | keystone     | identity     | True    | admin     | http://192.168.80.83:5000/v3/ |
| 0cc2b298947a4e96a8d4acf3bfbe6837 | RegionTwo | keystone     | identity     | True    | internal  | http://192.168.80.83:5000/v3/ |
| 4709d158e0fd45d98d2f2a1949e7877f | RegionTwo | keystone     | identity     | True    | public    | http://192.168.80.83:5000/v3/ |
| 6317fc49aa2b43b4ae28a7683e8e9943 | RegionOne | keystone     | identity     | True    | internal  | http://192.168.80.83:5000/v3/ |
| f0384c9359c14afcbd6d8b5bddc15c90 | RegionOne | keystone     | identity     | True    | public    | http://192.168.80.83:5000/v3/ |
| f75e70e338be41cc8dbb7558ff08c249 | RegionOne | keystone     | identity     | True    | admin     | http://192.168.80.83:5000/v3/ |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-------------------------------+
Step 2: Initialize RegionTwo endpoints for nova, cinder, glance, neutron (performed on CTL1 192.168.80.83)
Note that the endpoints created for RegionTwo use the CTL2 IP 192.168.80.85.
[root@mult-ctl1 ~]$ source admin-openrc
[root@mult-ctl1 ~(admin-openrc)]$
Create endpoints
openstack endpoint create --region RegionTwo image public http://192.168.80.85:9292
openstack endpoint create --region RegionTwo image admin http://192.168.80.85:9292
openstack endpoint create --region RegionTwo image internal http://192.168.80.85:9292
openstack endpoint create --region RegionTwo network public http://192.168.80.85:9696
openstack endpoint create --region RegionTwo network internal http://192.168.80.85:9696
openstack endpoint create --region RegionTwo network admin http://192.168.80.85:9696
openstack endpoint create --region RegionTwo compute public http://192.168.80.85:8774/v2.1
openstack endpoint create --region RegionTwo compute admin http://192.168.80.85:8774/v2.1
openstack endpoint create --region RegionTwo compute internal http://192.168.80.85:8774/v2.1
openstack endpoint create --region RegionTwo placement public http://192.168.80.85:8778
openstack endpoint create --region RegionTwo placement admin http://192.168.80.85:8778
openstack endpoint create --region RegionTwo placement internal http://192.168.80.85:8778
openstack endpoint create --region RegionTwo volumev2 public http://192.168.80.85:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionTwo volumev2 internal http://192.168.80.85:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionTwo volumev2 admin http://192.168.80.85:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionTwo volumev3 public http://192.168.80.85:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionTwo volumev3 internal http://192.168.80.85:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionTwo volumev3 admin http://192.168.80.85:8776/v3/%\(project_id\)s
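The 18 endpoint-create commands above can also be generated from a small service/URL table. This is a hedged convenience sketch, not part of the original procedure: it only prints the commands (a dry run); with admin-openrc sourced on CTL1, run the generated file with sh to execute them.

```shell
# Generate the RegionTwo endpoint-create commands (dry run: print only).
CTL2=192.168.80.85
while read -r svc url; do
  for iface in public internal admin; do
    printf 'openstack endpoint create --region RegionTwo %s %s %s\n' "$svc" "$iface" "$url"
  done
done > endpoint-cmds.txt << EOF
image http://$CTL2:9292
network http://$CTL2:9696
compute http://$CTL2:8774/v2.1
placement http://$CTL2:8778
volumev2 http://$CTL2:8776/v2/%\(project_id\)s
volumev3 http://$CTL2:8776/v3/%\(project_id\)s
EOF
cat endpoint-cmds.txt   # 18 commands; execute with: sh endpoint-cmds.txt
```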
Check
[root@mult-ctl1 ~(admin-openrc)]$ openstack endpoint list --region RegionTwo
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                                         |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------------+
| 009af663df69409d8c86bb9125217b7c | RegionTwo | keystone     | identity     | True    | admin     | http://192.168.80.83:5000/v3/               |
| 0cc2b298947a4e96a8d4acf3bfbe6837 | RegionTwo | keystone     | identity     | True    | internal  | http://192.168.80.83:5000/v3/               |
| 1cef864c85e04bb6aa1345efb1ef2dd4 | RegionTwo | nova         | compute      | True    | public    | http://192.168.80.85:8774/v2.1               |
| 21ed22d548d4422f97883e3341fd6ced | RegionTwo | nova         | compute      | True    | admin     | http://192.168.80.85:8774/v2.1               |
| 286b82697852401c9498695892a7d388 | RegionTwo | cinderv3     | volumev3     | True    | admin     | http://192.168.80.85:8776/v3/%(project_id)s |
| 3d85c57f745245bf94ab37e072ad9fd2 | RegionTwo | nova         | compute      | True    | internal  | http://192.168.80.85:8774/v2.1               |
| 4709d158e0fd45d98d2f2a1949e7877f | RegionTwo | keystone     | identity     | True    | public    | http://192.168.80.83:5000/v3/               |
| 53cd8b0aa4504a729456c32e8041e20a | RegionTwo | cinderv3     | volumev3     | True    | internal  | http://192.168.80.85:8776/v3/%(project_id)s |
| 56f20dbb82be4b1b8f5b9bc62ad7d401 | RegionTwo | cinderv3     | volumev3     | True    | public    | http://192.168.80.85:8776/v3/%(project_id)s |
| 5cae9e4cff9e43cb8c9d092d6b204658 | RegionTwo | neutron      | network      | True    | internal  | http://192.168.80.85:9696                    |
| 6dedfcfef07147e4b5547e8e43bc1d93 | RegionTwo | placement    | placement    | True    | public    | http://192.168.80.85:8778                    |
| 732c5b2e12974a35b534566b41f95254 | RegionTwo | cinderv2     | volumev2     | True    | internal  | http://192.168.80.85:8776/v2/%(project_id)s |
| 7aeb05e306fe44508e0334c149428e02 | RegionTwo | glance       | image        | True    | admin     | http://192.168.80.85:9292                    |
| 8758a4edbc1248b3ad8108f63985affe | RegionTwo | cinderv2     | volumev2     | True    | admin     | http://192.168.80.85:8776/v2/%(project_id)s |
| 90abe4b694a24edea9059b3125daf24e | RegionTwo | placement    | placement    | True    | internal  | http://192.168.80.85:8778                    |
| a98401bd172649b8b5c21bb26152f40b | RegionTwo | neutron      | network      | True    | admin     | http://192.168.80.85:9696                    |
| c309a0ccafe24ed2ad2bd9f31ad49cde | RegionTwo | cinderv2     | volumev2     | True    | public    | http://192.168.80.85:8776/v2/%(project_id)s |
| cfcfb92d8f974260853da7abbda06e6d | RegionTwo | neutron      | network      | True    | public    | http://192.168.80.85:9696                    |
| d2aab1e0a2ad4fedb654da4a9e18dc30 | RegionTwo | placement    | placement    | True    | admin     | http://192.168.80.85:8778                    |
| f63ef7faacc74b67be231ab90e4e6038 | RegionTwo | glance       | image        | True    | public    | http://192.168.80.85:9292                    |
| fa01094bc6e1498eb52388af566ee865 | RegionTwo | glance       | image        | True    | internal  | http://192.168.80.85:9292                    |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------------+
Step 3: Create the admin openrc file for RegionTwo on both controllers 192.168.80.83 and 192.168.80.85
cat << EOF >> admin-openrc-r2
export OS_REGION_NAME=RegionTwo
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=passla123
export OS_AUTH_URL=http://192.168.80.83:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export PS1='[\u@\h \W(admin-openrc)]\$ '
EOF
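A quick sanity check before using the file on either controller: source it and confirm the region. The two-line stub below stands in for the real admin-openrc-r2 so the check is self-contained; on the controllers, source the file created above instead.

```shell
# Stand-in for admin-openrc-r2 (only the lines the check needs).
cat > admin-openrc-r2.stub << 'EOF'
export OS_REGION_NAME=RegionTwo
export OS_AUTH_URL=http://192.168.80.83:5000/v3
EOF
# Source it and confirm the client will target RegionTwo via CTL1's Keystone.
. ./admin-openrc-r2.stub
echo "region=$OS_REGION_NAME auth=$OS_AUTH_URL"
```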
Step 4: Edit the Glance service
On controller 192.168.80.85
For Glance, point Keystone authentication at CTL1 192.168.80.83:
In [keystone_authtoken], set auth_uri and auth_url to CTL1 192.168.80.83 and set region_name to RegionTwo.
Note: edit both files, glance-api.conf and glance-registry.conf.
At file: /etc/glance/glance-api.conf
[DEFAULT]
bind_host = 192.168.80.85
registry_host = 192.168.80.85
[cors]
[database]
connection = mysql+pymysql://glance:[email protected]/glance
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
[image_format]
[keystone_authtoken]
#auth_uri = http://192.168.80.85:5000
#auth_url = http://192.168.80.85:5000
auth_uri = http://192.168.80.83:5000
auth_url = http://192.168.80.83:5000
memcached_servers = 192.168.80.85:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = passla123
region_name = RegionTwo
[matchmaker_redis]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]
[store_type_location_strategy]
[task]
[taskflow_executor]
At file: /etc/glance/glance-registry.conf
[DEFAULT]
bind_host = 192.168.80.85
[database]
connection = mysql+pymysql://glance:[email protected]/glance
[keystone_authtoken]
#auth_uri = http://192.168.80.85:5000
#auth_url = http://192.168.80.85:5000
auth_uri = http://192.168.80.83:5000
auth_url = http://192.168.80.83:5000
memcached_servers = 192.168.80.85
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = passla123
region_name = RegionTwo
[matchmaker_redis]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]
Restart the service
[root@mult-ctl2 ~]# systemctl restart openstack-glance-api.service openstack-glance-registry.service
Test on CTL1 or CTL2 with admin-openrc-r2 sourced
[root@mult-ctl1 ~(admin-openrc-r2)]$ openstack --debug image list --os-region-name RegionTwo
Result
On CTL2 192.168.80.85, authentication goes through Keystone 192.168.80.83 in RegionTwo.
[root@mult-ctl2 ~]# source admin-openrc-r2
Upload a new image
[root@mult-ctl2 ~(admin-openrc-r2)]$ openstack image create "cirrosr2" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
Check the image from node 80.83
[root@mult-ctl1 ~]# openstack image list --os-region-name RegionTwo
+--------------------------------------+----------+--------+
| ID | Name | Status |
+--------------------------------------+----------+--------+
| ecb55f8d-7262-468c-9368-586ea4daf3a0 | cirros | active |
| 14a730c6-e1d4-477d-aea5-8d2b8ac36d45 | cirrosr2 | active |
+--------------------------------------+----------+--------+
Step 5: Edit the Nova service
For Nova, we edit settings on CTL2 192.168.80.85 and COM2 192.168.80.86.
Performed on CTL2 192.168.80.85
For the Nova service, we edit the [keystone_authtoken], [neutron], [placement], and [cinder] sections in /etc/nova/nova.conf.
In [cinder]: set os_region_name to RegionTwo.
In [keystone_authtoken]: set auth_url to CTL1 192.168.80.83 and region_name to RegionTwo.
In [neutron]: set auth_url to CTL1 192.168.80.83 and region_name to RegionTwo.
In [placement]: set os_region_name to RegionTwo and auth_url to CTL1 192.168.80.83.
Sample configuration (on CTL2 192.168.80.85)
[DEFAULT]
my_ip = 192.168.80.85
enabled_apis = osapi_compute,metadata
use_neutron = True
osapi_compute_listen=192.168.80.85
metadata_host=192.168.80.85
metadata_listen=192.168.80.85
metadata_listen_port=8775
firewall_driver = nova.virt.firewall.NoopFirewallDriver
allow_resize_to_same_host=True
notify_on_state_change = vm_and_task_state
transport_url = rabbit://openstack:[email protected]:5672
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:[email protected]/nova_api
[barbican]
[cache]
backend = oslo_cache.memcache_pool
enabled = true
memcache_servers = 192.168.80.85:11211
[cells]
[cinder]
os_region_name = RegionTwo
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[crypto]
[database]
connection = mysql+pymysql://nova:[email protected]/nova
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://192.168.80.85:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
#auth_url = http://192.168.80.85:5000/v3
auth_url = http://192.168.80.83:5000/v3
memcached_servers = 192.168.80.85:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = passla123
region_name = RegionTwo
[libvirt]
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://192.168.80.85:9696
#auth_url = http://192.168.80.85:35357
auth_url = http://192.168.80.83:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionTwo
project_name = service
username = neutron
password = passla123
service_metadata_proxy = true
metadata_proxy_shared_secret = passla123
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_ha_queues = true
rabbit_retry_interval = 1
rabbit_retry_backoff = 2
amqp_durable_queues= true
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionTwo
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
#auth_url = http://192.168.80.85:5000/v3
auth_url = http://192.168.80.83:5000/v3
username = placement
password = passla123
[quota]
[rdp]
[remote_debug]
[scheduler]
discover_hosts_in_cells_interval = 300
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
novncproxy_host=192.168.80.85
enabled = true
vncserver_listen = 192.168.80.85
vncserver_proxyclient_address = 192.168.80.85
novncproxy_base_url = http://192.168.80.85:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
Restart the service
systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-consoleauth.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Check
[root@mult-ctl1 ~(admin-openrc-r2)]$ openstack --debug server list --os-region-name RegionTwo
...
GET call to compute for http://192.168.80.85:8774/v2.1/flavors/detail used request id req-62e730eb-e8df-4d20-be53-b153c232df40
clean_up ListServer:
END return value: 0
Performed on COM2 192.168.80.86
For the Nova service on COM2 192.168.80.86, edit the [cinder], [keystone_authtoken], [neutron], and [placement] sections of /etc/nova/nova.conf:
In [cinder]: use RegionTwo's Cinder service (os_region_name = RegionTwo).
In [keystone_authtoken]: set auth_url to CTL 80.83 and region_name to RegionTwo.
In [neutron]: set auth_url to CTL 80.83 and region_name to RegionTwo.
In [placement]: set auth_url to CTL 80.83 and os_region_name to RegionTwo.
Sample configuration
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:[email protected]:5672
my_ip = 192.168.80.86
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[api_database]
[barbican]
[cache]
[cells]
[cinder]
os_region_name = RegionTwo
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[crypto]
[database]
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://192.168.80.85:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
#auth_url = http://192.168.80.85:5000/v3
auth_url = http://192.168.80.83:5000/v3
memcached_servers = 192.168.80.85:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = passla123
region_name = RegionTwo
[libvirt]
# egrep -c '(vmx|svm)' /proc/cpuinfo = 0
virt_type = qemu
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://192.168.80.85:9696
#auth_url = http://192.168.80.85:35357
auth_url = http://192.168.80.83:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionTwo
project_name = service
username = neutron
password = passla123
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_ha_queues = true
rabbit_retry_interval = 1
rabbit_retry_backoff = 2
amqp_durable_queues= true
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionTwo
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
#auth_url = http://192.168.80.85:5000/v3
auth_url = http://192.168.80.83:5000/v3
username = placement
password = passla123
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = True
server_listen = 0.0.0.0
server_proxyclient_address = 192.168.80.86
novncproxy_base_url = http://192.168.80.85:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
Restart the services on the compute node
systemctl restart libvirtd.service openstack-nova-compute
Note: check the log on the Nova compute node
[root@compute01 ~]# grep 'placement' /var/log/nova/nova-compute.log
2019-04-11 10:36:33,694 14368 ERROR nova.scheduler.client.report [req-6c9a2cb8-b840-4345-bb9e-088068c8568f - - - - -] [req-5d53d9f5-99a8-4ce3-9579-92d93ec5f31f] Failed to retrieve resource provider tree from placement API for UUID 52517eca-5525-4905-aaa1-fed226b3366f. Got 401: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}.
If this error appears, re-check the Nova configuration on the controller and compute nodes, then restart the services on CTL 80.85 and COM 80.86 in RegionTwo (stale authentication data may remain in the DB or cache).
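To catch this 401 early, it helps to spot-check every region-sensitive option in nova.conf in one pass. The grep below is the check; the stub file (an illustration mirroring the sample configs in this article) stands in for the real /etc/nova/nova.conf.

```shell
# Stub holding the region-sensitive settings from this article's nova.conf.
cat > nova.conf.stub << 'EOF'
[cinder]
os_region_name = RegionTwo
[keystone_authtoken]
auth_url = http://192.168.80.83:5000/v3
region_name = RegionTwo
[placement]
os_region_name = RegionTwo
auth_url = http://192.168.80.83:5000/v3
EOF
# On CTL2/COM2, run this against /etc/nova/nova.conf instead of the stub.
# Every auth_url should point at 80.83 and every region at RegionTwo.
grep -nE 'region_name|auth_url' nova.conf.stub
```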
Back to CTL 80.83
Check the service with the command
[root@mult-ctl1 ~(admin-openrc-r2)]$ openstack compute service list --os-region-name RegionTwo
+----+------------------+-----------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+-----------+----------+---------+-------+----------------------------+
| 1 | nova-conductor | mult-ctl2 | internal | enabled | up | 2019-06-19T04:29:59.000000 |
| 2 | nova-scheduler | mult-ctl2 | internal | enabled | up | 2019-06-19T04:29:53.000000 |
| 3 | nova-consoleauth | mult-ctl2 | internal | enabled | up | 2019-06-19T04:30:02.000000 |
| 8 | nova-compute | mult-com2 | nova | enabled | up | 2019-06-19T04:29:57.000000 |
+----+------------------+-----------+----------+---------+-------+----------------------------+
[root@mult-ctl1 ~(admin-openrc-r2)]$
Step 6: Edit the Cinder service
Performed on CTL2 192.168.80.85
For the Cinder service, edit the [keystone_authtoken] section in /etc/cinder/cinder.conf:
Set auth_uri and auth_url to CTL1 192.168.80.83 and region_name to RegionTwo.
Sample configuration
[DEFAULT]
transport_url = rabbit://openstack:[email protected]
auth_strategy = keystone
my_ip = 192.168.80.85
enabled_backends = lvm
glance_api_servers = http://192.168.80.85:9292
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
[backend]
[backend_defaults]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[database]
connection = mysql+pymysql://cinder:[email protected]/cinder
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
#auth_uri = http://192.168.80.85:5000
#auth_url = http://192.168.80.85:35357
auth_uri = http://192.168.80.83:5000
auth_url = http://192.168.80.83:35357
memcached_servers = 192.168.80.85:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = passla123
region_name = RegionTwo
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[profiler]
[service_user]
[ssl]
[vault]
Restart the service
systemctl restart openstack-cinder-api.service openstack-cinder-volume.service openstack-cinder-scheduler.service
Service check
Return to CTL 80.83 and check the service
[root@mult-ctl1 ~(admin-openrc)]$ openstack volume service list --os-region-name RegionTwo
+------------------+---------------+------+---------+-------+----------------------------+
| Binary | Host | Zone | Status | State | Updated At |
+------------------+---------------+------+---------+-------+----------------------------+
| cinder-scheduler | mult-ctl2 | nova | enabled | up | 2019-06-19T04:34:13.000000 |
| cinder-volume | mult-ctl2@lvm | nova | enabled | up | 2019-06-19T04:34:50.000000 |
+------------------+---------------+------+---------+-------+----------------------------+
[root@mult-ctl1 ~(admin-openrc)]$
Return to Horizon and try to create a Volume + Volume with Image and check the log.
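The same Horizon test can also be driven from the CLI. The commands below are printed as a dry run (the volume names are examples; cirrosr2 is the image uploaded in Step 4); source admin-openrc-r2 and run the file to execute them.

```shell
# Dry run: print the two volume-creation commands for RegionTwo.
printf '%s\n' \
  'openstack volume create --size 1 r2-empty-vol' \
  'openstack volume create --size 1 --image cirrosr2 r2-image-vol' \
  > volume-test-cmds.txt
cat volume-test-cmds.txt   # execute with: sh volume-test-cmds.txt
```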
Step 7: Edit the Neutron service
For Neutron, we edit settings on CTL2 80.85 and COM2 80.86.
Performed on CTL2 80.85
Edit the [keystone_authtoken] and [nova] sections in /etc/neutron/neutron.conf:
In [keystone_authtoken]: set auth_uri and auth_url to CTL 80.83 and region_name to RegionTwo.
In [nova]: set auth_url to CTL 80.83 and region_name to RegionTwo.
Sample configuration
[DEFAULT]
bind_host = 192.168.80.85
core_plugin = ml2
service_plugins = router
transport_url = rabbit://openstack:[email protected]
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
allow_overlapping_ips = True
dhcp_agents_per_network = 2
[agent]
[cors]
[database]
connection = mysql+pymysql://neutron:[email protected]/neutron
[keystone_authtoken]
#auth_uri = http://192.168.80.85:5000
#auth_url = http://192.168.80.85:35357
auth_uri = http://192.168.80.83:5000
auth_url = http://192.168.80.83:35357
memcached_servers = 192.168.80.85:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = passla123
region_name = RegionTwo
[matchmaker_redis]
[nova]
#auth_url = http://192.168.80.85:35357
auth_url = http://192.168.80.83:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionTwo
project_name = service
username = nova
password = passla123
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
#driver = messagingv2
[oslo_messaging_rabbit]
rabbit_retry_interval = 1
rabbit_retry_backoff = 2
amqp_durable_queues = true
rabbit_ha_queues = true
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
[ssl]
Restart the nova and neutron services
systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-consoleauth.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-l3-agent.service
Performed on COM2 192.168.80.86
Edit the [keystone_authtoken] section in /etc/neutron/neutron.conf:
Set auth_uri and auth_url to CTL 80.83 and region_name to RegionTwo.
[DEFAULT]
transport_url = rabbit://openstack:[email protected]:5672
auth_strategy = keystone
[agent]
[cors]
[database]
[keystone_authtoken]
#auth_uri = http://192.168.80.85:5000
#auth_url = http://192.168.80.85:35357
auth_uri = http://192.168.80.83:5000
auth_url = http://192.168.80.83:35357
memcached_servers = 192.168.80.85:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = passla123
region_name = RegionTwo
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_ha_queues = true
rabbit_retry_interval = 1
rabbit_retry_backoff = 2
amqp_durable_queues= true
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
[ssl]
Restart the service
systemctl restart neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
Check
Return to CTL1 192.168.80.83
[root@mult-ctl1 ~(admin-openrc)]$ openstack network agent list --os-region-name RegionTwo
+--------------------------------------+--------------------+-----------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host      | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+-----------+-------------------+-------+-------+---------------------------+
| 1776c225-bb36-4337-a5da-bd5d2239c928 | Linux bridge agent | mult-ctl2 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 1ae77c12-0918-4df8-abb6-bd43b9db6517 | Metadata agent     | mult-com2 | None              | :-)   | UP    | neutron-metadata-agent    |
| 6465c0f0-90de-41d1-a7ab-75c99ad08764 | DHCP agent         | mult-com2 | nova              | :-)   | UP    | neutron-dhcp-agent        |
| b734222f-4895-4930-a71b-b82dae54a186 | L3 agent           | mult-ctl2 | nova              | :-)   | UP    | neutron-l3-agent          |
| c90bcd55-8a07-4b7d-bf03-3940a28e41d7 | Linux bridge agent | mult-com2 | None              | :-)   | UP    | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+-----------+-------------------+-------+-------+---------------------------+
[root@mult-ctl1 ~(admin-openrc)]$
Step 8: Check the Regions
Go to RegionOne and try creating a VM with a Cinder volume.
Go to RegionTwo and try creating a VM with a Cinder volume.
If this error occurs on the compute node
[root@compute01 ~]# grep ERROR /var/log/nova/nova-compute.log
2019-04-11 11:40:05.363 15299 ERROR nova.compute.manager [instance: 5350c69f-24de-4345-9556-0cc92faa3ef2] BuildAbortException: Build of instance 5350c69f-24de-4345-9556-0cc92faa3ef2 aborted: Invalid input received: Invalid image identifier or unable to access requested image. (HTTP 400) (Request-ID: req-1e69aa25-4f63-477d-a8a5-678ebf1bb869)
Re-check the Cinder-related configuration:
On the controller, the glance_api_servers option may be missing from /etc/cinder/cinder.conf.
On the compute node, os_region_name may be missing from the [cinder] section of /etc/nova/nova.conf.
If this error occurs on the compute node
2019-04-11 10:58:40.625 14019 ERROR nova.compute.manager [instance: 66eef324-058d-443e-afa6-8893f183a7db] PortBindingFailed: Binding failed for port 68e62053-fed2-4bd8-b3a8-0755012774ad, check neutron logs for more information.
Re-check the Neutron configuration: a port binding failure usually means the Neutron service or its agents are misconfigured.
Step 9: Redirect Horizon on cluster 2 to Horizon on cluster 1
Log in to server 192.168.80.85 (Controller 2)
Replace the default index page with a redirect:
mv /var/www/html/index.{html,html.bk}
filehtml=/var/www/html/index.html
touch $filehtml
cat << EOF >> $filehtml
<html>
<head>
<META HTTP-EQUIV="Refresh" Content="0.5; URL=http://192.168.80.83/dashboard">
</head>
<body>
<center> <h1>Redirecting to OpenStack Dashboard</h1> </center>
</body>
</html>
EOF
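An alternative to the meta-refresh page is a server-side redirect in httpd. This is a sketch: the conf.d path assumes the usual CentOS httpd layout used in this deployment.

```shell
# Write a mod_alias redirect that sends the site root on CTL2 to Horizon on CTL1.
cat > redirect-dashboard.conf << 'EOF'
RedirectMatch 301 ^/$ http://192.168.80.83/dashboard
EOF
# Then, on 192.168.80.85:
#   cp redirect-dashboard.conf /etc/httpd/conf.d/ && systemctl reload httpd
cat redirect-dashboard.conf
```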