IBM Cloud Orchestrator
Version 2.4.0.2
User's Guide
Note
Before using this information and the product it supports, read the information in Notices on page 1073.
This edition applies to IBM Cloud Orchestrator Version 2 Release 4 Fix Pack 2 (program number 5725-H28),
available as a licensed program product, and to all subsequent releases and modifications until otherwise indicated
in new editions.
The material in this document is an excerpt from the IBM Cloud Orchestrator knowledge center and is provided for
convenience. This document should be used in conjunction with the knowledge center.
Copyright IBM Corporation 2013, 2015.
US Government Users Restricted Rights Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents

Preface  xi
  Who should read this information  xi

Chapter 1. Overview  1
  What is new  1
  Product architecture  2
  Product features and components  4
  Pattern Engines  6
  Overview of OpenStack  8
  Multitenancy overview  8
  Custom extensions  11
  Deployment modes  12
  IBM Platform Resource Scheduler overview  12

Chapter 2. Installing  15
  Planning your installation  15
    Installation overview  15
    Deployment topologies  16
    Hardware prerequisites  19
    Software prerequisites  21
    Planning networks  24
      Using a virtual machine as a network server  25
      Summary of features and capabilities of a network server  26
      Neutron scenarios  27
      Planning PowerVC networks  28
  Preparing for the installation  29
    Setting up virtual machines  29
    Preparing the IBM Cloud Orchestrator servers  31
    Downloading the required image files  32
  Installing the Deployment Service  34
  Understanding deployment templates  36
  Customizing deployment parameters  37
  Deploying an IBM Cloud Orchestrator environment  39
    Configuring an external database  40
    Deploying the Central Servers  44
    Deploying a Region Server  45
      Configuring a Region Server  47
      Installing a Hyper-V compute node  47
      Installing a KVM compute node  49
      Configuring a PowerVC Region Server  50
      Customizing a z/VM Region Server deployment  51
    Installing Workload Deployer in non default directories  52
  Deploying the High-Availability Distributed topology  53
    Installing System Automation Application Manager  54
    Installing the Central Servers  55
    Installing the Region Server  58
      Installing a KVM Region Server with Neutron network  58
      Installing a KVM Region Server with Nova network  61
  Uninstalling  184
    Removing a KVM compute node  184
    Removing a region  185
  Troubleshooting installation  186
    Cannot create the external database  191
    All-in-one deployment failed  191
    Troubleshooting a high-availability installation  192
    Upgrade fails with a DB2 error  195
    Troubleshooting upgrade  196
  Installation reference  198
    Command-line methods  198
      Using the command-line interface to deploy IBM Cloud Orchestrator  198
      Deploying a Region Server (CLI method)  201
      Hypervisor-specific information when deploying a Region Server (CLI method)  205
    Customizable installation information  211
      Deployment templates for demo topology  211
      Deployment templates for distributed topology  211
      Deployment templates for high-availability distributed topology  213
      Deployment parameters  213
      Quota-related installation parameters  218
      System files modified by the installation procedure  219

Notices  1073

Glossary  1077
Preface
This publication documents how to use IBM Cloud Orchestrator.
Chapter 1. Overview
With IBM Cloud Orchestrator, you can manage your cloud infrastructure.
IBM Cloud Orchestrator helps you with end-to-end service deployment across
infrastructure and platform layers. It also provides integrated IT workflow
capabilities for process automation and IT governance, resource monitoring, and
cost management. The product offers you an extensible approach to integration
with existing environments such as network management tools. It facilitates
integration with customer-specific service management processes, such as those
defined in the IT infrastructure library (ITIL).
Using IBM Cloud Orchestrator, you have a consistent, flexible, and automated way
of integrating the cloud with customer data center policies, processes, and
infrastructures across various IT domains, such as backup, monitoring, and
security. Use the intuitive, graphical tool in IBM Cloud Orchestrator to define and
implement business rules and IT policies. You can connect the aspects of different
domains into a consistent orchestration of automated and manual tasks to achieve
your business goals.
IBM Cloud Orchestrator is based on a common cloud platform that is shared
across IBM's Cloud offerings. This common cloud stack provides a common
realization of the core technologies for comprehensive and efficient management of
cloud systems.
You can choose between two editions: IBM Cloud Orchestrator and IBM Cloud
Orchestrator Enterprise Edition, which also includes monitoring and cost
management.
What is new
The following enhancements were introduced in the current release.
Product architecture
IBM Cloud Orchestrator is a comprehensive product that integrates the capabilities
of several other IBM solutions.
The main components of IBM Cloud Orchestrator are the process engine and the
corresponding modeling user interface, which is used to create processes. For this
purpose, IBM Cloud Orchestrator uses the capabilities of IBM Business Process
Manager. It also integrates other domain-specific components that are responsible
for such functions as monitoring, metering, and accounting. IBM Cloud
Orchestrator bundles all these products and components and provides processes
that are required to implement the domain-specific functionalities.
[Figure: product architecture. Cloud Marketplace; Workflow Orchestration; Service Management (monitor, back up and restore, security and patch compliance); development tools; Patterns; Software Stacks; Public Cloud Gateway; and Infrastructure-as-a-Service (IaaS) with Compute (Nova), Storage (Cinder), and Network components.]
The following is a description of the role each major component plays in IBM
Cloud Orchestrator:
Infrastructure-as-a-Service
The Infrastructure-as-a-Service (IaaS) component is responsible for
managing access to compute, storage, and networking resources in the
virtual environment. All requests to provision services across these resources
are handled by this component. The IaaS component is delivered by using
OpenStack, a leading open source, community-driven project for highly
scalable, highly resilient cloud infrastructure management. IBM is one of
the Platinum Members of the OpenStack Foundation.
Software Stacks
While not a specific component itself, Software Stacks represent the
concept that, when one or more virtual systems are deployed, you can also
specify multiple software packages to be installed on the first boot of those
systems. This can be done by invoking simple installation scripts, but more
powerful tools, such as Chef recipes and cookbooks, can also be used for
automated installation and configuration.
Patterns
Patterns allow for deploying more complex middleware configurations and
multinode applications. The Patterns component provides a graphical
editor that allows the user to describe multiple virtual systems, each with a
base image and set of software to be installed, and then specify the
relationships and configuration scripts necessary to connect those systems
together. With this level of automation, an entire multisystem deployment
can be done with just a few simple clicks.
Workflow Orchestration
The Workflow Orchestration component provides a graphical editor that
allows the user to easily customize and extend the procedures that are
followed when a user request is initiated. In addition, it also provides the
facilities to customize the self-service catalog so that users have access to a
variety of service request types that they can access. This component is
delivered by embedding IBM's award-winning Business Process Manager
technology along with a number of pre-built automation toolkits that make
it possible to integrate workflow automation with the cloud platform and
ported across hybrid cloud environments. You can use the Public Cloud Gateway
to communicate with SoftLayer, Amazon EC2, and non-IBM-supplied OpenStack.
For more information, see Chapter 8, Managing a hybrid cloud, on page 661.
The supported hypervisors are:
v In Heat: VMware, KVM, z/VM, PowerVC, Hyper-V.
v In Hybrid: EC2, NIO, SoftLayer.
Supporting TOSCA
IBM Cloud Orchestrator supports importing, deploying, and exporting service
templates according to the OASIS Topology and Orchestration Specification for
Cloud Applications (TOSCA). This support enables the consumption of third-party
content provided in a standardized format.
Managing cost
The IBM SmartCloud Cost Management component of the Enterprise Edition
provides functionality for collecting, analyzing, reporting, and billing that is based
on usage and costs of shared computing resources. With this tool, you can
understand your costs and track, allocate, and invoice based on allocated or actual
resource use by department, user, and many more criteria. For more information
about cost management, see Metering and billing.
Within IBM Cloud Orchestrator, metering is primarily driven from the OpenStack
layer to capture all virtual machine provisioning requests. For more information,
see the OpenStack Collector topic.
Monitoring
In the Enterprise Edition of IBM Cloud Orchestrator, you can monitor workloads
and instances using IBM Tivoli Monitoring. With this component, you can
measure the cost of cloud services with metering and charge-back capabilities. For
more information about monitoring, see Integrating with IBM Tivoli Monitoring
on page 717.
Pattern Engines
Learn about the different IBM Cloud Orchestrator pattern engines and determine
which one is more appropriate for your environment.
OpenStack Heat
OpenStack Heat templates are suitable for scenarios that focus on the
infrastructure. You can create resources such as instances, networks, volumes,
security groups, users, and floating IP addresses, and define the relationships
between these resources (for example, a volume must be attached to a specific
instance, some instances are to be connected using this network, and so on). It
allows the addition of auto-scaling services integrated with OpenStack Ceilometer.
Even though OpenStack Heat can be integrated with Puppet or Chef it is not the
recommended engine to create patterns in which software installation and
configuration are crucial. Images to be deployed via Heat templates require that
cloud-init be installed. For more information about OpenStack Heat, see
Working with Heat templates and stacks on page 317.
See also Chapter 7, Managing and deploying virtual patterns, on page 353, and Chapter 6,
Managing virtual images, on page 331.
[Table: pattern engine comparison; the rightmost column is Patterns - Workload Deployer.
Complexity: Small / Medium / Large.
Purpose: provision a single virtual machine / provision multiple virtual machines with network and storage / provision multiple virtual machines with network, storage, and additional software.
Capabilities: user ID and password or SSH key; graphical editor; define dependencies and order; region and availability zone; lookup of input parameters; add software bundles; network; add script executions; details view; graphical details view; execute script; single VM actions for parts.
Actions: delete; custom actions.
Supported hypervisors: All.]
Overview of OpenStack
IBM Cloud Orchestrator is based on OpenStack (the Icehouse release).
OpenStack is a collection of open source technologies that provide scalable
computing software for both public and private clouds. For detailed information
about OpenStack, see the OpenStack documentation. For a list of the OpenStack
services, refer to Overview of OpenStack.
IBM Cloud Orchestrator uses the following components and services of OpenStack:
Ceilometer
Collects metering data related to CPU and networking.
Image (codenamed Glance)
Provides a catalog and repository for virtual disk images. The virtual disk
images are mostly used in the OpenStack Compute service component.
Compute (codenamed Nova)
Provides virtual servers on demand.
Identity (codenamed Keystone)
Provides authentication and authorizations for all OpenStack services.
Block Storage (codenamed Cinder)
Provides persistent block storage to guest virtual machines.
Network (codenamed Neutron)
Provides network management.
Dashboard (codenamed Horizon)
Provides a web-based user interface.
Orchestration (codenamed Heat)
Provides an engine to launch multiple composite cloud applications based
on templates.
To configure and administer IBM Cloud Orchestrator, use the OpenStack
command-line interface or the Admin Console. For example, you might need to
use the keystone command-line interface to manage authentication and
authorizations, and the glance command-line interface to manage virtual images.
For more information about the OpenStack command-line interface, see the
OpenStack CLI Guide.
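For illustration, the following is a minimal sketch of such commands, assuming that the OpenStack Icehouse command-line clients are installed on the server where you run them and that the credentials file is /root/keystonerc (the path used in the examples later in this guide); verify the exact commands against the OpenStack CLI Guide:

source /root/keystonerc          # load the OpenStack credentials
keystone user-list               # list the defined users
keystone tenant-list             # list the projects (tenants)
glance image-list                # list the virtual images in the Glance repository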
Multitenancy overview
This section describes the roles and delegation in IBM Cloud Orchestrator.
Delegation means that a more powerful role can delegate certain tasks to a less
powerful role. It includes two different types of persona:
Service provider
Is responsible for hosting IBM Cloud Orchestrator and providing the cloud
infrastructure and services.
Service consumer
Consumes services from the service provider and acts only in the context
of the tenant.
[Figure: role delegation between the service provider and the customer (tenant). The Cloud Admin (service provider) onboards a new domain (tenant, LOB, department), grants availability zones to the domain, and defines quota. The Domain Admin onboards users in a domain (tenant) and configures projects, and can delegate to the Catalog Editor. The End User requests virtual machines via the Self-Service UI and runs scripts. More powerful roles inherit the functionality of less powerful ones; for example, the Cloud Admin can do the same tasks as the Domain Admin.]
IBM Cloud Orchestrator provides different user interfaces that are optimized for
the user experience of a specific role. The following user interfaces exist:
Administration user interface
Only used by a Cloud Administrator. The user interface is based on
OpenStack Horizon and allows the configuration of the cloud
infrastructure and identities. The view shows the resources in the context
of a selected region.
IBM Process Designer and IBM Business Process Manager
Is only used by cloud administrators and content developers. It is the main
user interface to develop new toolkits and catalog content like processes
and human services. It can be used to load new content from the IBM
Cloud Orchestrator Catalog.
Self-service user interface
Is mainly used by tenant users, such as domain administrators, catalog
editors, and end users. It provides a self-service portal with a dashboard, a
self-service catalog, and management of the instances owned by the user. It
also supports configuration of the domain and catalog content, and provides
a range of panels to manage patterns, including a graphical editor.
User interfaces are used by the different personas. The following list explains the
roles, starting from the most powerful, Cloud Administrator, to the most restrictive,
End User:
Cloud Administrator (Service provider)
The Cloud Administrator is the most powerful role and can manage and
administer all the cloud infrastructure resources, identities, and the self-service
catalog and its artifacts across all tenants. A special persona of the Cloud
Administrator is the content developer who implements content packs,
processes and coaches that implement the offerings. The Cloud
Administrator can delegate certain tasks, like user, project, pattern, and
catalog configuration to the Domain Administrator.
Domain Administrator (Service consumer)
The Domain Administrator is the most powerful role within a tenant, but
less powerful than the Cloud Administrator. The Domain Administrator is
responsible for setting up users, projects, patterns, and self-service catalog
content in the domain. However, the Domain Administrator can rely only
on resources that are assigned to the domain by the Cloud Administrator.
The Domain Administrator can delegate certain tasks, like pattern and
self-service catalog configuration, to the Catalog Editor.
For more information about the responsibility of the Domain Administrator, see
Administering as domain administrator on page 274.
For more information about the role of service designers, see the following topics:
v Chapter 4, Managing orchestration workflows, on page 293
v Chapter 5, Working with self-service, on page 309
v Chapter 6, Managing virtual images, on page 331
v Chapter 7, Managing and deploying virtual patterns, on page 353
v Chapter 11, Reference, on page 735
For more information about the role of an End User, see Using self-service on
page 309.
Custom extensions
You create custom extensions to IBM Cloud Orchestrator in the Business Process
Manager Process Designer tool and base them on Business Process Manager
business processes. To implement user interface extensions, you can use Business
Process Manager human services.
IBM Cloud Orchestrator delivers a set of Business Process Manager toolkits that
cover the most common automation scenarios in the infrastructure-as-a-service and
platform-as-a-service environments. Each toolkit provides a set of reusable artifacts:
Business processes
A business process is any course of action or procedure that an
organization follows to achieve a larger business goal. When you break it
down, a business process is actually a series of individual tasks or
activities that are performed in a specific order. Business processes provide
the primary means through which enterprise services are integrated.
Services
Services provide functions for a business process, which itself is a sequence
of services. Creating services separately from a business process means a
service can be developed independently of a business process and that
many types of business processes can reuse that service.
Human services
Human service includes an activity in your business process definition that
creates an interactive task that process participants can perform in a
web-based user interface.
Coaches
Coaches are the user interfaces for human services.
Business object definitions
Business objects carry the functional properties, data transformation
information, and file content that the adapter needs to process requests and
generate responses.
With the help of these artifacts, you can efficiently build custom extensions for IBM
Cloud Orchestrator. The provided toolkits also contain numerous samples that
show how to define custom extensions.
You can download more Business Process Manager toolkits from the IBM Cloud
Orchestrator Catalog. These toolkits provide more content for different areas, such
as networking or storage, and you can also use them to build IBM Cloud
Orchestrator extensions.
Restriction: If you define more than one snapshot for a Business Process Manager
process application or toolkit, you can use only the artifacts of the top
level to define a new extension in IBM Cloud Orchestrator.
Deployment modes
IBM Cloud Orchestrator supports several deployment modes on a variety of
hypervisor types.
IBM Cloud Orchestrator supports deployment in Demo, Distributed, and
High-Availability Distributed modes. Optionally, you can also deploy OpenStack
Neutron and an external IBM DB2 database.
IBM Cloud Orchestrator introduces a new deployment topology to make the
management stack highly available. The High-Availability Distributed topology
provides redundancy and improved recovery for core software components of the
IBM Cloud Orchestrator management stack.

The key benefit of the new High-Availability Distributed topology is a reduction of
unplanned and planned downtimes. The new topology ensures that, in certain
failure situations, the processing of the IBM Cloud Orchestrator management stack
is not interrupted, and incoming deployment requests can be processed even though
some components of the cloud management stack failed. Examples include the
introduction of application clustering for IBM Business Process Manager,
OpenStack Keystone, and many other OpenStack components. In addition, an
improved recovery approach is introduced by using classic high-availability
clustering. This approach is especially useful for software components that cannot
run in an Active-Active setup, but allow classic failover scenarios.

Another key benefit of the new High-Availability Distributed deployment topology
is improved performance. Core IBM Cloud Orchestrator components run in an
Active-Active setup, which improves throughput.

The High-Availability Distributed topology is installed through the IBM Cloud
Orchestrator Deployment Service. The Deployment Service automates most parts of
the high-availability installation and configuration, which reduces and simplifies
the overall installation process for the IBM Cloud Orchestrator administrator.
For more information about these deployment modes, see Deployment
topologies on page 16.
IBM Cloud Orchestrator supports the following hypervisor types: KVM, VMware,
Hyper-V, PowerVC, and z/VM.
Amazon EC2 and SoftLayer are supported via the Public Cloud Gateway. For more
information, see Chapter 8, Managing a hybrid cloud, on page 661.
IBM Cloud Orchestrator Enterprise Edition provides additional capabilities from
IBM Tivoli Monitoring and IBM SmartCloud Cost Management. For more
information about these products, see Integrating with IBM Tivoli Monitoring on
page 717 and Metering and billing.
Chapter 2. Installing
Follow this procedure to install IBM Cloud Orchestrator.
Installation overview
Get familiar with the basic concepts of the IBM Cloud Orchestrator installation
topology so that you can plan your installation.
The main components of an IBM Cloud Orchestrator installation topology are:
Deployment Server
Hosts the Deployment Service that is the deployment management
component to deploy an IBM Cloud Orchestrator environment with a
predefined topology by using the related deployment templates.
Central Servers
Host the core IBM Cloud Orchestrator management components.
Region Servers
Are the components used to communicate with a specific hypervisor
management infrastructure (KVM, VMware, Hyper-V, PowerVC, or z/VM).
The KVM Region Server requires one or more KVM compute nodes to
provide the compute resources. The VMware Region Server must connect
to an existing VMware vCenter Server to provide virtual machines.
The Hyper-V Region Server requires one or more Hyper-V compute nodes
to provide the compute resources. The PowerVC Region Server must
connect to an existing PowerVC installation to provide virtual machines.
The z/VM Region Server must connect to the xCAT management node on
z/VM to provide virtual machines.
KVM or Hyper-V Compute Nodes
Are the components used to manage the virtual machines through the
interface provided by KVM or Hyper-V.
The first step in the installation procedure is to install the Deployment Service.
Then, you install the Central Servers and, as last step, you set up the Region
Servers.
Depending on your needs and the available hardware resources, using the
predefined topology templates you can set up one of the following environments:
v A demo environment
v An environment with management components spread across multiple nodes
v An environment with management components spread across multiple nodes
and high availability
After the deployment, you can also use the Deployment Service to manage your
environment in terms of update, scale out, or delete. Additional Region Servers can
be added to enable a multiple-region environment.
Deployment topologies
Before you start to deploy IBM Cloud Orchestrator, you must decide which
deployment topology to install for the IBM Cloud Orchestrator management stack.
IBM Cloud Orchestrator supports the following deployment topologies:
v Demo topology
v Distributed topology
v High-Availability Distributed topology
The first step for each deployment topology is to install the Deployment Service on
a dedicated system. The topology descriptions in this topic do not mention the
Deployment Service; they focus only on the specific IBM Cloud Orchestrator
components.
Demo topology
This topology is the simplest topology, and it is suitable for demo and
proof-of-concept scenarios. The Demo topology requires minimal resources, and
does not use Neutron networking or high-availability features. The Demo topology
supports a single VMware or KVM region only; it does not support more than one
region. The Demo topology is also known as the all-in-one topology.
[Figure: Demo topology. Central Server 1 hosts the database, IBM HTTP Server, Self-Service UI, Administration UI, Public Cloud Gateway, Business Process Manager, qpid, and the OpenStack Nova, Glance, Ceilometer, Cinder, Heat, and Keystone services. Central Server 2 hosts the Workload Deployer component.]
Distributed topology
This topology is typically used for environments in which you can use all the
product features, but do not want to invest in resources for high availability. IBM
Cloud Orchestrator components are installed on three Central Servers. It supports
one or more Region Servers and it supports one or more OpenStack Neutron
servers.
[Figure: Distributed topology, showing Central Servers 1, 2, and 3, one or more Region Servers, and an optional Neutron Server. The components hosted by each server are described in the following paragraphs.]
Central Server 1 hosts the database and OpenStack Ceilometer. You can install the
database on this system or you can use an external database as described in
Configuring an external database on page 40.
Central Server 2 hosts the IBM HTTP Server, Business Process Manager, the Public
Cloud Gateway, the Self-service user interface, the Administration user interface,
and OpenStack Keystone.
Central Server 3 hosts the Workload Deployer component.
Each region server hosts OpenStack Nova, Glance, Cinder, Heat, and qpid. If you
want to use OpenStack Neutron features, you must install OpenStack Neutron on a
dedicated system.
Note: The keystone CLI is always installed on the same system where OpenStack
Keystone is installed. All of the other OpenStack CLI commands are available on
the Region Server.
High-Availability Distributed topology
[Figure: High-Availability Distributed topology. The Central Servers, each primary Region Server, and its secondary Region Server host the same components as in the Distributed topology, together with haproxy and Tivoli System Automation; the database can reside on an external DB server; a System Automation Application Manager (SAAM) server coordinates the high-availability clustering; and an optional Neutron Server provides OpenStack Neutron.]
Hardware prerequisites
The hardware prerequisites are based on the role of the node you are using to
deploy the IBM Cloud Orchestrator.
Table 2. Hardware prerequisites
Machine
Role
Processor
(vCPU)
Memory
(GB)
Overall free
volume
requirements
(GB)
/
Deployment
Server
117
15
22
43
Central
Server 1
115
75
20
10
Central
Server 2
96
40
30
20
Central
Server 3
146
77
KVM Region
Server
77
40
30
KVM
Compute
Node
32
160
80
70
VMware
2
Region Server
77
40
30
Hyper-V
2
Region Server
77
40
30
2
Power
Region Server
77
40
30
z/VM Region 2
Server
77
40
30
Neutron
Server
32
20
Server for
4
all-in-one
deployment
with KVM
Region (demo
topology)
16
350
200
20
30
40
30
30
4
Server for
all-in-one
deployment
with VMware
Region (demo
topology)
16
377
216
22
32
43
32
32
Server for
System
Automation
Application
Manager
50
15
15
10
10
/opt
/var
/tmp
/data
27
/drouter
54
v The specified hard disk space is the minimum free space required on the
machine before the IBM Cloud Orchestrator installation. Be sure that there is
sufficient space for the required partitions.
v For the Deployment Service node, additional disk space (about 14 GB) is
required to store the IBM Cloud Orchestrator packages. If you plan to install
IBM Cloud Orchestrator as highly available, you need 40 GB to store all of
the IBM Cloud Orchestrator packages, including the additional high-availability
packages.
v For Central Server 1, additional space might be required on the /home partition
after a period of time, depending on database size. Monitor the partition size.
Use LVM to manage the partition so that you can extend the size if required.
v For Central Server 3, the /drouter directory is used to store the contents (images
and patterns) of the Workload Deployer component. Increase the partition size
as necessary to match the contents that you want to deploy.
v For a KVM Compute node, the virtual machine master images and the virtual
machine ephemeral disks are located in the /var/lib/nova directory by default.
Plan the required disk space for this directory accordingly.
For the Demo topology, in addition to the server for all-in-one deployment, only
one central server is needed. This central server, where the Workload Deployer
component is installed, must satisfy the Central Server 3 hardware requirements.
For more information about the demo topology, see Deployment topologies on
page 16.
For more information about hardware requirements for the System Automation
Application Manager server, see the Tivoli System Automation Application
Manager V4.1 documentation at http://www-01.ibm.com/support/
knowledgecenter/SSPQ7D_4.1.0/com.ibm.saam.doc_4.1/welcome_saam.html.
For a Neutron Server or KVM Compute Node, use a physical server for better
performance.
You must configure a different Neutron Server for each Region Server that uses a
Neutron network.
For an IBM Cloud Orchestrator environment with high availability configuration,
you must configure a secondary Central Server 2 and a secondary Region Server
(for each primary Region Server in your environment) with the same prerequisites
specified in the previous table. Create these secondary virtual servers on a different
physical host than the primary virtual servers, to ensure that the IBM Cloud
Orchestrator management stack remains available if the primary host fails.
Restriction: For a high-availability installation, only VMware and KVM regions are
supported.
Software prerequisites
Review the software prerequisites for your environment.
IBM Cloud Orchestrator runs on Red Hat Enterprise Linux.
The installer must access one of the following Red Hat Enterprise Linux
repositories:
v Registered Red Hat Network
v Customer-provided yum repository
v Red Hat Enterprise Linux ISO
Note: Before you install the product:
v For highly-available installation: to specify that the load-balancer repository for a
specific system should be managed by Red Hat Network, complete the following
steps:
1. In the Red Hat Network interface, click the Systems tab.
2. Select the system.
3. Click Software > Software Channels.
4. In the Software Channel Subscriptions list, ensure that the RHEL Server
Load Balancer channel option is selected.
v Ensure that the /etc/yum.repos.d/ds.repo file does not exist on the Deployment
Server.
Note: If an operating system yum repository is already configured on an IBM
Cloud Orchestrator node, the Deployment Service does not provide its internal yum
repository to that node. If no operating system yum repository is configured on the
node, the Deployment Service provides its own yum repository as the node yum
repository. This can fail when the Deployment Service provides a Red Hat 6.4 yum
repository to the Neutron node, which actually requires Red Hat 6.5 packages. To
avoid this issue, manually provide a Red Hat 6.5 operating system repository to the
Neutron node before deploying the IBM Cloud Orchestrator node.
Manage-from requirements
IBM Cloud Orchestrator services can be installed on KVM or VMware virtual
machines, or on physical machines.
Depending on the deployment scenario that you choose, the Deployment Service
uses already existing virtual images or it creates them and then installs the
software stack on them.
If you want the Deployment Service to create the virtual machines, ensure that the
virtualization technology (VT) is enabled on the operating system that is running
the Deployment Service.
The following table describes the host and guest operating systems supported for
installing IBM Cloud Orchestrator.
[Table 3. Host and guest operating systems supported by the standard installation. Hypervisors: KVM and VMware, each with a reference to the supported operating system versions.]
Note:
v Be sure that all the IBM Cloud Orchestrator central servers run the same Red
Hat Enterprise Linux version.
v VMware Tools must be installed on all the VMware virtual machines where you
install IBM Cloud Orchestrator.
v If you want to use Red Hat Enterprise Linux 6.6 as guest operating system on
VMware, VMware vCenter Server must be at least version 5.0 u3.
v If you want to use OpenStack Neutron, the server that hosts the Neutron service
must run Red Hat Enterprise Linux 6.5, and iproute must be upgraded to
version 2.6.32-130. The iproute-2.6.32-130.el6ost.netns.2.x86_64.rpm package can be downloaded from
http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel-6/.
For this case, it is recommended that all other IBM Cloud Orchestrator servers
also run Red Hat Enterprise Linux 6.5. If you have IBM Cloud Orchestrator
servers run Red Hat Enterprise Linux 6.4 except the Neutron server, ensure that
the Neutron server has its own yum repository well configured with Red Hat
Enterprise Linux 6.5.
v If you plan to use an existing external database, Red Hat Enterprise Linux 6.4,
6.5, or 6.6 64 bits must be installed on the database system and you must use
DB2 10.5.
v If you migrated from SmartCloud Orchestrator 2.3 and you were using VMware
4.1, you must upgrade to VMware 5.0 or later before using IBM Cloud
Orchestrator 2.4 or later.
You must apply the following configurations to the IBM Cloud Orchestrator
servers:
v The operating systems must be installed at least with the basic server package
group.
v The bind-utils rpm must be installed.
v The python-ldap-2.3.10-1.el6.x86_64 rpm must be installed.
v The SSH daemon must be enabled.
v The network interface must be configured to use static IPs.
v All the IBM Cloud Orchestrator servers must have network connectivity, be in
the same network, and have the network interface used as management network
with the same name (for example, eth0).
v Host name resolution must work across all of the IBM Cloud Orchestrator
servers. You can configure the IBM Cloud Orchestrator servers with the
corporate DNS. If no corporate DNS is available, you must update the
/etc/hosts file on each of the required IBM Cloud Orchestrator servers (for
example, Central Servers, Region Servers, compute nodes) to include all of the
IBM Cloud Orchestrator server hosts. Each entry in the /etc/hosts file should
specify both the fully qualified domain name and the host name, in that order.
To verify that you configured the /etc/hosts file correctly, run the following
commands:
host <IP_address>
This command must return the FQDN of the server (for example,
central_server_2.subdomain.example.com).
hostname --fqdn
This command must return the same FQDN as in the previous
command.
hostname
This command must return the first part of the FQDN, that is the host
name (for example, central_server_2).
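For illustration, an /etc/hosts entry that follows this convention might look like the following line (the IP address and names are examples only):

192.0.2.21   central_server_2.subdomain.example.com   central_server_2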
Manage-to requirements
The following table describes the host and guest operating systems supported by
each type of hypervisor in an IBM Cloud Orchestrator environment.
[Table 4. Host and guest operating systems supported by the hypervisors: KVM, VMware, Hyper-V, PowerVC, and z/VM.]
Note:
v If you migrated from SmartCloud Orchestrator 2.3 and you were using VMware
4.1, you must upgrade to VMware 5.0 or later before using IBM Cloud
Orchestrator 2.4 or later.
v To allow IBM Cloud Orchestrator to connect to VMware vCenter, PowerVC, or
z/VM, you must use an account with specific privileges. For VMware vCenter,
the list of minimum permissions is available at http://docs.openstack.org/
trunk/config-reference/content/vmware.html.
v For z/VM 6.3 systems, you must install the PTFs listed at:
http://www.vm.ibm.com/sysman/xcmntlvl.html
http://www.vm.ibm.com/sysman/osmntlvl.html
v z/VM Single System Image (SSI) configuration is not supported in IBM Cloud
Orchestrator.
v CentOS is supported only for Virtual System Pattern (Classic) when the images
are prepared using the Image Construction and Composition Tool that was
provided with SmartCloud Orchestrator V2.3.
v Images for Linux on System z cannot be used for Virtual System Pattern
(Classic). To deploy a z/VM pattern, you must use the generic OpenStack Image
- Linux [s390x] image to create a Virtual System Pattern.
v PowerVC Express Edition is not supported.
v PowerVC Region Servers do not support secure connections to PowerVC servers
with mixed-case or uppercase host name. For more information, see
Troubleshooting PowerVC region on page 1056.
v PowerVC V1.2.1.2 users must install an interim fix, which is shipped with IBM
Cloud Orchestrator, to ensure that the Workload Deployer component works
correctly. For more information, see Applying interim fix 1 to PowerVC 1.2.1.2
on page 50.
v The Workload Deployer component is not supported on Power6 or Power6+
systems. If you want to use Power6 or Power6+ technology, you can use Heat
templates or the Nova API to deploy servers.
Planning networks
IBM Cloud Orchestrator supports Nova networks and Neutron networks.
You can use Nova networks and Neutron networks in one or more different
regions, but you cannot use Nova networks and Neutron networks within the
same region.
In Hyper-V, PowerVC, and z/VM regions, you can only use Neutron networks.
Nova networks do not support overlapping IP addresses.
Neutron provides richer and more flexible scenarios for end users than Nova
networks. For example, it supports software-defined networks (SDN) via Virtual
Extensible LAN (VXLAN), and it also supports traditional routing accessibility such
as VLAN and FLAT.
Note: You cannot use VXLAN for VMware or Hyper-V managed-to environments.
Prerequisites for Nova networks
v The switch must be configured as trunked to support different VLAN IDs.
v There must be a routable gateway for each VLAN on the router.
v The IP address of the gateway must belong to the subnet that you plan to use.
v If using KVM regions, all the Compute Nodes must use the same interface to
connect to the Region Server (for example, all eth0 or all eth1).
v If the Region Server is a VMware virtual machine, the port group must have the
VLAN ID option set to All (4095).
v VMware Region Servers must be in the same network as all the ESXi servers.
v KVM Region Servers can be in the same network as the Compute Nodes, or in a
different network.
For more information about Nova networks, see Managing Nova networks on
page 88.
Prerequisites for Neutron networks
v The Region Server can be in the same network as the Compute Nodes, or in a
different network.
v The Neutron Server must be in the same network as the Compute Nodes.
v In a VMware region, the port groups are not created by the installer. The name
of the port group must match the label of the network in Neutron.
For more information about Neutron networks, see Managing Neutron networks
on page 93.
Using a virtual machine as a network server
[Figure: network server placement, showing the manage-from hypervisor, the network node, and the compute nodes connected through bridges and port groups.]
where:
MGM Hyper
The manage-from hypervisor, which can be either Linux KVM or VMware
ESXi.
Network Node
The Neutron node if you are using Neutron networks, or the Region Server
if you are using Nova networks.
Compute Node
The compute node where the virtual machines are deployed.
Br
Bridge.
Pg
Port group.
eth0/vmnic0
Examples of NIC names.
The following prerequisites must be met:
v For a VMware virtual machine:
The port groups to which the network server connects must have the VLAN
ID option set to All (4095).
The port groups to which the network server connects must be configured to
accept Promiscuous Mode.
All the vNICs of the corresponding port groups must correlate correctly to the
NICs of the compute nodes. For example, the connected port group eth0 of
the network server must be in the same network as the eth0 of the compute
nodes, the connected port group eth1 of the network server must be in the
same network as the eth1 of the compute nodes, and so on.
The network adapter of the vNIC on the network node must be E1000. If
you use VMXNET3, you might hit a problem where UDP packets are
dropped from the network node.
v For a KVM virtual machine:
The virtual machine must not be running on a compute node.
All the vNICs of the corresponding bridges must correlate correctly to the
NICs of the compute nodes. For example, the connected bridge eth0 of the
network server must be in the same network as the eth0 of the compute
nodes, the connected bridge eth1 of the network server must be in the same
network as the eth1 of the compute nodes, and so on.
The following limitations apply when you install a network server on a KVM
virtual machine:
VXLAN
The network server must communicate with other compute nodes by using
multicast to transmit overlay packets. If the network server is running as a
virtual machine, it relies on the bridge of its hypervisors to connect
externally. A known issue for the kernel of Red Hat Enterprise Linux V6.5
is related to multicast snooping for bridges: sometimes the multicast
packets cannot be received by the virtual machine. If you see this problem,
run the following command to disable multicast snooping in the kernel:
echo 0 > /sys/devices/virtual/net/BRIDGE-OF-NETWORK-SERVER/bridge/multicast_snooping
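To confirm the setting, you can read the same file back (a sketch; replace BRIDGE-OF-NETWORK-SERVER with the bridge name used above). Note that this change is not persistent across reboots:

cat /sys/devices/virtual/net/BRIDGE-OF-NETWORK-SERVER/bridge/multicast_snooping   # expect 0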
Summary of features and capabilities of a network server
The following summary shows which network modes (FLAT, FLATDHCP, VLAN, and VXLAN) are supported for each placement of the network server:
v Nova-network on a VMware virtual machine, on a KVM virtual machine, or on a physical server: FLATDHCP is supported; the other modes are not.
v Neutron on a VMware virtual machine, on a KVM virtual machine, or on a physical server: FLAT, VLAN, and VXLAN are supported; FLATDHCP is not applicable.
Neutron scenarios
If you use a Neutron network, choose one of the supported scenarios for your
environment.
Neutron supports the following scenarios:
Network Server as a gateway
This scenario is suitable when software-defined networks (SDN) are
required. Another important benefit is that network fencing can be
achieved. In this scenario, virtual machines get their internal IP addresses
from the DHCP service and access internet through the SDN gateway.
Because the virtual machines have only internal IP addresses, the only way
to access them is through floating IP addresses. In this scenario, the
network server provides the following functionality:
v DHCP services for different networks
v A gateway for different networks
v Floating IP addresses to the virtual machines to be accessed
Note:
v This scenario can be implemented by using either VLANs or VXLANs.
v Multiple routers are allowed for different networks.
A Cloud Service Provider (CSP) is one type of customer that can leverage
this Neutron configuration. A CSP usually has to support multiple groups
of end users (tenants). Beyond providing network isolation for the
resources provided to each tenant, a CSP may also need to provide to some
tenants the ability to access these resources from the public internet.
Leveraging the configuration described in this scenario, the CSP is able to
provide private resources to its tenants when they connect to the
b. Create a repo file in the /etc/yum.repos.d directory and change the base
path to the directory path that you specified in the mount command. For
example, create the /etc/yum.repos.d/DVD.repo file with the following
content:
[RHEL-Repository]
name=RHEL repository
baseurl=file:///mnt/rhel6
enabled=1
gpgcheck=0
2. Install the KVM management packages via yum by running the following
command:
yum install kvm virt-manager libvirt libvirt-python python-virtinst tunctl
3. Create a bridge (br0) based on the eth0 interface by running the following
command on the host:
virsh iface-bridge eth0 br0
If the command fails, you can manually configure the networks by performing
the following steps:
a. Edit your /etc/sysconfig/network-scripts/ifcfg-eth0 file as in the
following example:
DEVICE=eth0
HWADDR=AA:BB:CC:AA:BB:CC
ONBOOT=yes
BRIDGE=br0
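A corresponding bridge definition is also typically required; the following is a minimal sketch of the /etc/sysconfig/network-scripts/ifcfg-br0 file, assuming a static address (the address values are examples only), followed by a network restart:

# Example bridge definition for br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.0.2.50
NETMASK=255.255.255.0
GATEWAY=192.0.2.1

service network restart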
d. You can review your interfaces and bridge by running the following
commands:
ifconfig
brctl show
Procedure
1. Make sure that the machines meet the hardware prerequisites described in
Hardware prerequisites on page 19.
2. Install Red Hat Enterprise Linux on the machines and ensure to meet the
software prerequisites and to apply all the configurations described in
Manage-from requirements on page 21.
When installing the Red Hat Enterprise Linux operating system for the IBM
Cloud Orchestrator servers, the Basic Server package group is sufficient,
because the IBM Cloud Orchestrator servers deploy script installs the required
packages from the corresponding YUM repository or Red Hat ISO files.
3. Ensure that the hardware clock is configured as UTC. The final line in the
/etc/adjtime file should be the text UTC, as shown in the following example:
619.737272 1411131873 0.000000
1411131873
UTC
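One way to check and, if needed, set the hardware clock to UTC is shown below (a sketch; verify against your environment before running it):

tail -1 /etc/adjtime          # should print UTC
hwclock --systohc --utc       # set the hardware clock from the system clock and record UTC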
4. If you are using virtual machines, ensure that all the clocks are synchronized
with the host clock.
5. Make sure that SELINUX is disabled on Central Server 2. To disable SELINUX, edit
the /etc/selinux/config file and set the parameter SELINUX=disabled. Reboot
the node.
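For example, the following commands (a sketch, assuming a standard Red Hat Enterprise Linux system) check the current mode and disable SELinux persistently:

getenforce                                                    # show the current SELinux mode
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config  # disable SELinux persistently
reboot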
6. Remove the following packages on the Deployment Server and on each IBM
Cloud Orchestrator server, if installed:
facter
hiera
mcollective
mcollective-common
subversion-ruby
libselinux-ruby
ruby-augeas
ruby-rgen
ruby-shadow
rubygem-json
rubygem-stomp
rubygems
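For example, you can remove these packages with a single yum command (a sketch; yum reports a "No Match" message for any listed package that is not installed and removes the rest):

yum -y remove facter hiera mcollective mcollective-common subversion-ruby \
  libselinux-ruby ruby-augeas ruby-rgen ruby-shadow rubygem-json rubygem-stomp rubygems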
7. Create a backup of the Deployment Server and of each IBM Cloud Orchestrator
server. For example, create a snapshot of each virtual server.
If you are upgrading, copy the image files to a different temporary location, such
as /opt/ico_install_2402. Do not overwrite the original installation files.
In this procedure, the example temporary directory is /opt/ico_install. Replace
/opt/ico_install with the appropriate value for your installation or upgrade.
Procedure
1. Download the required IBM Cloud Orchestrator image files:
v If you want to install or upgrade to IBM Cloud Orchestrator V2.4.0.2 or IBM
Cloud Orchestrator Enterprise Edition V2.4.0.2, download the following files
from the IBM Fix Central site:
2.4.0-CSI-ICO-FP0002
2.4.0-CSI-ICO-FP0002.README
2. Copy the IBM Cloud Orchestrator image files to the Deployment Server and
extract the contents. For example, to extract the contents of the image files into
the /opt/ico_install directory, run the following command for each image file:
tar -xf image_file.tar -C /opt/ico_install
3. Download the following Business Process Manager packages and copy them to
the /opt/ico_install/data/orchestrator-chef-repo/packages/bpm_binaries/
directory:
BPM_Std_V85_Linux_x86_1_of_3.tar.gz
BPM_Std_V85_Linux_x86_2_of_3.tar.gz
BPM_Std_V85_Linux_x86_3_of_3.tar.gz
4. Download the following IBM HTTP Server packages and copy them to the
/opt/ico_install/data/orchestrator-chef-repo/packages/ihs_binaries/
directory:
WAS_V85_SUPPL_3_OF_3.zip
WAS_V85_SUPPL_2_OF_3.zip
WAS_V85_SUPPL_1_OF_3.zip
5. If you want to install or upgrade IBM Cloud Orchestrator with high availability
capabilities, download the following additional packages.
a. Download the following System Automation for Multiplatforms packages
and copy them to the /opt/ico_install/data/orchestrator-chef-repo/
packages/samp/ directory:
SA_MP_4.1_Linux.tar
4.1.0-TIV-SAMP-Linux-FP0001.tar
c. Download the following Jazz for Service Management package and copy it
to the /opt/ico_install/data/orchestrator-chef-repo/packages/jazzsm
directory:
JAZZ_FOR_SM_1.1.0.2_FOR_LINUX_ML.zip
d. Download the following WebSphere Application Server for Jazz for Service
Management package and copy it to the /opt/ico_install/data/
orchestrator-chef-repo/packages/was directory:
WAS_V8.5.0.1_FOR_JAZZSM_LINUX_ML.zip
6. If you are using a Red Hat Enterprise Linux ISO file instead of using Red Hat
Subscription Management or yum as a package repository, copy the ISO file to
the /opt directory on the Deployment Server. For more information about the
Red Hat Enterprise Linux version required, see Software prerequisites on
page 21.
7. If you want to install or upgrade the additional components for IBM Cloud
Orchestrator Enterprise Edition, see the Download Document for a list of the
images to be downloaded for the following products:
v IBM SmartCloud Cost Management
v IBM Tivoli Monitoring
v IBM Tivoli Monitoring for Virtual Environments
v Jazz for Service Management
For information about installing IBM Cloud Orchestrator Enterprise Edition, see
Installing IBM Cloud Orchestrator Enterprise Edition on page 71.
Procedure
1. Log on to the Deployment Server.
2. Edit the /opt/ico_install/installer/deployment-service.cfg configuration
file to specify the appropriate value for the following parameters:
iso_location
Specifies the location of the ISO image file for the Red Hat Enterprise
Linux 6 (x86_64) installation DVD. This parameter is optional. If the
Deployment Server is registered with Red Hat Subscription
Management or Red Hat Network, or is configured to use an available
Yum repository, the location can be blank. If you are using a
customized Yum repository, ensure that the Yum repository is at the
same version level as the operating system.
managment_network_device
Specifies the network interface that is used to access the other servers
in the IBM Cloud Orchestrator management stack (that is, Central
Servers, Region Servers, Neutron Server, Compute Nodes). The default
value is eth0.
package_checking
Specifies whether the installer should verify that the required packages
for the High-Availability Distributed topology are in the correct location
on the Deployment Server. If this parameter is set to yes and any
required package is not found, the installation stops with an error. If
this parameter is set to no, the installer does not check the packages.
For the High-Availability Distributed topology, this parameter should
be set to yes. For the Demo topology or Distributed topology, this
parameter must be set to no.
Note: This parameter applies to the High-Availability Distributed
topology only. For the Demo topology or Distributed topology, you
must manually check that the required Business Process Manager and
IBM HTTP Server packages are in the correct location on the
Deployment Server. If any required package is not found, the
installation stops with an error. For more information about where the
packages should be located on the Deployment Server, see
Downloading the required image files on page 32.
ntp_servers
Specifies the IP address or fully qualified domain name of the NTP
server. If no value is specified, the installer uses the Deployment Server
as the NTP server for any deployed environments. To specify multiple
servers, use a comma separator. Ensure that at least one of the specified
servers is available.
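For illustration, a hypothetical excerpt of the deployment-service.cfg file with these parameters set might look like the following (the values and the exact file syntax are examples only; keep the format already used in the file shipped with the installer):

iso_location = /opt/rhel-server-6-x86_64-dvd.iso
managment_network_device = eth0
package_checking = no
ntp_servers = ntp1.example.com,ntp2.example.com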
3. If you are installing the High-Availability Distributed topology, you must
enable the load balancer Yum repository. To specify that the load-balancer
repository for a specific system should be managed by Red Hat Network,
complete the following steps:
a. In the Red Hat Network interface, click the Systems tab.
b. Select the system.
c. Click Software > Software Channels.
d. In the Software Channel Subscriptions list, ensure that the RHEL Server
Load Balancer channel option is selected.
For more information about the load balancer repository, see the Red Hat
documentation.
4. Ensure that the /etc/yum.repos.d/ds.repo file does not exist on the
Deployment Server. If no operating system Yum repository is already
configured on the Deployment Server, the Deployment Service provides its own
internal Yum repository.
5. Ensure that the host name of the Deployment Server is specified correctly in
the /etc/hosts file or in the corporate DNS server. For more information about
how to verify the host name configuration, see Software prerequisites on
page 21.
6. Install the Deployment Service by running the following command:
cd /opt/ico_install/installer; sudo ./deploy_deployment_service.sh
7. If the installation does not complete successfully, review the following log files:
v /var/log/cloud-deployer/deployer_bootstrap.log
v /var/log/cloud-deployer/deploy.log
Take the appropriate action as indicated in the log files, and repeat step 6.
Results
The Deployment Service is installed. In accordance with security best practices,
remember to change the default passwords on the Deployment Server, as described
in Changing the Deployment Service passwords on page 125.
Example output:
+--------------------------------------+------------------+
| id                                   | name             |
+--------------------------------------+------------------+
...
| b779f01e-95d6-4ca6-a9ec-a37753f66a2b | sco-allinone-kvm |
...
+------------------------------------------------------------------------+--------+
| description                                                            | status |
+------------------------------------------------------------------------+--------+
...
| SCO Core + additional KVM compute node deployment (Existing machines)  | ACTIVE |
...
+----------------------------+----------------------------+
| created_at                 | updated_at                 |
+----------------------------+----------------------------+
...
| 2014-03-27T06:04:01.533856 | 2014-03-27T06:58:40.911237 |
...
For more details about the required resources for the template, run the ds
template-resources-list command.
Example:
source /root/keystonerc; ds template-resources-list b779f01e-95d6-4ca6-a9ec-a37753f66a2b
Example output:
+---------+------------------+-----------------------------+-----------+
| name    | type             | run_list                    | run_order |
+---------+------------------+-----------------------------+-----------+
| control | Existing Machine | role[sco-allinone-kvm-mgm]  | 1         |
| compute | Existing Machine | role[kvm-compute]           | 2         |
| iwd     | Existing Machine | role[iwd]                   | 3         |
+---------+------------------+-----------------------------+-----------+
where <template_uuid> is the ID of the template that you can get by running the
ds template-list command.
For a list of the deployment parameters, see Deployment parameters on page
213.
If you plan to use Neutron networks, consider that:
v The MGMNetInterface parameter represents an interface that must connect to a
public and routable network.
v The DATANetInterface parameter represents an interface that must connect to a
network dedicated for VXLAN data.
v The EXTNetInterface parameter represents an interface that is used to connect to
a network for external access.
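For example, on a server with three network interfaces, the parameters might be set
as follows (the mapping of interfaces to networks is illustrative only):
MGMNetInterface=eth0
DATANetInterface=eth1
EXTNetInterface=eth2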
You can have the following three Neutron network topologies:
You can run the previous procedure by using either a console-based wizard or the
command line interface.
Note: Use a password that contains only characters from the set a-zA-Z0-9!()-.?[]
to ensure that the installation of the Central Server completes successfully.
Before starting the IBM Cloud Orchestrator deployment, it is important to
understand the concepts and artifacts that are used by the Deployment Service:
Node
The server (virtual machine) that is registered as a resource in the
Deployment Service and on which the IBM Cloud Orchestrator components are deployed.
Template
The deployment template that is used to define different topologies of IBM
Cloud Orchestrator.
Parameter
The parameter inside the template. It is used to customize the deployment
of IBM Cloud Orchestrator, for example to define the network type or the
passwords.
Deployment job
The job that is used to define the deployment of the IBM Cloud Orchestrator
environment. It can be a hierarchy if two jobs are related, for example,
Central Server and Region Server jobs (the Region Server jobs depend on
the Central Server job).
Note: Non-English characters are not supported in the Node, Template, Parameter
and Deployment job name. Make sure to use English names for those resources.
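As a quick orientation, the following Deployment Service commands, all of which are
used later in this chapter, list the resources that correspond to these concepts
(source your openrc or keystonerc file first, as described in the deployment procedures):
ds node-list        # registered node resources
ds template-list    # available deployment templates
ds job-list         # deployment jobs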
If you just want to create databases for the Central Server on this machine, run
./create_dbs.sh central
If you just want to create databases for a single Region Server on this machine, run
./create_dbs.sh region
If you just want to create databases for multiple Region Servers on this machine
(for example, if you want to deploy three regions), run ./create_dbs.sh region=1,2,3
If you want to add a database for a new Region Server on this machine, for
example, a fourth Region Server, run ./create_dbs.sh region=4
The script uses default environment variables to create the database, user, and
schema in the specified DB2 instance on the database server. The default
values of the configuration are defined in the script and can be overridden by
exporting the corresponding system environment variables before running the
create_dbs.sh script. If you create databases for multiple regions, the script uses
the same database with a different user and schema for each region. For example,
the create_dbs.sh region command creates the nova user in the system and database,
while the create_dbs.sh region=1,2,3 command creates the nova1, nova2, and nova3
users in the system and database, one for each region.
The script assumes that the database and user/schema do not exist before
you run the script. Check the DB2 server to make sure that there is no
conflict.
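For example, to override the default Nova database password and the default user
password before creating the databases for three regions, you might run the
following commands (a sketch; the variable names are those used by the script, as
listed and noted below, and the values are placeholders):
export OS_NOVA_DB_PWD=<your_nova_password>
export USER_DEFAULT_PWD=<your_default_password>
./create_dbs.sh region=1,2,3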
The following default environment variables are used in the create_dbs.sh script:
OS_DB_NAME=<OpenStack DB Name, default=OPENSTAC>
DB2_INSTANCE_PORT=<DB2 instance port, default=50000>
OS_NOVA_DB_NAME=<OpenStack Nova DB Name, default=OPENSTAC>
OS_NOVA_DB_USER=<OpenStack Nova DB User, default=nova>
OS_NOVA_DB_PWD=<OpenStack Nova DB Password>
OS_CINDER_DB_NAME=<OpenStack Cinder DB Name, default=OPENSTAC>
OS_CINDER_DB_USER=<OpenStack Cinder DB User, default=cinder>
OS_CINDER_DB_PWD=<OpenStack Cinder DB Password>
OS_HEAT_DB_NAME=<OpenStack Heat DB Name, default=OPENSTAC>
OS_HEAT_DB_USER=<OpenStack Heat DB User, default=heat>
OS_HEAT_DB_PWD=<OpenStack Heat DB Password>
OS_DASH_DB_NAME=<OpenStack Dashboard DB Name, default=OPENSTAC>
OS_DASH_DB_USER=<OpenStack Dashboard DB User, default=dash>
OS_DASH_DB_PWD=<OpenStack Dashboard DB Password>
OS_KEYSTONE_DB_NAME=<OpenStack Keystone DB Name, default=OPENSTAC>
OS_KEYSTONE_DB_USER=<OpenStack Keystone DB User, default=keystone>
OS_KEYSTONE_DB_PWD=<OpenStack Keystone DB Password>
OS_NEUTRON_DB_NAME=<OpenStack Neutron DB Name, default=OPENSTAC>
OS_NEUTRON_DB_USER=<OpenStack Neutron DB User, default=neutron>
OS_NEUTRON_DB_PWD=<OpenStack Neutron DB Password>
OS_CEIL_DB_NAME=<OpenStack Ceilometer DB Name, default=OPENSTAC>
OS_CEIL_DB_USER=<OpenStack Ceilometer DB User, default=ceil>
OS_CEIL_DB_PWD=<OpenStack Ceilometer DB Password>
OS_CEIL_DB_NOSQLPORT=<Mongo DB Port, default=27017>
OS_GLANCE_DB_NAME=<OpenStack Glance DB Name, default=OPENSTAC>
OS_GLANCE_DB_USER=<OpenStack Glance DB User, default=glance>
OS_GLANCE_DB_PWD=<OpenStack Glance DB Password>
BPM_DB_NAME=<BPM DB Name, default=BPMDB>
BPM_DB_USER=<BPM DB User, default=bpmuser>
BPM_DB_PWD=<BPM DB Password>
CMN_DB_NAME=<CMN DB Name, default=CMNDB>
CMN_DB_USER=<CMN DB User, default=bpmuser>
CMN_DB_PWD=<CMN DB Password>
PDW_DB_NAME=<PDW DB Name, default=PDWDB>
PDW_DB_USER=<PDW DB User, default=bpmuser>
PDW_DB_PWD=<PDW DB Password>
Note:
v Database name length must be no more than eight characters, for example,
OPENSTAC and not OPENSTACK.
v For Business Process Manager databases (BPMDB, CMNDB and PDWDB), if you
use the same user name for different databases, make sure that you use the
same password too. Do not use different passwords for the same user.
v Make sure to set USER_DEFAULT_PWD; otherwise, the default password passw0rd is used.
v Make sure that the firewall is configured to allow DB2 communication. The
default port numbers are 50000 and 27017. A sketch for opening these ports follows these notes.
v If you need to restart the database, you must then also restart NoSQL by
following the procedure described in Restarting NoSQL on page 235.
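For example, on a Red Hat Enterprise Linux 6 system that uses iptables, a minimal
sketch for opening the default DB2 and MongoDB ports is the following; adapt it to
the firewall tooling that is used in your environment:
iptables -I INPUT -p tcp --dport 50000 -j ACCEPT
iptables -I INPUT -p tcp --dport 27017 -j ACCEPT
service iptables save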
The following is an example of the output of create_dbs.sh:
{ Address: <HOSTNAME>, Port: 50000, Name: OPENSTAC, User: nova, Password: <PASSWORD>, Fqdn: <HOSTNAME>}
{ Address: <HOSTNAME>, Port: 50000, Name: OPENSTAC, User: dash, Password: <PASSWORD>, Fqdn: <HOSTNAME>}
Normally, external database machines are not dedicated to IBM Cloud Orchestrator;
they might host databases that are used by other services, so it is not appropriate
to restart the DB2 service on those machines:
1. Log on to the external database server with the DB2 instance user.
2. Update the database manager configuration by using the command:
db2 update dbm cfg using authentication server_encrypt
Note: If the database cannot be stopped, stop the applications which are
locking the related database.
| cmn_db            | Existing Database |                          |           |
| compute_db        | Existing Database |                          |           |
| dashboard_db      | Existing Database |                          |           |
| image_db          | Existing Database |                          |           |
| keystone_db       | Existing Database |                          |           |
| metering_db       | Existing Database |                          |           |
| network_db        | Existing Database |                          |           |
| orchestration_db  | Existing Database |                          |           |
| pdw_db            | Existing Database |                          |           |
| volume_db         | Existing Database |                          |           |
+-------------------+-------------------+--------------------------+-----------+
5. Create database nodes for all the existing database resources by running the
following commands (each command on one line):
ds node-create -t IBM::SCO::Database
-p {Address: 192.0.2.147, User: cinder, Password: admin, Name: openstac}
volume_db
ds node-create -t IBM::SCO::Database
-p {Address: 192.0.2.147, User: nova, Password: admin, Name: openstac}
compute_db
ds node-create -t IBM::SCO::Database
-p {Address: 192.0.2.147, User: dash, Password: admin, Name: openstac}
dashboard_db
ds node-create -t IBM::SCO::Database
-p {Address: 192.0.2.147, User: glance, Password: admin, Name: openstac}
image_db
ds node-create -t IBM::SCO::Database
-p {Address: 192.0.2.147, User: keystone, Password: admin, Name: openstac}
keystone_db
ds node-create -t IBM::SCO::Database
-p {Address: 192.0.2.147, User: ceil, Password: admin, Name: openstac}
metering_db
ds node-create -t IBM::SCO::Database
-p {Address: 192.0.2.147, User: neutron, Password: admin, Name: openstac}
network_db
ds node-create -t IBM::SCO::Database
-p {Address: 192.0.2.147, User: heat, Password: admin, Name: openstac}
orchestration_db
ds node-create -t IBM::SCO::Database
-p {Address: 192.0.2.147, User: bpmuser, Password: password, Name: CMNDB}
cmn_db
ds node-create -t IBM::SCO::Database
-p {Address: 192.0.2.147, User: bpmuser, Password: password, Name: BPMDB}
bpm_db
ds node-create -t IBM::SCO::Database
-p {Address: 192.0.2.147, User: bpmuser, Password: password, Name: PDWDB}
pdw_db
You can verify that the nodes were successfully created by running the ds
node-list command.
Because both the KVM Region and the VMware Region use compute_db,
image_db, network_db, orchestration_db, and volume_db, if you decide to install
both the KVM and the VMware Region, you must create these databases twice with
different names.
The following is an example for compute_db (each command on one line):
ds node-create -t IBM::SCO::Database
-p {Address: 192.0.2.147, User: nova, Password: passw0rd, Name: openstac}
kvm_compute_db
ds node-create -t IBM::SCO::Database
-p {Address: 192.0.2.147, User: nova, Password: passw0rd, Name: openstac}
vmware_compute_db
Procedure
1. [Nonroot user only:] If you want to deploy the IBM Cloud Orchestrator Central
Servers as a nonroot user, create an openrc file in your home directory, and
insert the following content:
export OS_USERNAME=admin
export OS_PASSWORD=$(openstack-obfuscate -u encrypted_admin_password)
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://fqdn_or_ip_address_of_deployment_service_node:5000/v2.0
export OS_REGION_NAME=Deployment
2. To run any ds command, you must first source the openrc file:
source ~/openrc
Specify the -f parameter if you want to generate a file that records the
commands that are used by the wizard.
4. Choose the following option to deploy the IBM Cloud Orchestrator Central
Servers:
[0] New IBM Cloud Orchestrator deployment.
- Start a new IBM Cloud Orchestrator deployment.
When prompted, enter the required information. For information about the
deployment parameters, see Deployment parameters on page 213.
5. [Optional:] To review the deployment progress, complete the following steps:
a. Choose the following wizard option:
[4] Deployment job(s) status.
- Display deployment job(s) status.
Results
The IBM Cloud Orchestrator Central Servers are installed.
Verify that the Deployment Server and the Central Servers are ready in your
environment. For more information about installing the Deployment Server and
the Central Servers, see Installing the Deployment Service on page 34 and
Deploying the Central Servers on page 44.
VMware only: Do not install more than one IBM Cloud Orchestrator instance on
the same vCenter environment if the instances have access to the same resources.
Each IBM Cloud Orchestrator instance must use a different userid to access the
vCenter. The intersection between the resources that are seen by these users (for
example, clusters, datastore) must be empty.
PowerVC only: Ensure that the PowerVC server has been installed.
Note: Using a PowerVC server that has been configured for both Shared Storage
Pools and SAN-fabric managed Storage is not recommended. If possible, you
should use two PowerVC servers, each configured for using a single Storage Type,
and two PowerVC Region Servers (including two PowerVC Neutron Servers) to
manage them.
Note: PowerVC must be at version 1.2.1.2 and you must also apply the interim fix
shipped with IBM Cloud Orchestrator. For more information, see Applying
interim fix 1 to PowerVC 1.2.1.2 on page 50.
z/VM only: You must enable the xCAT version that is provided with z/VM 6.3. To
enable xCAT for z/VM 6.3, follow chapters 1-4 in the Enabling z/VM for OpenStack
(Support for OpenStack Icehouse Release) guide, at http://www.vm.ibm.com/
sysman/openstk.html.
Procedure
To deploy a Region Server, use the Deployment Service wizard as described in
Deploying the Central Servers on page 44 and choose the following option:
[1] Modify IBM Cloud Orchestrator deployment.
- Modify an IBM Cloud Orchestrator deployment, add region server or KVM compute node.
Use this option to deploy a Region Server of any hypervisor type: Hyper-V, KVM,
PowerVC, VMware, or z/VM.
If you deploy a KVM Region Server, you must choose this option again to create a
KVM compute node. For multiple KVM compute nodes, you must choose this
option multiple times: once for each KVM compute node.
You cannot use the wizard to deploy a Hyper-V compute node. Instead, you must
use the procedure documented in Installing a Hyper-V compute node on page
47.
Follow the interactive procedure by entering the required information. For
information about the deployment parameters, see Deployment parameters on
page 213.
Note: If you do not create a job correctly and you want to delete the job, you must
use the CLI ds job-delete command. You cannot delete a job by using the wizard.
For information about how to install a Region Server by using the command-line
interface, see Deploying a Region Server (CLI method) on page 201.
The command should return the value 1. If not, you can enable IP forwarding
in the /etc/sysctl.conf file. To activate the setting without rebooting, run the
sysctl -p command.
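For reference, a minimal check-and-activate sequence with standard Linux commands is:
sysctl net.ipv4.ip_forward
# shows net.ipv4.ip_forward = 1 when IP forwarding is enabled
# if it is 0, set net.ipv4.ip_forward = 1 in /etc/sysctl.conf, then run:
sysctl -p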
About this task
To create a Hyper-V compute node, you install the Hyper-V compute services on a
Hyper-V server. Hyper-V Compute Nodes cannot be installed by using the
Deployment Service wizard, and must be installed manually as described in this
topic.
Procedure
1. In the location where you extracted the IBM Cloud Orchestrator package,
locate the following two files:
/data/openstack/packages/driver/IBM Cloud Manager with OpenStack Hyper-V Agent.msi
/data/openstack/packages/ntp/ntp-4.2.6p5-ibm-win32-setup.exe
Table 6. (continued)
Parameter              Value
qpid server
qpid port              5672
qpid username          guest
qpid password          <qpid password>
neutron URL            http://<Region Server_FQDN>:9696
username               neutron
password               <password in neutron>
tenant name            service
region name            <region name>
authentication URL
10. When planning to run with multiple Hyper-V compute nodes, make sure that
all of them are enabled for live migration (otherwise the virtual machine
resize operations will fail). For details see http://docs.openstack.org/
icehouse/config-reference/content/hyper-v-virtualization-platform.html.
11. For Hyper-V Server only, you need to install an ISO tool (for example, cygwin
genisoimage) according to: http://dev15alt2.raleigh.ibm.com:9090/kc/SST55W_4.1.0/liaca/liaca_enabling_hyperv_2012_systems_for_iso_generation.html.
Then, in c:\Program Files (x86)\IBM\Cloud Manager with OpenStack\Hyper-V
Agent\etc\nova\nova.conf, update the location of the ISO image tool that you chose,
for example cygwin genisoimage:
mkisofs_cmd=c:\cygwin64\bin\genisoimage.exe
13. For multiple regions, you must specify the following parameter in c:\Program
Files (x86)\IBM\Cloud Manager with OpenStack\Hyper-V
Agent\etc\nova\nova.conf:
cinder_endpoint_template=http://<RegionServer_FQDN>:8776/v1/%(project_id)s
14. Restart IBM Cloud Manager with OpenStack Compute Service and IBM Cloud
Manager with OpenStack Network Service. To stop and start the Hyper-V services,
use the Windows net stop and net start commands with the respective service names.
15. Ensure that the VLAN network range is correct for your environment.
On the Hyper-V Compute Node, edit the C:\Program Files (x86)\IBM\Cloud
Manager with OpenStack\Hyper-V Agent\etc\neutron\
hyperv_neutron_agent.ini configuration file, and verify that the specified
network_vlan_ranges value is suitable for your environment, as shown in the
following example:
network_vlan_ranges=default:100:3999
1. [Nonroot user only:] If you want to create the KVM compute node as a nonroot
user, create an openrc file in your home directory, and insert the following content:
export OS_USERNAME=admin
export OS_PASSWORD=$(openstack-obfuscate -u encrypted_admin_password)
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://fqdn_or_ip_address_of_deployment_service_node:5000/v2.0
export OS_REGION_NAME=Deployment
2. To run any ds command, you must first source the openrc file:
source ~/openrc
Specify the -f parameter if you want to generate a file that records the
commands that are used by the wizard.
4. Choose the following option to create a KVM compute node:
[1] Modify IBM Cloud Orchestrator deployment.
- Modify an IBM Cloud Orchestrator deployment, add region server or KVM compute node.
When prompted, enter the required information. For information about the
deployment parameters, see Deployment parameters on page 213.
5. To create multiple compute nodes, repeat the previous step as many times as
required. You can create only one compute node at a time.
6. If you do not create a job correctly and you want to delete the job, you must
use the ds job-delete CLI command. You cannot delete a job by using the wizard.
Results
The KVM compute node is created.
Configuring a PowerVC Region Server:
To configure a PowerVC Region Server, you must apply any required fixes, copy
the SSL certificate, and configure storage. You must then restart some services.
Applying interim fix 1 to PowerVC 1.2.1.2:
PowerVC 1.2.1.2 requires an interim fix shipped with IBM Cloud Orchestrator to
be installed in order to function correctly with Workload Deployer.
To do this, copy the interim fix from the /utils/powervc/ directory to the server
that hosts PowerVC 1.2.1.2, untar the interim fix, and then run the update command:
./update
After the update, it is necessary to restart the PowerVC services on the PowerVC
Region Server if they are already running.
On the PowerVC Region Server run:
service openstack-glance-powervc restart
service openstack-neutron-powervc restart
service openstack-nova-powervc restart
service openstack-cinder-powervc restart
2. Copy the certificate to the PowerVC Region Server by running the following
command on the PowerVC Region Server:
scp root@<powervc server>:/etc/pki/tls/certs/powervc.crt /etc/pki/tls/certs/powervc.crt
3. Verify that all services are running by logging on to Central Server 1 and
running SCOrchestrator.py. For details, refer to Managing services with
SCOrchestrator.py on page 223.
Results
The certificate has been copied to the PowerVC Region Server.
What to do next
Network setup
DHCP is not supported by the PowerVC OpenStack driver. When creating the
network in Horizon for use with PowerVC, ensure that the DHCP box is disabled.
Configure PowerVC with Shared Storage Pools
For the PowerVC Region Server to communicate with a PowerVC setup that leverages
Shared Storage Pools, you must manually configure the Storage Connectivity Groups
in /etc/powervc/powervc.conf. In /etc/powervc/powervc.conf on the Region Server,
there is a variable named storage_connectivity_group in the [powervc] section of
the configuration file. By default, its value is Any host, all VIOS, which is used
for PowerVC systems that are set up using an SVC. If you are using Shared Storage
Pools, replace this value with the string specified under Shared Storage
Pool-Backed Groups in the PowerVC user interface under CONFIGURATION >
Storage Connectivity Groups. If you have more than one Shared Storage
Pool-Backed Group, or if you want to use a different type of storage, deploy one
PowerVC Region Server per storage connectivity group, where each PowerVC
Region Server connects to a single Storage Connectivity Group.
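A sketch of the resulting entry in /etc/powervc/powervc.conf on the Region Server
(the group name is a placeholder; use the exact string that is displayed in the
PowerVC user interface):
[powervc]
storage_connectivity_group = <your Shared Storage Pool-Backed Group name>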
Restart the following services:
service openstack-glance-powervc restart
service openstack-neutron-powervc restart
service openstack-nova-powervc restart
service openstack-cinder-powervc restart
service openstack-cinder-volume restart
Parameter                     Default Value
RegionName                    RegionZVM
OSLibvirtType                 zvm
ComputeDriver                 nova.virt.zvm.ZVMDriver
XcatUsername                  admin
XcatServer
XcatZhcpNodename              zhcp
XcatMaster                    xcat
XcatMgtIp                     XcatMgtIp
XcatMgtMask                   255.255.255.192
ZvmDiskPool                   ROOTP1
ZvmDiskPoolType               ECKD
ZvmHost                       tivlp57
VswitchMappings               xcatvsw1:6443;xcatvsw2:6243,5244
DatabagXcatPassword           admin
DatabagZlinuxRootPassword     password
DatabagMnadmin                mnpass
Ml2MechanismDrivers           zvm
Ml2FlatNetworks               xcatvsw2
Procedure
1. Create destination directories:
# mkdir /drouter
# mkdir -p /opt/app/workload/sco/drouter /drouter
2. Mount:
# mount -o bind /opt/app/workload/sco/drouter /drouter
This mount is removed after a server shutdown. After you verify that
Workload Deployer works properly, a system administrator can add a
permanent mount to /etc/fstab, as shown in the sketch that follows. For the
/opt/ibm directory, the steps are similar.
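A sketch of an /etc/fstab entry that makes the bind mount permanent (the paths are
the ones used in this procedure):
/opt/app/workload/sco/drouter  /drouter  none  bind  0 0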
failover for components such as haproxy, IBM HTTP Server, and so on. The
automation policy for System Automation for Multiplatforms is fully
automated. The cluster is configured and started after the installation.
6. Install one or more Region Servers, as described in Installing the Region
Server on page 58.
The High-Availability Distributed topology supports the following Region
Server configurations:
v VMware Region with Neutron network
v VMware Region with Nova network
v KVM Region with Neutron network
v KVM Region with Nova network
For all Region Server installations, a policy snippet is created automatically and
is included in the overall System Automation Application Manager policy.
Restriction: For a high-availability installation, only VMware and KVM regions
are supported.
The High-Availability Distributed topology is now installed. Complete the final
configuration steps as described in Configuring high availability on page 119.
For more information about high availability, see Managing high availability on
page 235.
Procedure
1. Create a virtual machine with a supported operating system installed. For
information about supported operating systems see Supported hardware and
operating systems.
2. Create a node resource for the virtual machine, and register the resource in the
Deployment Service:
ds node-create -t "IBM::SCO::Node"
-p "{Address: saam_address, Port: 22, User: root, Password: password }" saam
where
IBM::SCO::Node
indicates that the virtual machine is registered as a node resource.
saam_address
is the IP address of the virtual machine.
password
is the root password for the virtual machine.
3. Run the ds template-list command to find the ID of the HA-saam template.
4. Run the ds node-list command to find the ID of the saam node that you
created in step 2.
5. Create the deployment job:
ds job-create -t template_id -P "MGMNetInterface=net_interface"
-N "saam=saam_nodeid" saam-job
where
template_id
is the ID of the HA-saam template, which you identified in step 3 on
page 54.
net_interface
is the network interface that is used for communication between the
IBM Cloud Orchestrator management components: for example, eth0.
This value must be consistent on each node.
saam_nodeid
is the ID of the saam node, which you identified in step 4 on page 54.
If a problem occurs during job creation and the job status is WARNING or ERROR,
run the ds job-show <job ID> command to inspect the details of the job which
also include the related error message.
6. Run the deployment job:
ds job-execute saam-job
Results
System Automation Application Manager is installed in the high-availability
topology.
Procedure
1. Create the following virtual machines with a supported operating system
installed, to be configured as the Central Servers:
v Central Server 1
v Central Server 2 primary
v Central Server 2 secondary
v Central Server 3
2. Create a node resource for each virtual machine, and register the resources in
the Deployment Service:
ds node-create -t "IBM::SCO::Node"
-p "{Address: node_address, Port: port,
User: user_name, Password: password}"
central_server_1
ds node-create -t "IBM::SCO::Node"
-p "{Address: node_address, Port: port,
User: user_name, Password: password}"
central_server_2p
ds node-create -t "IBM::SCO::Node"
-p "{Address: node_address, Port: port,
User: user_name, Password: password}"
central_server_2s
ds node-create -t "IBM::SCO::Node"
-p "{Address: node_address, Port: port,
User: user_name, Password: password}"
central_server_3
where
IBM::SCO::Node
indicates that the virtual machine is registered as a node resource.
node_address
is the IP address of the virtual machine.
port
is the port that is used to connect to the virtual machine: for example, 22.
user_name
is the name of a root user on the virtual machine.
password
is the password for the specified root user on the virtual machine.
3. Run the ds template-list command to find the ID of the HA-sco-central-servers-extdb template.
4. Reregister the saam node installed in Installing System Automation Application
Manager on page 54 with a different name, for example, cs_saam. Run ds
node-create -t "IBM::SCO::Node" -p "{Address: saam_address, Port: 22,
User: root, Password: password }" cs_saam. Use the nodeid created there in
the subsequent ds job-create command.
5. Run the ds node-list command to find the IDs of the following resources:
v The System Automation Application Manager resource: saam
v The Central Server resources that you created in step 2 on page 55:
central_server_1
central_server_2p
central_server_2s
central_server_3
v The external database resources:
bpm_db
cmn_db
dashboard_db
keystone_db
metering_db
pdw_db
6. Create the deployment job. The parameters within the command must be
entered without any spaces. The parameters are shown on separate lines for
clarity:
ds job-create
-t <template_id>
-P "CentralServer2VirtualAddress=<CentralServer2VirtualAddress>;
CentralServer2VirtualHostname=<CentralServer2VirtualHostname>;
CentralServer2VirtualNetmask=<CentralServer2VirtualNetmask>;
CentralServer2VirtualTieBreaker=<CentralServer2VirtualTieBreaker>;
MGMNetInterface=<net_interface>;
SingleSignOnDomain=<SingleSignOnDomain>;
OrchestratorPassword=<External DB instance password>"
-N "saam=<saam_nodeid>;
central_server_1=<central_server_1_nodeid>;
central_server_2_p=<central_server_2_p_nodeid>;
central_server_2_s=<central_server_2_s_nodeid>;
central_server_3=<central_server_3_nodeid>;
bpm_db=<bpm_db_nodeid>;
dashboard_db=<dashboard_db_nodeid>;
keystone_db=<keystone_db_nodeid>;
metering_db=<metering_db_nodeid>;
pdw_db=<pdw_db_nodeid>;
cmn_db=<cmn_db_nodeid>"
HA-sco-central-servers-extdb_job
where
<template_id>
Is the ID of the HA-sco-central-servers-extdb template, which you
identified in step 3 on page 56.
<CentralServer2VirtualAddress>
Is the virtual address to be used for Central Server 2. To achieve high
availability, this server is run on two instances. Each instance requires
its own IP address. The virtual address is an additional IP address,
which is assigned to the active virtual machine of a System Automation
for Multiplatforms cluster. If the primary virtual machine fails, the
virtual address is moved to the secondary virtual machine, and the
secondary virtual machine becomes the active virtual machine.
Applications access the services running on this high availability cluster
by using the virtual IP address.
Note: You must create the IP address in the DNS server; the virtual machine
does not need to exist.
<CentralServer2VirtualHostname>
Is the host name of the virtual address to be used for Central Server 2.
It must be in FQDN format.
Note: CentralServer2VirtualHostname is the fully qualified domain
name (FQDN) of IP address of CentralServer2VirtualAddress in the
DNS server.
<CentralServer2VirtualNetmask>
Is the netmask of the virtual address to be used for Central Server 2.
<CentralServer2VirtualTieBreaker>
Is the IP address of the default gateway of the subnet that is used for
the virtual machines of the IBM Cloud Orchestrator management stack.
The System Automation for Multiplatforms cluster uses network
connectivity to this default network gateway to determine if a node is
still up and running. If a cluster split occurs, the tiebreaker determines
which part of the cluster gets quorum and can manage active resources.
For more information about quorums and tiebreakers, see Operational
quorum.
<net_interface>
Is the network interface that is used for communication between the
IBM Cloud Orchestrator management components: for example, eth0.
This value must be consistent on each node.
<SingleSignOnDomain>
Is the DNS domain name where the manage-from components are
installed. The manage-from components include IBM HTTP Server, IBM
Cloud Orchestrator user interfaces, Workload Deployer, and Business
Process Manager.
<resource_nodeid>
Are the resource IDs that you identified in step 5 on page 56.
7. Run the deployment job:
ds job-execute HA-sco-central-servers-extdb_job
Results
The Central Servers are installed in the high-availability topology.
Related concepts:
Using the command-line interface to deploy IBM Cloud Orchestrator on page
198
You can use the command-line interface to deploy your IBM Cloud Orchestrator
environment.
Deployment parameters on page 213
Check the list of all deployment parameters that you can configure before
deploying IBM Cloud Orchestrator, and the parameter default values.
Configuring an external database on page 40
If you want to use an external database in your IBM Cloud Orchestrator
environment, you must configure it by following this procedure.
Procedure
1. Create four virtual machines with a supported operating system installed, to be
configured as the Region Server: KVM Region Server primary, KVM Region
Server secondary, KVM compute node, and Neutron network node.
2. Create a node resource for each virtual machine, and register the resources in
the Deployment Service:
ds node-create -t "IBM::SCO::Node"
-p "{Address: node_address, Port: port,
User: user_name, Password: password }"
kvm_region_server_p
ds node-create -t "IBM::SCO::Node"
-p "{Address: node_address, Port: port,
User: user_name, Password: password }"
kvm_region_server_s
ds node-create -t "IBM::SCO::Node"
-p "{Address: node_address, Port: port,
User: user_name, Password: password }"
kvm_compute
ds node-create -t "IBM::SCO::Node"
-p "{Address: node_address, Port: port,
User: user_name, Password: password }"
neutron_network_node
where
IBM::SCO::Node
indicates that the virtual machine is registered as a node resource.
node_address
is the IP address of the virtual machine.
port
is the port that is used to connect to the virtual machine: for example, 22.
user_name
is the name of a root user on the virtual machine.
password
is the password for the specified root user on the virtual machine.
3. Run the ds template-list command to find the ID of the HA-kvm_region-with-compute-neutron-extdb template.
4. Reregister the saam node installed in Installing System Automation Application
Manager on page 54 with a different name, for example, kvm_saam. Run ds
node-create -t "IBM::SCO::Node" -p "{Address: saam_address, Port: 22,
User: root, Password: password }" kvm_saam. Use the nodeid created there in
the subsequent ds job-create command.
5. Run the ds node-list command to find the IDs of the following resources:
v The System Automation Application Manager resource: kvm_saam that you
created in step 4.
v The Region Server resources that you created in step 2: kvm_region_server_p,
kvm_region_server_s, kvm_compute, and neutron_network_node.
v The external database resources: compute_db, image_db, network_db,
orchestration_db, and volume_db.
6. Run the ds job-list command to find the ID of the HA-sco-central-servers-extdb_job job.
7. Create the deployment job:
ds job-create -t template_id
-P "RegionServerVirtualAddress=RegionServerVirtualAddress;
RegionServerVirtualHostname=RegionServerVirtualHostname;
RegionServerVirtualNetmask=RegionServerVirtualNetmask;
RegionServerVirtualTieBreaker=RegionServerVirtualTieBreaker;
MGMNetInterface=net_interface;
OrchestratorPassword=External DB instance password"
-N "saam=saam_nodeid;
kvm_region_server_p=kvm_region_server_p_nodeid;
kvm_region_server_s=kvm_region_server_s_nodeid;
neutron_network_node=neutron_network_node_nodeid;
compute_db=compute_db_nodeid;
image_db=image_db_nodeid;
network_db=network_db_nodeid;
orchestration_db=orchestration_db_nodeid;
volume_db=volume_db_nodeid"
-p central_server_job_id
HA-kvm_region-with-compute-neutron-extdb_job
where
template_id
is the ID of the HA-kvm_region-with-compute-neutron-extdb template, which you
identified in step 3 on page 59.
RegionServerVirtualAddress
Is the virtual address to be used for Region Server. To achieve high
availability, this server is run on two instances. Each instance requires
its own IP address. The virtual address is an additional IP address,
which is assigned to the active virtual machine of a System Automation
for Multiplatforms cluster. If the primary virtual machine fails, the
virtual address is moved to the secondary virtual machine, and the
secondary virtual machine becomes the active virtual machine.
Applications access the services running on this high availability cluster
by using the virtual IP address.
RegionServerVirtualHostname
is the host name of the virtual address to be used for the Region Server.
RegionServerVirtualNetmask
is the netmask of the virtual address to be used for the Region Server.
RegionServerVirtualTieBreaker
is the TieBreaker for the System Automation for Multiplatforms cluster
on the Region Server. If a cluster split occurs, the tiebreaker determines
which part of the cluster gets quorum and can manage active resources.
For example, you can use the gateway of the network as the tiebreaker.
For more information about quorums and tiebreakers, see Operational
quorum.
net_interface
is the network interface that is used for communication between the
IBM Cloud Orchestrator management components: for example, eth0.
This value must be consistent on each node.
OrchestratorPassword
is the password of the instance in the external database where the IBM
Cloud Orchestrator database is installed.
<X>_nodeid
are the resource IDs that you identified in step 5 on page 59.
central_server_job_id
is the ID of the HA-sco-central-servers-extdb_job job, which you
identified in step 6 on page 59.
If a problem occurs during job creation and the job status is WARNING or ERROR,
run the ds job-show <job ID> command to inspect the details of the job which
also include the related error message.
8. Run the deployment job:
ds job-execute HA-kvm_region-with-compute-neutron-extdb_job
Results
The KVM Region Server with Neutron network is installed in the high-availability
topology.
Procedure
1. Create three virtual machines with a supported operating system installed, to
be configured as the Region Server: KVM Region Server primary, KVM Region
Server secondary, and KVM compute node.
2. Create a node resource for each virtual machine, and register the resources in
the Deployment Service:
ds node-create -t "IBM::SCO::Node"
-p "{Address: node_address, Port: port,
User: user_name, Password: password }"
kvm_region_server_p
ds node-create -t "IBM::SCO::Node"
-p "{Address: node_address, Port: port,
User: user_name, Password: password }"
kvm_region_server_s
ds node-create -t "IBM::SCO::Node"
-p "{Address: node_address, Port: port,
User: user_name, Password: password }"
kvm_compute
where
IBM::SCO::Node
indicates that the virtual machine is registered as a node resource.
node_address
is the IP address of the virtual machine.
port
is the port that is used to connect to the virtual machine: for example, 22.
user_name
is the name of a root user on the virtual machine.
password
is the password for the specified root user on the virtual machine.
where
template_id
is the ID of the HA-kvm_region-with-compute-extdb template, which
you identified in step 3.
RegionServerVirtualAddress
Is the virtual address to be used for Region Server. To achieve high
availability, this server is run on two instances. Each instance requires
its own IP address. The virtual address is an additional IP address,
which is assigned to the active virtual machine of a System Automation
for Multiplatforms cluster. If the primary virtual machine fails, the
virtual address is moved to the secondary virtual machine, and the
secondary virtual machine becomes the active virtual machine.
Applications access the services running on this high availability cluster
by using the virtual IP address.
RegionServerVirtualHostname
is the host name of the virtual address to be used for the Region Server.
RegionServerVirtualNetmask
is the netmask of the virtual address to be used for the Region Server.
RegionServerVirtualTieBreaker
is the TieBreaker for the System Automation for Multiplatforms cluster
on the Region Server. If a cluster split occurs, the tiebreaker determines
which part of the cluster gets quorum and can manage active resources.
For example, you can use the gateway of the network as the tiebreaker.
For more information about quorums and tiebreakers, see Operational
quorum.
net_interface
is the network interface that is used for communication between the
IBM Cloud Orchestrator management components: for example, eth0.
This value must be consistent on each node.
OrchestratorPassword
is the password of the instance in the external database where the IBM
Cloud Orchestrator database is installed.
<X>_nodeid
are the resource IDs that you identified in step 5 on page 62.
central_server_job_id
is the ID of the HA-sco-central-servers-extdb_job job, which you
identified in step 6 on page 62.
If a problem occurs during job creation and the job status is WARNING or ERROR,
run the ds job-show <job ID> command to inspect the details of the job which
also include the related error message.
8. Run the deployment job:
ds job-execute HA-kvm_region-with-compute-extdb_job
Results
The KVM Region Server with Nova network is installed in the high-availability
topology.
Procedure
1. Create three virtual machines with a supported operating system installed, to
be configured as the Region Server: VMware Region Server primary, VMware
Region Server secondary, and Neutron network node.
2. Create a node resource for each virtual machine, and register the resources in
the Deployment Service:
ds node-create -t "IBM::SCO::Node"
-p "{Address: node_address, Port: port,
User: user_name, Password: password }"
vmware_region_server_p
ds node-create -t "IBM::SCO::Node"
-p "{Address: node_address, Port: port,
User: user_name, Password: password }"
vmware_region_server_s
ds node-create -t "IBM::SCO::Node"
-p "{Address: node_address, Port: port,
User: user_name, Password: password }"
neutron_network_node
where
IBM::SCO::Node
indicates that the virtual machine is registered as a node resource.
node_address
is the IP address of the virtual machine.
port
is the port that is used to connect to the virtual machine: for example, 22.
user_name
is the name of a root user on the virtual machine.
password
is the password for the specified root user on the virtual machine.
3. Run the ds template-list command to find the ID of the HA-vmware_region-neutron-extdb template.
4. Reregister the saam node installed in Installing System Automation Application
Manager on page 54 with a different name, for example, vmware_n_saam. Run
ds node-create -t "IBM::SCO::Node"
-p "{Address: saam_address, Port: 22, User: root, Password: password }"
vmware_n_saam
volume_db=volume_db_nodeid"
-p central_server_job_id
HA-vmware_region-neutron-extdb_job
where
template_id
is the ID of the HA-vmware_region-neutron-extdb template, which you
identified in step 3 on page 64.
RegionServerVirtualAddress
Is the virtual address to be used for Region Server. To achieve high
availability, this server is run on two instances. Each instance requires
its own IP address. The virtual address is an additional IP address,
which is assigned to the active virtual machine of a System Automation
for Multiplatforms cluster. If the primary virtual machine fails, the
virtual address is moved to the secondary virtual machine, and the
secondary virtual machine becomes the active virtual machine.
Applications access the services running on this high availability cluster
by using the virtual IP address.
RegionServerVirtualHostname
is the host name of the virtual address to be used for the Region Server.
RegionServerVirtualNetmask
is the netmask of the virtual address to be used for the Region Server.
RegionServerVirtualTieBreaker
is the TieBreaker for the System Automation for Multiplatforms cluster
on the Region Server. If a cluster split occurs, the tiebreaker determines
which part of the cluster gets quorum and can manage active resources.
For example, you can use the gateway of the network as the tiebreaker.
For more information about quorums and tiebreakers, see Operational
quorum.
VMServerHost
is the IP address of the vCenter server.
VMServerUserName
is the name of a user on the vCenter server.
VMServerPassword
is the password for the specified user on the vCenter server.
VMClusterName
is the cluster name in vCenter that is used to start the virtual machine.
net_interface
is the network interface that is used for communication between the
IBM Cloud Orchestrator management components: for example, eth0.
This value must be consistent on each node.
<X>_nodeid
are the resource IDs that you identified in step 5 on page 64.
central_server_job_id
is the ID of the HA-sco-central-servers-extdb_job job, which you
identified in step 6 on page 64.
OrchestratorPassword
is the password of the instance in the external database where the IBM
Cloud Orchestrator database is installed.
If a problem occurs during job creation and the job status is WARNING or ERROR,
run the ds job-show <job ID> command to inspect the details of the job which
also include the related error message.
You can run the ds template-parameters-list HA-vmware_region-neutron-extdb
command to get the list of the parameters that you can specify for the
HA-vmware_region-neutron-extdb template.
8. Run the deployment job:
ds job-execute HA-vmware_region-neutron-extdb_job
Results
The VMware Region Server with Neutron network is installed in the
high-availability topology.
Procedure
1. Create two virtual machines with a supported operating system installed, to be
configured as the Region Server: VMware Region Server primary and VMware
Region Server secondary.
2. Create a node resource for each virtual machine, and register the resources in
the Deployment Service:
ds node-create -t "IBM::SCO::Node"
-p "{Address: node_address, Port: port,
User: user_name, Password: password }"
vmware_region_server_p
ds node-create -t "IBM::SCO::Node"
-p "{Address: node_address, Port: port,
User: user_name, Password: password }"
vmware_region_server_s
where
IBM::SCO::Node
indicates that the virtual machine is registered as a node resource.
node_address
is the IP address of the virtual machine.
port
is the port that is used to connect to the virtual machine: for example, 22.
user_name
is the name of a root user on the virtual machine.
password
is the password for the specified root user on the virtual machine.
3. Run the ds template-list command to find the ID of the HA-vmware_region-extdb template.
4. Reregister the saam node installed in Installing System Automation Application
Manager on page 54 with a different name, for example, vmware_saam. Run ds
where
template_id
is the ID of the HA-vmware_region-extdb template, which you identified
in step 3 on page 66.
RegionServerVirtualAddress
Is the virtual address to be used for Region Server. To achieve high
availability, this server is run on two instances. Each instance requires
its own IP address. The virtual address is an additional IP address,
which is assigned to the active virtual machine of a System Automation
for Multiplatforms cluster. If the primary virtual machine fails, the
virtual address is moved to the secondary virtual machine, and the
secondary virtual machine becomes the active virtual machine.
Applications access the services running on this high availability cluster
by using the virtual IP address.
RegionServerVirtualHostname
is the host name of the virtual address to be used for the Region Server.
RegionServerVirtualNetmask
is the netmask of the virtual address to be used for the Region Server.
RegionServerVirtualTieBreaker
is the TieBreaker for the System Automation for Multiplatforms cluster
Results
The VMware Region Server with Nova network is installed in the high-availability
topology.
Procedure
1. Verify that you can access and log in to the following IBM Cloud Orchestrator
user interfaces:
v Self-service user interface
Verify that the status of each IBM Cloud Orchestrator component is online,
as shown in the following example output:
===>>> Collecting Status for IBM Cloud Orchestrator
===>>> Please wait ======>>>>>>
Component                        Hostname       Status
-------------------------------------------------------------------
bpm-dmgr                         192.0.2.84     online
bpm-node                         192.0.2.84     online
bpm-server                       192.0.2.84     online
db2                              192.0.2.83     online
iwd                              192.0.2.87     online
openstack-ceilometer-api         192.0.2.84     online
openstack-ceilometer-api         192.0.2.83     online
openstack-ceilometer-central     192.0.2.83     online
openstack-ceilometer-collector   192.0.2.83     online
openstack-keystone               192.0.2.84     online
pcg                              192.0.2.84     online
qpidd                            192.0.2.83     online
swi                              192.0.2.84     online
===>>> Status IBM Cloud Orchestrator complete
5. Verify the infrastructure status by checking that you have availability zones
created. In the Administration user interface, click Admin > System Panel >
Host Aggregates, and verify that the Availability Zones list is populated.
Note: If you have a multiregion environment, you can switch between regions
by using the dropdown list in the upper-right corner of the window. The
regions that you see in the Administration user interface are not the same as
the ones that are listed in the output of the keystone endpoint-list command,
because the command output includes RegionCentral, an OpenStack region that is used
only by Ceilometer.
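For reference, you can list the Keystone endpoints and their regions by running the
keystone command-line client on a server where it is configured with administrator
credentials:
keystone endpoint-list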
What to do next
Before you can use IBM Cloud Orchestrator to manage your cloud environment,
you must configure your environment as described in Post-installation tasks on
page 72. At a minimum, you must complete the following basic configuration
tasks:
1. Assign zones to domains and projects, as described in Assigning zones to
domains and projects on page 72.
2. Configure your network, as described in Managing Nova networks on page
88 or Managing Neutron networks on page 93.
You can then test the configuration by creating and registering an image, and then
deploying the image to a region, as described in Chapter 6, Managing virtual
images, on page 331.
Installation procedure
The first part of the IBM Cloud Orchestrator Enterprise Edition installation is
exactly the same as for the base version. Then you must install the additional
products on separate machines:
1. Refer to the installation procedure in the Chapter 2, Installing, on page 15
section to install IBM Cloud Orchestrator and its services.
2. Install Jazz for Service Management V1.1.0.1. For instructions, see the Jazz for
Service Management V1.1.0.1 Quick Start Guide.
3. Install IBM Tivoli Monitoring V6.3.0.2. For instructions, see Installing IBM
Tivoli Monitoring on page 718.
4. [Optional] Install IBM Tivoli Monitoring for Virtual Environments V7.2.0.2. For
instructions, see IBM Tivoli Monitoring for Virtual Environments Quick Start
Guide.
5. Install IBM SmartCloud Cost Management V2.1.0.4. For instructions, see Quick
start guide for metering and billing on page 72.
Post-installation tasks
After you install IBM Cloud Orchestrator, complete these additional configuration
steps and management tasks.
Note: Remember to update the Red Hat Enterprise Linux image to the appropriate
version to ensure that you avoid the Heartbleed vulnerability. For information
about the Heartbleed vulnerability, see https://access.redhat.com/solutions/
781793.
Note: The IBM Cloud Orchestrator administrator account admin already exists in
the local OpenStack database. Make sure that there is not an LDAP account that
also has the name admin; otherwise, that LDAP account supersedes the local admin account.
After establishing these items, perform the following steps to configure LDAP
authentication:
1. Make sure that the ldapauth.py file exists in your Keystone installation in the
keystone/middleware directory.
2. The following properties can be defined in your Keystone configuration file to
control LDAP authentication. Comment the existing [ldap] section before you
add an [ldap_pre_auth] section. The keystone configuration file is, by default,
to be found in /etc/keystone/keystone.conf on Central Server 2:
[ldap_pre_auth]
# The url of the corporate directory server
url = ldap://localhost
# The root of the tree that contains the user records
user_tree_dn = cn=users,dc=example,dc=com
# The property in the user record that will be checked against the username
user_name_attribute = cn
# In order to search for user records, we will try and use anonymous query.
# If anonymous query is not available, then define the user and password
# of an account that does have rights to do a search
user = cn=admin,cn=users,dc=example,dc=com
password = <admin_password>
# Define this property if you want to customize the user id
# which will be used if we automatically populate the user to keystone
user_id_attribute = dn
# By default if we fail to find a user in LDAP, we will then try and
# find that user directly in keystone. If you do not want that to happen
# then set pass_through to False
#pass_through = False
3. You can configure support for LDAP SSL or TLS. To use SSL, you must
configure the underlying openldap client with the appropriate certificate details
which are typically stored in /etc/openldap/ldap.conf, for example:
TLS_CACERT /etc/openldap/certs/serverkey.cer
TLS_REQCERT DEMAND
With openldap configured, you can refer to your secure LDAP server by
specifying url = ldaps://.... in keystone.conf.
If you use TLS, then the certificate details are specified in the keystone.conf
file itself:
[ldap_pre_auth]
tls_cacertfile = /path/to/certfile
tls_cacertdir = /path/to/certdir
tls_req_cert = demand
use_tls = True
tls_cacertfile and tls_cacertdir are not both required. If you use TLS, you
must use the regular url = ldap://.... connection specification (and not use
ldaps).
4. The LDAP objects that are checked during authentication can be restricted
using two variables in the configuration file. user_objectclass allows you to
specify the class of objects to be checked and user_filter allows you to specify
additional AND filters to be used in the objects being checked. For instance, the
following statement restricts authentication to objects of class person that have
an attribute memberOf indicating membership to an existing OpenStack group in
your corporate directory server:
[ldap_pre_auth]
user_objectclass = person
user_filter = (memberOf=CN=OpenStack,dc=ldap,dc=com)
6. Comment the existing pipeline and add the following line in the
keystone-paste.ini file:
[pipeline:api_v3]
#pipeline = access_log sizelimit url_normalize token_auth admin_token_auth xml_body json_body
#           simpletoken ec2_extension s3_extension service_v3
pipeline = access_log sizelimit stats_monitoring url_normalize token_auth admin_token_auth
    xml_body json_body simpletoken ldapauth autopop debug stats_reporting ec2_extension
    s3_extension service_v3
populates the minimal amount of information required into Keystone, that is, user
name and user ID. Because the plug-in does not propagate the user password,
these Keystone user entries do not allow the LDAP authentication step to be
bypassed. They are only present to allow role assignment. To configure
auto-population, you need to make the following choices:
v Which LDAP attribute is used to create a unique user_id within
the OpenStack keystone? This must be something that does not change over
time, that is limited to 64 characters, and that does not contain blank space
characters. The default is the distinguished name (DN) attribute, but you can
choose whatever suits your environment; for example, the email attribute is a
common alternative. It is acceptable, for example, to use email as both the
username and the user_id.
v Which project must the user be given an initial role on as part of the
auto-population? This project already needs to exist before the auto-population
takes place.
Note: It is not recommended that you enable a situation where a user sometimes
authenticates with LDAP and sometimes directly with Keystone, since unless these
are carefully maintained, with matching user_ids and so forth, this can cause
confusion about which is the master user record.
Note: The auto-population plug-in is not a replacement for a full directory
synchronization capability. While the plug-in creates a user-record the first time a
user authenticates, it does not, for example, delete that user record in keystone if
the LDAP user record is subsequently deleted. Such a user would indeed no longer
be able to log in to IBM Cloud Orchestrator, but the database cleanup is outside of
the scope of the auto-population plug-in. If such a full capability is required, then
the use of an external directory synchronization tool is recommended (for details
refer to Using external Directory Integration tools with LDAP authentication on
page 77).
1. Make sure the autopop.py file exists in your Keystone installation in the
keystone/middleware directory.
2. The main job of the auto-population plug-in is to ensure that a user record is
created in Keystone, so that subsequent role assignments can take place.
However, optionally, the plug-in can grant an initial role to that user. This is
achieved by specifying the project and role required in an [auto_population]
section of the configuration file. First, you must specify the project, by
providing either a project ID by defining default_project_id or a project name
by defining default_project. If a project name is specified, then this is
assumed to be in the same domain as the user who has just been authenticated,
whereas a project ID is, by definition, unique across all domains. For example:
[auto_population]
default_project = test-project
instructs the auto population plug-in to assign the authenticated user the
standard membership role in the project named test-project in the user's
domain. This project must already exist for this role assignment to take place.
Additionally, you can ask for the plug-in to grant a further explicit role to the
user on the project by specifying either a role name using a default_role or a
role ID using default_role_id. Hence, the following gives an authenticated
user both the membership role and the vm-manager role:
[auto_population]
default_project = test-project
default_role = vm-manager
Note: The roles assigned here only affect the assignments held within IBM
Cloud Orchestrator and Keystone. They do not modify anything that is actually
stored in LDAP. The plug-in does not attempt to propagate any roles stored
explicitly within LDAP, although the use of an external directory
synchronization tool can enable such a capability.
Note: default_project_id is the standard config file variable for setting the
default project by ID.
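For illustration only, the following is a minimal sketch of the ID-based variant. The project ID shown is the example value used elsewhere in this guide, and <role_id> is a placeholder for a role ID from your environment:
[auto_population]
default_project_id = 87bb06e36a854c0b97c45b4e6dbf5ee4
default_role_id = <role_id>
With this configuration, the plug-in grants the authenticated user the standard membership role on the project with that ID, plus the role identified by <role_id>, independently of the user's domain.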
3. In your Keystone paste configuration file, keystone-paste.ini by default,
define a filter for this class:
[filter:autopop]
paste.filter_factory = keystone.middleware.autopop:AutoPopulation.factory
4. Add the autopop plug-in to the pipeline that you are using, by default this is
the api_v3 pipeline. The plug-in autopop must come after the ldapauth filter:
[pipeline:api_v3]
pipeline = access_log sizelimit stats_monitoring url_normalize token_auth
admin_token_auth xml_body json_body simpletoken ldapauth autopop
debug stats_reporting ec2_extension s3_extension service_v3
Table: user:
id          character
name        character
domain_id   character
password    character
enabled     Boolean
extra       text
where:
id
name
domain_id
Must match the domain_id for the domain the user is in. If you have
not created any domains, this is the default domain for which the
domain_id is default. If you are using multiple domains, then the
domain_id must match the entry in the domains table.
password
This is not used for authentication, since that is done against the
corporate directory, so this must be set to some randomly generated
string to prevent the user from bypassing the corporate directory
authentication.
enabled
Must be set to true.
extra Can be left blank.
2. If the directory synchronization tool is used to delete user entries from the
Keystone database in response to a deletion in the corporate directory, then
there are a number of tables that must be updated and the user entry described
above must finally be deleted. Since a number of these tables use foreign keys
to point back to the user table, it is important that you only delete the user
entry itself once all the changes listed below have been made.
In the following tables you must delete any entry with a user_id that matches
the ID from the user table:
Table: UserProjectGrant:
user_id      character varying(64) NOT NULL
project_id   character varying(64) NOT NULL
data         text
Table: UserDomainGrant:
user_id      character varying(64) NOT NULL
domain_id    character varying(64) NOT NULL
data         text
Table: UserGroupMembership:
user_id      character varying(64) NOT NULL
group_id     character varying(64) NOT NULL
If you are using an SQL server to store tokens, then to ensure that the user
being deleted is denied access to the product as soon as possible, the
TokenModel table must also be updated. If you are archiving expired tokens for
auditing purposes, then set the valid field to False for any token record that
matches the user_id. If you are not concerned about token archiving, you can
simply delete any token records that match this user_id.
Table: TokenModel:
id         character varying(64) NOT NULL
expires    datetime
extra      text
valid      boolean
user_id    character varying(64) NOT NULL
trust_id   character varying(64) NOT NULL
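The following SQL is an illustrative sketch only, using the table and column names listed above and a placeholder <user_id>; adapt it to your database client and schema before use:
-- Remove dependent rows first, because they reference the user table.
DELETE FROM UserProjectGrant WHERE user_id = '<user_id>';
DELETE FROM UserDomainGrant WHERE user_id = '<user_id>';
DELETE FROM UserGroupMembership WHERE user_id = '<user_id>';
-- If you are archiving expired tokens, invalidate them rather than deleting them.
UPDATE TokenModel SET valid = FALSE WHERE user_id = '<user_id>';
-- Finally, delete the user entry itself.
DELETE FROM user WHERE id = '<user_id>';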
domain_config_dir
The directory that contains any domain specific configuration files. This is
only used if domain_specific_drivers_enabled is True.
Default
/etc/keystone/domains
Example
domain_config_dir = /etc/myconfigs
domain_specific_drivers_enabled
If True, then both the ldapauth and autopop plugins will look in the
domain_config_dir for domain specific configuration files of the form
keystone.<domainname>.conf, where <domainname> is the name given to
the domain when it was created via the IBM Cloud Orchestrator UI.
Domain configuration files are only read when the keystone component of
IBM Cloud Orchestrator is started (or restarted).
Default
False
non_ldap_users
The local users are not authenticated with LDAP and they must be defined
via this option. The users defined in the list must already exist in the local
database. This option is required.
Default
None
Example
non_ldap_users = admin, demo, nova, neutron, cinder, glance,
monitoring, domadmin, heat, test
pass_through
If True, then if a user record is not found in the corporate directory server
that matches the specified user_name_attribute, then the authentication
request will be retried against the internal keystone database directly.
Default
True
Example
pass_through = False
password
The password for the user account specified by user to enable searching
within the corporate directory. This is unrelated to the actual user and the
password that is being authenticated.
Default
None
Example
password = secret
tls_cacertdir
tls_cacertfile
tls_req_cert
url
The URL of the corporate directory server. If you are using SSL (ldaps://...),
the certificate details are typically configured in the LDAP client
configuration (for example, /etc/openldap/ldap.conf):
TLS_CACERT /etc/openldap/certs/serverkey.cer
TLS_REQCERT ALLOW
With TLS, however, you configure these details in the keystone
configuration file itself (see the options tls_cacert and tls_req_cert
below), and use a regular ldap://... URL.
Default
None
Example
url = ldap://www.ibm.com/myserver
If using SSL, then use ldaps, for example:
url = ldaps://www.ibm.com/mysecureserver
user
The user account to be used for searching within the corporate directory.
This is unrelated to the actual user and password that is being
authenticated. If the corporate directory server supports anonymous query,
then this must not be specified.
Default
None
Example
user = cn=root
user_attribute_name
user_filter
An additional filter (or filters) that will be ANDed into the search for the
user_name_attribute. The filter must be in brackets, as in the example.
Default
None
Example
user_filter = (memberOf=cn=openstack,dc=root)
user_id_attribute
user_name_attribute
The attribute within each corporate directory object that will be compared
against the user name in the authentication request.
Default
cn
Example
user_name_attribute = mail
user_objectclass
The class of objects that will be searched to try and match the
user_name_attribute.
Default
*
Example
user_objectclass = person
user_tree_dn
The root of the subtree in the corporate directory server that will be
searched.
Default
cn=users,dc=example,dc=com
Example
user_tree_dn = cn=users,dc=root
use_tls
Indicates that the connection to the corporate directory should use TLS.
An SSL LDAP connection (for example, url = ldaps://...) must not be used
together with TLS.
Default
False
Example
use_tls = True
default_project
The default project to be assigned to the new user being created, which
means a member role will be given to this project for the user. The project
must already exist. Note that this project must exist in the same domain as
the user being created. If you want to assign the user a role in a project that
is in a different domain, then use default_project_id.
Default
None
Example
default_project = project1
default_project_id
The default project ID to be assigned to the new user being created, which
means a member role will be given to this project for the user. The project
must already exist.
Default
None
Example
default_project_id = 87bb06e36a854c0b97c45b4e6dbf5ee4
default_role
An additional role (specified by name) that will be granted to the newly
created user on the default project (in addition to the member role).
Default
None
Example
default_role = vm-manager
default_role_id
default_tenant_id
Avoiding troubles
Some SSO plugins redirect an HTTP call or return an Unauthenticated message
when the client does not authenticate using the configured SSO method. IBM
Cloud Orchestrator relies on internal communication through its built-in
SimpleToken authentication mechanism. Any third party SSO integration must
therefore be setup to coexist properly with IBM Cloud Orchestrator REST APIs.
Many SSO modules, including Kerberos integration, therefore support limiting the
application of the SSO interception to certain URI paths. Ensure that only the
following paths are used for SSO:
v ProcessCenter
v Process Admin
v portal
v login
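How the SSO interception is limited to these paths depends on the SSO product in use. As an illustrative sketch only, and assuming an Apache HTTP Server front end with the mod_auth_kerb module (this is not a configuration shipped with IBM Cloud Orchestrator; all values are placeholders), a per-path scope could look like the following:
# Illustrative only: apply Kerberos SSO to the ProcessCenter path.
<Location "/ProcessCenter">
  AuthType Kerberos
  AuthName "IBM Cloud Orchestrator SSO"
  KrbMethodNegotiate On
  KrbMethodK5Passwd Off
  KrbServiceName HTTP
  KrbAuthRealms EXAMPLE.COM
  Krb5KeyTab /etc/httpd/conf/httpd.keytab
  Require valid-user
</Location>
# Repeat for the other listed paths; leave the REST API paths outside the SSO interception.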
Procedure
1. On the system where the Workload Deployer component is installed, edit the
/opt/ibm/rainmaker/purescale.app/private/expanded/ibm/scp.ui-1.0.0/
config/openstack.config file.
2. In the /config/openstack section, set the NTP servers as shown in the
following example:
"ntp_servers": [
"dsnode.customer.ibm.com",
"127.0.0.1",
"127.0.0.2",
"127.0.0.3"
]
Note:
a. The list of the defined NTP servers is propagated to the provisioned virtual
machines, and is used to configure NTP on the virtual machines.
b. The list must include at least one valid NTP server for each region.
c. A maximum of four entries are used. IP addresses or FQDNs are allowed.
d. By default, the list contains only the NTP servers that are configured in the
/etc/ntp.conf file on the system where the Workload Deployer component
is installed.
e. During IBM Cloud Orchestrator installation, only one NTP server is
configured: that is, the Deployment Service node.
3. Restart the Workload Deployer service by running the following command:
service iwd restart
Results
After completing these steps, you have defined an NTP server to maintain clock
synchronization across your environments.
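As an optional, illustrative verification step (not part of the documented procedure), you can check on a provisioned virtual machine that the propagated NTP servers are in effect:
# List the NTP peers that the virtual machine synchronizes with.
ntpq -p
# Show the server entries that were written to the local NTP configuration.
grep "^server" /etc/ntp.conf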
Procedure
1. Shut down IBM Cloud Orchestrator.
2. Back up the database on the DB2 node.
3.
4.
5.
6.
7.
Procedure
1. Check whether your network is VlanManager:
[root@ico24-node2 ~]# nova network-list
+--------------------------------------+------------+--------------+
| ID                                   | Label      | Cidr         |
+--------------------------------------+------------+--------------+
| e45ffdd5-0f97-482b-80ff-16058a980f8a | VM Network | 192.0.2.0/22 |
+--------------------------------------+------------+--------------+
[root@ico24-node2 ~]# nova network-show e45ffdd5-0f97-482b-80ff-16058a980f8a
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| bridge              | br4096                               |
| bridge_interface    | eth1                                 |
| broadcast           | 192.0.2.255                          |
| cidr                | 192.0.2.0/22                         |
| cidr_v6             |                                      |
| created_at          | 2014-07-16T02:32:59.568911           |
| deleted             | 0                                    |
| deleted_at          |                                      |
| dhcp_start          | 192.0.2.2                            |
| dns1                | 8.8.4.4                              |
| dns2                |                                      |
| gateway             | 192.0.2.1                            |
| gateway_v6          |                                      |
| host                |                                      |
| id                  | e45ffdd5-0f97-482b-80ff-16058a980f8a |
| injected            | True                                 |
| label               | VM Network                           |
| multi_host          | False                                |
| netmask             | 255.255.252.0                        |
| netmask_v6          |                                      |
| priority            |                                      |
| project_id          |                                      |
| rxtx_base           |                                      |
| updated_at          | 2014-07-16T02:35:00.931734           |
| vlan                | 100                                  |
| vpn_private_address |                                      |
| vpn_public_address  |                                      |
| vpn_public_port     |                                      |
+---------------------+--------------------------------------+
As shown above, if the vlan property of the network has a value, then the
network is a VLAN network.
2. Configure your Central Server node networks:
Take the network in step 1 as an example. Virtual machines deployed to this
network have IP addresses in the range 192.0.2.0/22 and are in VLAN 100.
Therefore, configure your Central Server node networks so that they can reach
these IP addresses in VLAN 100.
a. Add a VLAN device on your Central Server nodes:
(Command output fragment: the VLAN device eth1.100 is listed, and the bridge
shows bridge id 8000.005056934c8e, STP disabled, with interface eth1.100
attached.)
Create a virtual bridge and enslave the created VLAN device to it, as shown in
the output above.
d. Add address to bridge:
Find an available IP in the 192.0.2.0/22 and assign it to the bridge:
[root@ico24-node2 ~]# ifconfig br100 192.0.2.192 netmask 255.255.252.0
[root@ico24-node2 ~]# ifconfig br100 up
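Because some of the preceding sub-steps are not reproduced in this excerpt, the following is an illustrative sketch only, assuming the VLAN ID 100, parent interface eth1, and bridge name br100 used in this example:
# Create the VLAN device on the parent interface (VLAN 100 on eth1).
vconfig add eth1 100
ifconfig eth1.100 up
# Create the bridge and enslave the VLAN device to it.
brctl addbr br100
brctl addif br100 eth1.100
# Assign an available address from 192.0.2.0/22 to the bridge.
ifconfig br100 192.0.2.192 netmask 255.255.252.0
ifconfig br100 up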
Note: After step 2, you can also find an available IP address in the
192.0.2.0/22 range and assign it directly to eth1.100. After all the steps are
finished, you should configure your routes.
Note: Reserve the IP address in your OpenStack environment to avoid
conflicts.
OpenStack configuration
Configure various OpenStack components that are used by IBM Cloud
Orchestrator.
Procedure
1. Set up a Cinder volume service on the existing nodes. Taking the KVM Compute
Node as an example:
Modify the chef-runlist by adding the two roles role[os-block-storage-scheduler]
and role[os-block-storage-volume], as in the following sample, and then update
the templates and jobs. In this example, all the KVM Compute Nodes in the
cloud deployment have a Cinder volume service enabled:
"kvm_compute": {
"Type": "IBM::SCO::Node",
"Properties": {
"Address": "COMPUTE_ADDR",
"User": "root",
"Password": "passw0rd",
"KeyFile": "/home/heat/.ssh/heat_key"
},
"Metadata": {
"chef-runlist": "role[kvm-compute],role[os-block-storage-scheduler],role[os-block-storage-volume]",
"order": 3,
"multiple": true
}
}
For example:
nova-manage network create --label=VLAN106
--fixed_range_v4=10.10.6.0/24 --num_networks=1 --network_size=256
--gateway=10.10.6.1 --vlan=106 --bridge_interface=eth3
--dns1 9.110.51.41 --dns2 9.110.51.41 --project 9d9d88a46e5b4022aef64f6b2ed42469
b. Enslave the flat interface, for example eth3, to the bridge created in step 1.
Note: This interface must be different from the value of
management_network_device in the region-server.cfg file, and the same
interface must be in use on all the KVM compute nodes.
Run:
brctl addif br4080 eth3
Note: Make sure there is no IP address on this flat interface. You can ensure
this with the command ifconfig eth3 0.0.0.0 up.
Add BRIDGE=br4080 in the /etc/sysconfig/network-scripts/ifcfg-eth3 file
as in the following example:
DEVICE="eth3"
BOOTPROTO=none
NM_CONTROLLED="yes"
ONBOOT=yes
TYPE="Ethernet"
BRIDGE=br4080
For example:
nova-manage network create --label=flat-network
--fixed_range_v4=10.10.11.0/24 --num_networks=1
--network_size=256 --gateway=10.10.11.1 --vlan=4080
--bridge=br4080 --bridge_interface=eth3
--dns1 9.110.51.41 --dns2 9.110.51.41
e. If there is an existing DHCP service running on this flat network, run the
following command on all compute nodes including on the Region Server:
ebtables -t nat -A POSTROUTING -o $flat_interface -p IPv4 --ip-protocol udp
--ip-destination-port 67 -j DROP
Where $flat_interface is the flat interface you are going to make use of.
3. (Optional) If you want users from multiple projects to access the same network,
you must disassociate all projects from the network, which sets the project_id
value for the network to None. To disassociate all projects from the network, run
the following command on the Region Server:
nova-manage network modify X.X.X.X/YY --disassociate-project
For example:
nova-manage network modify 10.10.6.0/24 --disassociate-project
4. Configure the nodes to use the new gateway. Otherwise, they might use their
host as a gateway. Run the following commands on the Region Server and on
each compute node:
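The exact commands for this step are not reproduced in this excerpt. As an illustrative sketch only (the gateway address is a placeholder taken from the earlier example network), a typical way to point a node at the new gateway is:
# Replace the current default route with one that uses the intended gateway.
ip route replace default via 10.10.6.1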
5. Stop all dnsmasq services and restart the openstack-nova-network service on the
Region Server or compute node:
killall dnsmasq
service openstack-nova-network restart
For example:
nova-manage network create --label=VLAN106
--fixed_range_v4=10.10.6.0/24 --num_networks=1 --network_size=256
--gateway=10.10.6.1 --vlan=106 --bridge_interface=eth3
--dns1 9.110.51.41 --dns2 9.110.51.41 --project 9d9d88a46e5b4022aef64f6b2ed42469
The label parameter must be the same as the name of the port group for which
you want to create the network, and the vlan parameter is the VLAN ID that you
configured in the port group. If no VLAN is assigned to the port group, specify
any VLAN ID and add the attribute ignore_vswitch_validation under the
[vmware] section of the Nova configuration file. The project parameter
associates the specified PROJECT_ID with the network, so that only the users
assigned to that project can access the network. You can associate only one
project with a network.
2. (Optional) If you want users from multiple projects to access the same network,
you must disassociate all projects from the network, which sets the project_id
value for the network to None. To disassociate all projects from the network, run
the following command on the Region Server:
nova-manage network modify X.X.X.X/YY --disassociate-project
For example:
nova-manage network modify 10.10.6.0/24 --disassociate-project
Results
The network can now be accessed by all users in all projects.
Deleting the existing networks:
This topic describes the commands that can be used to disassociate the project
from the network, delete an existing network, or delete the virtual machines that
belong to it.
Procedure
1. To delete all virtual machines in the network through the GUI or the CLI, run:
nova delete xxxx
2. To disassociate the project from the network, run:
nova-manage network modify x.x.x.x/yy --disassociate-project
3. To delete a network, run:
nova-manage network delete x.x.x.x/yy
4. If the network that you want to delete is not a network created in the
out-of-box installation, delete the related bridges and the dnsmasq service on the
Region Server and each of the compute nodes. Run the following commands:
ifconfig brxxxx down
brctl delbr brxxxx
killall dnsmasq
/etc/init.d/openstack-nova-network restart
When you associate a network with a project, only the users assigned to that
project can access the network. You can associate a network with only one project.
If you want users from multiple projects to access the same network, you must
disassociate the network from all projects, which sets the project_id value for the
network to None. The network can then be accessed by all users in all projects.
Procedure
1. To view the details of an OpenStack network, run the nova network-show
network_id command on the Region Server as the admin user, as shown in the
following example:
nova network-show e325a701-ab07-4fb9-a7df-621e0eb31c9b
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| bridge              | br4090                               |
| bridge_interface    | eth1                                 |
| broadcast           | 10.10.255.255                        |
| cidr                | 10.10.0.0/16                         |
| cidr_v6             | None                                 |
| created_at          | 2013-04-20T09:43:40.000000           |
| deleted             | False                                |
| deleted_at          | None                                 |
| dhcp_start          | 10.10.0.56                           |
| dns1                | 10.10.0.57                           |
| dns2                | 10.10.0.1                            |
| gateway             | 10.10.0.1                            |
| gateway_v6          | None                                 |
| host                | None                                 |
| id                  | e325a701-ab07-4fb9-a7df-621e0eb31c9b |
| injected            | False                                |
| label               | public                               |
| multi_host          | True                                 |
| netmask             | 255.255.0.0                          |
| netmask_v6          | None                                 |
| priority            | None                                 |
| project_id          | 9d9d88a46e5b4022aef64f6b2ed42469     |
| rxtx_base           | None                                 |
| updated_at          | 2013-04-20T09:44:41.000000           |
| vlan                | 4090                                 |
| vpn_private_address | 10.10.0.2                            |
| vpn_public_address  | 127.0.0.1                            |
| vpn_public_port     | 1000                                 |
+---------------------+--------------------------------------+
The project_id value must be set to the ID of the project to which the users
are assigned, or set to None to grant access to this network to all users in all
projects.
2. To associate a project with a network, run the following command on one line
on the Region Server:
nova-manage network modify X.X.X.X/YY --project $PROJECT_ID
where $PROJECT_ID is the project ID of the project that you want to associate
with the network.
For example:
nova-manage network modify 10.10.0.0/16 --project 9d9d88a46e5b4022aef64f6b2ed42469
For example:
nova-manage network modify 10.10.6.0/24 --disassociate-project
If you want the network to be used by users from several projects, run the
following command to disassociate the project from the network:
nova-manage network modify 10.10.0.0/16 --disassociate-project
To verify that the network is disassociated from the project, run the following
command:
nova network-show e325a701-ab07-4fb9-a7df-621e0eb31c9b
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| bridge              | br4090                               |
| bridge_interface    | eth1                                 |
| broadcast           | 10.10.255.255                        |
| cidr                | 10.10.0.0/16                         |
| cidr_v6             | None                                 |
| created_at          | 2013-04-20T09:43:40.000000           |
| deleted             | False                                |
| deleted_at          | None                                 |
| dhcp_start          | 10.10.0.56                           |
| dns1                | 10.10.0.57                           |
| dns2                | 10.10.0.1                            |
| gateway             | 10.10.0.1                            |
| gateway_v6          | None                                 |
| host                | None                                 |
| id                  | e325a701-ab07-4fb9-a7df-621e0eb31c9b |
| injected            | False                                |
| label               | public                               |
| multi_host          | True                                 |
| netmask             | 255.255.0.0                          |
| netmask_v6          | None                                 |
| priority            | None                                 |
| project_id          | None                                 |
| rxtx_base           | None                                 |
| updated_at          | 2013-04-20T15:09:35.000000           |
| vlan                | 4090                                 |
| vpn_private_address | 10.10.0.2                            |
| vpn_public_address  | 127.0.0.1                            |
| vpn_public_port     | 1000                                 |
+---------------------+--------------------------------------+
The project_id value is set to None, which means that the network is shared
and can be accessed by all users in all projects.
All instances reside on the same network, which can also be shared with
the hosts. No VLAN tagging or other network segregation takes place. It
does not support overlapping IP addresses.
Local
Instances reside on the local compute host and are effectively isolated from
any external networks.
b. physical_interface_mappings = physnet1:eth0,physnet2:eth1:
This means that management_interface and external_interface use the
same NIC -- eth0, and data_interface uses eth1, so the network creation
command is:
neutron net-create <public_net_name> --router:external=True
--provider:network_type flat --provider:physical_network physnet1
c. physical_interface_mappings =
physnet1:eth0,physnet2:eth1,physnet3:eth2:
This means that management_interface uses eth0, data_interface uses eth1
and external_interface uses eth2, so the network creation command is:
neutron net-create <public_net_name> --router:external=True
--provider:network_type flat --provider:physical_network physnet3
Note: If the network that you use to access the public network (Internet) is
using a VLAN, for example 1000, then the command becomes:
neutron net-create public --router:external=True --provider:network_type vlan
--provider:physical_network physnetX --provider:segmentation_id 1000
or:
neutron net-create <vxlan_net_name> --tenant-id <tenant_id> --provider:network_type vxlan
--provider:segmentation_id <number>
For VMware, because the VMware native driver does not support creating the port
group in vCenter automatically, you must manually create the corresponding port
group in vCenter for each created network before booting the virtual machine.
The name of the port group must be the same as the name (label) of the Neutron
network.
Note: For VLAN, the trunked port must be configured properly for the
<number>, and <physnetX> must be data_interface. For VXLAN, <number> must
be from 1000 to 160000.
Add subnetworks to it, which reserves IP ranges for the virtual machines, for
example:
neutron subnet-create <vlan_net_name | vxlan_net_name> 10.10.10.0/24
or:
neutron net-create <flat_net_name> --tenant-id <tenant_id> --provider:network_type flat
For VLAN, the trunked port must be configured properly for the <number> and
<physnetX> must be data_interface:
neutron subnet-create <vlan_net_name | vxlan_net_name> 10.10.10.0/24
--gateway <gateway_IP_on_router>
Restriction: For KVM, VMware, or Hyper-V, only Flat and VLAN mode can be
used. VXLAN, overlapping IP and floating IP are not supported for this scenario.
For information about z/VM network scenarios, see the Enabling z/VM for
OpenStack (Support for OpenStack Icehouse Release) guide at http://
www.vm.ibm.com/sysman/openstk.html.
Scenario 3: Hybrid SDN and physical routing:
This topic describes a hybrid configuration for scenario 1 and 2.
This means that you can have different types of networks together. Take the
following scenario as an example:
v Network 1: VXLAN, it must use network server as gateway (configure with
Scenario 1: Use network server as gateway on page 94).
v Network 2: VLAN, you want that virtual machines on it can be accessed directly
through physical routing so this network uses physical gateway (configure with
Scenario 2: Use a physical gateway).
v Network 3 and network 4: one is VLAN and another is VXLAN, but they use
the same network CIDR (meaning overlapping IP ranges). To support this, you
must use the network server as gateway (configure with Scenario 1: Use
network server as gateway on page 94).
v Network 5: you want to use floating IP to access the virtual machines on it, then
you must use the network server as gateway (configure with Scenario 1: Use
network server as gateway on page 94).
Note: If mgmt_interface = data_interface = external_interface, see
Customizing deployment parameters on page 37 for information about these
interfaces. If you are going to have both Flat and VXLAN networks, you must
ensure that:
1. Flat networks must be created first.
2. Before any virtual machines boot on it, you must first manually create a bridge
(same as the network server) on each Compute Node.
Associating a Neutron network to a project:
To achieve network segregation, it is important that you associate each network to
a specific project ID by using the Administration user interface.
You can assign a network to a project at network creation time. You cannot
reassign a network to a different project after network creation.
Creating multiple networks assigned to different projects:
Create Neutron networks that are mapped to different VLANs, and assign the
networks to different projects.
Before you begin
Create Project A and Project B in the same domain, as described in Creating a
project on page 265. Create User A assigned to Project A, and User B assigned to
Project B, as described in Creating a user on page 272. Assign the admin role to
the Cloud Administrator. Assign the member role to User A and User B.
About this task
You create two networks and assign them to Project A: Network A1 uses VLAN
101, and Network A2 uses VLAN 102. Similarly, you create two other networks
and assign them to Project B: Network B1 uses VLAN 201, and Network B2 uses
VLAN 202.
You then log in as User A in Project A, and deploy virtual machines to connect to
Network A1 and Network A2. Similarly, you log in as User B in Project B, and
deploy virtual machines to connect to Network B1 and Network B2.
Procedure
1. Log in to the Administration user interface as a Cloud Administrator.
2. In the left navigation pane, click ADMIN > System Panel > Networks. The
Networks page is displayed.
3. Create Network A1:
a. Click Create Network. The Create Network window opens.
d. Click Deploy.
The virtual machine is deployed and connects to Network A1.
8. Repeat step 7 to deploy a virtual machine that connects to Network A2.
9. Log in to the Self-service user interface as User B.
10. Repeat step 7 as necessary to deploy virtual machines that connect to
Network B1 and Network B2.
Results
IBM Cloud Orchestrator ensures that the users in one project cannot access another
project. User A can deploy virtual machines that connect only to Network A1 or
Network A2. User B can deploy virtual machines that connect only to Network B1
or Network B2.
You can repeat the steps in this procedure to create additional projects and
networks.
Neutron quota configuration:
You must update the Neutron quota configuration on each network node to
accommodate the intended usage pattern. The default values might not be
sufficient for clouds with more than 50 virtual machines.
On the Neutron node, edit the quota section of the /etc/neutron/neutron.conf
file and restart the Neutron service.
[QUOTAS]
# resource name(s) that are supported in quota features
quota_items = network,subnet,port
# default number of resource allowed per tenant, minus for unlimited
default_quota = -1
# number of networks allowed per tenant, and minus means unlimited
quota_network = 10
# number of subnets allowed per tenant, and minus means unlimited
quota_subnet = 10
# number of ports allowed per tenant, and minus means unlimited
quota_port = 50
# number of security groups allowed per tenant, and minus means unlimited
quota_security_group = 10
# number of security group rules allowed per tenant, and minus means unlimited
quota_security_group_rule = 100
# default driver to use for quota checks
quota_driver = neutron.db.quota_db.DbQuotaDriver
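As an illustrative check only (the tenant ID is a placeholder), you can verify the effective quotas after restarting the Neutron service:
# Show the quotas that apply to a specific tenant (project).
neutron quota-show --tenant-id 9d9d88a46e5b4022aef64f6b2ed42469
# List the tenants that have non-default quotas defined.
neutron quota-list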
Managing flavors
To create a flavor, you can either use the Administration user interface, or run the
nova flavor-create command from the OpenStack Nova compute node on the
appropriate Region Server.
When you create a flavor, ensure that the memory, CPU, and disk values of the
flavor are larger or equal to the values required by the image. To identify the
image requirements, complete the following steps:
1. Log in to the Administration user interface as a Cloud Administrator.
2. In the navigation pane on the left, click ADMIN > System Panel > Images.
3. On the Images page, click an image name to view the image details.
Note: The disk size of the flavor must be larger than the sum of the sizes of all the
disks when you are trying to boot an instance with an existing template. If the disk
size is 0, the instance is booted without a size check.
For z/VM, the disk size is 5 GB. Configure flavors for the supported disk size only.
To make a disk size larger than 5 GB, follow the steps documented in OpenStack
Enablement for z/VM.
To manage flavors for SoftLayer, Amazon EC2 and non-IBM supplied OpenStack,
see Configuring flavors on page 688.
Note: Each Region defines its own distinct set of flavors.
Creating a flavor from the user interface:
To create a flavor, you can use the Administration user interface.
About this task
Remember: When you create a flavor, ensure that you meet the flavor
requirements as described in Managing flavors.
Procedure
1.
2.
3.
4.
5.
where:
v --ephemeral ephemeral (optional) is the ephemeral space size in GB. The default
value is 0.
v --swap swap (optional) is the swap size in MB. The default value is 0.
v --rxtx-factor factor (optional) is the RX/TX factor. The default is 1.
v --is-public is-public (optional) makes the flavor accessible to the public. The
default value is true.
v flavor_name is the name of the new flavor.
v flavor_id is the unique integer ID for the new flavor.
v ram_size is the memory size in MB.
v disk_size is the disk size in GB.
v vcpus is the number of vCPUs.
Example command:
nova flavor-create 4gb1cpu 12345 4096 0 1
For information about the arguments, run the nova help flavor-create command.
To view the currently defined flavors, use the nova flavor-list command.
Customizing flavors for Power features:
Use PowerVC or OpenStack flavors to manage PowerVM LPAR settings.
Basic flavors for use with PowerVC can be created from IBM Cloud Orchestrator.
Any flavor created on the PowerVC user interface is automatically imported
into IBM Cloud Orchestrator with a prefix assigned in /etc/powervc/powervc.conf.
After flavors have been created on PowerVC and imported automatically into IBM
Cloud Orchestrator, no further changes are registered for these flavors.
Any changes made to PowerVC flavors outside the PowerVC Advanced options are
not reflected in IBM Cloud Orchestrator. This includes renaming, changing items
such as Disk, Memory, and Processors, and deleting the flavor entirely.
PowerVC has an additional flavor functionality known as Advanced Compute
Templates. Rather than just specifying the vCPU, memory, disk and processing
units as you would with a standard PowerVC flavor, the advanced flavor allows
you to select a range for minimum, desired, and maximum values for vCPU,
processing units, and memory, and also allows some processor sharing options.
Only the advanced options on flavors can be updated and have their changes
recognized by IBM Cloud Orchestrator.
Deploying PowerVC virtual machines using flavors:
When deploying a PowerVC virtual machine using a flavor, you must choose a
flavor with a disk that is equal to or greater in size than the image disk.
When the virtual machine is deployed, the disk value that was listed in the
flavor is ignored and the virtual machine is deployed with a disk of the same
size as the one specified in the image.
This means that, when performing a resize, if you choose a flavor with a disk
greater than that of the original image, a volume resize is attempted, even if
the flavor used has the same disk size as the flavor used to deploy the virtual
machine.
SSP does not support volume resize.
SVC supports one resize (flashcopy operation) at a time. Attempting to perform
more than a single flashcopy operation on the same base image causes the resize
operation to fail, and puts the virtual machine in an Error state, which can be
reset in the PowerVC UI.
Where applicable, it is recommended to resize to a flavor that has the same
disk size as the original image.
For example, if you want to allow placement in different datastores for the same
cluster to exploit different hardware characteristics of the datastores, you can create
a single availability zone with multiple host aggregates where each host aggregate
points to a different datastore. For more information about how to achieve this
configuration, see Connecting to different datastores in the same cluster on page
106.
If you want to leverage SDRS, configure your environment by following the
procedure described in Enabling Storage DRS on page 112. This can be done per
availability zone or host aggregate.
Templates and virtual machines are automatically discovered and published in
glance and nova after you install and configure the region server. For more
information, see Configuring vmware-discovery on page 114 and Configuring
vmware-discovery for multiple vCenters on page 116. In this way you can
immediately manage these templates and virtual machines from the
Administration user interface. You can also view the virtual machines from the
Self-service user interface in the Assigned Resources panel. Even if these instances
were not created by IBM Cloud Orchestrator, you can start, stop, resize them, or
run custom actions by using the Self-service user interface. To use the discovered
templates as images for deployments in IBM Cloud Orchestrator, you must modify
them to meet the prerequisites documented in Chapter 6, Managing virtual
images, on page 331.
If the template was created in thin provisioning mode, all the instances generated
from it are thin provisioned. If you want to speed up the cloning operation of
instances spawned from the same template, you can turn on the OpenStack linked
clone feature. This feature can be set per availability zone or host aggregate and
relies on caching the vmdk in the datastore. For more information about this
feature, see http://docs.openstack.org/icehouse/config-reference/content/
vmware.html. Moreover, you can add disks to the image at deployment time or
after the deployment occurred. Volumes can be thin provisioned or thick
provisioned. For more information, see Configuring OpenStack to support thin
provisioning on page 113.
Procedure
1. Create the host aggregate and associate it with a new availability zone by
running the following command:
nova aggregate-create new-cluster-host-aggregate new-cluster-availability-zone
cp /etc/init.d/openstack-nova-compute /etc/init.d/openstack-nova-compute-new-cluster
5. Modify the suffix and config parameters in the /etc/init.d/openstack-nova-compute-new-cluster file to set:
suffix=compute-new-cluster
prog=openstack-nova-$suffix
exec="/usr/bin/nova-compute"
config="/etc/nova/nova-service-new-cluster.conf"
pidfile="/var/run/nova/nova-$suffix.pid"
logfile="/var/log/nova/$suffix.log"
[ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog
lockfile=/var/lock/subsys/$prog
7. Add the host that you specified in step 3 for the new compute service to the
host aggregate by running the following command:
nova aggregate-add-host new-cluster-host-aggregate new-cluster
+----+----------------------------+--------------------------------+-------------+
| Id | Name                       | Availability Zone              | Hosts       |
+----+----------------------------+--------------------------------+-------------+
| 2  | new-cluster-host-aggregate | new-cluster-availability-zone  | new-cluster |
+----+----------------------------+--------------------------------+-------------+
+---------------------------------------------------+
| Metadata                                          |
+---------------------------------------------------+
| availability_zone=new-cluster-availability-zone   |
+---------------------------------------------------+
8. Verify that the new service is up and running. Run the following command to
check if the new availability zone, which is named new-cluster-availability-zone,
is shown:
nova availability-zone-list
+-------------------------------+------------------------------------------+
| Name                          | Status                                   |
+-------------------------------+------------------------------------------+
| internal                      | available                                |
| |- <your-local-host-name>     |                                          |
| | |- nova-conductor           | enabled :-) 2014-08-07T05:15:44.766879   |
| | |- nova-vmware              | enabled :-) 2014-08-07T05:15:51.017709   |
| | |- nova-consoleauth         | enabled :-) 2014-08-07T05:15:49.413705   |
| | |- nova-cert                | enabled :-) 2014-08-07T05:15:47.481551   |
| | |- nova-scheduler           | enabled :-) 2014-08-07T05:15:47.736521   |
| nova                          | available                                |
| |- <your-local-host-name>     |                                          |
| | |- nova-compute             | enabled :-) 2014-08-07T05:15:43.274219   |
| new-cluster-availability-zone | available                                |
| |- new-cluster                |                                          |
| | |- nova-compute             | enabled :-) 2014-08-07T05:15:44.309888   |
+-------------------------------+------------------------------------------+
What to do next
After you configured your VMware region to connect to multiple clusters, you
must run the vmware-discovery process as described in Configuring
Procedure
1. Create a host aggregate for each set of datastores that you want to connect by
running the following commands, for example:
nova aggregate-create datastore1-host-aggregate your-cluster-availability-zone
nova aggregate-create datastore2-host-aggregate your-cluster-availability-zone
where
<datastore-host>
Is a different host name in each configuration file. For example,
datastore1-host and datastore2-host.
scheduler_default_filters
Must be specified on one line without spaces.
datastore_regex
Is a regular expression that you can use to identify the set of
7. Add the hosts that you specified in step 3 for the new compute services to
the host aggregates by running the following commands:
nova aggregate-add-host datastore1-host-aggregate datastore1-host
nova aggregate-add-host datastore2-host-aggregate datastore2-host
+----+----------------------------+--------------------------------+------------------+
| Id | Name                       | Availability Zone              | Hosts            |
+----+----------------------------+--------------------------------+------------------+
| 3  | datastore1-host-aggregate  | your-cluster-availability-zone | datastore1-host  |
| 4  | datastore2-host-aggregate  | your-cluster-availability-zone | datastore2-host  |
+----+----------------------------+--------------------------------+------------------+
+-----------------------------------------------------------------------+
| Metadata                                                              |
+-----------------------------------------------------------------------+
| availability_zone=your-cluster-availability-zone                      |
| availability_zone=your-cluster-availability-zone                      |
+-----------------------------------------------------------------------+
8. Set metadata on the datastore1-host-aggregate and datastore2-host-aggregate
host aggregates that you created in step 1. For example, run the
following commands:
nova aggregate-set-metadata datastore1-host-aggregate Datastore1=true
nova aggregate-set-metadata datastore2-host-aggregate Datastore2=true
+----+----------------------------+--------------------------------+------------------+
| Id | Name                       | Availability Zone              | Hosts            |
+----+----------------------------+--------------------------------+------------------+
| 3  | datastore1-host-aggregate  | your-cluster-availability-zone | datastore1-host  |
| 4  | datastore2-host-aggregate  | your-cluster-availability-zone | datastore2-host  |
+----+----------------------------+--------------------------------+------------------+
+-----------------------------------------------------------------------+
| Metadata                                                              |
+-----------------------------------------------------------------------+
| Datastore1=true, availability_zone=your-cluster-availability-zone     |
| Datastore2=true, availability_zone=your-cluster-availability-zone     |
+-----------------------------------------------------------------------+
10. Create the flavor keys to match the metadata that you set in the aggregates by
running the following commands:
nova flavor-key flavor-datastore1 set Datastore1=true
nova flavor-key flavor-datastore2 set Datastore2=true
Results
You can use the new flavors that you created to deploy to the set of datastores that
you specified in the configuration files.
where <cluster_name> is the name of the VMware cluster where the resource pool
is defined. When you specify a cluster name and a resource pool name, the
resource pool under the cluster is the target to deploy the virtual machines.
If you have multiple resource pools in the same cluster, you can connect to a
different resource pool for deployment by creating a new host aggregate with a
procedure similar to Connecting to different datastores in the same cluster on
page 106 and specifying different resource pools with the resource_pool variable
in the new openstack-nova-compute service configuration files.
Procedure
1. Create a new nova compute service:
Because one openstack-nova-compute service can only connect to one vCenter
in OpenStack Icehouse, you must create a new openstack-nova-compute service
to connect to your new vCenter. The new vCenter is set as a new host
aggregate in a new availability zone. In the following procedure,
new-vCenter-availability-zone is the name of the new availability zone and
new-vCenter-aggregate-host is the name of the new aggregate host.
Perform the following steps:
a. Create the aggregate host and associate it with the new availability zone by
running the following command:
nova aggregate-create new-vCenter-aggregate-host new-vCenter-availability-zone
b.
e. Modify the suffix and config parameters in the /etc/init.d/openstack-nova-compute-new-vCenter file to set:
suffix=compute-new-vCenter
prog=openstack-nova-$suffix
exec="/usr/bin/nova-compute"
config="/etc/nova/nova-service-new-vCenter.conf"
pidfile="/var/run/nova/nova-$suffix.pid"
logfile="/var/log/nova/$suffix.log"
[ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog
lockfile=/var/lock/subsys/$prog
g. Add the host that you specified in step c for the new compute service to the
aggregate host by running the following command:
nova aggregate-add-host new-vCenter-aggregate-host new-vCenter
+----+----------------------------+-------------------------------+-------------+
| Id | Name                       | Availability Zone             | Hosts       |
+----+----------------------------+-------------------------------+-------------+
| 2  | new-vCenter-aggregate-host | new-vCenter-availability-zone | new-vCenter |
+----+----------------------------+-------------------------------+-------------+
+---------------------------------------------------+
| Metadata                                          |
+---------------------------------------------------+
| availability_zone=new-vCenter-availability-zone   |
+---------------------------------------------------+
h. Verify that the new compute service is up and running. Run the following
command to check if the new availability zone, which is named
<your-availability-zone-name>, is shown:
nova availability-zone-list
+-------------------------------+------------------------------------------+
| Name                          | Status                                   |
+-------------------------------+------------------------------------------+
| internal                      | available                                |
| |- <your-local-host-name>     |                                          |
| | |- nova-conductor           | enabled :-) 2014-08-07T05:15:44.766879   |
| | |- nova-vmware              | enabled :-) 2014-08-07T05:15:51.017709   |
| | |- nova-consoleauth         | enabled :-) 2014-08-07T05:15:49.413705   |
| | |- nova-cert                | enabled :-) 2014-08-07T05:15:47.481551   |
| | |- nova-scheduler           | enabled :-) 2014-08-07T05:15:47.736521   |
| nova                          | available                                |
| |- <your-local-host-name>     |                                          |
| | |- nova-compute             | enabled :-) 2014-08-07T05:15:43.274219   |
| new-vCenter-availability-zone | available                                |
| |- new-vCenter                |                                          |
| | |- nova-compute             | enabled :-) 2014-08-07T05:15:44.309888   |
+-------------------------------+------------------------------------------+
b. Modify the suffix and config parameters in the /etc/init.d/openstack-nova-network-new-vCenter file to set:
suffix=network-new-vCenter
prog=openstack-nova-$suffix
config="/etc/nova/nova-service-new-vCenter.conf"
pidfile="/var/run/nova/nova-$suffix.pid"
logfile="/var/log/nova/$suffix.log"
lockfile=/var/lock/subsys/$prog
Note: Ensure that you are using the same nova configuration file as the
related nova compute service that you created in step 1.
c. Run the following commands to start the services:
chkconfig openstack-nova-network-new-vCenter on
/etc/init.d/openstack-nova-network-new-vCenter start
d. Verify that the new network service is up and running. Run the following
command to check if the new network service under the internal
availability zone is shown:
nova availability-zone-list
+-------------------------------+------------------------------------------+
| Name                          | Status                                   |
+-------------------------------+------------------------------------------+
| internal                      | available                                |
| |- <your-local-host-name>     |                                          |
| | |- nova-conductor           | enabled :-) 2014-08-07T05:15:44.766879   |
| | |- nova-vmware              | enabled :-) 2014-08-07T05:15:51.017709   |
| | |- nova-consoleauth         | enabled :-) 2014-08-07T05:15:49.413705   |
| | |- nova-cert                | enabled :-) 2014-08-07T05:15:47.481551   |
| | |- nova-scheduler           | enabled :-) 2014-08-07T05:15:47.736521   |
| |- new-vCenter                |                                          |
| | |- nova-network             | enabled :-) 2014-08-07T06:36:21.840562   |
| nova                          | available                                |
| |- <your-local-host-name>     |                                          |
| | |- nova-compute             | enabled :-) 2014-08-07T05:15:43.274219   |
| new-vCenter-availability-zone | available                                |
| |- new-vCenter                |                                          |
| | |- nova-compute             | enabled :-) 2014-08-07T05:15:44.309888   |
+-------------------------------+------------------------------------------+
d. Modify the suffix and config parameters in the /etc/init.d/openstack-cinder-volume-new-vCenter file to set:
suffix=volume-new-vCenter
prog=openstack-cinder-$suffix
exec="/usr/bin/cinder-volume"
config="/etc/cinder/cinder-new-vCenter.conf"
pidfile="/var/run/cinder/cinder-$suffix.pid"
logfile="/var/log/cinder/$suffix.log"
lockfile=/var/lock/subsys/$prog
f. Verify that the new service is up and running. Run the following command to
check if the new availability zone, which is named new-vCenter-availability-zone,
is shown:
cinder availability-zone-list
g. Run the following command to check if the new openstack-cinder-volume-new-vCenter service is shown in the service list:
cinder service-list
where
datastore_cluster_name
Specifies the name of a VMware datastore cluster (StoragePod name). The
default value is None.
use_sdrs
Specifies whether a driver must attempt to call DRS when cloning a virtual
machine template. The default value is False.
Note: This feature is only supported when you deploy a virtual machine from
template.
You can use the following extra specs of the flavor to override the specified
configuration when you deploy new virtual machines:
vmware:datastore_cluster_name
Set this key to override the datastore_cluster_name parameter specified in
the nova.conf file.
vmware:use_sdrs
Set this key to override the use_sdrs parameter specified in the nova.conf
file.
To set the extra specs for the flavor, use the nova flavor-key command.
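For example (illustrative values only; the flavor name and datastore cluster name are placeholders):
nova flavor-key m1.medium set vmware:datastore_cluster_name=MyStoragePod
nova flavor-key m1.medium set vmware:use_sdrs=true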
Note also that it is possible to override the linked_clone mode on a single image
basis using the vmware_linked_clone property in the OpenStack Image Service.
To change this behavior, you must modify the Nova configuration file on the
Region Server, setting use_linked_clone to False.
You must restart the Nova compute services after updating the Nova configuration
file.
Configuring vmware-discovery
The vmware-discovery process discovers existing virtual machines, templates, and
port groups in your VMware environment.
Procedure
1. Log on to the VMware Region server as a user with root or sudo access.
2. Edit the /etc/vmware-discovery.conf configuration file.
The following table describes the parameters in the /etc/vmware-discovery.conf file:
Table 10. Parameters in the /etc/vmware-discovery.conf configuration file
Parameter
Definition
allow_instance_deletion
allow_template_deletion
auth_url
The Keystone public endpoint. You can find this value in the
/root/openrc file or the /root/keystonerc file.
keystone_version
admin_tenant_name
admin_user
admin_password
discovery_driver
The full class name for the driver for the VMware Discovery
Service (string value). The value is
vmware.nova.driver.virt.vmware.driver.VMwareVCSynchingDriver
discovery_manager
staging_project_name
staging_user
image_periodic_sync_interval_in_seconds
instance_prefix
instance_sync_interval
image_limit
image_sync_retry_interval_time_in_seconds
longrun_loop_interval
longrun_initial_delay
The minimum delay interval and initial delay in seconds for long
run tasks. The default interval value is 7, and the default delay
value is 10.
flavor_prefix
The prefix for all created flavors for the discovered instances.
full_instance_sync_frequency
vm_ignore_list
vmware_default_image_name
physical_network_mappings
port_group_filter_list
The port group name that you want to use. If left blank, the
process discovers all port groups.
portgroup_sync_interval
target_region
template_sync_interval
Note: Do not change the default values of the other parameters in the
/etc/vmware-discovery.conf configuration file.
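As an illustrative sketch only (assuming an INI-style [DEFAULT] section and placeholder values; check your installed file for the exact format), the settings that are most commonly edited might look like the following:
[DEFAULT]
# Keystone public endpoint, as found in /root/openrc or /root/keystonerc.
auth_url = http://192.0.2.10:5000/v2.0
admin_tenant_name = admin
admin_user = admin
admin_password = passw0rd
# Region whose resources are discovered.
target_region = RegionOne
# Leave blank to discover all port groups.
port_group_filter_list =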
3. Start the vmware-discovery service:
service nova-discovery start
Results
The vmware-discovery process is configured and started after installation in the
VMware region. The default value for the staging project and the staging user is
admin. You can now manage the discovered resources as described in Managing
resources on page 311.
Procedure
1. Create the configuration files for a specific vCenter:
a. Configuration file for the openstack-nova-compute:
By default, the VMware discovery loads the VMware related information
from the /etc/nova/nova.conf file. Because you already connected to the
other vCenter, there is a configuration file for this vCenter VMware
information which is applied by the related openstack-nova-compute
service. By default, it is in the /etc/nova/ directory. For example,
/etc/nova/nova-2.conf.
b. Configuration file for the vmware-discovery:
Copy the /etc/vmware-discovery.conf file and then modify it as the
discovery configuration file for the new vCenter. For example:
cp /etc/vmware-discovery.conf /etc/vmware-discovery-2.conf
Ensure that the new service file has ownership of root:root. To change the
ownership, run the following command:
chown root:root /etc/init.d/nova-discovery-2
suffix=discovery-new-SERVICE
prog=nova-$suffix
pidfile="/var/run/nova/nova-$suffix.pid"
logfile="/var/log/nova/$suffix.log"
[ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog
lockfile=/var/lock/subsys/$prog
Then change the daemon invocation to:
daemon --user nova --pidfile $pidfile \
"$exec --config-file /etc/vmware-discovery-2.conf \
--config-file /etc/nova/nova-2.conf \
--logfile $logfile &>/dev/null & echo \$! > $pidfile"
Procedure
1. Log in to the external database server as a root user.
2. Create the saamuser user:
# useradd -m saamuser
# passwd saamuser
New password:
Retype new password:
When prompted for the password, enter the password that is used by the
saamuser user on the IBM Cloud Orchestrator servers.
4. Create the /opt/IBM/tsamp/eez/scripts directory:
# mkdir -p /opt/IBM/tsamp/eez/scripts
5. Copy the following files from the System Automation Application Manager
server to the specified directories on the IBM DB2 server:
v Copy /opt/IBM/tsamp/eez/scripts/servicectrl.sh to /opt/IBM/tsamp/eez/
scripts.
v Copy /opt/IBM/tsamp/eez/scripts/db2ctrl to /etc/init.d.
v Copy /opt/IBM/tsamp/eez/scripts/nosqlctrl to /etc/init.d.
6. If necessary, edit the /etc/init.d/db2ctrl and /etc/init.d/nosqlctrl files to
suit your IBM DB2 installation.
7. Ensure that the saamuser user is the owner of the /opt/IBM/tsamp/eez/scripts
directory and its contents:
# chown -R saamuser:<saamusergroup> /opt/IBM/tsamp/eez/scripts
9. Disable any autostart of the IBM DB2 services for the external database.
10. Configure sudo for the saamuser user:
a. Create the /etc/sudoers.d/saam file with the following content:
# sudoers additional file for /etc/sudoers.d/
# IMPORTANT: This file must have no '~' or '.' in its name, and file permissions
# must be set to 440
# This file is for the IBM System Automation Application Manager ID to call
# the IBM DB2 control script that is provided with IBM Cloud Orchestrator
Defaults:saamuser !requiretty
# scripts found in control script directory
Cmnd_Alias SACTRL = /sbin/service
# allow for root access
saamuser ALL = (root) NOPASSWD: SACTRL
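As an illustrative follow-up (these commands are not part of the documented step), set the required file permissions and validate the syntax of the new sudoers file:
chmod 440 /etc/sudoers.d/saam
visudo -c -f /etc/sudoers.d/saam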
11. If you have more than one external database server, repeat this procedure to
create the saamuser user on each server in your database cluster.
Results
System Automation Application Manager can now use the saamuser user to
monitor the external database.
If you start System Automation Application Manager now, it will fail because the
agentless adapter is not correctly configured. You configure the agentless adapter
in the next section.
After you configure the agentless adapter, you can start and stop System
Automation Application Manager by running the start saam and stop saam
commands. System Automation Application Manager is automatically started when
the server is started.
Procedure
1. Configure the Agentless Adapter:
a. During the installation of the Central Servers and Region Servers, all
necessary automation policies are automatically created on the node where
System Automation Application Manager is installed. These policies are
stored in the /etc/opt/IBM/tsamp/eez/aladapterPolicyPool directory.
b. Identify the domain names for all Agentless Adapter policy XML files that
you want to manage with System Automation Application Manager. The file
for the Central Servers is named SCOcentralALA.xml, and the file for each
Region Server is named <region_name>.xml. The domain name is enclosed
within the <AutomationDomainName> tag. The /etc/opt/IBM/tsamp/eez/
aladapterPolicyPool directory also contains some sample xml files and an
xsd file; ignore these files. You can use the following grep command to
identify the domain names to be managed:
grep -h "<AutomationDomainName>" `ls /etc/opt/IBM/tsamp/eez/aladapterPolicyPool/*.xml
| grep -v Sample` | sort | uniq | cut -d ">" -f 2 | cut -d "<" -f 1
This list includes all the agentless adapter domains from step 1. It also contains
the System Automation for Multiplatforms domains. The System Automation
for Multiplatforms domain that ensures the high availability of the Central
Server 2 cluster is named cs2Domain. This cluster is running on the Central
Server 2 primary and secondary virtual machines. Each Region has a System
Automation for Multiplatforms domain, which has the same name as the
Region. This cluster runs on the Region Server primary and secondary nodes.
For all of these domains, click Add.... Enter the domain name, user ID,
password, and password confirmation fields and click OK. Repeat this step for
each domain that is listed in the output of the grep command above. Use the
credentials as explained above. Click Save and OK. To exit from cfgeezdmn,
click Done.
Start cfgeezdmn again and switch again to the user credentials tab of the
application manager configuration. Click a domain to select and highlight it.
Now you can click Validate to check the user credential settings for this
domain. Click the Command Shell tab. In User authentication for invoking the
end-to-end automation manager shell, select Use specified user credentials
and enter the eezadmin userid and the password specified in the
OrchestratorPassword parameter. In User authentication for issuing
commands against first-level automation (FLA) domains, select Use FLA
domain access credentials as defined under User credentials. Finally, save
your changes by clicking Save and OK. Close cfgeezdmn by clicking Done. You
have completed the configuration of the automation engine, which can now
access the adapters.
Restart System Automation Application Manager by issuing stop saam; start
saam.
4. Verify that the domains are set up correctly:
eezcs -c "lseezdom"
7. You can now see the state of the System Automation for Multiplatforms
domains. For the domains managed by the agentless adapter, you also have to
activate the policy for these domains. To do this, right-click the domain and
select Open Policy Activation Page. This opens a new tab. Select the domain
and, in the right part of the screen, a list of all policies available for this
agentless automation domain is displayed. By default, there is only one.
Right-click it and select Activate Policy.
8. Activate the End-To-End automation policy named SCOsaam.xml. If the
end-to-end automation policy is activated, the automation engine enforces the
policy rules and starts all services. To activate, right-click the SCOsaam policy
and select Open Policy Activation Page. This opens a new tab. Select the
domain and, in the right part of the screen, a list of all policies available for
this agentless automation domain is displayed. By default, there is only one.
Right-click it and select Activate Policy. For more information, see Activating
an automation policy in the System Automation Application Manager
Administrator's and User's Guide.
9. To manage the end-to-end automation, switch to Operate end-to-end resources.
Results
System Automation Application Manager is configured and the policies are
activated. System Automation Application Manager checks the status of the IBM
Cloud Orchestrator management services and starts them if needed. After the
services are started, System Automation Application Manager detects any outage
of the IBM Cloud Orchestrator management services and automatically restarts
them.
If you want to stop any services, see Controlling the management stack on page
243.
Strengthening security
Complete these tasks to strengthen the security of your IBM Cloud Orchestrator
environment.
For information about port management and security, see the IBM Cloud
Orchestrator Security Hardening Guide.
Procedure
1. Change the password of the admin user in Keystone, as follows:
a. Log in to the Deployment Service node as a root user.
b. Run the following command:
source /root/keystonerc
Make a note of the output of this command. You use this value
(encrypted_new_admin_password) in the remaining parts of this step.
f. Create a backup of the /root/keystonerc and /root/openrc files:
cp -p /root/keystonerc /root/keystonerc.orig
cp -p /root/openrc /root/openrc.orig
v After:
export OS_PASSWORD=$(openstack-obfuscate -u encrypted_new_admin_password)
2. Change the password of the database users for OpenStack (ceilometer, glance,
heat, keystone, nova) as follows:
Tip: Depending on your local security standards, you can enable nologin
support for Deployment Service user identifiers as described in the IBM Cloud
Orchestrator V2.4 Hardening Guide. The nologin approach might remove the
need to manage passwords for individual user identifiers.
a. Log in to the Deployment Service node as a root user.
b. Change the password for the specified users by running the following
commands. After each command, you must specify the new password.
passwd ceilometer
passwd glance
passwd heat
passwd keystone
passwd nova
Example output:
28:connection =
W0lCTTp2MV12b3pfcW9fZm46Ly94ZnFvOmNuZmZqMGVxQDE3Mi4xOS41LjE2MDo1MDAwMC9iY3JhZmducA==
d. For each entry identified in the previous step, decode the DB2 connect
information:
Example:
openstack-obfuscate -u connection_details
Example output:
ibm_db_sa://userid:old_password@ip_address:50000/openstac
e. For each entry, encrypt the DB2 connect information with the new password
Example:
openstack-obfuscate ibm_db_sa://userid:new_password@ip_address:50000/openstac
Example output:
new_connection_details
Make a note of the output of this command. You use this value in the
remaining parts of this step.
f. Edit the corresponding name.conf configuration file to comment out the
original connection or sql_connection entry, and add the encrypted new
password information in a new entry, as shown in the following example:
# connection = W0lCTTp2MV12b3pfcW9fZm46Ly94ZnFvOmNuZmZqMGVxQDE3Mi4xOS41LjE2MDo1MDAwMC9iY3JhZmducA==
connection = W0lCTTp2MV12b3pfcW9fZm46Ly94ZnFvOmFyamNuZmZqMGVxQDE3Mi4xOS41LjE2MDo1MDAwMC9iY3JhZmducA==
g. Run the following command to check that all files were changed:
grep -Fnr connection /etc
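The decode and re-encode cycle in steps d through f can also be scripted. The following is a minimal sketch only, not taken from the product documentation: it assumes the openstack-obfuscate tool shown above, and the OLD_ENC and NEW_PASS values are placeholders that you supply.
# Sketch: swap the password inside one obfuscated connection value
OLD_ENC="<encoded value copied from the configuration file>"
NEW_PASS="<new plain-text database password>"
PLAIN=$(openstack-obfuscate -u "$OLD_ENC")                              # for example ibm_db_sa://userid:old_password@ip_address:50000/openstac
NEW_PLAIN=$(printf '%s\n' "$PLAIN" | sed "s|:[^:@/]*@|:${NEW_PASS}@|")  # replaces only the password field of the URL
openstack-obfuscate "$NEW_PLAIN"                                        # paste this output into the new connection entry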
v If you are using the IBM Cloud Orchestrator high-availability solution, log in
to the System Automation Application Manager user interface, and initiate a
restart of the Workload Deployer resource.
3. Log on to Central Server 2, and complete the following steps:
a. Run the following command to encrypt the new password:
/usr/bin/openstack-obfuscate new_admin_password
Make a note of the output of this command. You use this value
(encrypted_new_admin_password) in the remaining parts of this step.
b. Create a backup of the /root/keystonerc file:
cp -p /root/keystonerc /root/keystonerc.orig
c. Edit the /root/keystonerc file and replace the encrypted value in the
OS_PASSWORD entry with the encrypted new value as follows:
v Before:
export OS_PASSWORD=$(openstack-obfuscate -u encrypted_old_admin_password )
v After:
export OS_PASSWORD=$(openstack-obfuscate -u encrypted_new_admin_password )
4. Edit the configuration files on the various servers as listed in Table 11, to update
the admin_password entry to the encrypted new value as follows:
v Before:
admin_password=old_admin_password
v After:
admin_password=encrypted_new_admin_password
Table 11. Configuration files that contain the admin password

Server            Component   Configuration files
Central Server 1  Ceilometer  /etc/ceilometer/ceilometer.conf
Region Server     Cinder      /etc/cinder/cinder.conf
                  Glance      /etc/glance/glance-registry.conf
                              /etc/glance/glance-api.conf
                  Heat        /etc/heat/heat.conf
                  Nova        /etc/nova/nova.conf
                  Neutron     /etc/neutron/neutron.conf
Note: After you change the admin password, you might be unable to log in.
This usually happens because the keystone process has failed. If login
attempts are refused for all passwords, check the keystone process and
restart it if necessary.
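As a quick check (a sketch only; on each server, limit the file list to the files from Table 11 that exist on that server), you can confirm which configuration files still carry an admin_password entry:
grep -Fn admin_password /etc/cinder/cinder.conf /etc/glance/glance-registry.conf /etc/glance/glance-api.conf /etc/heat/heat.conf /etc/nova/nova.conf /etc/neutron/neutron.conf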
5. If the IBM Cloud Orchestrator system is configured to attach to a Public Cloud
Gateway, you must update the keystone administrator credentials for the
Public Cloud Gateway in the /opt/ibm/pcg/etc/admin.json file. For details of the
process, see Changing the Keystone administrator password on page 693.
3. Select the user that you want to edit. From the Actions column on the right,
click Edit.
4. Enter the new password in the Password and Confirm Password fields.
5. Click Update User.
Example output:
nova:x:503:503::/home/nova:/bin/sh
glance:x:506:506::/home/glance:/bin/bash
cinder:x:509:509::/home/cinder:/bin/bash
e. Update the operating-system password for the database user IDs that you
identified in the previous step by running the following commands. After
each command, you must enter the new password.
passwd glance
passwd cinder
passwd nova
2. On Central Server 2, update the DB2-related password for the Keystone service
in the OpenStack configuration file, as shown in the following example:
a. Find the connection entry in the Keystone configuration file:
grep -Fnr connection /etc/keystone/keystone.conf
Example output:
28:connection =
W0lCTTp2MV12b3pfcW9fZm46Ly94ZnFvOmNuZmZqMGVxQDE3Mi4xOS41LjE2MDo1MDAwMC9iY3JhZmducA==
Make a note of the output of this command. You use this value in the
remaining parts of this step.
3. On each Region Server, update the DB2-related password for the Cinder,
Glance, and Nova services in the OpenStack configuration files.
Important:
v Do not reuse any obfuscated connection string in other connection strings.
v Deobfuscate and obfuscate step-by-step each line with the new password.
a. If the region is a non-shared DB2 region, log on to each Region Server as a
root user. Otherwise, log on to Central Server 1 as a root user.
b. To find all the places in the configuration files that have to be changed, run
the following command:
grep -Fnr sql_connection /etc
Example output:
/etc/nova/nova.conf:70:sql_connection= nova_connection_details
/etc/cinder/cinder.conf:17:sql_connection = cinder_connection_details
/etc/glance/glance-registry.conf:27:sql_connection = glance-registry_connection_details
/etc/glance/glance-api.conf:32:sql_connection = glance-api_connection_details
c. For each entry identified in the previous step, decode the DB2 connect
information:
openstack-obfuscate -u connection_details
ibm_db_sa://userid:old_password@DB2_host_IP_address:50000/openstac
Make a note of the output of this command. You use this value in the
remaining parts of this step.
e. Edit the corresponding name.conf configuration file to comment out the
original sql_connection entry, and add the encrypted new password
information in a new entry.
f. Run the following command again to check that all files were changed:
grep -Fnr sql_connection /etc
Example output:
/etc/nova/nova.conf:70:#sql_connection= orig_nova_connection_details
/etc/nova/nova.conf:70:sql_connection= new_nova_connection_details
/etc/cinder/cinder.conf:17:#sql_connection = orig_cinder_connection_details
/etc/cinder/cinder.conf:17:sql_connection = new_cinder_connection_details
/etc/glance/glance-registry.conf:27:#sql_connection = orig_glance-registry_connection_details
/etc/glance/glance-registry.conf:27:sql_connection = new_glance-registry_connection_details
/etc/glance/glance-api.conf:32:#sql_connection = orig_glance-api_connection_details
/etc/glance/glance-api.conf:32:sql_connection = new_glance-api_connection_details
The output should include two entries per connection: the original entry
(now commented out) and the new entry.
4. On Central Server 1, restart all of the OpenStack services, as described in
Starting or stopping IBM Cloud Orchestrator on page 221. For example, for
an installation that is not highly available, run the following commands:
b. Select Resources.
c. Select JDBC.
d. Select Data sources and click BPM Business Space data source.
e. Click the option JAAS - J2C authentication data.
f. Click BPM_DB_ALIAS, and insert the new password. Click Apply to
validate the change.
g. Repeat step 2f for the CMN_DB_ALIAS and PDW_DB_ALIAS values.
h. When prompted to save your changes, click Save directly to the master
configuration.
i. Test the DB connection by clicking Test connection and selecting BPM
Business Space data source.
j. Restart Business Process Manager.
If you get errors while synchronizing the changes, log out and log in again, and
try to modify the password again.
For more information about updating passwords in WebSphere Application
Server, see Updating the data source authentication alias.
Example output:
28:connection =
W0lCTTp2MV12b3pfcW9fZm46Ly94ZnFvOmNuZmZqMGVxQDE3Mi4xOS41LjE2MDo1MDAwMC9iY3JhZmducA==
5. For each entry identified in the previous step, decode the DB2 connect
information:
openstack-obfuscate -u connection_details
Example:
openstack-obfuscate
-u W0lCTTp2MV12b3pfcW9fZm46Ly94ZnFvOmNuZmZqMGVxQDE3Mi4xOS41LjE2MDo1MDAwMC9iY3JhZmducA==
6. For each entry, encrypt the DB2 connect information with the new password:
Example:
openstack-obfuscate ibm_db_sa://userid:new_password@DB2_host_IP_address:50000/openstac
Example output:
new_connection_details
Make a note of the output of this command. You use this value in the
remaining parts of this step.
7. Edit the corresponding name.conf file to comment out the original connection
or sql_connection entry, and add the encrypted new password information in a
new entry, as shown in the following example:
# connection = W0lCTTp2MV12b3pfcW9fZm46Ly94ZnFvOmNuZmZqMGVxQDE3Mi4xOS41LjE2MDo1MDAwMC9iY3JhZmducA==
connection = W0lCTTp2MV12b3pfcW9fZm46Ly94ZnFvOmFyamNuZmZqMGVxQDE3Mi4xOS41LjE2MDo1MDAwMC9iY3JhZmducA==
8. When you change the Neutron password, remember also to change the
password in the configuration file of the Neutron network node.
9. Start the IBM Cloud Orchestrator components that were shut down in step 1 on
page 134.
Table 12.

Component      Configuration file                 Configuration key   User
Identity       /etc/keystone/keystone.conf        connection          keystone
Compute        /etc/nova/nova.conf                connection          nova
Image          /etc/glance/glance-api.conf        sql_connection      glance
               /etc/glance/glance-registry.conf
Storage        /etc/cinder/cinder.conf            connection          cinder
Network        /etc/neutron/neutron.conf          connection          neutron
Orchestration  /etc/heat/heat.conf                connection          heat
Telemetry      /etc/ceilometer/ceilometer.conf    connection          ceilometer
Example output:
encrypted_new_database_password
Make a note of the output of this command. You use this value in the next
step.
3. On Central Server 2, edit the /etc/openstack-dashboard/local_settings file as
follows:
a. Locate the DATABASES section, which is similar to the following text:
# A dictionary containing the settings for all databases to be used with
# Django. It is a nested dictionary whose contents maps database aliases
# to a dictionary containing the options for an individual database.
DATABASES = {
    'default': {
        'ENGINE': 'ibm_db_django',
        'NAME': 'openstac',
        'USER': 'dash',
        'PASSWORD': utils.decode_opt('W0lCTTp2MV1yaHYzMjU4ZQ=='),
        'HOST': '172.19.8.65',
        'default-character-set': 'utf8'
    },
}
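The PASSWORD entry is then presumably updated to use the encrypted new value noted in the previous step. The following line is shown only as an illustration of the pattern; the actual value is the output of openstack-obfuscate:
'PASSWORD': utils.decode_opt('encrypted_new_database_password'),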
3. Edit the corresponding name.conf file to comment out the original entry, and
add the encrypted new password information in a new entry. The configuration
files and keys are listed in the following table:
Table 13.

Component   Configuration file                Configuration key
Compute     /etc/nova/nova.conf               host_password
Image       /etc/glance/glance-api.conf       vmware_server_password
Storage     /etc/cinder/cinder.conf           vmware_host_password
Telemetry   /etc/ceilometer/ceilometer.conf   host_password
4. Depending on the installation topology that you used, one or more jobs is
listed:
v If only one job is listed, make a note of the ID (that is, the long hexadecimal
string in the ID column).
v If two or more jobs are listed, make a note of the ID for the Central Server
job.
v If multiple IBM Cloud Orchestrator installations were deployed from the
same deployment server, use the time stamps to identify the correct Central
Server job.
Tip: If it is still not possible to identify the correct job:
a. Use the remaining steps in this section to discover the password used for
the jobs. Repeat the steps as necessary to identify the password for each
job ID.
b. When you generate the certificate requests, as described in the next
section, try each password in turn until you find the correct password to
unlock the certificate store.
5. Change to the Chef configuration directory:
cd /etc/chef
8. Identify the data bag whose name is user-OC_ENV-<guid>, where <guid> is the
ID of the installation task as identified in step 4.
9. List the items in the data bag:
knife data bag show <data_bag_name>
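For example, assuming the Central Server job ID identified in step 4 is 80a8b343-8c92-4558-a18d-3b5d0836403c (an illustrative value only), the command would be:
knife data bag show user-OC_ENV-80a8b343-8c92-4558-a18d-3b5d0836403c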
4. Check that the password from the first section works and get a list of the
certificates in the certificate store:
./gskcmd -cert -list -db key.kdb -pw <password>
5. The output should show two certificates. The name of the first certificate is the
fully qualified domain name (FQDN) of the virtual address of Central Server
2. Make a note of this name because you are required to enter the name in the
following steps when <fqdn-cs2> is specified. <fqdn-iwd> is the FQDN of the
Workload Deployer server (this is needed only if you installed the Workload
Deployer component). The second certificate name starts with a long numeric
label followed by a number of parameters with the value unknown. This is an
internal certificate used by the IBM HTTP Server to forward traffic to the IBM
Cloud Orchestrator user interface server on port 7443. You must not modify or
delete this certificate.
6. Remove the existing SSL certificates by running the following commands:
./gskcmd -cert -delete -label <fqdn-cs2> -db key.kdb -pw <password>
./gskcmd -cert -delete -label <fqdn-iwd> -db key.kdb -pw <password>
Note: You must run the second command only if you installed the Workload
Deployer component.
7. Create the certificate requests by running the following commands:
./gskcmd -certreq -create -label <fqdn-cs2> \
-dn "CN=<fqdn>,O=<your organization>,OU=<your division>,C=<your country code>" \
-db key.kdb -file certreq_cs2.arm -pw <password>
./gskcmd -certreq -create -label <fqdn-iwd> \
-dn "CN=<fqdn>,O=<your organization>,OU=<your division>,C=<your country code>" \
-db key.kdb -file certreq_iwd.arm -pw <password>
Note: You must run the second command only if you installed the Workload
Deployer component.
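As an illustration only (the host name, organization values, and keystore password below are hypothetical), a filled-in request for the Central Server 2 certificate might look like this:
./gskcmd -certreq -create -label cs2.example.com \
-dn "CN=cs2.example.com,O=Example Corp,OU=Cloud Services,C=US" \
-db key.kdb -file certreq_cs2.arm -pw passw0rd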
8. In the current directory, locate the certreq_cs2.arm file and, if you installed the
Workload Deployer component, the certreq_iwd.arm file, and upload them to
your Certificate Authority (CA) for signing.
4. Check that the certificates were added to the certificate store by running the
following command:
./gskcmd -cert -list -db key.kdb -pw <password>
5. Make the Central Server 2 certificate the default certificate by running the
following command:
./gskcmd -cert -setdefault -db key.kdb -pw <password> -label <fqdn_cs2>
6. Check the default certificate by running the following command and add a
reminder to your calendar for the expiration date of the certificate:
./gskcmd -cert -getdefault -db key.kdb -pw <password>
a problem with the root and intermediate certificates occurred. Recheck that the
correct intermediate and root certificates were imported from the CA. If you
have to import more certificates, make sure that the default certificate selection
does not change and correct it, if needed.
4. Once the browser connects, use the browser to examine the certificate and
confirm it is as expected. If it is not the correct certificate, recheck which
certificate is the default as detailed before.
l. Choose the certificate that matches the Central Server 2 FQDN from the
dropdown list.
m. Enter the same label (the Central Server 2 FQDN) into the Imported
certificate alias field.
n. Press OK.
Note: WebSphere will import the other certificates from the trust chain
automatically.
o. Save the change directly to the master configuration.
p. Repeat the steps from e to o for the Workload Deployer FQDN.
3. Restart WebSphere. At the terminal window enter:
service bpm-server restart
4. Copy the certificate store from the primary node to the secondary node:
scp <cs2 primary node hostname>:/opt/IBM/HTTPServer/bin/key.kdb
5. If the secondary Central Server 2 was only suspended during the work on
primary Central Server 2, you must restart BPM on the secondary node so that
it uses the new certificate:
service bpm-server restart
6. You do not need to restart the IBM HTTP Server on the secondary node since it
will not normally be running and will be started automatically if the primary
Central Server 2 node fails.
3. Back up the Workload Deployer server certificate located in the following path:
/opt/ibm/rainmaker/purescale.app/private/expanded/ibm/rainmaker.rest-4.1.0.0/config/rm.p12
source /etc/profile.d/jdk_iwd.sh
keytool -v -importkeystore \
-srckeystore /opt/ibm/rainmaker/purescale.app/private/expanded/ibm/rainmaker.rest-4.1.0.0/config/rm.p12 \
-srcstoretype PKCS12 -destkeystore /opt/ibm/maestro/maestro/usr/resources/security/KSTrustStore.jks \
-deststoretype JKS -deststorepass pureScale -srcstorepass <password>
To replace the certificate for the Workload Deployer command line, perform the
following steps:
1. Log on to Central Server 3 (Workload Deployer server).
2. Switch to /opt/ibm/rainmaker/purescale.app/private/expanded/ibm/
rainmaker.ui-4.1.0.0/public/downloads.
3. Unzip the file deployer.cli-5.0.0.0-<buildid>.zip, to the current folder, for
example:
unzip -q deployer.cli-5.0.0.0-20140815194859.zip
Note: There will be only one file that matches this naming scheme.
4. Move the file deployer.cli-5.0.0.0-<buildid>.zip to a backup folder.
5. Copy the Workload Deployer server certificate to the files deployer.cli/lib/
cb.p12 and deployer.cli/lib/deployer-ssl.p12, overwriting the existing files.
Make sure the files have the correct access rights (640):
chmod 640 deployer.cli/lib/cb.p12 deployer.cli/lib/deployer-ssl.p12
For example:
/opt/ibm/rainmaker/purescale.app/zero passwordTool -e veryS3cure;
<xor>KTotJgxsPCotOg==
where for:
running_deleted_instance_action
You can set:
running_deleted_instance_poll_interval
You can set:
0 = no check
>0 = period of time in seconds when the task will be executed
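As a hedged illustration only (the values below are not taken from this guide; check the Nova documentation for your level before using them), these options appear in nova.conf in a form similar to:
# Illustrative nova.conf entries (example values, not recommendations)
running_deleted_instance_action = reap
running_deleted_instance_poll_interval = 1800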
http service, chef server
    Check status:  /opt/chef-server/bin/chef-server-ctl status
    Start command: /opt/chef-server/bin/chef-server-ctl start
    Stop command:  /opt/chef-server/bin/chef-server-ctl stop

Heat
    Check status:  service openstack-heat-api status
                   service openstack-heat-engine status
    Start command: service openstack-heat-api start
                   service openstack-heat-engine start
    Stop command:  service openstack-heat-api stop
                   service openstack-heat-engine stop

Deployment service
    Check status:  service ds-engine status
    Start command: service ds-engine start
    Stop command:  service ds-engine stop

Nova
    Check status:  service openstack-nova-api status
                   service openstack-nova-scheduler status
                   service openstack-nova-network status
                   service openstack-nova-conductor status
                   service openstack-nova-compute status
    Start command: service openstack-nova-api start
                   service openstack-nova-scheduler start
                   service openstack-nova-network start
                   service openstack-nova-conductor start
                   service openstack-nova-compute start
    Stop command:  service openstack-nova-api stop
                   service openstack-nova-scheduler stop
                   service openstack-nova-network stop
                   service openstack-nova-conductor stop
                   service openstack-nova-compute stop

Glance
    Check status:  service openstack-glance-api status
                   service openstack-glance-registry status
    Start command: service openstack-glance-api start
                   service openstack-glance-registry start
    Stop command:  service openstack-glance-api stop
                   service openstack-glance-registry stop

DB2
    Start command: su db2inst1; db2start
    Stop command:  su db2inst1; db2stop
+--------------------------------------+----------------------------+----------------------------+
| pjob_id                              | created_at                 | updated_at                 |
+--------------------------------------+----------------------------+----------------------------+
|                                      | 2014-04-16T03:07:39.567679 | 2014-04-17T05:10:22.080938 |
| 80a8b343-8c92-4558-a18d-3b5d0836403c | 2014-04-17T05:38:13.854903 | 2014-04-17T12:29:11.730405 |
+--------------------------------------+----------------------------+----------------------------+
2. Retrieve the available resource for a job. Get a list of the available resources
for a job with the following command:
source /root/keystonerc; ds job-resources-list <job_id>
For example:
ds job-resources-list d66993ee-2aa6-4182-8897-ac6a633b1959
+-------------------+------------------+-----------------------------+-----------+--------------------------------------+
| name              | type             | run_list                    | run_order | node_id                              |
+-------------------+------------------+-----------------------------+-----------+--------------------------------------+
| kvm_region_server | Existing Machine | role[allinone]              | 1         | 921b8f28-fdad-4856-9299-4cb0b596abe7 |
| compute           | Existing Machine | role[os-compute-worker-sco] | 2         | 04e73c27-1b77-4f00-9444-86f74ace54f8 |
+-------------------+------------------+-----------------------------+-----------+--------------------------------------+
3. Register more nodes. More nodes can be added as additional compute nodes.
For example:
ds node-create -t IBM::SCO::Node
-p {Address: 192.0.2.108, Port: 22, User: root, Password: passw0rd} computeB
ds node-list
+--------------------------------------+----------+----------------+--------+
| id                                   | name     | type           | status |
+--------------------------------------+----------+----------------+--------+
| df9dba9b-fae0-46cf-a0ff-dacd736c294f | central1 | IBM::SCO::Node | INUSE  |
| d68bb1c7-6ca8-4d5b-a491-1e078062c1c9 | computeB | IBM::SCO::Node | FREE   |
| e31532bc-941d-4ffc-8f5f-3aa7cc388252 | central2 | IBM::SCO::Node | INUSE  |
| 921b8f28-fdad-4856-9299-4cb0b596abe7 | region   | IBM::SCO::Node | INUSE  |
| 04e73c27-1b77-4f00-9444-86f74ace54f8 | compute  | IBM::SCO::Node | INUSE  |
+--------------------------------------+----------+----------------+--------+

+--------------------------------------+-------------------+----------------------------+----------------------------+
| job_id                               | resource          | created_at                 | updated_at                 |
+--------------------------------------+-------------------+----------------------------+----------------------------+
| 80a8b343-8c92-4558-a18d-3b5d0836403c | central_server_1  | 2014-04-16T03:03:48.641240 | 2014-04-16T03:07:39.639649 |
|                                      |                   | 2014-04-22T07:17:36.746637 |                            |
| 80a8b343-8c92-4558-a18d-3b5d0836403c | central_server_2  | 2014-04-16T03:04:08.481278 | 2014-04-16T03:07:39.611661 |
| d66993ee-2aa6-4182-8897-ac6a633b1959 | kvm_region_server | 2014-04-16T03:04:46.041518 | 2014-04-17T05:38:13.884313 |
| d66993ee-2aa6-4182-8897-ac6a633b1959 | compute           | 2014-04-16T03:05:08.254468 | 2014-04-17T05:38:13.898513 |
+--------------------------------------+-------------------+----------------------------+----------------------------+
4. Associate the nodes to the existing job by running the following command:
source /root/keystonerc; ds job-associate-node [-N <RES1=NODE1;RES2=NODE2...>] <NAME or ID>
For example:
source /root/keystonerc;
ds job-associate-node -N compute=d68bb1c7-6ca8-4d5b-a491-1e078062c1c9
d66993ee-2aa6-4182-8897-ac6a633b1959
If you need to associate one or more nodes to a new job, you can
disassociate the node (or multiple nodes) from the current job by running the
following command:
source /root/keystonerc; ds job-disassociate-node [-N <NODE1,NODE2...>]
Optional arguments:
-N <NODE1,NODE2...>, --nodes <NODE1,NODE2...>
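For example, to release the computeB node that was associated earlier (an illustrative invocation that reuses the node ID from the ds node-list output above):
source /root/keystonerc;
ds job-disassociate-node -N d68bb1c7-6ca8-4d5b-a491-1e078062c1c9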
After the execution finishes, the job is marked as UPDATED. For example:
ds job-list
+--------------------------------------+---------------+----------+--------------------------------------+----------------------------+----------------------------+
| id                                   | name          | status   | pjob_id                              | created_at                 | updated_at                 |
+--------------------------------------+---------------+----------+--------------------------------------+----------------------------+----------------------------+
| 80a8b343-8c92-4558-a18d-3b5d0836403c | centralserver | FINISHED |                                      | 2014-04-16T03:07:39.567679 | 2014-04-17T05:10:22.080938 |
| d66993ee-2aa6-4182-8897-ac6a633b1959 | regionserver  | UPDATED  | 80a8b343-8c92-4558-a18d-3b5d0836403c | 2014-04-17T05:38:13.854903 | 2014-04-17T12:29:11.730405 |
+--------------------------------------+---------------+----------+--------------------------------------+----------------------------+----------------------------+
where:
<DBINSTANCENAME>
Defines the DB2 instance name.
<OUTPUT_LOCATION>
Defines the backup package location. The ds-backup command packages all
files into a tar.gz file and puts it into this location.
[MODE]
Defines the database backup mode, which can be online or offline. This
argument is optional; the default value is online.
Note: You must stop all of the services connected to DB2 before performing an
offline backup.
Note: Some setup steps are needed to enable the online database backup.
For more information, see the DB2 documentation.
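The exact ds-backup command line is not shown here; based on the parameter list above, an invocation presumably takes the following form (an assumption, shown only as a sketch with example values):
ds-backup db2inst1 /tmp/backup online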
To restore with the ds-restore command:
ds-restore <DBINSTANCENAME> <BACKUP_PACKAGE>
where:
<DBINSTANCENAME>
Defines the DB2 instance name.
<BACKUP_PACKAGE>
Defines the backup package to restore, for example
/tmp/backup/ds20140528140442.tar.gz.
Note: You must stop all of the services connected to DB2 before performing a
restore.
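For example, to restore the backup package shown above into the db2inst1 instance (an illustrative invocation; substitute your own instance name and package path):
ds-restore db2inst1 /tmp/backup/ds20140528140442.tar.gz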
Upgrading
To upgrade IBM Cloud Orchestrator, complete these steps.
Important: Before any upgrade, create a backup of the Deployment Server and of
each IBM Cloud Orchestrator server. For example, create a snapshot of each virtual
server.
Before the upgrade, temporarily ensure that your IBM Cloud Orchestrator
passwords do not include any special characters, or the upgrade is likely to fail.
After the upgrade completes successfully, you can update the IBM Cloud
Orchestrator passwords to contain special characters.
v The IBM Cloud Orchestrator Deployment Service is installed and can connect to
all SmartCloud Orchestrator V2.3 Central Servers, Region Servers, and compute
nodes (for a KVM region). The version of Red Hat Enterprise Linux that is
installed on the Deployment Server must be the same as the version that is
installed on the SmartCloud Orchestrator V2.3 or V2.3.0.1 servers.
v The correct nameserver entry is configured in the /etc/resolv.conf file on the
Deployment Service server. SmartCloud Orchestrator V2.3 supports three DNS
options. If option A (built-in DNS) or option B (a built-in DNS with corporate
DNS as parent) is used, add a nameserver entry that points to Central Server 1.
If option C (corporate DNS) is used, add a nameserver entry that points to the
corporate DNS server.
v If you use Red Hat Enterprise Linux 6.5 as the operating system to host
SmartCloud Orchestrator V2.3 Central Servers, before upgrading to IBM Cloud
Orchestrator V2.4, ensure that the version of the openssl package on Central Server 3 is at
least openssl-1.0.1e-16.el6.x86_64. Otherwise, the upgrade procedure
might fail when mapping images between the Workload Deployer component
and OpenStack.
Note: If you are using Red Hat Enterprise Linux 6.4 on the Central Servers, you
must upgrade the openssl rpm to the latest version on the Central Servers when
you will deploy Red Hat Enterprise Linux virtual machines with cloud-init. If
newer openssl agents connect to an out of date openssl server, the deployed
virtual machine gets an exception on cloud-init processing and the pattern
stays in state Preparing middleware.
v For VMware Region Server upgrade, users of vSphere 5.0 or earlier must host
their WSDL files locally. These steps are applicable for vCenter 5.0 or ESXi 5.0.
You can either mirror the WSDL from the vCenter or ESXi server that you
intend to use, or you can download the SDK directly from VMware. Refer to
section vSphere 5.0 and earlier additional set up and Procedure 2.1. To mirror
WSDL from vCenter (or ESXi) in OpenStack, see the OpenStack VMware
vSphere configuration for how to prepare WSDL files and configure
wsdl_location.
Note: Make sure that the VMware region nova user has permission to access the
folder and file configured for wsdl_location in the /etc/nova.conf file. For
upgrade from SmartCloud Orchestrator V2.3, you do not need to configure
wsdl_location in the VMware region /etc/nova.conf file. Instead, you must
configure the vmwsdlloc option in the upgrade configuration file. For more
information, see Configuring the upgrade configuration file on page 158.
v Do not install more than one IBM Cloud Orchestrator instance on the same
vCenter environment if the instances have access to the same resources. Each
IBM Cloud Orchestrator instance must use a different user ID to access the
vCenter. The intersection between the resources that are seen by these users (for
example, clusters and datastores) must be empty.
v If you have IBM SmartCloud Orchestrator Content Pack for UrbanCode Deploy
installed in SmartCloud Orchestrator V2.3, you install the new version from IBM
Cloud Orchestrator before upgrading from SmartCloud Orchestrator V2.3 to IBM
Cloud Orchestrator V2.4.
Restriction: The self-service categories Virtual System Operations (Single
Instance) and Virtual System Operations (Multiple Instances) are not migrated.
These categories and related offerings are replaced with new offerings in IBM
Cloud Orchestrator V2.4. If you added your own customized offerings to either of
these categories and you want to migrate your offerings, create a new self-service
category, and move your offerings from Virtual System Operations (Single
You can upgrade to IBM Cloud Orchestrator by running one of the following
procedures:
v Using the Deployment Service wizard to upgrade
v Using the command-line interface to upgrade on page 152
Procedure
1. Start the Deployment Service wizard by running the following command on
the Deployment Server:
v For the root user:
source /root/openrc; ds wizard
v For a non-root user, create an openrc file in your home directory, and insert
the following content:
export OS_USERNAME=admin
export OS_PASSWORD=$(openstack-obfuscate -u encrypted_admin_password)
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://fqdn_or_ip_address_of_deployment_service_node:5000/v2.0
export OS_REGION_NAME=Deployment
To run any ds command, you must first source the openrc file:
source ~/openrc;
ds wizard
b. Select option [0] and specify the SmartCloud Orchestrator 2.3.x Central
Server 1 information like IP address, port, user, and password or keyfile.
c. Select option [1] and specify the following parameters:
force_discover
Specifies whether to recollect the information about the SmartCloud Orchestrator
2.3.x environment if you already ran the discovery procedure.
location
Specifies the location where the upgrade configuration file is stored.
sco_admin_user
Specifies the user name of the SmartCloud Orchestrator 2.3.x
administrator.
sco_admin_password
Specifies the password of the SmartCloud Orchestrator 2.3.x
administrator.
use_extdb
Specifies if SmartCloud Orchestrator 2.3.x uses an external database.
d. Select option [2]. The discovery procedure starts.
e. When the discovery procedure completes, the following options are
displayed:
Configure upgrade configuration file for SCO2.3.X upgrade.
============================================================
Configure upgrade configuration file for SCO2.3.X upgrade.
[0] Configure central server for SCO2.3.X upgrade ...
[1] Configure region VMwareRegion for SCO2.3.X upgrade ...
[2] Configure region KVMRegion for SCO2.3.X upgrade ...
[3] End editing
Select option [0] to configure the upgrade options for Central Servers.
Select the other options to configure the upgrade options for each Region
Server.
f. Select option [3] to end the editing. The discovery procedure is finished and
the following options are displayed:
[0] New IBM Cloud Orchestrator deployment.
- Start a new IBM Cloud Orchestrator deployment.
[1] Modify IBM Cloud Orchestrator deployment.
- Modify IBM Cloud Orchestrator deployment, add region server or KVM compute node.
[2] Discover an IBM SmartCloud Orchestrator 2.3.x topology.
- Discover an IBM SmartCloud Orchestrator 2.3.x topology.
[3] Upgrade an IBM SmartCloud Orchestrator 2.3.x deployment.
- Upgrade an IBM SmartCloud Orchestrator 2.3.x deployment.
[4] Deployment job(s) status.
- Display deployment job(s) status.
b. Select option [0] and specify the SmartCloud Orchestrator 2.3.x Central
Server 1 information like IP address, port, user, and password or keyfile.
c. Select option [1]. The following options are displayed:
Start Deployment
==================
Confirm to start the deployment
[0] Start the deployment.
- Start the IBM Cloud Orchestrator deployment.
endpoints and hypervisor lists, and runs Workload Deployer commands to collect
Workload Deployer image mappings and so on. The collected information is saved
on the Deployment Server.
You can subsequently run the ico-upgrade-tool command without the --force
option when the SmartCloud Orchestrator servers are stopped. In this case, the
saved information is read.
To force the ico-upgrade-tool command to connect to the SmartCloud
Orchestrator servers and collect information again, use the --force option:
ico-upgrade-tool discover -C SCO23_Central_Server1_IP_address
-p SCO23_Central_Server1_root_password
--admin-user SCO23_admin_user
--admin-pass SCO23_admin_password
--force
-f upgrade_config_file
upgrade
Upgrades SmartCloud Orchestrator to IBM Cloud Orchestrator.
help
Procedure
1. Back up the Central Server and Region Server virtual machines and the
Compute Nodes.
For VMware-hosted Central and Region Servers, see Taking Snapshots
(VMware) to take snapshots for Central Servers and Region Servers.
2. Run the following commands on the Deployment Server to discover the
SmartCloud Orchestrator topology and generate the upgrade configuration file
to be used by the upgrade procedure:
source /root/openrc
ico-upgrade-tool discover -C CENTRAL_SRV1_IP -p CENTRAL_SRV1_ROOT_PASS
--admin-user SCO23_ADMIN_USER
--admin-pass SCO23_ADMIN_PASSWORD
-f UPGRADE_CFG_FILE
where:
CENTRAL_SRV1_IP
Specifies the IP address of Central Server 1.
CENTRAL_SRV1_ROOT_PASS
Specifies the root password of Central Server 1.
SCO23_ADMIN_USER
Specifies the user name of the SmartCloud Orchestrator admin user.
SCO23_ADMIN_PASSWORD
Specifies the user password of the SmartCloud Orchestrator admin
user.
UPGRADE_CFG_FILE
Specifies the file where all the discovered details of a SmartCloud
Orchestrator system are written to.
Note: If SmartCloud Orchestrator uses an external database, run the
ico-upgrade-tool discover command with the --extdb option:
ico-upgrade-tool discover -C CENTRAL_SRV1_IP -p CENTRAL_SRV1_ROOT_PASS
--admin-user SCO23_ADMIN_USER
--admin-pass SCO23_ADMIN_PASSWORD
--extdb -f UPGRADE_CFG_FILE
If you run the ico-upgrade-tool discover command again, it just reads the
saved information. In this case, you do not need to have SmartCloud
Orchestrator started. If you want to force discovery to collect the information
again, connect to the SmartCloud Orchestrator server and run the command
with the --force option.
ico-upgrade-tool discover -C YOUR_SCO23_CS1_ADDR -p PASSWORD
--admin-user SCO23_ADMIN_USER
--admin-pass SCO23_ADMIN_PASSWORD --force
-f UPGRADE_CFG_FILE
Note: Be sure that the SmartCloud Orchestrator servers are started when you
run the ico-upgrade-tool discover for the first time or when you specify the
--force option.
iso9660 loop
4. Reboot your servers after the Red Hat system upgrade. If you have upgraded
your Red Hat system according to step 3, reboot your servers to make the
new Linux kernel work.
5. See Configuring the upgrade configuration file on page 158 for information
about how to configure the upgrade configuration file.
6. Stop the SmartCloud Orchestrator processes.
If System Automation Application Manager is not used to control SmartCloud
Orchestrator, see http://www-01.ibm.com/support/knowledgecenter/
SS4KMC_2.3.0/com.ibm.sco.doc_2.3/t_start_stop_sco.html to start or stop
SmartCloud Orchestrator.
If System Automation Application Manager is used to control SmartCloud
Orchestrator, see http://www-01.ibm.com/support/knowledgecenter/
SS4KMC_2.3.0/com.ibm.sco.doc_2.3/t_control_mgmt_stack.html to request
offline SmartCloud Orchestrator from System Automation Application
Manager.
7. To upgrade SmartCloud Orchestrator to IBM Cloud Orchestrator, run the
following commands on the Deployment Server:
source /root/openrc
ico-upgrade-tool upgrade -C CENTRAL_SRV1_IP -p CENTRAL_SRV1_ROOT_PASS
where:
CENTRAL_SRV1_IP
Defines the IP address of Central Server 1.
CENTRAL_SRV1_ROOT_PASS
Defines the root password of Central Server 1.
8. For VMware region upgrade, if you are using a datastore cluster or resource
pool on your VMware region managed to vCenter, see Advanced post
configuration after a VMware region is upgraded on page 168.
9. The ico-upgrade-tool command creates deployment service jobs for the
upgrade. Wait until the status of all the upgrade jobs is FINISHED.
After the upgrade, the SmartCloud Orchestrator OpenStack configuration files
are overwritten with the IBM Cloud Orchestrator configurations. The
customized configuration in SmartCloud Orchestrator, for example, the LDAP
configuration, must be merged back manually. All the SmartCloud
Orchestrator OpenStack configuration files are backed up to
/opt/ibm/openstack_backup/sco23_upgrade/. For example, the keystone
configuration files are backed up to /opt/ibm/openstack_backup/
sco23_upgrade/keystone on Central Server 2.
After the configuration files are merged back, the corresponding service must
be restarted. For example, after the LDAP configuration files are merged back,
run the /etc/init.d/openstack-keystone restart command to restart the
keystone service.
10. If System Automation Application Manager is used to control IBM Cloud
Orchestrator, reconfigure System Automation Application Manager by
following the procedure described in http://www-01.ibm.com/support/
knowledgecenter/SS4KMC_2.3.0/com.ibm.sco.doc_2.3/t_conf_saappman.html.
Re-configure service autostart on the Central Servers, Region Servers, and
Compute Nodes, and re-execute the step to copy control scripts from
/iaas/scorchestrator to /home/saam on all the servers.
What to do next
The ico-upgrade-tool exits when the upgrade jobs for Central Servers and Region
Servers are executed. The tool does not report failure or success status of the job.
To check if the upgrade for SmartCloud Orchestrator has completed, use the
following commands on the Deployment Server:
source /root/openrc
ds job-list
You can see the status of the Central Servers upgrade from the
CentralServer_CS1_SUFFIX_Upgrade job, and the status of the Region Servers upgrade
from the YOUR_REGION_NAME_CS1_SUFFIX_Upgrade job. For example, if your region
name is VmwareRegion and the Central Server IP is 192.0.2.192, your job name is
VmwareRegion_192_0_2_192_Upgrade.
To see the job details, use the following command:
ds job-show JOB_ID
where JOB_ID is the job ID of the Central Server or Region Server upgrade job.
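For example, using a Region Server job ID as displayed by ds job-list (an illustrative value only):
ds job-show d66993ee-2aa6-4182-8897-ac6a633b1959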
If the upgrade finished successfully, the status of all the upgrade jobs is FINISHED.
If an upgrade job failed with status ERROR, refer to the /var/log/ds/ds-engine.log
file on the Deployment Server for details.
If the upgrade failed, you can roll back your IBM Cloud Orchestrator environment
with the data backed up.
For VMware-hosted Central and Region Servers, to restore Central Servers and
Region Servers, see http://pubs.vmware.com/vsphere-51/topic/
com.vmware.vsphere.vm_admin.doc/GUID-E0080795-C2F0-4F05-907C2F976433AC0D.html.
For KVM hosted Central and Region Servers, to restore Central Servers and Region
Servers, see http://www-01.ibm.com/support/knowledgecenter/SS4KMC_2.3.0/
com.ibm.sco.doc_2.3/t_backup_services.html.
Note:
v The /opt/ibm/orchestrator/SCOrchestrator.py script (moved from the
/iaas/scorchestrator directory in SmartCloud Orchestrator to the
/opt/ibm/orchestrator directory in IBM Cloud Orchestrator) on Central Server 1
can be used to check the status of services after an upgrade. The status of each
service is shown below in the output from the script. Some of the services have
changed in IBM Cloud Orchestrator. These services will now show a status of
offline and should not be started. The services that are disabled after an
upgrade are as follows:
"openstack-nova-metadata-api" on a KVM compute node.
"openstack-nova-compute" and "openstack-nova-network" on a VMware region.
For a VMware region upgrade, the compute and network services are named
openstack-nova-compute-CLUSTER_NAME and openstack-nova-network-CLUSTER_NAME instead.
[root@cs-144-1 scorchestrator]# ./SCOrchestrator.py status
Environment in : /opt/ibm/orchestrator/scorchestrator/SCOEnvironment.xml
Cloud topology refreshed successfully to file:
/opt/ibm/orchestrator/scorchestrator/SCOEnvironment_fulltopo.xml
===>>> Collecting Status for SmartCloud Orchestrator
===>>> Please wait ======>>>>>>
Component                      Hostname         Status
------------------------------------------------------
IHS                            172.16.144.204   online
bpm-dmgr                       172.16.144.204   online
bpm-node                       172.16.144.204   online
bpm-server                     172.16.144.204   online
httpd                          172.16.144.204   online
iwd                            172.16.144.203   online
openstack-cinder-api           172.16.144.206   online
openstack-cinder-api           172.16.144.205   online
openstack-cinder-scheduler     172.16.144.206   online
openstack-cinder-scheduler     172.16.144.205   online
openstack-cinder-volume        172.16.144.206   online
openstack-cinder-volume        172.16.144.205   online
openstack-glance-api           172.16.144.206   online
openstack-glance-api           172.16.144.205   online
openstack-glance-registry      172.16.144.206   online
openstack-glance-registry      172.16.144.205   online
openstack-heat-api             172.16.144.206   online
openstack-heat-api             172.16.144.205   online
openstack-heat-api-cfn         172.16.144.206   online
openstack-heat-api-cfn         172.16.144.205   online
openstack-heat-api-cloudwatch  172.16.144.206   online
openstack-heat-api-cloudwatch  172.16.144.205   online
openstack-heat-engine          172.16.144.206   online
openstack-heat-engine          172.16.144.205   online
openstack-keystone             172.16.144.202   online
openstack-nova-api             172.16.144.206   online
openstack-nova-api             172.16.144.205   online
openstack-nova-cert            172.16.144.206   online
openstack-nova-cert            172.16.144.205   online
openstack-nova-compute         172.16.52.20     online
openstack-nova-compute         172.16.144.205   offline
openstack-nova-conductor       172.16.144.206   online
openstack-nova-conductor       172.16.144.205   online
openstack-nova-metadata-api    172.16.52.20     offline
openstack-nova-metadata-api    172.16.144.206   online
openstack-nova-metadata-api    172.16.144.205   online
openstack-nova-network         172.16.52.20     online
openstack-nova-network         172.16.144.205   offline
openstack-nova-novncproxy      172.16.144.206   online
openstack-nova-novncproxy      172.16.144.205   online
openstack-nova-scheduler       172.16.144.206   online
openstack-nova-scheduler       172.16.144.205   online
pcg                            172.16.144.202   online
qpidd                          172.16.144.201   online
qpidd                          172.16.144.206   online
qpidd                          172.16.144.205   online
swi                            172.16.144.204   online
Procedure
1. Upgrade the external database to DB2 V10.5 manually. See the information
about upgrading DB2 manually in the IBM Knowledge Center at
http://www-01.ibm.com/support/docview.wss?uid=swg21633449.
2. There are new OpenStack components added to IBM Cloud Orchestrator V2.4.
For the Central Servers, OpenStack horizon and ceilometer are newly added to Central
Server 4 in IBM Cloud Orchestrator V2.4. Perform the following steps on the
database server:
a. Create a DB2 database for OpenStack horizon and ceilometer, or use the
same DB2 database as for the other OpenStack components.
b. Create a DB2 user for OpenStack horizon and ceilometer.
For each Region Server, OpenStack heat and ceilometer are newly added to Region
Servers in IBM Cloud Orchestrator V2.4. Perform the following steps on the
database server:
a. Create a DB2 database for OpenStack heat and ceilometer, or use the same
DB2 database as for the other OpenStack components.
b. Create a DB2 user for OpenStack heat and ceilometer.
3. Upgrade SmartCloud Orchestrator V2.3 to IBM Cloud Orchestrator V2.4 with
the command ico-upgrade-tool. For information about upgrading, see
Upgrading from SmartCloud Orchestrator V2.3 or V2.3.0.1 on page 147.
4. The KVM region is upgraded automatically after step 2. For the VMware
region, manually migrate the VMware Region database after step 2 is
completed. For more information, see Migrating data in a VMware region on
page 164.
example, centralserver part is for all central servers, KVMRegion part is for the
KVM region, and VMwareRegion part is for the VMware region.
Most of the options are generated with the values collected from SmartCloud
Orchestrator V2.3 except what you must do in the following procedure.
Procedure
1. Make sure that no option values in the upgrade configuration file are empty.
Note: If your SmartCloud Orchestrator V2.3 environment uses corporate DNS,
the SingleSignOnDomain entry in the upgrade configuration file is empty. You
must set the SingleSignOnDomain parameter value to the DNS domain name
where the manage-from components are installed. For more information about
the SingleSignOnDomain parameter, see Customizing deployment parameters
on page 37.
Note: If your VMware region servers are connected to vCenter later than 5.0,
the vmwsdlloc option must be empty.
2. For the VMware region server, the following parameters are mandatory for the
vCenter configuration:
VMWareRegion: {
.......
"vcenter": {
......
"password": "",
"user": "",
"vminterface": ""
}
"vmdatastore": ""
.......
}
d. Make sure that the vmwsdlloc value starts with file:// if the WSDL files are
stored locally on the VMware region server. If the WSDL files are stored on
an HTTP server, the vmwsdlloc value should start with http://.
Make sure that the vCenter user name and password are correct. The vCenter
credentials can be validated in either of the following ways:
a. Log in to the vSphere Web Client with the user and password, and check
whether the credentials are authorized successfully.
b. In SmartCloud Orchestrator 2.3, there is a method to validate vCenter
credentials from the VMware region. For details refer to the documentation
at: http://www-01.ibm.com/support/knowledgecenter/SS4KMC_2.3.0/
com.ibm.sco.doc_2.3/t_config_sco_vmware.html?lang=en . For example:
# /opt/ibm/openstack/iaas/smartcloud/bin/nova-cloud-pingtest 192.0.2.11 admin passw0rd VMware
{"cloud": {"driver":"", "name":"", "description":"OpenStack managed VMware", \
"hostname":"192.0.2.11", "username":"scoadmin10", "password":"object00X", \
"type":"VMware", "port":443} }
{"results": {"status": "Cloud is OK and ready to deploy instances.", "id": "OK", "label": "OK"}}
# /opt/ibm/openstack/iaas/smartcloud/bin/nova-cloud-pingtest 192.0.2.11 admin wrongpass VMware
{"cloud": {"driver":"", "name":"", "description":"OpenStack managed VMware", \
"hostname":"192.0.2.11", "username":"scoadmin10", "password":"object0X", \
"type":"VMware", "port":443} }
{"results": {"status": "Unreachable Cloud: Login Failed. The user and password combination \
is invalid.", "id": "UNREACHABLE", "label": "Unreachable"}}
3. For the KVM region server, the compute node's password is mandatory. All
compute nodes for the KVM region are under the computenodes item. Each
compute node item is named with the compute node's host name. For
example, for a KVM region KVMRegion with compute node
scotest-d20-compute, the configuration file looks like this:
KVMRegion: {
    ......
    "computenodes": {                 # All compute nodes for the KVM region are put under this item
        "scotest-d20-compute": {      # compute node host name
            "password": "passw0rd"
        }
    }
    ......
}
4. For the options with default values, only modify them if they have been
manually changed after SmartCloud Orchestrator V2.3 was installed. These
options must be modified with the new values. For example, if the nova
database user's password has been changed after the SmartCloud Orchestrator
V2.3 installation, you must modify it with the new password. All plain text
passwords in the upgrade configuration file are encrypted after you run the
ico-upgrade-tool upgrade command.
5. External database configuration
If an external database is used for SmartCloud Orchestrator V2.3, an additional
option for the database IP address is created for each central and region
service. There is no default value for the database configuration options, for
example:
"image": {
    "db": {
        "name": "",
        "user": "",
        "address": ""
    },
    "servicepassword": "passw0rd"
}
All options with no default value must be input with a valid value.
Example
Sample upgrade configuration file
The following is a sample upgrade configuration file. The comments after # are
not included in the configuration file:
{
  "KVMRegion": {
    "compute": {                                # nova configuration
      "db": {
        "name": "openstac",                     # nova db name
        "user": "noa54773"                      # nova db user
      },
      "servicepassword": "passw0rd"             # nova service users password
    },
    "computenodes": {                           # KVMRegions compute nodes list
      "scotest-d20-compute": {                  # compute node hostname
        "password": "passw0rd"                  # compute nodes password
      }
    },
    "image": {                                  # glance configuration
      "db": {
        "name": "openstac",
        "user": "gle54773"
      },
      "servicepassword": "passw0rd"
    },
    "metering": {                               # ceilometer configuration
      "db": {
        "name": "openstac",
        "user": "cel54773"
      },
      "servicepassword": "passw0rd"
    },
    "network": {                                # neutron configuration
      "db": {
        "name": "openstac",
        "user": "neu54773"
      },
      "servicepassword": "passw0rd"
    },
    "orchestration": {                          # heat configuration
      "db": {
        "name": "openstac",
        "user": "het54773"
      },
      "servicepassword": "passw0rd"
    },
    "osdbpassword": "passw0rd",
    "osdbport": "50000",
    "regionname": "KVMRegion",
    "smartcloudpassword": "passw0rd",
    "template": "kvm_region_upgrade",
    "volume": {
      "db": {
        "name": "openstac",
        "user": "cir54773"
      },
      "servicepassword": "passw0rd"
    }
  },
  "VMWareRegion": {
    "compute": {
      "db": {
        "name": "openstac",
        "user": "noa20918"
      },
      "servicepassword": "passw0rd"
    },
    "image": {
      "db": {
        "name": "openstac",
        "user": "gle20918"
      },
      "servicepassword": "passw0rd"
    },
    "metering": {
      "db": {
        "name": "openstac",
        "user": "cel20918"
      },
      "servicepassword": "passw0rd"
    },
    "network": {
      "db": {
        "name": "openstac",
        "user": "neu20918"
      },
      "servicepassword": "passw0rd"
    },
    "orchestration": {
      "db": {
        "name": "openstac",
        "user": "het20918"
      },
      "servicepassword": "passw0rd"
    },
    "osdbpassword": "passw0rd",
    "osdbport": "50000",
    "regionname": "VMWareRegion",
    "smartcloud": {
      "db": {
        "password": "passw0rd",
        "user": "sce20918"
      },
      "vcenter": {
        "clusters": "Cluster1",
        "host": "192.0.2.11",
        "password": "",
        "user": "",
        "vminterface": ""
      }
    },
    "smartcloudpassword": "passw0rd",
    "template": "vmware_region_upgrade_db2",
    "volume": {
      "db": {
        "name": "openstac",
        "user": "cir20918"
      },
      "servicepassword": "passw0rd"
    }
  },
  "centralserver": {
    "BrowserSimpleTokenSecret": "rad/M4P8/MIVFGJewQagLw==",
    "SimpleTokenSecret": "8oG7vheB1vwyDgxPul7uzw==",
    "bpm": {
      "db": {
        "name": "BPMDB",
        "password": "passw0rd",
        "port": "50000",
        "user": "bpmuser"
      }
    },
    "cmn": {
      "db": {
        "name": "CMNDB",
        "password": "passw0rd",
        "port": "50000",
        "user": "bpmuser"
      }
    },
    "dashboard": {
      "db": {
        "name": "openstac",
        "port": "50000",
        "user": "dash"
      }
    },
    "db2": {
      "das": {
        "password": "passw0rd"
      },
      "fenced": {
        "password": "passw0rd"
      },
      "instance": {
        "password": "passw0rd"
      }
    },
    "keystone": {
      "adminpassword": "passw0rd",
      "admintoken": "c7778550072374148e7e",
      "adminuser": "admin",
      "db": {
        "name": "openstac",
        "port": "50000",
        "user": "ksdb"
      }
    },
    "metering": {
      "db": {
        "name": "openstac",
        "port": "50000",
        "user": "ceil"
      }
    },
    "osdbpassword": "passw0rd",
    "osdbport": "50000",
    "pdw": {
      "db": {
        "name": "PDWDB",
        "password": "passw0rd",
        "port": "50000",
        "user": "bpmuser"
      }
    },
    "regionname": "Master",
    "scui": {
      "SingleSignOnDomain": "sso.mycompany.com"
    },
    "smartcloudpassword": "passw0rd",
    "template": "central_server_upgrade"
  }
}
Procedure
1. Create clusters in vCenter.
2. Move all ESXi hosts that are not under clusters to a certain cluster.
3. Wait a few minutes for the synchronization from vCenter to SmartCloud
Orchestrator V2.3.x.
4. Upgrade the VMware Region.
5. Manually migrate the virtual machines that are under the ESXi hosts.
Procedure
1. Migrate the Availability Zone:
a. Discover the Availability Zone in your Region node:
nova availability-zone-list
+-----------------------------+----------------------------------------+
| Name                        | Status                                 |
+-----------------------------+----------------------------------------+
| internal                    | available                              |
| |- scotest-d20-region1      |                                        |
| | |- nova-cert              | enabled :-) 2014-07-19T02:21:49.041585 |
| | |- nova-conductor         | enabled :-) 2014-07-19T02:21:45.707118 |
| | |- nova-consoleauth       | enabled :-) 2014-07-19T02:21:40.908027 |
| | |- nova-network           | enabled :-) 2014-07-19T02:21:40.510854 |
| | |- nova-scheduler         | enabled :-) 2014-07-19T02:21:46.600970 |
| | |- nova-smartcloud        | enabled :-) 2014-07-19T02:21:44.159949 |
| [email protected]        | available                              |
| |- [email protected]     |                                        |
| | |- nova-compute           | enabled :-) 2014-07-19T02:21:44.171048 |
+-----------------------------+----------------------------------------+
Zone that manages your virtual machines (these virtual machines that are
under the cluster cluster1 in vCenter).
Note: There might be more than one Availability Zone that manages virtual
machines in your Region. Each of them is in the format
cluster_name@vCenter_ip. Each Availability Zone cluster_name@vCenter_ip
manages the virtual machines that are under cluster_name in vCenter.
You can check this in vCenter.
2. Create new configuration files for the Availability Zones:
You can create new configuration files by copying your /etc/nova/nova.config.
Take the result of Step 1 as an example:
cp /etc/nova/nova.config /etc/nova/nova_cluster1_192.0.2.11.conf
Note: You can find the value of node by running the nova hypervisor-list
command. The item that contains the cluster name is the one that you want.
c. Migrate the template:
Go to DB2 server and create the file:
/home/db2inst1/vmware_template_data_migration.sql:
Connect to OpenStack:
export to /home/db2inst1/template_data_migration-20140718134138.backup of DEL \
lobfile template_data_migration-20140718134138.lob modified by lobsinfile \
select * from gle20918.IMAGE_PROPERTIES
merge into gle20918."IMAGE_PROPERTIES" prop \
using (select id as image_id, template_name as name_key, name as name_value, deleted \
from (select distinct id, name, deleted from gle20918."IMAGES" \
where status in ('active', 'deleted'))) image \
on prop.image_id = image.image_id and prop.name = image.name_key \
when not matched \
then insert (image_id, name, "value", created_at, deleted) values \
(image.image_id, image.name_key, image.name_value, current timestamp, image.deleted)
Post-upgrade configuration
After you upgrade to IBM Cloud Orchestrator V2.4, complete any necessary
post-upgrade configuration steps.
After the upgrade, VMware discovery is not started and is not configured
automatically. If you would like to use VMware discovery, you must configure and
start it as described in Configuring vmware-discovery on page 114 and
Configuring vmware-discovery for multiple vCenters on page 116.
After the upgrade, disable the SSLv3 protocol as described in Disabling SSLv3
protocol in deployed instances on page 180.
To identify other necessary post-upgrade configuration tasks, and for information
about how to configure V2.4 features, review the topics in the Post-installation
tasks on page 72 section.
Procedure
1. Log in to the system by using the login credentials required for installing on a
Linux platform.
2. Enter one of the following commands:
v ./sccm_install.sh
v ./sccm_install.sh sccm_install.properties
Note: The sccm_install.properties file can be modified as required but must
match the parameters used in the 2.1.0.3 install.
3. Follow the directions presented on the screen to complete the installation.
4. Launch the browser: https://<host>:<port>/Blaze/Console. For example,
https://<servername>:9443/Blaze/Console.
5. Once installation is successfully completed, run the automated post-installation
configuration to ensure that SmartCloud Cost Management is configured to
work with IBM Cloud Orchestrator 2.4.
Note: Metering and billing will not work correctly in an upgraded IBM Cloud
Orchestrator 2.4 environment unless they are configured to do so.
Note: If you have modified job files that refer to the 2.1.0.3 default Database
datasource, after running the automated post-installation configuration process,
you must change the name of the datasource in these job files to the new
datasource name used in 2.1.0.4. The name of the default Database datasource
in 2.1.0.3 is sco_db2 and in 2.1.0.4 is ico_db2.
Procedure
1. Run the following command to get a list of existing hypervisors and record the
related hypervisor IDs, because the original hypervisor entries are deleted by
nova-compute after the manual configuration and after the nova-compute
service is restarted.
nova hypervisor-list
+-----+----------------------------+
| ID  | Hypervisor hostname        |
+-----+----------------------------+
| 101 | domain-c27(Cluster1)       |
| 102 | resgroup-87(ResourcePool1) |
+-----+----------------------------+
to
cluster_name = <the cluster name under which this resource pool is>
If the resource pool is under an ESXi host, perform the following steps:
a. Under the cluster_name attribute, add the following attribute:
resource_pool = <the ESXi host name under which this resource pool is>:<resource pool name>
4. Restart the nova services related to the cluster or resource pool by running the
following commands, for example:
/etc/init.d/openstack-nova-compute-Cluster1 restart
/etc/init.d/openstack-nova-network-Cluster1 restart
/etc/init.d/openstack-nova-compute-ResourcePool1 restart
/etc/init.d/openstack-nova-network-ResourcePool1 restart
Check the related service status until the status is :-) by running the following
command:
nova availability-zone-list
+-------------------------------------+----------------------------------------+
| Name                                | Status                                 |
+-------------------------------------+----------------------------------------+
| ResourcePool1@vcell10-443           | available                              |
| |- ResourcePool1@vcell10-443        |                                        |
| | |- nova-compute                   | enabled :-) 2014-08-29T10:36:25.947820 |
| internal                            | available                              |
| |- ResourcePool1@vcell10-443        |                                        |
| | |- nova-network                   | enabled :-) 2014-08-29T10:36:21.598537 |
| |- Cluster1@vcell10-443             |                                        |
| | |- nova-network                   | enabled :-) 2014-08-29T10:36:25.207140 |
| |- test-srv05                       |                                        |
| | |- nova-cert                      | enabled :-) 2014-08-29T10:36:30.183126 |
| | |- nova-conductor                 | enabled :-) 2014-08-29T10:36:23.328309 |
| | |- nova-consoleauth               | enabled :-) 2014-08-29T10:36:22.339324 |
| | |- nova-network                   | enabled XXX 2014-08-29T02:39:00.199581 |
| | |- nova-scheduler                 | enabled :-) 2014-08-29T10:36:27.273330 |
| | |- nova-smartcloud                | enabled XXX 2014-08-28T02:21:03.956346 |
| Cluster1@vcell10-443                | available                              |
| |- Cluster1@vcell10-443             |                                        |
| | |- nova-compute                   | enabled :-) 2014-08-29T10:36:29.350317 |
| nova                                | available                              |
| |- test-srv05                       |                                        |
| | |- nova-compute                   | enabled XXX 2014-08-29T02:38:53.724831 |
+-------------------------------------+----------------------------------------+
You can see the new hypervisor by using the nova hypervisor-list command.
In the output the hypervisor_name is the same as the resource pool specified in
the nova configuration file. For example:
| 178 | Cluster1:ResourcePool1 |
5. To ensure that the hypervisor ID is not changed after the manual configuration,
you must run the change_compute_id.sh script on the VMware region node.
For example, if your original hypervisor ID is 102:
| 102 | resgroup-87(ResourcePool1) |
after the manual configuration, 102 is marked as deleted in the database. You can
only see the new hypervisor 178 by using the nova hypervisor-list command:
| 178 | Cluster1:ResourcePool1 |
Note: The original hypervisor 102 is not shown by the nova
hypervisor-list command after the manual configuration. You must record
this ID before starting this procedure.
Log on to your VMware region node, and run the following commands:
cd /opt/ibm/orchestrator/hypervisor_tool
./change_compute_id.sh --org_id 178 --new_id 102
3. If the upgrade does not complete successfully, review the following log files:
v /var/log/cloud-deployer/deployer_bootstrap.log
v /var/log/cloud-deployer/deploy.log
Take the appropriate action as indicated in the log files, and repeat step 2.
Upgrading the deployed IBM Cloud Orchestrator environment
1. Log on to Central Server 1.
2. Stop all services by running the following command:
/opt/ibm/orchestrator/scorchestrator/SCOrchestrator.py stop
Example output:
+--------------------------------------+---------------+----------+--------------------------------------+----------------------------------+----------------------------------+
| id                                   | name          | status   | pjob_id                              | created_at                       | updated_at                       |
+--------------------------------------+---------------+----------+--------------------------------------+----------------------------------+----------------------------------+
| 8e362f0d-ed86-4bfc-8d69-46c22fb276d9 | central-61719 | FINISHED |                                      | 2014-10-16 09:57:48.558388-04:00 | 2014-10-16 13:50:36.952918-04:00 |
| 5cc5ecbe-64a7-4ecd-a025-3c701bbeaa22 | vmware-61719  | FINISHED | 8e362f0d-ed86-4bfc-8d69-46c22fb276d9 | 2014-10-16 13:51:17.893313-04:00 | 2014-10-16 14:23:21.450570-04:00 |
| fdf06d31-3cea-4d97-8dfd-8ce566a2d5c4 | kvm-61719     | FINISHED | 8e362f0d-ed86-4bfc-8d69-46c22fb276d9 | 2014-10-16 14:01:56.507769-04:00 | 2014-10-16 14:36:00.528696-04:00 |
+--------------------------------------+---------------+----------+--------------------------------------+----------------------------------+----------------------------------+
where
myICOadminPassword
The password for the default Cloud Administrator user (admin). The
admin user is used for internal connectivity across all of the IBM
Cloud Orchestrator components.
myDB2inst1Password
The password for the IBM DB2 database user (db2inst1).
myBPMadminPassword
The password for the admin user for Business Process Manager
(bpm_admin).
Note: In accordance with security best practices, it is assumed that the default
passwords were changed after the original installation. Therefore, to update
the deployment environment and to avoid resetting the passwords to the
original installation values, you must specify the current passwords in the ds
job-update command. During installation or upgrade, you must specify
passwords that do not contain any special characters.
Important: Do not run any other ds job-update command until the Central
Servers job is finished.
7. Check the status of the upgrade job by running the ds job-list command.
When the status of the Central Servers upgrade job is UPDATE_FINISHED,
proceed to the next step. If the upgrade job does not complete successfully,
review the log files in the /var/log/ds directory and take the appropriate
action.
8. Identify the job ID for each Region Server deployment by running the ds
job-list command.
In this example, the Region Server job IDs are 5cc5ecbe-64a7-4ecd-a025-3c701bbeaa22 and fdf06d31-3cea-4d97-8dfd-8ce566a2d5c4.
9. Upgrade the Region Servers by running the following command for each
Region Server:
ds job-update Region_Server_Job_ID
Example:
ds job-update 5cc5ecbe-64a7-4ecd-a025-3c701bbeaa22
ds job-update fdf06d31-3cea-4d97-8dfd-8ce566a2d5c4
Server             Component                        Configuration files
Central Server 1   Ceilometer                       /etc/ceilometer/ceilometer.conf
Central Server 2   Keystone                         /etc/keystone/keystone.conf
                                                    /etc/keystone/keystone-paste.ini
                                                    /etc/keystone/policy.json
Region Server      Nova                             /etc/nova/nova.conf
                   Glance                           /etc/glance/glance-registry.conf
                                                    /etc/glance/glance-api.conf
                   Heat                             /etc/heat/heat.conf
                   Cinder                           /etc/cinder/cinder.conf
                   VMware discovery (VMware only)   /etc/vmware-discovery.conf
Neutron Server     Neutron                          /etc/neutron/neutron.conf
For example:
diff /var/chef/backup/etc/nova/nova.conf.chef-20141112145321.465905 /etc/nova/nova.conf
What to do next
Complete the post-upgrade configuration steps, as described in Configuring IBM
Cloud Orchestrator after upgrading from V2.4 on page 176.
2. Wait until the System Automation Application Manager user interface shows
that the IBM Cloud Orchestrator management stack can be stopped. You can
also verify it by running the following commands on the System Automation
Application Manager virtual machine:
eezcs -D SCOsaam -c lsres -Ab -r "Cloud Orchestrator" |
grep ObservedState | cut -d = -f 2 | tr -d
4. Manually start the external DB2 again by running service db2ctrl start on
the server where the external DB2 runs.
5. Verify that the Deployment Service environment works correctly by running the
following command on the Deployment Service virtual machine:
source /root/keystonerc; ds job-list
6. Verify that all deployments that you want to update are in the FINISHED state.
Procedure
1. Upgrade the Deployment Service and the management stack topologies by
following the procedure described at Upgrading from IBM Cloud Orchestrator
V2.4 on distributed deployment on page 171.
2. Upgrade the IBM Cloud Orchestrator management stack.
Upgrade the System Automation Application Manager deployment. To do this,
select the ID of the deployment job. You can display the deployment jobs by
running the following command:
[root@cil021029129 ~]# ds job-list
+--------------------------------------+---------------+----------+
| id                                   | name          | status   |
+--------------------------------------+---------------+----------+
| 8e362f0d-ed86-4bfc-8d69-46c22fb276d9 | central-61719 | FINISHED |
| baf4892b-daa1-48b5-bf96-820c9f0b0725 | saam-61719    | FINISHED |
| 5cc5ecbe-64a7-4ecd-a025-3c701bbeaa22 | vmware-61719  | FINISHED |
| fdf06d31-3cea-4d97-8dfd-8ce566a2d5c4 | kvm-61719     | FINISHED |
+--------------------------------------+---------------+----------+
+--------------------------------------+----------------------------------+
| pjob_id                              | created_at                       |
+--------------------------------------+----------------------------------+
|                                      | 2014-10-16 09:57:48.558388-04:00 |
|                                      | 2014-10-16 09:56:54.539352-04:00 |
| 8e362f0d-ed86-4bfc-8d69-46c22fb276d9 | 2014-10-16 13:51:17.893313-04:00 |
What to do next
Post upgrade activities:
1. Configure the external database as described in Configuring the external
database server for the high-availability topology on page 119.
2. Configure System Automation Application Manager again as described in
Configuring System Automation Application Manager on page 122.
3. Verify that you can manage the IBM Cloud Orchestrator management stack
correctly.
4. To start the IBM Cloud Orchestrator management stack again, cancel the stop
request you issued before the upgrade by running the following command:
eezcs -D SCOsaam -c resreq -o cancel "Cloud Orchestrator"
7. Click OK.
8. From the Actions menu on the left, click Edit Resource Type Tags.
9. For each of the following resource types, remove the specified tags by clicking
the delete icon and then clicking OK:
Table 15.
Resource type                 Tags to remove
Offering, Action, Category    all
Renaming categories
In the Self Service Catalog in V2.4.0.2, some categories were renamed. If you want
to rename these categories in your upgraded system, complete the following steps:
1. Log in to the Self-service user interface as a Cloud Administrator.
2. Click CONFIGURATION > Self-Service Catalog.
3. Click the Categories tab.
4. Select the Create resources for cloud services category.
5. From the Actions menu on the left, click Edit Category.
6. Edit the category as follows:
a. Change the name to Manage the lifecycle of cloud services.
b. Change the description to A set of offerings to manage the lifecycle
of cloud services such as virtual servers, patterns and stacks.
c. From the Icon list, select Configuration Category Icon.
d. Click OK.
7. Select the Infrastructure management category.
8. From the Actions menu on the left, click Edit Category.
9. Change the name to Infrastructure management services and click OK.
10. Select the Deploy cloud services category.
11. From the Actions menu on the left, click Edit Category.
12. Change the description to A set of offerings to deploy cloud services
such as virtual machines and resource patterns and click OK.
Offering                        Category in V2.4.0
Unregister keys                 Infrastructure management services
Decommission a cloud service    Infrastructure management services
If you want to move the offerings, perform the following steps for each of the
specified offerings:
1.
2.
3.
4.
5.
6. Change the category to the target category as specified in the previous table
and click OK.
and before the upgrade the hypervisor was working correctly, click Maintain
and then click Start to enable the hypervisor.
After this step, the hypervisor status should be Started and the cloud group
status should be Connected.
What to do next
Disable the SSLv3 protocol as described in Disabling SSLv3 protocol in deployed
instances.
v For new classic virtual system deployments, use the new patterns that are built
on the new operating system images.
Updating existing virtual application, shared services, and virtual system
deployed instances
To disable SSLv3 protocol in virtual application, shared services, and virtual
system deployed instances, complete the following steps.
Important: You must do these steps in the specified order or the fix will not be
successful.
1. Upgrade the pattern type to the following version:
Foundation-ptype Version 2.1.0.3
3.
4.
5.
6.
7.
Example:
deployer.virtualapplications.get("d-4e04e98d-8c3d-4bec-bab1-ca033419ffc3").virtualmachines[0].check_compliance()
8. Deploy a new caching service instance by using the new version of the caching
service plug-in, as follows:
a. Click Manage > Operations > CachingMaster, Grid Administration >
Create grid to go to the new caching service instance.
b. Create a session grid with a dedicated user name and password and a
proper grid cap for the web application.
c. Use a Secure Shell (SSH) connection to connect to the virtual machine where
WebSphere Application Server is running.
d. Run the following command on one line:
/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin/wsadmin.sh
-lang jython
e. Run the following command on one line to configure the web application's
session management with the new caching service IP, user, password, and
grid name. The following command uses the example of configuring the
HttpSessionSample.war web application.
AdminApp.edit('HttpSessionSample.war', '[-SessionManagement
[[true XC10SessionManagement "' + caching_service_instance_ip
+ ':!:' + user_name + ':!:' + password + ':!:' + grid_name ]]])
For virtual machines running JRE6 (classic virtual system instances), the
version is updated to SR16 FP3. For virtual machines running JRE7 (shared
service, virtual applications, and virtual system instances), the version is
updated to SR8 FP10.
c. Run the following command:
openssl s_client -connect vm_ip:9999 -ssl3
Uninstalling
To remove a KVM compute node, or to remove a region, complete these steps.
Procedure
1. List the services on all the compute nodes. On the Region Server, run:
. ~/openrc && nova service-list
+------------------+--------------+----------+---------+-------+----------------------------+
| Binary           | Host         | Zone     | Status  | State | Updated_at                 |
+------------------+--------------+----------+---------+-------+----------------------------+
| nova-cells       | rs-135-3     | internal | enabled | up    | 2013-10-09T06:47:41.699503 |
| nova-cert        | rs-135-3     | internal | enabled | up    | 2013-10-09T06:47:48.464156 |
| nova-compute     | rs-5-blade-6 | nova     | enabled | up    | 2013-10-09T06:47:42.402192 |
| nova-conductor   | rs-135-3     | internal | enabled | up    | 2013-10-09T06:47:49.713611 |
| nova-network     | rs-5-blade-6 | internal | enabled | up    | 2013-10-09T02:47:32.317892 |
| nova-consoleauth | rs-135-3     | internal | enabled | up    | 2013-10-09T06:47:49.928091 |
| nova-scheduler   | rs-135-3     | internal | enabled | up    | 2013-10-09T06:47:49.929776 |
+------------------+--------------+----------+---------+-------+----------------------------+
2. Disable the nova-compute and nova-network services on the compute node that
you want to remove. On the Region Server, run: nova-manage service disable
--host=<host> --service=<service>.
For example, if you want to remove node rs-5-blade-6, run the following
commands:
nova-manage service disable --host=rs-5-blade-6 --service=nova-network
nova-manage service disable --host=rs-5-blade-6 --service=nova-compute
3. If you want to remove the compute node completely, you must manually
remove it from the database. Create a file drop_node.sql with the following
contents:
connect to openstac user <dbuser> using <dbpassword>
delete from compute_node_stats where compute_node_id in
(select id from compute_nodes
where hypervisor_hostname='rs-5-blade-6')
delete from compute_nodes where hypervisor_hostname='rs-5-blade-6'
delete from services where host='rs-5-blade-6'
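How to run this file is not shown in this excerpt. As a sketch only, assuming the statements in drop_node.sql are terminated with semicolons, you might execute it with the DB2 command line processor as the instance owner (the file path is a placeholder):
su - db2inst1 -c "db2 -tvf /path/to/drop_node.sql"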
What to do next
The availability zone will be removed from the OpenStack region after all the
compute nodes that used this availability zone are removed. You can run the
command nova availability-zone-list to find the compute nodes that use the
availability zone.
Removing a region
To remove a region from the IBM Cloud Orchestrator environment, perform the
following steps.
Procedure
1. The IBM Cloud Orchestrator administrator must delete all the instances on the
region:
v Log on to the Self-service user interface and click ASSIGNED RESOURCES.
v Select All Regions or the region that you want to delete. The virtual
machines listed are only the ones related to the region. Delete all of them.
2. On Central Server 2, update the Keystone entries that relate to the region that
you want to remove, for example, RegionOne:
source /root/keystonerc
keystone endpoint-list | grep RegionOne | cut -d '|' -f2 | while read endpoint;
do keystone endpoint-delete $endpoint;
done
b. Select the cloud group for the region you are deleting and click Discover.
c. Select each cloud group related to the region and click Delete.
v Delete the region IP group:
a. Click PATTERNS > Deployer Configuration > IP Groups.
b. Select the required IP group and click Delete.
4. Remove the region environment from the IBM Cloud Orchestrator tool:
a. Go to the /opt/ibm/orchestrator/scorchestrator directory on Central
Server 1.
b. Remove the SCOEnvironment_<removed_region>.xml file.
c. Run the SCOEnvironment_update.py script.
d. Check that all the node information of the removed region is not present
anymore in the SCOEnvironment_fulltopo.xml file.
e. Run the SCOrchestrator.py --status command to verify that the removed
region no longer appears in the status output.
Troubleshooting installation
Learn how to troubleshoot installation problems.
To troubleshoot problems that occurred during the installation procedure, you can
perform the following actions:
v Check the log files.
For the Deployment Service, the log files are located in the /var/log/cloud-deployer directory.
To do the problem determination for the ds command, refer to the log files
located in the /var/log/ds directory.
ds-api.log
This log is for the ds-api service.
ds-engine.log
This log is for the ds-engine service.
ds-engine-<job_id>.log
This log is generated when running the ds job-execute command. The
knife command outputs are recorded in this log file for each job.
If any problem occurs when running the chef bootstrap command, check the
/var/chef/cache/chef-stacktrace.out log file in the target system.
v Check error details for job execution
Use the ds job-show <JOB_ID> command to show the error details in the fault
section. For example:
ds job-show aa00eb97-5d83-4acb-9589-dc8b654a6d91
+------------+----------------------------------------------------------------------------+
| Property   | Value                                                                      |
+------------+----------------------------------------------------------------------------+
| created_at | 2014-03-27T05:47:38.271874                                                 |
| deleted_at | None                                                                       |
| fault      | {                                                                          |
|            |   "message": "Failed to execute task from compose-topo-roles : create      |
|            |               chef roles for deployment",                                  |
|            |   "exception": [                                                           |
|            |     "Traceback (most recent call last):\n",                                |
|            |     " File \"/usr/lib/python2.6/site-packages/ds/engine/deploy/            |
|            |       task_runner.py\", line 70, in execute\n                              |
|            |       res = task.execute(self.context, logger)\n",                         |
|            |     " File \"/usr/lib/python2.6/site-packages/ds/engine/deploy/common/     |
|            |       compose_topo_roles.py\", line 77, in execute\n                       |
|            |       res_name, resource)\n",                                              |
|            |     " File \"/usr/lib/python2.6/site-packages/ds/engine/deploy/            |
|            |       resource_handler/node_handler.py\", line 14, in parse\n              |
|            |       usr = res[\"Properties\"][\"User\"]\n",                              |
|            |     "KeyError: User\n"                                                     |
|            |   ]                                                                        |
|            | }                                                                          |
| heat_meta  | {                                                                          |
|            |   "params": null,                                                          |
|            |   "template": {                                                            |
|            |     "AWSTemplateFormatVersion": "2010-09-09",                              |
|            |     "Outputs": {                                                           |
|            |       "ORCHESTRATION_DB_ADDR": {                                           |
|            |         "Value": {                                                         |
|            |           "Fn::GetAtt": [                                                  |
|            |             "control",                                                     |
|            |             "PublicIp"                                                     |
The db2prereqcheck utility failed to find the following 32-bit library file:
"libstdc++.so.6".
The db2prereqcheck utility failed to find the following 32-bit library file:
"/lib/libpam.so*".
The db2prereqcheck utility failed to find the following 32-bit library file:
"/lib/libpam.so*".
v Failed to install IBM Cloud Orchestrator because the node time is not
synchronized with the Deployment Service node.
The Deployment Service synchronizes the time with the nodes that are used to
deploy IBM Cloud Orchestrator; if the time differs, it uses ntp to synchronize it.
However, if the ntp service is stopped on the Deployment Service node, or the
network is not accessible, the synchronization fails and the installation fails as
well. You see error messages like the following in the
/var/log/ds/ds-engine-<job-id>.log file:
14 Jul 21:18:47 ntpdate[3645]: no server suitable for synchronization found
Starting ntpd:                                                [  OK  ]
iptables: Saving firewall rules to /etc/sysconfig/iptables:   [  OK  ]
ip6tables: Saving firewall rules to /etc/sysconfig/ip6tables: [  OK  ]
Starting Chef Client, version 11.6.2
Creating a new client identity for central_server_1-zpvlv3x6m4v6 using the validator key.
================================================================================
Chef encountered an error attempting to create the client "central_server_1-zpvlv3x6m4v6"
================================================================================
You can manually run ntpdate to verify whether the time service on the
deployment service node is wrong by performing the following steps:
1. Use ssh to log on to the failed node.
2. Check the /etc/ntp.conf file. Find the last line, which looks like "server
<deployment service node ip> burst iburst prefer".
3. Stop ntpd if it is running:
/etc/init.d/ntpd stop
4. Run ntpdate against the deployment service node IP address. If it fails, check
that ntpd is started on the deployment service node, that the network is
accessible, and that no iptables rule is blocking UDP port 123.
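For example, a minimal sketch of this manual check (the IP address is a placeholder for your deployment service node):
ntpdate 192.0.2.100            # should report an offset and adjust the clock
/etc/init.d/ntpd start         # restart ntpd when the check is finished
iptables -L -n | grep 123      # confirm that UDP port 123 is not blocked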
v IBM Cloud Orchestrator deployment requires the domain name used in
cookies as one of the parameters of the deployment.
Cookies that are used to implement user interface security features can be set
only for domain names that are not top-level domain names and adhere to the
public suffix list.
To resolve this problem, provide a domain name matching the requirements to
be used in cookies as one of the parameters of the deployment.
v During deployment service installation, you break the installation manually
or power off the deployment service node.
When you try to reinstall the deployment service, the following errors occur
in the deploy.log file:
[2014-06-02T22:28:22.632128 #2220] INFO -- sc-deploy:
exec_cmd.rb::`run_cmd_log::#<Thread:0x007ff7d9843db0> 2014-06-02 22:28:11.606 6982 TRACE
keystone ProgrammingError: (ProgrammingError)
ibm_db_dbi::ProgrammingError: Statement Execute Failed: [IBM][CLI Driver][DB2/LINUXX8664] SQL0612N
"DOMAIN_ID" is a duplicate name. SQLSTATE=42711 SQLCODE=-612
\nALTER TABLE "user" ADD COLUMN domain_id VARCHAR(64) ()
...
[2014-06-02T22:28:22.689079 #2220] ERROR -- sc-deploy:
install_task.rb::`log_error::#<Thread:0x007ff7d9843db0> Failed to run command chef-client -N
deployment-service -o role[deployment-service] -E deployment-service -l debug
Resolution:
You must reinstall the operating system of the deployment service node, or
restore a snapshot of it, and then reinstall the deployment service.
v Deployment service installation failed with a user-provided repository:
Scenario:
The deployment service installation failed, and information like the following is
found in /var/log/ds/deployer.log:
I, [2014-08-11T12:14:59.009900 #12830] INFO -- sc-deploy: exec_cmd.rb::`run_cmd_log::#<Thread:0x007f99a9d5cbc8>
[2014-08-11T12:14:59-04:00] DEBUG: Executing yum -d0 -e0 -y install openstack-nova-compute-2014.1.2-201407291735.ibm.el6.165
I, [2014-08-11T12:15:04.846423 #12830] INFO -- sc-deploy: exec_cmd.rb::`run_cmd_log::#<Thread:0x007f99a9d5cbc8>
[2014-08-11T12:15:04-04:00] DEBUG: ---- Begin output of yum -d0 -e0 -y install openstack-nova-compute-2014.1.2-201407291735.ibm.el6.165 ----
[2014-08-11T12:15:04-04:00] DEBUG: STDOUT: You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
[2014-08-11T12:15:04-04:00] DEBUG: STDERR: Error: Multilib version problems found. This often means that the root
cause is something else and multilib version checking is just
pointing out that there is a problem. For example:
1. You have an upgrade for gnutls which is missing some
   dependency that another package requires. Yum is trying to
   solve this by installing an older version of gnutls of the
   different architecture. If you exclude the bad architecture
   yum will tell you what the root cause is (which package
   requires what). You can try redoing the upgrade with
   --exclude gnutls.otherarch ... this should give you an error
   message showing the root cause of the problem.
2. You have multiple architectures of gnutls installed, but
   yum can only see an upgrade for one of those architectures.
   If you don't want/need both architectures anymore then you
   can remove the one with the missing update and everything
   will work.
3. You have duplicate versions of gnutls installed already.
Reason:
IBM DB2 is stopped.
Solution:
Start IBM DB2.
Example:
# su - db2inst1
$ db2stop
11/20/2014 02:49:33     0   0   SQL1032N  No start database manager command was issued.
SQL1032N  No start database manager command was issued.  SQLSTATE=57019
$ db2start
11/20/2014 02:49:46     0   0   SQL1063N  DB2START processing was successful.
SQL1063N  DB2START processing was successful.
$ exit
logout
# ./create_dbs.sh central
Creating databases: 5/8
Creating databases: 6/8
Reason:
If you provided customized parameters when deploying the all-in-one topology of
IBM Cloud Orchestrator, the parameters are passed directly to OpenStack. If the
parameters are not valid, the network creation will fail. The following error is
displayed in the /var/log/ds/ds-engine-<your job id>.log file:
[2014-08-28 04:13:46,467] 208.43.88.157 Mixlib::ShellOut::ShellCommandFailed
[2014-08-28 04:13:46,467] 208.43.88.157 ------------------------------------
[2014-08-28 04:13:46,468] 208.43.88.157 Expected process to exit with [0], but received 1
[2014-08-28 04:13:46,468] 208.43.88.157 ---- Begin output of nova-manage network create
--label=VM Network --fixed_range_v4=10.32.17.224/29 --num_networks=1 --network_size=16
--bridge=br4096 --dns1= --dns2= --bridge_interface=eth1 --vlan=100 ----
[2014-08-28 04:13:46,469] 208.43.88.157 STDOUT:
[2014-08-28 04:13:46,469] 208.43.88.157 STDERR: 2014-08-28 04:13:46.353 23527 CRITICAL nova
[-] UnicodeError: Message objects do not support str() because they may contain non-ascii
characters. Please use unicode() or translate() instead.
[2014-08-28 04:13:46,470] 208.43.88.157 ---- End output of nova-manage network create
--label=VM Network --fixed_range_v4=10.32.17.224/29 --num_networks=1 --network_size=16
--bridge=br4096 --dns1= --dns2= --bridge_interface=eth1 --vlan=100 ----
[2014-08-28 04:13:46,471] 208.43.88.157 Ran nova-manage network create --label=VM Network
--fixed_range_v4=10.32.17.224/29 --num_networks=1 --network_size=16 --bridge=br4096 --dns1=
--dns2= --bridge_interface=eth1 --vlan=100 returned 1
Solution:
Run the nova-manage command again on the target node and get the real error
messages:
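For example, a sketch of rerunning the command that appears in the log excerpt above (quote the network label because it contains a space, and supply valid values for any parameters that were left empty, such as --dns1 and --dns2):
nova-manage network create --label="VM Network" --fixed_range_v4=10.32.17.224/29 \
--num_networks=1 --network_size=16 --bridge=br4096 --bridge_interface=eth1 --vlan=100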
v Log on to the command line of a cluster virtual machine and run the lssam
command. The following sample output is for the Central Server 2 cluster:
Online IBM.ResourceGroup:central-services-2-rg Nominal=Online
|- Online IBM.Application:bpm
|- Online IBM.Application:bpm:cil021029163
- Online IBM.Application:bpm:cil021029164
|- Online IBM.Application:haproxy
|- Online IBM.Application:haproxy:cil021029163
- Offline IBM.Application:haproxy:cil021029164
|- Online IBM.Application:ihs
|- Online IBM.Application:ihs:cil021029163
- Offline IBM.Application:ihs:cil021029164
|- Online IBM.Application:keystone
|- Online IBM.Application:keystone:cil021029163
- Online IBM.Application:keystone:cil021029164
|- Online IBM.Application:sar_agent
|- Online IBM.Application:sar_agent:cil021029163
- Online IBM.Application:sar_agent:cil021029164
|- Online IBM.ResourceGroup:ui-rg Nominal=Offline
|- Online IBM.Application:horizon
|- Online IBM.Application:horizon:cil021029163
- Online IBM.Application:horizon:cil021029164
|- Online IBM.Application:pcg
- Online IBM.Application:pcg:cil021029163
- Online IBM.Application:scui
|- Online IBM.Application:scui:cil021029163
- Online IBM.Application:scui:cil021029164
- Online IBM.ServiceIP:cs2-ip
|- Online IBM.ServiceIP:cs2-ip:cil021029163
- Offline IBM.ServiceIP:cs2-ip:cil021029164
Online IBM.Equivalency:cs2-network-equ
|- Online IBM.NetworkInterface:eth0:cil021029163
- Online IBM.NetworkInterface:eth0:cil021029164
If a resource is not in the correct state (either offline or online), check the actual
state of the service:
1. Log on, as a root user, to the virtual machine where the service runs. The host
name of the VM is listed in the GUI or in the lssam output as property of the
resource.
2. Run the following command:
service <service> status
Tip: The service registered in init.d might differ from the name of the
application resource in System Automation for Multiplatforms.
The output should match the state in the lssam command output.
3. If a service is in an error state or in unknown state, try to manually recover the
service by running the following command on the virtual machine where the
service should run:
Important: If the service is an active/standby service, run this command on
only one of the virtual machines of the cluster.
service <service> stop; service <service> start
For some services, you might encounter the following error situations:
v <name of service> dead but subsys locked: Manually start the service again.
This recovers the service and correctly sets the status to started.
v <name of service> dead but pid file exists: Manually stop the service again.
This deletes the pid file and correctly sets the status of the service to
stopped. If this does not help, delete the pid file manually.
4. If the service does not start, check the log files of the service and correct any
problems found there.
5. You might need to suspend the automation to resolve an error condition. In a
normal state, the automation automatically restarts a service even if the service
is stopped manually. To suspend the automation, log on to one of the cluster
virtual machines and run the following command:
samctrl -M t
After the error condition is resolved, start the service and verify the correct
status. If the status is correct, run the following command to resume the
automation:
samctrl -M f
6. If the resource is still in error, run the following command to reset the resource
in System Automation for Multiplatforms:
resetrsrc -s Name == "<service>" IBM.Application
Specify the user and password that you defined during the installation. The
default user is eezadmin.
2. Click the Explore Automation Domains tab. Expand the SCOsaam end-to-end
automation policy.
3. Find the automation domain for the resource that is in error, and select this
domain. The right pane shows the resources of this automation domain.
4. Right-click on the resource that is in error, and click Reset. After some time, the
screen refreshes: if the resource recovered from the error, the resource is
working correctly.
5. To understand the cause of the problem, check the log files. Right-click on the
automation domain, and click View Log.
6. If the resource cannot recover from the error, log on to the server where the
service is running. Check the status of the service in the context of the saamuser
user by running the following command:
su saamuser -c "/opt/IBM/tsamp/eez/scripts/servicectrl.sh <service> status; echo $?"
Using /etc/hosts
If the /etc/hosts file is used for System Automation Application Manager to
resolve the host name of the IBM Cloud Orchestrator servers, you must restart
System Automation Application Manager after each change in the /etc/hosts file
to allow System Automation Application Manager to use the changes.
Specify the user and password that you defined during the installation. The
default user is eezadmin.
2. Click the Explore Automation Domains tab. Expand the SCOsaam end-to-end
automation policy.
3. In the Resources section in the right pane, expand the list of resources and
resource groups.
4. Right-click the resource or resource group for which you want to temporarily
suspend the System Automation Application Manager automation, and click
Reason:
DB2 passwords with special characters might fail as follows:
[2014-12-13 04:22:12,676] 172.16.144.201 [2014-12-13T04:22:12+00:00] ERROR: Exception handlers
complete
[2014-12-13 04:22:12,676] 172.16.144.201 [2014-12-13T04:22:12+00:00] FATAL: Stacktrace dumped
to /var/chef/cache/chef-stacktrace.out
[2014-12-13 04:22:12,676] 172.16.144.201 Chef Client failed. 40 resources updated
in 557.027162241 seconds
[2014-12-13 04:22:12,701] 172.16.144.201 [2014-12-13T04:22:12+00:00] ERROR:
db2_user[create database user] (sco23-upgrade::upgrade-create-central-os-db line 75) had an error:
Mixlib::ShellOut::ShellCommandFailed: execute[grant privilege to user(dash) on database(openstac)]
(/var/chef/cache/cookbooks/db2/providers/user.rb line 43) had an error:
Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received 4
[2014-12-13 04:22:12,701] 172.16.144.201 ---- Begin output of su - db2inst1 -c
"(db2 connect to openstac) && (db2 grant dbadm on database to user dash)
&& (db2 connect reset)" ----
[2014-12-13 04:22:12,702] 172.16.144.201 STDOUT: SQL5035N
to the current release.
Solution:
1. Roll back the system to the state before the upgrade was started. Extract the IBM
Cloud Orchestrator package.
2. Back up <ICO_PACKAGE_PATH>/data/installer/chef-repo/cookbooks/db2upgrade/recipes/db2-upgrade.rb before editing it as follows:
a. Add the following line:
require 'shellwords'
Troubleshooting upgrade
Use these tips to ensure that the upgrade succeeds.
b. If the Central Server upgrade job is finished successfully, but the Region
Server upgrade jobs failed due to incorrect configuration in the upgrade
configuration file, roll back the Region Servers that failed to run the
upgrade jobs.
2. Edit the upgrade configuration file to provide the correct information, as
described in Configuring the upgrade configuration file on page 158.
3. Run the following command to rerun the failed upgrade jobs:
ico-upgrade-tool upgrade -C CENTRAL_SRV1_IP -p CENTRAL_SRV1_ROOT_PASS
Solution
1. Back up the /etc/sysconfig/iptables file.
2. Clean iptables by running the following command:
iptables -F && iptables -t nat -F
service iptables save
3. Restart the nova network service for the special cluster. For example, if your
cluster name is sco-cluster-01, restart the nova network service by running the
following command:
service openstack-nova-network-sco-cluster-01 restart
4. Check the backup file for additional rules that must be added manually.
Solution
Clear your browser cache, and restart your browser.
Solution:
Use the following shell script to fix the problem:
#!/bin/bash
if pgrep tgtd > /dev/null 2>&1; then
  tgtd_ids=$(pgrep tgtd)
  for i in $tgtd_ids; do
    kill -9 $i
  done
fi
service tgtd restart
Installation reference
Advanced users can customize the installation to suit site-specific needs.
Command-line methods
If you do not want to use the GUI method to install, you can use the CLI method
instead.
Port
ServiceAddress
The service IP address for DB2. This parameter is optional.
Note: This parameter is mandatory if you want to move your database
to a different system at a later time. For information about moving the
database, see Move the database to an external system on page 84.
User
The user name to access the node via SSH. The user must be the root user
or must have sudo privilege. This parameter is required.
Password
The password used to access the node. This parameter is optional if the
KeyFile parameter is specified. The password can contain only the
following characters:
a-zA-Z0-9!()-.?[]
KeyFile
The key file location used to access the node. This parameter is optional if
the Password parameter is specified.
To understand the number of nodes that are required in the selected topology, you
can use the ds template-resources-list <template_UUID> command.
For example, if you deploy an environment by using the sco-allinone-kvm
template, you must run the following commands on the Deployment Server:
source keystonerc; ds template-list
...
| b779f01e-95d6-4ca6-a9ec-a37753f66a2b | sco-allinone-kvm
| SCO Core + additional KVM compute node deployment (Existing machines)| ACTIVE
| 2014-03-27T01:30:54.661581 | 2014-03-27T06:58:35.211096 |
...
ds template-resources-list b779f01e-95d6-4ca6-a9ec-a37753f66a2b
+---------+------------------+----------------------------+-----------+
| name    | type             | run_list                   | run_order |
+---------+------------------+----------------------------+-----------+
| control | Existing Machine | role[sco-allinone-kvm-mgm] | 1         |
where sco-core, sco-iwd, and kvm-compute are the node names that you assign to
each node. It is recommended to assign names that are related to the role of the
node.
Some templates might include a Resource Reference type resource, which refers to
an existing resource. For example:
ds template-resources-list fc11c554-24d4-4bba-a891-990aeee1f310
+--------------------+--------------------+--------------------------+-----------+
| name               | type               | run_list                 | run_order |
+--------------------+--------------------+--------------------------+-----------+
| central_server_1   | Existing Machine   | role[central-server-1]   | 1         |
| central_server_2   | Existing Machine   | role[central-server-2]   | 2         |
| central_server_3   | Existing Machine   | role[central-server-3]   | 3         |
| cs1_ico_agents_all | Resource Reference | role[ico_mon_agents_all] | 4         |
| cs2_ico_agent_sar  | Resource Reference | role[ico_mon_agent_sar]  | 5         |
| cs3_ico_agent_sar  | Resource Reference | role[ico_mon_agent_sar]  | 6         |
+--------------------+--------------------+--------------------------+-----------+
You do not need to associate a node with this kind of resource when you create a
job; if you do, an error is reported.
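For illustration only, a sketch of a job-create call for this template that maps nodes only to the Existing Machine resources; the node IDs and the job name are placeholders, the full -P parameter set for this template is not shown, and the Resource Reference entries (cs1_ico_agents_all, cs2_ico_agent_sar, cs3_ico_agent_sar) are deliberately omitted from the -N option:
ds job-create -t fc11c554-24d4-4bba-a891-990aeee1f310 \
-P "MGMNetInterface=eth0" \
-N "central_server_1=<central1_node_id>; \
central_server_2=<central2_node_id>; \
central_server_3=<central3_node_id>" \
my-central-servers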
You can verify that the node registration completed successfully by using the
following command:
ds node-list
+--------------------------------------+-------------+-------------+------+--------+----------------------------+----------------------------+
| id                                   | name        | address     | user | status | created_at                 | updated_at                 |
+--------------------------------------+-------------+-------------+------+--------+----------------------------+----------------------------+
| 9992bd06-0ae2-460b-aedb-0259a8acca70 | sco-core    | 192.0.2.211 | root | FREE   | 2014-04-09T09:35:58.397878 | 2014-04-10T02:44:31.100993 |
| e5b6d881-1aa6-4ac0-824f-d48c296eae23 | kvm-compute | 192.0.2.213 | root | FREE   | 2014-04-09T09:36:16.651048 | 2014-04-10T02:44:31.110808 |
| f80f622c-e7a5-40c2-be17-9c7bd60a0142 | sco-iwd     | 192.0.2.212 | root | FREE   | 2014-04-09T09:36:40.692384 | 2014-04-10T02:44:31.116846 |
+--------------------------------------+-------------+-------------+------+--------+----------------------------+----------------------------+
where
v template_id is the ID of the sco-allinone-kvm template that you can get by
using the ds template command.
v net_interface is the NIC device that must be the same in each node.
v SingleSignOnDomain is the DNS domain name where the manage-from
components are installed.
v sco-core-id, sco-iwd-id, and kvm-compute-id are the IDs that you can get by
using the ds node-list command.
v myallinonekvm is the name you choose to identify the job.
Note: In the Deployment Service commands, use double quotation marks to
specify parameter values. Passwords can contain only the following characters:
a-zA-Z0-9!()-.?[]
Spaces and hyphens are not allowed. Use leading and trailing single quotation
marks (') when special characters are used within a password, as shown in the
following example:
ds node-create -t "IBM::SCO::Node" -p "{Address: 192.0.2.211, Port: 22, User: root,
Password: pass_w0rd, Fqdn: sco-core.example.com }" sco-core
If a problem occurs during job creation and the job status is WARNING or ERROR,
run the ds job-show <job ID> command to inspect the details of the job which
also include the related error message.
2. To perform the installation, execute the deployment job by running the
following command, for example:
ds job-execute myallinonekvm
where myallinonekvm is the name to identify the job that you specified in the
previous step.
3. Use the ds job-list command to check the status of the job.
4. If you do not create a job correctly and you want to delete the job, use the ds
job-delete command.
ds job-delete job_id
PowerVC only
Ensure that the PowerVC server has been installed.
Note: Using a PowerVC server that has been configured for both Shared
Storage Pools and SAN-fabric managed Storage is not recommended. If
possible, use two PowerVC servers, each configured for using a single
Storage Type, and two PowerVC Region Servers (including two PowerVC
Neutron Servers) to manage them.
Note: PowerVC must be at version 1.2.1.2 and you must also apply the
interim fix shipped with IBM Cloud Orchestrator. For more information,
see Applying interim fix 1 to PowerVC 1.2.1.2 on page 50.
z/VM only
You must enable the xCAT version that is provided with z/VM 6.3. To
enable xCAT for z/VM 6.3, follow chapters 1-4 in the Enabling z/VM for
OpenStack (Support for OpenStack Icehouse Release) guide, at
http://www.vm.ibm.com/sysman/openstk.html.
Procedure
1. Check that the prerequisite job to install the Central Servers completed
successfully.
a. Use the ds job-list command to display the job details. The status of the
job should be FINISHED.
Example:
source ~/openrc
ds job-list
Example output:
+--------------------------------------+---------------+----------+---------+----------------------------+----------------------------+
| id                                   | name          | status   | pjob_id | created_at                 | updated_at                 |
+--------------------------------------+---------------+----------+---------+----------------------------+----------------------------+
| 103cbe78-3015-4c7c-b02b-8de9d051ad1d | centralsrvjob | FINISHED |         | 2014-08-13T03:41:46.237133 | 2014-08-13T05:57:25.009566 |
+--------------------------------------+---------------+----------+---------+----------------------------+----------------------------+
where hypervisor is one of the following values: hyperv, kvm, power, vmware,
or zvm.
Example:
ds template-list | grep vmware
Example output:
...
| e9467b25-fc7f-4d0b-ab4d-fb052768a9fc | vmware_region_neutron
| VMware region-server deployment with Neutron network.
...
Example:
ds template-resources-list e9467b25-fc7f-4d0b-ab4d-fb052768a9fc
Example output:
+----------------------+--------------------+-------------------------------------------+-----------+
| name                 | type               | run_list                                  | run_order |
+----------------------+--------------------+-------------------------------------------+-----------+
| vmware_region_server | Existing Machine   | role[vmware-region],role[vmware-compute]  | 1         |
| neutron_network_node | Existing Machine   | role[neutron-network-node]                | 2         |
| vrs_ico_agents_sar   | Resource Reference | role[ico_mon_agent_sar]                   | 3         |
| nnn_ico_agents_sar   | Resource Reference | role[ico_mon_agent_sar]                   | 4         |
+----------------------+--------------------+-------------------------------------------+-----------+
Example output:
+--------------------------------------+---------------+----------------+--------+
| id                                   | name          | type           | status |
+--------------------------------------+---------------+----------------+--------+
| 0be0d025-8637-4ec9-a51d-a8b1065aae72 | vmwareregion  | IBM::SCO::Node | INUSE  |
| bfb2a2cc-dc4f-4c43-ae8f-be07567f3e19 | vmwareneutron | IBM::SCO::Node | INUSE  |
| 5d9f4dd8-683f-43ce-9861-186336414d70 | central3      | IBM::SCO::Node | INUSE  |
| 69926d1c-b57b-453b-9c92-b0bbd202d750 | central1      | IBM::SCO::Node | INUSE  |
| 7dbd222f-d20a-4bd8-8f3e-77fb66067520 | central2      | IBM::SCO::Node | INUSE  |
+--------------------------------------+---------------+----------------+--------+
+--------------------------------------+----------------------+----------------------------+----------------------------+
| job_id                               | resource             | created_at                 | updated_at                 |
+--------------------------------------+----------------------+----------------------------+----------------------------+
| 11dae147-86a0-498e-87d3-46146e687bde | vmware_region_server | 2014-08-13T03:41:41.400463 | 2014-08-13T05:57:37.564439 |
| 11dae147-86a0-498e-87d3-46146e687bde | neutron_network_node | 2014-08-13T03:41:41.993816 | 2014-08-13T05:57:37.552176 |
| 103cbe78-3015-4c7c-b02b-8de9d051ad1d | central_server_3     | 2014-08-13T03:41:38.865586 | 2014-08-13T03:41:46.279254 |
| 103cbe78-3015-4c7c-b02b-8de9d051ad1d | central_server_1     | 2014-08-13T03:41:37.574286 | 2014-08-13T03:41:46.286440 |
| 103cbe78-3015-4c7c-b02b-8de9d051ad1d | central_server_2     | 2014-08-13T03:41:38.233124 | 2014-08-13T03:41:46.269587 |
+--------------------------------------+----------------------+----------------------------+----------------------------+
In this example, the vmwareregion node ID is 0be0d025-8637-4ec9-a51d-a8b1065aae72, and the vmwareneutron node ID is bfb2a2cc-dc4f-4c43-ae8f-be07567f3e19.
5. Create a job to install the nodes that you created in the previous step.
a. Use the ds job-create command.
The parameters that you specify in the ds job-create command are
hypervisor-specific. For more information about the parameters required for
each hypervisor, see Hypervisor-specific information when deploying a
Region Server (CLI method) on page 205.
Example:
ds job-create -t e9467b25-fc7f-4d0b-ab4d-fb052768a9fc \
-P "MGMNetInterface=eth0;VMServerHost=203.0.113.0; \
VMServerPassword=vmserver_password;VMServerUserName=vmserver_user; \
VMDataStoreName=vm_datastore;VMClusterName=vm_cluster" \
-N "vmware_region_server=0be0d025-8637-4ec9-a51d-a8b1065aae72; \
neutron_network_node=bfb2a2cc-dc4f-4c43-ae8f-be07567f3e19" \
-p 103cbe78-3015-4c7c-b02b-8de9d051ad1d \
vmware-region-server-neutron
Example output:
+--------------------------------------+------------------------------+---------+--------------------------------------+----------------------------+------------+
| id                                   | name                         | status  | pjob_id                              | created_at                 | updated_at |
+--------------------------------------+------------------------------+---------+--------------------------------------+----------------------------+------------+
| 11dae147-86a0-498e-87d3-46146e687bde | vmware-region-server-neutron | CREATED | 103cbe78-3015-4c7c-b02b-8de9d051ad1d | 2014-08-13T05:57:37.378651 |            |
+--------------------------------------+------------------------------+---------+--------------------------------------+----------------------------+------------+
If you want to deploy a Region Server with shared database mode (that is,
the Region Server uses the database that is deployed in the Central Server),
you must include the following additional parameters in the -P option of
the ds job-create command:
ComputeDBUsername=compute_database_user
DashboardDBUsername=dashboard_database_user
ImageDBUsername=image_database_user
MeteringDBUsername=metering_database_user
NetworkDBUsername=network_database_user
OrchestrationDBUsername=orchestration_database_user
VolumeDBUsername=volume_database_user
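For illustration only, a sketch of the earlier VMware job-create command with these shared-database parameters appended to the -P option (the database user names shown are placeholders, not product defaults):
ds job-create -t e9467b25-fc7f-4d0b-ab4d-fb052768a9fc \
-P "MGMNetInterface=eth0;VMServerHost=203.0.113.0; \
VMServerPassword=vmserver_password;VMServerUserName=vmserver_user; \
VMDataStoreName=vm_datastore;VMClusterName=vm_cluster; \
ComputeDBUsername=nova_user;DashboardDBUsername=dash_user; \
ImageDBUsername=glance_user;MeteringDBUsername=ceilometer_user; \
NetworkDBUsername=neutron_user;OrchestrationDBUsername=heat_user; \
VolumeDBUsername=cinder_user" \
-N "vmware_region_server=0be0d025-8637-4ec9-a51d-a8b1065aae72; \
neutron_network_node=bfb2a2cc-dc4f-4c43-ae8f-be07567f3e19" \
-p 103cbe78-3015-4c7c-b02b-8de9d051ad1d \
vmware-region-server-neutron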
Example:
ds job-execute 11dae147-86a0-498e-87d3-46146e687bde
Results
The Region Server, and the Neutron Server if specified, is deployed.
Hypervisor-specific information when deploying a Region Server (CLI method):
When you use the command-line interface to deploy a Region Server, some of the
information that you must specify is hypervisor-specific.
Hyper-V
If you use one of the Hyper-V templates, you must create nodes for the Hyper-V
Region Server and the Neutron Server.
Example:
ds node-create -t IBM::SCO::Node \
-p {Address: 192.0.2.254, Port: 22, User: root, Password:password } \
hypervregion
ds node-create -t IBM::SCO::Node
-p {Address: 192.0.2.253, Port: 22, User: root, Password:password } \
hypervneutron
In this example, the hypervregion node (the Hyper-V Region Server) is created for
the hyperv_region_neutron resource, and the hypervneutron node (the Neutron
Server) is created for the neutron_network_node resource.
To create a job to install the Hyper-V Region Server and the Neutron Server, run
the following command:
ds job-create -t template_id \
-P "MGMNetInterface=net_interface" \
-N "hyperv_region_neutron=hyperv_region_node_id; \
neutron_network_node=neutron_network_node_id" \
-p central_server_job_id \
hyperv-region-server-neutron
where
template_id
The template ID can be identified from the output of the ds template-list
command. Find the relevant template name in the name column (for
example, hyperv_region_neutron), and identify the corresponding ID in the
id column.
net_interface
You must specify the same NIC device (for example, eth0) in each node.
hyperv_region_node_id
The Region Server node ID can be identified from the output of the ds
node-list command. Find the relevant node name in the name column (for
example, hypervregion), and identify the corresponding ID in the id
column.
neutron_network_node_id
The Neutron Server node ID can be identified from the output of the ds
node-list command. Find the relevant node name in the name column (for
example, hypervneutron), and identify the corresponding ID in the id
column.
central_server_job_id
The job ID for the Central Servers deployment can be identified from the
output of the ds job-list command. Find the relevant job name in the
name column (for example, sco-central-servers), and identify the
corresponding ID in the id column.
KVM
If you use one of the KVM templates, you must create nodes for the KVM Region
Server, the Neutron Server (if using Neutron network), and the KVM Compute
Node.
Example:
ds node-create -t "IBM::SCO::Node" \
-p "{Address: 192.0.2.219, Port: 22, User: root, Password: password }"
kvm-region
ds node-create -t "IBM::SCO::Node" \
-p "{Address: 192.0.2.220, Port: 22, User: root, Password: password }"
kvm-network-node
ds node-create -t "IBM::SCO::Node" \
-p "{Address: 192.0.2.221, Port: 22, User: root, Password: password }"
kvm-compute
In this example, the kvm-region node (the KVM Region Server) is created for the
kvm_region_neutron resource, the kvm-network-node node (the Neutron Server) is
created for the neutron_network_node resource, and the kvm-compute node (the
KVM Compute Node) is created for the kvm_compute resource.
To create a job to install the KVM Region Server, the Neutron Server, and the KVM
Compute Node, run the following command:
ds job-create -t template_id \
-P "MGMNetInterface=net_interface" \
-N "kvm_region_neutron=kvm_region_id; \
neutron_network_node=kvm_network_node_id; \
kvm_compute=kvm_compute_id" \
-p central_server_job_id \
kvm-region-server-neutron
If you want to add additional compute nodes to your environment, see Scaling
out a deployed environment on page 144.
PowerVC
If you use one of the PowerVC templates, you must create nodes for PowerVC
Region Server and the Neutron Server.
Example:
ds node-create -t IBM::SCO::Node \
-p {Address: 192.0.2.15, Port: 22, User: root, Password:password, Fqdn: PVCRS.69.customer.ibm.com} \
powervc_region_server
ds node-create -t IBM::SCO::Node
-p {Address: 192.0.2.16, Port: 22, User: root, Password:password, Fqdn: PVCNS.69.customer.ibm.com} \
neutron_network_node
where
template_id
The template ID can be identified from the output of the ds template-list
command. Find the relevant template name in the name column (for
example, powervc_region_neutron), and identify the corresponding ID in
the id column.
net_interface
You must specify the same NIC device (for example, eth0) in each node.
PVCServer
This mandatory parameter specifies the location of the PowerVC server.
PVCServerUser
This mandatory parameter specifies the root user of the PowerVC server.
PVCServerPassword
This mandatory parameter specifies the root password for the PowerVC
server.
PVCQpidPassword
This mandatory parameter specifies the qpid password for the PowerVC
server.
To find this password, run the following command on the PowerVC server:
cat /opt/ibm/powervc/data/pw.file
powervc_region_node_id
The Region Server node ID can be identified from the output of the ds
node-list command. Find the relevant node name in the name column (for
example, powervc_region_server), and identify the corresponding ID in the
id column.
neutron_network_node_id
The Neutron Server node ID can be identified from the output of the ds
node-list command. Find the relevant node name in the name column (for
example, neutron_network_node), and identify the corresponding ID in the
id column.
central_server_job_id
The job ID for the Central Servers deployment can be identified from the
output of the ds job-list command. Find the relevant job name in the
name column (for example, sco-central-servers), and identify the
corresponding ID in the id column.
VMware
If you use the VMware templates, you must create nodes for the VMware Region
Server and the Neutron Server (if using Neutron network).
Example:
ds node-create -t IBM::SCO::Node \
 -p "{Address: 192.0.2.254, Port: 22, User: root, Password: password}" \
 vmwareregion
ds node-create -t IBM::SCO::Node \
 -p "{Address: 192.0.2.253, Port: 22, User: root, Password: password}" \
 vmwareneutron
In this example, the vmwareregion node (the VMware Region Server) is created for
the vmware_region_server resource, and the vmwareneutron node (the Neutron
Server) is created for the neutron_network_node resource.
To create a job to install the VMware Region Server and the Neutron Server, run
the following command:
ds job-create -t template_id \
 -P "MGMNetInterface=net_interface;VMServerHost=def_vcenter_ip; \
 VMServerPassword=def_passwd;VMServerUserName=def_user; \
 VMDataStoreName=def_data_store;VMClusterName=def_cluster_name" \
 -N "vmware_region_server=vmware-region-id; \
 neutron_network_node=vmware-network-node-id" \
 -p central-server-job-id \
 vmware-region-server-neutron
where
template_id
The template ID can be identified from the output of the ds template-list
command. Find the relevant template name in the name column (for
example, vmware_region_neutron), and identify the corresponding ID in the
id column.
net_interface
You must specify the same NIC device (for example, eth0) in each node.
VMServerHost
This mandatory parameter specifies the vCenter host IP address.
VMServerUserName
This mandatory parameter specifies the user name that is used to connect
to the vCenter.
VMServerPassword
This mandatory parameter specifies the password for VMServerUserName.
VMDataStoreName
This mandatory parameter specifies the datastore that is used to store the
virtual machine. You can specify a regular expression to match the name of
a datastore. For example, VMDataStoreName="nas.*" selects all datastores
with a name that starts with nas.
VMClusterName
This mandatory parameter specifies the cluster name in vCenter where the
virtual machine is located.
vmware-region-id
The Region Server node ID can be identified from the output of the ds
node-list command. Find the relevant node name in the name column (for
example, vmwareregion), and identify the corresponding ID in the id
column.
vmware-network-node-id
The Neutron Server node ID can be identified from the output of the ds
node-list command. Find the relevant node name in the name column (for
example, vmwareneutron), and identify the corresponding ID in the id
column.
central-server-job-id
The job ID for the Central Servers deployment can be identified from the
output of the ds job-list command. Find the relevant job name in the
name column (for example, sco-central-servers), and identify the
corresponding ID in the id column.
z/VM
If you use one of the z/VM templates, you must create nodes for the z/VM
Region Server and the Neutron Server.
Example:
ds node-create -t IBM::SCO::Node \
 -p "{Address: 192.0.2.12, Port: 22, User: root, Password: passw0rd}" \
 rs-zenv1
ds node-create -t IBM::SCO::Node \
 -p "{Address: 192.0.2.13, Port: 22, User: root, Password: passw0rd}" \
 ns-zenv1
In this example, the rs-zenv1 node (the z/VM Region Server) is created for the
zvm_region_server resource, and the ns-zenv1 node (the Neutron Server) is created
for the neutron_network_node resource.
To create a job to install the z/VM Region Server and the Neutron Server, run the
following command:
ds job-create -t template_id \
-P "MGMNetInterface=net_interface;" \
-N "zvm_region_neutron=zvm_region_node_id; \
neutron_network_node=neutron_network_node_id" \
-p central_server_job_id \
zvm-region-server-neutron
where
template_id
The template ID can be identified from the output of the ds template-list
command. Find the relevant template name in the name column (for
example, zvm_region_neutron), and identify the corresponding ID in the id
column.
net_interface
You must specify the same NIC device (for example, eth0) in each node.
zvm_region_node_id
The Region Server node ID can be identified from the output of the ds
node-list command. Find the relevant node name in the name column (for
example, rs-zenv1), and identify the corresponding ID in the id column.
neutron_network_node_id
The Neutron Server node ID can be identified from the output of the ds
node-list command. Find the relevant node name in the name column (for
example, ns-zenv1), and identify the corresponding ID in the id column.
central_server_job_id
The job ID for the Central Servers deployment can be identified from the
output of the ds job-list command. Find the relevant job name in the
name column (for example, sco-central-servers), and identify the
corresponding ID in the id column.
Example:
ds job-create -t abbd039b-e5ee-44bc-b770-6e334272fb14 \
 -N "zvm_region_server=303a5d4f-f9bf-4ed5-a1f5-5039b5b057c7; \
 neutron_network_node=6f3d9608-7942-49a8-887f-8e638a6161d2" \
 -P "MGMNetInterface=eth1;DataNetInterface=eth1;ExtNetInterface=eth1; \
 XcatServer=192.0.2.66" \
 -p 1e593793-3f48-4782-ace5-3ddd8b37289b job-zvm-server
Note:
v If you do not want to use the default parameter value, use the -P option to set
the parameter value, as shown in the following example:
-P "MGMNetInterface=eth1;DataNetInterface=eth1;ExtNetInterface=eth1"
v Use the -p option to specify the job ID for the parent job that installs the Central
Servers, as shown in the following example:
-p 1e593793-3f48-4782-ace5-3ddd8b37289b
The job ID for the Central Servers deployment can be identified from the output
of the ds job-list command. Find the relevant job name in the name column
(for example, sco-central-servers), and identify the corresponding ID in the id
column.
v For more information about template parameters and how to configure the
parameters, refer to Customizing a z/VM Region Server deployment on page
51.
After you run the ds job-execute command, configure OpenStack as described in
Configuring the non-CMO Node within "Chapter 5. OpenStack Configuration" in the
Enabling z/VM for OpenStack (Support for OpenStack Icehouse Release) guide, at
http://www.vm.ibm.com/sysman/openstk.html.
Note: z/VM Single System Image (SSI) configuration is not supported.
Each deployment template is listed with its role, managed-to hypervisor, network, database location, and comments:

sco-central-servers: Central Servers; hypervisor agnostic; network agnostic; Central Server 1
sco-central-servers-extdb: Central Servers; hypervisor agnostic; network agnostic; external database
hyperv_region-neutron: Hyper-V Region Server; Hyper-V; Neutron; Region Server
hyperv_region-neutron-extdb: Hyper-V Region Server; Hyper-V; Neutron; external database
hyperv_region-neutron-sharedb: Hyper-V Region Server; Hyper-V; Neutron; Central Server 1; Cannot use sco-central-servers-extdb
kvm_region-with-compute: KVM Region Server; KVM; nova-network; Region Server
kvm_region-with-compute-extdb: KVM Region Server; KVM; Neutron; external database
kvm_region-with-compute-neutron: KVM Region Server; KVM; Neutron; Region Server
kvm_region-with-compute-neutron-sharedb: KVM Region Server; KVM; Neutron; Central Server 1; Cannot use sco-central-servers-extdb
kvm_region-with-compute-sharedb: KVM Region Server; KVM; nova-network; Central Server 1; Cannot use sco-central-servers-extdb
powervc_region_neutron: PowerVC Region Server; PowerVC; Neutron; Region Server
powervc_region_neutron-extdb: PowerVC Region Server; PowerVC; Neutron; external database
powervc_region_neutron-sharedb: PowerVC Region Server; PowerVC; Neutron; Central Server 1; Cannot use sco-central-servers-extdb
vmware_region: VMware Region Server; VMware; nova-network; Region Server
vmware_region_neutron: VMware Region Server; VMware; Neutron; Region Server
vmware_region_neutron-extdb: VMware Region Server; VMware; Neutron; external database
vmware_region_neutron-sharedb: VMware Region Server; VMware; Neutron; Central Server 1; Cannot use sco-central-servers-extdb
vmware_region-extdb: VMware Region Server; VMware; nova-network; external database
vmware_region-sharedb: VMware Region Server; VMware; nova-network; Central Server 1; Cannot use sco-central-servers-extdb
zvm_region_neutron: z/VM Region Server; z/VM; Neutron; Region Server
zvm_region_neutron-extdb: z/VM Region Server; z/VM; Neutron; external database
zvm_region_neutron-sharedb: z/VM Region Server; z/VM; Neutron; Central Server 1; Cannot use sco-central-servers-extdb

The template list also includes sco-allinonekvm, sco-allinoneVMware, sco-allinoneextdb, and sco-devstack.
The high-availability templates are listed in the same format (template name: role; managed-to hypervisor; network; database location):

HA-sco-central-servers-extdb: central servers; hypervisor agnostic; network agnostic; external database
HA-vmware_region_neutron-extdb: VMware region server; VMware; Neutron; external database
HA-vmware_region-extdb: VMware region server; VMware; nova-network; external database
HA-kvm_region-with-compute-neutron-extdb: KVM region server; KVM; Neutron; external database
HA-kvm_region-with-compute-extdb: KVM region server; KVM; nova-network; external database
HA-saam: System Automation Application Manager server; hypervisor agnostic; network agnostic; database agnostic
Deployment parameters
Check the list of all deployment parameters that you can configure before
deploying IBM Cloud Orchestrator, and the parameter default values.
Table 21. Deployment parameters (name and default value)

BPMHome: /opt/ibm/BPM/v8.5
CinderStatePath: /var/lib/cinder
CinderVGSize: 100
CinderVolumesDir: /var/lib/cinder/volumes
DATANetInterface: eth0
DB2DataDir: /home/db2inst1
DB2DownloadDir: /tmp
DB2ServiceIPNetmask: 255.255.255.0
DiscoveredFlavorPrefix: Onboarded
DiscoveredVMPrefix: Onboarded
EXTNetInterface: eth0
GlanceFileStore: /var/lib/glance/images
IHSLocalDir: /tmp/ihsinstall
MGMNetInterface: eth0
NetworkManager: nova.network.manager.VlanManager
NovaStatePath: /var/lib/nova
OrchestratorPassword: passw0rd
OSLibvirtType: kvm
OSNetworkType:
PCGDownloadDir: /tmp/pcg
PCGHomeDir: /opt/ibm/pcg
RegionName: RegionOne (all-in-one), RegionCentral (multiple region), RegionKVM (KVM region), RegionVMware (VMware region), RegionZVM (z/VM region), RegionPower (Power region)
SingleSignOnDomain:
TSAAMLocal: /tsa
VMBridge: br4096
VMClusterName:
VMDataStoreName:
VMDns1:
VMDns2 (the secondary DNS used by nova-network): 10.10.0.200
VMGATEWAY: 10.10.0.1
VMNETLABEL: demo
VMInterface: vmnic0
VMIPV4ADDR: 10.10.0.0/24
VMServerHost:
VMServerUserName:
VMServerPassword:
VMStartIP: 10.10.0.100
VMVLANID: 100
VMWsdlLocation (VMware wsdl location; this parameter is only for a VMware Region):
VXLANMcastGroup: 224.0.0.100
VXLANVniRange: 1000:1600000
WorkloadDeployerTmpPath: /tmp/iwd

Quota parameters (name and default value):
QuotaCores: 20
QuotaDriver: nova.quota.DbQuotaDriver
QuotaFixedIps: -1
QuotaFloatingIps: 10
QuotaInjectedFileContentBytes: 10240
QuotaInjectedFilePathBytes: 255
QuotaInjectedFiles:
QuotaInstances: 10
QuotaKeyPairs: 100
QuotaMetadataItems: 128
QuotaRam: 51200
QuotaSecurityGroupRules: 20
QuotaSecurityGroups: 50
v On Central Server 1:
/etc/group
/etc/passwd
/etc/hosts
/etc/sysconfig/iptables
/etc/security/limits.conf
/etc/sysctl.conf
v On Central Server 2:
/etc/group
/etc/passwd
/etc/hosts
/etc/sysconfig/iptables
/etc/security/limits.conf
/etc/sysctl.conf
v On Central Server 3:
/etc/hosts
/etc/sysconfig/iptables
/etc/security/limits.conf
/etc/sysctl.conf
/etc/sudoers
/etc/ntp.conf
v On Region Server:
/etc/group
/etc/passwd
/etc/hosts
/etc/sysconfig/iptables
/etc/security/limits.conf
/etc/sysctl.conf
Chapter 3. Administering
After you have installed IBM Cloud Orchestrator, you can start your environment,
configure optional settings, and define users, projects, and domains as described in
the following sections.
URL                                                            Access granted to
https://central-server-ihs_fqdn
https://central-server-ihs_fqdn:8443                           admin role, domain_admin role, catalogeditor role, member role
https://central-server-ihs_fqdn:8443/ProcessCenter/login.jsp
Each IBM Cloud Orchestrator user interface URL includes the following
specification:
https://central-server-ihs_fqdn
v Do not use the IP address to access the IBM Cloud Orchestrator user interfaces.
v When logging in to the Administration user interface, if you save the password
details in the browser, these details might be loaded unexpectedly in the
Update User dialog for that user. To avoid this behavior, clear the
password details from the browser.
v If you are using the Administration user interface in non-English language, you
might see English strings because of OpenStack limitations.
You can extend the Self-service user interface URL to include the IBM Cloud
Orchestrator domain name, as shown in the following example:
https://central-server-ihs_fqdn:8443/login?domainName=myDomain
In this example, the Domain field on the login screen is prepopulated with the
value myDomain. If you do not specify a domain, the user is authenticated to the
Default domain.
By default, the user is also authenticated to the scope of the primary project that
you specified when you created the user. After you log in, you can change the
project scope by selecting a new project from the project list in the top banner of
the user interface. For more information about users, projects, and domains, see
Managing security on page 252.
Note: In IBM Cloud Orchestrator, the following limitations apply:
v A user name cannot contain a colon (:) character.
v A password cannot contain an at sign (@) character.
v Users cannot log in if the primary project to which they are assigned is disabled.
v You cannot log in to the same IBM Cloud Orchestrator user interface with more
than one browser session on the same machine. If you must log in to the same
IBM Cloud Orchestrator user interface with two browser sessions on the same
machine, use a different browser for each session. For example, use an Internet
Explorer browser and a Firefox browser.
v The Administration user interface login credentials are case-sensitive.
To view the IBM Cloud Orchestrator user interface in another language, set the
language option in your browser. Move the preferred language to the top of the
list, clear the browser cache, and refresh your browser view. For some browser and
operating system combinations, you might need to change the regional settings of
your operating system to the locale and language of your choice.
You must set the locale in Business Process Manager separately. Log into the
Business Process Manager user interface, and click Preferences. Select the locale
from the Locale preferences list, and click Save changes. You might need to log in
again for the changes to take effect.
When you use the SCOrchestrator.py script to start or stop all the IBM Cloud
Orchestrator services, the services are started or stopped in the correct sequence
and all the dependencies are resolved.
The SCOrchestrator.py script uses XML files to obtain the information about the
environment and the components:
v SCOEnvironment.xml
v SCOComponents.xml
v SCOEnvironment_fulltopo.xml
The XML files define the names and the start or stop priority of the IBM Cloud
Orchestrator services.
The SCOEnvironment.xml file is automatically generated by the installation
procedure when the central IBM Cloud Orchestrator servers are installed.
Afterwards, the installation procedure automatically updates the file if a Region
Server is installed. You must manually modify the file only if a Region Server is
removed.
The following sequence occurs when you run SCOrchestrator.py:
1. The script invokes SCOEnvironment_update.py to refresh the region topology
and generate or update the SCOEnvironment_fulltopo.xml file.
2. The script reads the SCOEnvironment_fulltopo.xml file, or the
SCOEnvironment.xml file if the SCOEnvironment_fulltopo.xml file does not exist,
and the SCOComponents.xml file.
3. The script obtains the parameters and options and analyzes them for the
actions to take.
4. For start, stop, and status sequences, scripts are copied to the specific systems
and executed on the remote systems.
5. The scripts are cleaned up from the /tmp directory of the systems.
The script files which are executed on the systems are in the same directory as the
SCOrchestrator.py script.
You can start or stop certain components if you know the exact name of the
component or host name. You can start or stop all modules of a specific virtual
machine by using the host name of the machine. As a result, all components of
that machine are started or stopped.
Note: Because some IBM Cloud Orchestrator services must be started in a given
sequence to work correctly, do not start or stop single services; start or
stop the whole IBM Cloud Orchestrator stack instead. Only experienced administrators
who are aware of the dependencies between the IBM Cloud Orchestrator services
can use the SCOrchestrator.py script to start or stop single services.
Procedure
1. As root, navigate to /opt/ibm/orchestrator/scorchestrator on the Central
Server 1.
2. Run the SCOrchestrator.py script with the following options:
v To start the whole product, run ./SCOrchestrator.py --start.
v To stop the whole product, run ./SCOrchestrator.py --stop.
v To view the status of components, run ./SCOrchestrator.py --status.
v To view help for this script, run ./SCOrchestrator.py --help. The following
help is displayed:
Usage: SCOrchestrator.py [options]
Options:
-h, --help
-s, --start
To view a list of components that can be specified with the -p option, open the
SCOEnvironment_fulltopo.xml file that lists all available component names for
each host of the IBM Cloud Orchestrator environment.
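For example, to act on a single component rather than on the whole stack (the component name below is illustrative and must be taken from the SCOEnvironment_fulltopo.xml file; the exact option syntax is an assumption based on the options described above):
./SCOrchestrator.py --start -p openstack-nova-api
./SCOrchestrator.py --stop -p openstack-nova-api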
Procedure
1. Create a new user with ssh access for all the Central Servers and Region Servers:
a. On each of the Central Servers and Region Servers:
Create a new user <yourmechid> and set the password:
useradd -m <yourmechid>
passwd <yourmechid> #enter at the prompt <yourmechpwd>
b. On Central Server 1:
Generate the ssh keys for <yourmechid> and copy it to all IBM Cloud
Orchestrator servers:
su - <yourmechid> -c "ssh-keygen -q -t rsa -N -f ~/.ssh/id_rsa"
Here $i stands for the IP address of each IBM Cloud Orchestrator server,
including Central Server 1:
[root@cs-1] su <yourmechid>
[yourmechid@cs-1] scp ~/.ssh/id_rsa.pub $i:~/.ssh/authorized_keys
Note: When you run the command above, accept the server key when prompted;
the password that is required is the password of <yourmechid>. A simple loop
over all server addresses is sketched below.
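Assuming the IBM Cloud Orchestrator server addresses are known (the addresses below are placeholders), run the copy in a loop as <yourmechid> on Central Server 1:
for i in 192.0.2.101 192.0.2.102 192.0.2.103 192.0.2.110
do
    # Copies the public key; enter the password of <yourmechid> for each server
    scp ~/.ssh/id_rsa.pub $i:~/.ssh/authorized_keys
done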
c. Verify that <yourmechid> on Central Server 1 can ssh to all the IBM Cloud
Orchestrator servers, including Central Server 1, without being prompted for a password:
su - <yourmechid> -c "ssh <yourmechid>@$SCO_server_ip"
3. On each of the IBM Cloud Orchestrator servers, add the user <yourmechid> in
the sudo list:
a. Create a sudoer file named <yourmechid> and place it in /etc/sudoers.d.
The content of the file <yourmechid> is as follows:
Note: Replace <yourmechid> with your new user name.
# sudoers additional file for /etc/sudoers.d/
# IMPORTANT: This file must have no ~ or . in its name and file permissions
# must be set to 440!!!
# this file is for the SAAM mech-ID to call the SCO control scripts
Defaults:<yourmechid> !requiretty
# scripts found in control script directory
# adapt the directory names to the mech id!
Cmnd_Alias SACTRL = /tmp/*.sh
# allow for
<yourmechid> ALL = (root) NOPASSWD: SACTRL
5. Run SCOrchestrator.py:
Move to the path /home/<yourmechid>/scorchestrator and run
SCOrchestrator.py with user <yourmechid>:
[root@cs-1] cd /home/<yourmechid>/scorchestrator
[root@CS-1 scorchestrator]# su <yourmechid>
[yourmechid@CS-1 scorchestrator]$ ./SCOrchestrator.py
Servers, the components deployed on them, the command to check each service, and the command to start it:

Central Server 1, Central Server 2, Central Server 3:
DB2: su - db2inst1; db2start
openstack-ceilometer-collector: service openstack-ceilometer-collector status / service openstack-ceilometer-collector start
openstack-ceilometer-central: service openstack-ceilometer-central status / service openstack-ceilometer-central start
openstack-ceilometer-notification: service openstack-ceilometer-notification status / service openstack-ceilometer-notification start
Keystone: service openstack-keystone status / service openstack-keystone start
openstack-ceilometer-api: service openstack-ceilometer-api status / service openstack-ceilometer-api start
Administration user interface
Workload Deployer
Region Server:
openstack-nova-api: service openstack-nova-api status / service openstack-nova-api start
openstack-nova-scheduler: service openstack-nova-scheduler status / service openstack-nova-scheduler start
openstack-nova-conductor: service openstack-nova-conductor status / service openstack-nova-conductor start
openstack-glance-api: service openstack-glance-api status / service openstack-glance-api start
openstack-glance-registry: service openstack-glance-registry status / service openstack-glance-registry start
openstack-cinder-api: service openstack-cinder-api status / service openstack-cinder-api start
openstack-cinder-volume: service openstack-cinder-volume status / service openstack-cinder-volume start
openstack-cinder-scheduler: service openstack-cinder-scheduler status / service openstack-cinder-scheduler start
openstack-heat-api: service openstack-heat-api status / service openstack-heat-api start
openstack-heat-api-cfn: service openstack-heat-api-cfn status / service openstack-heat-api-cfn start
openstack-heat-api-cloudwatch: service openstack-heat-api-cloudwatch status / service openstack-heat-api-cloudwatch start
openstack-heat-engine: service openstack-heat-engine status / service openstack-heat-engine start
openstack-ceilometer-notification: service openstack-ceilometer-notification status / service openstack-ceilometer-notification start
openstack-ceilometer-collector: service openstack-ceilometer-collector status / service openstack-ceilometer-collector start
openstack-ceilometer-central: service openstack-ceilometer-central status / service openstack-ceilometer-central start
neutron-server: service neutron-server status

Network and compute services:
neutron-linuxbridge-agent: service neutron-linuxbridge-agent status / service neutron-linuxbridge-agent start
neutron-l3-agent: service neutron-l3-agent status / service neutron-l3-agent start
neutron-metadata-agent: service neutron-metadata-agent status / service neutron-metadata-agent start
neutron-dhcp-agent: service neutron-dhcp-agent status / service neutron-dhcp-agent start
openstack-nova-compute: service openstack-nova-compute status / service openstack-nova-compute start
openstack-nova-api-metadata: service openstack-nova-api-metadata status / service openstack-nova-api-metadata start
neutron-linuxbridge-agent: service neutron-linuxbridge-agent status / service neutron-linuxbridge-agent start
In a high-availability topology, the services on Central Server 2 and on the Region
Servers are configured to be highly available on their respective secondary nodes,
as illustrated in the following table:
Table 24. Servers and their high availability configuration
Components deployed, with the command to check the service and the command to start the service:

openstack-ceilometer-api: service openstack-ceilometer-api status / service openstack-ceilometer-api start
Keystone: service openstack-keystone status / service openstack-keystone start
Administration user interface
Keystone: service openstack-keystone status / service openstack-keystone start
Administration user interface

openstack-ceilometer-central: service openstack-ceilometer-central status / service openstack-ceilometer-central start
openstack-ceilometer-collector: service openstack-ceilometer-collector status / service openstack-ceilometer-collector start
openstack-ceilometer-compute: service openstack-ceilometer-compute status / service openstack-ceilometer-compute start
openstack-ceilometer-notification: service openstack-ceilometer-notification status / service openstack-ceilometer-notification start
openstack-cinder-api: service openstack-cinder-api status / service openstack-cinder-api start
openstack-cinder-scheduler: service openstack-cinder-scheduler status / service openstack-cinder-scheduler start
openstack-cinder-volume: service openstack-cinder-volume status / service openstack-cinder-volume start
openstack-glance-api: service openstack-glance-api status / service openstack-glance-api start
openstack-glance-registry: service openstack-glance-registry status / service openstack-glance-registry start
openstack-heat-api: service openstack-heat-api status / service openstack-heat-api start
openstack-heat-api-cfn: service openstack-heat-api-cfn status / service openstack-heat-api-cfn start
openstack-heat-api-cloudwatch: service openstack-heat-api-cloudwatch status / service openstack-heat-api-cloudwatch start
openstack-heat-engine: service openstack-heat-engine status / service openstack-heat-engine start
openstack-nova-api: service openstack-nova-api status / service openstack-nova-api start
openstack-nova-cert
openstack-nova-conductor: service openstack-nova-conductor status / service openstack-nova-conductor start
openstack-nova-consoleauth: service openstack-nova-consoleauth status / service openstack-nova-consoleauth start
openstack-nova-metadata-api: service openstack-nova-metadata-api status / service openstack-nova-metadata-api start
openstack-nova-novncproxy: service openstack-nova-novncproxy status / service openstack-nova-novncproxy start
openstack-nova-scheduler: service openstack-nova-scheduler status / service openstack-nova-scheduler start
neutron-server: service neutron-server status

openstack-cinder-api: service openstack-cinder-api status / service openstack-cinder-api start
openstack-cinder-scheduler: service openstack-cinder-scheduler status / service openstack-cinder-scheduler start
openstack-glance-registry: service openstack-glance-registry status / service openstack-glance-registry start
openstack-heat-api: service openstack-heat-api status / service openstack-heat-api start
openstack-nova-api: service openstack-nova-api status / service openstack-nova-api start
openstack-nova-cert
openstack-nova-conductor: service openstack-nova-conductor status / service openstack-nova-conductor start
openstack-nova-consoleauth: service openstack-nova-consoleauth status / service openstack-nova-consoleauth start
openstack-nova-metadata-api: service openstack-nova-metadata-api status / service openstack-nova-metadata-api start
openstack-nova-novncproxy: service openstack-nova-novncproxy status / service openstack-nova-novncproxy start
openstack-nova-scheduler: service openstack-nova-scheduler status / service openstack-nova-scheduler start
neutron-server: service neutron-server status

openstack-ceilometer-central: service openstack-ceilometer-central status / service openstack-ceilometer-central start
openstack-ceilometer-collector: service openstack-ceilometer-collector status / service openstack-ceilometer-collector start
openstack-ceilometer-compute: service openstack-ceilometer-compute status / service openstack-ceilometer-compute start
openstack-cinder-api: service openstack-cinder-api status / service openstack-cinder-api start
openstack-cinder-scheduler: service openstack-cinder-scheduler status / service openstack-cinder-scheduler start
openstack-cinder-volume: service openstack-cinder-volume status / service openstack-cinder-volume start
openstack-glance-api: service openstack-glance-api status / service openstack-glance-api start
openstack-glance-registry: service openstack-glance-registry status / service openstack-glance-registry start
openstack-heat-api: service openstack-heat-api status / service openstack-heat-api start
openstack-heat-api-cfn: service openstack-heat-api-cfn status / service openstack-heat-api-cfn start
openstack-heat-api-cloudwatch: service openstack-heat-api-cloudwatch status / service openstack-heat-api-cloudwatch start
openstack-heat-engine: service openstack-heat-engine status / service openstack-heat-engine start
openstack-nova-api: service openstack-nova-api status / service openstack-nova-api start
openstack-nova-cert
openstack-nova-compute: service openstack-nova-compute status / service openstack-nova-compute start
openstack-nova-conductor: service openstack-nova-conductor status / service openstack-nova-conductor start
openstack-nova-consoleauth: service openstack-nova-consoleauth status / service openstack-nova-consoleauth start
openstack-nova-metadata-api: service openstack-nova-metadata-api status / service openstack-nova-metadata-api start
openstack-nova-novncproxy: service openstack-nova-novncproxy status / service openstack-nova-novncproxy start
openstack-nova-scheduler: service openstack-nova-scheduler status / service openstack-nova-scheduler start
neutron-linuxbridge-agent: service neutron-linuxbridge-agent status / service neutron-linuxbridge-agent start
neutron-server: service neutron-server status

openstack-cinder-api: service openstack-cinder-api status / service openstack-cinder-api start
openstack-cinder-scheduler: service openstack-cinder-scheduler status / service openstack-cinder-scheduler start
openstack-glance-registry: service openstack-glance-registry status / service openstack-glance-registry start
openstack-heat-api: service openstack-heat-api status / service openstack-heat-api start
openstack-nova-api: service openstack-nova-api status / service openstack-nova-api start
openstack-nova-cert
openstack-nova-compute: service openstack-nova-compute status / service openstack-nova-compute start
openstack-nova-conductor: service openstack-nova-conductor status / service openstack-nova-conductor start
openstack-nova-consoleauth: service openstack-nova-consoleauth status / service openstack-nova-consoleauth start
openstack-nova-metadata-api: service openstack-nova-metadata-api status / service openstack-nova-metadata-api start
openstack-nova-novncproxy: service openstack-nova-novncproxy status / service openstack-nova-novncproxy start
openstack-nova-scheduler: service openstack-nova-scheduler status / service openstack-nova-scheduler start
neutron-linuxbridge-agent: service neutron-linuxbridge-agent status / service neutron-linuxbridge-agent start
neutron-server: service neutron-server status
For information about the high availability policies of these services, see System
automation policies on page 240. If a service is configured active-active, it should
be running on both the primary and the secondary nodes. If a service is configured
active-passive, only one instance of the service should be running, on either the
primary or secondary node.
Some services do not apply to the current configuration and therefore must not be
running at all on either the primary or the secondary node; examples are
openstack-nova-console and openstack-nova-xvpvncproxy.
Restarting NoSQL
To restart NoSQL, perform the following procedure.
Procedure
1. Log in as DB2 instance user (for example, db2inst1) by running the following
command:
su - db2inst1
High-availability solutions
Learn how to set up an IBM Cloud Orchestrator management stack with
high-availability quality of service (QoS), and reduce the downtime of the IBM
Cloud Orchestrator management stack.
IBM Cloud Orchestrator achieves high availability by using redundant active-active
installed components, where possible.
The high-availability capabilities of the IBM Cloud Orchestrator management stack
are supported by using the following products:
System Automation Application Manager
System Automation Application Manager automates the availability of
resources by starting and stopping resources automatically and in the
correct sequence. System Automation Application Manager uses agentless
adapters to monitor and control remote applications. System Automation
Application Manager ensures availability and provides automation of these
services across operating-system boundaries. System Automation
Application Manager provides a centralized server that monitors and
automatically stops, starts, or restarts the various applications on the
virtual machines in the IBM Cloud Orchestrator environment.
System Automation for Multiplatforms
System Automation for Multiplatforms is a clustering solution that
provides high-availability and automation features for critical components
such as applications, network interfaces, virtual IP addresses, and storage.
In the IBM Cloud Orchestrator environment, System Automation for
Multiplatforms can monitor and fail over critical components that rely on
an active-standby setup. System Automation for Multiplatforms uses
automation capabilities to ensure that actions are completed in the correct
order if a component fails. System Automation for Multiplatforms
automates services on Central Server 2, on KVM Region Servers, and on
VMware Region Servers.
These high-availability solutions are compatible and can be used together to
achieve high availability of the IBM Cloud Orchestrator management stack. When
you use System Automation Application Manager and System Automation for
Multiplatforms with IBM Cloud Orchestrator, you can recover the IBM Cloud
Orchestrator environment after hardware or software failure. This solution reduces
the downtime of the IBM Cloud Orchestrator management stack. In addition,
operation of the IBM Cloud Orchestrator management stack is simplified.
To use the hypervisor high-availability capabilities, you must complete extra
manual steps at installation time. For information about installing high availability,
see Deploying the High-Availability Distributed topology on page 53.
High-availability configurations
For the IBM Cloud Orchestrator management stack, high-availability quality of
service (QoS) can be provided in an active-active configuration or in an
active-standby configuration.
In a high-availability setup, the applications and services are usually configured as
redundant. One of the main differences between the high-availability
configurations is how the second server is set up.
active-active
In an active-active setup, the application is running on all instances. A load
balancer or proxy can be deployed to accept requests and to distribute the
requests to the nodes. Balancing the load in this way enhances
performance because more systems can handle the workload
simultaneously. To use the active-active setup, an application usually must
be designed to provide this capability: for example, to store and access the
underlying data.
active-standby
In an active-standby setup, the application is running on only one instance
at a time. It is often necessary to enforce this limit because the shared
storage or database might become corrupted if accessed by multiple
instances simultaneously. Applications that do not have an active-active
design are usually run in the active-standby mode when they are made
highly available.
System Automation for Multiplatforms provides services to both types of
high-availability configurations:
v For components in an active-standby setup, which is also known as warm
standby, System Automation for Multiplatforms provides monitoring and failover
capabilities, and ensures that exactly one instance of the service is running at
any time.
The application or service is installed and readily configured on both nodes. If
data access is necessary, the data can be made available to both nodes. If an
unrecoverable error occurs on the first node, the application and all related
services are stopped and automatically restarted on the second node.
v For components in an active-active setup, System Automation for Multiplatforms
provides monitoring and restart capabilities, and ensures that the service
remains online even if temporary outages occur.
The application or service is installed and readily configured in the active-active
setup on both nodes. If an unrecoverable error occurs on any node, System
Automation for Multiplatforms detects the problem and automatically recovers,
which ensures maximum performance.
Goal-driven automation
Goal-driven automation helps to maintain each component in the desired state,
when IBM Cloud Orchestrator is configured as highly available. Do not interfere
with goal-driven automation by doing manual actions.
Automation can be goal-driven or command-driven. In command-driven
automation, the command is issued without any preconditions. In goal-driven
automation, the desired state is computed from multiple persistent inputs.
When IBM Cloud Orchestrator is configured as highly available, the components
are managed by System Automation Application Manager and System Automation
for Multiplatforms. Both of these system automation products try to maintain a
desired state for every resource that they manage. This desired state can be specified
as online or offline, and is calculated from the following input factors:
v The default state that is defined in the policy
v Operator requests sent from the System Automation Application Manager and
System Automation for Multiplatforms user interfaces
v Relationship to other resources, which might require a component to start or
stop
Therefore, it is important that you manage these resources by changing their
desired state. If you start or stop resources outside the scope of System
Automation Application Manager or System Automation for Multiplatforms, the
action is evaluated as an unwanted deviation from the automation goal, and can
result in countermeasures.
For more information about goal-driven automation and how to manage resources,
see the following web pages:
v Understanding automation goals and requests
v Working with resources
v System Automation Application Manager command-line interface
v Administering a resource group
System automation policies

Each resource is listed with its type of high-availability configuration:

ihs: Active-passive
haproxy: Active-passive
bpm: Active-active
Keystone (keystone): Active-active
Horizon (horizon): Active-active
pcg: Active-active
scui: Active-active

haproxy: Active-passive
qpid: Active-active
qpid-primary
Ceilometer:
  ceilometer-central-agent: Single Instance
  ceilometer-collector: Single Instance
  ceilometer-compute-agents: Single Instance
Cinder:
  cinder-api: Active-active
  cinder-scheduler: Active-active
  cinder-volume: Single Instance
Glance:
  glance-api: Single Instance
  glance-registry: Active-active
Heat:
  heat-api-cfn: Single Instance
  heat-api-cloudwatch: Single Instance
  heat-api: Active-active
  heat-engine: Single Instance
Neutron:
  neutron-api: Active-active
Nova:
  nova-api: Active-active
  nova-cells: Active-active
  nova-cert: Active-active
  nova-compute-vmwareapi: Single Instance
  nova-conductor: Active-active
  nova-consoleauth: Active-active
  nova-novncproxy: Active-active
  nova-scheduler: Active-active
broker copies all messages to the secondary node (Node B) before sending the
acknowledgment of receipt. When a failover occurs, the daemon resource is
restarted to ensure that the roles switch correctly: the original secondary node
(Node B) now becomes the primary node, and the qpid daemon runs on this
node. When the original primary node (Node A) comes back online, it starts as
the secondary node: the qpid-primary resource now runs on this node. After the
nodes reconnect to each other, the message queue is backed up on the new
secondary node (Node A).
v qpid-primary and haproxy depend on an IBM.ServiceIP resource that represents
a virtual IP address. If qpid-primary or haproxy fails over, the ServiceIP fails
over first. All clients of these two services must connect to the ServiceIP at all
times. Otherwise, such clients cannot connect to the service (qpid-primary or
haproxy) when it fails over.
Note: qpid-primary and haproxy share the same ServiceIP. Therefore, both
services always run on the same node. The qpid resource always runs on both
nodes and is independent of the IP address.
Procedure
1. Log on to System Automation Application Manager web user interface at
https://<SAAM server hostname>:16311/ibm/console/. Specify the user and
password that you defined during the installation. The default user is eezadmin.
2. Click the Operate end-to-end resources link. A view of the managed end to
end resources is displayed.
3. Click the plus sign in front of the top level resource SCOsaam to expand and
show the lower level resources.
4. In the Resources section for the policy, click the SCO resource group to expand
the list of all the related resources and resource groups.
5. If you want to stop the whole IBM Cloud Orchestrator management stack,
select the top level resource SCOsaam and right-click the resource. Select
Request Offline. Enter a comment in the displayed window and click Submit.
6. If you want to stop a specific resource, right-click the resource name to select it
and click Request Offline.... Enter a comment in the displayed window and
click Submit. The service and all the dependent services are stopped. This only
works for resources that are managed by the agentless adapter.
7. To restart a resource or a resource group, right-click on the resource and select
Cancel Request.
Results
You stopped and restarted the IBM Cloud Orchestrator services.
For more information about using System Automation Application Manager, see
the System Automation Application Manager, Administrator's and User's Guide.
Managing settings
You can configure product settings before building a cloud infrastructure.
Procedure
1. Click PATTERNS > Deployer Administration > Settings.
2. Expand Mail Delivery.
3. Add an SMTP server. Provide the IP address or host name for the SMTP server
to be used for IBM Cloud Orchestrator.
4. Add a reply-to address. The email address for the administrator should be used
for this field.
Results
The mail delivery function is now configured to send event notifications.
where
initial_delay
Specifies the time interval (in seconds) between the start of the Workload
Deployer server and the first synchronization with the OpenStack
environment. It is optional. The default value is 60.
interval_sec
Specifies the time interval (in seconds) between the synchronizations with
the OpenStack environment. The default value is 600.
hypervisor_discovery_timeout_sec
Specifies the timeout after which the operation will be reported as failed.
hypervisor_discovery_check_interval_sec
Specifies the time interval for the hypervisor discovery operation. By
default, Workload Deployer will check every 10 seconds for a maximum of
10 minutes.
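The configuration stanza that contains these parameters is not shown in this excerpt. Assuming a simple key-value format, the documented default values would look like the following sketch (the file name, location, and exact syntax are not confirmed here and are described in the knowledge center):
initial_delay = 60
interval_sec = 600
hypervisor_discovery_check_interval_sec = 10
hypervisor_discovery_timeout_sec = 600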
Style properties
The style properties section of the file contains all the css style classes,
attributes, and elements that are customized. The style properties that are
defined must be legal css because it is this metadata that forms the direct
input for creating the css properties in the catalog-branding.css file. The
catalog-branding.css file is dynamically rendered through the Common
Custom Service. The catalog-branding.css is available at the following
URL:
<scuiHostName:port>/styles/catalog-branding.css
IBM Cloud Orchestrator supports customizable properties that are defined in the
customizations.json file. These supported customizable properties are described
in the following table.
Table 28. Customizable properties

Metadata property        Type
bannerLogo               Image
bannerLogoAlt            Text
title                    Text
bodyBackgroundColor      Text
bodyBackgroundImage      Image
loginLogo                Image
loginLogoAlt             Text
loginBackgroundColor     Text
loginBackgroundImage     Image
logoutURL                Text
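As an illustration only, a customizations.json fragment that sets some of these properties might look like the following sketch; the exact nesting of the file and all values shown here are assumptions, not taken from the product documentation:
{
  "title": "My Company Cloud",
  "bannerLogo": "images/banner-logo.png",
  "bannerLogoAlt": "My Company Cloud banner",
  "bodyBackgroundColor": "#f4f4f4",
  "loginLogo": "images/login-logo.png",
  "loginLogoAlt": "My Company Cloud login",
  "loginBackgroundColor": "#003366",
  "logoutURL": "https://intranet.example.com/logout"
}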
Enter the following link in your browser to clear the server cache:
http://<scuiHostName:port>/customization/clearcustomcache
6. Log in to the IBM Cloud Orchestrator user interface with the domain that was
customized and view the results to ensure that you are satisfied with the style
updates. To open the login screen for a specific domain, for example mydomain,
enter the following link in your browser:
http://<scuiHostName:port>/login?domainName=mydomain
Dashboard extensions
A dashboard extension file is a single html file that displays one or more BPM
coaches. For the purpose of the dashboard extension these coaches should contain
dashboards, graphs or other reporting elements developed in BPM.
The user defined extension html file contains a fragment of html without a head or
a body element. These are included in the parent html file which embeds the
supplied extension content.
The html file must contain iframe tags to specify the Business Process Manager
dashboard coaches that you want to display on the page. In this case an iframe is
used to embed one html document, the dashboard coach, within another html
document, the parent extension. Define the width and height of the inline coach
frames to achieve the layout you want.
The following snippet of code is an example of a user defined html fragment with
a Business Process Manager dashboard iframe that is contained in a sample
extension html file:
<div align="center">
<iframe id="ifm" name="ifm" width="1000" height="1000" frameborder="0"
src="{{bpmEndpoint}}/teamworks/process.lsw?zWorkflowState=5&zProcessRef=/1.fc983f33-e98f-4999-b0b5-bd1e39d6102e&zBaseContext=2064.abec322d-430c-43dd-820a-98f223d29fa4T&applicationInstanceId=guid:6416d37cd5f92c3a:33127541:1461a68c1cd:7ffe&applicationId=2"
scrolling="auto" align="middle">
</iframe>
</div>
The four roles correspond to the following users:
member: End User
admin: Cloud Administrator
domain_admin: Domain Administrator
catalogeditor: Service Designer
The dashboard folder is the parent extension directory. It contains a folder for each
of the four roles. It is located on the Self-service user interface server at the
following location: {server_location}/etc/dashboard
For example, {server_location}/etc/dashboard/admin or {server_location}/etc/dashboard/member.
When a service designer wants to make a new dashboard available to an admin
user, for example, they add the extension html file to the
{server_location}/etc/dashboard/admin directory.
File name                     Menu label
Network Dashboard.html        Network Dashboard
Performance Dashboard.html    Performance Dashboard
Ordinal numbering of extension files is included so that you can control the order
in which the extension labels appear in the DASHBOARD menu. The ordinal
numbering format convention is a number, then a hyphen (-), then the file name.
Any file name starting with this pattern is placed in this relative numbered
position in the sub menu. The pattern is stripped from the file name when
constructing the label. If an ordinal numbering format convention is not used
when naming extension files, the files are added alphabetically.
Packaging and deployment:
A dashboard extension made available via the market place must be packaged as a
compressed zip or tar.gz file.
The package structure must match the role based directory structure mentioned in
the previous section. For example:
extensionExample.zip
+ dashboard
+ admin
+ 01 - My Admin Extension Example.html
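For example, from the directory that contains the dashboard folder, the package can be created with standard tools (the file names are illustrative):
zip -r extensionExample.zip dashboard
tar -czf extensionExample.tar.gz dashboard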
Managing security
You can manage users and the level of access that each user has in the IBM Cloud
Orchestrator environment. You can assign which roles a user has on a specific
project, as well as the primary project for a user. A user can have different roles on
different projects. Both users and projects belong to a domain.
[Figure: the identity model. A domain has a quota and contains projects and users; users are linked to projects through memberships and roles; quotas (per region and availability zone), networks, images, and instances (VMs, stacks) belong to projects.]
Domain
A domain is the highest entity in the identity model and represents a tenant: it is
a container and namespace for the projects and users of a customer. IBM Cloud
Orchestrator allows segregation at both the domain level and the project level.
Whether the domain concept is used depends on whether the tenant should be allowed
to organize itself, which requires the role of a domain administrator. If the domain is
a self-organized unit, create a domain and its domain administrator. A domain can
have multiple projects and users. The project and users are owned by a domain.
The domain administrator can manage the project and users and assign resources
to them. If the customer is not a self-organized unit and the administrator of the
service provider configures all projects, users and resources, the domain concept
can be ignored and the Default domain can be used. The Default domain always
exists.
User
A user represents the account of a person. You can log in to IBM Cloud
Orchestrator with a user account. A user account contains:
v user name
v password
v email address
A user is unique within a domain. You can have two different users with the same
name in two different domains. A user must always be a member of at least one
project and have a default project defined.
Project
A project is a container which owns resources. Resources can be:
v virtual machines
v stacks
v images
v networks
v volumes
The project is unique within a domain. This means you can have two projects with
the same name in two different domains. A project can also have one or more
users as members. If you are a member of a project, you can access all resources
owned by that project.
Role
A role grants you access to a set of management actions. IBM Cloud Orchestrator
supports four different roles:
v admin
v domain_admin
v catalogeditor
v member
For information about the roles, see User roles in IBM Cloud Orchestrator on
page 255.
Scope
You can be a member of one or multiple projects. As a user, you always work in the
scope of a project. When you log in, you work on behalf of a default project. If you
are a member of multiple projects, you can switch across projects in the self-service
banner.
LDAP
IBM Cloud Orchestrator can be configured to authenticate users with an LDAP or
Active Directory. You can configure one LDAP for all domains, or a specific
LDAP per domain. If you log in to a domain with an LDAP configured, you are
authenticated against the LDAP of that domain.
domain_admin role
A user with this role can do the following tasks in the IBM Cloud
Orchestrator environment:
v View the details of the domain.
v View the projects, users, groups, offerings, and actions of the domain.
v Create, edit, and delete projects, users, groups, offerings, and actions that
are associated with the domain.
v Manage the quota, availability zones, and networks for projects in the
domain.
v Do all of the tasks that a user with the catalogeditor role can do.
catalogeditor role
A user with this role can do the following tasks in the IBM Cloud
Orchestrator environment:
v Create virtual system patterns and virtual application patterns. Edit or
delete any patterns that they create or to which they have access.
v Add objects to the IBM Cloud Orchestrator catalog. Modify or delete any
catalog content that they create or to which they have access.
v Import image files.
v Create self-service offerings, self-service categories, and orchestration
actions. Modify or delete any self-service offerings, self-service
categories, and orchestration actions that they create or to which they
have access.
v Do all of the tasks that a user with the member role can do.
member role
A user with this role can do the following tasks in the IBM Cloud
Orchestrator environment:
v View and manage the virtual system instances, patterns, and catalog
content to which they are granted access.
v Deploy virtual system patterns and virtual application patterns.
Important: A user with the member role cannot add, remove, or modify
any items.
Note: The following roles, which are shown in the Administration user interface,
are used only for the OpenStack services and not for IBM Cloud Orchestrator:
_member_
KeystoneAdmin
KeystoneServiceAdmin
sysadmin
netadmin
v Virtual images
v Patterns
v Script packages
v Add-ons
v Virtual system instances
The access level can be Read, Write, All or owner, or None.
Related tasks:
Modifying the access control list of an action on page 329
You can modify the access control list of an action by adding or removing access.
Modifying the access control list of an offering on page 326
You can modify the access control list of an offering by adding or removing access.
[Figure: example of domains, projects, regions, and availability zones. The Default domain and the customer domains (Customer A with Projects A1 and A2, Customer B) each contain their own projects; Keystone authenticates users, optionally against a corporate LDAP. OpenStack Region 1 through Region 4 (for example KVM, EC2, VMware, and Power regions) expose availability zones (AZ 1.1, AZ 1.2, AZ 2.1, AZ 3.1, AZ 3.2, AZ 4.1, AZ 4.2) that map to compute nodes, vCenter ESX clusters, or PowerVC System p server pools.]
Network isolation
The segregation of tenants also requires the isolation of tenant networks.
[Figure: network isolation for tenants. Domain A and Domain B each manage their own networks (Network-A1, Network-A2, Network-A3, Network-B1) and router. A tenant's web-tier VM communicates with the external world through a floating IP address (192.0.10.x) on the external network, and with the DB tier over internal networks (10.10.10.x, 10.10.12.x). The Cloud Administrator provides the physical infrastructure and VLANs, creates the external networks and routers, and hands over the VLANs and external networks to the Domain Administrator.]
Procedure
1. Generate the public and private key on the PowerVC server (this requires the
powervc-source file that is documented in steps 8a and 8b of Making PowerVC
images compatible with Workload Deployer on page 346):
ssh-keygen -t rsa -f test1.key
This will create a test1.key and a test1.key.pub which are the private and
public keys respectively.
2. Add a new key pair to the PowerVC key pair list ensuring that you select the
public key that was created in step 1:
nova keypair-add --pub-key test1.key.pub test1
4. Through Horizon or any other method you are familiar with, create a key pair
on IBM Cloud Orchestrator and specify the same name as the key pair on
PowerVC and ensure that you also specify the contents of test1.key.pub as the
public key.
Results
You are now able to deploy a pLinux virtual machine from IBM Cloud
Orchestrator that can use the private key test1.key for access.
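For example, assuming the deployed virtual machine has the IP address 192.0.2.50 and accepts root logins (both are placeholders that depend on your environment and image), you can connect with the private key:
ssh -i test1.key root@192.0.2.50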
Managing a domain
You can manage domains in IBM Cloud Orchestrator with the Administration user
interface.
Procedure
1. Create a domain resource. This step automatically creates a default project for
the domain to facilitate user onboarding.
2. Ensure that the domain has access to at least one deployment availability zone.
This allows users in that domain to access virtual images and deploy virtual
servers when logged into the domain projects. The availability zones assigned
to the domain will then be visible to be assigned to projects within the domain.
3. To delegate the domain administration, ensure that at least one user is assigned
to the domain with domain_admin role. With this role, the Cloud
Administrator can delegate the administrative tasks of the domain to the
Domain Administrator who can then start creating projects and assigning users.
Note: The default OpenStack Cloud Administration domain Default should
not be disabled. If it is, you are unable to log in to the Orchestrator and
Administration UI as the default Cloud Administrator, and the default Cloud
Administrator domain and projects become invalid. If you disabled the domain, you
can enable it again in one of the following ways:
v Send an HTTP request as follows:
curl -i -X PATCH http://<HOST>:35357/v3/domains/default \
 -H "User-Agent: python-keystoneclient" \
 -H "Content-Type: application/json" -H "X-Auth-Token: <TOKEN>" \
 -d '{"domain": {"enabled": true, "id": "default", "name": "Default"}}'
v Update the domain to be enabled by using the Keystone V3 python client, as in the sketch that follows.
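A minimal sketch of that approach with the python-keystoneclient V3 API (the token and endpoint values are placeholders):
from keystoneclient.v3 import client

# Use an admin token and the Keystone V3 admin endpoint of your environment
keystone = client.Client(token="<ADMIN_TOKEN>", endpoint="http://<HOST>:35357/v3")
# Re-enable the Default domain
keystone.domains.update("default", enabled=True)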
Creating a domain:
The Cloud Administrator creates domains to organize projects, groups, and users.
Domain administrators can update and delete resources in a domain.
Procedure
1. Log in to the Administration user interface as the Cloud Administrator.
2. In the left navigation pane, click ADMIN > Identity Panel > Domains. The
Domains page opens.
3. Select Create Domain. The Create Domain window is displayed.
4. Specify the domain name and, optionally, the domain description.
5. Optional: Clear the Enabled check box to disable the domain. If the domain is
disabled, the Domain Administrator cannot create, update, or delete resources
related to the domain. New domains are enabled by default.
6. Click Create Domain.
Results
A message is displayed, indicating that the domain is created successfully. A
project called Default is automatically created for the new domain.
Assigning a zone to a domain:
Assigning a zone to a domain enables users within that domain to access a specific
zone.
Before you begin
You must be logged in with the admin role to complete these steps.
Procedure
1. Log in to the Administration user interface as the Cloud Administrator.
2. Open the domains page by clicking ADMIN > Identity Panel > Domains in
the navigation pane.
3. In the domains page, find the entry for the domain and click More > Edit in
the Actions column to open the Edit Domain window.
4. Click the Availability Zones tab. The Available Zones and the Assigned Zones
are listed in the following format: Zone_Name - Region_Name
5. To assign a zone to a domain, from the list of Available Zones, click the plus
button beside the zone name. The selected zone moves to the Assigned Zones
list. To return an Assigned Zone to an Available Zone, select the minus button
beside the zone name. Use the Filter field to search for specific zones.
6. When you have assigned all zones, click Save.
Results
A message indicates that the domain is modified successfully.
Setting the domain context:
Cloud administrators can set the domain context so that the Identity Panel pages show details only for a single domain.
Procedure
1. Log in to the Administration user interface as the Cloud Administrator.
2. In the left navigation pane, click ADMIN > Identity Panel > Domains.
3. In the domains page, find the entry for the domain and click Set Domain
Context in the Actions column.
Results
The domain page title changes to <domainName>:Domains. The Projects, Users,
Groups, and Roles pages now display details only for the selected domain
context.
Clearing the domain context:
Cloud administrators can clear the scope of all domains, enabling visibility across
all domains.
Procedure
1. Log in to the Administration user interface as the Cloud Administrator.
2. In the left navigation pane, click ADMIN > Identity Panel > Domains.
3. In the domains page, select Clear Domain Context from the top right-hand
corner.
Results
All domains are visible.
Managing projects
You can manage the level of access for each project to IBM Cloud Orchestrator
with the user interface.
Creating a project:
6. Optional: By clearing the Enabled check box, you disable the project so that it
cannot be authorized. Selecting the Enabled check box keeps the project enabled
so that you can authorize it.
7. Click Create Project.
Results
A message indicates that the project is created successfully.
Enabling a project:
Enabling a project allows you to set that project as your default project. This action
appears only if the project is disabled.
Procedure
1. Log in to the Administration user interface as the Cloud Administrator.
2. Open the projects page by clicking ADMIN > Identity Panel > Projects in the
navigation pane.
3. In the projects page, find the entry for the project and click More > Edit Project
in the Actions column.
4. In the Edit Project window, click the Enabled check box so that the box
contains a tick symbol.
What to do next
A message is displayed indicating that the project is enabled.
Editing a project:
You can modify the name and description of a project.
Procedure
1. Log in to the Administration user interface as the Cloud Administrator.
2. Open the projects page by clicking ADMIN > Identity Panel > Projects in the
navigation pane.
3. In the projects page, find the entry for the project and click More > Edit Project
in the Actions column.
4. In the Project Info tab, edit the name and description of the project.
Results
A message is displayed indicating that the project information has been modified.
Disabling a project:
Disabling a project in a domain means that users who previously had that project
set as their default cannot log in to it anymore. Other users also cannot switch to
this project anymore.
Procedure
1. Log in to the Administration user interface as the Cloud Administrator.
2. Open the projects page by clicking ADMIN > Identity Panel > Projects in the
navigation pane.
3. In the projects page, find the entry for the project and click More > Edit Project
in the Actions column.
4. In the Edit Project window, deselect the Enabled check box so that the box is
empty.
Results
A message is displayed indicating that the project is disabled.
Deleting a project:
Delete a project in the Administration user interface as the Cloud Administrator.
About this task
If a project is deleted, all of its assigned resources (virtual machines, stacks,
patterns, networks, images, and so on) remain in the cloud. Only the Cloud
Administrator can manage these orphan resources. The Domain Administrator
cannot recover from this situation.
Procedure
1. Log in to the Administration user interface as the Cloud Administrator.
2. Open the projects page by clicking ADMIN > Identity Panel > Projects in the
navigation pane.
3. Find the entry for the project that you want to delete. In the Actions column for
that entry, click More > Delete Project.
Note: Deleting the default project of a domain results in the domain quotas
becoming empty because the domain quotas are a multiplier of the default
project quotas. Refer to Setting the default domain quotas for more details on
the domain quotas.
Results
A message is displayed, indicating that the project has been deleted.
Assigning a zone to a project:
Assigning a zone to a project enables users within that project to access a specific
zone.
Procedure
1. Log in to the Administration user interface as the Cloud Administrator.
2. Open the domains page by clicking ADMIN > Identity Panel > Domains in
the navigation pane.
3. In the domains page, find the entry for the domain and select Set Domain
Context in the Actions column. The Identity Panel group is now in the context
of the selected domain and the Domains page is also changed. You are now
working within the context of the domain that you created.
4. Select Identity Panel > Projects.
5. In the Actions column in the table for the project, click More > Edit Project.
6. Click the Availability Zones tab. The available zones and the assigned zones
are listed in the following format: Zone_Name - Region_Name.
7. To assign a zone to a project, from the list of Available Zones, click the plus
button beside the zone name. The selected zone moves to the Assigned Zones
list. To return an Assigned Zone to an Available Zone, select the minus button
beside the zone name. Use the Filter field to search for specific zones.
8. When you have assigned all zones, click Save.
Results
A message indicates that the project is modified successfully.
Configuring project quotas:
The Cloud Administrator can configure the project quotas in OpenStack.
About this task
Log in to the Administration user interface as the Cloud Administrator. Use the
command-line interface to set the default project quotas and to change the project
quotas in OpenStack. You can specify project quotas for several different resources,
such as CPUs and memory. For more information, see the OpenStack
documentation (http://docs.openstack.org/user-guide-admin/content/
cli_set_quotas.html).
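For example, assuming that the OpenStack nova command-line client is configured for the target region, the quotas of a project might be displayed and updated as follows; the project ID and the values are illustrative:
# Display the current quotas of the project.
nova quota-show --tenant <project_id>
# Raise the instance, core, and memory quotas of the project.
nova quota-update --instances 20 --cores 40 --ram 51200 <project_id>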
Reassigning VM instances to a project:
Reassigning VM instances to a project enables virtual machines that were loaded
into a default project to be assigned to the project of the user who owns them.
Before you begin
In order to reassign instances, you must have an admin role on the source project
containing the instances to be reassigned.
Procedure
1. Log in to the Administration user interface as a Cloud Administrator.
2. In the navigation pane, click PROJECT > Instances.
3. Find the instances to be reassigned and select the check boxes beside their
names.
4. Click Reassign Instances.
Note: Reassign Instances is only visible for a VMware region.
5. The Selected instances box contains a list of the instances selected from the
Instances table. The following options are available:
v To deselect an instance, click the instance in the list box. At least one
instance must be selected.
v To select all instances, press Ctrl-Shift-End.
6. In the Reassign Instances window, select the Target Domain where the
instances are to be assigned.
7. Select Target Project where the instances are to be assigned.
8. Click Reassign.
Results
A message is displayed indicating that the instances have been reassigned
successfully from the source domain and project to the target domain and project.
Note: When a VM instance is reassigned from one project to another, the resources
that are associated with the VM (such as networks, IPs, and flavors) remain owned
by the source project. If there are access issues to these resources from the new
project, you need to re-create the resources in the new project.
Customizing and extending the user interface:
Role based directory structure:
Dashboards are role based. This allows service designers to have their dashboard
extension content available to specific roles.
The directory structure that the extension content is deployed into is based on the
IBM Cloud Orchestrator roles in the following table. The four IBM Cloud
Orchestrator roles which the extension is compatible with are:
Table 32.
Name of role directory    Role
member                    End User
admin                     Cloud Administrator
domain_admin              Domain Administrator
catalogeditor             Service Designer
The dashboard folder is the parent extension directory. It contains a folder for each
of the four roles. It is located on the Self-service user interface server at the
following location: {server_location}/etc/dashboard, for example
{server_location}/etc/dashboard/admin or {server_location}/etc/dashboard/member.
When a service designer wants to make a new dashboard available to an admin
user, for example, they add the extension HTML file to the
{server_location}/etc/dashboard/admin directory.
Navigation elements and extension naming conventions:
Naming extension files is important as these names drive the names of the
navigation elements where the dashboards are accessed from.
Navigation elements should be meaningful so that you can easily locate important
content in the UI. As these dashboard extension navigation elements are driven
from the names of the dashboard extension files, these file names should be
meaningful. Dashboard extensions appear as submenu items under the
DASHBOARD menu and take the name of the extension file as their menu label.
The label is the file name with any ordinal prefix and the file extension removed.
The following table describes an example of file names and their respective menu
labels:
Table 33.
File                          Menu label
Network Dashboard.html        Network Dashboard
Performance Dashboard.html    Performance Dashboard
Ordinal numbering of extension files is included so that you can control the order
in which the extension labels appear in the DASHBOARD menu. The ordinal
numbering format convention is a number, then a hyphen (-), then the file name.
Any file name starting with this pattern is placed in this relative numbered
position in the submenu. The pattern is stripped from the file name when
constructing the label. For example, a file named 01 - Network Dashboard.html
appears first in the submenu with the label Network Dashboard. If the ordinal
numbering format convention is not used when naming extension files, the files
are added alphabetically.
Packaging and deployment:
A dashboard extension made available through the marketplace must be packaged as a
compressed zip or tar.gz file.
The package structure must match the role based directory structure mentioned in
the previous section. For example:
extensionExample.zip
+ dashboard
+ admin
+ 01 - My Admin Extension Example.html
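For example, assuming that the dashboard directory shown above exists in the current working directory, the package might be created with one of the following commands:
# Package the role-based dashboard directory as a zip file.
zip -r extensionExample.zip dashboard
# Alternatively, package it as a tar.gz file.
tar -czf extensionExample.tar.gz dashboard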
v Project Members: Users who are members of this project and associated
roles. This list also shows the roles that are assigned to each project member.
4. To assign a user to this project, click +. The user is moved from the All Users
list to the Project Members list.
5. To remove a user from this project, click -. The user is moved from the Project
Members list to the All Users list.
6. To change the roles that are assigned to a project member: in the Project
Members list, expand the role list for the user, and select the roles.
7. Click Save.
Results
The changes you made to user assignments for a project have been saved.
Managing groups
You can manage the level of access for each group to IBM Cloud Orchestrator with
the user interface.
Deleting a group:
Delete one or more groups in a domain.
Procedure
1. Log in to the Administration user interface as the Cloud Administrator.
2. Open the groups page by clicking ADMIN > Identity Panel > Groups in the
navigation pane.
3. Find the entry for the group that you want to delete. In the Actions column for
that entry, click More > Delete Group.
Results
A message is displayed, indicating that the group has been deleted.
Managing users
You can manage the level of access for each individual user to IBM Cloud
Orchestrator with the user interface.
Creating a user:
You can manage the level of access for each user in IBM Cloud Orchestrator. Users
can be assigned to different roles on different projects.
Procedure
1. Log in to the Administration user interface as the Cloud Administrator.
2. Open the users page by clicking ADMIN > Identity Panel > Users in the
navigation pane.
3. Click Create User. The Create User window is displayed.
4. Specify the user details, such as the user name, password, primary project, and
role, and then click Create User.
Results
A message indicates that the user is created successfully.
Deleting a user:
You can delete one or multiple users in a domain.
Procedure
1. Log in to the Administration user interface as the Cloud Administrator.
2. Open the users page by clicking ADMIN > Identity Panel > Users in the
navigation pane.
3. Find the entry for the user that you want to delete. In the Actions column for
that entry, click More > Delete User.
Results
A message is displayed, indicating that the user has been deleted.
Managing volumes
You can attach volumes to instances to enable persistent storage.
Procedure
1. Log in to the Administration user interface as a Cloud Administrator.
2. Click ADMIN > System Panel > Volumes.
Managing networks
As a cloud administrator you can manage networks in IBM Cloud Orchestrator
with the user interface.
Creating a network:
As a cloud administrator, you can create a new network in your environment.
Procedure
1. Log in to the Administration user interface as the Cloud Administrator.
2. You can create a network in either of the following ways:
v To create both a network and a subnet, click PROJECT > Network >
Networks. Then, click Create Network. The Create Network window is
displayed.
You cannot specify the provider network type by using this method.
After you enter the required information, click Create. A message appears
indicating that the network is created successfully.
For more information, see Create and manage networks and refer to the
"Create a network" section.
v To create a network and specify the provider network type, click ADMIN >
System Panel > Networks. Then, click Create Network. The Create Network
window is displayed.
After you enter the required information, click Create Network. A message
appears indicating that the network is created successfully.
Use this method also to create networks that are shared among different
projects.
You cannot create a subnet by using this method. You can create the subnet
after the network is created by following the procedure described in Adding
a subnet to an existing network on page 274.
For more information about managing networks in OpenStack, see the
OpenStack Cloud Administrator Guide.
Note: For a VMware region, the name of the new network must match the
name of the network as defined in the vCenter Server that is being managed.
Deleting a network:
As a cloud administrator, you can delete one or multiple networks in your
environment.
Procedure
1. Log in to the Administration user interface as the Cloud Administrator.
2. In the left panel, click ADMIN > System Panel > Networks. The Networks
window is displayed.
3. Select the networks that you want to delete.
4. Click Delete Networks. A confirmation window is displayed.
5. Click Delete Networks. A message appears in the top right of the screen
confirming that the networks have been deleted.
Modifying a network:
As a cloud administrator, you can edit a network to modify its name and some
options, for example, whether the network is shared among different projects.
Procedure
1. Log in to the Administration user interface as the Cloud Administrator.
2. In the left panel, click ADMIN > System Panel > Networks. The Networks
window is displayed.
3. Click Edit Network on the right of the network that you want to modify. The
Edit Network window is displayed.
4. Make the required changes and click Save Changes. A message appears in the
top right of the screen confirming that the network has been updated.
Managing domains
Log in to the Self-service user interface as a Domain Administrator.
Click CONFIGURATION > Domain and select Domains from the menu that is
displayed, to see a list of domains that you can administer. To see more details
about a particular domain, click the domain name. You can search for a particular
instance by specifying the instance name or description in the search field. The
instance table can be sorted using any column that has the sort icon.
Domain Administrators can manage projects, groups, users, actions, offerings, and
categories. Domain Administrators can distribute the domain quotas among the
projects in their domain.
Note: Domain Administrators cannot create domains, and they cannot change the
domain quotas.
The Domain Administrator must perform the following steps to set up projects in
the domain and to grant access to cloud resources to the domain users:
1. Create a project resource.
2. Ensure that the project has access to at least one deployment availability zone.
This allows users in that project to access virtual images and deploy virtual
servers when working within this project in the logged-in domain. The
availability zones that can be assigned to the project are the ones that have
been previously assigned to the domain to which the project belongs.
3. Set the quota in the project.
4. Create and add users to the project.
These steps are detailed in Managing projects on page 275 and Managing
users on page 280.
Managing projects
As a domain administrator you can manage the level of access for each project to
IBM Cloud Orchestrator with the user interface.
You can search for a particular instance by specifying the instance name or
description in the search field. The instance table can be sorted using any column
that has the sort icon.
Creating a project:
As a domain administrator you can create a new project in a domain.
Procedure
1. Log in to the Self-service user interface as a Domain Administrator.
2. In the navigation menu, click CONFIGURATION > Domain.
3. Click Domains in the menu below the navigation menu.
4. Select the check box next to the domain you want displayed in the list.
5. Click Create Project in the Actions menu. The Create Project window is
displayed.
6. Specify the name for the project.
7. Enter a description for the project.
8. Optional: By clearing the Enabled check box, you disable the project so that it
cannot be authorized. Selecting the Enabled check box keeps the project enabled
so that you can authorize it.
9. Click OK.
Results
A new project is created.
Enabling a project:
As a domain administrator you can enable a project in a domain.
Procedure
1. Log in to the Self-service user interface as a Domain Administrator.
2. In the navigation menu, click CONFIGURATION > Domain.
3. Click Projects in the menu below the navigation menu.
4. Select the check box next to the project you want displayed in the list.
5. Click Enable Project in the Actions menu. This option will only appear if the
project is disabled.
6. Click Confirm.
Results
A window appears at the top right of the screen confirming that the project has
been enabled.
Editing a project:
As a domain administrator you can edit a project in a domain.
Procedure
1. Log in to the Self-service user interface as a Domain Administrator.
2. In the navigation menu, click CONFIGURATION > Domain.
3. Click Projects in the menu below the navigation menu.
4. Select the check box next to the project you want displayed in the list.
5. Click Edit Project in the Actions menu.
6. Specify the name of the project.
7. Optional: Enter a description and domain for the project. By clearing the
Enabled check box, you disable the project so that it cannot be authorized.
Selecting the Enabled check box keeps the project enabled so that you can
authorize it.
8. Click OK.
Results
A window appears in the top right of the screen confirming that the project has
been edited.
Disabling a project:
As a domain administrator you can disable a project in a domain.
Procedure
1. Log in to the Self-service user interface as a Domain Administrator.
2. In the navigation menu, click CONFIGURATION > Domain.
3. Click Projects in the menu below the navigation menu.
4. Select the check box next to the project you want displayed in the list.
5. Click Disable Project in the Actions menu. A window is displayed asking if
you want to confirm launching the action: Disable Project.
6. Click Confirm.
Results
A window appears in the top right confirming that the project has been disabled.
Deleting a project:
As a Domain Administrator you can delete a project in a domain.
About this task
If a project is deleted, all of its assigned resources (virtual machines, stacks,
patterns, networks, images, and so on) remain in the cloud. Only the Cloud
Administrator can manage these orphan resources. The Domain Administrator
cannot recover from this situation.
Procedure
1. Log in to the Self-service user interface as a Domain Administrator.
2. In the navigation menu, click CONFIGURATION > Domain.
3. Click Projects in the menu below the navigation menu.
4. Select the check box next to the project you want deleted.
5. Click Delete Project in the Actions menu.
6. A window appears asking if you want to delete the project. Click Confirm.
Note: Deleting the default project of a domain results in the domain quotas
becoming empty because the domain quotas are a multiplier of the default
project quotas. Refer to Setting the default domain quotas for more details on
the domain quotas.
Results
A window appears at the top right of the screen confirming that the project has
been deleted.
Creating a network for a project:
As a domain administrator you can create a network for a project in a domain.
Procedure
1. Log in to the Self-service user interface as a Domain Administrator.
2. In the navigation menu, click CONFIGURATION > Domain.
3. Click Projects in the menu below the navigation menu.
4. Select the check box next to the project you want displayed in the list.
5. Click Create Network in the Actions menu.
Note: The Create Network action in the Self-service user interface is limited to
Nova networks only. To create a Neutron network, use the Administration user
interface.
6. Select the region from the drop-down menu.
7. Use the following network parameters:
v Name: the name of the network
v Mask (CIDR): the definition of the IP range of the network, e.g. 10.10.10.0/24
for IP addresses between 10.10.10.1 and 10.10.10.254.
v VLAN/VXLAN ID: the VLAN ID that should be used to separate the network
traffic from other networks, e.g. 1001.
v Bridge interface: the interface on the cloud management servers that
should be used for the bridge, e.g. eth0. Only required in regions having
nova-network configured. Not required for regions having neutron
configured.
v Gateway: the gateway of the subnet, e.g. 10.10.10.1. By default the first IP in
the range is used.
v DNS: the primary and secondary DNS servers to be used.
8. Click OK.
Results
The network is created for and is owned by the selected project. Only members of
the project can use the network.
Deleting a network from a project:
As a domain administrator you can delete one or multiple networks from a project.
Procedure
1. Log in to the Self-service user interface as a Domain Administrator.
2. In the navigation menu, click CONFIGURATION > Domain.
3.
4.
5.
6.
7.
Results
The network is deleted from the region it has been created.
Modifying the availability zones of a project:
As a domain administrator you can grant and revoke access of availability zones to
a single project in a domain.
Procedure
1. Log in to the Self-service user interface as a Domain Administrator.
2. In the navigation menu, click CONFIGURATION > Domain.
3. Click Projects in the menu below the navigation menu.
4. Select the check box next to the project you want displayed in the list.
5. Click Modify Availability Zones in the Actions menu. The Availability Zones
of Domain and the Availability Zones of Project are listed in the following
format: Availability_Zone Region.
6. Complete one or more of the following options to modify the availability
zones of the project:
8. The quota dialog box contains values for the number of cores, the number of
instances, the amount of memory, and the number of floating IPs. Enter a value
for each.
Note: The sum of all project quotas in a domain cannot exceed the overall
quota of the domain. The Validate button checks whether that condition is met.
If the condition is not met, the quota cannot be changed. To view the overall
domain quota and the quota remaining in the domain, click Show Domain Quota.
9. Click OK.
Results
The changes you made to the quota of the project have been saved.
Modifying users in a project:
As a domain administrator you can add and remove users of a single project in a
domain.
Procedure
1. Log in to the Self-service user interface as a Domain Administrator.
2. In the navigation menu, click CONFIGURATION > Domain.
3. Click Projects in the menu below the navigation menu.
4. Select the check box for the project that you want to edit.
5. In the Actions menu, click Modify Users.
Note: The Modify Users page shows the following lists of users:
v Users in Domain: Users in the current domain who are not assigned to the
selected project.
v Users in Project: Users assigned to the current project, with roles assigned.
The user roles are also shown.
6. To assign a user to a project, select the check box beside the user, then click >>.
The selected user moves to the Users in Project list.
7. To remove a user from a project, select the check box beside the user, then click
<<. The selected user moves to the Users in Domain list.
8. To edit the role assignment of a user in the Users in Project list, click the Role
column for the user. Select one or multiple roles from the roles list.
9. Click OK.
Results
The changes you made to user roles and user assignments for a project have been
saved.
Managing users
As a domain administrator you can manage the level of access for each individual
user to IBM Cloud Orchestrator with the user interface.
You can search for a particular instance by specifying the instance name or
description in the search field. The instance table can be sorted using any column
that has the sort icon.
Creating a user:
As a domain administrator you can create a new user in a domain.
Procedure
1. Log in to the Self-service user interface as a Domain Administrator.
2. In the navigation menu, click CONFIGURATION > Domain.
3. Click Domains in the menu below the navigation menu.
4. Select the check box next to the domain you want displayed in the list.
5. Click Create User in the Actions menu. The Create User window is displayed.
6. Specify the name for the user, the default project to assign the user to, and the
user's role in that project.
7. Optional: Enter an email, password, and domain for the user. By clearing the
Enabled check box, you disable the user so that the user cannot be authorized.
Selecting the Enabled check box keeps the user enabled so that you can
authorize the user.
8. Click OK.
Results
A new user is created and appears in the User view. This action applies only to a
single domain.
Deleting a user:
As a domain administrator you can delete one or multiple users in a domain.
Procedure
1. Log in to the Self-service user interface as a Domain Administrator.
2. In the navigation menu, click CONFIGURATION > Domain.
3. Click Users in the menu below the navigation menu.
4. Select the check box next to the user you want displayed in the list.
5. Click Delete User in the Actions menu.
6. Click Confirm in the window that opens.
Results
A window appears in the top right of the screen confirming that the user has been
deleted.
Auditing
IBM Cloud Orchestrator provides a comprehensive auditing function to help you
maintain the security and integrity of your environment. This topic introduces you
to the auditing capabilities, the business value of auditing, and procedures for
working with the event log.
Capabilities overview
The auditing function is essentially a continuous logging activity; IBM Cloud
Orchestrator records information about administrative and security-related events
that occur in the product and in the cloud.
The following list displays a few examples of the events that are tracked by the
auditing function:
v configuration changes
v user authentication
v attempts to access objects that are secured by object-level access control
v digital signature validation
For each event, the collected information identifies the user who initiated the
operation, and whether it succeeded. IBM Cloud Orchestrator makes this audit
data available for download in the form of event records.
Login actions are logged separately from the other events. For information about
auditing login actions, see Auditing login on page 282.
Business value
With these capabilities you can protect your environment from both internal and
external security threats. Use the audit data to identify suspicious user activity, and
then hold those users accountable for their actions. In the case of an attempted
security attack, analysis of the audit data can help you determine if and how your
infrastructure was compromised. Based on that information, you can strategize the
most effective defensive measures.
The auditing function also helps your organization to comply with regulatory laws
such as the Health Insurance Portability and Accountability Act (HIPAA) and the
Sarbanes-Oxley (SOX) Act. These laws mandate formal practices not only for
protecting data and detecting fraud, but also for documenting your efforts to do
so. The audit data provides that evidence; with IBM Cloud Orchestrator you have
numerous options for downloading the data in a manner that suits your business
processes.
For detailed information on how to exploit the business value of audit event
records, see Audit event record attributes and usage tips on page 282.
After you download records from the log and store them in your own archives,
you must delete those same records from the log. Otherwise, when the log reaches
a pre-set capacity limit, IBM Cloud Orchestrator suspends the auditing function
until storage frees up. When consumption nears 90%, clean the audit log storage.
See Deleting audit data to free storage on page 290 for more information.
Best practice: Designate one individual with admin role to download audit data,
archive it to external storage, and then delete it from the Workload Deployer
component machine as a routine process.
Auditing login
When a user tries to log in to the Self-service user interface, the login action is
logged for auditing purpose.
If the login fails, the failure reason is also logged.
The login actions are logged in the /var/log/scoui.log file on Central Server 2,
with the other log messages related to the Self-service user interface. You can find
the messages related to the login actions by searching for the login string.
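For example, on Central Server 2, the login-related entries can be filtered with a command like the following; adjust the path if the log file has been rotated:
# Show the login-related messages from the Self-service user interface log.
grep login /var/log/scoui.log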
Example of messages related to login actions:
[2015-04-13 15:36:57,892] [qtp-380990413-64] [INFO] n3.app.handler.action.LoginHandler
Successful login: for user admin
...
[2015-04-13 15:37:38,764] [qtp-380990413-66] [INFO] n3.app.handler.action.LoginHandler
Failed login: Invalid credentials for user admin
...
[2015-04-13 15:38:00,192] [qtp-380990413-55] [INFO] n3.app.handler.action.LoginHandler
Failed login: No password provided for user admin
Audit events
With the IBM Cloud Orchestrator auditing function, you collect data about specific
types of user activity, security events, and configuration changes on the product
and in the cloud. That audit data can help you detect and analyze potential
security breaches, or other misuse of cloud resources.
Event records
IBM Cloud Orchestrator collects audit data about events in event records; one record
corresponds with each event. For descriptions of event record attributes and an
understanding of how to analyze the attribute values, see Audit event record
attributes and usage tips.
followed by attribute name-value pairs that can vary from record to record. Table 2
in Attribute name-value pairs lists the pairs that you can use in your analysis of
cloud activity.
Attribute              Definition
Product version        Version of the IBM Cloud Orchestrator product
Timestamp              Time (in UTC time zone) when the event record was generated
Resource type
Action
Resource ID
Resource name
User
Client IP address
Description

Example attribute name-value pairs:
event_authz_acl_check    /admin/users/u-0/userdata.json_RWF_true
event_authz_check
event_authz_header       [{"attributes": "{\"authorizationAttributes\": . . . } | {"attributes": "{\"authorizationAttributes\": . . . }]
event_outcome
event_request_remote     192.0.2.4_192.0.2.4_52917
event_request_url        https://workload_deployer:9444/sts/admin/registries
event_roles              [REPORT_READER]_[AUDIT_READER]_[AUDIT_WRITER]
event_subjects           [user1]
{ "attributes":
"{ "authorizationAttributes" : { "groups" : ["g-0"],
"roles" :
["11","13","14","15","16","17","1","2","3","4","5","6","7","8","9","10"] },
"ownerProcessTypeID" :"IT",
"ownerPublicKey": "IT",
"AT" : "1316453354588",
"userName" : "cbadmin",
"userID" : "u-0",
"type": "user",
"issuerProcessTypeID" : "TS",
"expirationTime" : 86400000,
"issuerPublicKey" : "TS"
}",
"signature":"IPf***A=="}
/admin/plugins/webservice/1.0.0.3/parts/webservice.scripts.tgz_WF_true
This value indicates that the user who accessed the resource
/admin/plugins/webservice/1.0.0.3/parts/webservice.scripts.tgz has write
and full permissions for that resource. Thus, when the integrity of a resource
is compromised, you can refine your list of suspected perpetrators to users
who have write and full permissions for the resource in question.
Based on these tips and illustrations, you can plan other ways to use the rich audit
data that IBM Cloud Orchestrator provides to protect your environment and,
consequently, your business data.
File names
You can specify the name of the .zip file to which your records are written for the
download operation.
REST API
XXX.zip, where XXX is a name that you specify by parameter. This .zip file contains:
v audit-events.csv - This file contains the audit event records that you specified, in CSV
format.
v audit-events-signed-events-checksum - Contains the digital signature that verifies both
the integrity and authenticity of your audit data.
v audit-events-record-IDs
v audit-events-signed-record-IDs
Script                         Location
create_basicauth_header.py     ...\deployer.cli-XXX\deployer.cli\lib\XXX\deployer, where XXX is the version number of the CLI
cscurl.sh                      ...\deployer.cli-XXX\deployer.cli\lib\XXX\deployer, where XXX is the version number of the CLI
Steps 1 - 5 are preparatory steps for using these scripts. Steps 6 - 7 describe how to
use cscurl.sh to download audit data in a .zip file, and then unzip that file. Step 8
describes how to use auditFetch.sh. The auditFetch.sh script invokes both of the
other scripts to automate the download and unzip operations. Consider it as a
model for code that you can run with a job scheduler to regularly download audit
data.
Procedure
1. Place the scripts in an appropriate working directory.
2. Download your IBM Cloud Orchestrator user keys from the product and save
them in the same directory that contains the scripts.
a. Open a browser window and go to https://Workload_Deployer_server/
resources/userKeys/.
b. Type your user name and password in the authentication fields.
c. Select the directory and provide a file name for storing your keys.
d. Save the keys as a .json file.
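If you prefer to script this step, and assuming that the endpoint accepts HTTP basic authentication in the same way as the browser prompt (this is an assumption; the browser procedure above is the documented method), the keys could be downloaded with curl, for example:
# Download the user keys and save them next to the scripts.
# Replace Workload_Deployer_server, user1, and the password with your values.
curl -k -u user1:password https://Workload_Deployer_server/resources/userKeys/ -o userkeys.user1.json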
3. Optional: For added security in a production environment, you can use the
IBM Cloud Orchestrator root certificate to authenticate the scripts to the REST
API. Follow these steps to download the certificate and save it in the same
directory that contains the scripts and your user keys:
a. Open a browser window and go to https://Workload_Deployer_server/
resources/rootcacertificate/.
b. Select the directory and provide a file name for storing the root certificate.
Note that the default file name is cert.pem.
c. Save the root certificate.
4. Optional (but necessary if you want to use the root certificate for
authentication): In the /etc/hosts file of your workstation, bind the IP address
of your IBM Cloud Orchestrator to the name that is used in the root certificate,
IBMWorkloadDeployer.
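For example, the entry in /etc/hosts might look like the following, where 192.0.2.4 is a placeholder for the IBM Cloud Orchestrator IP address:
# Bind the IBM Cloud Orchestrator IP address to the name used in the root certificate.
192.0.2.4    IBMWorkloadDeployer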
5. Construct the URL for the download request that the cscurl.sh script sends to
the REST API; you must provide this URL as a parameter to run cscurl.sh in
the next step.
You can choose between two options for downloading your audit data. The
more basic option is to specify a maximum number of records to download.
Alternatively, you can specify both a maximum number of records and the time
frame in which the product logged those records. For either option, the URL
must include the location of the REST API code that downloads the data and
the resource name of the option that you choose. Use the following models for
your URL:
v To simply specify a maximum number of records in the request, construct a
URL for the events resource and use the size parameter:
https://Workload_Deployer:9444/audit/archiver/events?size=X
For the X variable, substitute the number of records that you want to
download. You can request up to 20,000 records. If you specify a greater
number, the product automatically resets your request to 20,000 records, and
ultimately writes that number of records to the .zip file.
v To add a time frame to your request, construct a URL for the filteredEvents
resource and specify the start and end times as Epoch timestamps. (Use the
time conversion tool of your choice to convert your desired date and times
into Epoch timestamps.)
https://Workload_Deployer:9444/audit/archiver/filteredEvents?size=X
&startTime=long_value&endTime=long_value
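For example, a request for up to 500 records that were logged within a given time frame might look like the following; the timestamps are illustrative epoch values in milliseconds, which is an assumption, so verify the expected unit in your environment:
https://Workload_Deployer:9444/audit/archiver/filteredEvents?size=500&startTime=1428883200000&endTime=1428969600000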
Note all of the variables that represent parameter values in the statement:
v user1 = the user name
v user1 = the user password
v userkeys.user1.json = the file that contains the user keys
v root_cert.pem = the name of the file that contains the IBM Cloud Orchestrator
root certificate
v X = the number of records to be downloaded
v ArchiveFetchTempFile = the name of the .zip file to which the audit data is
written
Also, be aware of the following usage notes for running cscurl.sh:
v To use the script to send a request for the filteredEvents resource (to
retrieve event records that the product logged within a specific time frame),
encapsulate the URL in single quotes rather than double quotes.
v Running cscurl.sh without parameters triggers display of its help message.
v To guard against data loss, you must specify a file for the audit data.
Otherwise, the script returns it as simple command-line output.
7. Unzip the archive file that is returned from the REST API. (In response to the
previous example of running the script, the REST API would return
ArchiveFetchTempFile.zip.) Consequently you now have four files, as depicted
in the following list.
v audit-events.csv - Contains your audit event records in CSV format.
v audit-events-signed-events-checksum - Contains the digital signature that
verifies both the integrity and authenticity of your audit data. Archive this
file along with your event records.
v audit-events-record-IDs
v audit-events-signed-record-IDs
Note: The last two files contain data that you must send back to the REST
API to delete the event records from the Workload Deployer machine (to free
storage resources). See the "What to do next" section of this article for more
information about deleting audit data.
At this point the retrieval process is complete. If you followed Steps 1 - 7, you
successfully used the individual scripts and the REST API to write your audit
data to a .zip file and download it. Step 8 describes auditFetch.sh, which
automates the entire process; the script provides an example of code that you
can run with a job scheduler to regularly download audit data.
8. To run auditFetch.sh, use the following statement as a model:
./auditFetch.sh username=auditor password=auditor
keyfile=userkeys.auditor.json IWD=IP address size=X > ArchiveFetchTempFile
What to do next
Because IBM Cloud Orchestrator does not automatically delete audit data after you
download it, you must run the auditDelete.sh script to delete the data from the
Workload Deployer machine and free storage resources. You can use this script
along with your customization of auditFetch.sh as part of a regularly scheduled
job to download, archive, and then delete audit data. See the article Deleting
audit data to free storage for information about auditDelete.sh.
Procedure
1. Locate the auditDelete.sh script in the samples library of the IBM Cloud
Orchestrator command-line interface (CLI). The directory path is
...\deployer.cli-XXX\deployer.cli\samples, where XXX is the version
number of the CLI. Copy the script to another directory if you wish.
2. Download your IBM Cloud Orchestrator user keys from the Workload
Deployer machine and save them in the same directory that contains the
auditDelete.sh script.
a. Open a browser window and go to
https://your_workload_deployer_server/resources/userKeys/.
b. Type your user name and password in the authentication fields.
c. Select a directory and provide a file name for storing your keys.
d. Save the keys as a .json file.
3. To run the script, use the following statement as a model:
./auditDelete.sh username=auditor password=auditorpswd
keyfile=userkeys.auditor.json IWD=your_workload_deployer_server
map=record_IDs hash=signed_hash
Note all of the variables that represent parameter values in the statement:
v auditor = Your user name
v auditorpswd = Your user password
v userkeys.auditor.json = The file that contains your user keys
v your_workload_deployer_server = IP address of the machine where the
Workload Deployer component has been installed.
v record_IDs and signed_hash - Both variables, which represent values for the
parameters map and hash, identify the record set to be deleted from the
Workload Deployer machine. These parameter values name two files that
were products of a previous download operation, and were included in your
audit-events.zip file. Define the parameters as follows:
For the map parameter, specify the name of the file that lists the IDs of the
records that you downloaded. It was included in your downloaded .zip
file as audit-events-record-IDs.
For the hash parameter, specify the name of the file that contains the
signed hash of the record IDs. It was included in your downloaded .zip
file as audit-events-signed-record-IDs.
However, you do not need to specify the map and hash parameters if all of
the following conditions are true:
You are deleting records that you just downloaded from the Workload
Deployer machine.
You have not changed the name of either audit-events-record-IDs or
audit-events-signed-record-IDs.
You have placed both files in the same directory as auditDelete.sh.
Results
You have now deleted the previously downloaded audit event records from the
Workload Deployer machine.
What to do next
Review the article Audit event record attributes and usage tips on page 282 for
an understanding of how you can best exploit IBM Cloud Orchestrator event
records in your auditing analyses.
Password management
There are two ways of managing your password in IBM Cloud Orchestrator.
Procedure
1. Log in to the Administration user interface as a Cloud Administrator.
2. In the upper-right corner of the window, you can see the name of the current
project, the current region, and the current user. Click the name of the current
user, and click Settings to display the User Settings page.
3. In the navigation pane on the left, click SETTINGS > Change Password.
4. In the New password and Confirm new password fields, enter the new
password.
5. Click Change.
Results
The password is changed successfully.
Resetting a password
You can reset a forgotten password using the keystone command line interface
tool.
Procedure
1. Log on to Central Server 1 as root.
2. Source the keystonerc file by running the following command: source
keystonerc.
3. Use the keystone user-get command to see the unique id related to the user
whose password you want to change.
4. Once you have the id, use the keystone user-password-update command to
reset the password. The format of the command is as follows: keystone
user-password-update [--pass <password>] <userId>. The password parameter
is the password that you want to set. The userId is the unique user ID
retrieved from the user-get command.
Results
The password has been reset.
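Putting the steps together, a reset for a user named admin might look like the following; the user name and the new password are examples only:
# Load the Keystone administrator credentials.
source keystonerc
# Look up the unique ID of the user whose password must be reset.
keystone user-get admin
# Reset the password, using the id value returned by the previous command.
keystone user-password-update --pass NewPassw0rd <userId>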
Orchestration workflows
An orchestration workflow, which is based on Business Process Manager Business
Process Definition, defines a logical flow of activities or tasks from a Start event to
an End event to accomplish a specific service.
You can use the following types of orchestration workflows:
v Managing offerings on page 325: These workflows are used to define the
offerings that cloud users can select in the Self-Service Catalog. They include
user interface and the service request flow.
v Managing actions on page 327: These workflows are used to define IBM
Cloud Orchestrator actions. They include user interface and the action flow.
v Orchestration actions for Virtual System Patterns (Classic) on page 294:
Orchestration actions are provided for backward-compatibility with IBM
SmartCloud Orchestrator v2.3. These workflows are used to define additional
Orchestration actions for Virtual System Patterns (classic).
The service can be started either by events triggered by IBM Cloud Orchestrator
management actions or by user actions in the IBM Cloud Orchestrator user
interface. The activities that comprise the service can be scripts (JavaScript),
Java implementations, web services or REST calls, human tasks, and so on. They
can be executed either in sequence or in parallel, with interleaving decision points.
Each activity within an orchestration workflow has access to the cloud
environment data in the form of the OperationContext object, which is passed as
input parameter to each orchestration workflow. The operation context is an
umbrella object that contains all data that is related to the execution of an
operation. The operation context object must be defined as an input parameter
variable for all business processes that are started as an extension for an IBM
Cloud Orchestrator operation. Human services must define the operation context
ID as an input parameter and as a first activity, must retrieve the operation context
object with its ID. The operation context object contains metadata information, for
example:
v User
v Project
v Event topic
v Status
It also contains information about the instance in which the orchestration workflow
is executed, for example:
v Type
v Status
v Virtual system ID
v Virtual system pattern ID
v Virtual application ID
v Information about the virtual machines that belong to the instance - CPU,
memory, disk space, and so on.
For more technical details about the operation context object, see the IBM Cloud
Orchestrator Content Development Guide.
Workflows can throw error events or post-status messages, which are then shown
in the IBM Cloud Orchestrator user interface. For more information about errors,
see the IBM Cloud Orchestrator Content Development Guide.
An orchestration workflow can also have additional user interface panels in order
to collect data that is needed as input. These panels are also implemented based on
workflow technology, and they are called human services in Business Process
Manager.
Self-service offerings
Self-service offerings are typical administrative actions that are used to automate
the configuration process.
Offerings, like actions, are custom extensions to IBM Cloud Orchestrator. You can
develop these extensions by using Business Process Manager Process Designer, and
then add them as offerings in the CONFIGURATION tab in the IBM Cloud
Orchestrator Self-service user interface. An offering can consist of:
v A Business Process Manager business process defining the activities to be
performed by the extension.
v User interface panels that collect additional data, implemented by a Business
Process Manager human service (optional).
Users access offerings in the SELF-SERVICE CATALOG tab, where they are
grouped into categories.
Related tasks:
Designing self-service on page 324
A Service Designer can manage the artifacts in the Self-Service Catalog, and use
them to customize the IBM Cloud Orchestrator environment. A Service Designer is
a user with the catalogeditor role.
User actions
User actions are custom actions that can be run on virtual system instances.
User actions implement additional lifecycle management actions which extend the
set of predefined actions.
To view the list of all available user actions click PATTERNS > Pattern Design >
Orchestration Actions. Within this view you can also add new actions and modify
or delete existing actions. To search for an action, enter the action name or
description in the search field.
Event-triggered actions
Event-triggered actions are Business Process Manager business processes that are
triggered by a specified event during a predefined management action for a classic
virtual system.
To view the list of all available event-triggered actions, click PATTERNS > Pattern
Design > Orchestration Actions. Within this view you can also add new actions
and modify or delete existing actions. To search for an action, enter the action name
or description in the search field. You can customize event-triggered actions to run
during events that are called plug points. The plug points are categorized based on
actions during which they occur:
v Deployment or undeployment of a pattern:
Before provisioning
After provisioning
Before start of virtual system instance
Before virtual system instance deletion
After virtual system instance deletion
v Server actions, like Start, Stop, Delete, and Modify Server Resources:
Before the instance status changes to start
Before the instance status changes to stop
After the instance status changes to start
After the instance status changes to stop
Before the server status changes to start
Note: A user interface must not be defined if the related pattern is handled via a
self-service offering.
Pre-built samples
IBM Cloud Orchestrator includes a set of toolkits that contain templates that you
can reuse and adapt to your needs.
The SCOrchestrator_toolkit provides the essential building blocks, which are
needed to build Business Process Manager business processes and human tasks,
which are then used as extensions for IBM Cloud Orchestrator.
Note: This toolkit and all the other provided toolkits can be found in Developing
IBM Cloud Orchestrator content.
You can also search for additional samples in the IBM Cloud Orchestrator Catalog
at https://www-304.ibm.com/software/brandcatalog/ismlibrary/
cloudorchestratorcatalog#. IBM Cloud Orchestrator Catalog is a platform and
one-stop-shop for IBM customers, partners, and employees, where developers,
partners, and IBM Service teams continuously share content among one another.
Advanced programming
To create more sophisticated automation, involving richer programming languages,
see Developing IBM Cloud Orchestrator content.
For more information about developing process applications and toolkits, see
Developing IBM Cloud Orchestrator content.
Procedure
1. In a web browser, log on to the Business Process Manager user interface as user
admin and with the password set for admin during installation.
2. Install Process Designer on a Windows machine on which you design the
workflows:
a. On the right-side panel of Process Center, click Download Process
Designer. This is a link to the Process Designer installation package.
b. Install the package as described in Installing IBM Process Designer in the
Business Process Manager information center.
3. Click Start > IBM Process Designer Edition > Process Designer and log on as
user admin with password passw0rd.
Results
Process Designer stand-alone application opens and a list of process applications is
displayed in the Process Apps tab. When you click the process application name,
you can view its details, such as snapshots, history, you can edit some details such
as name, or who can access it, but you are not able to configure the process
application in this view. To configure a process application, click Open in Designer
next to the item name.
You can switch between Designer, Inspector, and Optimizer tabs.
v To plan and design processes, use the Designer view.
v To test and debug processes, use the Inspector view.
v To analyze and optimize processes, use the Optimizer view.
To return to Process Center view, click Process Center in the upper right corner of
the screen. In the Process Center view, click Open in Designer to get back to the
Designer view.
Procedure
1.
2.
3.
4.
5. Select your user in the Results box and click Add selected.
Procedure
1. Open the Process Designer and log on with administrative credentials. The
Process Center panel is displayed. In this panel, you can review and manage
process applications and toolkits that are known to the process server.
2. Create a process application:
a. Click the Process Apps tab and in the panel on the right side, click Create
New Process App.
b. In the window that is displayed, provide a name and a unique acronym for
your new process application. Optionally, provide a description.
Remember: After the process application is created, do not change its
acronym, because it is used to reference the processes in self-service
offerings.
c. Click Create.
Tip: Steps a to c can be performed in both Process Designer and Process
Center, with the same result, but only in the Process Designer view can you
configure the process application after it is created.
3. Click Open in Designer for your newly created process application. The
Designer view is opened.
4. In the Designer view, click one of the categories from the pane on the left. A list
of artifacts that are relevant for this category is displayed. In this pane, you can
also review the existing artifacts and add new artifacts to toolkits or process
applications.
Note: You can click All in the newly created process application to see that it
initially contains only one artifact.
Procedure
1. In the Process Designer view search for SCOrchestrator_Toolkit.
2. On the right side of the toolkit name, click Open in Designer. Details for the
toolkit are displayed.
3. From the list of available items on the navigation pane on the left, in the User
Interface section, right-click the Template_HumanService. That is the user
interface that you want to copy into your process application. The contextual
menu for the selected item opens.
4. In the contextual menu, click Copy Item to > Other Process App. Select your
process application from the list. The process is copied in the background. No
confirmation is provided.
Repeat steps 3 and 4 for all the items that you want to copy.
5. To return to the list of process applications and toolkits, click Process Center in
the upper right corner of the screen.
Results
When you open your process application, the Template_HumanService is now
available on the list.
Authorization required
admin
tw_admins, tw_authors
admin
Creating a process
Create a process using IBM Process Designer and incorporate a new activity into it.
Procedure
1. In the Designer view, click the newly created process application and then
click the plus sign next to Processes to open the Create new menu.
2. From the menu, select Business Process Definition.
3. Provide a name for the new business process definition, for example Hello
World. Click Finish. The new process definition opens in the main canvas.
Initially, the diagram view contains two lanes:
v The System lane that is the default lane for system activities.
v The Participant lane that is the default lane for user services.
The start event and the end event are added automatically.
4. To add a user activity to the process, select Activity from the palette to the
right. Add the activity to the Participant lane. An activity represents one step
in the process. Properties of an activity are shown in the bottom panel.
5. In the Properties panel at the bottom of the screen, you can set the name of
the activity, for example Say Hello.
6. To make the activity part of a flow, connect it with the start and end event.
Select Sequence Flow from the palette.
7. With the Sequence Flow tool, click the connection points of the elements.
First, connect the start event with the activity and then connect the activity
with the end event.
8. To create an implementation for this process, click the plus sign next to User
Interface in the Designer view.
9. From the menu that opens, select Human Service and name it Say Hello. The
main canvas opens. You can now use the Coach element to create a simple
user interface dialog that displays the Hello World string.
Business Process Manager human service. When you select the task from
the list, the Business Process Manager coach opens. Provide any required
parameters and click Submit.
Procedure
1. In Business Process Manager, expose the process to make it available in the
IBM Cloud Orchestrator user interface:
a. Open IBM Process Designer.
b. Select your process and switch to the Overview tab.
c. In the Exposing section, click Select in the Expose to start row.
d. Select the All Users participant group or any other group that you want to
expose the process to, and save the setting.
Tip: A similar procedure must be performed to make a user interface (human
service) visible in IBM Cloud Orchestrator:
a. In Process Designer, select the human service and open the Overview tab.
b. In the Exposing section, click Select in the Expose to row.
c. Select the All Users participant group or any other group that you want to
expose the process to, and save the setting.
d. In the Expose as row, click Select.
e. Select URL and save the setting.
2. In IBM Cloud Orchestrator, create an offering that is based on the process, and
a category for it. For information about creating self-service categories and
offerings, see Creating a category on page 327 and Creating an offering on
page 326.
Results
The user can now access the offering in the Self-Service Catalog, and can request
the offering.
Procedure
1. In Business Process Manager, expose the process to make it available in the
IBM Cloud Orchestrator user interface:
a. Open IBM Process Designer.
b. Select the process and switch to the Overview tab.
c. In the Exposing section, click Select in the Expose to start row.
d. Select the All Users participant group or any other group that you want to
expose the process to, and save the setting.
The process is now visible in IBM Cloud Orchestrator.
Tip: A similar procedure must be performed to make a user interface (human
service) visible in IBM Cloud Orchestrator:
a. In Process Designer, select the human service and open the Overview tab.
b. In the Exposing section, click Select in the Expose to row.
c. Select the All Users participant group or any other group that you want to
expose the process to, and save the setting.
d. In the Expose as row, click Select.
e. Select URL and save the setting.
2. In IBM Cloud Orchestrator, create an orchestration action based on the process:
a. Log on to IBM Cloud Orchestrator. You must be assigned the catalogeditor
role or the admin role.
b. Click CONFIGURATION > Actions Registry.
c. In the actions menu, click Create Action. A new dialog opens.
d. Provide a name and, optionally, a description for the action.
e. From the Action type list, select User.
f. Select one or more virtual system patterns to which the action is applied.
g. Select the process that you exposed in step 1.
h. If the selected process requires user interaction, specify a user interface by
selecting the related Human Service.
i. Specify sequence priority if you want a specific run order to be applied. For
actions that have the same event and priority defined, the order is
unspecified.
j. Click Configure Access Control to create the action.
k. To allow other users to access the new action, select the action and add the
project to which the users belong to the Access granted to field.
Results
The action is now configured. If it is an event-triggered action, it is started
automatically for selected virtual system patterns at the time the selected event
takes place. You can now start the user action in the Virtual System Instances
(Classic) view that you can open by clicking PATTERNS > Instances > Virtual
System Instances (Classic). If user interaction is required for the configured action
to complete, the user receives a new assignment in the INBOX tab.
Procedure
1. Log on to the Business Process Manager WebSphere Application Server console
as an administrator user:
https://$central-server-2:9043/ibm/console/logon.jsp
Results
IBM Cloud Orchestrator is now configured as a development system.
Procedure
1. Log on to the Business Process Manager WebSphere Application Server console
as an administrator user:
https://$central-server-2:9043/ibm/console/logon.jsp
Results
IBM Cloud Orchestrator is now configured as a production system.
AA
The IBM Cloud Orchestrator release that is prerequisite for the toolkit or
process application, for example, 24 for IBM Cloud Orchestrator V2.4.
BB
The version counter of the toolkit, for example, 00 for the first release and 01 for the second release.
YYYYMMDD
The date the snapshot was created
v When updating an existing process application or toolkit, do not change the
chosen acronym because it is used to reference the processes in self-service
offerings.
Using self-service
Use the IBM Cloud Orchestrator Self-service user interface to request resources,
monitor the status of your requests, and do additional tasks related to resources.
Inbox
The Inbox area provides an overview of the inbox statistics.
The section header displays the number of each of the following types of tasks:
v New today
v To-do (tasks that have not yet been claimed)
v Overdue
The table displays the following information about the most recent tasks:
v Latest Items
v Requested by
v Priority
v If a task is overdue, the overdue icon is displayed.
Click a task type in the section header, or click an item in the table, to open the
INBOX tab.
Request History
The Request History area provides an overview of requests statistics.
The section header displays the number of each of the following types of requests:
v New today
v In progress
v Failed
The table displays the following information about the most recent requests:
v Latest Requests
v Submitted On
v Status
Click a request type in the section header, or click an item in the table, to open the
REQUEST HISTORY tab. If you click a request type, only requests of that type are
displayed.
% Usage
The usage percentage is color-coded as follows:
Green: 0-50
Yellow: 51-75
Red: 76-100
VM Status
The VM Status area provides information about the deployed virtual machines for
the current project. The total number of deployed virtual machines in your project
across all regions is displayed, with a breakdown based on status. The virtual
machine status is color-coded as follows:
Table 40.
Color      Status
Green      Active
Blue       Paused
Red        Error
Yellow     Shutoff
Click a status type to open the ASSIGNED RESOURCES tab. The tab contents are
filtered to display only virtual machines with the selected status.
Procedure
1. Log on to the IBM Cloud Orchestrator Self-service user interface and click the
SELF-SERVICE CATALOG tab.
2. Open the category of your choice to view the offerings. You can also use the
Search field to look up a specific offering by name.
3. Select an offering from the list. A window with request details opens.
4. Specify any required parameters for the request.
5. Click OK to submit the request.
Results
The request is submitted. A message is displayed at the top of the page, reporting
the result. You can also check the request status in the REQUEST HISTORY tab.
Procedure
1. Click the REQUEST HISTORY tab. All the requests that you have access to are
displayed on the left side of the page.
You can search for a specific request and sort the requests in the view.
2. Click any request to view its details.
Managing resources
Use the ASSIGNED RESOURCES tab to manage your assigned resources.
The columns for each instance type table can vary. From the table view you can
launch actions on a single instance or on multiple instances. To view detailed
information about an instance, click anywhere on the instance row. The details screen contains actions that pertain only to the selected instance.
Resource types
IBM Cloud Orchestrator supports several types of resources, including domains,
virtual machines, and volumes. Resources types are also known as instance types
(for example, in the Core Services REST API).
IBM Cloud Orchestrator provides the following resource types:
Action
An action is an instance that can be applied to other instances. An action
always includes a Business Process Manager process that can be run on the
associated instance.
Category
A category is a container for self-service offerings that are displayed in the
Self-Service Catalog. You can organize your offerings inside the catalog.
Domain
A domain is the highest entity in the identity model. It is a container and
namespace for projects and users of a customer.
Heat
An instance of type heat represents a deployed OpenStack Heat stack.
Offering
An offering is an IBM Cloud Orchestrator process made available in the
Self-Service Catalog. IBM Cloud Orchestrator provides a set of offerings
out of the box. However, you can create your own offerings with IBM
Process Designer.
Openstackvms
An instance of type openstackvms represents a single OpenStack virtual
server.
Project
A project is a container that owns resources such as virtual machines,
stacks, and images.
User
A user represents the account of a person. You can log in to IBM Cloud
Orchestrator with a user account. A user must always be a member of at least one project.
Volumes
The volumes instance type is the disk space resource that can be attached
to a virtual server.
No human service
If the action you select does not require a human service, a
confirmation dialog box is displayed. Select Continue to execute the
action or Cancel to close the dialog and return to the main view.
Note: When an action is executed, a message is displayed at the top of the
page, reporting the result. You can also check the request status in the
REQUEST HISTORY tab.
Related concepts:
Managing actions on page 327
A Service Designer can manage actions and their access control list in the Actions
Registry.
Removing from the list of managed servers in PowerVC:
PowerVC treats base instances that are used for capturing images no differently from other servers on the system, so a base server might be deleted accidentally by an administrator. To avoid this, it is recommended that you remove a server from PowerVC after it is captured as an image. Removing a server does not affect the server itself; it is simply no longer managed by PowerVC and therefore cannot be deleted inadvertently.
About this task
Removing a server does not affect the server or its associated disk. If needed, the
server can be managed through the servers panel.
Procedure
1. Click on the Hosts icon in the side panel on the left of the screen.
2. Click on a specific Host. A panel appears displaying a list of managed servers
on that host.
3. Select the server you want to remove and click Remove in the menu bar.
4. A window appears asking you to confirm removing the virtual machine you
have selected from PowerVC management. Click OK.
Results
The server has been removed and can no longer be managed by PowerVC.
What to do next
Proceed to Managing virtual machine instances.
Managing virtual machine instances:
Virtual machine instances represent the servers (virtual machines) that are running
in the OpenStack backend of IBM Cloud Orchestrator.
IBM Cloud Orchestrator provides a built-in instance type called OpenStack virtual machines, which provides the functionality to manage deployed virtual machines.
To
1.
2.
3.
4. From the region list, select a region. The table shows only the virtual machines
in the specified region.
From this view, you can perform the following actions:
Starting one or more virtual machines
1. In the table, select one or more virtual machines that have the SHUTOFF
status.
2. In the Actions menu to the left of the table, click Start.
Note: This action is available only if all of the selected virtual machines
have the SHUTOFF status.
Stopping one or more virtual machines
1. In the table, select one or more virtual machines that have the ACTIVE
status.
2. In the Actions menu to the left of the table, click Stop.
Note: This action is available only if all of the selected virtual machines
have the ACTIVE status.
Note: If the virtual machine was deployed with a virtual system
pattern and you stop it from this panel, the virtual machine is
automatically restarted by the Workload Deployer component. To stop
a virtual machine deployed with a virtual system pattern, perform the
action from the PATTERNS > Instances > Virtual System Instances
panel.
Deleting one or more virtual machines
1. In the table, select one or more virtual machines that have the ACTIVE
status.
2. In the Actions menu to the left of the table, click Delete.
Note: This action is available only if all of the selected virtual machines
have the ACTIVE status.
Resizing a virtual machine
1. In the table, select a single virtual machine.
2. In the Actions menu to the left of the table, click Resize to change the
flavor of an existing virtual machine. You can see the current flavor of
the virtual machine, and select the desired flavor.
Tip: After a resize action to increase the disk size of the virtual
machine successfully completes, the disk is increased from a hypervisor
point of view. If you log on to the virtual machine and the file system
does not reflect the new disk size, you must rescan or reboot to reflect
the changed disk size. This action depends on the operating system and
disk type, as follows:
v Microsoft Windows: For information about how to resize the file
system without a reboot, see the Microsoft TechNet article Update
disk information.
v Linux: For information about how to adapt a file system after you
increase the virtual disk, see the VMware Knowledge Base article
Increasing the size of a disk partition (1004071).
Executing a script
1. In the table, select one or more virtual machines that have the ACTIVE
status and also have a key pair defined.
2. In the Actions menu to the left of the table, click Execute Script.
3. From the Select network list, select the network interface to be used for
the script execution. Click OK.
4. Specify the following script parameters:
v Script Repository path.
v The Script Repository SubFolder specifies a subfolder under the
Script Repository where the script is located. If blank, the Script
Repository path is used.
v Script Name.
v SSH User.
v Destination Directory.
v Working Directory.
v Command Line.
Note: The Execute Script action is implemented by using the SCOrchestrator_Scripting_Utilities toolkit.
Note: This action is available only if the virtual machine is accessed by using the SSH key that is specified during deployment.
5. Click OK.
Note: When deploying instances to a VMware region, the name specified at
deployment time is propagated from the Self-service user interface to the
OpenStack component. The instance is created in the VMware hypervisor using the
corresponding OpenStack UUID. To match an instance in OpenStack with the
actual instance deployed on VMware, complete the following steps:
1. On the Region Server where the deployment occurred, run the nova list
command and identify the instance UUID in the command output.
2. In the vCenter client, in the search field, type the instance UUID and press
Enter.
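As a minimal sketch of these two steps, run on the Region Server (the instance name is illustrative):
source /root/keystonerc
nova list | grep my-vmware-instance
The UUID in the first column of the matching row is the value to enter in the vCenter search field.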
Related tasks:
7. Specify the timeout value for the deployment. If the stack is not deployed
within the specified time, an error message is displayed.
8. Optional: If you want to roll back the Heat instance if the stack fails to deploy,
select the Rollback on failure check box.
9. If the template contains parameter definitions, each parameter name,
description, and value is listed in the Parameters table.
For each parameter, specify the parameter value:
v Default parameter values might be provided in the parameter definition in
the template.
v If a parameter description is prefixed with the name of a supported lookup
annotation, you can select the parameter value from a list in the Select
Value column.
v Otherwise, you must type the parameter value in a field in the Enter Value
column.
10. Optional: To modify the Heat stack resources, click Stack Details:
a. Select the resource that you want to modify.
b. To view details of the volumes that are attached to the selected resource,
click Volumes. To attach a volume to the selected resource, click Add
Volume, specify the volume size and mount point, and click OK.
c. To view details of the networks that are attached to the selected resource,
click Network Interfaces. To attach a network to the selected resource,
click Add Network Interface, specify the network name and fixed IP
address, and click OK.
d. To return to the Launch Heat Template page, click OK.
11. Click OK. A REST call is posted to the OpenStack Heat engine, and the Heat
template is deployed.
12. Monitor the status of your deployment request, as described in Viewing the
status of your requests on page 311.
Tip: If a problem occurs while you are deploying a Heat template, check the
following log files for detailed information:
v In the Business Process Manager server:
/opt/ibm/BPM/v8.5/profiles/Node1Profile/logs/SingleClusterMember1/SystemOut.log
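For example, to follow this log while the deployment runs (same path as above):
tail -f /opt/ibm/BPM/v8.5/profiles/Node1Profile/logs/SingleClusterMember1/SystemOut.log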
What to do next
You can manage the deployed Heat stack. For information, see Managing Heat
stacks on page 321.
Heat template examples:
A Heat template is a valid Heat Orchestration Template (HOT), as defined in the
OpenStack HOT specification.
For detailed information about the Heat Orchestration Templates, see the
OpenStack Template Guide at http://docs.openstack.org/developer/heat/
template_guide/. In the guide, you can find the following information:
v The introduction and some basic examples at http://docs.openstack.org/
developer/heat/template_guide/hot_guide.html
Example 2
The following example is a Heat template to deploy a single virtual system with
parameters and it is therefore reusable for other configurations:
heat_template_version: 2013-05-23
description: Simple template to deploy a single compute instance with parameters
parameters:
  key_name:
    type: string
    label: Key Name
    description: Name of key-pair to be used for compute instance
  image_id:
    type: string
    label: Image ID
    description: Image to be used for compute instance
  instance_type:
    type: string
    label: Instance Type
    description: Type of instance (flavor) to be used
resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      key_name: { get_param: key_name }
      image: { get_param: image_id }
      flavor: { get_param: instance_type }
Example 3
The following example is a simple Heat template to deploy a stack with two
virtual machine instances by using lookup annotations for parameters:
heat_template_version: 2013-05-23
description: Simple template to deploy a stack with two virtual machine instances
parameters:
  image_name_1:
    type: string
    label: Image Name
    description: SCOIMAGE Specify an image name for instance1
    default: cirros-0.3.1-x86_64
  image_name_2:
    type: string
    label: Image Name
    description: SCOIMAGE Specify an image name for instance2
    default: cirros-0.3.1-x86_64
  network_id:
    type: string
    label: Network ID
    description: SCONETWORK Network to be used for the compute instance
resources:
  my_instance1:
    type: OS::Nova::Server
    properties:
      image: { get_param: image_name_1 }
      flavor: m1.small
      networks:
        - network: { get_param: network_id }
  my_instance2:
    type: OS::Nova::Server
    properties:
      image: { get_param: image_name_2 }
      flavor: m1.tiny
      networks:
        - network: { get_param: network_id }
Example 4
The following example is a simple Heat template to set the admin password for a
virtual machine by using the user_data section:
heat_template_version: 2013-05-23
description: Simple template to set the admin password for a virtual machine
parameters:
  key_name:
    type: string
    label: Key Name
    description: SCOKEY Name of the key pair to be used for the compute instance
  image_name:
    type: string
    label: Image Name
    description: SCOIMAGE Name of the image to be used for the compute instance
  password:
    type: string
    label: password
    description: admin password
    hidden: true
resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      key_name: { get_param: key_name }
      admin_user: sampleuser
      image: { get_param: image_name }
      flavor: m1.small
      user_data:
        str_replace:
          template: |
            #!/bin/bash
            echo "Setting password to " $password
            echo $password | passwd --stdin sampleuser
          params:
            $password: { get_param: password }
Related information:
OpenStack Heat Orchestration Template (HOT) Specification
OpenStack Heat Orchestration Template (HOT) Guide
OpenStack Building JEOS images for use with Heat
Managing Heat stacks:
You can use the Self-service user interface to manage deployed Heat stacks.
Procedure
1. Log in to the Self-service user interface as an End User.
2. Click ASSIGNED RESOURCES > Stacks.
The page shows a list of deployed Heat stacks, with an Actions menu to the
left of the list.
If you select one or more Heat stacks in the list, the Actions menu is updated
to show only the actions that you can apply to the selected Heat stacks.
3. To show more details about a Heat stack, click the Heat stack name in the
instance list.
The Heat Stack Details page is displayed. The details page also displays a list
of the virtual machine instances that are associated with the Heat stack. The
Actions menu is updated to show only the actions that you can apply to the
selected Heat stack.
Related tasks:
Applying an action to a resource on page 312
Use the ASSIGNED RESOURCES tab to select an action for a particular instance,
for example, to start or stop a virtual machine.
Results
A message appears indicating that the key pairs you selected are now unregistered
and no longer available to be selected during provisioning of a virtual machine.
Procedure
1. Click the ACTION LOG tab. All the actions that you have access to are
displayed on the left side of the page.
You can search for a specific action and sort the actions in the view.
2. Click any action to view its details.
Procedure
1. Log in to the Self-service user interface and click the INBOX tab. A list of
assignments that require user interaction is displayed. There are the following
types of assignments:
v Approval request
v General task
You can click the assignment icon to see details about it.
2. The assignment can have one of the following statuses:
v If the assignment has not yet been claimed by any user, the Claim button is displayed. Click Claim to take ownership of the assignment.
v If you already claimed the assignment, the Reassign Back button is
displayed. Click Reassign Back to release the assignment and allow another
user to claim it.
3. To complete an assignment that you claimed, perform the following steps:
a. Click on the assignment icon to view the assignment details.
b. If you want to complete a general task, enter any information required and
click Submit.
c. If you want to complete an approval request, click Accept or Reject. You
can optionally enter a reason.
A completion message is displayed and the assignment is deleted from the
INBOX tab.
Designing self-service
A Service Designer can manage the artifacts in the Self-Service Catalog, and use
them to customize the IBM Cloud Orchestrator environment. A Service Designer is
a user with the catalogeditor role.
Related concepts:
Developing IBM Cloud Orchestrator content
IBM Cloud Orchestrator content is a set of automation packages to enable IBM
Cloud Orchestrator to use the features that are delivered by external software and
infrastructure devices.
Managing offerings
A Service Designer can manage offerings and their access control list in the
Self-Service Catalog.
In the Self-service user interface, click CONFIGURATION > Self-Service Catalog
in the navigation menu, and then click Offerings. You can search for an offering
by specifying the offering name or description in the search field. The offering
table can be sorted using any column that has the sort icon.
If you select one or more offerings in the table, the Actions menu is updated to
show only the actions that you can apply to the selected offerings.
Depending on your permissions, you can perform the following actions:
Create an offering
See Creating an offering on page 326.
Edit an offering
Select an offering in the table and click Edit Offering.
Delete an offering
Select one or more offerings in the table and click Delete Offering.
Modify the access control list of an offering
See Modifying the access control list of an offering on page 326.
Creating an offering
You can create a new offering in a domain.
Procedure
1. Log in to the Self-service user interface as a Service Designer.
2. In the navigation menu, click CONFIGURATION > Self-Service Catalog.
3. Click Offerings in the menu below the navigation menu.
4. Click Create Offering in the Actions menu. The Create Offering window is displayed.
5. Enter a name for the offering.
6. Select an icon and a category for the offering.
Results
A message appears indicating that the offering is created successfully.
Procedure
1. Log in to the Self-service user interface as a Service Designer.
2. In the navigation menu, click CONFIGURATION > Self-Service Catalog.
3. Select an offering and click Modify Access Control List in the Actions menu.
The Modify Access Control List window appears displaying the list of the
roles in the specified domain and project that have access rights to the offering.
4. You can perform the following actions:
v To create a new entry in the list, specify a domain, a project, and a role, and select the appropriate access rights. Click Add to Access Control List.
v To remove an access control entry from the list, click the related Delete icon.
5. Click Save.
Managing categories
A Service Designer can manage categories in the Self-Service Catalog.
In the Self-service user interface, click CONFIGURATION > Self-Service Catalog
in the navigation menu, and then click Categories. You can search for a category
by specifying the category name or description in the search field. The category
table can be sorted using any column that has the sort icon.
If you select one or more categories in the table, the Actions menu is updated to
show only the actions that you can apply to the selected categories.
Creating a category
You can create a new category in a domain.
Procedure
1. Log in to the Self-service user interface as a Service Designer.
2. In the navigation menu, click CONFIGURATION > Self-Service Catalog.
3. Click Categories in the menu below the navigation menu.
4. Click Create Category in the Actions menu. The Create Category window is
displayed.
5. Enter a name for the category.
6. Select an icon for the category.
7. Enter a description for the category.
8. Click Create.
Results
A message appears indicating that the category is created successfully.
Managing actions
A Service Designer can manage actions and their access control list in the Actions
Registry.
In the Self-service user interface, click CONFIGURATION > Actions Registry in
the navigation menu to manage actions. You can search for an action by specifying
the action name or description in the search field. The action table can be sorted
using any column that has the sort icon.
To manage actions on Virtual System (Classic) Pattern instances, see User actions
on page 295.
If you select one or more actions in the table, the Actions menu is updated to
show only the actions that you can apply to the selected actions.
Depending on your permissions, you can perform the following actions:
Create an action
See Creating an action on page 328.
Edit an action
Select an action in the table and click Edit Action.
Delete an action
Select one or more actions in the table and click Delete Action.
Modify the access control list of an action
See Modifying the access control list of an action on page 329.
Creating an action
You can create a new action in a domain.
Procedure
1. Log in to the Self-service user interface as a Service Designer.
2. In the navigation menu, click CONFIGURATION > Actions Registry.
3. Click Create Action in the Actions menu. The Create Action window is
displayed.
4. Enter a name for the action.
5. Select an icon and a process for the action.
6. Optional: Enter a description for the action. Select the type of instance that the action applies to, including the tags that you want the action to apply to. Select an application and human service for the action.
Note: You must specify which instance type the action applies to. Based on the selected type, you can choose from a list of tags that the instance can have. The action appears only on instances that have the selected type and tag. The Specify the item selection criteria field allows you to specify whether the action can:
v Create an instance. Select createInstanceAction.
v Modify only a single instance. Select singleInstanceAction.
v Modify multiple instances. Choose multiInstanceAction.
Select the application to filter the processes by that application. Once the
process has been found, select the user interface from the list of available
human services for the selected process. Then, configure the access control. The
Domain Administrator and the catalog editor of the domain are allowed to modify the action.
7.
Results
A message appears indicating that the action is created successfully.
Procedure
1. Log in to the Self-service user interface as a Service Designer.
2. In the navigation menu, click CONFIGURATION > Actions Registry.
3. Click Modify Access Control List in the Actions menu. The Modify Access
Control List window appears displaying the list of the roles in the specified
domain and project that have access rights to the action.
4. You can perform the following actions:
v To create a new entry in the list, specify a domain, a project, and a role, and select the appropriate access rights. Click Add to Access Control List.
v To remove an access control entry from the list, click the related Delete icon.
5. Click Save.
[Table: supported capabilities by deployment type (Virtual System Pattern, Virtual Application, Heat, Single Image Template): multiple disk base image, add disk at deployment time, add disk after deployment, and add NIC at deployment time. The notes that follow describe the support conditions.]
Supported.
The volume must be created in advance. When creating volumes on a VMware region,
you can decide if the volume should be thin, thick or eagerZeroedThick.
In a VMware region, whether a volume is provisioned as thin, thick, or eagerZeroedThick depends on the volume type.
For example:
cinder type-create thick_volume
cinder type-key thick_volume set vmware:vmdk_type=thick
The two commands above create a volume type for thick provisioning. If you specify this volume type when creating a volume, the volume is created with thick provisioning.
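For example, assuming the thick_volume type created above, a volume that uses it could be created as follows (the volume name and size are illustrative):
cinder create --volume-type thick_volume --display-name thick_data_vol 20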
For VMware regions, if no volume type is specified, the volume is thin or thick provisioned depending on the value of the volume type specified for the default_volume_type entry in the openstack.config file on Central Server 3.
Network Interface Cards can only be added at deployment time. This can be done
when deploying single instances directly in the Self-service user interface by using
Heat templates or by using the Network Interface Card add-ons if using virtual system
patterns (classic), virtual system patterns, or virtual application patterns.
For information about creating images for Amazon EC2, SoftLayer and non-IBM
supplied OpenStack via the Public Cloud Gateway, see Creating a supported
image on page 676.
To use Linux images as part of Heat templates, it is recommended that you install heat-cfntools (for additional information, see https://wiki.openstack.org/wiki/
Heat/ApplicationDeployment).
Note:
v heat-cfntools are not supported on Linux on System z.
v heat-cfntools for Linux on Power can be downloaded from
http://dl.fedoraproject.org/pub/epel/6Server/ppc64/ for PowerVC images.
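For example, on a Red Hat Enterprise Linux image, heat-cfntools could typically be installed as follows, assuming that the EPEL repository is already configured in the image:
yum install -y heat-cfntools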
When you create a base image, ensure that the image meets the following
prerequisites:
v When you create a Linux image, ensure that you create a single ext3 or ext4
partition (not managed by LVM), otherwise you might have issues when
installing cloud-init.
v If you use a template for a hypervisor that is not VMware, ensure that the image
has a single disk. You can add additional disks at deployment time. For
additional information, see Add-ons in the catalog on page 409.
v When you create a Linux image, ensure that the image has IPv6 networking
disabled. For more information, see the documentation related to your Linux
distribution.
v When creating a Linux image for Hyper-V, make sure that the no_timer_check option is added to the kernel parameters in the bootloader configuration. Without this option, your image might fail to boot because of a problem validating the system timer. Some Linux distribution versions enable this option automatically when they detect that they are running on Hyper-V.
For more information about this kernel parameter, see https://
lists.fedoraproject.org/pipermail/cloud/2014-June/003975.html.
v If you get the OS can not be restarted automatically message after changing
the host name, use the latest cloudbase-init version.
v cloudbase-init allows you to set the password for a user. The user name is configured at image preparation time and cannot be modified at virtual machine creation time. You can specify a user name during cloudbase-init installation or in the cloudbase-init.conf file. If the user does not exist, a new user account is created at virtual machine initialization time. If there are multiple Windows users at image preparation time, at virtual machine initialization time the password is changed only for the user specified in the cloudbase-init configuration. Other users' passwords are not changed.
After cloudbase-init is installed, complete the following procedures.
Installing virtio driver (KVM hypervisor only):
To use Windows operating system images on a KVM hypervisor, install the virtio
driver into the system because OpenStack presents the disk using a VIRTIO
interface while launching the instance.
You can download a virtio-win*.iso file containing the VIRTIO drivers from the
following location: http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/
bin/
Use virt-manager to connect virtio-win*.iso to the image and update the
network adapter in the virtual machine by completing the following steps:
1. Right-click Computer > Properties > Change settings > Hardware > Device
Manager.
2. Click Network adapter > Update driver software > Browse my computer for
driver software.
3. Select the virtual CD/DVD drive and then select the inf file.
4. Restart the virtual machine.
Running sysprep.exe:
Run sysprep.exe to remove all the unique system information, like computer name
and hardware specific information, from your Windows image.
To run sysprep.exe on Windows 2008 R2, complete the following steps. Refer to
the Microsoft documentation for the other Windows platforms.
1. Download and install the Windows Automated Installation Kit (AIK). You can
download Windows AIK from the Microsoft Download Center:
http://www.microsoft.com/en-us/download/details.aspx?id=9085. Windows
System Image Manager is installed as part of the Windows Automated
Installation Kit (AIK).
2. Copy the install.wim file from the \sources directory of the Windows 2008
R2 installation DVD to the hard disk of the virtual machine.
3. Start the Windows System Image Manager.
4. In the Windows Image pane, right-click Select a Windows image or catalog
file to load the install.wim file you just copied.
5. When a warning that the catalog file cannot be opened is displayed, click Yes
to create a new catalog file. Remember to select the Windows 2008 R2 Edition.
6. In the Answer File pane, right-click to create a new answer file:
Language and Country or Region:
a. Generate the answer file from the Windows System Image Manager by
expanding Components in your Windows Image pane, right-click and add
the Microsoft-Windows-International-Core setting to Pass 7 oobeSystem.
b. In your Answer File pane, configure the InputLocale, SystemLocale,
UILanguage, and UserLocale with the appropriate settings for your
language and country or region.
Administrator Password:
v In the Windows Image panel, expand the Microsoft-Windows-Shell-Setup
component, and expand User Accounts, right-click on
AdministratorPassword, and add the setting to the Pass 7 oobeSystem
configuration pass of your answer file.
v In the Answer File panel, specify a password next to Value.
Note: You can read the AIK documentation and set more options
depending on your deployment. The steps described here are the minimum
needed for the Windows unattended setup.
Software License Terms:
In the Windows Image panel, expand Components and find the
Microsoft-Windows-Shell-Setup component. Highlight the OOBE setting, and
add the setting to the Pass 7 oobeSystem. In the Answer File panel, set
HideEULAPage true in OOBE settings.
Product Key and Computer Name:
v In the Windows Image panel, right-click the Microsoft-Windows-Shell-Setup component and add the settings to the Pass 4 specialize configuration pass of your answer file.
v In the Answer File panel, enter your Product Key in the space provided
next to ProductKey. Furthermore, to automate the Computer Name
Selection page, specify a computer name next to ComputerName.
7. Save the answer file as unattend.xml. Ignore the warning messages that
appear in the validation window.
8. Copy the unattend.xml file into the c:\windows\system32\sysprep directory of
the Windows 2008 R2 Virtual Machine.
9. Clean the environment of the virtual machine.
10. Uninstall Windows AIK, which does not need to be part of the virtual machine you create.
11. Remove the install.wim file that was copied to the virtual machine.
12. Run the sysprep tool as follows:
cd c:\Windows\System32\sysprep
sysprep.exe /oobe /generalize /shutdown
The Windows 2008 R2 virtual machine shuts down automatically after sysprep is
complete.
v libgcc.i686
v compat-libstdc++-33.i686
v nss-softokn-freebl.i686
The following Python modules must be installed:
v base64
v gettext
v hashlib
v pycurl
v ssl
The specified Python modules are provided in the following RPM packages:
v Red Hat Enterprise Linux
python
python-pycurl
v SUSE Linux Enterprise Server
python
python-base
python-curl
Note: In the Linux image, if the /etc/sudoers file contains the following line:
Defaults requiretty
change it to:
Defaults !requiretty
v compat-libstdc++-33.s390x
v dos2unix.s390x
v ksh.s390x
v genisoimage.s390x
v python 2.X (2.6.2 or later)
v curl
Note: For SUSE Linux Enterprise Server, use the following replacements:
v For libXtst.s390, use xorg-x11-libs
v For compat-libstdc++, use libstdc++33
The following Python modules must be installed:
v base64
v gettext
v hashlib
v pycurl
v setuptools
v ssl
The specified Python modules are provided in the following RPM packages:
v Red Hat Enterprise Linux
python
python-pycurl
python-setuptools
v SUSE Linux Enterprise Server
python
python-base
python-curl
python-setuptools
The Workload Deployer enablement packages, which include the Activation Engine, can be found here: Installing AIX enablement package on page 347.
To enable the activation engine, and prepare the virtual machine for capture,
follow the instructions at http://www-01.ibm.com/support/knowledgecenter/
SSXK2N_1.2.1/com.ibm.powervc.standard.help.doc/
powervc_enabling_activation_engine_hmc.html.
Procedure
1. Obtain IconImageSynchronizer.zip and ovf-env.xml from the
<downloaded_ibm_cloud_orchestrator_package>/utils/imageSynchronizer/
[Linux/Windows] directory of the installation media.
2. Start an instance of the image.
3. Copy IconImageSynchronizer.zip to a directory on the instance and unzip it.
4. On Windows, run IconImageSynchronizer.cmd. On Linux, give executable permission to IconImageSynchronizer.sh and run it.
5. Copy ovf-env.xml from the installation media to c:\windows\setup\ibm\AP on Windows or to /opt/IBM/AE/AP on Linux.
6. On Windows, run the following command:
c:\windows\setup\ibm\AE.bat --reset -n
What to do next
To add the image to IBM Cloud Orchestrator, see Using images created for
SmartCloud Orchestrator V2.3 on page 352.
If you are using KVM, the only available option is the image in Glance.
To add an image to OpenStack, complete the following steps on the Region Server
where OpenStack is installed:
Procedure
1. Set the environment by running the following command:
source /root/keystonerc
where
image_name
Specifies a name for the new image that you are adding.
disk_format
Specifies one of the following disk formats:
raw
qcow2
vmdk
container-format
Specifies the container format for the image. The acceptable formats
are: aki, ami, ari, bare, and ovf.
--is-public
Specifies whether the image is accessible by other users. The value can
be true or false.
image_path
Specifies the full path of the image to be added.
For more information about the glance image-create command, see the
OpenStack documentation.
Tip: If using the glance image-create command, specify the minimum disk
size by using the --min-disk value option. If using the Administration user
interface, specify the required value in the Minimum Disk (GB) field.
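For example, a qcow2 image could be added with a command similar to the following (the image name, file path, and minimum disk size are illustrative):
glance image-create --name rhel65-base --disk-format qcow2 --container-format bare --is-public true --min-disk 10 --file /tmp/rhel65-base.qcow2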
Note: Windows has a different mechanism of interpreting the hardware clock
than Linux does. Therefore the following settings are recommended for a
Windows guest image:
v Set the time zone of the image the same as the compute node.
v Disable time synchronization with Internet inside the image, so that the guest
virtual machine will get its time solely from the hypervisor.
v Set the os_type=windows metadata with the --property option when
registering the image.
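For example, for an image that is already registered, the metadata could be set as follows (supply your own image ID):
glance image-update --property os_type=windows <image-id>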
If this error message is displayed, you must manually add and map the
image, as described below.
v If the virtual image is built on multiple virtual disks, you cannot use
OpenStack Glance to manage the image. For such images, you must
manually add and map the image, as follows:
a. Convert the image to the required format, and add the image to the
OpenStack region, as described in Converting an OVA image to VMware
format on page 350.
b. Map the image to IBM Cloud Orchestrator, as described in Mapping the
image created in IBM Cloud Orchestrator on page 352.
If the OVA file is a compressed file, you must uncompress the OVA file before the
import, to make the file bootable.
Procedure
1. Copy the raw file onto a VIOS, and copy that disk image into a PowerVC
volume to create and populate the disk on the SAN.
2. After the image is copied into a PowerVC volume, use the
powervc-volume-image-import command to import it as an image:
powervc-volume-image-import [-h] --name NAME --os-distro {aix,rhel,sles,ibmi}
--volume VOLUME-ID [--user USER] [--activation-type {ae,cloud-init}] [--ovf OVF]
where os-distro is aix, activation-type is ae, and the OVF file is the one
that you extracted from the OVA file. The image is visible in Glance.
For more information about importing volume-backed images, see
http://www-01.ibm.com/support/knowledgecenter/SSXK2N_1.2.1/
com.ibm.powervc.standard.help.doc/
powervc_import_volume_images_hmc.html.
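For example, a command similar to the following could be used for an AIX image (the image name, volume ID, and OVF file name are illustrative):
powervc-volume-image-import --name aix71-base --os-distro aix --volume <VOLUME-ID> --activation-type ae --ovf aix71-base.ovf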
3. You can register the image, as described in Using images created for
SmartCloud Orchestrator V2.3 on page 352.
Alternatively, you can import the OVA file into IBM Cloud Orchestrator, and
then map the file to the actual disk on PowerVC, as follows:
a. Log in to the Self-service user interface as a Service Designer.
b. Click PATTERNS > Pattern Design > Virtual Images.
c. Click Create New.
d. Specify the full path to the OVA file, and specify the credentials if the file is on a remote system that requires authentication. Click OK.
e. Link the imported artifact to the disk on PowerVC, as described in Mapping the image created in IBM Cloud Orchestrator on page 352.
Note: For virtual system patterns (classic), it is not mandatory to import the
OVA into Workload Deployer once the PowerVC image is available in
Glance. It is also not mandatory for user-created Workload Deployer images
for use with virtual system patterns. For more information, see Making
PowerVC images compatible with Workload Deployer. To use
IBM-provided Workload Deployer images in virtual system patterns, import
the OVA image into Workload Deployer and map it to the actual disk on
PowerVC.
OS_USERNAME=<root user>
OS_PASSWORD=<root password>
OS_TENANT_NAME=ibm-default
OS_AUTH_URL=https://<powervcserver>/powervc/openstack/identity/v2.0/
OS_CACERT=/etc/pki/tls/certs/powervc.crt
OS_REGION_NAME=RegionOne
Note: OS_REGION_NAME is the region name of the PowerVC node, not the
region name of the region server. It is RegionOne by default.
b.
Output:
+-----------+----------------------------------+
| Property  | Value                            |
+-----------+----------------------------------+
| expires   | 2014-08-17T12:11:47Z             |
| id        | d7c49ff3ee37440189a47daece4ad944 |
| tenant_id | ba1b17e309524fa88177019781e814f5 |
| user_id   | 0                                |
+-----------+----------------------------------+
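Assuming the OpenStack command-line client of this release, output like the table above is typically produced by a token request such as the following, run with the environment variables shown above:
keystone token-get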
Use the ID, the auth-token file, and the <image-name>.ovf files to apply the
OVF to the image:
python /usr/lib/python2.6/site-packages/nova/compute/ibm/configuration_strategy_ovf.py
--ovf <image-name>.ovf --auth_token auth-token --image_id <image-id> --replace
Orchestrator installs an agent and related services in the launched virtual machine
instance that orchestrate and execute software configuration. A set of prerequisites
is required in the virtual image in order to successfully activate and bootstrap the
virtual machine instance. The enablement package is used to install and configure
the prerequisites in your virtual image to make it suitable for pattern deployment.
The enablement package includes Activation Engine with operating system
activation utilities, Java Development Kit, pattern engine bootstrap utilities, and an
automated installer script.
You must run all commands using the root user ID. The name of the enablement
package for AIX 6.1 and 7.1 is ae-install-package-aix-version.tgz. Follow these
steps to install the enablement package for your system:
Procedure
1. Obtain the enablement package from its public download site and put it in a
designated location that is accessible from the deployed virtual machines.
2. Before installing the enablement package, add the following products if they
are not currently installed:
v rpm.rte
v gcc-4.2.0-3.aix6.1.ppc.rpm
Note: When you prepare the image, make sure that the right locales are
installed. You can check the installed locales by running the lslpp -l | grep
bos.loc command. To install an additional locale, you must install the related
fileset. For example, if en_US in UTF-8 codepage is needed, you must install
the bos.loc.utf.EN_US fileset.
a. Perform the following steps to install the rpm.rte fileset:
1) Download the rpm.rte fileset from the following location:
ftp://public.dhe.ibm.com/aix/freeSoftware/aixtoolbox/INSTALLP/ppc/rpm.rte
2) Check whether the fileset is already installed. For example:
# lslpp -l rpm.rte
3) Install the fileset. For example:
# installp -d /var/tmp/rpm.rte -acgXY rpm.rte
Tip:
v The installation also generates the uninstall script, /opt/ibm/ae/uninstall/ae-uninstall-aix.sh. You can run this script independently to remove the
enablement environment. For example:
# /opt/ibm/ae/uninstall/ae-uninstall-aix.sh 2>&1 | tee /tmp/ae-uninstall-aix.sh.log
What to do next
To ensure that /etc/hosts is used before a DNS lookup on an AIX system edit the
/etc/netsrv.conf file and add the following line if not already present:
hosts = local, bind
Edit the netcd service configuration file to only cache DNS queries. If the
/etc/netcd.conf file does not exist create it. Add the following line to
/etc/netcd.conf:
cache dns hosts 128 0
Ensure that curl is available by running the curl command. If the command is not
found, add it to the system path.
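For example, a quick check and a typical fix, assuming that curl from the AIX Toolbox is installed under /opt/freeware/bin (adjust the path for your system):
which curl || export PATH=$PATH:/opt/freeware/bin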
Note: Custom AIX deployment might fail on a virtual system pattern due to a curl
error curl: (6) Couldn't resolve host IBMWorkloadDeployer. Failed to update status:
6 and the status of the pattern might hang. During deployment, an IBM Workload Deployer entry is added to the /etc/hosts file pointing to Central Server 3; however, the /etc/hosts file might not be used immediately, and this might cause the curl command to fail. To prevent this situation, add IBM Workload Deployer as a new entry in the existing DNS server configuration, pointing to Central Server 3.
Procedure
1. Deploy the image in either of the following ways:
v Manually:
a. Convert the OVA file to an OVF directory that can be used by the
VMware ovftool command, by running the following command on one
line:
python ibm_ova_converter.py --file ova_path
[--temp-dir output_dir]
The converter script produces an OVF directory with the same name as
the source OVA file, without file extension.
b. Change directory to the OVF directory:
cd ova_converter_output
v Automatically:
Convert the OVA file and deploy the image on VMware automatically, by
running the following command on one line:
python ibm_ova_converter.py --file ova_path
[--temp-dir output_dir]
--user vmware_user
--password vmware_password
--host vmware_host
--datacenter vmware_datacenter
--cluster vmware_cluster
--datastore vmware_datastore
[--template-name template_name]
where:
ova_path
The full path name of the OVA file to be converted.
output_dir
The target directory where the converted OVF directory is to be placed.
If this optional parameter is not specified, the output is stored in the
same location as the source OVA file.
ova_converter_output
The OVF directory created by the OVA converter tool. The
ova_converter_output.ovf file is located in this directory.
vmware_user
The VMware user name.
vmware_password
The password for the specified VMware user.
vmware_host
The host name or IP address of the VMware vSphere.
vmware_datacenter
The name of the target datacenter on VMware where the image is to be
deployed.
vmware_cluster
The name of the target cluster on VMware where the image is to be
deployed.
vmware_datastore
The name of the target datastore on VMware where the image is to be
deployed.
template_name
The template name to be used when deploying the image to VMware.
If this optional parameter is not specified, the OVA file name is used.
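For example, a complete automatic conversion and deployment might look similar to the following, run on one line (all values are illustrative):
python ibm_ova_converter.py --file /images/rhel65-base.ova --user administrator --password <password> --host vcenter.example.com --datacenter DC1 --cluster Cluster1 --datastore datastore1 --template-name rhel65-base-template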
2. After the OVF is deployed in VMware, disable the vApp options of the virtual
machine. To do this:
a. Log on to the VMware vSphere client.
b. Open the inventory of the virtual machines and templates.
c. Select the virtual machine that has been deployed from OVA image.
d. Right-click on the virtual machine and select Edit Settings.
e. Select the vApp Options tab and disable the Enable vApp Options setting.
f. Click OK to save virtual machine settings and close the dialog.
3. Convert the virtual machine to a template in VMware. To do this:
a. Log on to the VMware vSphere Client.
Procedure
1. Log in to the Self-service user interface as a user with the admin or catalogeditor role.
2. Click PATTERNS > Pattern Design > Virtual Images.
3. Select the image that corresponds to your OVA file and, in Available on
Locations, click Managed image locations.
4. Click Create New and select Create image mapping.
5. Select the region in the Region menu and then select the row corresponding to
the image in Glance.
6. Click Create.
Results
The image is ready to be used in virtual system patterns (classic), virtual system
patterns, or virtual application patterns.
Procedure
1. Click PATTERNS > Pattern Design > Virtual Images.
2. Click Register OpenStack image and type in the name of the image or use the
lookup function to find it in Glance.
3. Select the operating system in the list and click Register.
Note: Images for Linux on System z cannot be used for virtual system patterns
(classic).
Procedure
1. Click PATTERNS > Deployer Configuration > Environment Profiles on the
menu bar to open the Environment Profiles window.
2. Work with a profile. You can perform the following tasks:
v Create an environment profile with the information in Creating an
environment profile.
v Clone an existing environment profile for reuse with the information in
Cloning an environment profile on page 359.
v Edit an existing environment profile with the information in Editing an
environment profile on page 360.
Procedure
1. From the upper left panel of the Environment profiles window, click New to
add an environment profile.
2. Provide the following basic information about the environment profile you are
creating:
Name Enter a unique name for the profile in the Name field. This information
is required.
Description
Optionally, enter a detailed description to identify the profile in the
Description field.
Hypervisor type
Select OpenStack as the type of hypervisor in the cloud group you are
using.
Environment
Select the environment in which this profile is to be created. The
following options are available:
v All
v Development
v Test
v Quality Assurance
v Performance
v Research
v Production
v Pre-Production
The default value is All.
3. Click OK to create the profile. When the information is processed, you return
to the Environment Profiles view and the profile you created is added to the
list in the left panel. It is selected so that the information about it is shown in
the right panel. For more information about the fields on this panel, see
Environment Profiles window fields on page 362.
4. Complete the configuration. Before the environment profile is ready to use, you
must provide additional configuration information in the following fields:
Virtual machine name format
It must contain one of the following variables:
${hostname}
Replaced with the host name of the virtual machine, for
example: My${hostname}VM.
Note: Underscores are not valid characters in the virtual
machine hostname.
${vs-name}
Replaced with the name of the virtual system instance, for
example: My${vs-name}VM. This variable cannot be used alone in
the Virtual machine name format field. The ${vs-name}
variable must be used with one of the other formatting
variables. Otherwise, if a cluster pattern is being deployed, all
virtual machines would then have the same name and the
deployment would fail.
${x-counter}
Replaced with a counter of x digits, for example:
MyVM${3-counter}. The x in this example represents the number
of digits for the counter. So if the value of x is two, then it is
represented as 02. This value could be 01, 02 or 03, for example.
IP addresses provided by
Choose whether you want the IP address for a virtual machine to be
provided by IBM Cloud Orchestrator or specified when the pattern is
being deployed. Use the following options:
Pattern deployer
To provide the IP address for a virtual machine at deployment,
you must also specify the following information for each part:
v Cloud group
v IP group
v Host name
v IP address
You can specify an alias name for the IP group for use in the
environment profile. The default setting is the actual name of
the IP group.
Subnet address
Shows the subnet address of the IP group.
Gateway
Shows the gateway address of the IP group.
Netmask
Shows the netmask address of the IP group.
Windows domain information
The Windows domain section in the environment profile is optional. If
the Domain name field is empty, other fields in the section will be
ignored, and the deployed system will not be added to a domain. If the
Domain name field is specified, the User name and Password fields
become mandatory. However, the Organizational unit field remains
optional. If the Organizational unit field is not specified, the computer
account will be stored in the default Computers container located under
the Active Directory domain root.
Important: Windows computer names must be 15 characters or less in
length and are derived from the corresponding host names in DNS.
DNS host names that are more than 15 characters in length might
cause duplicate computer names because only the first 15 characters of
the DNS host name are kept. In the case of a duplicate computer name, when
the computer is joined to an Active Directory domain, it will either
Click the link name of the project to show information about that
project. You can also click the remove link to remove access for a
project.
Results
When you have completed these steps, you have configured basic information
about the environment profile.
What to do next
If there are no errors and all the resources the environment profile contains are
operational, you can deploy it to the cloud or clouds you specified.
Procedure
1. From the left panel of the Environment Profiles window, click the profile you
want to clone. The description and general information about this environment
profile display in the right panel of the Environment Profiles view.
2. Clone the environment profile. Click the clone icon on the upper right panel of
the Environment Profiles view.
3. Provide the following basic information about the new environment profile you
are cloning:
Name Enter a new unique name for the environment profile in the Name
field. This information is required.
Description
Optionally, enter a detailed description to identify and differentiate the
environment profile in the Description field.
4. Click OK to save your changes. When the information is processed, you return
to the Environment Profiles view and the profile you created is added to the
list in the left panel. It is selected so that the information about it is shown in
the right panel. For more information about the fields on this panel, see
Environment Profiles window fields on page 362.
5. Edit the environment profile. You can edit the fields described in Editing an
environment profile.
Results
When you have completed these steps, you have cloned and customized the
environment profile.
Procedure
1. From the left panel of the Environment Profiles window, select the environment
profile to edit. The information about that environment profile is shown in the
right panel of the Environment Profiles view.
2. Optional: Determine your access. If you are not able to edit the environment
profile, check the Access granted to: field on the lower right panel to verify
that you have access. If you do not have access, you can click the link on the
owner, view the contact information, and contact the owner to ask for access.
3. Optional: Edit the following configuration information:
a. Edit the description. Add or change the description of the environment
profile in the Description field.
b. Change the environment. Select a different environment, in which your
environment profile is to run, in the Environment field. The following
options are available:
v All
v Development
v Test
v Quality Assurance
v Performance
v Research
v Production
v Pre-Production
c. Specify or change the format of the virtual machine name. In the Virtual
machine name format field, you can specify the format for the virtual
machine name, for example d_${hostname}.
d. Specify how the IP addresses are provided. In the IP addresses provided by
field, select one of the following options to specify how the IP addresses are
provided:
Important: You cannot modify this field once an instance has been
deployed to the cloud. You must create a new environment profile with
your desired setting for the field IP addresses provided by.
Pattern deployer
If you choose to provide the IP address for a virtual machine at
deployment, then you must also specify the cloud group, IP group,
host name, and IP address for each part.
Important: If you choose this option, then the person deploying the
pattern cannot specify an IP address that is contained within the IP
groups that are defined in IBM Cloud Orchestrator.
IP Groups
If IBM Cloud Orchestrator provides the IP address for a virtual
machine, you only specify the cloud group and IP group for the
pattern parts. IBM Cloud Orchestrator provides the IP address
information.
e. Add, remove, or change the alias name for the cloud group in which the
environment profile is to run.
Add
To add a cloud group, click the entry field under the Deploy to
cloud groups label and select the cloud group to add.
Remove
Click the Remove link beside any listed cloud groups to remove
them from the environment profile.
Change alias name
In the Alias field, change the name of the cloud. This name is
shown at deployment.
f. Add, remove, or rename IP groups. Select or clear the In use box to indicate
the IP groups in each cloud group to be used. You can also change the
name of the IP group, as it is shown at deployment, in the Alias field.
g. Expand the Windows domain information field to modify the domain
information.
h. Expand the Windows key management service field to modify the KMS
server information.
i. In the Environment limits field, you can modify the limits of the virtual
CPU, virtual memory, and storage.
j. Grant or remove access to the environment profile for projects. Use the
Access granted to field to add, remove, or change access to this environment
profile.
Results
If the hypervisors and resources for the cloud group specified are available, the
environment profile can be deployed to the cloud group.
Current status
Provides the status of the profile. This field shows if the environment
profile is complete or if information is needed.
The success icon
The success icon indicates that the environment profile is complete
and resources are available.
The warning icon
The warning icon indicates that the environment profile is incomplete.
A textual explanation, in addition to the warning icon, provides an
explanation of the problem, or problems, with the environment
profile configuration.
Updated on
Shows the timestamp of the most recent update.
Virtual machine name format
This optional field is a free form editing space to indicate the format of the
virtual machine, for example, d_${hostname}. This field displays None
provided initially.
IP addresses provided by
This field provides the following options:
IP Groups
Indicates that the IP address is to be provided by IBM Cloud
Orchestrator at deployment. IP Groups is the default setting.
Pattern deployer
Indicates that the IP address is to be provided by the person
deploying the pattern at the time of deployment.
Important: If this option is selected, the person deploying the
pattern cannot specify an IP address that is contained within IP
groups that are defined in IBM Cloud Orchestrator.
Deploy to cloud groups
Shows the following information for each cloud group in the list:
Name Shows the name of the IP group in the selected cloud.
Alias
An entry field to specify an alias for the IP group for use in the
environment profile. The default setting is the actual name of the
IP group. Click to change the alias name.
remove
Removes the cloud group from the environment profile.
Clicking the expand icon shows the following additional fields for the
selected cloud group:
Using Environment profile
Selection box to specify the IP group to use.
Name The name of the cloud group.
Deploy to cloud groups
The cloud groups to which this environment profile can deploy.
Subnet address
Shows the subnet address of the IP group.
Gateway
Shows the gateway address of the IP group.
Netmask
Shows the netmask address of the IP group.
Windows domain information
Shows the following domain information:
Domain name
Shows the name of the domain.
User name
Shows the user name that is authorized to add a computer account
to the domain.
Password
Shows the password of the domain user specified in User name.
Organizational unit
Shows the organizational units where the computer account is
stored.
Windows key management service
Shows the following KMS server information:
KMS server IP address
Shows the IP address of the KMS server in your environment.
KMS server port
Shows the port used for KMS service.
Environment limits
In the table, you can set the following types of environment profile limits:
v Virtual CPU
v Virtual Memory
v Storage
This table also shows the current usage and the reserved usage for each of
these types.
Access granted to
By default, the user who created the environment profile has access to it
and other users cannot edit it. This field can be edited to provide
access to this environment profile for projects. Selecting projects makes the
environment profile readable or writable to the users belonging to these
projects.
By default, the Add more box contains the Everyone built-in project. When
a project has been added, click the link beside the entry to toggle between
the following access levels:
v Read
v Write
v All
Click the link name of the project to show information about that project.
You can also click the remove link to remove access for a project.
Comments
A comments field is provided to enable administrators to communicate
information with one another regarding environment profiles.
(AUTOSTART=FALSE) to run a script before the server is started. For more information
about environment variables, see Script package environment variables on page
388.
Scripts run in a prescribed order. If you are running an IBM WebSphere
Application Server Hypervisor Edition script on multiple virtual machines, for
example, then the script runs on the virtual machines in the following order:
1. Stand-alone Nodes
2. Deployment Manager
3. Job Manager (version 7.0.0.x patterns only)
4. Administrative Agent (version 7.0.0.x patterns only)
5. Custom Nodes
6. IBM HTTP Server
If multiple script packages are included with a pattern, by default, the scripts are
run in the same order they were added to that pattern. You can change the order
in which script parts are run in the pattern. For more information, see Ordering
parts to run at deployment on page 446.
When scripts are run by IBM Cloud Orchestrator, the IBM Cloud Orchestrator
establishes a run time environment on the virtual machine using a Secure Shell
(SSH) tunnel (this is valid only on Linux). By default, on the included Linux-based
virtual machines, this is the bash shell. This includes the definition of a set of
environment variables. For more information about these environment variables,
see Script package environment variables on page 388.
Script packages remain on the virtual machine after they are run. As they exist on
the virtual machine, you can manually run your scripts from the virtual machine
after deployment. To set the environment variables to run a script manually on the
virtual machine, you must source the /etc/virtualimage.properties file (this is
valid only on Linux). If you want the script package to be removed after it has
run, you can build the script to delete itself.
In addition to the included scripts, you can review the examples that are provided
in the subsection. The examples demonstrate how to use scripts in a virtual
environment.
The compressed file includes the script file (script.sh on Linux, or script.bat or
script.cmd on Windows) in addition to the .json file needed to run the script on
the deployed virtual machine. For more information, see Configuring script
packages using the cbscript.json object on page 374.
Procedure
1. Navigate to the Script Packages window by clicking PATTERNS > Pattern
Design > Script Packages from the menu.
2. Perform any of the following tasks:
v Add a script package. For information about adding a script package, see
Adding a script package.
v Make a script package read-only. See Making script packages read-only
on page 371 for more information.
v Associate a script package with a pattern. After adding the script package,
you can associate it with a pattern. For information about associating a script
package with a pattern, see Associating a script package with a pattern on
page 371.
v Delete a script package. If you determine that a script package is no longer
needed, you can delete it. For information about deleting a script package,
see Deleting a script package on page 373.
Results
When the script package is created and associated with a specific pattern, the
script package runs when the pattern is deployed. If you delete a script package, it
is no longer available for use and this operation cannot be undone.
Procedure
1. From the upper left of the Script Packages window, click Create New.
Additionally, you can clone an existing script package to create a copy of that
script package. The cloned script package can then be modified to your
specifications. For more information about cloning a script package, see
Cloning a script package on page 370.
2. Select the script package archive and click Import. The new script package
displays in the left panel of the Script Packages window. The right panel
displays configuration options for the script package.
3. Configure the script package.
The script package can be configured manually or by including a special JSON
object in the script package. See Configuring script packages using the
cbscript.json object on page 374 for more information about configuring a
script package using a JSON object.
The following information is required to configure a script package:
Script package files
Specifies the name and location of the compressed file that contains the
script package. Locate the file on the local file system and select it. After
selecting the file, click Upload to copy the file to IBM Cloud
Orchestrator. Only one file can be uploaded to a script package.
Environment
Defines a set of environment variables that are available when the
script package is run on its target virtual system instance. The
environment variables are a set of key/value pairs that are defined to
the run time environment of the script. IBM Cloud Orchestrator
supplies a set of environment entries for you.
See Script package environment variables on page 388 for a listing of
available environment variables.
In this section, you can also specify additional values that are specific
to your deployment. The environment variable is added as a parameter
in the pattern. The value for this environment variable is then provided
when you, or another user you have provided with access to the
pattern, deploys the pattern. A default value can be specified in the
pattern.
Working directory
Specifies the location on the target virtual machine to which IBM Cloud
Orchestrator extracts the script package. The working directory is also
the initial current working directory when the script is run.
Logging directory
Specifies the location of the logs generated by the script after it has
been run. These logs can be accessed for viewing from either IBM
Cloud Orchestrator or by directly accessing them on the virtual
machine.
Executable
Specifies the command to be started for this script package. This can be
an executable command already on the virtual machine, for example
wsadmin, tar, ant, or another system command. You can also provide
your own script to be run as part of the script package.
Arguments
Specify the command line that is passed to the executable command.
This field can optionally contain environment variables and other valid
4. When you have completed the configuration for the script package, the script
package is saved. You can exit from the Script Packages view.
Results
The script package is created and any users with access can use it with patterns.
What to do next
You can now associate this script package with a pattern. For more information
about patterns, see Working with virtual system patterns (classic) on page 434.
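Instead of configuring these fields manually, you can supply the same settings in a
cbscript.json object that is included in the uploaded archive, as described in
Configuring script packages using the cbscript.json object on page 374. The following
sketch is illustrative only; the name, paths, and key values shown are assumptions that
you would replace with values for your own script:
[
{
"name": "Sample script package",
"version": "1.0.0",
"description": "Illustrative example that runs sample.sh with one argument",
"command": "/bin/sh /tmp/sample/sample.sh",
"log": "/tmp/sample/logs",
"location": "/tmp/sample",
"timeout": "0",
"ostype": "linux/unix",
"commandargs": "$SAMPLE_ARG",
"keys":
[
{
"scriptkey": "SAMPLE_ARG",
"scriptvalue": "",
"scriptdefaultvalue": ""
}
]
}
]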
Procedure
1. From the left panel of the Script Packages window, locate the script package to
clone. If the script package that you want to clone is not displayed, you can
search for the script package. To search for the script package, on the left panel
of the Script Packages view, enter all or part of the name of the script package
in the search field.
2. Select the script package to clone. Click the script package to clone from the list
on the left panel of the Script Packages view. Details about the script package
are displayed in the right panel of the Script Packages view.
3. Click the clone icon to clone the script package you have selected.
4. Enter the name for the cloned script package and click OK. The details panel
for the new script package displays and can be modified.
5. When you have completed the configuration for the script package, the script
package is saved when you exit the Script Packages view.
Results
After you have completed these steps, users or groups with access can use the
script package with patterns.
What to do next
You can now associate this script package with a pattern. See Working with
virtual system patterns (classic) on page 434 for more information about patterns.
Procedure
1. Select the script package. From the left panel of the Script Packages window,
select the script package. Script packages that have the read-only symbol by
them are already read-only and cannot be edited. Script packages with the edit
symbol beside them are not read-only and can be edited. Basic information
about the selected script package is shown in the right panel of the Script
Packages window.
2. Make the script package read-only to lock it against future editing. Click the Lock
icon in the upper right toolbar of the Script Packages window.
3. Verify that you want to make the script package read-only. When prompted to
verify that you want to make the script package read only, click OK to lock the
script package.
Results
When you have made the script package read-only, it can no longer be edited.
What to do next
You can now associate the script package with a pattern. See Working with virtual
system patterns (classic) on page 434 for more information about patterns.
Procedure
1. Navigate to the Virtual System Patterns window by clicking PATTERNS >
Pattern Design > Virtual System Patterns for Virtual Systems, or by clicking
PATTERNS > Pattern Design > Virtual System Patterns (Classic) for Virtual
Systems (classic).
2. Select a pattern. In the left panel of the Virtual System Patterns window, select
the pattern with which to associate the script package. The pattern must not be
read-only. For more information about patterns and editing them, see the
Virtual system pattern (classic) editing views and parts on page 443 topic.
Basic information about the selected pattern displays in the right panel of the
Virtual System Patterns view.
3. Edit the pattern. Click Edit on the upper right of the Virtual System Patterns
(classic) window for virtual systems (classic) or click Open on the upper right
of the Virtual System Patterns window for virtual systems to edit the
pattern.
4. Select Scripts. From the drop-down box in the left panel of the Pattern Editor,
for virtual systems (classic), or in the left panel of the Pattern Builder, for
virtual systems, click Scripts. A list of the script package parts is provided that
can be dropped into the virtual image parts on the right panel of the Pattern
Editor view. This list can contain any script packages that you have provided
for use with IBM Cloud Orchestrator. Script packages can then be added to the
virtual image parts.
5. Add a script package. Any script packages you have defined to IBM Cloud
Orchestrator are available in the list of script packages on the left panel of the
Virtual System Patterns view. You can drop any script package from this list
onto the virtual image parts on the canvas on the right. This associates the
script package with that part.
If a script runs on multiple virtual machines on the pattern, then the script runs
on the virtual machines in the following order:
a. Stand-alone Nodes
b. Deployment Manager
c. Job Manager (version 7.0.0.x patterns only)
d. Administrative Agent (version 7.0.0.x patterns only)
e. Custom Nodes
f. IBM HTTP Server
If multiple script packages are included with a pattern, then the scripts are run
in the same order they were added to that pattern.
6. Optional: Configure any properties defined in the script package. Properties
added to script packages can be defined when associating the script package
with a part, or they can be defined during deployment. Click the edit properties
icon to set the value now. It is possible to use a variable syntax to set the value
for properties where the value is not yet known. For more information about
setting the value of a property to be variable, see Properties variable syntax
on page 390.
Results
You have added one or more script packages to the virtual images on the pattern.
What to do next
When you have associated the script package with a pattern, you can complete
configuration of the pattern and deploy it to the cloud group.
Procedure
1. From the left panel of the Script Packages window, locate the script package. If
the script package that you want to delete is not displayed, you can search for
the script package. To search for the script package, from the left panel of the
Script Packages view, enter all or part of the name of the script package in the
search field.
2. Select the script package to delete. Click the script package to delete from the
list on the left panel of the Script Packages view. Details about the script
package are displayed in the right panel of the Script Packages view.
3. Determine if the script package can be deleted. A script package can only be
deleted if it is not:
v Marked as read-only. The read-only icon is displayed in the listing of script
packages if it is read-only.
v Included in any patterns. If it is included in any patterns, the delete icon is
not available and the Included in patterns field displays the linked patterns
for which this script package is included.
If the script package is referenced by any patterns, you can click the pattern
name link in the Included in patterns field to go to the Virtual System Patterns
panel for that pattern. From this panel, you can remove the script package from
the pattern.
4. Delete the package. Click the delete icon on the upper right of the Script
Packages view.
5. Confirm the deletion. You are prompted to confirm that you want to delete the
selected script package. Click OK to delete the script package.
Results
The script package is deleted from IBM Cloud Orchestrator.
Overview
When you add a new script package to the catalog, whether by creating a new one
or cloning an existing package and configuring it for your needs, you need to
specify a number of configuration parameters as defined in the Script Packages
pane of the catalog. After uploading your compressed file (.zip and .tgz file types
are supported) containing the main executable file and associated artifacts that
support the execution, configure the various commands and arguments, working
and log files, environment variables needed by the script, and other items that are
required to complete the script package definition.
Even though you can do this manually in the Script Packages pane, a best practice
is to define these configuration settings once in a special JSON object file that you
can include as part of the compressed file before uploading into your script
package. The file must be named cbscript.json, and must be located in the root
directory of the uploaded compressed file.
The cbscript.json object describes the script package and points to the location of
the main script (the script that is the starting point of execution). The
cbscript.json file can also contain all of the configuration parameters that you
would manually specify in the Script Packages pane. When the compressed file
containing the cbscript.json file is uploaded into the script package, the various
fields in the Script Packages pane are populated with the contents of the file.
Including a cbscript.json file in your compressed file helps you to ensure that if
the same script package needs to be reloaded or shared among multiple virtual
system patterns, or if you need to move the virtual system pattern to another
system, its definition will be consistent.
Note: After you upload the compressed file into the script package definition,
refresh the Script Packages pane to display the updated configuration settings from
the cbscript.json file.
},
{
"scriptkey": "INSTALL_ARGS",
"scriptvalue": "",
"scriptdefaultvalue": ""
}
]
}
]
name
An optional plain text string that identifies the script package. The value of
this parameter is not displayed in the Script Packages pane when the
compressed file is uploaded to the script package. This name does not
affect the name in the Script Packages pane that you give to the script
package when you create or clone it. The text string can have a maximum
of 1024 characters.
Example:
"name": "Install and configure the ITM OS agent",
When the script package is downloaded, this text string is replaced with
the name of the script package specified in the name field of the Script
Packages pane.
version
An optional plain text string that provides version information. This value
is not used by IBM Cloud Orchestrator and is not displayed in the Script
Packages pane when the compressed file is uploaded into the script
package. If you are the originator of the cbscript.json object file, you
might use this field for internal version control as needed.
Example:
"version": "1.0.0",
When the script package is downloaded, this value is not written in the
cbscript.json file.
description
An optional plain text string that describes the script package function.
This text string is displayed in the Description field of the Script Packages
pane when the compressed script package is uploaded. The text string can
have a maximum of 1024 characters.
Example:
"description": "This script package creates a JDBC Provider and Data Source
for a highly available DB2 Enterprise database cluster",
log
An optional location on the virtual machine for log files that are written as a
result of the script package execution. The value of this parameter is
displayed in the Logging directory field of the Script Packages pane when
the compressed script package is uploaded. The string value can have a
maximum of 4098 characters.
Example:
"log": "/tmp/SCAS_scriptpkg_logs",
Specifies that the script is run when the virtual system has finished
starting during the initial creation. This is the default value if this
parameter is not specified.
Specifies that the script is run when the virtual system is deleted.
Specifies that the script is started manually using the start icon that
is displayed next to the script name for a virtual machine. Click the
icon to run the script. There is no limit on the number of times a
script is run using this method.
Specifies that the script is run when the virtual system has finished
starting during the initial creation, and is also available to be
started manually by using the start icon that is displayed next to
the script name for a virtual machine. Click the icon to run the
script. There is no limit on the number of times a script is run
using this method.
Example:
"execmode": "2",
type
An optional indication of the type of script package. The only valid value
(and the default if not specified) is Application. Other values are for internal
use only.
Example:
"type": "APPLICATION",
keys
If the text string value of this attribute includes password, the key is
treated the same as if the password type attribute is specified.
Example:
"scriptkey": "DATABASE_PASSWORD",
scriptdefaultvalue
An initial default value for the environment variable that you can
modify later if needed. This attribute is required, but the value can
be blank. If validvalues is specified, the value specified for this
default must be one of the valid values specified in the
validvalues list.
Example:
"scriptdefaultvalue": "mainhost.ibm.com",
Note that this locking feature sets the locked state of the parameter
(to locked) when it is added to the Pattern Editor canvas, and is
only in effect at the time of deployment. This setting does not
affect your ability to do any of the following tasks:
The information in the following additional fields on the Script Packages pane is
not included in the cbscript.json file when the script package is downloaded:
v Current status
v Access granted to
v Comments
Overview
When you define script keys in your JSON object, you might need to specify a
script key as a password and ensure that the value for that field is obscured in the
user interface.
In this situation, you can define the script key and include a type field with a
value of password to indicate that this field is to be protected. The script key
format is similar to the following example:
{
"scriptkey": "DATABASE_PASSWORD",
"scriptvalue": "",
"scriptdefaultvalue": "",
"type": "password"
},
{
"scriptkey": "DATABASE_PASSWORD",
"scriptvalue": "",
"scriptdefaultvalue": "",
"type": "password"
},
{
"scriptkey": "DATABASE_HOST",
"scriptvalue": "",
"scriptdefaultvalue": "${DB2_ESE_Primary.hostname}.${DB2_ESE_Primary.domain}"
},
{
"scriptkey": "DATABASE_PORT",
"scriptvalue": "",
"scriptdefaultvalue": "50000"
}
]
}
Overview
By default, when you run a script package on a deployed virtual machine, the
original administrator user ID and password (for Windows) or the SSH RSA-key
(for Linux) is used to connect to the virtual machine.
If, for security reasons, the administrator password is changed, or if the capability
for root SSH login is disabled, the connection to the virtual machine cannot be
completed and the script package fails to run successfully.
In this situation, you can add two special environment variables to your
cbscript.json object file to specify an alternate user ID and password to connect
to the virtual machine:
v REMOTE_EXEC_OS_USER
v REMOTE_EXEC_OS_PASSWORD
This alternate set of credentials must be a valid operating system user with
sufficient permission to connect remotely to the virtual machine and run the script.
Important: To specify a user ID that is a member of the local Administrators
group, you must first disable User Account Control (UAC) remote restrictions.
Used remotely, these users have no elevation potential and cannot perform
administrative tasks. However, a domain user in the Administrators group runs
with a full administrator access token when used remotely, and UAC remote restrictions
are not in effect. For more information about disabling UAC remote restrictions,
see the related links.
You can add these environment variables to your script package definition, by
adding them directly to the cbscript.json object file or by adding them as new
environment variables in the Environment section of the Script Packages pane.
When you later add this script package to a virtual system pattern part, you can
configure the user ID and password values in the script package while editing the
pattern, or later when you deploy the pattern and run the script package on
demand.
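For example, the two variables can be defined as additional keys in the cbscript.json
object. This is a sketch only; the empty values would be supplied when you edit or
deploy the pattern:
{
"scriptkey": "REMOTE_EXEC_OS_USER",
"scriptvalue": "",
"scriptdefaultvalue": ""
},
{
"scriptkey": "REMOTE_EXEC_OS_PASSWORD",
"scriptvalue": "",
"scriptdefaultvalue": "",
"type": "password"
}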
Overview
In addition to the cbscript.json file, which contains essential configuration
information for your script package, you can include several more attributes to
extend your script package configuration for several special situations. This
extended attribute information is stored in another special JSON object that is
named extendedattributes.json, and must be in the root directory of the
uploaded compressed file (in the same directory as the cbscript.json object).
The extendedattributes.json object contains configuration information for the
following attributes:
envonly
Specifies whether script package parameter values are set only in the
environment, or if they are also written to the virtualimage.properties
file before the script package is run.
On Linux and AIX, this file is in /etc/virtualimage.properties. On
Windows, this file is in c:\windows\setup\ibm\virtualimage.properties.
For more information, see the subsequent section, Writing script package
parameters to virtualimage.properties on page 385.
savevars
Specifies whether changes made to script package parameter values are
persisted after deployment, and reapplied to subsequent runs of the script
package. For more information, see the subsequent section, Saving
configuration changes to run script packages after deployment on page
386.
product_license
Specifies license information for the script package. The following
attributes are available:
productid
The product ID associated with the license, as represented in the
license catalog. Example:
"productid": "5725L53"
licensetype
Defines the type of license. The following values are valid:
PerCore
This type of license charges users according to the number
of processor cores that are used.
PVU
licensecpu
When the type of license is SERVER, this required attribute
specifies the processor count limit for this server license. Example:
"licensecpu": "4"
licensememory
When the type of license is SERVER, this required attribute
specifies the memory limit in GB for this server license. Example:
"licensememory": "4"
Note: After you upload a script package, refresh the Script Packages page to
display the uploaded information from the JSON object in the console.
true
Script package parameter values are set only in the environment and are
not written to the virtualimage.properties file. This setting is the
preferred setting.
false
Script package parameter values are set in the environment and are written
to the virtualimage.properties file. This value is the default value. If this
file is not included in your script package, the default value of false is
assumed.
In general, your script packages should not need to access parameters from the
virtualimage.properties file. If you must run scripts manually in your
environment, save these parameters in your own file instead.
Important: When a script package executes, its environment variables, if any, are
added to the environment, along with any environment variables from previously
executed script packages. These environment variables are then written out to the
virtualimage.properties file or not, based on the value of the envonly setting for
that script package.
This means that if one script package runs with envonly set to true, followed by a
second script package that runs with envonly set to false (the default), the
environment variables for the second script package are written to
virtualimage.properties, including the environmental variables from the first
script package.
If your pattern deployment is running multiple script packages, and all have
envonly set to true, environment variables are still written to the
virtualimage.properties file, but script package variables are not. However,
if one or more scripts have envonly set to false (or if envonly is not specified),
then the virtualimage.properties file contains the variables from all of the script
packages that have run up to this point.
Example:
[
{
"envonly": "true",
"savevars": "1",
"product_license": {
"productid": "5725A26",
"licensetype": "SERVER",
"licensecpu": "16",
"licensememory": "4"
}
}
]
v Now, suppose you run the script package on instance VMa1 and change the
configuration settings, effectively creating a new set of parameters, PCa3.
v With the savevars attribute set to 1 (Yes), the configuration PCa3 is saved,
replacing both PCa1 for VMa1 and PCa for the virtual machine:
Virtual machine VMa now has the latest configuration settings, PCa3.
Instance VMa1 also has the latest configuration settings, PCa3.
Instance VMa2 still has its copy of the original configuration, PCa2.
v When the script is run subsequent times on VMa1, PCa3 is applied.
v The next time that the script is run on VMa2, however, PCa2 is applied, but after
execution completes, PCa2 is replaced by the latest configuration, PCa3. On
subsequent runs of the script on VMa2, PCa3 is applied:
Virtual machine VMa has the latest configuration settings, PCa3.
Instance VMa1 has the latest configuration settings, PCa3.
Instance VMa2 now has the latest configuration settings, PCa3.
WebSphere Application Server operation commands to use for start and stop
services commands
OPERATION_COMMAND="${WAS_PROFILE_ROOT}/bin/ws_ant.sh -f
/opt/IBM/AE/AS/wasHVControl.ant"
WebSphere Administrative Console URL
ADMIN_CONSOLE_URL=
WebSphere Cell Name
CELL_NAME=RainmakerCell0
WebSphere Default Profile location
WAS_PROFILE_ROOT=/opt/IBM/WebSphere/Profiles/DefaultAppSrv01
WebSphere Default Install Location
WAS_INSTALL_ROOT=/opt/IBM/WebSphere/AppServer
WebSphere Profile Root
PROFILE_ROOT=/opt/IBM/WebSphere/Profiles
WebSphere Hostname
HOSTNAME=vm-009-097.rainmaker.raleigh.ibm.com
WebSphere Node Name
NODE_NAME=RainmakerNode0
WebSphere Profile Name
PROFILE_NAME=DefaultAppSrv01
WebSphere Profile Type
PROFILE_TYPE=default
The following additional variables are dynamic. IBM Cloud Orchestrator adds
them each time a script requiring them is run. After the script has completed, these
environment variables are removed.
WebSphere Administrative Password
WAS_PASSWORD=password
WebSphere Administrative Username
WAS_USERNAME=virtuser
represent that future value of that property. In addition to custom properties, the
property name can also be any of the following built-in values:
Network-oriented variables
v hostname
v domain
v ipaddr
v netmask
v gateway
v pri_dns
v sec_dns
Note: If you want to set the hostname variable for the Windows system,
consider the limitation described in Setting the host name when
deploying a Windows system on page 1039.
Locale-oriented variables
v language
v country
v encoding
WebSphere Application Server-oriented variables
v cell_name
v node_name
v augment_list
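For example, a script package key can use these built-in values to reference the
future host name and domain of another part in the pattern, in the same style as the
DB2 example shown earlier in this chapter. The part name DB2_ESE_Primary is
illustrative:
{
"scriptkey": "DATABASE_HOST",
"scriptvalue": "",
"scriptdefaultvalue": "${DB2_ESE_Primary.hostname}.${DB2_ESE_Primary.domain}"
}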
Related tasks:
Associating a script package with a pattern on page 371
You can associate a script package with a pattern and defined cell through the user
interface.
Working with virtual system patterns (classic) on page 434
Using a virtual system pattern, you can describe the topology of a system that you
want to deploy. Virtual system patterns provide repeatable system deployment that
can be reproduced. To build virtual system patterns, you can use parts from one or
more virtual images, add-ons, and script packages.
},
{
"locked": false,
"required": true,
"scriptdefaultvalue": "0",
"scriptkey": "SCALE_DOWN_CPU_THRESHOLD"
},
{
"locked": false,
"required": true,
"scriptdefaultvalue": "300",
"scriptkey": "TRIGGER_TIME"
}
],
"location": "\/tmp\/VerticalScaleGenericCPU",
"log": "\/tmp\/VerticalScaleGenericCPU",
"name": "VerticalScaleGenericCPU",
"ostype": "linux\/unix",
"timeout": 0,
"type": "APPLICATION"
}
]
For each vertical scaling operation, the number of vCPUs is increased by one CPU.
},
{
"locked": false,
"required": true,
"scriptdefaultvalue": "300",
"scriptkey": "TRIGGER_TIME"
}
],
"location": "\/tmp\/VerticalScaleGenericMemory",
"log": "\/tmp\/VerticalScaleGenericMemory",
"name": "VerticalScaleGenericMemory",
"ostype": "linux\/unix",
"timeout": 0,
"type": "APPLICATION"
}
]
enable_as.sh
This is the script that is called when the script package is run.
When you deploy a virtual system pattern that uses this script package, the
deployment history (over time) shows when the amount of memory is changed. A
message is displayed, similar to the following example:
Memory changed for virtual machine auslpas158-Standalone from 2048 to 3072
For each vertical scaling operation, the amount of memory is increased by 1024
MB.
"locked": false,
"required": true,
"scriptdefaultvalue": "80",
"scriptkey": "SCALE_UP_CPU_THRESHOLD",
"scriptvalue": "80"
},
{
"locked": false,
"required": true,
"scriptdefaultvalue": "0",
"scriptkey": "SCALE_DOWN_CPU_THRESHOLD",
"scriptvalue": "0"
},
{
"locked": false,
"required": true,
"scriptdefaultvalue": "120",
"scriptkey": "TRIGGER_TIME",
"scriptvalue": "120"
}
],
"location": "\/tmp\/WASHV_CPUBased",
"log": "\/tmp\/WASHV_CPUBased",
"name": "WAS_VerticalScalingCPUScriptPkg",
"ostype": "linux\/unix",
"timeout": 0,
"type": "APPLICATION"
}
]
[
{
"savevars": "0"
}
]
was_webresource.py
This script is located in the scripts folder, and is called by resource.py to
perform the middleware tuning to adjust the thread pool size for
WebSphere Application Server.
The script contains the following code:
import AdminUtilities
def getName(objectId):
    endIndex = (objectId.find("(c") - 1)
    stIndex = 0
    if (objectId.find("\"") == 0):
        stIndex = 1
    return objectId[stIndex:endIndex+1]
assert len(sys.argv) == 1
target_cpu = int(sys.argv[0])
setMaxInt = 20 * target_cpu
setMaxStr = str(setMaxInt)
print "Set WebContainers thread pool max size: %s, min size: %s" % \
    (setMaxStr, setMaxStr)
try:
    theList = AdminControl.completeObjectName(
        "WebSphere:*,type=ThreadPool,name=WebContainer")
    theList = theList.splitlines()
    for tp in theList:
        if tp.find("WebContainer") != -1:
            currMinSize = AdminControl.invoke(tp, "getMinimumPoolSize")
            currMinSize = int(currMinSize)
            currMaxSize = AdminControl.invoke(tp, "getMaximumPoolSize")
            currMaxSize = int(currMaxSize)
"scriptkey": "MAX_MEMORY",
"scriptvalue": "4096"
},
{
"locked": false,
"required": true,
"scriptdefaultvalue": "80",
"scriptkey": "SCALE_UP_MEMORY_THRESHOLD",
"scriptvalue": "80"
},
{
"locked": false,
"required": true,
"scriptdefaultvalue": "0",
"scriptkey": "SCALE_DOWN_MEMORY_THRESHOLD",
"scriptvalue": "0"
},
{
"locked": false,
"required": true,
"scriptdefaultvalue": "120",
"scriptkey": "TRIGGER_TIME",
"scriptvalue": "120"
}
],
"location": "\/tmp\/WASHV_MemoryBased",
"log": "\/tmp\/WASHV_MemoryBased",
"name": "WAS_VerticalScalingMemoryScriptPkg",
"ostype": "linux\/unix",
"timeout": 0,
"type": "APPLICATION"
}
]
import os
import sys
import json
import re
import shutil
import commands
import logging
import subprocess
def calculate_64bit_jvmmemory(matchobj):
    newHeapSize = int(matchobj.group(1)) + int(gap/1.5/128)*128
    if newHeapSize > 6144:
        return "6144"
    else:
        return str(newHeapSize)
def calculate_32bit_jvmmemory(matchobj):
    newHeapSize = int(matchobj.group(1)) + int(gap/1.5/128)*128
    if newHeapSize > 2048:
        return "2048"
    else:
        return str(newHeapSize)
def jvm_memory_str(matchobj):
    newHeapSize = int(int(parms["newMemory"])/1.5/128)*128
    if newHeapSize > 2048:
        newHeapSize = 2048
    strval = str(matchobj.group(1)) + ' initialHeapSize="128" maximumHeapSize="' + str(newHeapSize) + '"'
    return str(strval)
def startServer():
    #os.system("/opt/IBM/WebSphere/AppServer/bin/startServer.sh server1")
    subprocess.call(["sh", "/opt/IBM/WebSphere/AppServer/bin/startServer.sh",
        "server1"])
    logger.debug("start Server...")
def stopServer():
    #os.system("/opt/IBM/WebSphere/AppServer/bin/stopServer.sh server1")
    command = "/opt/IBM/WebSphere/AppServer/bin/wsadmin.sh -conntype SOAP " \
        "-lang jython -f /tmp/WASHV_MemoryBased/scripts/stopServer.py"
    subprocess.call(command, shell=True)
    logger.debug("stop Server...")
logger = logging.getLogger("resource.py")
parms = json.loads(sys.argv[1])
if int(parms["newMemory"]) > int(parms["oldMemory"]):
    gap = int(parms["newMemory"]) - int(parms["oldMemory"])
    #originalFile = "/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/config/cells/localhostNode01Cell/nodes/localhostNode01/servers/server1/server.xml"
    originalFile = "/opt/IBM/WebSphere/Profiles/DefaultAppSrv01/config/cells/CloudBurstCell_1" \
        "/nodes/CloudBurstNode_1/servers/server1/server.xml"
    backupFile = originalFile + ".bk"
    output = commands.getoutput("/opt/IBM/WebSphere/AppServer/bin/versionInfo.sh")
    #print output
    if os.path.exists(originalFile):
        if not os.path.exists(backupFile):
            shutil.copy(originalFile, backupFile)
        with open(originalFile) as cfgFile:
            data = cfgFile.read()
        m = re.search("32 bit", output)
        n = re.search("64 bit", output)
        if m:
            print "IBM 32-bit SDK for Java"
            t = re.search("maximumHeapSize=", data)
            if t:
                if re.search('maximumHeapSize="2048"', data):
                    print 'maximumHeapSize="2048" There is no extra memory size to scaling up'
                    logger.debug("There is no extra memory size to scaling up")
                    new_data = data
                else:
                    new_data = re.sub(r'(?<=maximumHeapSize=")(\d+)',
                        calculate_32bit_jvmmemory, data)
            else:
                new_data = re.sub(r'(?<=verboseModeJNI=")(\S+)', jvm_memory_str, data)
        elif n:
            print "IBM 64-bit SDK for Java"
            new_data = re.sub(r'(?<=maximumHeapSize=")(\d+)',
                calculate_64bit_jvmmemory, data)
        else:
            print "Nothing changed..."
            logger.debug("There is no related information ")
        if cmp(data, new_data) != 0:
            with open(originalFile, "w") as cfgFile:
                cfgFile.write(new_data)
            Info = commands.getoutput("ps -ef | grep java")
            res = re.search("server1", Info)
            if res:
                print "stopServer ...."
                stopServer()
                print "startServer ...."
                startServer()
            else:
                print "startServer ...."
                startServer()
    else:
        logger.debug("There is no configuration file to update ")
stopServer.py
This script is located in the scripts folder, and is called by resource.py to
stop the WebSphere Application Server instance.
The script contains the following code:
AdminControl.stopServer("server1", "CloudBurstNode_1")
system environment.
Script variables
There are two parameters that are part of this script package.
APP_LOCATION:
Required. The location of the application. The location of the application
can be either a file system location or remote location over http or https.
INSTALL_ARGS:
Optional. Install arguments for the AdminApp.install() wsadmin command.
The default command is "AdminApp.install(appLocation,
'[-usedefaultbindings]')". Other arguments can be supplied using this
variable. An example value for this variable is "-usedefaultbindings
-server myServer -appName MyApp". If the application is remote, it is
copied to the current working directory before the installation command is
started.
cbscript.json example
[
{
"name": "Install application",
"version": "1.0.0",
"description": "This script package installs the specified application",
"command": "/bin/sh ${WAS_PROFILE_ROOT}/bin/wsadmin.sh",
"log": "${WAS_PROFILE_ROOT}/logs/wsadmin.traceout",
"location": "/opt/tmp/installapp",
"timeout": "0",
"ostype": "linux/unix",
"commandargs": "-lang jython -f /opt/tmp/installapp/install_app.jy
$APP_LOCATION $INSTALL_ARGS",
"keys":
[
{
"scriptkey": "APP_LOCATION",
"scriptvalue": "",
"scriptdefaultvalue": ""
},
{
"scriptkey": "INSTALL_ARGS",
"scriptvalue": "",
"scriptdefaultvalue": ""
}
]
}
]
Example script
Note: This example script is designed for version 7.0.0.x patterns only.
import urllib
from java.lang import String
from java.lang import Boolean
from java.io import File
from java.net import URL
def download(url):
    fileLocs = String(url).split("/")
    lastPart = fileLocs[len(fileLocs) - 1]
    file = File(lastPart)
    file.createNewFile()
    newFileLoc = file.getAbsolutePath()
    urllib.urlretrieve(url, newFileLoc)
    return newFileLoc
def copyZip(binURL):
    binURL = str(binURL)
    url = None
    fileRemote = Boolean.FALSE
    appFileLoc = ""
    try:
        url = URL(binURL)
        fileRemote = Boolean.TRUE
    except:
        pass
    if fileRemote:
        print "Start retrieval of " + binURL
        appFileLoc = download(str(binURL))
    else:
        print "File already local " + binURL
        appFileLoc = File(binURL).getAbsolutePath()
    return appFileLoc
binURL = sys.argv[0]
installArgs = "[-usedefaultbindings]"
if len(sys.argv) == 2:
    installArgs = "[" + sys.argv[1] + "]"
appLocation = copyZip(binURL)
AdminApp.install(appLocation, installArgs)
AdminConfig.save()
Script variables
The following parameter is included in this script package.
TRACE_SPEC:
Specifies the trace specification for the cell. This parameter is required.
cbscript.json example
[
{
"name": "Configure Trace Specification",
"version": "1.0.0",
"description": "This script package configures trace specification
on all servers in a cell",
"command": "${WAS_PROFILE_ROOT}/bin/wsadmin.sh",
"log": "${WAS_PROFILE_ROOT}/logs/wsadmin.traceout",
"location": "/opt/tmp/configtrace",
"timeout": "0",
"ostype": "linux/unix",
"commandargs": "-lang jython -f /opt/tmp/configtrace/configure_trace.jy
$TRACE_SPEC",
"keys":
[
{
"scriptkey": "TRACE_SPEC",
"scriptvalue": "",
"scriptdefaultvalue": ""
}
]
}
]
Example script
Note: This example script is designed for version 7.0.0.x patterns only.
from java.lang import String
traceSpec = sys.argv[0]
nodes = AdminNodeManagement.listNodes()
for node in nodes:
    nodeStr = String(node)
    nodeStr = String(nodeStr.substring(0, nodeStr.indexOf("("))).trim()
    appServers = AdminServerManagement.listServers("APPLICATION_SERVER", nodeStr)
    for appServer in appServers:
        appServerStr = String(appServer)
        appServerStr = String(appServerStr.substring(0, appServerStr.indexOf("("))).trim()
        AdminTask.setTraceSpecification("[-serverName " + appServerStr + " -nodeName "
            + nodeStr + " -traceSpecification " + traceSpec + " -persist true]")
AdminConfig.save()
Script variables
The following parameter is included in this script package.
SERVER_NAME:
Specifies the name of the server to be created on each node. If multiple
nodes exist in the pattern, the server name is augmented with a counter
that begins at 1. This parameter is required.
cbscript.json example
[
{
"name": "Server creation",
"version": "1.0.0",
"description": "This script package creates a server on each node
within the cell",
"command": "${WAS_PROFILE_ROOT}/bin/wsadmin.sh",
"log": "${WAS_PROFILE_ROOT}/logs/wsadmin.traceout",
"location": "/opt/tmp/createserver",
"timeout": "0",
"ostype": "linux/unix",
"commandargs": "-lang jython -f /opt/tmp/createserver/create_server.jy
$SERVER_NAME",
"keys":
[
{
"scriptkey": "SERVER_NAME",
"scriptvalue": "",
"scriptdefaultvalue": ""
}
]
}
]
Example script
Note: This example script is designed for version 7.0.0.x patterns only.
serverName = sys.argv[0]
managedNodeStr = AdminTask.listManagedNodes()
if len(managedNodeStr) != 0:
    managedNodes = managedNodeStr.split("\n")
    i = 1
    for managedNode in managedNodes:
        thisServer = serverName + "_" + str(i)
        AdminServerManagement.createApplicationServer(managedNode, thisServer, "default")
        i = i + 1
else:
    node = AdminControl.getNode()
    AdminServerManagement.createApplicationServer(node, serverName, "default")
AdminConfig.save()
Script variables
There are two parameters included in this script package.
CLUSTER_NAME:
Specifies the name of the new cluster. This parameter is required.
SERVER_NAME:
Specifies the name of the servers. The script package automatically
appends numbers to the supplied server name to ensure that each server
name is unique. This parameter is required.
cbscript.json example
[
{
"name": "Cluster creation",
"version": "1.0.0",
"description": "This script package creates a server on each node
within the cell and then creates a cluster from those servers",
"command": "${WAS_PROFILE_ROOT}/bin/wsadmin.sh",
"log": "${WAS_PROFILE_ROOT}/logs/wsadmin.traceout",
"location": "/opt/tmp/createcluster",
"timeout": "0",
"ostype": "linux/unix",
"commandargs": "-lang jython -f /opt/tmp/createcluster/createCluster.jy
$CLUSTER_NAME $SERVER_NAME",
"keys":
[
{
"scriptkey": "CLUSTER_NAME",
"scriptvalue": "",
"scriptdefaultvalue": ""
},
{
"scriptkey": "SERVER_NAME",
"scriptvalue": "",
"scriptdefaultvalue": ""
}
]
}
]
Example script
Note: This example script is designed for version 7.0.0.x patterns only.
cellName = AdminControl.getCell()
clusterName = sys.argv[0]
serverName = sys.argv[1]
managedNodeStr = AdminTask.listManagedNodes()
managedNodes = managedNodeStr.split("\n")
i = 0
for managedNode in managedNodes:
    appServers = AdminServerManagement.listServers("APPLICATION_SERVER", managedNode)
    webServers = AdminServerManagement.listServers("WEB_SERVER", managedNode)
    appSrvLen = len(appServers)
    webSrvLen = len(webServers)
    if appSrvLen == 0 and webSrvLen == 0:
        if i == 0:
            AdminTask.createCluster("[-clusterConfig [-clusterName " + clusterName +
                " -preferLocal true]]")
        cluster = AdminConfig.getid("/ServerCluster:" + clusterName + "/")
        memberName = serverName + str(i)
        node = AdminConfig.getid("/Node:" + managedNode + "/")
        AdminConfig.createClusterMember(cluster, node, [["memberName", memberName]])
        i = i + 1
AdminConfig.save()
Managing add-ons
You can configure user and NIC parts in your catalog and then use them as parts
in your patterns.
Procedure
1. Navigate to the Add-Ons window by clicking PATTERNS > Pattern Design >
Add-Ons from the menu bar.
2. You can perform the following tasks:
v Adding add-ons to the catalog on page 411
v Cloning an add-on on page 412
v Editing an add-on on page 413
v Making add-ons read-only on page 414
v Associating an add-on with a pattern on page 414
v Deleting an add-on on page 415
For more information about the fields on the Add-Ons window, see Fields on
the Add-Ons user interface on page 416.
Results
After completing these steps you have managed the add-ons that you can add to
parts on deployable patterns.
What to do next
After you have configured your add-ons you can work with them as parts on a
pattern in the Pattern Editor window. For more information about editing parts,
see Configuring parts on page 447. For more
information about working with patterns, see Working with virtual system
patterns (classic) in the user interface on page 436. If you are working with a NIC
add-on, it can be configured with an environment profile. See Managing
environment profiles on page 354 for more information.
Procedure
1. From the left panel of the Add-Ons window, click Create New to add an
Add-On to the catalog.
2. Select the add-on archive and click Import. The new add-on displays in the left
panel of the Add-Ons window. The right panel displays configuration options
for the add-on.
3. Select the type of add-on. Add-ons can be one of the following types:
Disk
Adds a virtual disk to the virtual machine and, optionally, formats and
mounts the disk.
NIC
4. Optional: Add or change the description to help identify the add-on.
5. Optional: In the Add-on package files section, provide the add-on package
files.
6. Optional: Use the Environment section to remove existing environment
variables or create new ones.
7. Optional: Configure standard script package parameters for the following
directories:
v Working
v Logging
v Executable
v Arguments
8. Optional: Set a timeout value in the Timeout field.
9. Optional: Add or change the user permissions for this add-on with the Access
granted to field.
Results
You added a new add-on to the catalog.
What to do next
The add-on is ready to be used as a part in your patterns.
Cloning an add-on
You can create an add-on based on an existing add-on. Default add-ons are
included that can either be added as they are or cloned and edited. The cloned
add-on can then be modified to your specifications.
Procedure
1. Locate the add-on to clone. If it is not displayed, you can search for the
add-on. From the left panel of the Add-Ons window, enter all or part of the
name of the add-on in the search field.
2. Select the add-on to clone. Click the add-on to copy from the listing on the
left panel of the Add-Ons window. Details about the add-on are displayed in
the right panel of the Add-Ons window.
3. Click the clone icon to copy the add-on you have selected.
4. Enter the name for the new add-on and click OK. The details panel for the
new add-on is now displayed and can be modified. When you have
completed the configuration for the add-on, the add-on is saved for you as
you navigate away from the Add-Ons window.
5. Optional: Add or change the description to help identify the add-on.
6. Optional: In the Add-on package files section, provide the add-on package
files in one of the following ways:
v Provide a custom add-on package using the browse function to locate your
custom package.
v Download and modify the default add-on implementation.
7. Optional: Use the Environment section to remove existing environment
variables or create new ones.
8. Optional: Configure standard script package parameters for the following
directories:
v Working
v Logging
v Executable
v Arguments
9. Optional: Set a timeout value in the Timeout field.
10. Optional: Add or change the user permissions for this add-on.
Results
After you have completed these steps, any users or groups with access can use the
add-on with pattern nodes.
What to do next
You can now associate this add-on with a pattern. See Working with virtual
system patterns (classic) on page 434 for more information about patterns.
Editing an add-on
You can edit any add-on that is not read-only. You can modify an add-on to suit the
changing needs of your environment.
Procedure
1. Select the add-on. Select the add-on you want to edit from the left panel of the
Add-Ons window. Add-ons that have the locked symbol by them cannot be
edited. Add-ons with the edit symbol beside them can be edited. The
information about that add-on is shown in the right panel of the Add-Ons
window.
2. Click the edit icon. From the top of the right panel of the Add-Ons window,
click the edit icon.
3. Edit the fields on the right panel of the Add-Ons window. For more
information about these fields, see Fields on the Add-Ons user interface on
page 416.
Results
When you have finished editing an add-on, it is ready to be added to a part on a
pattern.
What to do next
You can lock the add-on against future editing. For more information, see Making
add-ons read-only on page 414. You can add the add-on to a part on a pattern.
For more information, see Working with virtual system patterns (classic) on page
434.
Procedure
1. Select the add-on. From the left panel of the Add-Ons window, select the
add-on. Add-ons that have the read-only symbol by them are already read-only
and cannot be edited. Add-ons with the edit symbol beside them are not
read-only and can be edited. Basic information about the selected add-on is
shown in the right panel of the Add-Ons window.
2. Make the add-on read-only to lock it against future editing. Click the Lock icon in
the upper right toolbar of the Add-Ons window.
3. Verify that you want to make the add-on read-only. When prompted to verify
that you want to make the add-on read-only, click OK to lock the add-on.
Results
When you have made the add-on read-only, it can be cloned or deleted but it
cannot be edited.
What to do next
You can include the add-on on a part in a pattern to be deployed to the cloud. For
more information, see Working with virtual system patterns (classic) on page
434.
Procedure
1. Navigate to the Virtual System Patterns window by clicking PATTERNS >
Instances > Virtual System Patterns, for virtual systems, or by clicking
PATTERNS > Instances > Virtual System Patterns (Classic), for virtual
systems (classic).
2. Select a pattern. In the left panel of the Virtual System Patterns window, select
the pattern with which to associate the add-on. The pattern must not be
read-only. Basic information about the selected pattern displays in the right
panel of the Virtual System Patterns view.
3. Edit the pattern. Click Edit on the upper right of the Virtual System Patterns
(Classic) window, for virtual systems (classic), or click Open on the upper right
of the Virtual System Patterns window, for virtual systems.
4. Add Add-Ons.
For virtual systems (classic): from the drop-down box in the left panel of the
Pattern Editor, click Add-Ons. A list of the add-on parts is provided that can be
dropped into the virtual image parts on the right panel of the Pattern Editor
view. This list can contain any add-ons provided for use with IBM Cloud
Orchestrator. Add-ons can then be added to the virtual image parts. You can
drop any add-on from this list onto the virtual image parts on the canvas on
the right. This associates the add-on with that part.
For virtual systems: in Pattern Builder click the image object. Then click Add a
Component Add-On and select from the list the add-on that you want to add
to the image object. This operation associates the add-on with the image part.
5. Optional: Configure any properties defined in the add-on. Properties added to
add-ons can be defined when you associate the add-on with a part, or they can be
defined during deployment.
For virtual systems (classic): click the edit properties icon to set the value now.
It is possible to use a variable syntax to set the value for properties where the
value is not yet known. For more information about setting the value of a
property to be variable, see Properties variable syntax on page 390.
For virtual systems: click the associated add-on object in the image part and
edit the properties of the add-on on the menu in the right side of Pattern
Builder.
Results
You added one or more add-ons to the virtual images on the pattern.
What to do next
When you have associated the add-on with a pattern, you can complete
configuration of the pattern and deploy it to the cloud group.
Deleting an add-on
You can manually delete an add-on, disassociating it from the pattern and cell,
using the user interface.
Procedure
1. Locate the add-on. If it is not displayed, you can search for the add-on you
want to delete. From the left panel of the Add-Ons window, enter all or part of
the name of the add-on in the search field.
2. Select the add-on to delete from the listing on the left panel of the Add-Ons
window. Details about the add-on are displayed in the right panel of the
Add-Ons window.
3. Determine if it can be deleted. An add-on can only be deleted if it is not:
v Marked as read-only. The read-only icon is shown in the listing of add-ons if
it is read-only.
v Included in any patterns. If it is included in any patterns, the delete icon is
not available and the Included in patterns field displays the linked patterns
in which this add-on is included.
If the add-on is referenced by any patterns, you can click the pattern name link
in the Included in patterns field to go to the Virtual System Patterns panel for
that pattern. From this panel, you can remove the add-on part from the part, or
parts, on the pattern.
4. Delete the add-on. Click the delete icon on the top right of the Add-Ons
window.
5. Confirm the deletion. You are prompted to confirm that you want to delete the
selected add-on. Click OK to delete the add-on.
Results
The add-on is deleted from IBM Cloud Orchestrator.
Delete Removes the add-on from the IBM Cloud Orchestrator catalog.
Created on
The creation time of the add-on, as the number of seconds since midnight
January 1, 1970 UTC. When the add-on is displayed, this value is shown as
the date and time in the local timezone.
Current status
The status of the add-on can be one of the following status types:
The edit icon
The add-on can be edited.
Read-only
The add-on is locked to editing. For information about making an
add-on read-only, see Making add-ons read-only on page 414.
Updated on
The time the add-on was last updated, as the number of seconds since
midnight, January 1, 1970 UTC. When the add-on is displayed, this value
is shown as the date and time in the local timezone. This field is read-only.
An example of converting this value is shown after these field descriptions.
Add-on package files
If you are cloning one of the provided default add-ons, you can create
custom add-ons by downloading and modifying the add-on package. The
add-on package is defined for each type of add-on:
Default add NIC
Download the defaultaddnic.zip package.
Default configure NIC
Download the defaultconfigurenic.zip package.
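For example, assuming a workstation with the GNU date command (the stored value
shown is illustrative only), you can convert a Created on or Updated on value to a
readable date from a command line:

date -u -d @1420070400
Thu Jan  1 00:00:00 UTC 2015

The user interface performs the equivalent conversion and shows the result in your
local timezone.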
Procedure
1. Click PATTERNS > Pattern Design > Virtual System Patterns.
2. Click Create New.
3. Build your virtual system pattern:
a. Select a virtual system template or start from a blank pattern.
b. Set the Name and Version for the virtual system pattern.
Note: Use a unique name and version combination for the pattern. If you
attempt to create a pattern with the same name and version as an existing
pattern, an error is displayed.
c. Click Start Building.
The Pattern Builder opens in a new web browser page where you can
build the virtual system pattern.
4. Specify the properties for the pattern in the pattern properties pane:
Name The name of the virtual system pattern.
Version
The version of the virtual system pattern.
Description
The description of the virtual system pattern. This field is optional.
Type
v Some properties have a lock symbol next to their value in the right
pane. If you lock a property by clicking this symbol, that property is not
configurable from the deployment page when a user deploys the virtual
system pattern.
v Some properties have an icon with a green arrow next to them. Click
this icon to add a reference to a component-level or pattern-level
parameter as the value for the attribute, and create an explicit data
dependency link. After you click the icon, the Add a reference page is
displayed:
Select whether you want to reference a component-level parameter or
a pattern-level parameter.
Then, select the component and an output attribute for that
component that you want to reference.
Click Add to create the reference.
Tip:
For example, if your pattern contains a DB2 component and a
WebSphere Application Server component, you might want the
db_user parameter for the DB2 component to reference the WASROOT
output attribute of the WebSphere Application Server component.
Refer to the documentation for the pattern that you are using for
specific details about its components and attributes.
v If you want to remove an asset from the canvas, click Remove
component on the asset.
v After you build your pattern, you can change to the List View tab to
view the topology as a vertical list of components. You can configure the
values for the properties in the pattern from the canvas, or from the List
View tab.
Images
Virtual images provide the operating system and product binary
files that are required to create a virtual system instance. Some
virtual images include only the operating system files, while others
also include product binary files.
If multiple versions of the image are available, such as 8.5.5.0
and 8.5.5.1, select the version that you want to deploy with the
pattern after you add the image to the canvas.
Scripts
Scripts can include almost any set of executable files and artifacts
that are valid for the target virtual machine environment. Create
these scripts on your local workstation, in IBM PureApplication
System, or by using the Plug-in Development Kit. Then, import the
script package to the system so that it is available in the catalog
and in the Assets palette in the Pattern Builder.
Note:
v Scripts cannot be added to the canvas directly. You must add
them to images or components.
v If multiple versions of the script are available, such as 1.0 and
2.0, select the version that you want to use with the pattern
after you add the script to the canvas.
Enable HTTPS
Specify whether HTTPS is used by the topology (for a
pattern-level policy) or component (for a component-level
policy). If you select Enable HTTPS, you must enter the
port that the topology or component uses for HTTPS. If the
port is specified, it is opened on the firewall when the
pattern deploys.
6. Optional: Click the Add a Component Add-on icon on the image to add an
add-on. For more information about add-ons, see the Related concepts
section.
7. Optional: Click Advanced Options on the canvas to configure advanced
options. The advanced options that are available depend on the topology of
the virtual system pattern you are editing.
Note: When you open the advanced options editor for a new virtual system
pattern that has no advanced options set, the displayed settings are the
recommended values for the topology.
8. Optional: If your pattern includes a WebSphere Application Server software
component, configure these properties:
v Specify the location of the WebSphere Application Server installation.
Note: This property is mapped to the installation directory property of the
WebSphere Application Server software component, so it should be
populated automatically. If it is not populated, enter the installation location
manually.
v If there is more than one WebSphere Application Server on the node, and
you want to specify the order in which the servers are restarted, specify the
order in the WAS Servers Restarting Order property. Enter the server
names in the required order. Separate the server names with semicolons.
v Specify the WebSphere Application Server user name.
9. Set the order of deployment for each of the components in the pattern.
a. Use the Move up and Move down options on software components and
scripts to determine the order in which components are started.
b. Change to the Ordering tab to change the order for other components in
the pattern. You can also modify the order for software components and
scripts from this tab.
10. Click Save.
What to do next
When creating virtual system patterns, you can use the Pattern Builder to specify
an OS prerequisite list or a set of global package installation options that are
required by a component in the metadata of the system plug-in. The Red Hat
Update Infrastructure (RHUI) Service or the Red Hat Satellite Service must be
deployed in the cloud group before you deploy the pattern, which allows virtual
machine clients to download and install the RPMs using these services. The RHUI
Service provides a Yellowdog Updater, Modified (YUM) repository.
If you are a software component developer and need to ensure certain Red Hat OS
RPMs are installed in the image, you must ensure that they are described in the
metadata.json file. For example, to add MySQL and VNC packages to the
deployed virtual machines automatically, you can add a stanza to the
metadata.json file as shown in the following text:
"configurations": {
-"partsKey": "Yum",
-"packages": [
-"YUM"],
-"prereq_os_packages": [
-{
-"packages": [
-"mysql",
-"vnc"],
-"type": "yum"}]},
During pattern deployment, OS packages can be added or removed from the list
specified by the Pattern Builder, and can override deployment policies at a virtual
machine level or a pattern level for the entire pattern with multiple virtual
machines.
Procedure
1. Click PATTERNS > Pattern Design > Virtual System Patterns.
2. Click the Import icon on the toolbar.
3. Click Browse to select the compressed file that contains the pattern that you
want to import.
4. Click Import.
v If there are no existing patterns on the system with the same name and
version as the pattern that you are importing, the pattern is imported.
v If a pattern with the same name and version exists on the system, and one or
both of the patterns are not read-only, you are prompted with options:
a. Specify a unique name for the imported pattern.
Note: This option is available only when the pattern that you are
importing is not read-only.
b. Specify a unique version for the imported pattern.
Note: This option is available only when the pattern that you are
importing is not read-only.
c. Replace the existing pattern.
Note: This option is available only when the existing pattern is not
read-only.
After you make a selection, click Import again to complete the import
process.
v If a pattern with the same name and version exists on the system, and both
of the patterns are read-only, the import fails.
What to do next
After you import your virtual system, you can edit the model and deploy it into
the system.
Procedure
1. Click PATTERNS > Pattern Design > Virtual System Patterns.
2. Select a virtual system pattern.
3. Click the Clone icon on the toolbar.
4. Provide the following information about the new virtual system pattern:
Name Enter a unique name for the virtual system pattern in the Name field.
Virtual images
If needed, select a different virtual image for any of the virtual images
that are used in the pattern.
5. Click OK. When the information is processed, the Virtual System Patterns page
displays. The virtual system pattern that you created is added to the list. Select
the cloned virtual system pattern to display more details about the pattern.
6. Click the Open icon to open the cloned virtual system pattern in the Pattern
Builder. Edit the virtual system pattern as needed. You can add or remove
images, and then you can add or remove software components, script packages,
add-ons, and policies. You can also add or update property mappings between
components.
7. When you are finished editing the pattern, click Save.
What to do next
You can lock the virtual system pattern against future editing. For more
information about making virtual system patterns read-only, see the related links.
You can also deploy the virtual system pattern to the cloud. For more information
about deploying, see the related links.
Procedure
1. Click PATTERNS > Pattern Design > Virtual System Patterns.
2. Click Export on the toolbar.
3. Click Save to save the compressed file for the pattern to a local directory.
Procedure
1. Click PATTERNS > Pattern Design > Virtual System Patterns.
2. If the pattern has multiple versions, expand the entry for the pattern.
3. Click Delete in the Actions column for the pattern that you want to delete or
select the virtual system pattern and click Delete on the virtual system pattern
details page.
4. Click Confirm to confirm that you want to delete the pattern.
Procedure
1. Click PATTERNS > Pattern Design > Virtual System Patterns.
2. Select the virtual system pattern that you want to modify.
Note: If the pattern has multiple versions, expand the entry for the pattern,
and then select the version that you want to modify.
3. Click Open on the toolbar.
4. Modify the pattern, as needed. For more information about the assets that are
available, see the "Creating virtual system patterns" topic in the related links.
If you want to preserve the previous configuration for the pattern, you can save
the modified pattern as a different version by changing the value in the
Version field. You also have the option to set a different version when you save
the pattern.
5. Save the pattern:
v Click the Save icon to save the pattern with the same name and version. This
option overwrites the existing pattern with the specified name and version.
v Click Save as to change the name or version for the pattern. This option
preserves the original pattern, and saves the modified pattern with the new
name and version. If you change only the version, the pattern is displayed
by name, and you can expand the entry to see all versions of the pattern.
Procedure
1. Click PATTERNS > Pattern Design > Virtual System Patterns. Virtual system
patterns that have a status of Read-only cannot be edited. Virtual system
patterns that have a status of Draft can be edited.
2. Select the virtual system pattern that you want to make read-only to show the
pattern details.
3. To lock editing of the virtual system pattern, click the Lock icon in the toolbar
of the Pattern details page.
You are prompted with a list of any unlocked components in the pattern that
can be modified even if the pattern is read-only.
4. Click Confirm to lock the virtual system pattern or Cancel to discard the
changes.
CAUTION:
You cannot unlock a virtual system pattern after you lock it.
Results
If you clicked Confirm to lock the virtual system pattern, the pattern is now
read-only.
Note: If the pattern contains one or more script packages or add-ons that are not
locked, the Current status field in the Virtual System Patterns details page is
displayed with the following status:
Read-only (The pattern content can still be modified because some script packages,
add-ons, or images are not read-only)
You can lock your script packages and add-ons as needed before you deploy the
virtual system pattern. To lock a script package or add-on, return to the
appropriate catalog page in the console, select the script package or add-on, and
use the Lock function to change the state to read-only.
Note: After upgrading to 2.4.0.x, when deploying a virtual system pattern created
in IBM Cloud Orchestrator 2.4 that uses Linux images, you are prompted for the
password of the virtuser user (the default non-root user to access the deployed
virtual machines). Provide the additional information in the virtual system pattern
deployment dialog or modify the pattern by using the Pattern Editor.
Procedure
1. Click PATTERNS > Pattern Design > Virtual System Patterns.
2. Click Deploy in the Actions column for the pattern that you want to deploy,
or select the pattern that you want to deploy and click Deploy on the toolbar.
On the Configure pane:
3. Edit the name for the deployment, if needed. This name displays on the
Instances page after the pattern deploys.
4. Select the environment profile that you want to use for the deployment.
v If the network for the selected environment profile is set to Internally
Managed:
Select a Cloud Group and an IP Group.
Note: You can also set the IP group for each virtual machine on the
Distribute pane. You cannot change the Cloud Group for the
deployment after you configure it on this pane.
The deployment is limited to a single cloud group.
v If the network for the selected environment profile is set to Externally
Managed, you select the cloud group and IP group for the deployment
later, on the Distribute pane.
5. Set the priority for the deployment.
Note: For more information about deployment priorities, see the Related
tasks.
6. Modify the deployment schedule as needed:
v Choose Start now, or choose Start later and select a date and time for the
deployment to start.
v Choose Run indefinitely, or choose Run until and select a date and time
for the deployment to end.
7. Optional: To set up SSH access, use one of the following options in the SSH
Key section to set the public key:
v To generate a key automatically, click Generate. Click Download to save
the private key file to a secure location. The default name is id_rsa.txt.
The system does not keep a copy of the private key. If you do not
download the private key, you cannot access the virtual machine, unless
you generate a new key pair. You can also copy and paste the public key
into a text file to save the key. Then, you can reuse the same key pair for
another deployment. When you have the private key, make sure that it has
the correct permissions (chmod 0400 id_rsa.txt). By default, the SSH client
does not use a private key file that provides open permission for all users.
v To use an existing SSH public key, open the public key file in a text editor
and copy and paste it into the SSH Key field.
Important: Do not use cat, less, or more to copy and paste from a
command shell. The copy and paste operation adds spaces to the key that
prevent you from accessing the virtual machine.
The SSH key provides access to the virtual machines in the cloud group for
troubleshooting and maintenance purposes. See the topic, "Configuring SSH
key-based access", for details about SSH key-based access to virtual machines.
8. Modify the pattern and component attributes as needed.
The attributes that display in the pattern configuration column are attributes
from the pattern and components in the pattern that are not locked from
editing. You can modify existing values or set values that were not specified
during pattern creation. Be sure that all required fields have values.
Components that have a blue dot next to the name contain required attributes
that must be set before the pattern is deployed.
9. When you are finished configuring all of the fields on the Configure tab, click
Continue to distribute.
On the Distribute pane:
The virtual machines in the deployment are placed in cloud groups by the system.
10. Optional: To modify the placement of the virtual machines, drag the virtual
machines to different cloud groups.
v If you drag a virtual machine cell that contains more than one virtual
machine, you are prompted to select the number of virtual machines that
you want to move. You must select the number from the list in the dialog.
After you move a virtual machine to a different cell, the IP group
assignments are set to default values. If needed, you can edit the virtual
machine network settings in the next step to modify the IP group.
v If you modify the placement of the virtual machines, the new placement is
validated to ensure that the necessary resources and artifacts are available
in the selected cloud group.
v If there is a problem with the placement, a message is displayed. Resolve
the issue with the placement before you continue.
For example, if the message CWZKS7002E Insufficient memory to place the
pattern is displayed when you modify the placement, move the virtual
machine to a different cloud group with sufficient memory resources for the
pattern.
If you see the error: Unable to assign to cloud group, there is an error
with the location, cloud group, NIC or IP groups for the cell where the
error is displayed. If this error message occurs, you must resolve the issue
with that cell before you are allowed to drag a virtual machine to that cell
for placement there. Hover your mouse over the error to display more
details about the problem in a pop-up window.
11. To edit the network or storage volume settings for a virtual machine, hover
your mouse over the virtual machine icon and click the pencil icon.
a. On the IP Groups tab, you can modify the IP group for each of the NICs in
the virtual machine. The IP groups that are listed are associated with the
environment profile that you chose for the deployment. If the IP address
provided by field in the environment profile that you chose for the
deployment is set to Pattern Deployer, you must set the IP address for
each NIC in the deployment.
b. If there is a Default attach block disk add-on in the pattern, you can
modify the storage volumes for the virtual machine on the Storage
Volumes tab. You can use an existing storage volume or create one to
attach to the component during deployment. If you choose to create a new
storage volume, configure these settings:
Name Set the name for the storage volume.
Description
Optional. Set a description for the storage volume.
Size (GB)
Set the size for the storage volume, in GB.
Volume Groups
Select a volume group for the storage volume. A storage volume
group is a logical grouping of volumes that can span workloads
and cloud groups.
c. Click OK when you are finished updating the settings.
12. When you are finished modifying the settings, click Deploy.
When the virtual system is deployed, the virtual system instance is listed
under the Virtual System Instances section of the IBM Cloud Orchestrator
Self-service user interface. To view the virtual system instance, click
PATTERNS > Instances > Virtual System Instances.
The virtual memory and virtual processor settings that are configured for the
virtual images in the virtual system pattern must meet the requirements of
the software components in the pattern. If these requirements are not met,
the deployment fails and an error message that lists the memory and
processor requirements is displayed. If this error occurs, modify the processor
and memory settings in the pattern so that the requirements are met, and
deploy the pattern again.
13. View the details of the virtual system instance in the Virtual System Instances
page.
Results
The placement is validated again to ensure that the resources and artifacts that
were used for validation during the initial placement are still available. If there is a
problem with the placement, an error message is displayed, and a red circle is
displayed on the cell that contains errors. Hover over the cell that contains
errors, and then hover over the yellow triangle in the resulting pop-up window to
view more details about the errors. Resolve the issue with the placement, such as
moving the virtual machine to a different system with sufficient resources so that
the deployment can continue.
After placement validation is successful, the virtual system instance is deployed
and started. To stop the virtual system instance, select the virtual system instance
from the list, and click Stop. To start the virtual system instance again, select the
virtual system instance and click Start.
To remove a stopped topology, select it from the Virtual System Patterns page,
and click Delete.
Note: When deploying instances to a VMware region, the name specified at
deployment time is propagated from the Self-service user interface to the
OpenStack component. The instance is created in the VMware hypervisor using the
corresponding OpenStack UUID. To match an instance in OpenStack with the
actual instance deployed on VMware, complete the following steps:
1. On the Region Server where the deployment occurred, run the nova list
command and identify the instance UUID in the command output.
2. In the vCenter client, in the search field, type the instance UUID and press
Enter.
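For example, assuming that the deployment name is MyDeployment (an illustrative
name), the nova list output might look similar to the following lines, where the first
column is the instance UUID to search for in the vCenter client:

nova list | grep MyDeployment
| 7f1b2c3d-aaaa-bbbb-cccc-0123456789ab | MyDeployment-vm1 | ACTIVE | ... |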
What to do next
After you deploy the virtual system instance, you can use the IP address of the
virtual machines to access the application artifacts. For example, you can manually
enter the URL in your browser.
http://IP_address:9080/tradelite/
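You can also verify from a command line that the application responds, for example
by using curl (a suggestion only; replace IP_address with the address of one of the
deployed virtual machines):

curl http://IP_address:9080/tradelite/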
Procedure
1. Because a base image license is not included in a Business Process Manager
pattern, you must provide the base image. For example, you can use the IBM
OS Image for Red Hat Linux Systems V2.1.0.1 (CN3CHML), or you can create a
base image as described in Creating base images on page 332.
2. Import the DB2 pattern (CN30GML), if you want to use Business Process Manager
patterns with the embedded DB2. The DB2 pattern (CN30GML) is included in the
Business Process Manager 8.5.5 pattern license, so when you search CRT40ML for
the Business Process Manager 8.5.5 pattern, you also find the DB2 pattern
available for download. For information about importing a virtual system pattern, see
Importing virtual system patterns on page 426.
3. Import the Business Process Manager 8.5.5 virtual system pattern. For
information about importing a virtual system pattern, see Importing virtual
system patterns on page 426.
4. Clone the predefined Business Process Manager pattern and make necessary
changes, such as adding flavor settings. For information about cloning a virtual
system pattern, see Cloning virtual system patterns on page 427.
5. Deploy the pattern. For information about deploying a virtual system pattern,
see Deploying virtual system patterns on page 430.
Procedure
1. Use the user interface, as described in Working with virtual system patterns
(classic) in the user interface on page 436.
2. Configure the advanced options. You can configure advanced options for the
virtual system patterns, as described in Configuring advanced options on
page 449.
Procedure
1. From the menu, open the Virtual System Patterns (Classic) window by clicking
PATTERNS > Instances > Virtual System Patterns (Classic).
2. Determine the task to perform. You can perform the following tasks with the
IBM Cloud Orchestrator user interface:
Determine the virtual system pattern to use.
You can use a predefined virtual system pattern, clone an existing
virtual system pattern, or create a virtual system pattern, as described
in Selecting a virtual system pattern (classic) on page 437.
Edit a virtual system pattern.
You can edit any virtual system pattern that is not read-only and which
you have permission to edit, as described in Editing a virtual system
pattern (classic) on page 442.
Configure advanced options.
You can edit and configure the advanced options for a virtual system
pattern, as described in Configuring advanced options on page 449.
Make a virtual system pattern read-only.
If you want to lock a virtual system pattern to future editing, you can
make it read-only, as described in Making virtual system patterns
(classic) read-only on page 463.
Deploy a virtual system pattern.
You can deploy the virtual system pattern after you have configured it,
as described in Deploying a virtual system pattern (classic) on page
464.
Procedure
v Use a predefined virtual system pattern. If one of the predefined virtual system
patterns meets the needs of your environment, you can use it without altering it
and deploy it to your cloud. See Using a predefined virtual system pattern
(classic) for more information.
v Clone an existing virtual system pattern. If one of the predefined virtual system
patterns closely meets your needs but you must customize it, you can clone the
virtual system pattern and then edit the copy. See Cloning an existing virtual
system pattern (classic) on page 438 for more information. See Editing a
virtual system pattern (classic) on page 442 for more information.
v Create a virtual system pattern. If the predefined or cloned virtual system
patterns do not meet the needs of your environment, you can create a virtual
system pattern. See Creating a virtual system pattern (classic) on page 440 for
more information.
What to do next
When you have completed any necessary work with the virtual system pattern,
you can deploy the virtual system pattern to your cloud. See Deploying a virtual
system pattern (classic) on page 464 for more information.
Using a predefined virtual system pattern (classic):
You can use a set of predefined virtual system patterns that is provided by IBM.
Virtual system patterns are made up of parts from one or more virtual images, and
script packages from the IBM Cloud Orchestrator catalog. Virtual system patterns
provide a topology definition for repeatable deployment that can be shared.
Before you begin
Review the virtual system patterns provided by IBM Cloud Orchestrator to
determine which virtual system pattern best fits your needs. See Supported
virtual system patterns (classic) on page 435 for information about these virtual
system patterns.
About this task
You can use the virtual system patterns provided by IBM Cloud Orchestrator, as
they are, and deploy them to your cloud.
Procedure
1. From the list in the left panel select a predefined virtual system pattern.
2. Add it to the cloud. Click the Deploy icon to provide the necessary information
to deploy this virtual system pattern.
3. Deploy the virtual system pattern. When all of the information is provided
correctly in the Deploy pattern dialog, click OK to deploy the virtual system
pattern. A green check mark beside each entry indicates that the information
has been provided. For more information about deploying a virtual system
pattern, see Deploying a virtual system pattern (classic) on page 464.
Results
The virtual system pattern is now running in the virtual system instance.
Cloning an existing virtual system pattern (classic):
IBM provides a set of predefined virtual system patterns that you can clone.
Because the predefined virtual system patterns cannot be edited, cloning them
provides a starting point for creating customized virtual system patterns that work
in your environment.
Before you begin
You must be granted access to the pattern and be assigned the catalogeditor role
or the admin role.
Select a virtual system pattern that most closely meets your needs. See Supported
virtual system patterns (classic) on page 435 for virtual system pattern
descriptions.
Because virtual system patterns are associated with virtual images, if you have not
accepted the license for the virtual image with which the virtual system patterns
are associated, the Clone option is not available. If the clone function is not
available for the virtual system pattern you want to use, accept the license for the
image associated with the virtual system pattern. To accept the license, click
PATTERNS > Pattern Design > Virtual Images. See Chapter 6, Managing virtual
images, on page 331 for more information about virtual images.
Important: You can accept a license and then change the image that the virtual
system pattern is using when you define the cloned virtual system pattern. If you
change the image and do not actually use the image for which you accepted the
license, you are not charged for that license.
About this task
This task provides the necessary steps to clone a virtual system pattern and then
customize the copy to meet the needs of your environment.
Procedure
1. From the left panel Virtual System Patterns (Classic) window, select a virtual
system pattern to clone. The description and general information about this
virtual system pattern display in the right panel of the Virtual System Patterns
(Classic) window.
2. Clone the virtual system pattern. Click the clone icon on the upper right panel
of the Virtual System Patterns (Classic) window.
3. You can provide the following basic information about the new virtual system
pattern you are cloning:
Name Enter a unique name for the virtual system pattern in the Name field.
This information is required to clone a virtual system pattern.
Description
Optionally, enter a detailed description to identify the virtual system
pattern in the Description field.
Virtual image
Select a virtual image with which to associate the virtual system pattern
from the listing. This information is required to clone a virtual system
pattern. You can edit the new virtual system pattern to associate
individual parts with different virtual images. If all the parts in the
virtual system pattern you are cloning are from a single virtual image,
use this option. This option switches all of the parts to a different
virtual image in the new virtual system pattern. If the original virtual
system pattern contains parts from different virtual images, this option
is disabled. If this option is disabled, the parts in the new virtual
system pattern are associated with the same virtual images as the
corresponding parts in the original virtual system pattern.
4. Click OK to save your changes. When the information is processed, you return
to the Virtual System Patterns (Classic) window and the virtual system pattern
you created is added to the list in the left panel. It is selected so that the
information about it is shown in the right panel. For more information about
the fields on this panel, see Virtual system pattern (classic) windows on page
469.
5. Edit the virtual system pattern. To change the virtual system pattern topology,
click the edit icon on the upper right panel of the Virtual System Patterns
(Classic) window. You can perform the following actions with virtual system
patterns:
v Add or remove parts
v Edit parts
v Add or remove script packages to the parts
v Add or remove add-ons to the parts
v Configure properties for the parts and parameters for the script packages
that have parameters
v Define advanced options
v Modify the start up order of the parts
The Pattern Editor window provides a list of parts. For more information about
the interaction of the parts on the Pattern Editor window, see Virtual system
pattern (classic) editing views and parts on page 443.
6. Edit the parts on the canvas. See Editing a virtual system pattern (classic) on
page 442 for more information about editing functions you can perform on
virtual system patterns.
7. Edit advanced options. Default advanced options are provided with the virtual
system patterns but you can edit those settings. For more information, see
Configuring advanced options on page 449.
8. Complete the virtual system pattern. When you have finished editing this
virtual system pattern, click the Done editing link on the upper right panel of
the Pattern Editor window. This virtual system pattern is listed on the left
panel of the Virtual System Patterns (Classic) window.
Results
When you have completed these steps, you have cloned the virtual system pattern
and it can be customized.
What to do next
You can lock the virtual system pattern against future editing. For more
information, see Making virtual system patterns (classic) read-only on page 463.
You can deploy the virtual system pattern to the cloud. For more information, see
Deploying a virtual system pattern (classic) on page 464.
Creating a virtual system pattern (classic):
You can create a virtual system pattern using the IBM Cloud Orchestrator user
interface. Virtual system patterns are topology definitions for repeatable
deployment that can be shared.
Before you begin
You must be assigned the catalogeditor role or the admin role.
Review the predefined virtual system patterns to ensure that none of the existing
virtual system patterns can be cloned and customized to meet your needs. For
more information about the predefined virtual system patterns, see Supported
virtual system patterns (classic) on page 435.
About this task
You can create a virtual system pattern by cloning an existing virtual system
pattern or by creating a virtual system pattern. This task provides the steps for
creating a virtual system pattern.
Procedure
1. Add a virtual system pattern. On the upper left panel of the Virtual System
Patterns (Classic) window, click Add to provide the following basic information
about the virtual system pattern you are creating.
Name Enter a unique name for the virtual system pattern in the Name field.
This information is required to create a virtual system pattern.
Description
Optionally, enter a detailed description to identify the virtual system
pattern in the Description field.
2. Click OK to indicate that you have finished editing and return to the initial
view of the virtual system pattern. When the information is processed, you
return to the Virtual System Patterns (Classic) window and the virtual system
pattern you created is added to the list in the left panel. It is selected so that
the information about it is shown in the right panel. For more information
about the fields on this panel, see Virtual system pattern (classic) windows
on page 469.
3. Edit the virtual system pattern. To change the virtual system pattern topology,
click edit on the upper right panel of the Virtual System Patterns (Classic)
window. You can perform the following actions:
v Add or remove parts
v Edit parts
v Add script packages to the parts
v Add or remove add-ons to the parts
v Configure properties for the parts and parameters for the script packages
   that have parameters
v Define advanced options
v Modify the start up order of the parts
The Pattern Editor window provides a list of parts. For more information about
the interaction of the parts on the Pattern Editor window, see Virtual system
pattern (classic) editing views and parts on page 443.
4. Edit the parts on the canvas. See Editing a virtual system pattern (classic) on
   page 442 for more information about editing functions you can perform on
   virtual system patterns.
5. Edit advanced options. Default advanced options are provided with the virtual
   system patterns but you can edit those settings. For more information, see
   Configuring advanced options on page 449.
6. Optional: Modify the default order in which the parts run at deployment. See
   Ordering parts to run at deployment on page 446 for more information.
7. Indicate that you have finished editing and return to the initial view of the
   virtual system pattern. When you have finished editing this virtual system
   pattern, click the Done editing link on the top of the right panel of the Pattern
   Editor window.
Results
When you have completed these steps, you have configured basic information
about the virtual system pattern you have created and it can be deployed to the
cloud.
What to do next
You can lock the virtual system pattern against future editing. For more
information, see Making virtual system patterns (classic) read-only on page 463.
You can deploy the virtual system pattern to the cloud. For more information, see
Deploying a virtual system pattern (classic) on page 464.
Procedure
1. Select the virtual system pattern you want to edit from the left panel of the
Virtual System Patterns (Classic) window. Details about the virtual system
pattern are shown in the right panel of the Virtual System Patterns (Classic)
window.
2. From the top of the right panel of the Virtual System Patterns (Classic) window,
click the edit icon. The Virtual System Patterns (Classic) window opens for this
virtual system pattern.
3. Edit the parts on the canvas.
a. Select a part from the lists on the left panel of the Pattern Editor window.
The lists of parts, script packages, and add-ons show available parts that
can be dropped onto the editing canvas on the right side of the Pattern
Editor window.
b. Drop the selected parts onto the editing canvas on the right of the Pattern
Editor window.
The editing canvas graphically shows the topology of the virtual system
pattern. See Virtual system pattern (classic) editing views and parts on page
443 for information about the virtual image parts and the interaction between
them.
4. Optional: Configure advanced options. For more information, see Configuring
advanced options on page 449.
5. Optional: Configure the order in which the parts are to deploy. For more
information, see Ordering parts to run at deployment on page 446.
6. Indicate that you have finished editing and return to the initial view of the
virtual system pattern. When you have finished editing this virtual system
pattern, click the Done editing link on the top of the right panel of the Virtual
System Patterns (Classic) window.
Results
When you have finished editing this virtual system pattern, it is ready to be
deployed to the cloud.
What to do next
You can lock the virtual system pattern against future editing. For more
information, see Making virtual system patterns (classic) read-only on page 463.
You can deploy the virtual system pattern to the cloud. For more information, see
Deploying a virtual system pattern (classic) on page 464.
Virtual system pattern (classic) editing views and parts:
A virtual system pattern that is not read-only can be edited if you have
permission to edit it. The topology for a virtual system pattern is graphically
shown. Virtual image parts, add-ons, and script packages can be dropped onto an
editing canvas to create or change relationships between the parts that define the
topology.
The Virtual System Patterns (Classic) window
When you select a virtual system pattern to edit in Virtual System Patterns
(Classic) window, you can see information about the virtual system pattern. The
topology of the virtual system pattern is shown on the right panel of the Virtual
System Patterns (Classic) window. For more information about the predefined
virtual system patterns and what they provide, see Supported virtual system
patterns (classic) on page 435.
The Pattern Editor window
Clicking the edit icon on the upper right panel of the Virtual System Patterns
(Classic) window opens the Pattern Editor window for the selected virtual system
pattern. The Pattern Editor window provides lists to select virtual image parts,
add-ons, and script packages.
Virtual image parts
Selecting the Parts list on the Pattern Editor provides a listing of the parts that can
be dropped onto the virtual system pattern canvas. The virtual system pattern
canvas is on the right panel of the Virtual System Patterns (Classic) window. The
following virtual image parts are examples of the parts available for IBM
WebSphere Application Server Hypervisor Edition images:
v Administrative agents
v Custom nodes
v Deployment manager
v IBM HTTP servers
v Job manager
v Stand-alone server
v On-demand router: The on-demand router part is available if you are using the
   WebSphere Application Server 7.0.0.17 with Intelligent Management Pack image.
These parts are determined by the virtual images you are using. For more
information about virtual images, see Chapter 6, Managing virtual images, on
page 331.
Some virtual image parts represent multiple instances. These graphical parts on the
editing canvas have a badge that shows the number of instances of the part. A
valid number of instances that can be specified is 1 - 999.
You can configure the parts either when you deploy the virtual system pattern or
directly from the part before deployment. To configure the part before deploying it,
click the edit properties icon for the part on the editing canvas. For more
information about configuring the parts, see Configuring parts on page 447.
Script packages
The Scripts list on the Pattern Editor provides a listing of the script package parts
that can be dropped into the virtual image parts. Virtual image parts are on the
right panel of the Virtual System Patterns (Classic) window. This list can contain
script packages associated with the virtual image and any that you have defined
for use with IBM Cloud Orchestrator. For more information about script packages,
see Adding a script package on page 367 and Associating a script package with
a pattern on page 371. Script packages can then be added to the virtual image
parts.
Add-ons
The following default add-ons are provided with IBM Cloud Orchestrator and can
be added to parts on the editing canvas:
Default add NIC
Adds a new virtual network interface controller (NIC) to the virtual
machine, configures its IP address information, and activates it. Use this
add-on for virtual image parts that support communication using ssh to
run scripts after initial activation of the virtual machine.
Note: This add-on is not supported on PowerVM virtual images.
Default configure NIC
Triggers configuration via VSAE of the additional NICs present in the
image.
Note: This add-on is not supported on PowerVM virtual images.
Default add disk
Adds a virtual disk to the virtual machine and optionally formats and
mounts the disk. Prerequisite: the parted (parted RPM) and sfdisk
(util-linux RPM) tools must be installed for a Red Hat image (or other type
of packages depending on different operating systems). The prerequisite
for VMware virtual images is that the virtual disk type must be SCSI.
Default AIX add disk
Adds a virtual disk to the virtual machine and optionally formats and
mounts the disk. Use this add-on for PowerVM virtual images.
Note: IBM Cloud Orchestrator does not support the disk add-on function
for PowerVM with Shared Storage pool.
Default Windows add disk
Adds a virtual disk to the virtual machine and formats and makes the disk
available under a new drive letter. Prerequisites: PowerShell and Diskpart tools,
which are available in the default Windows installation.
Configuring parts
Before deploying a virtual system pattern to run in a cloud group, you must first
configure the parts included in the virtual system pattern.
Procedure
1. Open the Properties configuration panel for the part by using one of the
following methods:
Editing the virtual system pattern
From the Virtual System Patterns (Classic) window, select the virtual
system pattern to edit and click the Edit icon. For each virtual image
part requiring information, click the Properties icon on the part. You
can also configure any script packages or disk or user add-ons on the
parts. NIC add-ons require an environment profile for configuration.
Deploying the virtual system pattern
To deploy a virtual system pattern from the Virtual System Patterns
(Classic) window, click the Deploy icon on the upper right of the
Virtual System Patterns (Classic) window. When deploying a virtual
system pattern, you must describe the virtual system instance that you
want to deploy. As part of that process, the parts in the virtual system
pattern to deploy are listed.
When the information for each of the virtual image parts in your
virtual system pattern is provided, a green check mark is shown to the
left of the virtual image part. If information for one of these parts is
missing, then the check box to the left of the Configure virtual parts
field does not contain a green check mark. In this case, click Configure
virtual parts and then click the link for the virtual image part that is
missing information.
2. Provide the necessary information. Part properties vary, depending on the type
of part you are editing, and the scripts and add-ons it includes. If the script
packages have parameters, you can also edit these properties.
You can edit the following properties for add-ons while editing the part or
during the deployment process:
Disk add-on
Has the following properties to edit:
v DISK_SIZE_GB
v FILESYSTEM_TYPE
v MOUNT_POINT
Raw disk add-on
Has the following properties to edit:
v DISK_SIZE_GB
v FILESYSTEM_TYPE
Note: The FILESYSTEM_TYPE property is read-only.
User add-on
Has the following properties to edit:
v USERNAME
v PASSWORD
v Verify password
Note: NIC add-ons require an environment profile for configuration.
3. Optional: Lock the properties. If you are editing part properties from the virtual
system pattern, you can lock the values so that they cannot be changed during
deployment. Use the unlocked or locked icon next to each field on the
Properties window to change the status of the field. By default, the part
properties are not locked, so you must lock them if you want to prevent them
from being changed during deployment.
Results
The virtual image parts for the virtual system pattern are configured.
What to do next
Deploy the virtual system pattern to the cloud.
Procedure
1. From the left panel of the Virtual System Patterns (Classic) window, select the
virtual system pattern.
2. Put the virtual system pattern in edit mode. Click the Edit icon at the top of the
right panel to see the topology of the virtual system pattern and edit it.
3. Edit the advanced options. Click the Advanced Options... link on the right
panel of the Pattern Editor window. The options available on this panel depend
on the topology of the virtual system pattern you are editing.
Important: When you open the advanced options editor for a new virtual
system pattern that has no advanced options set, default settings are shown for
the virtual system pattern. These settings are recommended values for the
topology. To accept these default values for this topology, click OK. To return to
the virtual system pattern without setting these default values, click Cancel.
The following general options are available:
Single server virtual system patterns
v Enable session persistence
v Global security
For detailed information about advanced configuration options for
single server virtual system patterns, see Configuring advanced
options for single server virtual system patterns (classic) on page 461.
Cluster virtual system patterns
v Define clusters
Enable messaging
Enable session persistence
Global security
For more information about the advanced configuration options for
cluster virtual system patterns, see Configuring advanced options for
cluster virtual system patterns (classic) on page 451.
IBM WebSphere Application Server Hypervisor Edition Intelligent
Management Pack cluster virtual system patterns
If the cluster virtual system pattern you are editing is from an
Intelligent Management Pack image, then the following options are also
available:
v Define dynamic clusters
v Enable overload protection
v Configure standard health policies
v On demand router-dependent health policies
For more information about the advanced configuration options for
Intelligent Management Pack cluster virtual system patterns, see
Configuring advanced options for Intelligent Management Pack on
page 455.
4. Save your changes. When you change the settings and click OK, your changes
are saved in place of the default values.
5. Optional: Configure advanced messaging options for cluster virtual system
patterns. If you are working with a cluster virtual system pattern and you want
to configure advanced messaging for it, see Configuring advanced messaging
for databases on page 460.
6. Optional: Enable the database implemented session persistence option. If you
are enabling session persistence for either a cluster or single server virtual
system pattern, you must enable the database implemented session persistence.
See Configuring database implemented session persistence for Derby on page
463 for more information.
What to do next
You can perform the following tasks after configuring the advanced options for a
virtual system pattern:
Important: You must have the Define clusters option selected to work with
messaging.
For more information about advanced configuration for WebSphere Application
Server clustered messaging engines or a sample authentication alias for
databases, see Configuring advanced messaging for databases on page 460.
Use the following options to configure messaging with IBM Cloud
Orchestrator:
Standard messaging engine configuration
When configuring the standard Java Message Service (JMS), IBM Cloud
Orchestrator provides the following function:
v An application cluster with the default name prefix HVMsgCluster
(which you can change)
v A default number of clusters (which you can change)
v A default number of servers per node (which you can change)
v A Service Integration Bus (SIBus), the name of which has the
HVSIBUS prefix, is created for each message cluster
v A messaging engine (ME) is defined on each member of the cluster,
because each message cluster is added to the SIBus
v A Derby Java Database Connectivity (JDBC) provider and Derby data
source are defined for use by the messaging engine or engines
defined. See Configuring advanced messaging for databases on
page 460 for more information.
v An example authentication alias provides configuration options for
the messaging engine to a database other than Derby
v Activation of the messaging engine in only one of the members as the
messaging cluster members are started
Highly available messaging engine configuration
Processed over standard Java Message Service (JMS) support, this
option provides messaging engine failover. For a high availability
messaging WebSphere Application Server configuration, there is one
messaging engine running at a time for each SIBus. The messaging
engine can run in multiple application servers, specifically the other
members of the messaging cluster. If the server in which it is currently
running becomes unavailable, it is activated in another of the servers of
the messaging cluster with which the messaging engine is associated.
All messages are preserved because the messaging engine state is saved
in Derby. Messaging engines are activated in different application
servers. The advanced configuration scripts create both the appropriate
schemas and the high availability group OneOfNPolicy core group
policies for messaging engine election and activation.
Note: Multiple messaging engines can run in a given application
server.
Scalable messaging engine configuration
Processed over standard Java Message Service (JMS) support, this
option enables multiple messaging engines to run in a WebSphere
Application Server SIBus at a time. Therefore, the message flows for the
various JMS applications can be divided or partitioned across the
different messaging engines. Scalable messaging increases the number
of messages that can be processed by the WebSphere Application
Server JMS support and therefore provides a scalable implementation.
MQ server configuration
Select the WebSphere MQ server configuration option to
perform the following function:
v Create new transport chains for the WebSphere MQ server
and associate them with each messaging engine. These
transport chains are basic if no security exists or SSL if
security is enabled.
v Create the WebSphere MQ server
v Add the WebSphere MQ server to the SIBuses
When defining the WebSphere MQ server there are items that
use WebSphere MQ configuration attributes. You must adjust
these attributes, for example the sample host, port, virtual
queue manager name, and user IDs, to reflect your actual
WebSphere MQ environment.
3. Enable session persistence. HVWebCluster or application clusters are created
with associated replication domains. Therefore, you can use Hypertext
Transfer Protocol (HTTP) session replication. If the replication domain is
defined, no resources are created or used unless session replication is
configured. You can use the Enable session persistence option and then one of
the following options to use HTTP session persistence:
Memory-memory implemented session persistence
The HTTP session memory bit is set on all the HVWebCluster servers.
Database implemented session persistence
To use this option on the virtual system instance, the JDBC data source
that is created must be updated on the deployment manager with valid
host, port, user name, and password values. The appropriate client
drivers for your database, for example jars and libraries, must be
installed on your WebSphere Application Server systems. IBM Cloud
Orchestrator performs the following operations:
v Creates a DB2 JDBC provider
v Creates a sample DB2 data source, with dummy values for the host,
port, ID, and password
v Sets up a session manager on each HVWebCluster server to persist
HTTP sessions to the database, using the sample data source and
dummy connection values
For important information about supplying a database that supports
HTTP session persistence, see Configuring database implemented
session persistence for Derby on page 463. A wsadmin sketch of the
data source update is shown at the end of this procedure.
4. Enable global security. Use the global security option to perform the following
function:
v Set the global security admin bit
v Use the WIM user registry that is provided with IBM Cloud Orchestrator
v Use both LTPA and BasicAuth for authentication (BasicAuth is needed for
stand-alone clients)
v Use an SSL-allowed policy for CSIv2
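For illustration only, the following wsadmin Jython sketch shows one way to
update the sample session data source that is created for database implemented
session persistence (step 3). The data source name, host, port, and database name
used here are placeholders; check the name of the data source that was actually
created in your deployed cell, and prefer any product-provided scripts where they
apply.

    # Hypothetical data source name; replace it with the data source in your cell.
    ds = AdminConfig.getid('/DataSource:HVWebClusterSessionDS/')
    propSet = AdminConfig.showAttribute(ds, 'propertySet')
    # Update the DB2 connection properties with real values (placeholders shown).
    for prop in AdminConfig.list('J2EEResourceProperty', propSet).splitlines():
        name = AdminConfig.showAttribute(prop, 'name')
        if name == 'serverName':
            AdminConfig.modify(prop, [['value', 'db2host.example.com']])
        elif name == 'portNumber':
            AdminConfig.modify(prop, [['value', '50000']])
        elif name == 'databaseName':
            AdminConfig.modify(prop, [['value', 'SESSDB']])
    AdminConfig.save()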
Procedure
1. Define dynamic clusters. Select this option to begin to define dynamic clusters
across the custom node parts in the virtual system pattern.
a. Create dynamic clusters. Select this option to create dynamic clusters across
all custom node parts in the virtual system pattern. You can set values for
the following parameters:
v DYNAMIC_CLUSTER_PREFIX
v NUMBER_OF_DYNAMIC_CLUSTERS
v MAXIMUM_INSTANCES_PER_NODE
v MAXIMUM_NODES
v MINIMUM_TOTAL_INSTANCES
v MAXIMUM_TOTAL_INSTANCES
a. Excessive heap usage: The excessive heap usage health policy is triggered
when memory usage exceeds the specified percentage of the heap size for a
specified time. Selecting this option adds the excessive memory usage
policy configuration script to the deployment manager part. You can
configure the excessive memory usage health policy by specifying the following
script parameters:
v HEAP_USAGE_PERCENTAGE
v OFFENDING_TIME_PERIOD
v OFFENDING_TIME_UNIT: Specify the time unit in minutes.
v EXCESSIVE_MEMORY_USAGE_POLICY_REACTION_MODE
v EXCESSIVE_MEMORY_USAGE_POLICY_ACTION
v EXCESSIVE_MEMORY_USAGE_POLICY_NAME
b. Memory leak: Starts when a memory leak is detected. This policy checks if
trends, in the free memory that is available to the server in the Java heap,
decrease over time. This option adds the memory leak policy configuration
script to the deployment manager. Configure the configuration script by
specifying the following parameters:
v MEMORY_LEAK_DETECTION
v MEMORY_LEAK_POLICY_REACTION_MODE
v MEMORY_LEAK_POLICY_ACTIONS
v MEMORY_LEAK_POLICY_NAME
c. Maximum server age: Starts after an application server has been running
for a specified amount of time. This option adds the maximum server age
policy configuration script to the deployment manager. Configure the
configuration script by specifying the following parameters:
v SERVER_AGE
v SERVER_AGE_UNIT
v MAXIMUM_SERVER_AGE_POLICY_REACTION_MODE
v MAXIMUM_SERVER_AGE_POLICY_ACTIONS
v MAXIMUM_SERVER_AGE_POLICY_NAME
d. Email notification list: Specifies a list of email addresses to receive
notification when a health condition is met. This option adds the email
notification configuration script to the deployment manager. Configure the
configuration script by specifying the following parameters:
v SMTP_HOST_NAME
v SMTP_PORT
v SMTP_USERID
v SMTP_PASSWORD
v EMAIL_ADDRESSES_TO_NOTIFY
v SENDER_ADDRESS
v MAXIMUM_REQUESTS_POLICY_NAME
b. Excessive number of timed out requests: Starts after a specified number of
requests time out within a 1-minute interval. Configure the configuration
script by specifying the following parameters:
v REQUEST_TIMEOUT_PERCENTAGE
v EXCESSIVE_REQUEST_TIMEOUT_POLICY_REACTION_MODE
v EXCESSIVE_REQUEST_TIMEOUT_POLICY_ACTIONS
v EXCESSIVE_REQUEST_TIMEOUT_POLICY_NAME
c. Excessive average response time: Starts when the average response time
exceeds a specified response time threshold. Configure the configuration
script by specifying the following parameters:
v EXCESSIVE_RESPONSE_TIME
v EXCESSIVE_RESPONSE_TIME_UNIT
v EXCESSIVE_RESPONSE_TIME_POLICY_REACTION_MODE
v EXCESSIVE_RESPONSE_TIME_POLICY_ACTIONS
v EXCESSIVE_RESPONSE_TIME_POLICY_NAME
d. Storm drain detection: This policy tracks requests whose response time has
decreased by an amount that is predetermined to be significant. The
actions specified for this policy are run and the associated server is restarted
when the specified detection level is reached. Configure the configuration
script by specifying the following parameters:
v STORM_DRAIN_DETECTION_LEVEL
v STORM_DRAIN_POLICY_REACTION_MODE
v STORM_DRAIN_POLICY_ACTIONS
v STORM_DRAIN_POLICY_NAME
Results
You have configured the Intelligent Management Pack advanced options for the
virtual system pattern.
What to do next
Depending on the database you are using, you can configure advanced messaging.
For more information, see Configuring advanced messaging for databases on
page 460.
Configuring elasticity mode and the associated operations:
Configure elasticity mode to add logic that causes the application placement
controller to minimize the number of nodes that are used, as well as remove nodes
that are not needed, while still meeting service policy goals. Additionally, you can
configure elasticity mode to add logic so that when the controller recognizes a
particular dynamic cluster is not meeting service policies and has started all
possible servers, the controller calls to add a node.
Before you begin
v Select the Enable elasticity mode option in the advanced options editor as
described in Configuring advanced options for Intelligent Management Pack
on page 455.
v For optimal performance, ensure that your dynamic clusters are running in
supervised mode or automatic mode. It is not recommended to have elasticity
mode enabled when your dynamic clusters are running in manual mode.
However, if the dynamic clusters are running in manual mode with elasticity
enabled, consider the following items:
The application placement controller does not add nodes to dynamic clusters
in manual mode.
The application placement controller does not remove nodes from dynamic
clusters in manual mode when a server is started on the specific nodes.
The application placement controller does remove nodes from dynamic
clusters in manual mode when a server is not started on the specific nodes.
v It is not recommended to use elasticity mode with uncapped mode.
v It is not recommended to enable elasticity mode when the following option is set
in the administrative console for one or more dynamic clusters: If other dynamic
clusters need resources, stop all instances of this cluster during periods of
inactivity. If you have elasticity mode enabled and the option set, the application
placement controller can remove all of the custom nodes in the cell.
v Configure certain controllers to start on the deployment manager or node agent
that will not be removed, as described in the following steps. A wsadmin
alternative is sketched after this list.
1. To configure the application placement controller to start on the deployment
manager, select System administration > Deployment manager > Java and
process management > Process definition > Java virtual machine > Custom
properties.
a. Enter the name of the custom property as HAManagedItemPreferred_apc.
b. Set the value of the custom property to true.
c. Click Apply, and save your changes.
d. Restart the current process in which the application placement controller
is running.
2. To configure the application placement controller to start on one of the nodes
that contains an ODR, select System administration > Nodes > node_name
> node_agent_name > Java and process management > Process definition >
Java virtual machine > Custom properties.
a. Enter the name of the custom property as HAManagedItemPreferred_apc.
b. Set the value of the custom property to true.
c. Click Apply, and save your changes.
d. Restart the current process in which the application placement controller
is running.
3. When you use elasticity mode in an environment in which multi-cell
performance management is configured, you must configure certain
controllers to start on the deployment managers of the center cell and the
point cells.
a. Set the HAManagedItemPreferred_apc custom property to true on the
deployment manager of the center cell.
b. Set the HAManagedItemPreferred_cellagent custom property to true on
the deployment manager of the point cells.
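The preceding steps use the administrative console. As a rough wsadmin Jython
alternative, a sketch like the following sets the same custom property on the
deployment manager JVM. It assumes the default deployment manager server
name dmgr; adjust the names to your cell, and restart the process afterward.

    # Set HAManagedItemPreferred_apc=true on the deployment manager JVM
    # (assumes the default server name 'dmgr').
    dmgr = AdminConfig.getid('/Server:dmgr/')
    jvm = AdminConfig.list('JavaVirtualMachine', dmgr).splitlines()[0]
    AdminConfig.create('Property', jvm,
                       [['name', 'HAManagedItemPreferred_apc'], ['value', 'true']])
    AdminConfig.save()
    # Restart the deployment manager so that the application placement
    # controller picks up the change.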
About this task
When you enable elasticity mode in the advanced options editor, the following
default actions are associated with the add and remove operations. The elasticity
operations define the runtime behaviors to monitor, and the corrective actions to
take when the behaviors are present.
1. Add virtual machine: Creates and federates a new node into the cell
2. Start the Derby network server. To start this server, from the
<WAS_HOME>/derby/bin/networkServer/ directory run the following command:
startNetworkServer.sh (on Linux) or startNetworkServer.bat (on Windows)
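If you want to confirm that the network server is accepting connections before
you continue, a quick check like the following Python sketch can be used. It
assumes the Derby network server default port, 1527, on the local host; adjust the
host and port if you changed them.

    import socket

    # Try to open a TCP connection to the Derby network server (default port 1527).
    try:
        conn = socket.create_connection(('localhost', 1527), timeout=5)
        conn.close()
        print('Derby network server is listening on port 1527')
    except socket.error as err:
        print('Derby network server is not reachable: %s' % err)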
configured. You can use the Enable session persistence option and then one of
the following options to configure HTTP session persistence:
Memory-memory implemented session persistence
The HTTP session memory bit is set on all the HVWebCluster servers.
Database implemented session persistence
To use this option on the virtual system instance, the Java Database
Connectivity (JDBC) data source created must be updated on the
deployment manager. You must provide valid host, port, user name,
and password values. Also, the appropriate client drivers for your
database, for example jars and libraries, must be installed on your
WebSphere Application Server systems. IBM Cloud Orchestrator
performs the following operations:
v Creates a DB2 JDBC provider
v Creates a sample DB2 data source, with dummy values for the host,
port, ID, and password
v Sets up a session manager on each HVWebCluster server to persist
HTTP sessions to the database, using the sample data source and
dummy connection values
For important information about supplying a database that supports
HTTP session persistence, see Configuring database implemented
session persistence for Derby on page 463.
2. Enable global security. Using global security provides the following functions:
v Sets the global security admin bit
v Uses the WIM user registry that is provided with IBM Cloud Orchestrator
v Uses both LTPA and BasicAuth for authentication (BasicAuth is needed for
stand-alone clients)
v Uses an SSL-allowed policy for CSIv2
v Turns off single sign-on interoperability
v Configures secure file transfer between the deployment manager and the
node agents
v Enables the high availability manager to use the secure DCS channel
Use the global security option to configure secure messaging. Secure messaging
provides the following function:
v Sets the security bit on the SIBus
v Reduces the set of users that can connect to the SIBus. Only the WebSphere
Application Server ID created with the CB UI can connect. The default value
for that ID is virtuser.
v Disables the InboundBasicMessaging transport for the messaging engines
v Adds the virtuser ID to the sender role for foreign buses for MQLink
configurations
Results
You have configured the advanced options for a single server virtual system
pattern.
What to do next
You can run the wasCBUpdateSessDSInfo.py script to configure database
implemented session persistence. For more information, see Configuring database
implemented session persistence for Derby on page 463.
Procedure
1. From the left panel of the Virtual System Patterns (Classic) window, select the
virtual system pattern. Virtual system patterns that have the read-only symbol
by them are already read-only and cannot be edited. Virtual system patterns
with the edit symbol beside them are not read-only and can be edited. Basic
information about the selected virtual system pattern is shown in the right
panel of the Virtual System Patterns (Classic) window.
2. Determine if you are ready to lock editing of the virtual system pattern.
v If virtual system pattern editing is complete and you are ready to make the
virtual system pattern read-only, click the Lock icon in the upper right
toolbar of the Virtual System Patterns (Classic) window.
v If virtual system pattern editing is not complete, see the information in
Editing a virtual system pattern (classic) on page 442 and Configuring
advanced options on page 449.
3. Verify that you want to make the virtual system pattern read-only. When
prompted to verify that you want to make the virtual system pattern read only,
click OK to lock the virtual system pattern.
Results
When you have made the virtual system pattern read-only, it can be cloned or
deleted but it cannot be edited.
What to do next
You can deploy the virtual system pattern to the cloud. For more information, see
Deploying a virtual system pattern (classic).
Procedure
1. From the list in the left panel of the Virtual System Patterns window, select the
virtual system pattern to deploy.
2. Indicate that you want to deploy the virtual system pattern. Click the Deploy
icon on the upper right panel of the Virtual System Patterns (Classic) window.
3. Provide the necessary information. The Describe the virtual system instance
you want to deploy dialog provides the fields of information to deploy the
virtual system pattern to the cloud. The parameters that are required differ
depending on any advanced configuration you have defined and any
associated script packages you have included. Links to the advanced
configuration and the scripts are provided on the interface. Provide the
following information to deploy the virtual system pattern:
Virtual system instance name
Enter the name of the virtual system instance in which to deploy this
virtual system pattern.
Choose Environment
You can deploy the virtual system pattern using an environment
profile. Make your selections from the following options:
IP version
IPv4 is selected. IPv6 is not currently supported.
Choose cloud group
This option is not currently supported.
Choose profile
Select this option to deploy the virtual system pattern using an
environment profile. Then select the environment Type and a
valid environment Profile from the lists.
Note: If the Pattern deployer option was chosen, you cannot
specify an IP address that is contained within the IP groups
which are defined in IBM Cloud Orchestrator.
Note: IBM Cloud Orchestrator is not able to filter environment
profiles suitable for deployment to VMware clusters based on
the images contained in the virtual system pattern that you are
deploying. Make sure you are selecting a valid environment
profile.
For more information about environment profiles, see Managing
environment profiles on page 354.
Schedule deployment
Click this link to provide information about when the virtual system
pattern is to be deployed and for how long. You can deploy the virtual
system pattern immediately after providing the information in the
dialog or you can schedule deployment using the following options:
Start now
Deploys the virtual system pattern immediately after providing
the required information in the dialog. Start now is the default
option.
Start later...
Provide the date and time to deploy this virtual system pattern
at a later time.
Run indefinitely
Runs this virtual system pattern continuously. Run indefinitely
is the default option.
Run until...
Use this option to provide the end date and time for the virtual
system pattern to stop running.
Configure parts
For each virtual image part requiring information, click the link and
provide the information for each configuration parameter shown. The
set of parameters depends on the part itself. The parts that require
information are different depending on the type of virtual image to be
deployed and the type of hypervisors in the cloud. For example, parts
for a WebSphere Application Server image would require different
information than parts for a DB2 image.
Note: The administrator password that you specify for a Windows
virtual image must meet complexity requirements (for example,
Passw0rd).
Note: The WebSphere administrative user name should be non-root
user only.
Note: If you are using a non-English operating system, you must
specify the correct language and country in the Default Locale and
Default Country parameters to correctly deploy the virtual image.
For virtual image parts, you must specify a value in the Flavor field. In
OpenStack, the instance flavor describes the memory and storage
capacity of the virtual machine to be deployed. By default the flavor
values are:
m1.tiny
Memory: 512 MB, vCPUs: 1, Storage: 0 GB
m1.small
Memory: 2048 MB, vCPUs: 1, Storage: 20 GB
m1.medium
Memory: 4096 MB, vCPUs: 2, Storage: 40 GB
m1.large
Memory: 8192 MB, vCPUs: 4, Storage: 80 GB
m1.xlarge
Memory: 16384 MB, vCPUs: 8, Storage: 160 GB
Note:
v The flavor values might change depending on the configuration of
your OpenStack environment. In OpenStack, use the nova
flavor-list command to view the list of available flavors and their
characteristics.
v The 0 GB storage size is a special case which uses the native base
image size as the size of the ephemeral root volume. When you use a
flavor with 0 GB of storage, no automatic check is performed by
OpenStack on available storage capacity. You must ensure that there
is sufficient storage to contain the provisioned VM image disks.
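If you prefer to query the flavors programmatically rather than with the nova
flavor-list command, a sketch like the following lists the same information. It
assumes the python-novaclient library; the credentials and endpoint shown are
placeholders for your own OpenStack region.

    from novaclient import client

    # Placeholder credentials and endpoint; use the values for your region.
    nova = client.Client('2', 'admin', 'password', 'admin',
                         'http://openstack.example.com:5000/v2.0')

    for flavor in nova.flavors.list():
        print('%-10s RAM: %6d MB  vCPUs: %d  Disk: %3d GB'
              % (flavor.name, flavor.ram, flavor.vcpus, flavor.disk))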
Results
The virtual system pattern is deployed to the cloud and runs in the selected virtual
system instance.
On a topology deployment, some parts can reserve CPU or memory, or both. These
fields affect how the CPU and memory are configured on the underlying
hypervisor. The CPU and memory limits are set and reserved for ESX hypervisors.
This setting prevents the CPU from being overcommitted but reduces license
usage.
What to do next
To add additional nodes to the virtual system pattern, first stop the virtual system
instance that is running in the cloud where the virtual system pattern is deployed.
For more information about virtual system instances, see Managing virtual system
instances (classic) on page 476.
Deploying a pattern (classic) with additional actions:
You can deploy a virtual system pattern with additional configuration options that
were previously defined in Business Process Manager.
Procedure
1. From the list in the left panel of the Virtual System Patterns (Classic) window,
select the virtual system pattern to deploy.
2. Click the Deploy in the cloud icon on the upper right panel of the Virtual
System Patterns (Classic) window. A popup window opens with a set of
deployment options and parameters that you can configure.
3. Select options for the virtual system that you want to deploy. For information
about specific settings, see Deploying a virtual system pattern (classic) on
page 464.
4. If actions with user interface are defined on this pattern, a Configure actions
section is available. Click the link to view all actions that must be configured.
5. Click the action that you want to configure, and submit any required
parameters. A green checkmark is displayed next to the action name.
6. Repeat the previous step for all other actions that require configuring.
7. When all of the information is provided correctly in the dialog, click OK to
deploy the virtual system pattern.
Procedure
1. From the list of virtual system patterns in the left panel of the Virtual System
Patterns (Classic) window, select the virtual system pattern to delete. If the
virtual system pattern is not shown, you can also search for a virtual system
pattern using the search function. Basic information about the selected virtual
system pattern is shown in the right panel of the Virtual System Patterns
(Classic) window.
2. Determine if you are ready to delete the virtual system pattern.
v If you are ready to delete the virtual system pattern, click the delete icon in
the upper right toolbar of the Virtual System Patterns window.
v If you want to change the virtual system pattern instead of deleting it, see
the information in Editing a virtual system pattern (classic) on page 442
and Configuring advanced options on page 449.
3. Verify that you want to delete the virtual system pattern. When prompted
to verify that you want to delete the virtual system pattern, click OK.
Results
You return to the Virtual System Patterns (Classic) window. The virtual system
pattern is deleted and is no longer shown in the list in the left panel.
This icon is available if the virtual system pattern is not read-only and can
be edited. See Editing a virtual system pattern (classic) on page 442 for
more information.
Clone Clones the selected virtual system pattern. Cloning a virtual system pattern
is useful if the virtual system pattern is read-only and cannot be edited.
This option is available for the predefined virtual system patterns that are
provided by IBM and any virtual system patterns that are created and
locked to editing. You can clone the virtual system pattern and then edit
and deploy the copy. See Cloning an existing virtual system pattern
(classic) on page 438 for more information about cloning virtual system
patterns.
Lock Makes the selected virtual system pattern read-only so that it can no
longer be edited. See Making a virtual system pattern read-only (classic) for
more information.
Delete Deletes the virtual system pattern from IBM Cloud Orchestrator. See
Deleting a virtual system pattern (classic) for more information.
For any virtual system pattern you are creating, the initial status is
Draft.
Updated on
The timestamp of the most recent update.
In the cloud now
When the virtual system pattern is in use, this field shows the names of
the virtual system instances currently running that were created from this
virtual system pattern. Until you run the virtual system pattern in a cloud,
this field displays (none) initially.
Access granted to
This field can be edited and it provides access to this virtual system
pattern for other projects. Selecting projects makes the virtual system
pattern readable or writable to the users belonging to these projects.
Initially this field is set to the role of the owner of the virtual system
pattern.
By default, the Add more box contains the Everyone built-in project. When
a project has been added, click the link beside the entry to toggle between
the following access levels:
v Read
v Write
v All
Click the link name of the project to show information about that project.
You can also click the remove link to remove access for a project.
The Parts list displays the parts available to use in your virtual system
pattern. The parts that are available depend on the virtual images you
have installed and the hardware type of any parts already in the virtual
system pattern. Only parts with the same hardware type as any parts
already in the virtual system pattern are available. When you select parts
in the Parts list, the parts are displayed. You can then drag them and drop
them onto the canvas on the right side of the Pattern Editor.
Scripts
The Scripts list provides the script packages that are available. This list can
contain any script packages that you have provided for use with IBM
Cloud Orchestrator. You can add script packages to the parts on the editing
palette. Add the script packages by dragging them onto the workspace on
the right canvas of the Pattern Editor window and dropping them onto the
part objects.
For more information about script packages, see Adding a script package
on page 367 and Associating a script package with a pattern on page
371.
Add-Ons
The Add-Ons list provides the add-ons that are available. This list can
contain any add-ons that you have provided for use with IBM Cloud
Orchestrator. You can add add-ons to the nodes on the editing palette. Add
the add-ons by dragging them onto the workspace on the right canvas of
the Pattern Editor window and dropping them onto the node objects. The
following types of add-ons can be added to the nodes:
Disk
NIC
User
For more information about add-ons, see Adding add-ons to the catalog
on page 411.
See Virtual system pattern (classic) editing views and parts on page 443 for more
information about the interaction of these parts on the canvas.
Available views
When a specific virtual system pattern is being edited in the Editor window, the
graphical topology view is displayed in an editing canvas. There are two options
to view a virtual system pattern on the editing canvas that are provided by toggle
links at the top right of the page:
Ordering
This link changes the view to show the order the parts are started when
the virtual system pattern is deployed. If you are working with a copy of a
provided virtual system pattern, there is a recommended order and this
order is the default setting. In this view, the parts are shown in the right
column of the panel and numbered in the order they are started. The left
column provides a textual description of the order in which the parts are
started with order constraints for parts and scripts.
Topology
This link is shown when you are in the Ordering view. Click it to switch
back to the topology view in which the relationship of the parts is shown.
Icons and links
From either the Topology or Ordering view, the following icons and links are on
the upper right of the panel:
Refresh
Forces a refresh of the virtual system pattern to ensure that the diagram
shows the current state of the virtual system pattern in IBM Cloud
Orchestrator. Refreshing the virtual system pattern is useful if, for example,
the virtual system pattern has been edited by another user since it was last
retrieved by the web browser.
Undo
Undo the previous action. The virtual system pattern is saved during the
editing process, so use this option to back up to the state of the virtual
system pattern before the last edit.
Undo all
Undo all changes made in the current editing session for this virtual
system pattern. The virtual system pattern is saved during the editing
process, so use this option to back up to the state of the virtual system
pattern before all edits were made during the current editing session.
Done editing
Indicates that you have finished editing and returns to the initial view of
the virtual system pattern.
Advanced options
This link opens a configuration window for the virtual system pattern you
are editing. The configuration window contains options that are available
for this virtual system pattern. A set of default options are selected. These
are common choices for topology virtual system patterns like the one you
are editing. For more information about advanced options, see
Configuring advanced options on page 449.
Fields on the topology configuration panel
The graphical editing canvas, on the right panel of the Pattern Editor window
provides an interactive graphical display of parts. These parts, scripts, and add-ons
make up the topology of the virtual system pattern. Parts displayed on the left
panel can be added to the canvas and the parts on the canvas can be manipulated.
The parts available can vary, depending on the virtual images you have installed.
Available parts might include the following objects:
v Administrative agent
v Custom node
v Deployment manager
Hovering your cursor over the part label displays a window that provides
additional information about the part and its virtual image. The following actions
can be performed:
v Drop parts onto the palette from the Parts list
v Edit the properties for the parts on the palette (using the edit properties icon on
the part, to open a properties panel)
v Drop scripts or add-ons onto the parts from the Scripts and Add-Ons lists
v Edit parameters, if the script has parameters, using the edit properties icon.
v Change the count for some types of parts
v Lock the count
v Delete a part
v Change a part so that it comes from a different virtual image
v Delete scripts or add-ons
Procedure
You can use the user interface to manage your virtual system instance. See
Managing virtual system instances (classic) with the user interface for more
information.
Results
You have become familiar with all the actions associated with managing a virtual
system instance.
Procedure
1. Navigate to the Virtual System Instances (Classic) window by clicking
PATTERNS > Instances > Virtual System Instances (Classic) from the menu
bar.
The list of the virtual system instances being managed by IBM Cloud
Orchestrator is displayed along with the status of each virtual system instance.
This list provides an overview of the existing virtual system instances but most
management functions for these virtual system instances require you to select a
specific virtual system instance.
2. Select a specific virtual system instance to manage by clicking a
<virtual_system_name> from the list of the virtual system instances. The
details for the virtual system instance you selected are displayed. If you want
to manage a different virtual system instance, then click a different
<virtual_system_name> and the associated virtual system instance details are
displayed.
3. You can perform the following tasks with virtual system instances:
v Start an existing virtual system instance. Virtual system instances managed
by IBM Cloud Orchestrator are not always running and in the started state.
When a virtual system instance is in the stopped state, you can restart the
virtual system instance to redeploy the virtual system instance into the cloud.
v Stopping a persistent virtual system instance (classic) on page 479. Virtual
system instances can be stopped without removing the virtual system
instance from IBM Cloud Orchestrator. If a virtual system instance is
stopped, then the virtual system instance is not running, but management of
the virtual system instance is retained by IBM Cloud Orchestrator and the
virtual system instance remains available for redeployment in the future.
v Removing a virtual system instance (classic) on page 480. You can remove
a virtual system instance when it is no longer needed. By removing a virtual
system instance, you release all the IBM Cloud Orchestrator resources,
making them available for placement decisions.
v Creating a snapshot image on page 481. You can create a snapshot image
to store the current state of the virtual system instance. You can later use this
snapshot image to partially restore your virtual system instance to the stored
state.
v Restoring virtual system instances (classic) from a snapshot image on page
482. A snapshot image represents a previously captured state of the virtual
system. Using this snapshot image, you can restore the state of virtual
machines that were present in the virtual system instance to their stored state
at the time the snapshot was taken.
v Deleting snapshot images on page 483. You can delete a snapshot image of
a virtual system instance that you no longer require.
v Accessing virtual machines in your virtual system instance (classic) on
page 485. Each virtual system instance consists of a set of virtual machines
that represent a physical node in an application server environment. You can
access the individual virtual machines that make up your virtual system
instance from the IBM Cloud Orchestrator user interface.
v Viewing the details for your virtual machines on page 487. Each virtual
system instance consists of a set of virtual machines that represent a physical
node in an application server environment. The details of each of these
virtual machines can be viewed and monitored from the panel displaying the
details for the virtual system instance.
Results
After you have followed these steps, your virtual system instance is ready to be
used.
Starting a persistent virtual system instance (classic):
Virtual system instances managed by IBM Cloud Orchestrator are not always
running and in the started state. When a persistent virtual system instance is in
either the stopped state or the stored state, you can restart the virtual system
instance to redeploy the virtual system instance into the cloud.
Before you begin
You must specifically be granted access to the virtual system instance you intend
to start or be assigned the admin role to perform these steps. These steps are only
intended for starting a virtual system instance that is in the stopped state or the
stored state. To create a virtual system instance, you must deploy a pattern into the
cloud. See Deploying a virtual system pattern (classic) on page 464, for more
information about creating a virtual system instance by deploying a pattern.
About this task
When a virtual system instance is stopped, the IBM Cloud Orchestrator resources
are not released and the virtual system instance remains managed by IBM Cloud
Orchestrator. The virtual system instance still has an impact on placement
decisions though it is not actively running on the hypervisor. The IBM Cloud
Orchestrator resources assigned to this virtual system instance are maintained to
ensure that IBM Cloud Orchestrator resources are available when the virtual
system instance is restarted.
If your virtual system instance has been stored, then other virtual system instances
might have consumed the memory required to restart your virtual system instance.
If this scenario occurs, then you can stop and then store other virtual system
instances to release sufficient memory to ensure that your stored virtual system
instance can be restarted. Follow these steps to redeploy the virtual system
instance into the cloud by restarting the virtual system instance.
Procedure
From the Virtual System Instances window, click the start icon to deploy the
virtual system instance into the cloud. Deployment of the virtual system instance
into the cloud does not happen instantly. The deployment time depends on the
virtual system instance size and the system activity. The starting icon is displayed
while the deployment process is in progress or while not all of the virtual machines
in a cluster have started. When the state of the virtual system instance is The virtual
system has been deployed and is ready to use, then the virtual system instance is
running in the cloud and available for use. The failed icon is displayed if the
virtual system instance does not start successfully.
Results
Your virtual system instance is started and ready to be used.
What to do next
You can now access and use your virtual system instance. See Accessing virtual
machines in your virtual system instance (classic) on page 485 for more
information.
Stopping a persistent virtual system instance (classic):
You can stop a persistent virtual system instance without removing the virtual
system instance from IBM Cloud Orchestrator. If you stop a virtual system
instance, the virtual system instance is not running, but management of the virtual
system instance is retained by IBM Cloud Orchestrator and the virtual system
instance remains available for redeployment in the future.
Before you begin
You must specifically be granted write or all access to the virtual system instance or
be assigned the admin role to perform these steps.
About this task
When you stop a persistent virtual system instance, the resources are not released.
A stopped virtual system instance still affects placement decisions even though it is
not actively running on the hypervisor. The resources assigned to this virtual
system instance are maintained to ensure that the resources are available when you
redeploy the virtual system instance into the cloud. Follow these steps to stop the
virtual system instance.
Procedure
From the Virtual System Instances window, click the stop icon to stop your virtual
system instance. Stopping the virtual system instance does not happen instantly.
When the state of the virtual system instance is Stopped, then the virtual system
instance has finished stopping. All virtual machines are stopped when a virtual
system instance is stopped. If you must stop only certain virtual machines, then
this can be achieved using the associated virtual machine actions. Stopping a
virtual system instance does not release the associated resources. When a virtual
system instance is stopped, clicking the start icon restarts the virtual system
instance using the resources it had reserved.
Results
Your virtual system instance is no longer running but remains available for
redeployment in the future.
What to do next
Create a virtual system instance by deploying a pattern or access a different virtual
system instance that is started. See Deploying a virtual system pattern (classic)
on page 464 and Accessing virtual machines in your virtual system instance
(classic) on page 485 for more information.
Removing a virtual system instance (classic):
You can remove a virtual system instance when it is no longer needed. By
removing a virtual system instance, you release all the cloud resources, making
them available for placement decisions.
Before you begin
You must specifically be granted all access to the virtual system instance or be
assigned the admin role to perform these steps.
About this task
When a virtual system instance is stopped, the cloud resources are not released.
The processor usage and the memory allocation associated with the virtual system
instance affect placement decisions made by IBM Cloud Orchestrator. Though the
virtual system instance is not actively running, placement decisions are still
affected. The cloud resources assigned to this virtual system instance are
maintained to ensure that they are available if the virtual system instance is
redeployed into the cloud. Deleting the virtual system instance releases the
resources, and the virtual system instance is no longer a factor in placement
decisions. Follow these steps to remove the virtual system instance from IBM
Cloud Orchestrator.
Procedure
1. From the Virtual System Instances window, click the remove icon to remove the
virtual system instance and release the IBM Cloud Orchestrator resources.
Clicking the remove icon displays a window requesting confirmation that this
virtual system can be deleted.
2. In the confirmation dialog, specify what you want to delete.
Delete the virtual system instances history and log files as well.
When deleting a virtual system instance, you can delete history
information and logs from that virtual system instance. To retain this
information, ensure that the Delete the virtual system instances
history and log files as well check box is not selected in the dialog
box.
If this virtual system instance contains any scripts that are run at
virtual system instance deletion, the check box must be
disabled. Otherwise, you cannot see the logs from the run of that
script.
Note: Scripts run at virtual system instance deletion are only run if the
virtual system instance is running when it is deleted.
Ignore errors on delete.
When deleting a virtual system instance, you are also presented the
option to ignore any errors that occur with the deletion. If you attempt
to delete a virtual system instance and all associated virtual machines
cannot be deleted, the delete fails. You can use the Ignore errors on
delete option to force deletion of the virtual system instance.
CAUTION:
This option is helpful in specific situations only, so use this option
with caution. You might know that the virtual machines cannot be
deleted and you choose to clean them up manually, for example. Or,
you might know that the server that is hosting the virtual machine is
no longer available. Therefore, the delete would not occur because
the errors would block the delete. You can use the Ignore errors on
delete check box in these circumstances to force deletion of a virtual
system instance, even if the virtual machines cannot be deleted.
3. Click OK to delete the virtual system instance with the parameters you
specified.
Results
After you have followed these steps, the virtual system instance has been removed
from the cloud.
What to do next
You can create a virtual system instance by deploying a pattern or you can access
any virtual system instance that is already started. See Deploying a virtual system
pattern (classic) on page 464 for more information about creating a virtual system
instance by deploying a pattern. See Accessing virtual machines in your virtual
system instance (classic) on page 485 for more information about accessing a
virtual system instance.
Creating a snapshot image:
You can create a snapshot image to store the current state of the virtual system
instance (classic). You can later use this snapshot image to partially restore your
virtual system instance to the stored state.
Before you begin
You must be granted access to the virtual system instance or have the admin role
to complete this task.
Note: The snapshot operation is supported only for VMware, and takes a snapshot
only of the disk image, without preserving memory state. This functionality is
consistent with the OpenStack model of snapshots.
Results
The snapshot image is deleted from the system memory.
Virtual System Instances (classic) fields on the user interface:
You can view and work with the virtual system instances in the Virtual System
Instances (classic) window.
Virtual System Instances (classic) fields
The following fields are displayed in a virtual system instance:
Created on
Specifies the date and time when the virtual system instance was created.
This field is automatically generated.
From pattern
Specifies the pattern that was used to create this virtual system instance.
This field is displayed as a link to the associated pattern.
Using Environment profile
Specifies the environment profile, if one was used when creating this
virtual machine, by providing a link to that environment profile. Clicking
the link displays the details for that environment profile. If none was
specified, then this field says None provided.
Current status
Specifies the state of the virtual machine.
Updated on
Specifies the last date and time when the virtual system instance was
updated.
Access granted to
The user who first deployed the virtual system instance is automatically
granted all access to the virtual image as the owner. If you want additional
users to access this virtual system instance, you must manually add the
projects to which these users belong. See User roles in IBM Cloud
Orchestrator on page 255 for more information about object level
permissions.
Snapshot
Includes links to any snapshot images that have been taken of this virtual
system instance.
History
Specifies the activity that has been performed on this virtual system
instance.
Virtual Machines
Lists the virtual machines that are included in this virtual system instance.
If an environment profile was used, then the virtual machine name is
provided by the user who provides the environment profile. Expand any
virtual machine to display detailed information about that virtual machine.
For more information about the virtual machine fields, see Virtual
machine panel fields on the Virtual System Instances (Classic) window on
page 488.
Comments
Specifies optional information a user can append to a virtual system
instance.
Accessing virtual machines in your virtual system instance (classic):
Each virtual system instance consists of a set of virtual machines that represent a
physical node in an application server environment. You can access the individual
virtual machines that make up your virtual system instance from the user interface.
Before you begin
You must specifically be granted access to the virtual system instance you intend
to access, or be assigned the admin role, to perform these steps.
About this task
With IBM Cloud Orchestrator, you can access any of the virtual machines that are
contained by the virtual system instances.
Procedure
1. You can access virtual machines from the Virtual System Instances (Classic)
window by clicking PATTERNS > Instances > Virtual System Instances
(Classic) on the menu.
2. From the left panel of the Virtual System Instances (Classic) window, select the
virtual system instance containing the virtual machine. Information about the
virtual system instance is displayed in the right panel of the window.
3. From the right panel of the window, expand Virtual Machines by clicking the
expand icon to view a list of the virtual machines that exist in the selected
virtual system instance.
4. Expand the details for your virtual machine by clicking the expand icon next to
the <virtual_machine_name> for the virtual machine you want to access. The
number of virtual machines that exist for the virtual system instance is
dependent on the pattern that was deployed to create it. From the list of the
virtual machines included in this virtual system instance, you can view the
CPU and the Memory currently being used by a virtual machine.
5. Access your virtual machine. Use one of the following procedures to access your
virtual machine:
v Click Login under the SSH column to open a new browser window and
access the virtual machine using SSH. A prompt is displayed to enter user
name and password.
v Click VNC under the Consoles section to access your virtual machine using
Virtual Network Computing (VNC). During pattern creation, the default
setting is for your host operating system to be configured to accept VNC
connections. VNC connections can be disabled by modifying the virtual
machine properties during pattern creation.
v Click WebSphere under the Consoles section to access the WebSphere
Application Server administrative console on your virtual machine.
Important: To access your virtual machine using VNC or to access the
WebSphere Application Server administrative console on your virtual
machine, your virtual machine must be accessible from the machine from which
you are accessing the user interface. If a firewall is preventing the connection on
the required port, you must open this port to establish a connection. In addition,
the DNS server must be correctly configured to resolve the virtual machine
host name. If the DNS is not configured correctly, an entry with the IP address
and host name of the virtual machine must be present in the hosts file
(/etc/hosts on Linux, and c:\WINDOWS\system32\drivers\etc\hosts on Windows).
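For example, a hosts file entry for a virtual machine might look like the following
line; the IP address and host name shown are placeholders:

    192.0.2.15   vm1.example.com   vm1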
Results
After completing these steps, you have accessed your virtual machine from the
Virtual System Instances (Classic) window of the user interface.
Expanding disk on deployed virtual machines:
Use the Default LINUX resize disk or Default WINDOWS resize disk script
package to expand the disk of a deployed virtual machine.
About this task
After deploying a virtual machine, you can change the related flavor by editing the
Flavor field in the Virtual machines section in the Virtual System Instances
(Classic) window. When you change the flavor only the vCPU and the memory
values are changed. To change the disk size of a deployed virtual machine, you
must follow one of the following procedures:
v If you added the Default LINUX resize disk or Default WINDOWS resize disk
script package to the virtual system pattern before deploying it, perform the
following steps:
1. Click PATTERNS > Instances > Virtual System Instances (Classic) to access
the Virtual System Instances (Classic) window.
2. Select your instance and expand the Virtual Machines detailed section.
3. In the Script Packages section, select the Default LINUX resize disk or
Default WINDOWS resize disk script package and click Execute now.
For information about adding a script package to a virtual system pattern, see
Associating a script package with a pattern on page 371.
v If the Default LINUX resize disk or Default WINDOWS resize disk script
package is not part of the virtual system pattern, you must download the script
package and run it manually by performing the following steps:
1. Click PATTERNS > Pattern Design > Script Packages to open the Script
Packages window.
2. Select the Default LINUX resize disk or Default WINDOWS resize disk script
package and click Download in the Script package file field in the right
pane.
3. When the file download is completed, upload the
defaultlinuxresizedisk.zip or defaultwindowsresizedisk.zip script
package zip file to the virtual machine where you want to change the disk
size.
4. Unzip the script package file and run the resize script.
CPU
This field graphically specifies the percentage of the virtual CPU power
that is currently being used. The number of virtual CPUs available is
determined by the pattern used to create the virtual system. The default
number of virtual CPUs for a virtual machine is one.
Memory
This field displays a graph that specifies the percentage of the memory
that is currently being used by the virtual machine. The amount of
memory available is determined by the pattern used to create the
virtual system instance. The default amount of virtual memory for a
virtual machine is 2048 MB.
SSH
This field provides the Login link that you can click to log in to your
virtual machine using Secure Shell (SSH). You are prompted to log on
as a user of your choosing.
Actions
This field displays the available actions for a virtual machine. Actions
that are not available for a specific virtual machine are not active. Click
the View link to show or hide the available actions for the virtual
machine.
Clone Deploys a new instance of the virtual machine with the same
parameters.
CPU
Provides the graphic and numeric percentage of available CPU being used
by this virtual machine.
Memory
Provides the graphic and numeric percentage of available memory being
used by this virtual machine.
SSH
Actions
Clicking the View link displays or hides the following set of icons to work
with this virtual machine:
Clone Deploys a new instance of the virtual machine with the same
parameters. Script packages marked to run at virtual system
creation run after the virtual machine is created. Any maintenance
applied to the source virtual machine is applied to the cloned
virtual machine automatically. The new virtual machine does not
inherit any disk changes made to the source virtual machine. This
action is available if you are viewing a virtual machine that has
completed deployment and is ready to be cloned.
Note: The tooltip This resource is not ready to be cloned is
shown for images that cannot be cloned temporarily or permanently.
It actually means This resource cannot be cloned.
Start Starts the virtual machine.
Stop Stops the virtual machine.
Updated on
Specifies the time and date of the last change to the virtual machine.
On hypervisor
Specifies the hypervisor where this virtual machine is located by providing
a link to the hypervisor details panel. You can click the link to display the
details of the hypervisor where this virtual machine is running.
In cloud group
Specifies the cloud group where this virtual machine is located by
providing a link to the Cloud Groups panel. You can click the link to
display the details of the cloud group where this virtual machine is
running.
Registered as
Specifies how the virtual machine is registered on the hypervisor.
Stored on
Specifies the storage device associated with this virtual machine.
Hardware and network
The Hardware and network section provides the following fields:
Flavor Specifies the instance flavor that describes the memory and storage
capacity of the virtual machine. By default the flavor values are:
m1.tiny
Memory: 512 MB, vCPUs: 1, Storage: 0 GB
m1.small
Memory: 2048 MB, vCPUs: 1, Storage: 20 GB
m1.medium
Memory: 4096 MB, vCPUs: 2, Storage: 40 GB
m1.large
Memory: 8192 MB, vCPUs: 4, Storage: 80 GB
m1.xlarge
Memory: 16384 MB, vCPUs: 8, Storage: 160 GB
To change the flavor, stop the virtual machine and click Edit to select a
new flavor value. When you confirm your change, the virtual machine is
automatically restarted.
Note: When you change the flavor only the vCPU and the memory values
are changed. To change the disk size of a deployed virtual machine, you
must follow the procedure described in Expanding disk on deployed
virtual machines on page 486.
Note: If you are using IBM Cloud Orchestrator in languages other than
English, the Flavor field might be displayed as Untranslated message
RM11388.
Virtual CPU count
Specifies the number of CPU this virtual machine represents. This value is
specified in the pattern that was deployed to create the virtual system
instance. See Working with virtual system patterns (classic) on page 434
for more details on the pattern options.
Physical CPU
Specifies whether the physical CPUs are reserved for exclusive use by this
virtual machine. The Reserved status is shown if the True value is selected
for Reserve physical CPUs during deployment and it is an image variable.
CPU shares on host
The amount of CPU allocated for the host.
CPU shares consumed on host
The amount of CPU actually used by the host.
Virtual memory (MB)
Specifies the amount of virtual memory, in MB, that is allocated to this
virtual machine.
IP address and host name
Specifies the IP address and the host name of this virtual machine. The
virtual machine must be stopped before this value can be changed.
SSH public key
The name of, and a link to, the public key.
Network interface
The network interface address.
MAC address
Specifies the Media Access Control (MAC) address of the virtual machine.
Operating System
The Operating System section provides the following fields:
Name Specifies the type of operating system that is running on the virtual
machine.
Type
Shows the specific variety of the operating system that is running on the
virtual machine.
Version
Specifies the equivalent of the uname -a command for the operating
system on the virtual machine.
Note: After applying a service pack, the version of the operating system
might not be displayed by the uname -a command. The initial value is
obtained from the virtual machine, which is then stored in the database.
This is expected behavior.
WebSphere Configuration
The WebSphere Configuration section provides the following fields:
Cell Name
The cell in which this virtual machine resides.
Node Name
The node in which this virtual machine resides.
Profile Name
The profile in which this virtual machine was run.
Show all environment variables
This link specifies the set of supplied environment variables that you can
use when using a script package. See Script package environment
variables on page 388 for more information about the environment
variables.
Script Packages
The Script Packages section lists any script packages that have been run on this
virtual machine. If any script packages have been run for this virtual machine, then
links to the associated log files are also included. See Managing script packages
on page 365 for more information about script packages.
If a script is user initiated, meaning the executes attribute is set as when I initiate it,
then the start icon is displayed next to the script name. Click the icon to run the
script. There is no limit on the number of times a script is run using this method.
Scripts in this section can include add-ons. For more information about add-on
scripts, see Managing add-ons on page 408.
Consoles
The Consoles section provides a link to access your virtual machine. Using the
provided links, you can access the WebSphere Application Server administrative
console for your virtual machine. See Accessing virtual machines in your virtual
system instance (classic) on page 485 for more information about accessing your
virtual machines.
If you created virtual applications that contain these pattern types, ensure that
you download the supported version of the pattern types to continue to use your
virtual applications in IBM Cloud Orchestrator.
Use the following steps to work with pattern types:
Procedure
1. View the pattern types.
2. View the plug-ins that are associated with a pattern type.
3. Import a pattern type.
4. Accept the license agreement and enable the pattern type.
For detailed information about accepting the license agreement for specific
pattern types, see the related pattern type documentation.
5. Upgrade a pattern type.
6. Remove a pattern type.
Results
You are ready to start creating or extending virtual applications with pattern types.
Procedure
1. Click PATTERNS > Deployer Configuration > Pattern Types. The Pattern
Types palette displays and the pattern types are listed.
2. Select a pattern_type. The pattern details display on the right.
3. View the details of the pattern type, including:
Description
Specifies the description of the pattern type.
License agreement
Specifies if the license agreement is accepted.
Status Specifies the status of the pattern type: Disabled or Available. To
enable the pattern type, select Enable. After you enable the pattern
type, the status is changed to Available. You can either enable the
current pattern type if no dependencies exist, or enable all of the
prerequisites, such as accepting licenses and updating statuses.
Required
Specifies any prerequisite patterns that are required.
Plug-ins
Specifies the plug-ins that are associated with this pattern type. Click
show me all plug-ins in this pattern type to view plug-ins associated
with the pattern type. Plug-ins required for configuration are also
listed.
Dependency
Lists pattern type dependencies.
Procedure
1. Click PATTERNS > Deployer Configuration > Pattern Types. The Pattern
Types palette displays and the pattern types are listed.
2. Select a pattern_type. The pattern details display on the right.
3. Click show me all plug-ins in this pattern type and the System Plug-ins
palette displays with a list of plug-ins.
Results
You have viewed the plug-ins that are associated with the pattern type.
Procedure
1. Click PATTERNS > Deployer Configuration > Pattern Types. The Pattern
Types palette is displayed.
2. Click the New icon. The Install a pattern type window is displayed.
3. Select the Local tab, and click Browse to select the .tgz file to import as a
pattern type. Your system proceeds to upload the .tgz file.
Results
You have imported a new pattern type.
What to do next
Now you must accept the license agreement of the pattern type, configure plug-ins
in this pattern type, and enable the pattern type if you want to use it.
Results
You have removed a pattern type.
Procedure
1. Click PATTERNS > Deployer Configuration > Pattern Types. The Pattern
Types palette displays.
2. Click the New icon. The Install a new pattern type window displays.
3. Click Browse to select the .tgz file to import as a pattern type. Your system
proceeds to upload the .tgz file.
Results
You have imported an updated .tgz file as an upgraded pattern type.
Procedure
1. View the virtual application instances. Click PATTERNS > Instances > Virtual
Application Instances.
2. Select the virtual_application_instance for which you want to upgrade the
pattern type. Click the Upgrade icon. A dialog box displays where you can
confirm that you want to upgrade the pattern type to the latest version. Click
OK. A message displays across the menu that the pattern type is upgrading.
When the upgrade is complete, the Status field arrow turns green.
A pattern type defines a logical collection of plug-ins, but not the members. The
members (plug-ins) define their associations with pattern types in the config.json
file. Therefore, pattern types are dynamic collections and can be extended by third
parties. For example, the config.json file for the DB2 plug-in (not released with
the product) is as follows:
{
   "name":"db2",
   "version":"2.0.0.0",
   "files":[
      "db2/db2_wse_en-9.7.0.3a-linuxx64-20110330.tgz",
      "optim/dsadm223_iwd_20110420_1600_win.zip",
      "optim/dsdev221_iwd_20110421_1200_win.zip",
      "optim/com.ibm.optim.database.administrator.pek_2.2.jar",
      "optim/com.ibm.optim.development.studio.pek_2.2.jar"
   ],
   "patterntypes":{
      "primary":{
         "dbaas":"1.0"
      },
      "secondary":[
         {
            "webapp":"2.0"
         }
      ]
   },
   "packages":{
      "DB2":[
         {
            "persistent":true,
            "requires":{
               "arch":"x86_64"
            },
            "parts":[
               {
                  "part":"parts/db2-9.7.0.3.tgz",
                  "parms":{
                     "installDir":"/opt/ibm/db2/V9.7"
                  }
               },
               {
                  "part":"parts/db2.scripts.tgz"
               }
            ]
         }
      ]
   }
}
When you build a virtual application pattern, you create the model of a virtual
application by using components, links, and policies.
Consider an order management application with the following requirements:
v It is web-based and runs on WebSphere Application Server.
v It is highly available, with a cluster of two WebSphere Application Server nodes
and two DB2 nodes.
v It uses DB2 to store order information and other application data.
v It supports the current maximum number of orders that are received by the
company, 10 transactions per second, but it must also be able to scale to handle
a larger number of transactions, up to 15 transactions per second, as the business
increases.
v It supports backup and restore capabilities.
A virtual application builder can use IBM Cloud Orchestrator to create a virtual
application pattern by using components, links, and policies to specify each of
these parameters.
Component
Represents an application artifact such as a WAR file, and attributes such
as a maximum transaction timeout. In terms of the order management
application example, the components for the application are the WebSphere
Application Server nodes and the DB2 nodes. The WebSphere Application
Server components include the WAR file for the application, and the DB2
components connect the application to the existing DB2 server.
Link
Represents a connection between components, such as the connection
between the WebSphere Application Server components and the DB2
components in the order management application example.
Policy Represents a quality of service level for application artifacts in the virtual
application. Policies can be applied globally at the application level or
specified for individual components. For example, a logging policy defines
logging settings and a scaling policy defines criteria for dynamically
adding or removing resources from the virtual application. In terms of the
order management application example, a Response Time Based scaling
policy is applied that scales the virtual application in or out to keep the
web response time 1000 - 5000 ms.
When you deploy a virtual application, the virtual application pattern is converted
from a logical model to a topology of virtual machines that are deployed to the
cloud. Behind the scenes, the system determines the underlying infrastructure and
middleware that is required for the application, and adjusts them as needed to
ensure that the quality of service levels that are set for the application are
maintained. A deployed topology that is based on a virtual application pattern is
called a virtual application instance. You can deploy multiple virtual application
instances from a single virtual application pattern.
The components, links, and policies that are available to design a particular virtual
application pattern are dependent on the pattern type that you choose and the
plug-ins that are associated with the pattern type.
Components
The following components are available with the virtual application patterns
provided with IBM Cloud Orchestrator or purchasable at the IBM PureSystems
Centre.
Important: Components that require disk add-ons are not supported in IBM Cloud
Orchestrator.
v Application
Additional archive file
Enterprise application
Existing web service provider endpoint on page 525
Policy set on page 526
Policies
You can optionally apply policies to a virtual application to configure specific
behavior in the deployed virtual application instance. Two virtual applications
might include identical components, but require different policies to achieve
different service level agreements. For example, if you want a web application to
be highly available, you can add a scaling policy to the web application component
and specify requirements such as a processor usage threshold to trigger scaling of
the web application. At deployment time, the topology of the virtual application is
configured to dynamically scale the web application. Multiple WebSphere
Application Server instances are deployed initially for the web application and
instances are added and removed automatically based on the service levels that are
defined in the policy.
Policies can be applied only to particular types of components. For more
information, see the following links:
v Scaling policy
v Routing policy
v Java virtual machine (JVM) policy
v Log policy
Procedure
1. Click PATTERNS > Pattern Design > Virtual Application Patterns. The Virtual
Application Patterns palette displays.
2. Click the Add icon on the toolbar.
3. To build your virtual application pattern:
a. Select a pattern type from the drop-down menu.
b. Select a virtual application template.
c. Click Start Building. You have created a new virtual application associated
with a pattern type. The Pattern Builder opens in a new window where you
can add components and policies.
4. On the Virtual Application properties pane, specify the following information:
Name The name of the virtual application pattern.
Description
(Optional) The description of the virtual application pattern.
Type
v To edit the connections between the parts, hover over one of the objects until
the blue circle turns orange. Select the circle with the left mouse button, drag
a connection to the second object until the object is highlighted, and release
the mouse button.
6. Click Save.
Results
The virtual application pattern is created.
Procedure
1. Click PATTERNS > Pattern Design > Virtual Application Patterns. The Virtual
Application Patterns palette displays.
2. Select a virtual application pattern and click the Open icon.
3. Edit the virtual application pattern, as required:
v Drag the components that you want to add to the virtual application pattern
onto the canvas.
v To add policies to the virtual application pattern, click Add policy for
application and select a policy or select a component part on the canvas and
click the Add a Component Policy icon to add a component-specific policy.
v To remove parts, click the Remove Component icon in the component part.
v To edit the connections between the parts, hover over one of the objects until
the blue circle turns orange. Select the circle with the left mouse button, drag
a connection to the second object until the object is highlighted, and release
the mouse button.
4. Click Save.
What to do next
Deploy the virtual application.
Procedure
1. Click PATTERNS > Pattern Design > Virtual Application Patterns.
2. Select a virtual application pattern and click the Clone icon on the toolbar.
3. Specify the name for the copy of the virtual application pattern and click OK.
What to do next
Edit the cloned virtual application pattern as necessary.
Procedure
1. Click PATTERNS > Pattern Design > Virtual Application Patterns.
2. Select a virtual application pattern. The virtual application pattern details
display in the right pane.
3. Click the Delete icon on the toolbar.
4. Click Confirm to confirm that you want to delete the pattern.
Results
You deleted a virtual application pattern.
Procedure
1. Click PATTERNS > Pattern Design > Virtual Application Patterns. The Virtual
Application Pattern palette displays.
2. Select the virtual application for which you want to create a layer.
3. Click the Open icon to edit the pattern layer. The Pattern Builder displays. The
Layers palette displays on the bottom left of the Pattern Builder. The
topographical view of the virtual application pattern displays on the canvas.
4. Expand Layers to view the layers of the virtual application. Click the layer to
view the topographical view on the canvas.
5. Click the Create a new layer icon. The new layer is added to the Layers
palette.
You can also add a layer by importing an existing application called a reference
layer.
6. Click Save.
Results
You have created a layer for your virtual application.
What to do next
Edit the layer.
Procedure
1. Click PATTERNS > Pattern Design > Virtual Application Patterns. The Virtual
Application Pattern palette displays.
2. Select the virtual application pattern for which you want to edit a layer.
3. Click the Edit icon. The Pattern Builder opens.
4. Expand Layers to view the layers of the virtual application pattern. Click the
layer to view the topographical view on the canvas and to start editing.
5. Edit the layer. You can edit the layer in the following ways:
v Rename the layer. Click the name one time to modify it.
v Add or remove virtual application components.
v Add or remove virtual application components connections.
v Add or remove policies.
v Move components between layers. Use the move to: icon to switch the layer
group of each virtual application pattern part. When you select a virtual
application pattern part and click the move to: icon, a list of layers displays.
Select a layer to move the virtual application pattern part from the previous
layer.
6. Click Save.
Results
You have edited an existing virtual application pattern layer.
What to do next
Deploy the virtual application pattern.
Procedure
1. Click PATTERNS > Pattern Design > Virtual Application Patterns. The Virtual
Application Patterns palette displays.
2. Select the virtual application pattern for which you want to remove a layer.
3. Click the Edit icon. The Pattern Builder opens. The Layers palette displays on
the lower left of the Pattern Builder. The topographical view of the virtual
application pattern displays on the canvas.
4. Expand Layers to view the layers of the virtual application pattern.
5. Select the layer that you want to delete and click the Delete the selected layer
icon. The topographical view displays on the canvas to show that the layer is
deleted.
Results
You deleted a virtual application pattern layer.
Procedure
1. Click PATTERNS > Pattern Design > Virtual Application Patterns. The Virtual
Application Patterns palette displays.
2. Select the virtual application pattern for which you want to import a layer.
3. Click the Edit icon. The Pattern Builder opens. The Layers palette displays on
the lower left of the Virtual Application Patterns palette. The topographical
view of the virtual application pattern displays on the canvas.
4. Expand Layers to view the layers of the virtual application pattern.
5. Click the Import a virtual application icon. The Import Virtual Application
dialog box displays.
6. Select a virtual application that you want to reference as a layer and click Add.
The new application layer is now listed under the Layers palette. If you select
this layer, the topographical view displays on the canvas.
Results
You have imported a virtual application as a reference layer.
What to do next
After you import the reference layer, virtual application pattern components in
other layers can connect to the reference layer.
When you create a virtual application from a virtual application template, you can
specify property values that are not configured or edit values that are not locked.
To access the virtual application templates, click PATTERNS > Pattern Design >
Virtual Application Templates.
Creating a virtual application template:
You can create a new virtual application template that is used to create a virtual
application. The template can be saved in the IBM Cloud Orchestrator catalog.
Before you begin
You must be assigned the catalogeditor role or the admin role to perform these
steps.
You can also use an existing virtual application template that was shipped with the
product or you can create a virtual application template from an existing virtual
application.
To create a virtual application template from an existing virtual application, click
PATTERNS > Pattern Design > Virtual Application Patterns and select a virtual
application in the list. Click the New icon to create a virtual application template.
Click the Edit icon to start the Pattern Builder.
Procedure
1. Click PATTERNS > Pattern Design > Virtual Application Templates. The
Virtual Application Templates palette displays.
2. Click the New icon on the toolbar.
3. To create your virtual application template:
a. Select a pattern type from the drop-down menu.
b. Select a virtual application template.
c. Click Start Building. You have created a new virtual application template
associated with a pattern type. The Pattern Builder opens in a new window
where you can add components and policies.
4. On the Virtual Application properties pane, specify the following information:
Name The name of the virtual application pattern.
Description
(Optional) The description of the virtual application pattern.
Type
To learn more about using the command-line interface, see the command-line
interface documentation.
After setting up system plug-ins, you can build a virtual application pattern with
component parts or edit an existing virtual application pattern. After the
administrator enables them, you can select the specified version and list all the
plug-ins in the IBM version, release, modification, and fix level (v.r.m.f) format.
Plug-ins shipped with pattern types:
Several preinstalled system plug-ins are available with the Foundation pattern type
that is shipped with IBM Cloud Orchestrator. You can use these plug-ins to
extend the function of virtual applications.
In the IBM Cloud Orchestrator user interface, click PATTERNS > Deployer
Configuration > System Plug-ins and select the Foundation pattern type to see a
list of the related plug-ins.
In the Pattern Builder, components are grouped into categories.
Adding system plug-ins to the catalog:
You can add a plug-in to the catalog. Plug-ins define components, links, and
policies for virtual application patterns.
Before you begin
You must be assigned the catalogeditor role or the admin role to perform these
steps.
Attention:
Procedure
1. Open the IBM Cloud Orchestrator user interface.
2. Click PATTERNS > Deployer Configuration > System Plug-ins.
3. Select the Add icon to upload the plug-in .tgz file. A dialog box displays
where you can browse for a plug-in .tgz file to import.
Important: If the .tgz file is more than 2 GB, use the command-line interface
to upload the plug-in file to your system.
Deleting plug-ins from the catalog:
You can remove a plug-in from the catalog when it is no longer needed.
Before you begin
You must be assigned the catalogeditor role or the admin role to perform these
steps.
Procedure
1. Open the IBM Cloud Orchestrator user interface.
2. Click PATTERNS > Deployer Configuration > System Plug-ins.
3. Select the Delete icon to delete the plug-in file. A window displays requesting
confirmation that you want to remove the plug-in.
Components
The following components are available with the virtual application patterns
provided with IBM Cloud Orchestrator or purchasable at the IBM PureSystems
Centre.
Important: Components that require disk add-ons are not supported in IBM Cloud
Orchestrator.
v Application
Additional archive file
Enterprise application
Existing web service provider endpoint on page 525
Policy set on page 526
Web application, such as IBM WebSphere Application Server
v Database
Data Studio web console
Database, such as IBM DB2
Existing database (DB2) on page 536
Existing database (Informix)
Existing database (Oracle)
Existing IMS database on page 541
v Messaging
Existing Messaging Service (WebSphere MQ) on page 553
Existing queue (WebSphere MQ) on page 556
Existing topic (WebSphere MQ) on page 555
v OSGi
External OSGi bundle repository
OSGi application
v Transaction Processing
CICS Transaction Gateway.
Existing IMS TM on page 565.
v User Registry
Existing User Registry (IBM Tivoli Directory Server) on page 542
Existing User Registry (Microsoft Active Directory) on page 546
User Registry (Tivoli Directory Server) on page 549
v Other components
Generic target
Debug on page 609
Policies
You can optionally apply policies to a virtual application to configure specific
behavior in the deployed virtual application instance. Two virtual applications
might include identical components, but require different policies to achieve
different service level agreements. For example, if you want a web application to
be highly available, you can add a scaling policy to the web application component
and specify requirements such as a processor usage threshold to trigger scaling of
the web application. At deployment time, the topology of the virtual application is
configured to dynamically scale the web application. Multiple WebSphere
Application Server instances are deployed initially for the web application and
instances are added and removed automatically based on the service levels that are
defined in the policy.
Policies can be applied only to particular types of components. For more
information, see the following links:
v Scaling policy
v Routing policy
v Java virtual machine (JVM) policy
v Log policy
Application components:
There are several application components to choose from when building a virtual
application pattern.
About this task
v Additional archive file on page 519
v Enterprise application component on page 520
v Existing web service provider endpoint on page 525
v Policy set on page 526
v Web application component on page 528
Component name: Enterprise application (WebSphere Application Server)
Description: An enterprise application (WebSphere Application Server) cloud
component represents an execution service for Java EE enterprise archive (EAR files).
6. You can also view the additional archive file component properties by viewing
the plug-in information. Click PATTERNS > Deployer Configuration > System
Plug-ins. Select file/x.x.x.x from the System Plug-ins palette where x.x.x.x
corresponds to the version numbers. The component plug-in configuration
information displays on the canvas.
Results
You have edited a current component, edited an existing component, or added one.
Enterprise application component:
The enterprise application (WebSphere Application Server) component represents
an execution service for Java Platform, Enterprise Edition (Java EE) enterprise
application (EAR) files.
Before you begin
Attention: You cannot use an enterprise application that includes Container
Managed Persistence V 2.0 beans. This type of application requires deploy tools
that are not included in this product's WebSphere Application Server binary files.
The following are attributes for an enterprise application:
v EAR file: Specifies the enterprise archive (.ear) file to be uploaded. This
attribute is required.
v Total transaction lifetime timeout: Specifies the default maximum time, in
seconds, allowed for a transaction that is started on this server before the
transaction service initiates timeout completion. Any transaction that does not
begin completion processing before this timeout occurs is rolled back. The
default is 120 seconds.
v Asynchronous response timeout: Specifies the amount of time, in seconds, that
the server waits for responses to WS-AT protocol messages. The default is 120
seconds.
v Client inactivity timeout: Specifies the maximum duration, in seconds, between
transactional requests from a remote client. Any period of client inactivity that
exceeds this timeout results in the transaction being rolled back in this
application server. The default is 60 seconds.
v Maximum transaction timeout: Specifies, in seconds, the maximum transaction
timeout for transactions that run in this server. This value must be greater than, or
equal to, the value specified for the total transaction lifetime timeout. The default is 300
seconds.
v Interim fixes URL: Specifies the location or URL of the selected interim fixes. This
URL is used by the WebSphere Application Server virtual machine to download
interim fixes for update.
Policies
Description: A messaging service represents a connection to an external messaging
system such as WebSphere MQ.
Connection: JNDI name of the Java Message Service (JMS) connection factory;
Resource references of the JMS connection factory; Client ID
Policy set
Description
Connection
Generic target
Database (DB2)
Existing Information
Management Systems (IMS)
database system.
Description
Connection
v User filter
Existing IMS TM
Enterprise application
(WebSphere Application
Server)
v Group filter
v Role name
v User role mapping
v Group role mapping
v Special subject mapping
Description: A message queue on an external WebSphere MQ messaging service
through which messages are sent and received.
Connection: JNDI name; Resource environment references; Message destination
references
Component name: Enterprise application (WebSphere Application Server)
Description: An enterprise application (WebSphere Application Server) cloud
component represents an execution service for Java Platform, Enterprise Edition
(Java EE) enterprise archive (EAR files).
Connection properties: Service name

Component name: Web application (WebSphere Application Server)
Description: A web application (WebSphere Application Server) cloud component
represents an execution service for Java EE web archive (WAR files).
Connection properties: Service name
Component name: Enterprise application (WebSphere Application Server)
Description: An enterprise application (WebSphere Application Server) cloud
component represents an execution service for Java Platform, Enterprise Edition
(Java EE) enterprise archive (EAR files).
Connection properties: Service name; Binding file; Key store; Trust store
(encryption); Trust store (digital signature)

Component name: Web application (WebSphere Application Server)
Description: A web application (WebSphere Application Server) cloud component
represents an execution service for Java EE web archive (WAR files).
Connection properties: Service name; Binding file; Key store; Trust store
(encryption); Trust store (digital signature)
To make a connection between a component and the policy set, hover over the
blue circle on the enterprise application component part on the canvas. When the
blue circle turns yellow, draw a connection between the policy set and component.
About this task
You can view, edit, or add this virtual application component in the user interface
as follows:
Procedure
1. Click PATTERNS > Pattern Design > Virtual Application Patterns.
2. Select a virtual_application_pattern.
3. Click Edit the virtual application icon located in the upper right corner of the
Pattern Builder palette.
4. To edit an existing policy set component, select the Policy Set component part
on the Pattern Builder canvas. The properties panel displays.
For more details on the properties panel settings, view the help by selecting the
help icon on the properties panel.
5. To add a new policy set component to a virtual application pattern, click the
Policy Set component listed under the Application Components and drag the
icon to the Pattern Builder canvas. The properties panel for the component
displays to the right of the Pattern Builder palette. For more details on the
properties panel settings, view the help by selecting the help icon on the
properties panel.
6. You can also view the policy set component properties by viewing the plug-in
information. Click PATTERNS > Deployer Configuration > System Plug-ins.
Select webservice/x.x.x.x from the System Plug-ins palette where x.x.x.x
corresponds to the version numbers. The component plug-in configuration
information displays on the canvas.
Results
You have edited a current component, edited an existing component, or added one.
Description
Connections
Table 49. Incoming connectable components
Component name
Description
System updates
Description: A messaging service represents a connection to an external messaging
system such as WebSphere MQ.
Connection: JNDI name of the Java Message Service (JMS) connection factory;
Resource references of the JMS connection factory; Client ID
Policy set
Generic target
Database (DB2)
Description
An existing Informix
database component
represents a connection to a
remote Informix database
instance running remotely
outside of the cloud. The
configuration properties
allow a connection to be
made to the remote Informix
database.
Existing Information
Management Systems (IMS)
database
Connection
v User filter
v Group filter
v Role name
v User role mapping
v Group role mapping
v Special subject mapping
Description
Existing IMS TM
Enterprise application
(WebSphere Application
Server)
An enterprise application,
such as a WebSphere
Application Server
application, cloud component
represents an execution
service for Java EE enterprise
applications (EAR files).
Description: A message queue on an external WebSphere MQ messaging service
through which messages are sent and received.
Connection: JNDI name; Resource environment references; Message destination
references
Use the property panel to upload the WAR files. You can also specify a context
root. To make associations with other services, create a link to the corresponding
cloud component. At this time, support is limited to one database and one user
registry connection. A high availability (HA) policy object might be attached to
specify an HA pattern.
In addition to uploading your WAR file, you can upload additional files, such as a
compressed file containing configuration details or other information. When the
WebSphere process starts, the compressed file is extracted to a directory and the
icmp.external.directory system property is set. If you attach an HA policy to the
web application component, each virtual machine contains a copy of the
compressed file, and any updates made to the file or directory on one virtual
machine are not reflected in the copy of the file on another virtual machine.
By default, the application is available at http://{ip_address}/{context_root},
where:
v {ip_address} is obtained from the list of deployed virtual application patterns.
Database (DB2):
The DB2 database component represents a pattern-deployed database service.
Before you begin
The following are the attributes for a DB2 database component:
v Database name: Specifies the name of the database that you want to
deploy.
v Database description: Specifies a description of the database that you want to
deploy.
v Purpose: Specifies the purpose for the database. Select Production or
Non-Production. The default value is Production.
v Source:
Select one of the following sources:
Clone from a database image: Specifies a clone from the database image.
Apply database standard: Specifies to apply a database standard.
Settings include:
- Maximum User Data Space (GB):
Specifies the maximum size of the data space, in gigabytes, in the database
that you want to deploy. The default value is 10 GB.
- Workload standards
Specifies the workload standards. Settings include:
v departmental_OLTP: Specifies the departmental OLTP standard. The
workload type is Departmental OLTP.
v dynamic_datamart: Specifies the dynamic data mart standard. The
workload type is Dynamic data mart.
- DB2 Compatibility Mode:
Specifies the DB2 compatibility mode. Select Default Mode or Oracle Mode.
The default value is Default Mode.
- Schema file:
Specifies the schema file (*.ddl, *.sql) that defines the database schema.
Click Browse to search for the file on your system.
Connections
Table 51. Incoming connectable components
Component name
Description
Component name: Enterprise application (WebSphere Application Server)
Description: An enterprise application (WebSphere Application Server) cloud
component represents an execution service for Java EE enterprise applications
(EAR files).
Connection properties: JNDI name of the data source; Resource references of the
data source; Non-transactional data source
During deployment, the JNDI name is set to the corresponding data source, and the name must match the name that is
coded into the application.
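For illustration only, such a reference is typically declared in the application deployment descriptor with the standard Java EE resource-ref element. The reference name in the following sketch, jdbc/orderDS, is a hypothetical example; your application must declare whatever name it actually looks up:
<!-- Hypothetical example: declares the data source reference that the link
     property panel maps to the existing DB2 database -->
<resource-ref>
    <res-ref-name>jdbc/orderDS</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
</resource-ref>
The application then looks up the data source under java:comp/env/jdbc/orderDS.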
About this task
You can view, edit, or add this virtual application component in the user interface
as follows:
Procedure
1. Click PATTERNS > Pattern Design > Virtual Application Patterns.
2. Select a virtual_application_pattern.
3. Click Edit the virtual application icon located in the upper right corner of the
Pattern Builder palette.
4. To edit an existing remote DB2 database component, select the Existing
Database component part on the Pattern Builder canvas. The properties panel
displays.
For more details on the properties panel settings, view the help by selecting the
help icon on the properties panel.
5. To add an existing DB2 database component to a virtual application pattern,
click the Existing Database (DB2) component listed under the Database
Components and drag the icon to the Pattern Builder canvas. The properties
panel for the component displays to the right of the Pattern Builder palette. For
more details on the properties panel settings, view the help by selecting the
help icon on the properties panel.
6. You can also view the remote DB2 database component properties by viewing
the plug-in information. Click PATTERNS > Deployer Configuration > System
Plug-ins. Select wasdb2/x.x.x.x from the System Plug-ins palette where x.x.x.x
corresponds to the version numbers. The component plug-in configuration
information displays on the canvas.
Results
You have edited a current component, edited an existing component, or added one.
Existing database (Informix):
An existing Informix database component represents a connection to an
Informix database that runs remotely, outside of the cloud infrastructure. The
configuration properties allow a connection to be made to the remote Informix
database.
Before you begin
The following are the attributes for a remote Informix database component:
v Database name: Specifies the name of the existing Informix database. This
attribute is required.
v Server host name or IP address: Specifies the server host name or IP address of
the existing Informix database. This attribute is required.
v Server port number: Specifies the port number of the existing Informix
database. The default is 9088. This attribute is required.
v User name: Specifies the user name to access the existing Informix database.
This attribute is required.
v Password: Specifies the password to access the existing Informix database. This
attribute is required.
Connections
Table 53. Incoming connectable components
Component name: Enterprise application (WebSphere Application Server)
Description: An enterprise application (WebSphere Application Server) cloud
component represents an execution service for Java EE enterprise applications
(EAR files).
Connection properties: JNDI name of the data source; Resource references of the
data source; Non-transactional data source
Component name: Enterprise application (WebSphere Application Server)
Description: An enterprise application (WebSphere Application Server) cloud
component represents an execution service for Java EE enterprise archive (EAR files).
Connection properties: JNDI name of the data source; Resource references of the
data source; Non-transactional data source
Description
Connection properties
v JNDI name of the data
source
v Resource references of the
data source
v Non-transactional data
source
Component name: Enterprise application (WebSphere Application Server)
Description: An enterprise application (WebSphere Application Server) cloud
component represents an execution service for Java EE enterprise applications
(EAR files).
Connection properties: JNDI name of the data source; Resource references of the
data source
Procedure
1. Click PATTERNS > Pattern Design > Virtual Application Patterns.
2. Select a virtual_application_pattern.
3. Click Edit the virtual application icon located in the upper right corner of the
Pattern Builder palette.
4. To add a new IMS database component to a virtual application pattern, click
the Existing IMS Database component listed under the Databases
Components and drag the icon to the Pattern Builder canvas. The properties
panel for the database component displays to the right of the Pattern Builder
palette. For more details on the properties panel settings, view the help by
selecting the help icon on the properties panel.
5. To edit an existing IMS database component, select the Existing IMS Database
component part on the Pattern Builder canvas. The properties panel displays.
For more details on the properties panel settings, see the properties descriptions
or view the help by selecting the help icon on the properties panel.
6. You can also view the database component properties by viewing the plug-in
information. Click PATTERNS > Deployer Configuration > System Plug-ins.
Select imsdb/x.x.x.x from the System Plug-ins palette where x.x.x.x corresponds
to the version numbers. The component plug-in configuration information
displays on the canvas.
Results
You have edited a current component, edited an existing component, or added one.
User Registry components:
There are several user registry components to choose from when building a virtual
application pattern.
About this task
v Existing User Registry (IBM Tivoli Directory Server)
v Existing User Registry (Microsoft Active Directory) on page 546
v User Registry (Tivoli Directory Server) on page 549
Existing User Registry (IBM Tivoli Directory Server):
An existing user registry cloud component represents an existing Lightweight
Directory Access Protocol (LDAP) service that can be attached to a web application
component or an enterprise application component. The LDAP service provides a
user registry for container-managed security.
Before you begin
The following are attributes for the user registry component:
v Server Hostname or IP address: Specifies the hostname or IP address of the
remote LDAP. This attribute is required.
v Server Port Number: Specifies the port number of the remote LDAP. The default
is 389 or 636 for Secure Socket Layer (SSL). This attribute is required.
v Login domain name (DN): Specifies the login DN. This attribute is required.
v Password: Specifies the password to access the remote LDAP. This attribute is
required.
v Domain Suffix of LDAP: Specifies the domain suffix of the remote LDAP. This
attribute is required.
v Server SSL certificate: Specifies the SSL port certificate for the remote LDAP.
v User filter: Specifies the LDAP user filter that searches the existing user registry
for users.
v Group filter: Specifies the LDAP group filter that searches the existing user
registry for groups.
Default settings
Tivoli Directory Server is registered to the federated repository in WebSphere
Application Server using Virtual Member Manager (VMM) with the following
settings:
v Login properties of VMM = uid or cn
v Entity type
Object class of PersonAccount = Person or inetOrgPerson in ITDS
Object class of Group = groupOfUniqueNames or groupOfNames in ITDS
Note: The default value for the object class is groupOfUniqueNames. This value
cannot be changed.
Connections
Table 56. Incoming connectable components
Component name
Description
Connection properties
v User filter
v Group filter
v Role name
v User role mapping
v Group role mapping
v Special subject mapping
Enterprise application
(WebSphere Application
Server)
An enterprise application
(WebSphere Application
Server) cloud component
represents an execution
service for Java EE enterprise
applications (EAR files).
v User filter
v Group filter
v Role name
v User role mapping
v Group role mapping
v Special subject mapping
v User filter
v Group filter
v Role name
v User role mapping
v Group role mapping
v Special subject mapping
objectclass: ePerson
cn: user2
userpassword: user2
initials: user2
sn: user2
uid: user2
dn: cn=group1,o=acme,c=us
objectclass: groupOfNames
objectclass: top
cn: manager
member: cn=user2,o=acme,c=us
The web.xml file defines the roles and security policy for the application. role1 can
only access the protected resources.
<?xml version="1.0" encoding="UTF-8"?>
<web-app id="WebApp_ID" version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com
/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
<display-name>HitCountWeb</display-name>
<servlet>
<description></description>
<display-name>HitCountServlet</display-name>
<servlet-name>HitCountServlet</servlet-name>
<servlet-class>com.ibm.samples.hitcount.HitCountServlet</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>HitCountServlet</servlet-name>
<url-pattern>/*</url-pattern>
</servlet-mapping>
<security-constraint>
<display-name>AllAuthenticated</display-name>
<web-resource-collection>
<web-resource-name>All</web-resource-name>
<url-pattern>/*</url-pattern>
<http-method>GET</http-method>
<http-method>PUT</http-method>
<http-method>HEAD</http-method>
<http-method>TRACE</http-method>
<http-method>POST</http-method>
<http-method>DELETE</http-method>
<http-method>OPTIONS</http-method>
</web-resource-collection>
<auth-constraint>
<description>Auto generated Authorization Constraint</description>
<role-name>role1</role-name>
</auth-constraint>
<user-data-constraint>
<transport-guarantee>CONFIDENTIAL</transport-guarantee>
</user-data-constraint>
</security-constraint>
<login-config>
<auth-method>FORM</auth-method>
<realm-name></realm-name>
<form-login-config>
<form-login-page>/login.jsp</form-login-page>
<form-error-page>/login.jsp?error=Invalid+username+or+password</form-error-page>
</form-login-config>
</login-config>
<security-role>
<description>allowed group</description>
<role-name>role1</role-name>
</security-role>
</web-app>
The binding file binds the group1 group to the role1 role.
Group Filter: Specifies the LDAP group filter that searches the existing user
registry for groups.
Default settings
Microsoft Active Directory Server is registered to the federated repository in
WebSphere Application Server using Virtual Member Manager (VMM) with the
following settings:
v Login properties of VMM = uid or cn
*sAMAccountName is mapped to both uid and cn
v Entity type
Object class of PersonAccount = user
Object class of Group = group
Connections
Description
Connection properties
v Role name
An enterprise application
(WebSphere Application
Server) cloud component
represents an execution
service for Java EE enterprise
archive (EAR files).
v Role name
Enterprise application
(WebSphere Application
Server)
6. You can also view the user registry component properties by viewing the
plug-in information. Click PATTERNS > Deployer Configuration > System
Plug-ins. Select wasldap/x.x.x.x from the System Plug-ins palette where x.x.x.x
corresponds to the version numbers. The component plug-in configuration
information displays on the canvas.
Results
You have edited a current component, edited an existing component, or added one.
Example
The following examples illustrate the three metadata files that are required to set
up an enterprise application with the user registry component.
The LDIF file defines the users and groups for the application. user2 is in the
group1 group.
dn: o=acme,c=us
objectclass: organization
objectclass: top
o: ACME
dn: cn=user2,o=acme,c=us
objectclass: inetOrgPerson
objectclass: organizationalPerson
objectclass: person
objectclass: top
objectclass: ePerson
cn: user2
userpassword: user2
initials: user2
sn: user2
uid: user2
dn: cn=group1,o=acme,c=us
objectclass: groupOfNames
objectclass: top
cn: manager
member: cn=user2,o=acme,c=us
The web.xml file defines the roles and security policy for the application. role1 can
only access the protected resources.
<?xml version="1.0" encoding="UTF-8"?>
<web-app id="WebApp_ID" version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com
/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
<display-name>HitCountWeb</display-name>
<servlet>
<description></description>
<display-name>HitCountServlet</display-name>
<servlet-name>HitCountServlet</servlet-name>
<servlet-class>com.ibm.samples.hitcount.HitCountServlet</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>HitCountServlet</servlet-name>
<url-pattern>/*</url-pattern>
</servlet-mapping>
<security-constraint>
<display-name>AllAuthenticated</display-name>
<web-resource-collection>
<web-resource-name>All</web-resource-name>
<url-pattern>/*</url-pattern>
<http-method>GET</http-method>
<http-method>PUT</http-method>
<http-method>HEAD</http-method>
<http-method>TRACE</http-method>
<http-method>POST</http-method>
<http-method>DELETE</http-method>
<http-method>OPTIONS</http-method>
</web-resource-collection>
<auth-constraint>
<description>Auto generated Authorization Constraint</description>
<role-name>role1</role-name>
</auth-constraint>
<user-data-constraint>
<transport-guarantee>CONFIDENTIAL</transport-guarantee>
</user-data-constraint>
</security-constraint>
<login-config>
<auth-method>FORM</auth-method>
<realm-name></realm-name>
<form-login-config>
<form-login-page>/login.jsp</form-login-page>
<form-error-page>/login.jsp?error=Invalid+username+or+password</form-error-page>
</form-login-config>
</login-config>
<security-role>
<description>allowed group</description>
<role-name>role1</role-name>
</security-role>
</web-app>
The binding file binds the group1 group to the role1 role.
<?xml version="1.0" encoding="UTF-8"?>
<application-bnd xmlns="http://websphere.ibm.com/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://websphere.ibm.com/xml/ns/javaee
http://websphere.ibm.com/xml/ns/javaee/ibm-application-bnd_1_0.xsd"
version="1.0">
<security-role name="role1">
<group name="group1" />
</security-role>
</application-bnd>
v Group filter: Specifies the LDAP group filter that searches the existing user
registry for groups. This attribute is required.
Default settings
Tivoli Directory Server is registered to the federated repository in WebSphere
Application Server using Virtual Member Manager (VMM) with the following
settings:
v Login properties of VMM = uid or cn
v Entity type
Object class of PersonAccount = Person or inetOrgPerson in ITDS
Object class of Group = groupOfUniqueNames or groupOfNames in ITDS
Note: The default value for the object class is groupOfUniqueNames. This value
cannot be changed.
Connections
Table 58. Incoming connectable components
Component name
Description
Connection properties
v User filter
v Group filter
v Role name
v User role mapping
v Group role mapping
v Special subject mapping
Enterprise application
(WebSphere Application
Server)
An enterprise application
(WebSphere Application
Server) cloud component
represents an execution
service for Java EE enterprise
applications (EAR files).
v User filter
v Group filter
v Role name
v User role mapping
v Group role mapping
v Special subject mapping
v User filter
v Group filter
v Role name
v User role mapping
v Group role mapping
v Special subject mapping
To make a connection between a component and the user registry, hover over the
blue circle on the user registry component part on the canvas. When the blue circle
turns yellow, draw a connection between the user registry and component.
About this task
The current implementation supports a one-time upload of users and groups in an
LDIF file, and applications are currently limited to enterprise applications. Within
the application, the roles are defined in the web.xml file. Bindings of roles to users
and groups are defined in the META-INF/ibm-application-bnd.xml file. Bind the
roles to groups for ease of management.
You can view, edit, or add this virtual application component in the user interface
as follows:
Procedure
1. Click PATTERNS > Pattern Design > Virtual Application Patterns.
2. Select a virtual_application_pattern.
3. Click Edit the virtual application icon located in the upper right corner of the
Pattern Builder palette.
4. To edit an existing user registry component, select the component part on the
Pattern Builder canvas. The properties panel displays.
For more details on the properties panel settings, view the help by selecting the
help icon on the properties panel.
5. To add a user registry component to a virtual application pattern, click User
Registry (Tivoli Directory Server) listed under the User Registry Components
and drag the icon to the Pattern Builder canvas. The properties panel for the
component displays to the right of the Pattern Builder palette. For more details
on the properties panel settings, view the help by selecting the help icon on the
properties panel.
6. You can also view the user registry component properties by viewing the
plug-in information. Click PATTERNS > Deployer Configuration > System
Plug-ins. Select tds/x.x.x.x from the System Plug-ins palette where x.x.x.x
corresponds to the version numbers. The component plug-in configuration
information displays on the canvas.
Results
You have edited a current component, edited an existing component, or added one.
Example
The following examples illustrate the three metadata files that are required to set
up an enterprise application with the user registry component.
The LDIF file defines the users and groups for the application. user2 is in the
group1 group.
dn: o=acme,c=us
objectclass: organization
objectclass: top
o: ACME
dn: cn=user2,o=acme,c=us
objectclass: inetOrgPerson
objectclass: organizationalPerson
objectclass: person
objectclass: top
objectclass: ePerson
cn: user2
userpassword: user2
initials: user2
sn: user2
uid: user2
dn: cn=group1,o=acme,c=us
objectclass: groupOfNames
objectclass: top
cn: manager
member: cn=user2,o=acme,c=us
The web.xml file defines the roles and security policy for the application. role1 can
only access the protected resources.
<?xml version="1.0" encoding="UTF-8"?>
<web-app id="WebApp_ID" version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com
/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
<display-name>HitCountWeb</display-name>
<servlet>
<description></description>
<display-name>HitCountServlet</display-name>
<servlet-name>HitCountServlet</servlet-name>
<servlet-class>com.ibm.samples.hitcount.HitCountServlet</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>HitCountServlet</servlet-name>
<url-pattern>/*</url-pattern>
</servlet-mapping>
<security-constraint>
<display-name>AllAuthenticated</display-name>
<web-resource-collection>
<web-resource-name>All</web-resource-name>
<url-pattern>/*</url-pattern>
<http-method>GET</http-method>
<http-method>PUT</http-method>
<http-method>HEAD</http-method>
<http-method>TRACE</http-method>
<http-method>POST</http-method>
<http-method>DELETE</http-method>
<http-method>OPTIONS</http-method>
</web-resource-collection>
<auth-constraint>
<description>Auto generated Authorization Constraint</description>
<role-name>role1</role-name>
</auth-constraint>
<user-data-constraint>
<transport-guarantee>CONFIDENTIAL</transport-guarantee>
</user-data-constraint>
</security-constraint>
<login-config>
<auth-method>FORM</auth-method>
<realm-name></realm-name>
<form-login-config>
<form-login-page>/login.jsp</form-login-page>
<form-error-page>/login.jsp?error=Invalid+username+or+password</form-error-page>
</form-login-config>
</login-config>
<security-role>
<description>allowed group</description>
<role-name>role1</role-name>
</security-role>
</web-app>
The binding file binds the group1 group to the role1 role.
<?xml version="1.0" encoding="UTF-8"?>
<application-bnd xmlns="http://websphere.ibm.com/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://websphere.ibm.com/xml/ns/javaee
http://websphere.ibm.com/xml/ns/javaee/ibm-application-bnd_1_0.xsd"
version="1.0">
<security-role name="role1">
<group name="group1" />
</security-role>
</application-bnd>
Messaging components:
There are several messaging components to choose from when building a virtual
application pattern.
About this task
v Existing Messaging Service (WebSphere MQ)
v Existing topic (WebSphere MQ) on page 555
v Existing queue (WebSphere MQ) on page 556
Existing Messaging Service (WebSphere MQ):
An existing message service component represents a connection to an external
messaging system such as WebSphere MQ. The presence of a messaging system
allows an enterprise application running on WebSphere Application Server to
connect to the external messaging resource, such as WebSphere MQ.
Before you begin
The following are attributes for the messaging service:
v Queue manager name: Specifies the name of the queue manager to connect to.
This attribute is required.
v Server host name or IP address: Specifies the TCP/IP host name or address of
the external WebSphere MQ messaging service. This attribute is required.
v Server Port Number: Specifies the TCP/IP port on which the external
WebSphere MQ message service is listening for connections. This attribute is
required. The default port is 1414.
v Channel name: Specifies the name of the channel definition to use when
accessing the WebSphere MQ queue manager. This attribute is required. The
default is SYSTEM.DEF.SVRCONN.
Connections
Table 59. Incoming connectable components
Component name: Enterprise application (WebSphere Application Server)
Description: An enterprise application (WebSphere Application Server) cloud
component represents an execution service for Java EE enterprise applications
(EAR files).
Connection properties: JNDI Name of JMS connection factory; Resource references
of JMS connection factory; Client ID
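As an illustration of the JNDI name and resource reference properties in the table above, a Java EE application typically declares the connection factory in its deployment descriptor with a standard resource-ref element. The names in this sketch, such as jms/orderCF, are hypothetical examples:
<!-- Hypothetical example: declares the JMS connection factory reference that
     the link to the existing WebSphere MQ messaging service resolves -->
<resource-ref>
    <res-ref-name>jms/orderCF</res-ref-name>
    <res-type>javax.jms.ConnectionFactory</res-type>
    <res-auth>Container</res-auth>
</resource-ref>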
The application is assumed to use JNDI settings to locate the topic. Specify the
JNDI name in the link property panel, either as a hard-coded JNDI name or by
selecting the relevant application resource-references from the property panel list
box. During deployment, the JNDI name is set to the corresponding topic, and
mapped, if required, to the relevant resource reference in the application.
To make a connection between an application component and the messaging
service, hover over the blue circle on the messaging service component part on the
canvas. When the blue circle turns yellow, draw a connection between the
messaging service and application component.
About this task
The messaging service component represents a connection to an instance of IBM
WebSphere MQ. The component can be configured to create a connection to your
IBM WebSphere MQ installation. When you click the messaging service component
on the Pattern Builder canvas, a properties panel displays.
You can view, edit, or add this virtual application component in the user interface
as follows:
Procedure
1. Click PATTERNS > Pattern Design > Virtual Application Patterns.
2. Select a virtual_application_pattern.
3. Click Edit the virtual application icon located in the upper right corner of the
Pattern Builder palette.
4. To edit an existing messaging service component, select the component part on
the Pattern Builder canvas. The properties panel displays.
For more details on the properties panel settings, view the help by selecting the
help icon on the properties panel.
5. To add a messaging service component to a virtual application pattern, click
Existing Messaging Service (WebSphere MQ) listed under the Messaging
Components and drag the icon to the Pattern Builder canvas. The properties
panel for the transaction processing component displays to the right of the
Pattern Builder palette. For more details on the properties panel settings, view
the help by selecting the help icon on the properties panel.
6. You can also view the existing WebSphere MQ service properties by viewing
the plug-in information. Click PATTERNS > Deployer Configuration > System
Plug-ins. Select wasmqx/x.x.x.x from the System Plug-ins palette where x.x.x.x
corresponds to the version numbers. The component plug-in configuration
information displays on the canvas.
Results
You have viewed an existing component, edited it, or added a new one.
Existing topic (WebSphere MQ):

Connections

Component name: Enterprise application (WebSphere Application Server)
Description: An enterprise application (WebSphere Application Server) cloud component represents an execution service for Java EE enterprise applications (EAR files).
Connection properties:
v JNDI name
v Resource environment references
v Message destination references
The application is assumed to use JNDI settings to locate the topic. Specify the
JNDI name in the link property panel, either as a hard-coded JNDI name or by
selecting the relevant application resource-references from the property panel list
box. During deployment, the JNDI name is set to the corresponding topic, and
mapped, if required, to the relevant resource reference in the application.
The required attributes for Link to WebSphere MQ topic are as follows:
v JNDI name: The JNDI name that the application uses to locate the topic
destination. The JNDI name is only required if the application accesses the topic
directly without using a resource environment reference or message destination
reference.
v Resource environment references: The resource environment references that the
application uses to locate the topic.
v Message destination references: The message destination references that the
application uses to locate the topic.
To make a connection between a component and the messaging topic, hover over
the blue circle on the topic component part on the canvas. When the blue circle
turns yellow, draw a connection between the topic and component.
Existing queue (WebSphere MQ):

Connections

Component name: Enterprise application (WebSphere Application Server)
Description: An enterprise application (WebSphere Application Server) cloud component represents an execution service for Java EE enterprise applications (EAR files).
Connection properties:
v JNDI name
v Resource environment references
v Message destination references
The application is assumed to use JNDI settings to locate the queue. Specify the
JNDI name in the link property panel, either as a hard-coded JNDI name or by
selecting the relevant application resource-references from the property panel list
box. During deployment, the JNDI name is set to the corresponding queue, and
mapped if required to the relevant resource reference in the application.
The required attributes for Link to WebSphere MQ queue are as follows:
v JNDI name: The JNDI name that the application uses to locate the queue
destination. This is only required if the application accesses the queue directly
without using a resource environment reference or message destination
reference.
v Resource environment references: The resource environment references that the
application uses to locate the queue.
v Message destination references: The message destination references that the
application uses to locate the queue.
To make a connection between a component and the messaging queue, hover over
the blue circle on the queue component part on the canvas. When the blue circle
turns yellow, draw a connection between the queue and component.
About this task
You can view, edit, or add this virtual application component in the user interface
as follows:
Procedure
1. Click PATTERNS > Pattern Design > Virtual Application Patterns.
2. Select a virtual_application_pattern.
3. Click Edit the virtual application icon located in the upper right corner of the
Pattern Builder palette.
4. To edit an existing queue, select the Existing Queue component part on the
Pattern Builder canvas. The properties panel displays.
For more details on the properties panel settings, view the help by
selecting the help icon on the properties panel.
5. To add a new queue component to a virtual application pattern, click the
Existing Queue (WebSphere MQ) component listed under the Messaging
Components and drag the icon to the Pattern Builder canvas. The properties
panel for the component displays to the right of the Pattern Builder palette. For
more details on the properties panel settings, view the help by selecting the
help icon on the properties panel.
6. You can also view the queue component properties by viewing the plug-in
information. Click PATTERNS > Deployer Configuration > System Plug-ins.
Select wasmqq/x.x.x.x from the System Plug-ins palette where x.x.x.x
corresponds to the version numbers. The component plug-in configuration
information displays on the canvas.
Results
You have viewed an existing component, edited it, or added a new one.
OSGi components:
The OSGi components available as parts for your virtual application pattern are
the OSGi application and the external OSGi bundle repository.
About this task
v Existing OSGi bundle repository (WebSphere Application Server)
v OSGi application (WebSphere Application Server) on page 559
Existing OSGi bundle repository (WebSphere Application Server):
This component provides the URL of an existing WebSphere Application Server
OSGi bundle repository.
Before you begin
The following are the attributes for the external OSGi bundle repository:
v Bundle repository URL: Specifies the URL of the existing OSGi bundle
repository. This attribute is required.
Connections
Table 62. Incoming connectable components
Component name
Description
About this task

You can view, edit, or add this virtual application component in the user interface
as follows:

Procedure
1. Click PATTERNS > Pattern Design > Virtual Application Patterns.
2. Select a virtual_application_pattern.
3. Click Edit the virtual application icon located in the upper right corner of the
Pattern Builder palette.
4. To edit an existing external OSGi bundle repository component, select the
External OSGi Bundle Repository component part on the Pattern Builder
canvas. The properties panel displays.
For more details on the properties panel settings, view the help by selecting the
help icon on the properties panel.
5. To add an external OSGi bundle repository component to a virtual application
pattern, click the External OSGi Bundle Repository component listed under
the OSGi Components and drag the icon to the Pattern Builder canvas. The
properties panel for the component displays to the right of the Pattern Builder
palette. For more details on the properties panel settings, view the help by
selecting the help icon on the properties panel.
6. You can also view the external OSGi bundle repository component properties
by viewing the plug-in information. Click PATTERNS > Deployer
Configuration > System Plug-ins. Select osgirepo/x.x.x.x from the System
Plug-ins palette where x.x.x.x corresponds to the version numbers. The
component plug-in configuration information displays on the canvas.
Results
You have viewed an existing component, edited it, or added a new one.
OSGi application (WebSphere Application Server):
This component represents the OSGi application on WebSphere Application Server.
Before you begin
The following are attributes for the OSGi application component:
v EBA file: Specifies the OSGi application to be uploaded. The OSGi application is
an enterprise bundle archive (EBA) file (.eba). This attribute is required.
Connections
Table 63. Incoming connectable components
Component name
Description
Description
Connection

v An existing messaging service represents a connection to an external messaging system such as WebSphere MQ.
v Generic target
v Database (DB2)
v An existing Informix database component represents a connection to a remote Informix database instance running remotely outside of the cloud. The configuration properties allow a connection to be made to the remote Informix database.
v An existing CICS TG component represents an existing connection to a CICS TG instance running remotely outside of the cloud. The configuration properties allow a connection to be made to the CICS TG. (Connection: JNDI name of the CICS TG resource)
v User filter, Group filter, Role name, User role mapping, Group role mapping, Special subject mapping
v A message queue on an external WebSphere MQ messaging service through which messages are sent and received. (Connection: JNDI name, Resource environment references, Message destination references)
Attention: You can upload an .eba file to replace an OSGi application in the
Virtual Application Console, but you cannot rename the archive as a part of the
update.
To make a connection between a component and the OSGi application, hover over
the blue circle on the OSGi application component part on the canvas. When the
blue circle turns yellow, draw a connection between the OSGi application
repository and component.
About this task
You can view, edit, or add this virtual application component in the user interface
as follows:
Procedure
1. Click PATTERNS > Pattern Design > Virtual Application Patterns.
2. Select a virtual_application_pattern.
3. Click Edit the virtual application icon located in the upper right corner of the
Pattern Builder palette.
4. To edit an existing OSGi application component, select the OSGi application
component part on the Pattern Builder canvas. The properties panel displays.
For more details on the properties panel settings, view the help by selecting the
help icon on the properties panel.
5. To add a new OSGi application component to a virtual application pattern,
click the OSGi Application component listed under the OSGi Components
and drag the icon to the Pattern Builder canvas. The properties panel for the
component displays to the right of the Pattern Builder palette. For more details
on the properties panel settings, view the help by selecting the help icon on the
properties panel.
6. You can also view the OSGi application component properties by viewing the
plug-in information. Click PATTERNS > Deployer Configuration > System
Plug-ins. Select was/x.x.x.x from the System Plug-ins palette where x.x.x.x
corresponds to the version numbers. The component plug-in configuration
information displays on the canvas.
Results
You have viewed an existing component, edited it, or added a new one.
Existing CICS Transaction Gateway (CICS TG):

Connections

Component name: Enterprise application (WebSphere Application Server)
Description: An enterprise application (WebSphere Application Server) cloud component represents an execution service for Java EE enterprise applications (EAR files).
Connection properties:
v JNDI Name of the JCA Connection Factory
v Maximum number of connections to the CICS Transaction Gateway

You can also view the CICS TG component properties by viewing the plug-in
information. Click PATTERNS > Deployer Configuration > System
Plug-ins. Select wasctg/x.x.x.x from the System Plug-ins palette where x.x.x.x
corresponds to the version numbers. The component plug-in configuration
information displays on the canvas.
Results
You have viewed an existing component, edited it, or added a new one.
Installing the CICS resource adapter:
Before you can use the CICS Transaction Gateway (CICS TG) in the IBM Cloud
Orchestrator, you must install a CICS TG resource adapter.
Before you begin
You can use the ECI adapter, cicseci.rar, or the ECI adapter with two-phase
commit support, cicseciXA.rar. IBM Cloud Orchestrator does not provide EPI
support. The resource adapters are specific to your release of CICS TG and the one
you use depends on the platform that you are using and whether you require
two-phase or single-phase commit. For further details about CICS TG resource
adapters, see Using the ECI resource adapters.
About this task
To install a CICS TG resource adapter, log on to IBM Cloud Orchestrator as an
administrator. Upload the CICS TG resource adapter for your CICS TG installation.
You can then use and configure a CICS TG component.
Procedure
1. Click PATTERNS > Deployer Configuration > System Plug-ins. A
configuration dialog box displays.
2. Browse for the resource adapter.
3. Click OK.
Results
You have uploaded a new resource adapter.
What to do next
Add the CICS TG component to a virtual application pattern.
Existing IMS TM:
An existing Information Management Systems Transaction Manager (IMS TM)
component provides an enterprise or web application that is running on
WebSphere Application Server to connect to and submit transactions to an existing
IMS system running remotely outside of the cloud.
Before you begin
The configuration properties allow a connection to be made to the IMS TM system.
The following are the required properties:
v Resource Adapter: Specifies the file path name of the IMS TM resource adapter
(.rar file).
v Server host name or IP Address: Specifies the host name or IP address of IMS
Connect. IMS Connect is the TCP/IP listener component of IMS.
v Port number: Specifies the TCP/IP port used by the target IMS Connect.
v Datastore name: Specifies the name of the target IMS system. This name must
match the ID parameter of the datastore statement that is specified in the IMS
Connect configuration member.
The following are optional properties:
v User name: Specifies the security authorization facility (SAF) user name that is
used for connections created by the connection factory.
v Password: Specifies the password associated with the user name property.
v CM0Dedicated: If checked (indicates True), dedicated persistent socket
connections are generated. If cleared (indicates False), shareable persistent socket
connections are generated. The default is False.
v SSL Enabled: Check to use SSL connection to IMS TM. If using SSL then the
following parameters are required:
SSL Encryption Type: Specifies the encryption type: Strong or weak. This is
related to the strength of the ciphers, that is, the key length. By default, the
encryption type is set to weak.
SSL Keystore Name: Specifies the full file path name of the keystore. Private
keys and their associated public keys certificates are stored in
password-protected databases called keystores. An example of a keystore
name is c:\keystore\MyKeystore.ks.
SSL Keystore Password: Specifies the password for the keystore.
SSL TrustStore Name: Specifies the full file path name of the truststore. A
truststore file is a key database file that contains public keys or certificates.
SSL TrustStore Password: Specifies the password for the truststore.
Trace Level: Specifies the level of information that is traced. Here are the
possible values:
- 0: No tracing or logging occurs
- 1: Only errors and exceptions are logged (default)
- 2: Errors and exceptions plus the entry and exit of important methods are
logged
- 3: Errors and exceptions, the entry and exit of important methods, and the
contents of buffers sent to and received from IMS Connect are logged.
Connections
Table 66. Incoming connectable components
Component name: Enterprise application (WebSphere Application Server)
Description: An enterprise application (WebSphere Application Server) cloud component represents an execution service for Java EE enterprise applications (EAR files).
Connection properties:
v JNDI Name of the JCA Connection Factory, or Resource references mapping
v Maximum number of connections to IMS TM
v Connection timeout

You can also view the IMS TM component properties by viewing the plug-in
information. Click PATTERNS > Deployer Configuration > System
Plug-ins. Select imstmra/x.x.x.x from the System Plug-ins palette where x.x.x.x
corresponds to the version numbers. The component plug-in configuration
information displays on the canvas.
Results
You have viewed an existing component, edited it, or added a new one.
Other components:
There are several other components to choose from when building a virtual
application pattern.
About this task
v Generic target
v Debug on page 609
Generic target:
A generic target component is used to open the firewall for outbound TCP
connections from a web or enterprise application to a specified host and port.
Before you begin
The following are the attributes for a generic target component:
v Server (IP or IP/netmask): Specifies the target server. This attribute is required.
v Port: Specifies the destination port on the target server. This attribute is required.
Connections
Table 67. Incoming connectable components
Component name
Description
Procedure
1. Click PATTERNS > Pattern Design > Virtual Application Patterns.
2. Select a virtual_application_pattern.
3. Click Edit the virtual application icon located in the upper right corner of the
Pattern Builder palette.
4. To edit an existing generic target component, select the Generic target
component part on the Pattern Builder canvas. The properties panel displays.
For more details on the properties panel settings, view the help by selecting the
help icon on the properties panel.
5. To add a new generic target component to a virtual application pattern, click
the Generic target component listed under Other Components and drag the
icon to the Pattern Builder canvas. The properties panel for the component
displays to the right of the Pattern Builder palette. For more details on the
properties panel settings, view the help by selecting the help icon on the
properties panel.
6. You can also view the generic target component properties by viewing the
plug-in information. Click PATTERNS > Deployer Configuration > System
Plug-ins. Select connect/x.x.x.x from the System Plug-ins palette where x.x.x.x
corresponds to the version numbers. The component plug-in configuration
information displays on the canvas.
Results
You have viewed an existing component, edited it, or added a new one.
Policies:
There are several policies to choose from when building a virtual application
pattern.
About this task
v Scaling policy
v Routing policy on page 571
v Log policy on page 572
v JVM policy on page 573
Scaling policy:
Scaling is a Pattern Builder runtime capability to automatically scale your
application platform as the load changes. A scaling policy component defines this
capability and the conditions under which scaling activities are performed for your
application.
Before you begin
The following are the attributes for a scaling policy:
v Enable session caching: Specifies whether to use the session caching function in
your application.
v Scaling Type: Specifies the scaling type used. You can select Static, CPU Based,
Response Time Based, or Web to DB. Depending on the selection, zero or more of
the other attributes are valid.
v Number of instances: Specifies the number of cluster members that are hosting
the web application. The default value is 2. Acceptable value range is 2 through
10. This attribute is required.
v Instance number range of scaling in and out: Specifies the scaling range for
instance members that are hosting the web application. Acceptable value range
is 1 through 50. This attribute is required.
v Minimum time (in seconds) to trigger add or remove: Specifies the time
duration condition to start scaling activity. The default value is 120 seconds.
Acceptable value range is 30 through 1800. This attribute is required.
v Scaling in and out when CPU usage is out of threshold range (in percentage):
Specifies the processor threshold condition to start scaling activity. When the
average processor utilization of your application platform is out of this threshold
range, your platform is scaled in or out. The default value is 20 - 80%.
Acceptable values range from 0 to 100%.
v Scaling in and out when web response time is out of threshold range (in
milliseconds): Specifies the web application response time condition to start
scaling activity. When the web application response time is out of this threshold
range, your platform is scaled in or out. The acceptable values range from
0 to 1000 ms.
v JDBC connections wait time is out of the threshold range (in milliseconds):
Specifies the JDBC connection wait state to start scaling activity. When the JDBC
connections wait time is out of this threshold range, your platform is scaled in
or out. The acceptable values range from 0 to 10000 ms.
v JDBC connection pools usage is out of the threshold range (in percentage):
Specifies JDBC connection pool usage to start scaling activity. When the JDBC
connection usage is out of this threshold range, your platform is scaled in or out.
The acceptable values range from 0 to 100%.
Note: Due to OpenStack limitations, vertical scaling (increasing memory and CPU
on running systems without stopping the service) is not supported by IBM Cloud
Orchestrator. Only horizontal scaling (increasing the number of virtual machines to
balance the workload) is supported when required by the scaling policy.
Connections
Table 68. Outgoing connectable components
Component name
Description
Procedure
1. Click PATTERNS > Pattern Design > Virtual Application Patterns.
2. Select a virtual_application_pattern.
3. Click Edit the virtual application icon located in the upper right corner of the
Pattern Builder palette.
4. To edit an existing scaling policy, select the Scaling Policy part on the Pattern
Builder canvas. The properties panel displays.
For more details on the properties panel settings, view the help by selecting the
help icon on the properties panel.
5. To add a new scaling policy to a virtual application pattern, you can:
v Click the Add a policy icon located in the application component part on the
canvas. Select Scaling Policy from the list of policies. The scaling policy
displays as a part in your application component part.
or
v Click the Add policy for application icon on the upper left side of the
canvas. Select the Scaling Policy from the list of policies. The scaling policy
displays on the Pattern Builder canvas. The policy is applied to all applicable
components in the canvas.
Results
You have viewed an existing policy, edited it, or added a new one.
Routing policy:
You can apply a routing policy to the application component parts of your virtual
application pattern.
Before you begin
The following are the attributes for a routing policy:
v Virtual hostname: Specifies the name of the virtual host for the routing policy.
This attribute is required.
v HTTP: Specifies support for the HTTP scheme with a routing policy.
v HTTPS: Specifies support for the HTTPS scheme with a routing policy.
Connections
Table 69. Outgoing connectable components
Component name
Description
JVM policy:

The following are the attributes for a JVM policy:
v Debug port: Specifies the port that the JVM listens on for remote connections.
v Client (IP or IP/netmask): The IP address of the host that is being used to
debug.
v Client: Specifies an optional address of the debug client. This setting is used to
restrict source access to the debug port. Value is an IP address, for example,
1.2.3.4; or IP/netmask, for example, 1.2.0.0/255.255.0.0, which matches anything
in the 1.2. network.
v Enable verbose garbage collection: Specifies if the JVM has garbage collection
enabled.
v Generic JVM arguments:
v Bit level: Specifies if the bit level is set to 32 bit or 64 bit.
Connections
Table 71. Outgoing connectable components
Component name
Description
v Click the Add policy for application icon on the upper left side of the
canvas. Select the JVM Policy from the list of policies. The JVM policy
displays on the Pattern Builder canvas. You can connect the policy to an
application component part by hovering over the blue circle on the
application component part. When the blue circle turns yellow, draw a
connection between the application component and the policy.
Results
You have viewed an existing policy, edited it, or added a new one.
What to do next
For more detailed information about using Rational Application Developer for
WebSphere, see Debugging applications in the WebSphere Application Server
Information Center.
You can optionally use the IBM Monitoring and Diagnostic Tools for Java - Health
Center (Health Center) to assess the current status of a running Java application.
Health Center continuous monitoring provides information that helps you to
identify and resolve problems with applications.
In IBM Cloud Orchestrator, you can configure the IBM Monitoring and Diagnostic
Tools for Java - Health Center using the following attributes in the JVM policy:
v Enable Health Center: Specifies to start the JVM with Health Center enabled.
Health Center is not enabled by default.
v Health Center port: Specifies the port for which the Health Center agent listens
for remote connections.
v Health Center Client: Specifies the IP address of the Health Center client. This is
an optional setting.
Technical information regarding the IBM Monitoring and Diagnostic Tools for Java
- Health Center is available at the following URL:
https://www.ibm.com/developerworks/java/jdk/tools/healthcenter/
Developing plug-ins
Plug-ins define the components, links, and policies that you use in the Pattern
Builder to create virtual application patterns, or extend existing virtual application
patterns. This guide describes how to develop your own custom plug-ins. Custom
plug-ins add behavior and function that users can exploit to enhance and
customize the operation of their virtual applications.
Procedure
1. Define and package plug-in artifacts.
a. Define the config.json file.
The config.json file is the only required file in a plug-in. The name,
version, and patterntypes elements are all required. The name element
specifies the name of the plug-in, and the version element defines the
version number of the plug-in. The patterntypes element specifies the
pattern types with which the plug-in is associated. The following example is
a WebSphere Application Server Community Edition plug-in that extends
the IBM Web Application Pattern type (not released with the product):
{
   "name"     : "wasce",
   "version"  : "1.0.0.1",
   "patterntypes" : {
      "secondary" : [ { "*" : "*" } ]
   },
   "packages" : {
      "WASCE" : [ {
         "requires" : {
            "arch"   : "x86_64",
            "memory" : 512,
            "disk"   : 300
         },
         "parts" : [ {
            "part"  : "parts/wasce.tgz",
            "parms" : {
               "installDir" : "/opt/wasce"
            }
         } ]
      } ],
      "WASCE_SCRIPTS" : [ {
         "parts" : [ {
            "part" : "parts/wasce.scripts.tgz"
         } ]
      } ]
   }
}
cpu
Specifies the minimum processor requirement for each package defined
by your plug-in. The requires element specifies the
required attributes of the package, all parts and node-parts in it. For
cpu it represents the total required resources of each type for all parts
and node-parts in the package.
memory
Specifies the minimum memory requirement for each package defined
by your plug-in. The requires element specifies the required attributes
of the package, all parts and node-parts in it. For memory it represents
the total required resources of each type for all parts and node-parts in
the package.
disk
Specifies the minimum disk requirement for each package defined by
your plug-in. The requires element specifies the required attributes of
the package, all parts and node-parts in it. For disk it represents the
total required resources of each type for all parts and node-parts in the
package.
Note: During the provisioning process, IBM Cloud Orchestrator adds up
the minimum CPU, memory, and disk values for each package, and
provisions a virtual machine that meets the specified requirements.
v packages element:
Defines the file packages with both the part and nodepart elements. The
example plug-in provides two packages: WASCE and WASCE_SCRIPTS.
The WASCE package contains the parts/wasce.tgz part file. This archive
contains the WASCE image, that is, all the files that compose WASCE. The
binaries required to install WebSphere Application Server Community Edition
are packaged directly in the plug-in.
There are other options for specifying the required binaries. You can
define a file attribute and have administrators upload the required
binaries after loading the plug-in in IBM Cloud Orchestrator. You can also
link to a remote server that stores the required artifacts. The
WASCE_SCRIPTS package provides the life cycle scripts to install the
WASCE image to the desired location, to install the enterprise archive
(EAR) or web archive (WAR) file, and to start the server.
2. Define configurable application model components.
The web and enterprise application archive components are displayed in the
Pattern Builder. Each component is specified in the metadata.json file that is
located in the plugin/appmodel directory of the plug-in archive and plugin
development project. The following example illustrates the JSON to define the
web archive component:
[{
   "id"          : "WARCE",
   "label"       : "Web Application (WebSphere Application Server Community Edition)",
   "description" : "A web application cloud component represents an execution service for Java EE Web applications (WAR files).",
   "type"        : "component",
   "thumbnail"   : "appmodel/images/WASCE.png",
   "image"       : "appmodel/images/WASCE.png",
   "category"    : "application",
   "attributes"  : [
      {
         "id"          : "archive",
         "label"       : "WAR File",
         "description" : "Specifies the web application (*.war) to be uploaded.",
         "type"        : "file",
         "required"    : true,
         "extensions"  : [ "war" ]
      }
   ]
}]
There is a similar stanza for the enterprise archive component for its
downloadable archive.
The first type field of the listing is important. The value options for this field
are component, link or policy, and this defines the type in the application
model. The id of the component is WARCE. This can be any value as long as it is
unique.
The category refers to the tab under which this component is shown on the
palette in the Pattern Builder. The attributes array defines properties for the
component that you are defining. You can see and are able to specify values for
these properties when using this component in the Pattern Builder. Attribute
types include file, string (shown here), number, boolean, array, and range.
3. Define a template to convert the visual model into a physical model.
Plug-ins must provide the knowledge and logic for how to implement, or
realize, the deployment of the defined components. In the case of the next
example, the meaning of how to deploy an enterprise or web application
component must be specified. To do this, a single transform is provided that
translates the application model derived from what users build in the Pattern
Builder into a concrete topology.
The following example displays a Velocity template that represents a
transformation of the component into a JSON object that represents a fragment
of the overall topology document. Each component and link must have a
transform. In our plug-in, the WARCE and EARCE components share the same
transform template.
{
   "vm-templates" : [
      {
         "name"     : "${prefix}-wasce",
         "packages" : [ "WASCE", "WASCE_SCRIPTS" ],
         "roles"    : [
            {
               "plugin"       : "$provider.PluginScope",
               "name"         : "WASCE",
               "type"         : "WASCE",
               "quorum"       : 1,
               "external-uri" : [ { "ENDPOINT" : "http://{SERVER}:8080" } ],
               "parms" : {
                  "ARCHIVE" : "$provider.generateArtifactPath( $applicationUrl, ${attributes.archive} )"
               },
               "requires" : { "memory" : 512, "disk" : 300 }
            }
         ],
         "scaling" : { "min" : 1, "max" : 1 }
      }
   ]
}
v packages: Specifies a list of parts and nodeparts that are installed on each
deployed virtual machine. The WASCE entry indicates the use of the WASCE
virtual image. The WASCE_SCRIPTS entry specifies the WASCE life cycle
scripts.
v roles: Specifies parts in a plug-in that invoke lifecycle scripts for roles. You
can have one or more roles in your plug-in, but in the sample plug-in there
is a single WASCE role. When all roles on a node go to the RUNNING state, the
node changes to the green RUNNING state.
4. Define lifecycle scripts to install, configure, and start software.
In this step, you define the lifecycle scripts for the plug-in. This process
includes writing scripts to install, configure, and start the plug-in components.
You can view the complete scripts in the downloadable archives. The following
information includes the key artifacts:
v install.py script
The install.py script copies the WASCE image from the download location
to the desired installDir folder. It also sets the installDir value in the
environment for subsequent scripts. All parts and nodeparts installed by the
IBM Cloud Orchestrator agent run as root. The chown -R virtuser:virtuser
command changes file ownership of the installed contents to the desired user
and group. Finally, the install.py script makes the scripts in the WebSphere
Application Server Community Edition bin directory executable. The
following sample code is the contents of the install.py script:
installDir = maestro.parms['installDir']
maestro.trace_call(logger, ['mkdir', installDir])
if not 'WASCE' in maestro.node['parts']:
    maestro.node['parts']['WASCE'] = {}
maestro.node['parts']['WASCE']['installDir'] = installDir
# copy files to installDir to install WASCE
this_file = inspect.currentframe().f_code.co_filename
this_dir = os.path.dirname(this_file)
rc = maestro.trace_call(logger, 'cp -r %s/files/* %s' % (this_dir, installDir), shell=True)
maestro.check_status(rc, 'wasce cp install error')
rc = maestro.trace_call(logger, ['chown', '-R', 'virtuser:virtuser', installDir])
maestro.check_status(rc, 'wasce chown install error')
# make shell scripts executable
rc = maestro.trace_call(logger, 'chmod +x %s/bin/*.sh' % installDir, shell=True)
maestro.check_status(rc, 'wasce chmod install error')
This example shows how the script makes use of the maestro module
provided within the plug-in framework. The module provides several helper
methods that are useful during installation and elsewhere.
v wasce.scripts part and install.py script
The wasce.scripts part also contains an install.py script. This script installs
the WebSphere Application Server Community Edition life cycle scripts. The
following is an example of the install.py script in wasce.scripts:
# Prepare (chmod +x, dos2unix) and copy scripts to the agent scriptdir
maestro.install_scripts('scripts')
v configure.py script
The configure.py script in the wasce.scripts part installs the user-provided
application to WebSphere Application Server Community Edition. The script
takes advantage of the hot deploy capability of WebSphere Application
Server Community Edition and copies the application binaries to a
monitored directory. The following example includes the contents of the
configure.py script:
installDir = maestro.node['parts']['WASCE']['installDir']
ARCHIVE = maestro.parms['ARCHIVE']
archiveBaseName = ARCHIVE.rsplit('/')[-1]
# Use hot deploy
deployDir = os.path.join(installDir, 'deploy')
if os.path.exists(deployDir) == False:
    # Make directories
    os.makedirs(deployDir)
deployFile = os.path.join(deployDir, archiveBaseName)
# Download WASCE archive file
maestro.download(ARCHIVE, deployFile)
v start.py
The start.py script in the wasce.scripts part is responsible for starting the
WebSphere Application Server Community Edition process. After starting the
process, the script updates the state of the role to RUNNING. When the
deployment is in the RUNNING state, you can access the deployed application
environment. The following example shows the use of the geronimo.sh start
command to start WebSphere Application Server Community Edition, as well
as the gsh.sh command to wait on startup:
wait_file = os.path.join(maestro.node['scriptdir'], 'WASCE', 'wait-for-server.txt')
installDir = maestro.node['parts']['WASCE']['installDir']
rc = maestro.trace_call(logger, ['su', '-l', 'virtuser', installDir + '/bin/geronimo.sh',
    'start'])
maestro.check_status(rc, 'WASCE start error')
logger.info('wait for WASCE server to start')
rc = maestro.trace_call(logger, ['su', '-l', 'virtuser', installDir + '/bin/gsh.sh',
    'source', wait_file])
maestro.check_status(rc, 'wait for WASCE server to start error')
maestro.role_status = 'RUNNING'
logger.info('set WASCE role status to RUNNING')
logger.debug('Setup and start iptables')
maestro.firewall.open_tcpin(dport=1099)
maestro.firewall.open_tcpin(dport=8080)
maestro.firewall.open_tcpin(dport=8443)
There are other scripts and artifacts that make up the plug-in, but the above
provides an explanation of the most significant scripts.
What to do next
Add your custom plug-in to IBM Cloud Orchestrator where the plug-in can be
used to create or extend a virtual application.
Plug-in Development Kit:
The Plug-in Development Kit (PDK) is designed to help you build plug-ins for
IBM Cloud Orchestrator. The custom plug-ins can be added to the IBM Cloud
Orchestrator catalog where they are used to add components, links, and policies to
virtual applications.
Attention:
The PDK is a zip package that includes a plug-in and pattern type build
environment, samples, and a tool to create a plug-in starter project
v docs
docs/javadoc
This directory contains Javadoc for IBM Cloud Orchestrator interfaces that
the plug-ins can invoke from the Java code.
docs/pydoc
This directory contains documentation for the maestro module used in
lifecycle Python scripts for nodeparts and parts.
v iwd-pdk-workspace
The root directory of your plugin development workspace.
Each plug-in and pattern types has its own project directory in this root
directory. These directories can be used directly from the command line or
imported into Eclipse as plug-ins.
v pdk-debug-{version}.tgz
This file is the debug plug-in that can be installed into the IBM Cloud
Orchestrator instance and used to develop and debug the plug-ins.
The debug includes features to deploy and debug a topology document,
which is a JSON object, and debug plug-in installation and lifecycle scripts
on deployed nodes. For more information, see the topic, Debug.
v pdk-unlock-{version}.tgz
The unlock plug-in enables you to delete a plug-in in use by a deployed
application, replace it with an updated version, and activate the modified
plug-in on deployed virtual machines in the application. For more
information, see the topic, Unlock.
Results
The PDK is downloaded and installed. Now you must complete the task of
Setting up the plug-in development environment.
Setting up the plug-in development environment:
Set up the environment to develop custom plug-ins that are used in IBM Cloud
Orchestrator.
Before you begin
The following products are required before setting up the environment:
v Eclipse V3.6.2, 32-bit. The Java Platform, Enterprise Edition (Java EE) version is
recommended.
Eclipse is not required, but if you use it, use this version.
If you use Eclipse, you can use the Ant that comes with it. Do not install Ant
separately. Ant is located in the Eclipse installation directory at
eclipse/plugins/org.apache.ant_1.*.
2. In the plugin.depends project, run the build.xml Ant script. To run the Ant
script, right-click on the file and select Run As > Ant Build. OR, type ant in
the command line. This command builds all the plug-ins in the workspace.
3. Access the patterntypetest.basic project and run the build.patterntype.xml
script. Type ant -f build.patterntype.xml. This command builds the pattern
type.
4. Refresh the patterntypetest.basic project. A folder named, export, displays.
5. Navigate to the root of the export folder. The .tgz pattern type binary file is
located here. The export/archive directory contains the built pattern type that
is ready for installation into IBM Cloud Orchestrator.
6. Import the pattern type .tgz and use the plug-in from the IBM Cloud
Orchestrator catalog.
Plug-in development guide:
If you are developing custom plug-ins, this topic provides more details about
various aspects of plug-ins in the order encountered during a typical development
effort.
The following list is the high-level sections for this plug-in development reference
guide:
v Kernel services
Transformers: TopologyProvider services
Enhance template transforms with Java code
Plug-in components available as OSGi Declarative Services
v Deployment on page 590
Activation
Nodeparts
Parts
Roles
Set repeatable task
Recovery: Reboot or replace?
are added to the source roles. Each link transformer receives the topology
fragments as input that is generated by the link source and target components.
There are two types of transformers:
v Template-based implementations
Most transforms can be described using a template of the intended JSON
document (topology fragment for components; depends objects for links). IBM
Cloud Orchestrator embeds Apache Velocity 1.6.2 as a template engine.
Template-based implementations include:
Component document
The component name must match the "id" of the component, link, and policy
that is defined in the plug-in appmodel/metadata.json file. Template files are
specified as component properties, where the value is a path relative to the
plug-in root. For example, the transformer for the sample starget component
and link looks like the following:
<?xml version="1.0" encoding="UTF-8"?>
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.1.0" name="starget">
<implementation class="com.ibm.maestro.model.transform.template.TemplateTransformer"/>
<service>
<provide interface="com.ibm.maestro.model.transform.TopologyProvider"/>
</service>
<property name="component.template" type="String"
value="templates/starget_component.vm"/>
<property name="link.template" type="String"
value="templates/starget_link.vm"/>
</scr:component>
Implementation
The sample starget_component.vm illustrates component transformation as
follows:
{
   "vm-templates": [
      {
         "scaling": {
            "min": 1,
            "max": 1
         },
         "name": "${prefix}-starget",
         "roles": [
            {
               "parms": {
                  "st1": "$attributes.st1"
               },
               "type": "starget",
               "name": "starget"
            }
         ]
      }
   ]
}
target.template == vm-template
"vm-templates": [
{
"scaling":{
"min": 1,
"max": 1
},
"name": "${prefix}-ssource",
"roles": [
{
"parms": {
## Handling optional attributes:
## macro syntax: #macro( if_value $map $key $format_str )
## String value:
#if_value( $attributes, "ss_s", "ss_s": "$value", )
## Number value:
#if_value( $attributes, "ss_n", "ss_n": $value, )
## Boolean value:
#if_value( $attributes, "ss_b", "ss_b": $value, )
## Missing value -- will not render:
#if_value( $attributes, "not_defined", "not_defined": "$value", )
## For artifacts, Inlet may send app model with absolute URLs for artifacts; other request
## paths might invoke with relative URLs. So use provider.generateArtifactPath(), which
## invokes URI.resolve() that handles both cases.
## Handling required attributes; throws an exception if the attribute is
## null/empty/not defined
"ss_f": "$provider.generateArtifactPath( $applicationUrl, ${attributes.ss_s} )",
## Handling range value (ss3)
"ss_r_min":"$attributes.ss_r.get(0)",
"ss_r_max":"$attributes.ss_r.get(1)",
## Handling policies: spolicy is defined; not_policy is not
#set( $spattrs = $provider.getPolicyAttributes($component, "spolicy") )
#if_value( $spattrs, "sp1", "sp1": "$value", )
#if_value( $spattrs, "not_defined", "not_defined": "$value", )
#set( $npattrs = $provider.getPolicyAttributes($component, "no_policy") )
#if_value( $npattrs, "np1", "np1": "$value", )
## Handling required config parms; throws an exception if the parm is
## null/empty/not defined
"cp1": "$config.cp1"
},
"type": "ssource",
"name": "ssource"
}
]
}
]
}
Implementation
Implementations extend com.ibm.maestro.model.transform.TopologyProvider
and can implement component and link transformations by overriding the
corresponding methods:
public JSONObject transformComponent(
String vmTemplateNamePrefix,
String applicationUrl,
JSONObject applicationComponent,
Transformer transformer)
throws Exception {
return new JSONObject();
}
public void transformLink(
JSONObject sourceFragment,
JSONObject targetFragment,
String applicationUrl,
JSONObject applicationLink,
Transformer transformer)
throws Exception {
}
- Invoking templates
Java file:
package com.ibm.maestro.model.transform.wasdb2;

import com.ibm.maestro.common.http.HttpException;
import com.ibm.maestro.model.transform.template.RequiredMap;
import com.ibm.maestro.model.transform.template.TemplateTransformer;

public class WASDB2LinkTransform extends TemplateTransformer {

    public static JndiNameResourceRefs getJndiNameAndResourceRefs(RequiredMap attributes)
            throws HttpException {
        return JndiNameResourceRefs.getJndiNameAndResourceRefs(attributes);
    }
}
Nodeparts are packaged as .tgz files. The contents are organized into a
directory structure by convention. The following files and directories are
optional:
common/python/maestro/{name}.py
common/scripts/{name}
common/start/{N}_{name}
common/stop/{N}_{name}
{name}/{any files}
setup/setup.py
import json
import os
import subprocess
import sys
import maestro

parms = maestro.parms
subprocess.call('chmod +x *.sh', shell=True)
rc = subprocess.call('./setup_agent.sh "%s" %d %s %s' % (parms['agent-dir'],
    parms['http-port'], parms['iaas-ip'], parms['iaas-port']), shell=True)
maestro.check_status(rc, 'setup_agent.sh: rc == %s' % rc)
v config.json file
In general, config.json may define any number of named packages. The
previous example shows one package named default. Each package is an array
containing any number of objects, where each object is a candidate combination
of node-parts and/or parts. The candidates are specified by mapped values for
requires, node-parts and parts. Package contents are additive within a pattern
type. For example, if two plug-ins are part of the same pattern type and both
define package FOO in config.json, then resolving package FOO considers
the union of candidates from both config.json files.
default is a special package name. The resolve phase always includes the
default package; other named packages are resolved only when explicitly
named.
Requires
script must set the role status to RUNNING. Role status is set only by the
{role}/start.py and changed.py scripts (role or dependency). Role status
is set as follows:
import maestro
maestro.role_status = 'RUNNING'
There are several custom features of the workload agent, including the
following:
- The workload agent is extensible.
- Other nodeparts can install features into the OSGi-based application.
Complete the following steps to install nodepart features into the agent:
1. Provide a .tgz file containing the following files:
- lib/{name}.jar - bundle Java archive (JAR) files
- lib/features/featureset_{name}.blst - list of bundles for the feature
set
- usr/configuration/{name}.cfg - OSGi configuration code
2. Provide a start script before slot 9 that installs the .tgz file contents into
the agent.
Other nodeparts do not need to know the installation location of the agent
application. Rather, the agent provides the shared script,
agent_install_ext.sh, to install custom features. Shared scripts are always on
the PATH, so a typical start script for a nodepart to install a custom feature is
as follows:
#!/bin/sh
agent_install_ext.sh ../../autoscaling/autoscaling.tgz
The open tcpin directive is tailored for TCP connections and opens
corresponding rules in the INPUT and OUTPUT tables to allow request and
response connections. The open in directive opens the INPUT table only. For
src and dest, private is a valid value. This value indicates that <src> and
<dest> are limited to the IP range defined for the cloud. The value private is
defined in the config.json file for the firewall plug-in as follows:
{
"name":"firewall",
"packages":{
"default":[
{
"requires":{
"arch":"x86_64",
"os":{
"RHEL":"*"
}
},
"node-parts":[
{
"node-part":"nodeparts/firewall.tgz",
"parms":{
"private":"PRIVATE_MASK"
}
}
]
}
]
}
}
Parts
Parts are installed by the workload agent and generally contain binary and life
cycle scripts associated with roles and dependencies. Review the following
information about parts:
v Conventions
All parts must have an install.py script at the root. Additional files are
allowed.
v Common scripts
By default, the maestro package contains the following functions (a usage sketch
follows the Data objects list below):
maestro.download(url, f): Downloads the resource from the url and saves the
resource locally as file f.
maestro.downloadx(url, d): Downloads and extracts a .zip, .tgz, or .tar.gz
file into directory d. The .tgz and .tar.gz files are streamed through
extraction; the .zip file is downloaded and then extracted.
maestro.decode(s): Decodes strings encoded with the maestro encoding utility,
such as from a transformer using
com.ibm.ws.security.utils.XOREncoder.encode(String).
maestro.install_scripts(d1): Utility function for copying life cycle scripts into
{scriptdir} and making the shell scripts executable (dos2unix and chmod
+x).
maestro.check_status(rc, message): Utility function for logging and exiting a
script for non-zero rc.
v Data objects
The agent appends data objects or dictionaries to the maestro package when
starting part installation scripts as follows:
maestro.parturl : fully-qualified URL from which the part .tgz file was obtained (string; RO)
maestro.filesurl : fully-qualified URL prefix for the shared files in storehouse (string; RO)
maestro.parms : associated parameters specified in the topology document (JSON object; RO)
maestro.node['java'] : absolute path to Java executable (string; RO)
maestro.node['deployment.id'] : deployment ID, for example, d-xxx (string; RO)
maestro.node['tmpdir'] : absolute path to working directory. This path is cleared after use
(string; RO)
maestro.node['scriptdir'] : absolute path to the root of the script directory (string; RO)
maestro.node['name'] : server name (same as env variable SERVER_NAME) (string; RO)
maestro.node['instance']['private-ip'] (string; RO)
maestro.node['instance']['public-ip'] (string; RO)
maestro.node['parts'] : shared with all Python scripts invoked on this node (JSON object; RW)
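As a brief illustration, the following sketch combines several of the helper
functions and data objects listed above in a hypothetical part install.py script;
the BINARY_URL parameter, the MYPART name, and the directory layout are
assumptions for the example and are not part of any shipped plug-in:
# Hypothetical part install.py (a sketch that combines the helpers listed above).
# maestro and logger are provided by the workload agent.
installDir = maestro.parms['installDir']

# Download and extract a .tgz archive into the installation directory.
# BINARY_URL is an assumed parameter taken from the topology document.
maestro.downloadx(maestro.parms['BINARY_URL'], installDir)

# Record the location for later scripts on this node.
if not 'MYPART' in maestro.node['parts']:
    maestro.node['parts']['MYPART'] = {}
maestro.node['parts']['MYPART']['installDir'] = installDir

# Copy this part's life cycle scripts into {scriptdir} and make them executable.
maestro.install_scripts('scripts')

# Log and exit if changing ownership fails.
rc = maestro.trace_call(logger, ['chown', '-R', 'virtuser:virtuser', installDir])
maestro.check_status(rc, 'chown of %s failed' % installDir)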
Roles
A role represents a managed entity within a virtual application instance. Each role
is described in a topology document by a JSON object, which is contained within a
corresponding vm-template like the following:
maestro.role['tmpdir'] : role-specific working directory; not cleared (string; RO)
You can import custom scripts, for example, import my_role/my_lib.py:
utilpath = maestro.node['scriptdir'] + '/my_role'
if not utilpath in sys.path:
    sys.path.append(utilpath)
import my_lib
"NONTRAN":false,
"db2jarInstallDir":"\/opt\/db2jar",
"db_type":"DB2",
"db_dsname":"db2ds1",
"resourceRefs":[
{
"moduleName":"tradelite.war",
"resRefName":"jdbc\/TradeDataSource"
}
],
"db_alias":"db21"
},
"type":"DB2",
"bindingType":"javax.sql.DataSource"
}
],
The role status can change during transitions and within a state. Here is the same
state progression, shown with the details of status and life cycle scripts started:
Table 72. Role state and status
Role state script
Transition
Initial
Initial =>
INSTALLED
on entry
INITIAL
during
INSTALLING
INSTALLED
INSTALLED
=>
{role}/install.py then all RUNNING
{role}/{dep}/install.py
CONFIGURING
STARTING (role status by
script)
{role}/configure.py then
all {role}/{dep}/
configure.py
{role}/start.py
RUNNING
on entry
on changed
{role}/start.py
Two status checks are available that determine when a dependency script is started
on a source role. For example, if A depends on B, then A is the source and B is the
target as follows:
v Role A must be in the RUNNING state (Role.rolesChanged())
v Role B must have status == RUNNING (Relation.rolesChanged())
Existing resources
Plug-ins can interact with existing resources. Although the existing resource is not
a managed entity within the plug-in, it is modeled as a role. This allows for a
consistent approach, whether dealing with pattern-deployed or existing resources.
Specifically:
v Integration between resources is modeled as a dependency between two roles.
The target role (pattern-deployed or existing) exports properties that are used by
a dependency script on the source ({role}/{dep}/changed.py) to realize the
integration. This design provides reuse of the source dependency script. For
example, in the wasdb2 plug-in, the WAS/DB2/changed.py script manages a
WebSphere Application Server data source for any pattern-deployed or existing
database.
v User interactions in the IBM Cloud Orchestrator deployment user interface are
consistent for resources and integrations. Resources (pattern-deployed or
existing) are represented as roles, meaning they display on the Operations tab of
the deployment panel in the product user interface. For example, you can look
for a role when you change a password. For a pattern-deployed resource, the
change is applied to the resource, then exported for dependencies to react. For
an existing resource, change is exported for dependencies to react like when the
password is already changed externally.
Managing configuration of the interactions (links) is handled through the source
role.
An existing resource is modeled by a component in appmodel/metadata.json file.
Typical component attributes are required to connect to the resource, such as
hostname/IP address, port and application credentials.
Integration with existing resources is modeled by a link in the
appmodel/metadata.json file.
If a type of resource displays as pattern-deployed or existing, then consolidation is
possible by adding a role to represent the external resource. This role can export
parameters from the existing resource that the dependent role for the pattern
deployed case can handle.
Consider the case of an application using an existing resource, such as wasdb2,
imsdb and wasctg plug-ins. At the application model level, the existing database is
a component, and WebSphere Application Server uses it, on behalf of the
application, as a represented link to that component. Typical attributes of the
existing database are its host name or IP address and port, and the user ID and
password for access.
In older service approaches, the existing database component has a transform that
builds a JSON target fragment that stores the attributes, and the link transform
uses these attributes. In IMS, for example, the link transform creates a dependency
in the WebSphere Application Server role in the WebSphere Application Server
node, with the parameters of the existing database passed from the component.
The dependent role configure.py script is used to configure WebSphere
Application Server to use the existing database based on the parameters. This is
sufficient, but in the deployment panel, the parameters of the existing database
appear in the WebSphere Application Server role, which is not sensible.
In the new role approach, the target component creates a role JSON object and the
link transform adds it to the WebSphere Application Server virtual machine
template list of roles. The wasdb2 plug-in creates an xDB role to connect to existing
DB2 and Informix databases. IMS can convert to this model, and move its
configure.py and change.py scripts to a new xIMS role. The advantage of this approach
is in the deployment panel, which lists each role for a node separately in a left column
where its parameters and operations are better separated for user access.
The wasdb2 plug-in provides an additional feature that IMS and CTG might not
use. The plug-in also supports pattern-deployed DB2 instances. In the
pattern-deployed scenario, the DB2 target node is a node that is started. The
correct model is a dependent role and the link configuration occurs when both
components, source WebSphere Application Server and target DB2, start. The
changed.py script is then run. For the existing database scenario, the wasdb2
plug-in exports the same parameters as the DB2 plug-in, and then processing for
pattern-deployed and existing cases can be performed in the changed.py script.
IMS and wasctg do not require this process and can use a configure.py role script
for new roles.
Set repeatable task
At run time, a role might need to perform some actions repeatedly. For example,
the logging service must back up local logs to the remote server in a fixed period.
The plug-in framework allows a script that is started after a specified time, to meet
this requirement.
In any part script, such as configure.py and start.py, you can specify a task as
follows:
task = {}
task['script'] = 'backupLog.py'
task['interval'] = 10
taskParms = {}
taskParms['hostname'] = hostname
taskParms['directory'] = directory
taskParms['user'] = user
taskParms['keyFile'] = keyFile
task['parms'] = taskParms
maestro.tasks.append(task)
You must have a dictionary object, named task in this sample; you can change this to
another valid name. The target script is specified by task['script'] and the interval is
specified by task['interval']. Optionally, you can add parameters to the script by using
task['parms']. The maestro.tasks.append(task) call enables the task. In this sample,
backupLog.py, which is located in the folder {role}/scripts, is started 10 seconds after
the current script completes. In the backupLog.py script, you can retrieve the task
parameters from maestro.task['parms'] and retrieve the interval from
maestro.task['interval']. The script is started only one time. If the backupLog.py script
must be started repeatedly, you must add the same task registration code to the
backupLog.py script itself. When the current script completes, it is started again after
the newly specified interval with the new parameters.
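For example, a backupLog.py script that re-registers itself so that the log backup repeats might look like the following minimal sketch. The maestro.task and maestro.tasks calls are the ones described above; the actual copy of the log files is left as a comment because the backup mechanism itself is outside the scope of this excerpt.
import maestro

# Retrieve the interval and the parameters that were registered for this task.
interval = maestro.task['interval']
parms = maestro.task['parms']

hostname = parms['hostname']
directory = parms['directory']
user = parms['user']
keyFile = parms['keyFile']

print 'Backing up local logs to %s:%s' % (hostname, directory)
# ... copy the local log files to the remote server here, for example with scp and keyFile ...

# Re-register the same task so that backupLog.py is started again after the interval.
task = {}
task['script'] = 'backupLog.py'
task['interval'] = interval
task['parms'] = parms
maestro.tasks.append(task)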
There are two ways for a plug-in to mark a virtual machine as persistent:
v Direct
The transformer adds the persistent property to the vm-template.
v Indirect
The package configuration specifies the persistent attribute.
The direct method supersedes the indirect. That is, if the vm-template is marked
persistent (true or false), that is the final value. If the vm-template is not marked
persistent, the resolve phase of deployment derives a persistent value for the
vm-template based on the packages associated with that vm-template. The
vm-template is marked persistent (true) if any package declares persistent true.
The indirect method provides more flexibility to integrate parts and node parts,
without requiring global knowledge of where persistence is required. A
transformer adds the property as follows:
"vm-templates": [
{
"persistent":true,
"scaling": {
"min": 1,
"max": 1
},
"name": "Caching_Master",
"roles": [
{
"depends": [{
"role": "Caching_Slave.Caching"
}],
"type": "CachingMaster",
"name": "Caching",
"parms":{
"PASSWORD": "$XSAPassword"
}
}
],
"packages": [
"CACHING"
]
},
"packages" : {
"DB2" : [{
"requires" : {
"arch" : "x86_64",
"memory" : 0},
"persistent" : true,
"parts" : [
{"part" : "parts/db2-9.7.0.3.tgz",
"parms" : {
"installDir" : "/opt/ibm/db2/V9.7"}},
{"part" : "parts/db2.scripts.tgz"}]
}]
}
}
where
script defines the operation debug.py script that is started when the operation
is submitted. The operation script name can also be followed by a method
name such as setWASTrace that is included in the previous code sample. The
method name can be retrieved later in the operation script. The operation
script should be placed under the role scripts path, for example,
plugin/parts/was.scripts/scripts/WAS.
attributes define the operation parameters that you must input. The operation
parameters can be retrieved later by the operation script.
v Attributes for operations against multiple instances.
If a role has more than one instance, you can use these attributes in the
operation definition to control how an operation is applied to instances. The
following attributes are validated if a role has more than one instance:
rolling
Determines if an operation is performed sequentially or concurrently on
instances.
To perform an operation concurrently, set "rolling": false. This is
the default setting.
To perform an operation sequentially, set "rolling": true.
target Determines if an operation is performed on a single instance or all
instances.
To perform an operation on all instances, set "target": All. This is
the default setting.
To perform an operation on a single instance, set "target": Single.
See the WebSphere Application Server operation.json file for an example.
v Setting a particular role status for an operation.
By default, when an operation is being performed, the role status is set to
"CONFIGURING" and then is set back to "RUNNING" when the operation is
complete. This change in status can sometimes stop the application itself. Some
operations, such as exporting logs, do not require a role to change its status from
"RUNNING". For these types of operations, you can explicitly set the role status
to use when the operation starts. For example, to keep the role status as
"RUNNING" when the operation starts, add the following attribute to the
operation definition:
"preset_status": "RUNNING"
The role status will remain as "RUNNING" unless an error occurs during the
operation.
See the WebSphere Application Server operation.json file for an example.
v Operation script
The operation script can import the maestro module. The information that is retrieved
in the role life cycle part scripts, such as maestro.role, maestro.node, maestro.parms,
and maestro.xparms, can be retrieved in the same way in the operation script.
Also, all of the utility methods, such as download and downloadx, can be used.
Parameters that are configured in the deployment panel are passed into the
script and are retrieved from maestro.operation['parms']. The method name
defined in the operation.json file is retrieved from maestro.operation['method'],
and the operation ID is retrieved from maestro.operation['id'].
v File download and upload
All downloaded artifacts must be placed under the fixed root path. The
operation script can get the root path from
maestro.operation['artifacts_path']. To specify a file to be downloaded later, insert
maestro.return_value='file://key.p12' in the script. The prefix, file://,
indicates that a file is available for download. After the script is complete, the
deployment panel displays a link to download the file. Uploaded files are placed
in a temporary folder under the deployment path in the storehouse. The
operation script retrieves the full storehouse path of the uploaded files, for
example:
uploaded_file_path = maestro.operation['parms'][{parm_name}]
After the file path is retrieved, the maestro.download() method downloads the
file. When the operation is complete, the temporary files in the storehouse are
deleted. When the operation script interacts with kernel services and the storehouse,
the script should use the authorization token passed from the user interface,
rather than the agent token, so that the operations can be audited later.
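As an illustration of how these pieces fit together, the following minimal sketch shows an operation script that reads its parameters, dispatches on the method name, and exposes a generated file for download. The setWASTrace method name comes from the example mentioned above; the traceSpec parameter name, the exportCertificate method name, the generated file name, and the wsadmin call are hypothetical placeholders, and the exact arguments of maestro.download() are not shown because they are not documented in this excerpt.
import os
import maestro

# Parameters entered in the deployment panel, the method name from
# operation.json, and the operation ID.
parms = maestro.operation['parms']
method = maestro.operation['method']
op_id = maestro.operation['id']

print 'Running operation %s, method %s' % (op_id, method)

if method == 'setWASTrace':
    traceSpec = parms['traceSpec']  # hypothetical parameter name
    # ... invoke a wsadmin script here to apply the trace specification ...
    print 'Applied trace specification %s' % traceSpec

elif method == 'exportCertificate':  # hypothetical method name
    # Downloadable artifacts must be placed under the fixed root path.
    artifacts_path = maestro.operation['artifacts_path']
    key_file = os.path.join(artifacts_path, 'key.p12')
    # ... generate or copy the certificate file to key_file here ...
    # The file:// prefix tells the deployment panel to show a download link.
    maestro.return_value = 'file://key.p12'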
Configuration
In the deployment panel, a configuration update is handled as a special type of
operation. The configuration processes include:
v Define a configuration.
Add the tweak.json file under the plug-in plugin/appmodel folder, to specify
which configuration parameters can be changed during run time. This means
that some parameters in the topology model can be changed and validated at
run time. Each tweak.json file is a JSONArray. Each object describes a parameter
that can be tweaked in the topology model. For example, in the WebSphere
Application Server plug-in, add the following code example to a tweak.json file.
The parameter "ARCHIVE" under the "WAS" role can be tweaked.
The value of "id" is composed with {role_type}.{parm_name}. Other attributes
such as "label" and "description" are prepared for the user interface. This is
similar to the definition in the metadata.json file located in the /appmodel
directory.
{
"id": "WAS.ARCHIVE",
"label":"WAR/EAR File",
"description":"Specifies the web/enterprise application to be uploaded. ",
"type":"file",
"extensions":[
"war",
"ear"
]
}
For a parameter under the depends section, the value of "id" is composed of
{role_type}.{depends_role_type}.{parm_name}:
{
"id": "WAS.DB2.MINPOOLSIZE",
"type": "number"
}
To enable this feature, you must add the following code to the operation.json
file:
{
"id": "configuration",
"label": "CONFIGURATION_LABEL",
"description": "CONFIGURATION_DESCRIPTION",
"script": "change.py"
},
to
result.put("attributes", attributes);
result.put("service", prefix);
In WASXDB2Transformer.java:
Change
JSONObject serviceParms = (JSONObject) targetFragment.get("service");
To
JSONObject serviceParms = (JSONObject) targetFragment.get("attributes");
"deps_key_id": "WAS.xLDAP.xLDAP_ROLE_MAPPING",
"parms": {
"manager": {
"SPECIALSUBJECTS_xLDAP_ROLE_MAPPING": "None",
"GROUP_xLDAP_ROLE_MAPPING": "manager",
"xLDAP_ROLE_MAPPING": "manager",
"USER_xLDAP_ROLE_MAPPING": ""
},
"employee": {
"USER_xLDAP_ROLE_MAPPING": "",
"GROUP_xLDAP_ROLE_MAPPING": "employee",
"xLDAP_ROLE_MAPPING": "employee",
"SPECIALSUBJECTS_xLDAP_ROLE_MAPPING": "None"
}
}
}
]
The parms are a nested structure, and you must specify a deps_key_id as the
key for the subgroup in the parms. You can use Java-based code or a template
to complete the transformer. In the changed.py and change.py part scripts, you can
retrieve the parameters by using a for loop, as follows:
for key in parms:
    roleParms = parms[key]
    print key
    print roleParms['xLDAP_ROLE_USER_MAPPING']
    print roleParms['xLDAP_ROLE_GROUP_MAPPING']
    print roleParms['xLDAP_SPECIAL_SUBJECTS_MAPPING']
Other reference
See the related links for references to JSON formatting and validation, and for
guidelines for starting external commands from Python scripts.
Application model and topology document examples
The application model and topology documents are core pieces of the IBM Cloud
Orchestrator modeling and deployment. This section presents examples of these
related documents as a basis for the other sections in this guide. The sample Java
Enterprise Edition (Java EE) web application provided with the web application
virtual application pattern type is used as an example.
Application model
The appmodel.json file represents the serialization of the model that is defined in
the Pattern Builder user interface. Components (nodes) and links, along with
user-specified property values, are represented.
{
"model":{
"name":"Sample",
"nodes":[
{
"attributes":{
"WAS_Version":"7.0",
"archive":"artifacts/tradelite.ear",
"clientInactivityTimeout":60,
"asyncResponseTimeout":120,
"propogatedOrBMTTranLifetimeTimeout":300,
"totalTranLifetimeTimeout":120
},
"id":"application",
"type":"EAR"
},
{
"attributes":{
"dbSQLFile":"artifacts/setup_db.sql"
},
"id":"database",
"type":"DB2"
}
],
"links":[
{
"source":"application",
"target":"database",
"annotation":"",
"attributes":{
"connectionTimeout":180,
"nontransactional":false,
"minConnectionPool":1,
"jndiDataSource":"jdbc/TradeDataSource",
"XADataSource":false,
"maxConnectionPool":10
},
"type":"WASDB2",
"id":"WASDB2_1"
}
]
}
}
Topology document
The final topology document for a given application model depends on the
deployment environment, such as storehouse URL and image ID. This sample
shows two vm-templates from the web application, application-was and
database-db2. Each vm-template has a list of nodeparts and parts to be installed,
and run time roles to be managed.
{
"vm-templates":[
{
"parts":[
{
"part":"https:\/\/2.zoppoz.workers.dev:443\/https\/localhost:9444\/storehouse\/admin\/plugins\/was\/parts\/was-7.0.0.11.tgz",
"parms":{
"installDir":"\/opt"
}
},
{
"part":"https:\/\/2.zoppoz.workers.dev:443\/https\/localhost:9444\/storehouse\/admin\/plugins\/was\/parts\/was.scripts.tgz"
},
{
"part":"https:\/\/2.zoppoz.workers.dev:443\/https\/localhost:9444\/storehouse\/admin\/plugins\/wasdb2\/parts\/db2.jdbc.tgz",
"parms":{
"installDir":"\/opt\/db2jar"
}
},
{
"part":"https:\/\/2.zoppoz.workers.dev:443\/https\/localhost:9444\/storehouse\/admin\/plugins\/wasdb2\/parts\/wasdb2.scripts.tgz"
}
],
"node-parts":[
{
"parms":{
"private":"127.0.0.1"
},
"node-part":"https:\/\/2.zoppoz.workers.dev:443\/https\/localhost:9444\/storehouse\/admin\/plugins\/firewall\/nodeparts\/firewall.tgz"
},
{
"parms":{
"iaas-port":"8080",
"agent-dir":"\/opt\/IBM\/maestro\/agent",
"http-port":9999,
"iaas-ip":"127.0.0.1"
},
"node-part":
"https:\/\/2.zoppoz.workers.dev:443\/https\/localhost:9444\/storehouse\/admin\/plugins\/agent\/nodeparts\/agent-linux-x64.tgz"
},
{
"parms":{
"installerURL":"files\/itmosv6.2.2fp2_linuxx64.tar.gz",
"omnibustarget":"",
"temsip":"",
"omnibusip":""
},
"node-part":
"https:\/\/2.zoppoz.workers.dev:443\/https\/localhost:9444\/storehouse\/admin\/plugins\/monitoring\/nodeparts\/monitoring.tgz"
},
{
"node-part":
"https:\/\/2.zoppoz.workers.dev:443\/https\/localhost:9444\/storehouse\/admin\/plugins\/deployinlet\/nodeparts\/deployinlet.tgz"
},
{
"parms":{
"collectors":[
{
"url":"https:\/\/2.zoppoz.workers.dev:443\/http\/COLLECTOR_NODE_IP:8080"
}
]
},
"node-part":"https:\/\/2.zoppoz.workers.dev:443\/https\/localhost:9444\/storehouse\/admin\/plugins\/logging\/nodeparts\/logging.tgz"
},
{
"node-part":"https:\/\/2.zoppoz.workers.dev:443\/https\/localhost:9444\/storehouse\/admin\/plugins\/cloud.HSLT\/nodeparts\/iaas.tgz"
},
{
"node-part":
"https:\/\/2.zoppoz.workers.dev:443\/https\/localhost:9444\/storehouse\/admin\/plugins\/autoscaling\/nodeparts\/autoscaling.tgz"
}
],
"scaling":{
"min":1,
"max":1
},
"image":{
"type":"medium",
"image-id":"none",
"activators":[
"https:\/\/2.zoppoz.workers.dev:443\/https\/localhost:9444\/storehouse\/\/admin\/clouds\/mockec2.zip"
]
},
"name":"application-was",
"roles":[
{
"depends":[
{
"role":"database-db2.DB2",
"parms":{
"MAXPOOLSIZE":"$$2",
"installDir":"\/opt\/db2jar",
"inst_id":1,
"POOLTIMEOUT":180,
"NONTRAN":false,
"DS_JNDI":"jdbc\/TradeDataSource",
"MINPOOLSIZE":"$$3"
},
"type":"DB2"
}
],
"parms":{
"clientInactivityTimeout":"60",
"ARCHIVE":"$$1",
"propogatedOrBMTTranLifetimeTimeout":"300",
"asyncResponseTimeout":"120",
"USERID":"virtuser",
"totalTranLifetimeTimeout":"120",
"PASSWORD":"<xor>BW4SbzM9FhwuFgUxE2YyOW4="
},
"external-uri":"http:\/\/{SERVER}:9080\/",
"type":"WAS",
"name":"WAS",
"requires":{
"memory":256
}
}
],
"packages":[
"WAS",
"WASDB2"
]
},
{
"parts":[
{
"part":"https:\/\/2.zoppoz.workers.dev:443\/https\/localhost:9444\/storehouse\/admin\/plugins\/db2\/parts\/db2-9.7.0.1.tgz",
"parms":{
"installDir":"\/opt\/ibm\/db2\/V9.7"
}
},
{
"part":"https:\/\/2.zoppoz.workers.dev:443\/https\/localhost:9444\/storehouse\/admin\/plugins\/db2\/parts\/db2.scripts.tgz"
}
],
"node-parts":[
{
"parms":{
"private":"127.0.0.1"
},
"node-part":
"https:\/\/2.zoppoz.workers.dev:443\/https\/localhost:9444\/storehouse\/admin\/plugins\/firewall\/nodeparts\/firewall.tgz"
},
{
"parms":{
"iaas-port":"8080",
"agent-dir":"\/opt\/IBM\/maestro\/agent",
"http-port":9999,
"iaas-ip":"127.0.0.1"
},
"node-part":
"https:\/\/2.zoppoz.workers.dev:443\/https\/localhost:9444\/storehouse\/admin\/plugins\/agent\/nodeparts\/agent-linux-x64.tgz"
},
{
"parms":{
"installerURL":"files\/itmos-v6.2.2fp2_linuxx64.tar.gz",
"omnibustarget":"",
"temsip":"",
"omnibusip":""
},
"node-part":
"https:\/\/2.zoppoz.workers.dev:443\/https\/localhost:9444\/storehouse\/admin\/plugins\/monitoring\/nodeparts\/monitoring.tgz"
},
{
"node-part":
"https:\/\/2.zoppoz.workers.dev:443\/https\/localhost:9444\/storehouse\/admin\/plugins\/deployinlet\/nodeparts\/deployinlet.tgz"
},
{
"parms":{
"collectors":[
{
"url":"https:\/\/2.zoppoz.workers.dev:443\/http\/COLLECTOR_NODE_IP:8080"
}
]
},
"node-part":
"https:\/\/2.zoppoz.workers.dev:443\/https\/localhost:9444\/storehouse\/admin\/plugins\/logging\/nodeparts\/logging.tgz"
},
{
"node-part":
"https:\/\/2.zoppoz.workers.dev:443\/https\/localhost:9444\/storehouse\/admin\/plugins\/cloud.HSLT\/nodeparts\/iaas.tgz"
},
{
"node-part":
"https:\/\/2.zoppoz.workers.dev:443\/https\/localhost:9444\/storehouse\/admin\/plugins\/autoscaling\/nodeparts\/autoscaling.tgz"
}
],
"scaling":{
"min":1,
"max":1
},
"image":{
"type":"large",
"image-id":"none",
"activators":[
"https:\/\/2.zoppoz.workers.dev:443\/https\/localhost:9444\/storehouse\/\/admin\/clouds\/mockec2.zip"
]
},
"name":"database-db2",
"roles":[
{
"parms":{
"DB_PORT":50000,
"DB_PATH":"\/home\/db2inst1",
"PASSWORD":"<xor>NRA0aWgHOGoRaG47DiU=",
"DB_NAME":"adb",
"SQL_URL":
"https:\/\/2.zoppoz.workers.dev:443\/https\/localhost:9444\/storehouse\/user\/applications\
/a-0d1ac0d4-4e4c-49d7-954f-d4884a6ad703\/artifacts\/setup_db.sql"
},
"external-uri":"jdbc:db2:\/\/{SERVER}:50000\/adb:user=db2inst1;password=jOk67Xg5N71dQz;",
"type":"DB2",
"name":"DB2"
}
],
"packages":[
"DB2"
]
}
]
}
Related information:
JSON Formatter and Validator
JSONLint: The JSON Validator
Python documentation: Subprocess management
Plug-ins for development:
The Plug-in Development Kit (PDK) includes several plug-ins that you can install
to assist you with testing and troubleshooting your plug-ins.
Debug:
The debug component, com.ibm.maestro.plugin.debug, provides support for
developing and debugging plug-ins.
Before you begin
The debug component is included in the IBM Plug-in Development Kit.
Before you can use the debug plug-in, you must import it into your IBM Cloud
Orchestrator development environment.
1. Import the com.ibm.maestro.plugin.debug plug-in.
2. Restart IBM Cloud Orchestrator.
About this task
The following are the attributes for the debug component:
v Do not deploy this application model, only write topology document to
storehouse: Select this option to specify the use of the Storehouse Browser to get
a topology document.
This option enables a deploy-only mode so that your virtual application pattern
is transformed through the complete deployment process, but no virtual
machines are created. Rather, the finalized topology document is written to the
Storehouse. You can use the Storehouse Browser to view your final topology
document.
Using this option is the first debugging step after writing the plug-in and
pattern type. To locate the Storehouse Browser, access the product user interface.
From the menu, click System > Storehouse Browser. Expand User Deployments
> deployment ID. Click topology.json > Get Contents.
The deployment ID is in the Kernel Services (KS) console log, as follows:
[18/Aug/2011 21:19:54:447 +0000] INFO debug topology-only deployment stored topology
document in https://192.0.2.10:9444/storehouse/user/deployments/
d-02de78c2-6b26-4f32-80af-ba9083a481c4/topology.json
v Leave files on deployed nodes to enable manual debugging: Select this option
if you want to use Secure Shell (SSH) to access the node for manual debugging.
The virtual application pattern is deployed when this check box is selected.
Virtual machines are created in the deployment process.
The files are retained on deployed virtual machines so that you can log in to the
machines with SSH. You can view files and re-run nodepart and part life-cycle
scripts. This option supports greater productivity in debugging these scripts,
because you can edit and re-run scripts on the node and are not required to
fully deploy new nodes to start a test cycle. See Running Scripts on deployed
virtual machines for instructions on using this option.
Procedure
You can view, edit, or add this virtual application component in the user interface
as follows:
1. Click PATTERNS > Pattern Design > Virtual Application Patterns.
2. Select a virtual_application_pattern.
3. Click Edit the virtual application icon located in the upper right corner of the
Pattern Builder palette.
4. To add the debug component to a virtual application pattern, click Debug
listed under Other Components and drag the icon to the Pattern Builder
canvas. The properties panel for the debug component displays to the right
of the Pattern Builder palette. For more details on the properties panel settings,
view the help by selecting the help icon on the properties panel.
5. To edit an existing debug component, select the Debug part on the Pattern
Builder canvas. The properties panel displays. For more details on the
properties panel settings, see the properties descriptions or view the help by
selecting the help icon on the properties panel.
6. You can also view the debug component properties by viewing the plug-in
information.
Click PATTERNS > Deployer Configuration > System Plug-ins. Select
Foundation Pattern Type from the Select a pattern type menu.
The plug-ins included with the Foundation Pattern are listed. Select
com.ibm.maestro.plugin.debug/x.x.x.x from the System Plug-ins palette where
x.x.x.x corresponds to the version numbers. The component plug-in
configuration information displays on the canvas.
Results
You have added a new debug component or edited an existing one.
Unlock:
To facilitate plug-in development, the unlock plug-in provides the ability to delete
a plug-in used by a virtual application instance so that you can easily replace the
plug-in with an updated version and activate the new plug-in on existing virtual
machines instead of redeploying a new copy of the application.
Before you begin
Important: The unlock plug-in is for development environments only to facilitate
testing of plug-ins that are being developed. It is not intended for production
environments and should not be installed in a production environment.
About this task
The unlock plug-in is included in the IBM Plug-in Development Kit.
In a normal IBM Cloud Orchestrator environment, a plug-in cannot be removed if
it is being used by a deployed virtual application. For example, if a virtual
application uses a plug-in called custom.plugin at the version level 1.2.3.4, you
cannot delete version 1.2.3.4 of custom.plugin from the system. Locking the
plug-in in this way is important for the integrity and stability of virtual
applications in a production environment. If the plug-in is removed and the
deployed application needs to scale up or recover from a failure, the absence of the
plug-in can result in application failure.
In a development environment, however, the ability to delete and replace a locked
plug-in is useful because it significantly reduces the time required to test updates
to a plug-in. For example, if a deployed application is using custom.plugin version
1.2.3.4 and you want to test a bug fix or new feature that you have added to the
plug-in, you can import the modified custom.plugin version 1.2.3.4 plug-in and
activate it on deployed virtual machines in the virtual application instead of
deploying a new copy of the virtual application.
Procedure
To use the unlock plug-in for testing:
1. Update the code for the plug-in you are developing, and build the plug-in
without changing the version number of the plug-in.
2. Import the plugin.unlock plug-in and then restart IBM Cloud Orchestrator.
3. Delete the existing plug-in version from the IBM Cloud Orchestrator system.
4. Import the plug-in that you updated and built in step 1.
5. Apply the changes to the virtual machine.
a. Stop the agent and deployment inlet.
killall java
c. Perform any operations required to put the virtual machine in a state that is
ready for a fresh activation of the plug-in. This can include stopping
processes or removing files or other content that the plug-in installs on the
virtual machine.
d. Restart the virtual machine and activate the plug-in with the following
command:
/0config/0config.sh
The node reboots and restarts itself, as if it were a newly deployed node. It
downloads the topology document, and then takes the nodeparts, parts, and
roles through all the lifecycle events by running all the lifecycle startup
scripts. Your newly installed version 1.2.3.4 of custom.plugin is used by the
application. You can restart as many virtual machines as you need to
properly test your plug-in code.
What to do next
If you need to update the plug-in again, you can repeat the steps to delete the
installed plug-in, import the modified version, and activate the new version on the
virtual machines. When you have finished your testing, delete the plugin.unlock plug-in and restart
IBM Cloud Orchestrator.
Creating your own database plug-in:
If you want to develop a plug-in to support your own database, you can create
your own, modeled on the wasdb2 plug-in. This topic describes how to implement
a database connection.
The wasdb2 plug-in includes parts for connections with an existing DB2, Informix,
or Oracle, or connections with a pattern-deployed DB2 database. You can choose to
include either or both implementations in your custom plug-in.
For a pattern-deployed database, there are two virtual machines, one running
WebSphere Application Server with the user's application, and the other running
DB2 to manage data for the application. In the existing database case, the database
is already up and running on another machine, either in the cloud, as a shared
service, or outside the cloud managed by IBM Cloud Orchestrator. In either case,
WebSphere Application Server needs to have the IP address, port number, database
name, and database credentials (userid and password) to connect to the database.
In the application model, the WebSphere Application Server node and the database
node are modeled as components. The link between them in the application model
represents the connection between them, and provides the foundation for this data
transfer of required access information.
Pattern-deployed database connection
Roles provide capabilities to orchestrate application startup, life-cycle management,
and undeployment. For a database deployed with the IBM Web Application
Pattern (not released with the product), a WAS role manages and interacts with
the WebSphere Application Server instance deployed on its node, and a DB2 role
interacts with the DB2 instance deployed on its node. The wasdb2 plugin provides
a link between the WAS and DB2 components. It inserts a dependency of the WAS
role on the DB2 role. At the start of the deployment, when both the WAS and DB2
roles transition to the RUNNING state, the WAS/DB2/changed.py script runs. The
DB2 role life-cycle scripts export DB2 characteristics, like hostname/IP address,
port number, database name, userid and password that are required to use the DB2
database on this deployed instance. The WAS/DB2/changed.py script gets this
exported data, and passes the values into wsadmin scripts to configure the
information so that applications in the WebSphere Application Server node can
access the database.
Roles can also contribute operations to the deployment inlet. These operations can
be used to modify the running deployment. For example, you can change the
password of your DB2 database. The DB2 plugin offers this operation, which
changes the database password and exports this changed data. The
WAS/DB2/changed.py script is notified of this update, and invokes a wsadmin
script to update the changed password in WebSphere Application Server.
Existing database connection
Web Application Pattern (not released with the product) supports connecting to
three existing database types: DB2, Informix, or Oracle. It uses a role for each:
xDB2, xInformix and xOracle. These roles work like the DB2 role used for a
pattern-deployed database. An application model component is available on the
Pattern Builder palette for each type of database, with hostname/IP address, port
number, database name, and user ID and password attributes. These values are
specified at virtual application design time. A link transform makes the WAS role
dependent on these roles, for existing databases. The xDB2 role start.py script
exports the values that are specified in xDB2 component on the Pattern Builder
pane, by using the same mechanism and key names as the DB2 role. The existing
database roles offer configuration settings and deployment inlet change operations
to dynamically change these configuration values just like the DB2 role. As with a
pattern-deployed DB2 database, WAS/xDB/changed.py scripts get the exported
xDB values (IP address, port number, database name, userid and password) and
invoke appropriate WebSphere Application Server configuration scripts so that the
applications on the WebSphere Application Server node can access the database.
Two attributes must be added to dependent roles.
asDependency : DB
Normally, each dependent role must provide its own dependent role scripts.
The wasdb plugin provides these scripts for all databases, and they are
delivered as WAS/DB scripts. While the topology document has the WAS role
depending on the appropriate DB role (DB2, xDB2, xInformix or xOracle), the
asDependency attribute maps all dependent role script calls to WAS/DB, for
example for changed.py. Database dependent information, unique to each
database, is passed to the wasdb link in a dblink_metadata JSONObject.
localOnly : true
This attribute is used in the existing resource or surrogate role cases to indicate
that this role is local to the WebSphere Application Server node. It is especially
important with scaling to invoke WAS/DB/changed.py only once per local
WebSphere Application Server node. The next section describes surrogate roles.
The wasdb plugin contributes a WASDB link to the Web Application Pattern
pattern type. The source component is a WebSphere Application Server node
(vm-template). The target component is a JSONObject with two elements:
dblink_metadata (required)
A JSONObject with two elements:
packages (optional)
A JSONArray of package names to be installed on the WebSphere
Application Server node. The packages are added in the usual way to the
$sourcePackages variable in the wasdb_link.vm velocity template.
parms (required)
The database-specific parameters required by the scripts to configure Web
Application Pattern to connect to the database
role (optional)
The role to insert into the WebSphere Application Server template. It is also
made a dependent role to the WAS role in the source WebSphere Application
Server template, as is usual in Velocity template link transforms. The role
element is used in the existing database case, for xDB2, xInformix and xOracle.
A pattern-deployed database does not need an extra role. It uses the DB2 role
for the WAS role to depend on. A targetRole parameter is included in the
dblink_metadata parms element for a pattern-deployed database. Its value is
the name of the pattern-deployed DB role of the dependency added to the
WAS role.
Using a surrogate role
The surrogate role is an option if your database pattern deploy plug-in does not
provide a role that closely mimics the DB2 role, or the data it exports does not
use the same names as the DB2 role and our xDB roles. A surrogate role is added
to reflect status and changes in the target database role back to the wasdb plugin
in the manner it expects. For example, suppose a database called DB3 has a DB3 role
that you want to use with the wasdb plugin. You create a new role, DB3Surrogate,
that depends on the DB3 role. It has a DB3Surrogate/DB3/changed.py script
that gets the changed exported data from DB3. The WAS role is made dependent on
the DB3Surrogate role, which takes the changed data from DB3, converts it to the names
and formats expected by wasdb, and exports it under the names that wasdb expects. To realize
this with the WASDB link, the targetRole in the dblink_metadata parms would be DB3, and
the DB3Surrogate role would be passed in as the role element.
Log service for plug-ins:
The logging service is a general service to collect multiple types of information.
The information is securely transferred from the virtual machine and stored for
review by a logging service implementation.
The information collected by the logging service is for administrative purposes and
not for the application. The service can collect text and binary type file
information. The file can be a single snapshot file that is never collected again or
an infinitely growing file that can rotate to manage the size.
The logging service is a high-level service that supports zero to multiple registered
logging service implementations. The registered implementations are the real
processes that provide reports on the multiple types of information collected.
This general logging service presents a subset of the collected information in the
Log Viewer page of the IBM Cloud Orchestrator user interface and the Virtual
Application Console deployment Log Viewer tab.
The Log Viewer displays only the information that is found on the virtual machine when
it is requested. Historical information cannot be displayed after it is removed, for
example, if the data is deleted or rolled over, or if the virtual machine disappears
because it is terminated. A logging service implementation can, however, extend the
Log Viewer capabilities to access the extracted information, such as an external storage
system, so that historical information remains available even after the virtual machine
is no longer available.
The logging service information is explained in more detail in the following
sections:
Logtype details
The following logtypes are available:
v File
v BinaryFile
v SingleLine
v MultiLineTimeStamp
v MultiLineIP
Custom logtypes
External resources can create a custom logtype to assist custom event patterns in
monitoring the files and directories. This is done by creating a custom JSON file
packaged with the external resource. The plug-in must notify the logging service
about this custom logtype file. The basic metadata fields for a logtype entry are as
follows:
name
   Required: True
description
   Required: False
format
   Required: False
start
   Required: False
   Specifies a pattern to determine where an event starts.
   Important: If no end tag is included, an event ends when the pattern is seen again.
end
   Required: False
   Specifies a pattern to determine when an event is complete.
   Important: This tag determines when an event is complete; therefore, other start
   patterns that are found are ignored until the end pattern is found.
Example
logtype-config.json file:
{"types":[
{
name": adaptorName2,
"description":This is a new adaptor",
format:text
"start": "\\[\\d{2}/\\w{3}/\\d{4}.*\\d{2}:\\d{2}:\\d{2}:\\d{3}.*\\-\\d{4}\\].*Start:.*",
"end": "\\[\\d{2}/\\w{3}/\\d{4}.*\\d{2}:\\d{2}:\\d{2}:\\d{3}.*\\-\\d{4}\\].*End:.*"
}
]}
Then, the example plug-in start.py life cycle script notifies the logging service
about the list of files and directories:
maestro.loggingUtil.monitor(listjson)
The logging service monitors these specific files to display them in the Log Viewer
and allows logging service implementations that are configured for it to store them for
historical purposes.
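As a purely illustrative sketch, the listjson argument might be built as a small JSON document that associates files or directories with a logtype. The exact schema of this document is not shown in this excerpt, so the field names below (file, dir, type) are assumptions.
import json
import maestro

# Hypothetical structure: each entry names a file or directory to monitor
# and the logtype that describes its event format.
monitored = [
    {'file': '/opt/hello/logs/console.log', 'type': 'adaptorName2'},
    {'dir': '/opt/hello/logs/archive', 'type': 'File'}
]
listjson = json.dumps(monitored)

# Notify the logging service about the files and directories to monitor.
maestro.loggingUtil.monitor(listjson)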
Create a log service implementation
The logging service supports custom implementations such as, log backup, logging
collection and analysis service, and software monitoring like Splunk. These
implementations act as the underlying process for a secure information transfer
from the virtual machine and information storage for data review.
The logging service implementation is required to follow these steps:
1. Create a plug-in that contains the logging service implementation that gets
registered with the logging service.
Because the logging service implementation can be used on virtual machines
where the logging service is implemented, the implementation must be a
pattern type plug-in. This allows the logging service implementation to be
properly installed and managed by the plug-in infrastructure.
2. Use the plug-in life cycle script calls to register with the logging service.
The logging plug-in implementation uses its life cycle scripts to register when it
is ready to receive forwarded logging service method calls. The method to do
this registration is maestro.loggingUtil.registerImplementation(ImplName,
ImplScript)
The implementation provides a name, for example, logbackup, so other plug-ins
required to interact with the specific implementation know if that
implementation is active on the virtual machine. Also, this helps the logging
service know that the services are registered. The Python script provided
contains the implementation of the core forwarding methods.
When the implementation is deactivated and no longer accepts forwarded
calls from the logging service, it must unregister itself. The method to do this
unregistration is
maestro.loggingUtil.unregisterImplementation(ImplName).
This again provides the official name of the implementation.
3. Implement the core forwarding methods such as monitor, unmonitor, and
registerPluginLogtype.
Each logging service implementation is required to provide a Python script that
implements the following methods. These methods are automatically called
when the implementation is registered. Any local plug-in on the virtual
machine can call these core methods. The following are the methods to be
implemented:
v monitor(jsonData)
Provides the list of files and directories to be monitored with a logtype. The
logtype defines the details about the binary or text file and what a single
event structure looks like inside the file. If the service cares about the specific
event structure, the logtype defines a generic pattern that indicates the start
and end pattern of a single event for that specific file.
v unmonitor(jsonData)
Provides the list of files and directories to stop monitoring.
v registerPluginLogtype(file)
Provides a file that contains custom logtypes provided by a specific plug-in.
This file explains unique event patterns for the specific plug-in role, for
example:
{"types":[
{
"name": "DB2instance",
"start": "------------------------------------------------------------.*",
"end": "------------------------------------------------------------.*"
},
{
"name": "DB2StandardLog",
"start": "\\d{4}\\-\\d{2}\\-\\d{2}\\-\\d{1,2}\\.\\d{1,2}\\.\\d{1,2}\\.\\d{1,6}.*"
}
]}
The implementation sets up these calls. For example, the logbackup plug-in
creates a registry of files and file patterns required to regularly back up the logs
to an external system.
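For example, a logbackup-style implementation might register itself from its life cycle scripts (step 2) and provide the core forwarding methods (step 3) as in the following sketch. The registerImplementation, unregisterImplementation, monitor, unmonitor, and registerPluginLogtype names are the ones described above; the implementation name logbackup, the script file name logbackup_impl.py, the registry logic, and the assumption that jsonData is a JSON string listing the entries are hypothetical illustrations only.
# In the plug-in start.py life cycle script: announce that the implementation
# is ready to receive forwarded logging service calls.
import maestro
maestro.loggingUtil.registerImplementation('logbackup', 'logbackup_impl.py')

# In the plug-in stop.py life cycle script: stop receiving forwarded calls.
# maestro.loggingUtil.unregisterImplementation('logbackup')

# logbackup_impl.py: the core forwarding methods called by the logging service.
import json

registry = []  # files and directories currently tracked by this implementation

def monitor(jsonData):
    # Record the files and directories to be monitored, with their logtypes.
    registry.extend(json.loads(jsonData))
    # ... schedule these files for regular backup to an external system ...

def unmonitor(jsonData):
    # Stop monitoring the listed files and directories.
    for entry in json.loads(jsonData):
        if entry in registry:
            registry.remove(entry)

def registerPluginLogtype(file):
    # Accept a file that contains custom logtypes contributed by a plug-in role.
    pass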
IBM Cloud Orchestrator monitoring has three types of collectors that are described
in the following table.
Table 75. Monitoring collector types. Description of the monitoring collector types available for plug-ins.
com.ibm.maestro.monitor.collector.itmosagent
   Property: Built-in
com.ibm.maestro.monitor.collector.wca
   Property: Built-in
com.ibm.maestro.monitor.collector.script
   Property: Public
The built-in collectors are dedicated to IBM Cloud Orchestrator monitoring only.
These collectors gather common metrics, like statistics at the virtual machine level.
The public collectors are used by plug-in developers to interact with plug-in
management facilities for metrics collecting.
Registration
To use IBM Cloud Orchestrator monitoring collectors, you must register the
collectors with the plug-in configuration, providing the node, role, metrics, and
collector facilities information.
IBM Cloud Orchestrator provides a Python interface to register the collectors. The
definition of the interface is as follows:
maestro.monitorAgent.register('{
node: String,
role: String,
collector: String,
config: JSONObject
}')
Configuration properties
com.ibm.maestro.monitor.collector.itmosagent
{
metafile:<full path to metadata json file>,
}
com.ibm.maestro.monitor.collector.wca
{
metafile:<full path to metadata json file>,
"api-version":"3.0"
}
com.ibm.maestro.monitor.collector.script
{
metafile:<full path to metadata json file>,
"executable":"<script to execute>",
"arguments":"<optional arguments>",
"validRC":"<optional - valid return code - defaults to 0>",
"workdir":"<optional - work dir- defaults to java.io- tmpdir>",
"timeout":"<optional - time out (second) - defaults to 5
seconds>",
}
The following code example illustrates the registering script used in the script
collector:
maestro.monitorAgent.register('{
node:${maestro.node},
role:${maestro.role},
collector:com.ibm.maestro.monitor.collector.script,
config:{ metafile:<full path to metadata json file>,
executable:<script to execute>,
arguments:<arguments>,
validRC:<valid return>,
workdir:< work dir >,
timeout:< time out >} }')
The registering scripts are typically put into appropriate scripts or directories of
the plug-in lifecycle to ensure that the plug-in is ready to collect metrics. For
example, for the WebSphere Application Server collector, the registering script is
placed under the installApp_post_handlers directory where all scripts are
executed after WebSphere Application Server is running.
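For instance, a post-handler script might register the script collector as sketched below. The maestro.monitorAgent.register call, maestro.node, and maestro.role are taken from the interface described above; the metadata file path and the executable name are hypothetical, and passing the configuration as a JSON string built with json.dumps is an assumption about the accepted input format.
import json
import maestro

# Configuration for the public script collector (paths are hypothetical).
registration = {
    'node': maestro.node,    # identifies this node (see the interface above)
    'role': maestro.role,    # identifies this role
    'collector': 'com.ibm.maestro.monitor.collector.script',
    'config': {
        'metafile': '/opt/hello/monitor/metadata.json',
        'executable': '/opt/hello/monitor/collect_metrics.sh',
        'arguments': '',
        'validRC': '0',
        'timeout': '5'
    }
}

# Register the collector once the middleware is running.
maestro.monitorAgent.register(json.dumps(registration))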
Metadata file
The metadata file is referred to in collector registering.
The plug-in provides a JSON formatted file that includes collector metadata
parameters, metric category types that it wants to expose and metadata describing
each exposed metric. The content of the metadata file contains:
v Metadata file version
v array of category names to register (1..n)
v interval time in seconds to poll for updated data
v category unique configuration parameters, like mbeanQuery
v list of metric metadata objects
attributeName - Specifies an attribute from the collector to associate to this
metric
metricName - Specifies a metric name to expose through the monitoring agent
APIs
metricType - Specifies the data type, like range, counter, time, average,
percent, and string
description - (optional) Specifies the string that defines the metric
The format of the metadata file is as follows:
{
  "Version": <metadata file version>,
  "Category": [
    <array of category names to register (1..n)>
  ],
  "Metadata": [
    {
      "<category name from Category[]>": {
        "updateInterval": <interval time in seconds to poll for updated data>,
        "metricsName": <metric name to expose through monitoring agent APIs>,
        "metricType": <metric value data type, including RANGE, COUNTER, TIME, AVERAGE, PERCENT, STRING>
      }
    },
    ...... ......
  ]
}
monitoring_ui.json
A plug-in provides the monitoring_ui.json file for metadata. The following code is
an example of the monitoring_ui.json file:
[
{
"category": <category name from Category[] defined in
metric metadata>,
"label": <the content shown on the chart for the category>,
"displays": [
{
"label": <string shown on the chart element for the metric>,
"monitorType": <time and type properties of the metric to display>,
"chartType": <chart type for displaying the metric>,
"metrics": [
{
"attributeName": <metric name defined in the metadata>,
"label": <string shown on the chart element for the metric>,
}
]
}
]
},
...... ......
]
Table 77. Monitor types. Monitor types that are available to define metric data in plug-ins.
The available monitor types (monitorType) are HistoricalNumber, HistoricalPercentage, RealtimeNumber, and RealtimePercentage.
Table 78. Chart types. Chart types that are available to define metric data in plug-ins.
The available chart types (chartType) are Lines, StackArea, StackColumn, and Pie.
IBM Cloud Orchestrator auto scaling provides the run time with automatic
addition or removal of virtual application and shared service instances based on
instance workload.
You can optionally turn on the auto scaling feature by attaching the scaling policy
to a target application or shared service. The policy is also used to deliver the
scaling requirements to the backend engine. Requirements include trigger event,
trigger time, and instance number, which drive the scaling procedure.
The auto scaling policy can be attached to two kinds of components in IBM Cloud
Orchestrator: a virtual application and a shared service. For the virtual application,
you can explicitly add the scaling policy to one or more components of the
application in the IBM Cloud Orchestrator Pattern Builder. For the shared service, the
scaling policy must be described in the application model made by the plug-in
developer if the service asks for the auto scaling capability.
Plug-ins, either for virtual applications or shared services, define the scaling policy,
describe the policy in application model and provide transformers to explain and
add scaling attributes into the topology document when the policy is deployed
with plug-ins. Only if you are using shared services can the application build
automatically generate the scaling policy segment in the application model. At run
time, the backend auto scaling engine first loads scaling attributes and generates
the rule set for scaling trigger. Then the backend engine computes on the rule set
and decides if the work load reaches a threshold for adding or removing
application or shared service instances. The final step of the process is to complete
the request.
Policy elements
The auto scaling policy is composed of elements for different scaling aspects as
follows:
v Trigger Event
Specifies the monitoring metrics that trigger adding or removing plug-in
instances, and the thresholds for those metrics.
For each metric in the event definition, there are two thresholds: scale-in
threshold and scale-out threshold. For example, the CPU utilization of the virtual
machines that run WebSphere Application Server instances can be the metric for
the trigger event, with scale-in and scale-out thresholds of 20% and 80%. When
the CPU utilization is higher than 80%, a new WebSphere Application Server
instance is launched. When the CPU utilization is
below 20%, an existing WebSphere Application Server instance is selected for
removal.
v Trigger Time
Specifies how long a threshold condition must hold before the scaling
operation is performed. For example, if the trigger time is set to 120 seconds,
a timer is started at the moment that the CPU utilization is monitored as higher
than 80%. When the timer reaches 120 seconds, the scale-out operation is started.
Note that, during the timing, if the CPU utilization falls back within the
thresholds, the timer is stopped and is restarted only when the threshold
condition is met again.
v Instance Number Range
Specifies the maximum and minimum number of instances that a plug-in can have
at one time as a result of scale-out or scale-in operations. When the cluster size
of a plug-in reaches the boundary of its range, no instance is added to or
removed from the cluster, even though
the trigger event is met. To apply the auto scaling policy to a plug-in, ensure
that the scaling policy is defined in the application model that the plug-in is
associated with, which collects user-specific requirement for the scaling
capability. Also ensure that the policy is transformed into the topology
document, which guides the backend engine to inspect trigger event and
perform scaling operations.
Application model
Auto scaling capability is embodied as a policy in the application model. The
application model is used to describe the components, policies, and links in the
virtual applications or shared services. For virtual applications, the model can be
visually displayed and edited with the Pattern Builder.
You can customize your components and policies, including the auto scaling policy,
in the Pattern Builder. There is no tool to visualize shared services in the
application model. Auto scaling can only be customized in the Virtual Application
Console when the service is deployed. The scaling policy that is described in the
application model, for either a virtual application or shared service, follows the
IBM Cloud Orchestrator application model specification. The policy is defined in
the node with a group of attributes.
The three auto scaling elements, trigger event, trigger time and instance number
range, are described in the attribute set. There is no naming convention for the
attribute keys, but they must be understood by the plug-in so that it can transform
them into the topology document. The following code is an example of the elements described in
the plug-in:
"model": {
"nodes": [
{
......
},
{
"id": <policy id>
"type":<policy type>
"attributes": {
<No.1 metric id for trigger event>: [
< threshold for scale-in >,
< threshold for scale-out >
],
<No.1 metric for trigger event>: [
< threshold for scale-in >,
< threshold for scale-out >
],
<...... :[...... ,...... ]>
<No.1 metric for trigger event>: [
< threshold for scale-in >,
< threshold for scale-out >
],
<trigger time id>: <trigger time value>
<instance range number id": [
<min number>,
<max number>
],
}
},
{
625
......
}
]
}
The attributes describe the scaling policy in an application model. As the above
JSON segment shows, the Trigger Event can include multiple metrics and thresholds for
one scaling policy. This means that the scaling operations on a plug-in can be
triggered by different condition entries with different metrics. The relationship
among these entries is explicitly defined by the plug-in transformer and marked in
the topology document. It is not required to mark them in the application model,
except that their labels can be used to define the relationship in the user interface.
IBM Cloud Orchestrator requires that metadata be provided in a plug-in to describe
the components in the application model for user interface presentation. For the scaling
policy, the plug-in can apply the proper widget types and data types to the attributes
for Trigger Event, Trigger Time, and Instance Number Range.
Topology document
In the topology document, the scaling element is extended to contain the attributes from
auto scaling. Without auto scaling, the scaling element contains only the min and max
attributes, both of which typically have the same value. The value indicates the size
of a fixed cluster on the plug-in template.
"vm-templates": [
{
......
scaling :{
"min": <number,
"max": <number>,
}
},
{
......
}
]
topology document. The Trigger Event, Trigger Time, and Instance Number Range
auto scaling elements correspond to "triggerEvents", "triggerTime", and "min" and
"max", respectively.
Pattern type packaging reference:
The pattern types are a collection of plug-ins. The plug-ins contain the
components, policies and links of the virtual application pattern. This topic
explains the packaging of the plug-ins that create the virtual application pattern
type.
Virtual application pattern types are shipped with IBM Cloud Orchestrator or they
are purchasable at the IBM PureSystems Centre.
The plug-in files that are associated with a pattern type are as follows:
v {ptype}.tgz file
v plugins/set of {plugin}.tgz files
v files/set of {name} files
The {ptype}.tgz file is required and must contain the patterntype.json file. The
{ptype}.tgz file might also contain the license and localized messages. For
example, the patterntype.json file for the IBM Web Application Pattern (not
released with the product) is as follows:
{
"name":"NAME",
"shortname":"webapp",
"version":"2.0.0.0",
"description":"DESCRIPTION",
"prereqs":{
"foundation":"*"
},
"license":{
"pid":"5725D57",
"type":"PVU"
}
}
A pattern type defines a logical collection of plug-ins, but not the members. The
members (plug-ins) define their associations with pattern types in the config.json
file. Therefore, pattern types are dynamic collections and can be extended by third
parties. For example, the config.json file for the DB2 plug-in (not released with
the product) is as follows:
{
"name":"db2",
"version":"2.0.0.0",
"files":[
"db2/db2_wse_en-9.7.0.3a-linuxx64-20110330.tgz",
"optim/dsadm223_iwd_20110420_1600_win.zip",
"optim/dsdev221_iwd_20110421_1200_win.zip",
"optim/com.ibm.optim.database.administrator.pek_2.2.jar",
"optim/com.ibm.optim.development.studio.pek_2.2.jar"
],
"patterntypes":{
"primary":{
"dbaas":"1.0"
},
"secondary":[
{
"webapp":"2.0"
}
]
},
"packages":{
"DB2":[
{
"persistent":true,
"requires":{
"arch":"x86_64"
},
"parts":[
{
"part":"parts/db2-9.7.0.3.tgz",
"parms":{
"installDir":"/opt/ibm/db2/V9.7"
}
},
{
"part":"parts/db2.scripts.tgz"
}
]
}
]
}
}
Samples:
Use these samples to help you learn how to develop custom plug-ins. The plug-ins
that you develop can be added to the IBM Cloud Orchestrator catalog and used as
components, links and policies for virtual applications.
Plug-ins and pattern types are built and packaged using Apache Ant. The Plug-in
Development Kit (PDK) provides Ant build (.xml) files for this purpose. These
build files can run from the command line or from within Eclipse. Other
development environments can work, but only command line and Eclipse are
supported.
The Samples are included with the Plug-in Development Kit (PDK). Download the
PDK to get started with Samples. You can also download the PDK from the IBM
Cloud Orchestrator user interface Welcome page.
Attention: You must enable the PDK license before you can use the PDK. A
dialogue box displays during download to assist you with the license acceptance
process.
Sample pattern types and plug-ins
There are four projects in the form of plug-ins included in the PDK zip package.
These plug-ins show how to design an application model, configuration, virtual
machine template, and Python scripts of a plug-in.
plugin.com.ibm.sample.hellocenter: The plug-in project of HCenter plug-in
In this plug-in, you can learn how to write the lifecycle scripts, such as install,
configure, start, and stop for middleware like HelloCenter. You can use the
following scripts:
v install.py: Downloads artifacts from storehouse and installs the middleware. If
you want to download and extract the .tgz installation file from the storage
server, use the function downloadx instead.
Sample: Developing a plug-in and pattern type with the Eclipse Framework:
This topic is an example of how to create a plug-in and pattern type using the
Eclipse Framework.
Before you begin
Download and install the Plug-in Development Kit (PDK) and set up the
development environment.
About this task
Procedure
1. Go to the workspace that you created in the Setting up the samples
environment topic.
2. Build a single plug-in.
3. In the plugin.depends project, run the build.xml Ant script. To run the Ant
script, right-click on the file and select Run As > Ant Build. The plug-in build
process starts.
When the build process completes, refresh the project and a folder named
export displays. All of the build artifacts are listed in the export folder.
The plug-in package is in the root of the export folder.
4. Build all plug-ins in the workspace. Select the build.xml file in the root of the
plugin.depends project. Right-click and select Run As > Ant Build. The plug-in
build process starts.
When the build process completes, refresh the project. A folder named image
displays in the sub-folder plug-ins, where all of the built plug-in packages are
located.
5. Build a single pattern type. Before this step, you must successfully complete
Step 2.
Select the build.patterntype.xml file in the root of the patterntypetest.hello
project. Right-click and select Run As > Ant Build. The pattern type build
process starts.
When the build process completes, refresh the project. A folder named export
displays where all the build artifacts are listed. The pattern type package is in
the root of the export folder.
6. Navigate to the root of the export folder. The hello-2.0.0.0.tgz pattern type
binary file is located here. The sample pattern type patterntypetest.hello release
package includes three plug-ins:
v HCenter plug-in: This plug-in operates a simple message center middleware
named HelloCenter. HelloCenter opens port 4000 and listens to client
requests, and generates and returns greeting messages.
v Hello plug-in: This plug-in is a client component of HelloCenter. It sends a
request with the message sender identity to the Hello Center and tries to get
the returned greeting message and display the message on the console.
v HClink: This link plug-in from Hello to HCenter specifies the receiver name
of greeting message.
Results
You have created a plug-in that can be imported into the IBM Cloud Orchestrator
catalog.
What to do next
Import the plug-in into IBM Cloud Orchestrator.
Sample: Import a plug-in and pattern type into IBM Cloud Orchestrator:
This topic is an example of how to import your sample plug-in into IBM Cloud
Orchestrator.
Before you begin
Develop the plug-in.
About this task
Procedure
Import the pattern type hello-2.0.0.0.tgz and try sample plug-ins. The steps for
this task are based on a scenario where you have installed and configured IBM
Cloud Orchestrator.
1. Import pattern type patterntypetest.hello.
a. Log in to IBM Cloud Orchestrator as administrator or as a user with
permission to create a new pattern type.
b. Click PATTERNS > Deployer Configuration > Pattern Types.
c. Click the New icon (+).
The Install a pattern type window displays.
d. On the Local tab, click Browse.
Select the hello-2.0.0.0.tgz file. When the installation process completes,
the pattern type, patterntypetest.hello, is displayed in the Pattern Types
palette.
e. Select the patterntypetest.hello pattern type from the drop down list.
The pattern type details display on the right.
f. Click Accept to accept the license. Click Enable to enable this pattern type.
Results
You have imported a plug-in and pattern type into the IBM Cloud Orchestrator
catalog.
What to do next
Create an application that can be deployed to IBM Cloud Orchestrator.
Sample: Creating an application with the patterntypetest.hello plug-in:
In this Samples topic, you use the patterntypetest.hello plug-in to create an
application that is deployed to IBM Cloud Orchestrator.
Before you begin
Set up your development environment and develop a plug-in and pattern type.
17.
18.
19.
20.
21. In this log page, from the root, go to IWD Agent > "../logs/Hello_PluginHVM.XXXX.hello" > console.log.
The following information is provided:
Results
You have created a virtual application and deployed it to IBM Cloud Orchestrator.
What to do next
Monitor the virtual application instance.
Sample: Creating a plug-in project from the command line:
This topic is an example of how to create a plug-in project using command-line
tools.
Before you begin
1. Open the command-line tool.
2. cd to the plugin.depends project directory in your workspace.
3. Set the ANT_HOME environment variable. You can use Ant in your Eclipse
installation at eclipse/plugins/org.apache.ant_1.7*. You can also invoke this Ant
script from Eclipse. To do this, right-click create.plugin.project.xml in the
plugin.depends project. Select Run As > Ant Build. Click the Main tab. In the
argument section, type the various -Dproject.name=jp1 values provided in this
sample.
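As an informal sketch of this setup, assuming a Linux shell and an Eclipse installation under /opt/eclipse (the exact Ant plug-in directory name and workspace path are placeholders that depend on your installation):

# Point ANT_HOME at the Ant that ships with Eclipse; the version suffix varies by release
export ANT_HOME=/opt/eclipse/plugins/org.apache.ant_1.7.1
export PATH=$ANT_HOME/bin:$PATH
cd <workspace>/plugin.depends    # the plugin.depends project in your workspace
ant -version                     # verify that Ant is available on the path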
About this task
Continue with this task by completing the following steps:
Procedure
1. Create a template plug-in project as follows:
ant -Dproject.name=tp1 -Dplugin.name=a.b.c.template -f create.plugin.project.xml
3. Verify that the command is successful. Import the newly created projects into
your workspace.
To build the plug-in projects, for example, jp1, you can find build.plugin.xml in project jp1. Right-click build.plugin.xml and issue Run As > Ant Build with the goal clean, publish selected. The equivalent Ant command is to issue the following command in the project jp1 directory:
ant -f build.plugin.xml clean publish
This command builds the plug-ins in this workspace one at a time. After the
script starts, go to the image/plugins folder of the plugin.depends project to
check all of the built plug-in packages.
7. Navigate to the root of the pattern type project, patterntypetest.hello, and type
the following command:
<ant command path> -f build.patterntype.xml
After the script starts, go to the root of the export folder of the
patterntypetest.hello project to check the built pattern type package, which is a
.tgz file.
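For reference, the individual commands in this topic can be chained as in the following sketch; the project names (tp1, jp1, patterntypetest.hello) and the workspace path are placeholders taken from the sample:

# Create a template plug-in project
cd <workspace>/plugin.depends
ant -Dproject.name=tp1 -Dplugin.name=a.b.c.template -f create.plugin.project.xml

# Build a plug-in project; built packages appear in plugin.depends/image/plugins
cd <workspace>/jp1
ant -f build.plugin.xml clean publish

# Build the pattern type package; the .tgz appears in the export folder
cd <workspace>/patterntypetest.hello
ant -f build.patterntype.xml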
Procedure
1. Deploy a virtual application using one of the following methods:
v From a virtual application pattern
v From a virtual application template
2. Secure a virtual application.
3. Monitor and administer virtual applications.
Procedure
1. Click PATTERNS > Pattern Design > Virtual Application Patterns. The Virtual
Application Patterns palette displays.
2. From the left panel of the Virtual Application Patterns palette, select the
virtual_application that you want to deploy.
3. Click the Deploy icon to deploy the virtual application. The Deploy Virtual
Application dialogue box displays.
4. Complete the Target environment profile fields.
These settings provide deployment configuration information, like virtual
machine names, IP address assignment, and cloud groups. Deploying virtual
applications with environment profiles enables deployments across tiers from a
single application.
a. Select the IP type filter, IPv4 or IPv6, in the Filter by IP type field.
b. Select the Filter by profile type from the drop-down menu.
c. Select the Profile from the drop-down menu.
d. Select the Cloud group from the drop-down menu.
e. Select the IP group from the drop-down menu.
5. Click Advanced to set the SSH public key.
a. Enter an SSH public key in the SSH Key field. You can use a text editor to open your public key file and copy and paste the key into the SSH Key field.
Important: Do not use cat / less to copy and paste from the user interface.
This type of cut and paste introduces spaces to the key and you cannot gain
access to the virtual machine.
If you do not want to use an existing SSH public key, you can generate one
as described in the next step.
b. Click Generate to generate the key. The SSH key is automatically generated in the SSH Key field. Select Click here to download the file containing the private key > Save to save the private key file. Save the file to a secure location and you can name the key. The default name is id_rsa.txt. The system does not keep a copy of the private key. If you do not download the private key, you cannot gain access to the virtual machine, unless you generate a new key pair again using the user interface. You can also copy and paste the public key, save the key, and reuse the same key pair for another deployment. When you have the private key, make sure that it has the correct permissions (chmod 0400 id_rsa.txt). By default, the ssh client does not use a private key file with wide open permissions.
6. Click OK. A message displays at the top of the Pattern Builder confirming that
the virtual application is in the deployment process. You can also check the
status of the deployment from this message.
Attention: You cannot modify a virtual application after you deploy it. You
must stop the deployed virtual application before you can change it.
When the virtual application is deployed, to view the virtual instance, click
PATTERNS > Instances > Virtual Application Instances. The Virtual
Application Instances palette displays.
Note: If the deployment process is stopped and in the Virtual Application
Instances window the flavor error message is displayed, you must create a
flavor with memory, vCPUs, and storage values equal to or greater than the
requirements of the virtual machines that you are deploying. For information
about creating flavors, see Managing flavors on page 100.
After you create the new flavor, delete the virtual application instance and redeploy the virtual application pattern.
7. View the details of the deployed virtual application in the Virtual Application
Instances palette. The details include a list of virtual machines provisioned on
the cloud infrastructure for that deployment, the IP address, virtual machine
status, and role status. Role is a unit of function that is performed by the virtual
application middleware on a virtual machine.
The status values are listed in the following table:
Table 79. Possible status values for a deployed virtual application
Status values: LAUNCHING, INSTALLING, RUNNING, TERMINATED, FAILED
You can also view the virtual machine role health status information. For
example, a red check mark is located on the green status arrow when the CPU
is critical on the virtual machine.
Click Endpoint to view the endpoint information for a given role. For a deployment with DB2 you can have more than one endpoint, for example, an endpoint for the application developer and one for the database administrator.
Results
Your virtual application instance is successfully deployed and started. To stop the
virtual application instance, select the application from the list, and click Stop.
To redeploy a virtual application, select the virtual application from the Virtual
Application Patterns palette, and click the Deploy icon in the Pattern Builder.
To remove a stopped application, select it from the Virtual Application Patterns
palette, and click Delete.
What to do next
After you deploy your virtual application, you can use the IP address of the virtual
machines to access the application artifacts. For example:
To gain access to your virtual machine after deployment, type:
ssh -i id_rsa.txt virtuser@<your_workload_ip>
For SCP:
scp -i id_rsa.txt myfiles.txt virtuser@<your_workload_ip>
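If you prefer to create the key pair yourself rather than use the Generate button, a typical sequence looks like the following sketch; the file name id_rsa_ico is only an illustrative example:
# Generate a key pair locally; paste the contents of id_rsa_ico.pub into the SSH Key field
ssh-keygen -t rsa -b 2048 -f id_rsa_ico -N ""
# Protect the private key before using it, then connect to the deployed virtual machine
chmod 0400 id_rsa_ico
ssh -i id_rsa_ico virtuser@<your_workload_ip>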
You can view and monitor statistics for your deployed virtual machines and
download and view the log files from the user interface. For more information, see
Monitoring virtual application instances on page 650.
Procedure
1. Click PATTERNS > Pattern Design > Virtual Application Templates. The
Virtual Application Templates palette displays.
2. From the left panel of the Virtual Application Templates palette, select the
virtual_application_template that you want to deploy.
3. Click the Deploy icon. The Deploy Virtual Application dialogue box displays.
4. Complete the Target environment profile fields.
These settings provide deployment configuration information, like virtual
machine names, IP address assignment, and cloud groups. Deploying virtual
applications with environment profiles enables deployments across tiers from a
single application.
a. Select the IP type filter, IPv4 or IPv6, in the Filter by IP type field.
b. Select the Filter by profile type from the drop-down menu.
c. Select the Profile from the drop-down menu.
d. Select the Cloud group from the drop-down menu.
e. Select the IP group from the drop-down menu.
5. Click Advanced to set the SSH public key.
a. Enter an SSH public key in the SSH Key field. You can use a text editor to open your public key file and copy and paste the key into the SSH Key field.
Important: Do not use cat / less to copy and paste from the user interface.
This type of cut and paste introduces spaces to the key and you cannot gain
access to the virtual machine.
If you do not want to use an existing SSH public key, you can generate one
as described in the next step.
b. Click Generate to generate the key. The SSH key is automatically generated in the SSH Key field. Select Click here to download the file containing the private key > Save to save the private key file. Save the file to a secure location and you can name the key. The default name is id_rsa.txt. The system does not keep a copy of the private key. If you do not download the private key, you cannot gain access to the virtual machine, unless you generate a new key pair again using the user interface. You can also copy and paste the public key, save the key, and reuse the same key pair for another deployment. When you have the private key, make sure that it has the correct permissions (chmod 0400 id_rsa.txt). By default, the ssh client does not use a private key file with wide open permissions.
6. Click OK. A message displays at the top of the Pattern Builder confirming that
the virtual application is in the deployment process. You can also check the
status of the deployment from this message.
Attention: You cannot modify a virtual application after you deploy it. You
must stop the deployed virtual application before you can change it.
When the virtual application is deployed, to view the virtual instance, click
PATTERNS > Instances > Virtual Application Instances. The Virtual
Application Instances palette displays.
7. View the details of the deployed virtual application in the Virtual Application
Instances palette. The details include a list of virtual machines provisioned on
the cloud infrastructure for that deployment, the IP address, virtual machine
status, and role status. Role is a unit of function that is performed by the virtual
application middleware on a virtual machine.
The status values are listed in the following table:
Table 80. Possible status values for a deployed virtual application
Status values: LAUNCHING, INSTALLING, RUNNING, TERMINATED, FAILED
You can also view the virtual machine role health status information. For
example, a red check mark is located on the green status arrow when the CPU
is critical on the virtual machine.
Click Endpoint to view the endpoint information for a given role. For a deployment with DB2 you can have more than one endpoint, for example, an endpoint for the application developer and one for the database administrator.
Results
Your virtual application instance is successfully deployed and started. To stop the
virtual application instance, select the application from the list, and click Stop.
To redeploy a virtual application, select the virtual application from the Virtual
Application Patterns palette, and click the Deploy icon in the Pattern Builder.
To remove a stopped application, select it from the Virtual Application Patterns
palette, and click Delete.
What to do next
After you deploy your virtual application, you can use the IP address of the virtual
machines to access the application artifacts. For example:
To gain access to your virtual machine after deployment, type:
ssh -i id_rsa.txt virtuser@<your_workload_ip>
For SCP:
scp -i id_rsa.txt myfiles.txt virtuser@<your_workload_ip>
You can view and monitor statistics for your deployed virtual machines and
download and view the log files from the user interface. For more information, see
Monitoring virtual application instances on page 650.
Procedure
v User roles in IBM Cloud Orchestrator on page 255. Review the user roles that
apply to creating a virtual application pattern.
v Secure Socket Layer (SSL).
v Configuring Secure Shell (SSH) key-based access during application deployment.
v Configuring SSH key-based access in the user interface on page 646.
v LTPA keys.
6. To upload an SSH public key, copy and paste your SSH key in the Public Key
field and click Submit in the Add or update VM SSH public key section. If
you already have a public key in the Public Key field, the key is replaced.
Important: Do not copy the key from the console output of the Linux
command more. This can introduce line breaks into the key that might render it
invalid.
7. To remove VM SSH public keys, click Submit in the Remove VM SSH public
keys section.
Results
You have added a new VM SSH public key or removed VM SSH public keys.
LTPA keys:
Lightweight Third-Party Authentication (LTPA) is an authentication technology used in the web application that is deployed into the cloud infrastructure.
About this task
There are various tasks you can do with LTPA keys, including regenerating keys,
importing keys and exporting keys.
To configure the LTPA keys, follow these steps:
Procedure
1. Click PATTERNS > Instances > Virtual Application Instances. The Virtual
Application Instances palette displays.
2. Select the WebSphere Application Server virtual application instance. The
virtual application instance details display.
3. Click the Manage icon.
4. Select Operations. The Operations palette opens.
5. Select the WebSphere Application Server application. The deployment
operations palette displays the operations on the right. Now you can manage
the LTPA keys.
6. Regenerate LTPA keys. To regenerate LTPA keys, expand Regenerate LTPA
keys in the Deployment operations palette. Click Submit. A confirmation
dialogue window displays. Click Yes to confirm that you want to regenerate
the LTPA keys.
The operation status displays in the Operation Execution Results palette. When
the operation displays as successful, the LTPA keys are regenerated.
7. Import LTPA keys. To import an LTPA key, expand Import LTPA keys in the
Deployment operations palette. Click Browse to locate the LTPA key that you
want to import. Click Submit. A confirmation dialogue window displays. Click
Yes to confirm that you want to import the LTPA key. The operation status
displays in the Operation Execution Results palette. When the operation
displays as successful, the LTPA key is imported.
8. Export LTPA keys. To export LTPA keys, expand Export LTPA keys in the
Deployment operations palette. Click Submit. The operation status displays in
the Operation Execution Results palette. When the operation displays as
successful, the LTPA key is exported. The exported key can be downloaded
through the link that is listed in the operation status section.
Procedure
1. Click PATTERNS > Instances > Virtual Application Instances. The Virtual
Application Instances palette displays. The virtual application instances are
listed by name. You can sort the list by application name or sort by status.
Attention: You can also view the virtual application instances on a page
where all instances, such as virtual appliance instances, virtual system
instances, shared services instances and database instances are listed. Click
PATTERNS > Instances > Virtual Application Instances. Every instance
running on the IP address is listed.
2. From the left panel of the Virtual Application Instances palette, select the
virtual_application_instance that you want to review.
3. The Maintain view displays to the right. The details include a list of virtual
machines provisioned on the cloud infrastructure for that deployment, the IP
address, virtual machine status, and role status. Role is a unit of function that is
performed by the virtual application middleware on a virtual machine.
The status values are listed in the following table:
Table 81. Possible status values for a deployed virtual application
Status values: LAUNCHING, INSTALLING, RUNNING, TERMINATED, FAILED
4. View the virtual machine role health status information. For example, a red
check mark is located on the green status arrow when the CPU is critical on the
virtual machine.
5. Click Endpoint to view the endpoint information for a given role. For a deployment with DB2 you can have more than one endpoint, for example, an endpoint for the application developer and one for the database administrator.
6. Click Logs to view the log information. For more information, see Viewing
virtual application instance logs on page 650.
7. Click Manage in the top right for more advanced monitoring details and
operations. The Virtual Machine Monitoring page displays. If you want to
return to the Maintain view, click Maintain in the top right. For more
information about monitoring, see Monitoring virtual application instances
on page 650.
Results
Your virtual application instance is successfully deployed and started. To stop the
virtual application instance, select the application from the list, and click Stop.
However, a stopped deployment cannot be restarted.
To redeploy a virtual application, select the virtual application from the Virtual
Application Patterns palette, and click the Deploy icon in the Pattern Builder.
To remove a stopped application, select it from the Virtual Application Patterns
palette, and click Delete.
What to do next
After you deploy your virtual application, you can use the IP address of the virtual
machines to access the application artifacts. For example:
To gain access to your virtual machine after deployment, type:
ssh -i id_rsa.txt virtuser@<your_workload_ip>
For SCP:
scp -i id_rsa.txt myfiles.txt virtuser@<your_workload_ip>
You can view and monitor statistics for your deployed virtual machines and
download and view the log files from the user interface. For more information, see
Monitoring virtual application instances on page 650.
Procedure
1. Click PATTERNS > Instances > Virtual Application Instances. The Virtual
Application Instances palette displays.
2. Select a virtual_application_instance. The virtual application instance details
display to the right.
3. Click the More information and advanced operations icon located in the upper
right corner.
4. Click Virtual Machine Monitoring. The Virtual Machine Monitoring palette
displays. Select a virtual machine that you want to monitor. The Memory,
Process, Network and Storage monitoring charts display.
Results
You have viewed and monitored virtual application instances and machines.
Viewing virtual application instance logs:
You can view logs of the virtual application instances.
Before you begin
Your virtual application patterns must be deployed and all of your virtual
machines started before you can monitor results.
Procedure
1. Click PATTERNS > Instances > Virtual Application Instances. The Virtual
Application Instances palette displays with the virtual application instances
listed.
2. Select a virtual_application_instance. The details display to the right.
3. Click Log, located under the VM Status column to view the virtual machine
status logs. The Log Viewer palette displays with organized log sections, such
as operating system log, pattern type plug-in log and agent log.
The following types of logs can be viewed in the Log Viewer:
v Lightweight Directory Access Protocol (LDAP) logs:
/home/idsldap/sqllib/log
db2dump
db2dump/events
db2dump/stmmlog files
/home/idsldap/idsslapd-idsldap/etc and logs
/var/idsldap/V6.3 log files
v WebSphere Application Server (WAS) logs:
logs/server1 files
logs/ffdc files
Attention: If you have specified additional log files or directories to
monitor in the Logging Policy, these logs also display in this list.
The log viewer is used to view the trailing 10 lines of the log you selected.
New log entries are appended into the log viewer as they occur. The log viewer
has several actions that control the behavior of the log viewer.
You can specify a string to filter what files are displayed in the log viewer.
Strings can be prefixed by a tag that specifies one or more elements of the logs
to be examined. The following tags are supported: role, dir, vm, and file.
role:DB2
Specifies files that belong to a role with a name that contains "DB2".
dir:var Specifies files with an absolute path that includes a directory name that
contains "var". The dir: tag applies to the entire path, with the
exception of the file name.
vm:application
Specifies files on a virtual machine with a name that contains
"application".
file:trace
Specifies files that have names that contain the word "trace".
If a tag is not included in a search string, the filter is assumed to have the file:
tag.
Multiple tags can be used in conjunction.
role:DB2,trace
Specifies files with a name that contains "trace", and that belongs to a
role with a name that contains "DB2".
Ensure that the tags: "role", "dir", "vm", and "file" are lowercase, or they are not
recognized as tags. The strings following the tags are case insensitive. For
example, the string "role:WAS" will match a role name "WAS" as well as a role
named "was", but the string "ROLE:was" does not match anything, because
ROLE is not recognized as a valid tag. After entering a string in the filter box,
click Go to apply the filter to the tree.
4. Click Download All to download all files in the log viewer. It is only displayed
when no filter string has been entered, or when the filter input box has been
cleared. Clicking Download All returns an archived, compressed file containing
all of the logs on the virtual machine. If there are multiple virtual machines
displayed in the log viewer, a separate archive file will be returned for each
virtual machine. When you enter a string into the filter box to select a subset of
logs, Download All is replaced by Download Filtered. Clicking Download
Filtered downloads all of the files that are displayed in the filtered tree as a
single archive file.
5. You can also view the log files in the Virtual Application Console. After you
select the virtual application instance and the details display to the right, you
can click the More information and advanced operations icon located in the
upper right corner. Select the virtual machine for which you want to view logs.
Click Logging on the dashboard.
6. Expand each section to view the logs.
7. Optional: Download the log file. After you expand the log type and select a
log, you can click the green arrow to download the log.
Results
You have viewed the logs associated with the virtual application instances and the virtual machines that they run on.
Procedure
1. Click PATTERNS > Deployer Configuration > Shared Services.
2. In the left pane of the Shared Services window, select the shared service that
you want to deploy.
3. Click the Deploy icon. The Configure and deploy application window is
displayed.
4. Provide information into the fields to configure the shared service. The
information that you must provide differs depending on the shared service that
you are working with.
5. Click OK. The Deploy Pattern window is displayed.
6. Complete the following fields to deploy the application as a shared service:
Name Specifies the name of the application that is being shared as a service.
Do not use more than two consecutive underscore characters in a
shared service name.
Target cloud group
Provides information related to cloud groups.
SSH Key
If you want to upload a public key so that you can connect to the
deployed virtual machines using SSH, click Advanced options and
complete the SSH section. If you do not have an existing SSH key pair,
you can generate one that can be reused with other deployments by
clicking Generate. The SSH Key field populates with the generated
public key. Select Download or Download (PKCS1 format) to save the
private key to your local system.
Schedule deployment
Specifies the start and end dates for the deployment.
7. To stop the shared service, click the Stop the selected shared service icon. The
shared service status is displayed as STOPPED in the details.
Monitoring - Application
The Monitoring - Application shared service can be deployed to one or more
cloud groups to provide a reference to an external Tivoli Monitoring installation
Version 6.2.2 Fix Pack 5 or later. Once created, the UNIX or Linux OS monitoring
agents and the Workload monitoring agent that are provided in the virtual
application workloads are automatically connected to a defined instance of a Tivoli
server by using the supplied primary and fail-over Tivoli Enterprise Monitoring
Server, protocol, and port. The URL for the Tivoli Enterprise Portal Webstart
console is provided, so cloud administrators are presented with a monitoring link
in the console to launch the Tivoli Enterprise Console.
You must install the latest Application Support and Language Pack files for the
Workload monitoring agent on the Tivoli Enterprise Monitoring Server and Tivoli
Enterprise Portal Server before creating the shared service and deploying patterns
so that Tivoli Monitoring displays the new agents.
Procedure
1. Click PATTERNS > Deployer Configuration > Shared Services.
2. In the left pane of the Shared Services window, select the Monitoring - Application service.
3. Click the Deploy icon. The Configure and deploy application window is
displayed.
4. Complete the following information:
a. Enter the primary server in the Primary Server field.
b. Enter the secondary server in the Secondary Server field.
c. Select the protocol radio button in the Protocol field. Choose IP.PIPE,
IP.SPIPE, or IP.UDP.
d. Enter the port number in the Port field.
e. Enter the console URL in the Console URL field.
5. Click OK. The Deploy Virtual Application window is displayed.
6. Select the target cloud group.
7. Click Advanced options to set up SSH access. The SSH key provides access to
the virtual machines in the cloud group for troubleshooting and maintenance
purposes.
a. Use one of the following options to set the public key:
v To generate a key automatically, click Generate.
v To use an existing SSH public key, open the public key file in a text editor
and copy and paste it into the SSH Key field.
Note: Do not use cat, less, or more to copy and paste from a command
shell. The copy and paste operation adds spaces to the key which prevent
you from accessing the virtual machine.
b. If you generated a key automatically, click Download to save the private
key file to a secure location. The default name is id_rsa.txt.
The system does not keep a copy of the private key. If you do not
download the private key, you cannot access the virtual machine, unless
you generate a new key pair. You can also copy and paste the public key,
save the key, and reuse the same key pair for another deployment. When
you have the private key, make sure that it has the correct permissions
(chmod 0400 id_rsa.txt). By default, the SSH client does not use a private
key file that provides open permission for all users.
8. Click OK.
Results
You deployed a monitoring shared service as an application.
Procedure
1. Get the /utils/TOSCA/tosca.foundation-1.1.0.0.tgz file from the installation package.
2. Log in to the Self-service user interface as a Cloud Administrator.
3. Click PATTERNS > Deployer Configuration > Pattern Types.
4. Click New.
5. Choose the tosca.foundation-1.1.0.0.tgz file from your local drive and click OK. A new entry is displayed in the palette.
6. Find the TOSCA Foundation Pattern Type entry in the list on the left. This entry is disabled by default.
7. Click the entry and select Enable from the Status field.
Results
After the pattern type is enabled, you can import TOSCA Cloud Service Archives
(CSARs) that contain service templates that comply with the TOSCA specification.
You can import the templates as Virtual Application Patterns by navigating to
PATTERNS > Pattern Design > Virtual Application Patterns, or as Virtual
Application Templates by navigating to PATTERNS > Pattern Design > Virtual
Application Templates.
Remember: In IBM Cloud Orchestrator, the TOSCA service templates are imported
and deployed as virtual application patterns or virtual application templates. Before
you deploy an application pattern imported from a TOSCA CSAR, you must
perform the basic configuration for virtual application patterns, such as
configuration of cloud groups or import of base images.
Procedure
1. Log on to the IBM Cloud Orchestrator user interface as an administrator.
2. Click PATTERNS > Deployer Configuration > System Plug-ins.
3. From the list on the left, select TOSCA Foundation Pattern Type 1.1.0.0 to
define the pattern type.
4. From the list on the left, select the tosca.rpmrepository 1.1.0.0 plug-in.
5. From the menu on the right, click Configure. The plug-in configuration
window opens.
6. Choose the type of repository to configure. You can choose the types of
repositories to configure or you can select No Repository to disable the
configuration of a previously created repository. You can configure the
following types of repositories:
v ISO Image on NFS
v FTP
v HTTP
Procedure
1. From the Repository Type list, select the option ISO Image on NFS.
2. Provide the following parameters:
Repository Name
This parameter specifies the name of the repository that is configured in the
operating system of deployed virtual machines, for example RHEL_6.2_Repo.
Procedure
1. From the Repository Type list, select the option FTP.
2. Provide the following parameters:
Repository Name
This parameter specifies the name of the repository that is configured in the
operating system of deployed virtual machines, for example RHEL_6.2_Repo.
Mount point for ISO Image
This parameter does not have any meaning for the FTP repository type.
Host name or IP address
This parameter specifies the fully-qualified host name or IP address of the
FTP server that hosts the repository. This server must be accessible by
deployed virtual machines.
Repository Path
This parameter specifies the full path of the repository on the FTP server.
Procedure
1. From the Repository Type list, select the option HTTP.
2. Provide the following parameters:
Repository Name
This parameter specifies the name of the repository that is configured in the
operating system of deployed virtual machines, for example RHEL_6.2_Repo.
Procedure
1. Perform the steps described in Configuring the RPM repository for the
deployment of TOSCA patterns on page 656.
2. From the Repository Type list, select the option No Repository.
Results
By selecting this option, you disable the configuration of an RPM repository and
you keep the default configuration defined in the base operating system image
used for virtual application deployment.
Example
You can use this option if the deployed virtual machines have access to default
repositories on the internet or if the base operating system image is already
configured to use specific RPM repositories.
Procedure
1. To use Chef with virtual application patterns, you must upload an RPM
containing a Chef client. Retrieve the Chef client RPM from Opscode.
2. Configure the TOSCA Chef Plug-in repository:
a. Go to Patterns > System Plugins and choose the tosca.chef plug-in.
b. Click Configure in the upper right corner. The configuration options for the TOSCA Chef plug-in appear.
c. Choose Chef Deployment Mode: Chef Solo, as TOSCA service templates, imported as virtual application patterns, use Chef Solo deployments.
d. In the Chef client package, click Browse and choose the Chef RPM that
you downloaded from Opscode to your local drive.
e. Click Update to activate your changes.
The TOSCA service template, imported as virtual application pattern, uses this
RPM package for installing Chef Solo on the deployed VM and executing the
Chef artifacts.
Note: The configuration of the VPN between the IBM Cloud Orchestrator on-premises instance and the remote clouds must be done out of band, depending on the hardware devices available on the on-premises site. Both the Business Process Manager node and the Workload Deployer node require access to the remote cloud via VPN.
Configuration of the Public Cloud Gateway is done in the following steps:
v Configure config.json in /opt/ibm/pcg/etc on the node where the Public Cloud Gateway is installed.
v Configure credentials.json in /opt/ibm/pcg/etc on the node where the Public Cloud Gateway is installed.
v Configure access to the setup regions.
See the related topic for more information about how the Public Cloud Gateway
fits into the IBM Cloud Orchestrator product architecture.
Review the list of capabilities and limitations for the Public Cloud Gateway.
v Windows images in Amazon EC2 and SoftLayer are automatically activated and
license keys are provided by the remote cloud.
v For Amazon EC2, the resize of a virtual machine instance is only possible in the
stopped state.
Public Cloud Gateway capabilities within IBM Cloud Orchestrator using
Workload Deployer
v vSysClassic using scp-init images.
v Deploy a virtual system.
v Stop and start a virtual system instance.
v Remove a virtual system instance.
v Script packages support is available.
v Configure administrator password on a virtual machine with images containing
the Activation Engine.
v Single Dynamic Host Configuration Protocol (DHCP) IP Group.
v User add-on support is available.
v Disk add-on is available for virtual system parts.
Public Cloud Gateway limitations within IBM Cloud Orchestrator using
Workload Deployer
v Public Cloud Gateway vSysNext is not supported.
v Network Interface Controller (NIC) add-on is not supported.
v Setting the root password is not supported for scp-init images.
v A Windows license key is required in the Workload Deployer UI, but Windows images get their license key and activation from the remote cloud environment for Amazon EC2 and SoftLayer.
v Due to the built-in size limitations of user data that can be passed into SoftLayer during provisioning, the user data passed during Workload Deployer pattern deployments is reduced to the user name and password needed to set up the machine. This means that additional user data passed in during provisioning by the end user is not available on the machine, and script packages and other components relying on such user data might not work as expected.
v On EC2 and SoftLayer, Windows images are created fully licensed and activated. Therefore, any Owner, Organization, and License data filled in during provisioning of a pattern in Workload Deployer is ignored.
Public Cloud Gateway capabilities within IBM Cloud Orchestrator
v Deploy a single virtual server offering.
v Stop and start a virtual system instance.
v Attach and detach a volume.
v OpenStack command line clients are not supported for Public Cloud Gateway
regions.
v OpenStack API support. The Public Cloud Gateway supports a limited set of
OpenStack APIs.
Volume provider limitations
v For NIOS regions, neither name nor description is supported.
v For Amazon EC2, name and description are supported.
v For SoftLayer, name is supported and description is not supported. If no name is supplied, a name is created in the form: HybridStorage-<volume_size>GB-<date>.
v For SoftLayer, no mount point information is returned, and therefore the hardcoded value /mnt is returned as the mount point.
v For Amazon EC2 and SoftLayer, the IBM Cloud Orchestrator Cinder toolkit attaches the volumes but it does not format and mount the volumes on the servers.
v For SoftLayer, the size of the volume is defined by the size options available for
SoftLayer Portable Storage. The SoftLayer Portable Storage is attached as a local
storage, which means that only a subset of the size options is available. Storage
is only attached in the granularity available for Portable Storage. For example,
even if you request 1 GB, you get 10 GB.
v Filter by status
v Filter by SSH key name
v Delete virtual machine
v Start / stop virtual machine
v Show virtual machine detail
v List images
v Show image detail
v Get Limits
v Show Network providing a single DHCP network
v Query quota for tenant
v Detach Volume
v Create key pair providing the SSH key import within the request
v List key pairs
v Delete key pair
Supported OpenStack Glance API
v List images
v Show image detail
Supported OpenStack Cinder API
v Create volume
v List volumes
Filter by status
v Show volume detail
v Delete volume
v List volume types
Single hardcoded entry
Note: All other documented OpenStack APIs are not supported by the Public Cloud Gateway.
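As an illustration only, a supported call such as listing key pairs could be issued directly against the OpenStack Compute API of a Public Cloud Gateway region; the endpoint host, port, tenant ID, and token below are placeholders, not values documented here:

# Hypothetical example of a supported Nova API call (list key pairs)
curl -s -H "X-Auth-Token: <token>" \
  http://<region_endpoint>:<port>/v2/<tenant_id>/os-keypairs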
Multitenancy support
The Public Cloud Gateway provides capabilities for multitenancy.
These capabilities are in addition to the general multitenancy capabilities in the
core product.
The Public Cloud Gateway contains the following capabilities related to
multitenancy:
v Supports non-default domains and projects.
v Limits the view of resources on project scope.
v Creates resources in scope of a project.
v Supports quotas on a per project and per region base.
Mapping to OpenStack concepts:
The Public Cloud Gateway supports the following OpenStack constructs
and resources in a multitenancy model:
Share a cloud account across projects using dedicated credentials per project:
The Public Cloud Gateway contains a configuration file that maps the
credentials used for a region to a project. This feature allows you to supply
different logon credentials on a per project and per region basis.
Note: The project of the cloud administrator must be mapped by using a credential that can see all the resources of the other projects within the same account. Failure to do this results in deployment errors in Workload Deployer scenarios.
Limitations:
v The multitenancy support does not provide a physical segregation of resources
because they still belong to the same account. It provides different views of the
account on a per project basis.
v The network is shared across an account.
v Storage comes from a shared pool.
v Images are public.
ram
Total RAM size in MB that can be consumed. Must be larger than the
gigabytes quota value.
gigabytes
Largest size of a single volume in gigabytes.
volumes
Number of volumes that can be created.
key_pairs
Number of key pairs that can be created.
The following actions can be done on quotas:
v Set global quota defaults. Maintained within the Public Cloud Gateway
configuration.
v Query and Set per project quota defaults. Maintained within the administration
view in the project details.
v Delete a project quota. Supported only through OpenStack API calls. Not
supported through the IBM Cloud Orchestrator UI.
Assumptions:
v The Administrator of the Public Cloud Gateway region must ensure that the
sum of the per project quota does not exceed the region capability.
v Quotas are enforced on a per project and region base.
Network planning
Public Cloud Gateway scenarios require a set of network configurations to successfully provision resources within the remote cloud. This section provides an overview of the network configuration that is assumed and required.
v Access to the remote cloud REST API entry points.
v Communication from the IBM Cloud Orchestrator management stack to and
from the provisioned virtual machine instances.
The following summarizes the Amazon EC2 network configuration options:

Private IP address
   EC2Classic: Yes. Default-VPC: Yes. Non-Default VPC: Yes.
Public IP Address
   EC2Classic: Yes. Default-VPC: Configurable per region. Non-Default VPC: Configurable per region/project.
Hide Public IP Address
   EC2Classic: No, EC2Classic defines the subnet. VPC: Yes, with granularity on project or region.
Elastic IP Address
   EC2Classic: No. Default-VPC: No. Non-Default VPC: No.
Config
   EC2Classic and Default-VPC: On Amazon account level. Non-Default VPC: "vpc" property in config.json on the region definition, and VPC definition in the Amazon Management portal.
Upgrading
There are some considerations for the Public Cloud Gateway when you upgrade.
The Public Cloud Gateway includes some new capability for multi-tenancy for
Amazon EC2. See the following topics:
v Multitenancy support on page 667
v Configuring the Public Cloud Gateway for Amazon EC2 on page 697 to set up
the credentials.json
The caching system of the Public Cloud Gateway has changed. See Configuring
caching on page 691.
RequestUserUUId
Describes the user to which the virtual machine belongs. The tag must
contain a valid user UUID found in the identity view in the
Administration user interface.
Note: You must set at least the TenantUUID tag so that the virtual machine appears
in OpenStack. The same is valid for existing volumes.
The upgrade is done by adding the above tags to the existing resources in Amazon EC2 in the Amazon Management UI. As a result, the instances and volumes appear in the related projects.
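If you have the AWS command-line interface configured, the same tags can also be applied from a shell instead of the Amazon Management UI; this is an illustrative sketch, and the instance ID and UUID values are placeholders:

# Tag an existing EC2 instance so that it appears in the related project
aws ec2 create-tags --resources i-0123456789abcdef0 \
  --tags Key=TenantUUID,Value=<project_uuid> Key=RequestUserUUId,Value=<user_uuid>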
For key pairs, the upgrade is done differently because key pairs do not support tags. Instead, _<project uuid> is appended as a suffix to the name of the key pair. The upgrade of key pairs is done by renaming the related key pair in the Amazon Management UI.
Note: Existing keypairs cannot be added to IBM Cloud Orchestrator V2.4 so that
they can be used in the Deploying a virtual machine on page 314 offering. They
have to be reimported using the Registering a key pair on page 322 offering.
Prerequisites
Before configuring the Public Cloud Gateway you must ensure that the following
requirements are satisfied.
General requirements
Depending on the public cloud used by the customer, the following requirements apply to the cloud that is integrated.
An Amazon Web Service (AWS) account with Amazon EC2 credentials for
each tenant/project using the Public Cloud Gateway functionality is
required. See the AWS Management Console for more information:
https://console.aws.amazon.com/console/home.
Set up an account in SoftLayer and create one or more users IDs. Each ID
has its own unique password and API access key. The API access key is
required to configure SoftLayer integration in the Public Cloud Gateway.
Network requirements
v Port requirements: The Public Cloud Gateway requires access to a number of ports in the installation environment and in the default Amazon EC2 security group. If these ports are blocked by a firewall or used by another process, some Public Cloud Gateway functions will not work.
Table 83. Ports used by Public Cloud Gateway

Port   TCP or UDP   Direction   Description
22     TCP          Outbound    SSH communication with the Virtual Machine instances.
       ICMP                     ICMP communication with the Virtual Machine instances.
443    TCP          Outbound    HTTPS communication with: Amazon EC2 management endpoints.
Note: Make sure that the Amazon EC2 security groups are configured
according to the table. For more information about Amazon EC2 security
groups, see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/
using-network-security.html
v DNS requirements: Ensure that the DNS is configured correctly in
both Central Server 2 and Central Server 3. Both Central Server 2 and
Central Server 3 instances must be able to resolve the Amazon EC2
management endpoints as defined in the /opt/ibm/pcg/etc/config.json
file.
Access to public cloud resources
To provision virtual machines or use any services in the public cloud, users
are required to have credentials to access public cloud resources. These
credentials are then used in configuration of the Public Cloud Gateway.
Images in public cloud
To deploy patterns in the public cloud, users are required to provide
pattern-specific private images or image templates in the public cloud
image repositories. See Creating a supported image.
IBM SoftLayer
To create a cloud-init-ready image template in SoftLayer, complete the
following steps:
1. From the SoftLayer portal, create an instance from a SoftLayer-provided base OS image, or create an instance from an existing image template that needs a cloud-init script installed.
2. To add cloud-init support to the image, follow the procedure in
Adding cloud-init to Linux images on page 336.
Non-IBM OpenStack
Follow the OpenStack descriptions for OpenStack Linux image
requirements. See http://docs.openstack.org/image-guide/content/
ch_openstack_images.html. Ensure that you have added the cloud-init
support to the image.
2. Create a supported image on Amazon EC2 for use with IBM Cloud
Orchestrator: Enable Amazon EC2 image for scp-init activation on
page 679.
Note:
v For images used in Virtual System Pattern deployment only:
edit the file /etc/rc.d/rc.local, for Red Hat Enterprise Linux or
edit the file /etc/rc.local for Ubuntu Linux
v Add the line
ip addr add $(curl -s http://169.254.169.254/latest/meta-data/public-ipv4) dev eth0
and
./scp-cloud-init install
and
touch /opt/IBM/AE/AR/ovf-env.done
6. When the instance is running, from the SoftLayer portal, access the
computing instance: Device > Device List > Device Name.
7. From the Device List, select the Computing Instance from actions
menu. Select Create Image Template.
8. Follow the prompts to create the image template.
Non-IBM OpenStack
Follow the OpenStack descriptions for OpenStack Linux image
requirements. See http://docs.openstack.org/image-guide/content/
ch_openstack_images.html. Ensure that you have added the scp-init
support to the image.
and
sudo service sshd restart
2. Create a supported image on Amazon EC2 for use with IBM Cloud
Orchestrator.
Note: The following instructions are applicable for Linux images only.
Windows images are not supported.
3. Download the following file to the Linux image: http://
<Workload_Deployer_machine>/downloads/cloud-init/scp-cloud-init where
<Workload_Deployer_machine> is the IP address of the server where the
Workload Deployer component is installed.
4. Run the following commands as the root user:
chmod 755 scp-cloud-init
and
./scp-cloud-init install
and
touch /opt/IBM/AE/AR/ovf-env.done
Delete the content of the /etc/rc.d/rc.local file. rc.local is an init script file provided by Amazon to edit the authorized_keys and sshd_config files. You must delete the content of the rc.local file to keep your preferences.
7. Select the Create Image option from the Amazon Web Service (AWS)
Management Console to create an image from the instance. For more
information about the AWS Management Console, see https://
console.aws.amazon.com/console/home.
3. Update the sshd_config file to enable password authentication and root login.
Note: Future Amazon updates to the images might require changes to the
procedure.
Update the cloud-init configuration file
Make sure that the following lines are in the /etc/cloud/cloud.cfg file:
disable_root: false
ssh_pwauth: true
These properties enable root login and password authentication in cloud-init. They
are required to set the password via user-data.
Update the authorized_keys file
In the authorized_keys file, remove the command prefix and leave only the
ssh-rsa statement. For example, change the following default content:
no-port-forwarding,no-agent-forwarding,no-X11-forwarding,command="echo Please login
as the user \"ec2-user\" rather than the user \"root\".;echo;sleep 10"
ssh-rsa <content of sshkey>
Procedure
1. IBM SoftLayer
To create a cloud-init-ready image template in SoftLayer, complete the
following steps:
a. Deploy single virtual server from a public image:
1) Select a public image, for example Windows Server 2012 Standard
Edition (64 bit).
2) Flavor: Small.
3) No key.
4) No user/password.
5) No volume attach.
b. Log on to the virtual machine:
1) The virtual machine password will be generated at first. So once the
virtual machine is reported as ACTIVE, open the SoftLayer portal and
navigate to your devices list.
2) Expand the virtual machine you just provisioned and click the show
password box to reveal the administrator password.
3) Use this password to log on to the virtual machine using RDP with the
IP address provided.
4) Depending on the load on the SoftLayer datacenter you are using, it
may take up to 20 minutes before the login information becomes
available.
c. Install cloudbase-init on the virtual machine:
1) Download the installer from https://www.cloudbase.it/downloads/
CloudbaseInitSetup_Beta_x64.msi.
2) Run the installer.
3) Enter the correct administrator user name for your version of Windows.
For example, Administrator for the English version.
4) Make sure you check the use metadata password option.
5) Click next and wait until the installation completes. Do NOT select to
run sysprep or to shut down the virtual machine.
6) Click finish to close the installer.
d. Copy the SoftLayer metadata service to the virtual machine:
1) The built-in version of cloudbase-init does not support loading
metadata from SoftLayer. Therefore the cloudbase-init installation on
your virtual machine must be extended with a small file that
implements a SoftLayer metadata service.
Download from http://<pcg host>:9797/downloads/scripts/softlayer/
windows/cloudbase-init/.
2) Copy slservice.py to the services folder of your cloudbase-init
installation. The default is C:\Program Files (x86)\Cloudbase
Solutions\Cloudbase-Init\Python27\Lib\site-packages\
cloudbaseinit\metadata\services.
3) Adjust the configuration settings of cloudbase-init:
a) Open the file cloudbase-init.conf in an editor. The default is
C:\Program Files (x86)\Cloudbase Solutions\Cloudbase-Init\
conf\cloudbase-init.conf.
b) Make sure it contains these lines:
metadata_services=cloudbaseinit.metadata.services.slservice.SLService
plugins=cloudbaseinit.plugins.windows.setuserpassword.SetUserPasswordPlugin
Procedure
1. Deploy a single virtual server from a public image:
Select a public image, for example Windows Server 2012 Standard Edition (64
bit), Flavor: Small, No key, No user/pwd, No volume attach.
2. Log on to the virtual machine:
The virtual machine password will be generated at first. So once the virtual
machine is reported as ACTIVE, open the SoftLayer portal and navigate to your
devices list. Expand the virtual machine you just provisioned and click show
password to reveal the Administrator password. Use this password to log on to
the virtual machine using RDP with the IP address provided. Depending on the
load on the SoftLayer datacenter you are using, it may take up to 20 minutes
before the login information becomes available.
3. Adjust Firewall and User Account Control:
To make the image accessible to Workload Deployer follow the section for the
Windows operating version chosen in step 1 described here:
http://www-01.ibm.com/support/knowledgecenter/api/content/
SS4KMC_2.3.0/com.ibm.sco.doc_2.3/scenarios/
c_prereq_kvm_vmware_images_win.html.
4. Install the prerequisite software, as described here: Software prerequisites for
Microsoft Windows images on page 338.
5. Install the scp-cloud init scripts:
Download the following files:
http://<Workload_Deployer_machine>/downloads/cloud-init/scp-cloud-init.cmd
http://<Workload_Deployer_machine>/downloads/cloud-init/scp-cloud-init.vbs
Copy the files to the directory c:\windows\setup\ibm. Add one script to run
when starting the virtual machine by using the group policy editor:
v Run gpedit.msc.
v In the Local Group Policy Editor window, select Computer Configuration >
Windows Settings > Scripts (Startup/Shutdown).
v In the right pane, double-click Startup.
v In the Startup Properties window, click Add.
v In the Add a Script window, enter c:\windows\setup\ibm\
setpasswd_sl_scp.py in the Script Name field, and click OK.
v Click OK to save your changes and close the Startup Properties window.
v Close the Local Group Policy Editor window.
7. Create a private image from the virtual machine:
Leave the virtual machine as it is (do not shut it down or run sysprep). Go back to the SoftLayer portal and click the virtual machine to open its details.
In the action menu select the create image template action and provide an
image name. You can now use this private image for provisioning with IBM
Cloud Orchestrator.
8. (Optional) Delete the original virtual machine:
Once the new image has been created you can safely delete the original virtual
machine using the IBM Cloud Orchestrator UI. Be aware that creating the
private image template may take up to 20 minutes depending on the size of the
virtual machine. Until the image creation transaction has been completed, the
original virtual machine cannot be deleted.
Note: Keep in mind that the password that you enter during provisioning will
be visible to anyone who can log on to the provisioned virtual machine. So it is
best advised to change the password as soon as possible after the first logon.
Note: Setting the password during provisioning will work only if the chosen
password complies with the password policy of the Windows operating system
on the image. If the password you chose at provisioning time does not comply
with the password policy, the password will not be set. You will then be able to
access the virtual machine using the password originally generated by
SoftLayer which you can reveal using the SoftLayer portal.
Procedure
1. Deploy single virtual server from a public image.
IBM Cloud Orchestrator will not display public images so you will have to
deploy from the EC2 portal. Log into the AWS portal, open the EC2 application
and go to AMIs.
Select the filter Public Images. Choose an available Windows AMI, for example
Windows_Server-2012-R2_RTM-English-64Bit-Base. Click Launch. Select the
instance flavor. An EBS-only flavor is recommended to save costs, for example
t2.micro. Go to the next step to configure the instance. Depending on your
requirements you may also want to enable the public IP assignment. Click Next
until you reach the security group configuration. There you must make sure
that you select a security group that allows RDP access to the virtual machine.
Click Review and Launch and then Launch. The initial administrator password
will be generated and encrypted using a Keypair. Be sure you select a Keypair
that you have access to. For this you need the private key pem file from the
Keypair creation.
2. Log on to the virtual machine.
When the instance appears as running with all status checks done in the EC2
portal, select the instance and click Connect. Click Get Password and select the
pem private key file of the Keypair that you selected at provisioning time. Click
download remote desktop file and open it with RDP. Use the displayed
password to connect.
3. Adjust Firewall and User Account Control.
To make the image accessible to Workload Deployer follow the section for
Windows operating version chosen in Step 1, described here:
http://www-01.ibm.com/support/knowledgecenter/api/content/
SS4KMC_2.3.0/com.ibm.sco.doc_2.3/scenarios/
c_prereq_kvm_vmware_images_win.html.
Copy the files to the directory c:\windows\setup\ibm. Add one script to run
when starting the virtual machine by using the group policy editor: run
gpedit.msc. In the Local Group Policy Editor window, select Computer
Configuration > Windows Settings > Scripts (Startup/Shutdown). In the right
pane, double-click Startup. In the Startup Properties window, click Add. In the
Add a Script window, enter c:\windows\setup\ibm\setpasswd_ec2_scp.py in
the Script Name field, and click OK. Click OK to save your changes and close
the Startup Properties window. Close the Local Group Policy Editor window.
7. Run Ec2ConfigServiceSettings. Default: C:\Program Files\Amazon\
Ec2ConfigService\Ec2ConfigServiceSettings.exe. In the Image tab make sure
the keep existing option is set for the administrator password. We want the
password to be set by cloudbase-init, not by Ec2ConfigService.
8. Create a private image from the virtual machine. Leave the virtual machine as it is (do not shut it down or run sysprep). Go back to the EC2 portal and click
on the virtual machine to open its details. In the action menu select the create
image action and provide an image name. You can now use this private image
for provisioning with IBM Cloud Orchestrator.
9. (Optional) Delete the original virtual machine. Once the new image has been
created you can safely delete the original virtual machine using the IBM Cloud
Orchestrator UI. Be aware that creating the private image template may take up
to 20 minutes depending on the size of the virtual machine. Until the image
creation transaction has been completed, the original virtual machine cannot be
deleted.
Note: Keep in mind that the password that you enter during provisioning will
be visible to anyone who can log on to the provisioned virtual machine. So it is
best advised to change the password as soon as possible after the first login.
Note: Setting the password during provisioning works only if the chosen
password complies with the password policy of the Windows operating system
on the image. If the password that you choose at provisioning time does not
comply with the password policy, the password is not set. Because EC2 cannot
decrypt the password again, as it can for the public image, change the password
to a known value, or add a second user with known credentials, before you
create the private image.
Note:
v SoftLayer does not support setting a tag value during image template
creation. You must first create the template and then edit the template by
using its templateID.
v In SoftLayer, tags on images only work within the account where the
image resides. If an image is shared across accounts, the image tag is not
visible in the other accounts.
Configuring flavors
OpenStack API requires flavors for provisioning virtual machines. IBM Cloud
Orchestrator must be able to return a flavor list for each region.
The Public Cloud Gateway stores the current list of known flavors in the
flavors.json file on Central Server 2 in the /opt/ibm/pcg/etc directory.
The following capabilities are supported:
v A global flavor list provided by the default section, if no remote cloud or region
specific information is provided.
v A remote cloud type specific default flavor provided through the following
sections:
ec2_default
nios_default
softlayer_default
All the flavor definitions within the flavors.json file must be valid for the related
remote cloud regions. The definitions that are supplied in these sections are
snippets that provide configuration examples.
Note:
v Changes to the flavors.json file are only active after restarting the Public Cloud
Gateway.
v Failure to meet these assumptions results in provisioning errors.
v A flavor must not be removed if virtual machines with this flavor exist.
v You can add a new section for a specific Amazon AWS EC2 region, using the
same region name that is defined in the config.json file.
Note: With the Amazon AWS EC2 API, you cannot query or manage the
supported flavors. Any changes to the list of flavors for Amazon AWS EC2 must
match the published list on their website or the list of flavors shown in the
Amazon AWS EC2 management UI. Otherwise, the virtual machine has
provisioning errors.
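For example, a minimal sketch of an ec2_default section and a region-specific
section (the region name EC2region must match a region that is defined in the
config.json file; the flavor IDs and sizes shown here are illustrative only and
must match the flavor specifications that Amazon publishes):
"ec2_default": {
  "t2.micro": {"name":"Micro", "cpu":1, "ram":1024, "disk":10},
  "m3.medium": {"name":"Medium", "cpu":1, "ram":3840, "disk":30}
},
"EC2region": {
  "t2.micro": {"name":"Micro", "cpu":1, "ram":1024, "disk":10}
}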
"nioRegion1": {
"m1.small": {"name":"Small", "cpu":1, "ram":2048, "disk":20},
"m1.medium": {"name":"Medium", "cpu":2, "ram":4096, "disk":40},
"m1.large": {"name":"Large", "cpu":4, "ram":8192, "disk":80},
"m1.xlarge": {"name":"Extra Large", "cpu":8, "ram":16384, "disk":160},
}
}
Using this example, when accessing nioRegion1, IBM Cloud Orchestrator offers
a list of four flavors, but any other non-IBM supplied OpenStack region offers
the six flavors that are defined for the default region.
Note:
v The ID in the flavors.json file, m1.small, must match the flavor name on
non-IBM supplied OpenStack and not the flavor ID.
v IBM Cloud Orchestrator requires at least 512 MB of memory to be defined in
flavors.
IBM SoftLayer
SoftLayer does not support flavors during deployment. The flavors.json file defines
the set of flavors that you can use during deployment by using IBM Cloud
Orchestrator. SoftLayer supports only a certain list of possible values for CPU,
RAM, and disk, and these values can change over time. The possible values are
visible if you try to create a cloud compute instance through the SoftLayer-provided
management UI. Only these values can be used for flavor definitions. If other
values are used, you might receive deployment errors.
The following rules apply for IBM SoftLayer:
v You can modify the global list supplied in softlayer_default section. This
affects all IBM SoftLayer regions except the ones that have a separate named
section.
v You can add a new section for a specific IBM SoftLayer region, using the same
region name that is defined in the config.json file.
Note:
v IBM Cloud Orchestrator requires at least 512 MB of memory to be defined in
flavors.
v SoftLayer provides a predefined set of values for CPU, RAM, and disk. Check
the possible values in the SoftLayer documentation or in the SoftLayer
management portal.
Configuring quotas
You can configure quotas.
The default quota settings are stored in the config.json file located in the
/opt/ibm/pcg/etc directory on Central Server 2, where the Public Cloud
Gateway component is installed.
The file is in JSON format. This is the quota section of the file:
{
"defaultQuota":{
"instances":"100",
"cores":"100",
"ram":"262144",
"gigabytes":"512",
"volumes":"2048",
"key_pairs":"100"
}
}
To modify the config.json file:
1. Connect to Central Server 2 via SSH and, as root, open the config.json file in a
text editor and change the values. Default location: /opt/ibm/pcg/etc/config.json.
2. Restart the Public Cloud Gateway by submitting the following command as
root on the Central Server 2 command line: service pcg restart.
Configuring caching
Account caching management with external clouds is required as the remote
clouds have denial of service and API rate limits management in place.
Note: The previous capability of cacheCounters in the config.json file located in
the /opt/ibm/pcg/etc directory is deprecated and no longer supported. See
Upgrading from SmartCloud Orchestrator V2.3 or V2.3.0.1 on page 147.
Resource caching and quota caching are configured in different sections of the config.json file.
The cache values describe when the public cloud internal caches are refreshed if no
modifying resource requests are performed. Modifying resource requests are
create, modify, or delete style requests.
A modifying request invalidates the caches for the triggering URL (region /
project).
Adapt the values so that as few API calls as possible are made against the
remote clouds without impacting the responsiveness of the Public Cloud
Gateway.
Caching of resources:
{
"cacheTimeout":{
"serversTimeout":"180",
"glanceImagesTimeout":"180",
"availabilityZoneTimeout":"180",
"volumesTimeout": "180",
"keypairTimeout": "180"
}
}
All values are in seconds.
Caching of quota:
Each entry defines a refresh interval for the quota calculation:
serverQuotaTimeout
Defines the refresh interval in seconds for virtual machine instance related
quota elements.
volumeQuotaTimeout
Defines the refresh interval in seconds for volume related quota elements.
keypairQuotaTimeout
Defines the refresh interval in seconds for key pair related quota elements.
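A minimal sketch of the quota cache section, assuming that it follows the same
pattern as the cacheTimeout section and that the enclosing key is named
quotaTimeout (verify the actual key name in your installed
/opt/ibm/pcg/etc/config.json file):
{
  "quotaTimeout":{
    "serverQuotaTimeout":"180",
    "volumeQuotaTimeout":"180",
    "keypairQuotaTimeout":"180"
  }
}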
"ec2":[
{
"name":"EC2region",
"url":"https://ec2.us-east-1.amazonaws.com/",
"enabled":true
}
Configure IBM Cloud Orchestrator for the new region and remove the entry for
the old region as documented below.
Restart the Public Cloud Gateway by using the service pcg restart command.
For more information about starting the Public Cloud Gateway, see
Command-line interface scripts on page 715.
Delete the services of the old region from keystone:
[root@central-server-2 ~]# source ~/keystonerc
[root@central-server-2 ~]# keystone endpoint-list
+----------------------------------+-----------+----------------------------------------------------------+----------------------------------+
| id                               | region    | publicurl                                                | service_id                       |
+----------------------------------+-----------+----------------------------------------------------------+----------------------------------+
| 0ff9e584b3d04c56af32e7b43ad5324d | EC2-001   | http://central-server-2:5000/v3                          | 40a0d00ad6d34cfc8a5c412c61cb3e33 |
| 11924c78eca949ae939f2309a4e21bf9 | EC2-001   | http://central-server-2:9797/EC2-001/v1/%(tenant_id)s    | 372311fb67564e41987038d587c6a539 |
| 187dbc8c68b74d5f8e098d4c61544d0b | RegionOne | http://central-server-2:8776/v1/%(tenant_id)s            | 372311fb67564e41987038d587c6a539 |
| 19ded8422da445a7b6ceb0ce6d3c5f5e | RegionOne | http://central-server-2:8774/v2/%(tenant_id)s            | b6155b46d8d1463185fdfbddb05f18b5 |
| 311e7c36c6b84ee3a5e2f39a04001481 | RegionOne | https://192.0.2.129:9443/ImageLibrary/ImageService/v1    | ac8c99aa1fb445898f92fd0aeb43c8e1 |
| 3d8ff61f86f64838be5000b7efd60b89 | RegionOne | http://central-server-2:9292/                            | 1f3d30e2fcf04bdd908969a987722acc |
| 7f5a578d80684fcaaa473c9012ba7f46 | EC2-001   | http://central-server-2:9797/EC2-001/v2.0/%(tenant_id)s  | b6155b46d8d1463185fdfbddb05f18b5 |
| bf8de63072ae453fb5ddf8b3027945cf | RegionOne | http://central-server-2:5000/v3                          | 40a0d00ad6d34cfc8a5c412c61cb3e33 |
| e0a8c3d7c821424e94a0ef17c8c1a383 | EC2-001   | http://central-server-2:9797/EC2-001/v2.0                | 1f3d30e2fcf04bdd908969a987722acc |
+----------------------------------+-----------+----------------------------------+-----------------------+----------------------------------+
The output is abbreviated here for readability. The internalurl values match the publicurl values, and the
adminurl values also match, except for the identity service endpoints, whose adminurl is
http://central-server-2:35357/v3.
Delete all the endpoints related to the old region EC2-001 by using the following
example command:
[root@central-server-2 ~]# keystone endpoint-delete 0ff9e584b3d04c56af32e7b43ad5324d
It is possible to define:
v A default proxy server.
v A proxy server for Amazon EC2.
v A proxy server for SoftLayer.
v A proxy server for non-IBM supplied OpenStack.
There is a new main section in the config.json file in the etc directory of the
Public Cloud Gateway. The following sample content describes the structure and
properties of the configuration:
"proxy":{
"default":{
"host":"proxy.local",
"port":"3128",
"userid":"xxxxx",
"password":"yyyyy"
},
"nios":{
"host":"localhost",
"port":"3128",
"userid":"xxxxx",
"password":"yyyyy"
},
"ec2": {
"host":"localhost",
"port":"3128"
},
"softlayer":{
"host":"127.0.0.1",
"port":"9090",
"userid":"xxxxx",
"password":"yyyyy"
}
},
The default entry within the proxy definition defines the default proxy. This
definition is used if there is no proxy definition for the specific remote cloud
type:
v The nios entry defines the specific proxy for all regions of type non-IBM
supplied OpenStack (nios).
v The ec2 entry defines the specific proxy for all regions of type Amazon AWS
EC2 (ec2).
v The softlayer entry defines the specific proxy for all regions of type SoftLayer
(softlayer).
Table 84. Parameters that are used in the proxy definition in the config.json file
host
The host name or IP address of the proxy server.
port
The port on which the proxy server listens.
userid
The user ID that is used to authenticate with the proxy server, if the proxy requires authentication.
password
The password that is used to authenticate with the proxy server, if the proxy requires authentication.
The cloud region configuration is described in the vCenters section. Each region is
specified using three key-value pairs: name, url, and enabled.
The parameters in the config.json file are explained in the following table. Update
the enabled parameter to true if you want to specify that a particular region must
be created in Keystone.
Table 85. Parameters that are used in the config.json file
name
The name of the region as it is created in Keystone.
url
The URL of the Amazon EC2 API endpoint for the region, for example https://ec2.us-east-1.amazonaws.com/.
enabled
Specifies whether the region is created in Keystone. Set this parameter to true to create the region.
Note: You must add a mapping for the project of the cloud administrator to the
credentials.json file. The default is admin. If this entry is missing, you cannot add
the availability zone to the domain through the IBM Cloud Orchestrator
Administration UI.
{
"tenantName":"admin",
"access_key_ID":"xxx",
"secret_access_key":"xxx"
},
where xxx is a valid set of credentials to access your Amazon EC2 account.
Additional properties for Amazon EC2 on the region level. Example for the
SAOPAULO region:
{
  "name":"EC2-SA-SAOPAULO",
  "url":"https://ec2.sa-east-1.amazonaws.com",
  "enabled":true,
  "ImageType":"cloud-init"
}
where enabled can be true or false, and ImageType can be cloud-init or scp-init.
Table 86. Additional region-level parameters for Amazon EC2
ImageType
The image activation type that is used for the region: cloud-init or scp-init.
If your account supports the capability to place virtual machines into distinct
subnets of a non-default VPC, there are two properties to control that placement:
Table 87. Region-level parameters for non-default VPC placement
vpc
Identifies the non-default VPC (and subnet) into which virtual machines are placed.
privateNetworkOnly
Specifies whether provisioned virtual machines are attached to the private network only.
This capability is available only when the sole supported platform for your account is
VPC. It is not enabled when other supported platforms are listed, for example,
EC2. You can check the supported platform of your account on the EC2 dashboard
in the Account Attributes section.
Be aware that in addition to the configuration in this file, more configuration tasks
are necessary in your Amazon VPC account to leverage the non-default VPC
support. See Configuring subnets and security groups in a non-default VPC
region on page 702.
Configure cloud credentials in the /opt/ibm/pcg/etc/credentials.json file: This
file is used to specify Amazon EC2 credentials for each project. For more
information about defining projects, see Managing projects on page 265. The
Amazon EC2 credentials are mapped to specific projects in IBM Cloud
Orchestrator. These mappings are specified in the credentials.json configuration
file.
Go to the /opt/ibm/pcg/etc directory and open the credentials.json file:
{
"cred":{
"ec2":[
{
"tenantName":"demo",
"access_key_ID":"xxx",
"secret_access_key":"xxx"
},
{
"tenantName":"admin",
"access_key_ID":"xxx",
"secret_access_key":"xxx"
},
{
"tenantID":"xxxxxx",
"access_key_ID":"xxx",
"secret_access_key":"xxx"
},
{
"tenantName":"*",
"access_key_ID":"xxx",
"secret_access_key":"xxx"
}
]
}
}
The parameters in the credentials.json file are explained in the following table.
Update these parameters if you want to specify credentials to project mappings
and define which credentials must be used for the different projects specified.
Table 88. Parameters that are used in the credentials.json file
tenantName
The name of the IBM Cloud Orchestrator project to which the credentials apply.
access_key_ID
The access key ID of the Amazon EC2 account to use for the project.
secret_access_key
The secret access key of the Amazon EC2 account to use for the project.
region
Optional: restricts this credential mapping to the specified region.
Note: You must add a mapping for the project of the cloud administrator to the
credentials.json file. The default is admin. If this entry is missing, you cannot add
the availability zone to the domain using the Administration user interface:
{
"tenantName":"admin",
"region": "yyy",
"access_key_ID":"xxx",
"secret_access_key":"xxx"
},
where xxx is a valid set of credentials to access your Amazon AWS EC2 account.
Procedure to activate configuration changes:
1. Restart the Public Cloud Gateway by using the service pcg restart command.
For more information, see Command-line interface scripts on page 715.
Procedure
1. Check that prerequisites are met. See Prerequisites on page 675.
2. You can integrate SoftLayer by using the Public Cloud Gateway. See Integrating
SoftLayer using the Public Cloud Gateway on page 703.
3. You can configure the Public Cloud Gateway for SoftLayer. See Configuring
the Public Cloud Gateway for SoftLayer.
4. Create a supported image. See Creating a supported image on page 676.
5. Configure quotas. See Configuring quotas on page 690.
What to do next
For information about post-configuration steps, see Performing post-configuration
tasks on page 714.
Procedure
1. Set up an account in SoftLayer and create one or more user IDs. Each ID has
its own unique password and API access key. The API access key is required to
configure the SoftLayer integration in the Public Cloud Gateway.
2. Create IBM Cloud Orchestrator-ready images.
3. Set up the following configuration files as described in Configuring the Public
Cloud Gateway on page 666. In particular, configure admin.json,
config.json, credentials.json, and flavors.json.
4. Start or restart the Public Cloud Gateway.
"enabled":true
},
{
"name":"SL-Dallas06",
"dataCenter" : "Dallas 6",
"url":"https://api.softlayer.com/",
"enabled":false
},
{
"name":"SL-SanJose",
"dataCenter" : "San Jose 1",
"url":"https://api.softlayer.com/",
"enabled":false
},
{
"name":"SL-Amsterdam",
"dataCenter" : "Amsterdam 1",
"url":"https://api.softlayer.com/",
"enabled":false
},
{
"name":"SL-Seattle",
"dataCenter" : "Seattle",
"url":"https://api.softlayer.com/",
"enabled":false
},
{
"name":"SL-WashingtonDC",
"dataCenter" : "Washington 1",
"url":"https://api.softlayer.com/",
"enabled":false
},
{
"name":"SL-Singapore",
"dataCenter" : "Singapore 1",
"url":"https://api.softlayer.com/",
"enabled":false
},
{
"name":"SL-Dallas01",
"dataCenter" : "Dallas 1",
"url":"https://api.softlayer.com/",
"enabled":true
},
{
"name":"SL-HongKong",
"dataCenter":"Hong Kong 2",
"url":"https://api.softlayer.com/",
"enabled":false
},
{
"name":"SL-Houston",
"dataCenter":"Houston 2",
"url":"https://api.softlayer.com/",
"enabled":false
},
{
"name":"SL-Toronto",
"dataCenter":"Toronto 1",
"url":"https://api.softlayer.com/",
"enabled":false
},
{
"name":"SL-London",
"dataCenter":"London 2",
"url":"https://api.softlayer.com/",
"enabled":false
704
},
{
"name":"SL-Melbourne",
"dataCenter":"Melbourne 1",
"url":"https://api.softlayer.com/",
"enabled":false
}
]
}
}
The cloud region configuration is described in the vCenters section. Each region is
specified by using the key-value pairs name, dataCenter, url, and enabled. The
parameters in the config.json file are explained in the following table. Update the
enabled parameter to true if you want to specify that a particular region must be
created in Keystone.
Table 89. Parameters that are used in the config.json file
name
The name of the region as it is created in Keystone.
dataCenter
The SoftLayer data center that backs this region, for example Dallas 1.
url
The URL of the SoftLayer API endpoint, for example https://api.softlayer.com/.
enabled
Specifies whether the region is created in Keystone. Set this parameter to true to create the region.
The following additional region-level parameters are supported for SoftLayer:
ImageType
The image activation type that is used for the region: cloud-init or scp-init.
privateNetworkOnly
Specifies whether provisioned virtual machines are attached to the SoftLayer private network only.
primaryVlanID
The ID of the VLAN to use for the primary network interface.
backendVlanID
The ID of the VLAN to use for the back-end (private) network interface.
Note: To obtain the correct VLAN ID, perform the following steps:
1. Log on to the SoftLayer portal at https://control.softlayer.com/.
2. Go to the VLANs page at https://control.softlayer.com/network/vlans.
3. Choose the VLAN that you want to use for provisioning and select it to open
the VLAN details.
4. Copy the VLAN ID from the browser URL. For example, if the URL is
https://control.softlayer.com/network/vlans/600516 then 600516 is the
correct ID. Do not confuse the VLAN ID with the VLAN number displayed on
the web page.
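As an illustrative sketch, a SoftLayer region entry that uses these properties might
look like the following (the placement of the properties on the region entry is
assumed from the parameter list above, and the VLAN ID values are examples
only):
{
  "name":"SL-Dallas01",
  "dataCenter":"Dallas 1",
  "url":"https://api.softlayer.com/",
  "enabled":true,
  "ImageType":"cloud-init",
  "privateNetworkOnly":false,
  "primaryVlanID":"600516",
  "backendVlanID":"600517"
},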
Configure cloud credentials in the /opt/ibm/pcg/etc/credentials.json file:
This file is used to specify SoftLayer credentials for each project. For more
information about defining projects, see Managing projects on page 275. The
SoftLayer credentials are mapped to specific projects in IBM Cloud Orchestrator.
These mappings are specified in the credentials.json configuration file.
Go to the /opt/ibm/pcg/etc directory and open the credentials.json file:
{
"cred":{
"softlayer":[
{
"tenantName":"admin",
"user_id":"xxx",
"api_access_key":"xxx"
},
{
"tenantName":"demo",
"user_id":"xxx",
"api_access_key":"xxx"
},
{
"tenantName":"tenant1",
"user_id":"xxx",
"api_access_key":"xxx"
},
{
"tenantName":"tenant2",
"user_id":"xxx",
"api_access_key":"xxx"
}
]
}
}
Table 91. Parameters that are used in the credentials.json file
tenantName
The name of the IBM Cloud Orchestrator project to which the credentials apply.
user_id
The SoftLayer user ID.
api_access_key
The SoftLayer API access key for the user ID.
region
Optional: restricts this credential mapping to the specified region.
Note: You must add a mapping for the project of the cloud administrator to the
credentials.json file. The default is admin. If this entry is missing, you cannot add
the availability zone to the domain using the Administration user interface.
{
"tenantName":"admin",
"region": "yyy",
"user_id":"xxx",
"api_access_key":"xxx"
},
Procedure
1. Restart the Public Cloud Gateway by using the service pcg restart command.
For more information about starting the Public Cloud Gateway, see
Command-line interface scripts on page 715.
2. Execute the /opt/ibm/pcg/refreshEndpoint.sh script to clean up caches
related to region or endpoint information.
3. Check the Public Cloud Gateway log in /var/log/pcg/pcg.log for problems.
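For example, to follow the log while the Public Cloud Gateway restarts:
tail -f /var/log/pcg/pcg.log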
Note: Not all EC2 operations are supported by the OpenStack EC2
implementation. For example, the Resize Instance functionality is not supported
in Public Cloud Gateway by the OpenStack EC2 API.
Additional information about the capabilities of the OpenStack EC2 implementation
is available under API Feature Comparison at https://wiki.openstack.org/wiki/
Main_Page.
The Public Cloud Gateway is not preconfigured for use with Amazon Elastic
Compute Cloud (Amazon EC2) as part of IBM Cloud Orchestrator. You must
complete certain configuration tasks before using the Public Cloud Gateway.
1. Familiarize yourself with the Public Cloud Gateway. See Public Cloud
Gateway overview on page 661.
2. Check that prerequisites are met. See Prerequisites on page 675.
3. Configure the Public Cloud Gateway to manage non-IBM supplied OpenStack.
For more information, see Configure the Public Cloud Gateway regions for
non-IBM supplied OpenStack and Configure non-IBM supplied OpenStack
EC2 credentials on page 711. You must already have one or more OpenStack
regions that are configured and functioning. For information about how to
install and configure a basic OpenStack instance, see http://docs.openstack.org.
4. Create a supported image. See Creating a supported image on page 676.
5. Configure quotas. See Configuring quotas on page 690.
For information about post-configuration steps, see Performing post-configuration
tasks on page 714.
The following configuration topics assume that you already have one or more
OpenStack regions that are configured and functioning. Instructions on how to
install and configure a basic OpenStack instance can be found at
http://docs.openstack.org.
Procedure
1. Run the following command:
keystone service-list
Look for the entry for Amazon EC2 and take note of the ID.
2. Run the following command:
keystone endpoint-list
Find the entry where the service_id matches the Amazon EC2 ID from the
previous step, and also take note of the publicurl. It is similar to:
http://<address>:8773/services/Cloud
Note: If you want the Public Cloud Gateway to manage more than one
non-IBM supplied OpenStack region, repeat these steps on the Keystone server
for each non-IBM supplied OpenStack region. This is required to obtain the
Amazon EC2 API interface address for each region.
3. The Public Cloud Gateway reads the connection details at startup from the
/opt/ibm/pcg/etc/config.json file on the Central Server 2 node. By default,
this file only contains the details of the Amazon EC2 regions. This file must be
updated to add the non-IBM supplied OpenStack regions to a data block
tagged nios inside the vcenters scope similar to this partial example:
"vcenters":{
"nios":[
{
"name":"nioRegion1",
"url":"http://192.0.2.12:8773/services/Cloud/",
"enabled":true
},
{
"name":"nioRegion2",
"url":"http://192.0.2.13:8773/services/Cloud/",
"enabled":true
}
]
},
Note: The url that is obtained from Keystone must have a trailing / appended to it, as shown in the example.
Note: The region name specified in the nios tag section must be a unique
name for that particular region.
Activating configuration changes
4. To activate configuration changes, complete the following steps:
a. Restart the Public Cloud Gateway by using the service pcg restart
command.
b. Execute the script refreshEndpoint.sh in the /opt/ibm/pcg directory to
clean up caches related to region or endpoint information.
c. Check for problems in the Public Cloud Gateway log: /var/log/pcg/
pcg.log.
For more information, see Command-line interface scripts on page 715.
What to do next
If the connection details contain a host name rather than an IP address, make sure
that the Central Server 2 node can resolve the host names and add entries to the
/etc/hosts file if required. Alternatively, use the IP address rather than the host
name in the config.json file.
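For example, assuming that the config.json file references a host name such as
nio-region1.example.com (a hypothetical name) that resolves to 192.0.2.12, the
/etc/hosts entry on the Central Server 2 node would be:
192.0.2.12   nio-region1.example.com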
Note: The user and tenant that are chosen must have sufficient access to
perform the necessary actions on Nova and Glance. Use the admin user and
admin tenant unless you have suitable alternatives that are configured on the
non-IBM supplied OpenStack Keystone.
2. Create the Amazon EC2 credentials by using the command:
keystone ec2-credentials-create --tenant-id <tenant ID> --user-id <user ID>
If successful, the command returns two new 32-character keys that are called
Access and Secret.
Note: Amazon EC2 credentials that are created on non-IBM supplied
OpenStack Keystone apply to both tenant and user when using these
credentials. For example, when deploying a virtual machine, the virtual
machine is created using the tenant and the user specified in the previous
command.
3. The following example shows the previous steps, where Amazon EC2
credentials are created on the non-IBM supplied OpenStack using the admin
tenant and the admin user:
root@nio3:/etc/glance# keystone user-list
+----------------------------------+---------+---------+--------------------+
| id                               | name    | enabled | email              |
+----------------------------------+---------+---------+--------------------+
| 475e0cb45d1049cbb5bddf1eb508b391 | admin   | True    | [email protected]  |
| 901653358e49460297fbba3dfb0848cf | cinder  | True    | [email protected]  |
| 79fabd02dc1f43aabc03f77d97a840ee | demo    | True    | [email protected]  |
| 00cb0e8b6d734e7499fca57d077b0fc1 | glance  | True    | [email protected]  |
| 10f54c9b123147c488ae3c942143acc8 | nova    | True    | [email protected]  |
| 940f130d79cc46b9ae38ae6bda929767 | quantum | True    | [email protected]  |
+----------------------------------+---------+---------+--------------------+
root@nio3:/etc/glance# keystone tenant-list
+----------------------------------+---------+---------+
| id                               | name    | enabled |
+----------------------------------+---------+---------+
| 40a39220bf5747edaac54216b5e8eb60 | admin   | True    |
| 7cdafa91633a43d19e773bdbe0b28b76 | demo    | True    |
| 0420552b5721451a9d42b5e96ba79444 | service | True    |
+----------------------------------+---------+---------+
root@nio3:/etc/glance# keystone ec2-credentials-create
  --tenant-id 40a39220bf5747edaac54216b5e8eb60
  --user-id 475e0cb45d1049cbb5bddf1eb508b391
+-----------+----------------------------------+
| Property  | Value                            |
+-----------+----------------------------------+
| access    | 7e3d858e92324564a31e5d9b50fa62f0 |
| secret    | 98d7e15c0ae649b6a90bcbd8f9dbb725 |
| tenant_id | 40a39220bf5747edaac54216b5e8eb60 |
| user_id   | 475e0cb45d1049cbb5bddf1eb508b391 |
+-----------+----------------------------------+
root@nio3:/etc/glance#
Table 92. Parameters that are used in the credentials.json file
tenantName
The name of the IBM Cloud Orchestrator project to which the credentials apply.
access_key_ID
The EC2 access key that was generated on the non-IBM supplied OpenStack Keystone.
secret_access_key
The EC2 secret key that was generated on the non-IBM supplied OpenStack Keystone.
region
Optional: restricts this credential mapping to the specified region.
Note: You must add a mapping for the project of the cloud administrator to the
credentials.json. The default is "admin". If this entry is missing, you cannot add
the availability zone to the domain via the IBM Cloud Orchestrator Administration
UI.
{
"tenantName":"admin",
"region":"yyy",
"access_key_ID":"xxx",
"secret_access_key":"xxx"
},
where xxx is a valid set of credentials to access your Amazon EC2 account.
Procedure
1. For deployment using a single virtual machine, complete the following steps:
a. Add the newly-defined Public Cloud Gateway managed region or
availability zone to:
Domain
See Assigning a zone to a domain on page 262.
Project
See Assigning a zone to a project on page 267.
b. Register a new SSH key for deployment. See Registering a key pair on
page 322.
c. If you want to use additional disks during deployment, you must create
volumes for the project. You can create volumes using the OpenStack
Cinder Storage Volumes toolkit.
d. Add cloud-init to Linux operating system images, as described in Adding
cloud-init to Linux images on page 336.
e. Deploy the virtual machine as described in Deploying a virtual machine
on page 314.
2. For deployment using virtual patterns, complete the following steps:
a. Update the NTP servers list to include a valid NTP server for the new
Public Cloud Gateway managed region. For more information about the
NTP servers list, see Setting NTP servers on page 83.
The default NTP configuration is only valid for new servers that can reach
the Deployment Server node via DNS. A publicly accessible NTP server is
required for servers in public clouds. Examples are:
v 0.amazon.pool.ntp.org
v 1.amazon.pool.ntp.org
v 2.amazon.pool.ntp.org
v 3.amazon.pool.ntp.org
For SoftLayer, you can use servertime.service.softlayer.com as an NTP
server.
b. Before you create a Linux image, ensure that the software requirements are
met, as described in Software prerequisites for Linux images (KVM or
VMware hypervisors) on page 337.
c. Deploy the virtual pattern as described in Chapter 7, Managing and
deploying virtual patterns, on page 353.
Results
You can now deploy a virtual machine or virtual pattern by using the Public
Cloud Gateway.
Reference
This section provides reference information for the Public Cloud Gateway.
Key pairs
Key pairs are needed to access the virtual machines that you deployed. When you
deploy a virtual machine, these keys are injected into the instance to allow
password-less SSH access to the instance.
The default key pair that is created from the Self-service user interface in Amazon
EC2 regions is appended with the user ID of the user that created the key pair. For
example, if the user creating the key pair in the Self-service user interface is admin,
the name of the key pair that is created in Amazon EC2 is default_admin. For
information about managing key pairs, see Managing key pairs on page 322.
These properties enable root login and password authentication in cloud-init. They
are required to set the password via user-data.
Chapter 9. Integrating
Learn how to integrate IBM Cloud Orchestrator with the following IBM products.
Procedure
1. You must install several rpm packages that are required by IBM Global Security
Toolkit (GSKit). GSKit is deployed automatically with the Tivoli Monitoring
installation and requires the following operating system patches:
v ksh-20091224-1.el6.x86_64.rpm
v glibc-2.12-1.7.el6.i686.rpm
v libgcc-4.4.4-13.el6.i686.rpm
v nss-softokn-freebl-3.12.7-1.1.el6.i686.rpm
2. Install the libraries that are required by the OS Monitoring Agent:
v libstdc++
v libgcc
v compat-libstdc++
Restriction: On a 64-bit system, you must have 32-bit and 64-bit versions of
those libraries.
Database setup
IBM Tivoli Monitoring requires two databases, the Tivoli Enterprise Portal Server
database and the Tivoli Data Warehouse database.
v The Tivoli Enterprise Portal Server database, or portal server database, stores
user data and information that is required for graphical presentation on the user
interface. The portal server database is created automatically during
configuration of the portal server. It is always on the same computer as the
portal server.
v The Tivoli Data Warehouse database, also called the warehouse database or data
warehouse, stores historical data for presentation in historical data views. In a
single-computer installation, the warehouse database is created on the same
relational database management server that is used for the portal server
database. In larger environments, it is best to create the warehouse database on a
different computer from the portal server.
You can create the TEPS database on the embedded Derby database that is delivered
with the Tivoli Monitoring installer. The warehouse database can be hosted on a DB2
or Oracle server. Therefore, the simplest solution is to install a DB2 server on the
Tivoli Monitoring server and use it for both the TEPS and warehouse databases.
For more information about installing DB2, see the DB2 documentation.
Procedure
1. You must install the following components of Tivoli Monitoring:
v Hub Tivoli Enterprise Monitoring Server
v Tivoli Enterprise Portal Server
v Tivoli Enterprise Portal desktop client
v The Warehouse Proxy Agent
v The Summarization and Pruning Agent
2. If you plan to set up a dashboard environment, you can install extra
components. For base installation, these features are not required and can be
skipped or installed later:
v Dashboard Application Services Hub (a Jazz for Service Management
component)
v IBM Infrastructure Management Dashboards for Servers
v Tivoli Authorization Policy Server
v tivcmd Command Line Interface for Authorization Policy
v Tivoli Enterprise Portal Server: modify the cq_silent_config.txt file with the
following information:
CMSCONNECT=YES
HOSTNAME=itmsrv1
NETWORKPROTOCOL=ip.pipe
DB2INSTANCE=db2inst1
DB2ID=itmuser
DB2PW=passw0rd
WAREHOUSEID=itmuser
WAREHOUSEDB=WAREHOUS
WAREHOUSEPW=passw0rd
ADMINISTRATORID=db2inst1
ADMINISTRATORPW=passw0rd
Procedure
1. Create a warehouse database on a remote server.
2. Create a DB2 user on a remote server. Grant the user administrative rights to
the database.
If you want the agents to report to your main Tivoli Enterprise Monitoring Server,
configure each one of them with such a configuration file.
INSTALL_PRODUCT=v1
INSTALL_PRODUCT_TMS=all
INSTALL_PRODUCT_TPS=all
INSTALL_PRODUCT_TPW=all
INSTALL_ENCRYPTION_KEY=IBMTivoliMonitoringEncryptionKey
SEED_TEMS_SUPPORTS=true
MS_CMS_NAME=TEMS
DEFAULT_DISTRIBUTION_LIST=NEW
OpenStack hypervisors
With the default configuration, you can monitor the Region Server as a KVM
hypervisor. To do so, you can use Monitoring Agent for Kernel-based virtual
machines from Tivoli Monitoring for Virtual Environments.
OpenStack can use different hypervisors, like KVM or VMware. To monitor
different KVM hypervisors, you can use the installed agent and simply add a new
instance in the agent configuration.
To monitor a VMware hypervisor, you must install and configure Monitoring
Agent for VMware, which is also included in Tivoli Monitoring for Virtual
Environments. Then, you can add an instance of configuration every time a new
hypervisor is added to OpenStack.
For more information about VMware Agent, see VMware VI User's Guide.
to use the IBM DB2 Enterprise Server Edition V10.5, because IBM Endpoint
Manager will be used in the context of IBM Cloud Orchestrator.
This groups any computers that have the _BESCLIENT_GROUP_NAME setting set to
<MY_VALUE>. During the deployment of the virtual system, _BESCLIENT_GROUP_NAME
must be set to MY_VALUE for the system to be included in this group.
To assign a virtual machine to a specific group during deployment, the Endpoint
Manager install agent script package contains the following environment
variable: _BESCLIENT_GROUP_NAME. This environment variable corresponds to the
_BESCLIENT_GROUP_NAME client setting configured within the Endpoint Manager
server and can be used to group virtual systems within the Endpoint Manager
server.
The _BESCLIENT_GROUP_NAME environment variable is by default set to none and
ignored but can be modified when deploying a virtual system.
/etc/init.d/iptables stop
iptables -A INPUT -p tcp --dport 52311 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 52311 -j ACCEPT
/etc/init.d/iptables save
/etc/init.d/iptables start
fi
fi
# If the _BESCLIENT_GROUP_NAME is set (default is none i.e. ignore)
if [ ! "${_BESCLIENT_GROUP_NAME}"== "none"]; then
echo "Group policy to be set to: ${_BESCLIENT_GROUP_NAME}"
echo "[Software\BigFix\EnterpriseClient\Settings\Client\
_BESCLIENT_GROUP_NAME]">>/var/opt/BESClient/besclient.config
echo "value = ${_BESCLIENT_GROUP_NAME}">>/var/opt/BESClient/besclient.config
fi
# start the IEM client
/etc/init.d/besclient start
v Create the cbscript.json file. Make sure that the name specified in the file
matches the script package zip name, for example: IEM_8.5.6666.0-rhe5.x86_64.zip:
[
{
"name": "IEM_8.5.6666.0-rhe5.x86_64",
"version": "1.0.0",
"description": "This script package installs the IEM Agent on RHEL 5 64-bit ",
"command": "/bin/sh /tmp/IEMClient/install.sh",
"log": "/tmp/IEMClient",
"location": "/tmp/IEMClient",
"timeout": "0",
"commandargs":"",
"keys":
[
{
"scriptkey": "_BESCLIENT_GROUP_NAME",
"scriptvalue": "",
"scriptdefaultvalue": "none"
}
]
}
]
v The script package name must match the name specified in the cbscript.json
file. Create the script package by zipping together the following files (for
example, IEM_8.5.6666.0-rhe5.x86_64.zip):
actionsite.afxm
The Endpoint Manager Server Masthead.
BESAgent rpm package
The agent installer for RHEL.
install.sh
cbscript.json
v The script package runs when the virtual system is deployed. To verify that the
agent is installed correctly, review the log files.
v After the agent has registered with the Endpoint Manager server, the computer
system information is displayed in the Endpoint Manager console.
v If a group has been configured and set, the computer also displays under the
specific group heading in the Endpoint Manager console.
v You can now perform patch management.
v Create the cbscript.json file. Make sure that the name specified in the file
matches the script package zip name, for example, IEM_8.2.1175.0-sle11.x86_64.zip:
[
{
"name": "IEM_8.2.1175.0-sle11.x86_64",
"version": "1.0.0",
"description": "This script package installs the IEM Agent on SUSE 11",
"command": "/bin/sh /tmp/IEMClient/install.sh",
"log": "/tmp/IEMClient",
"location": "/tmp/IEMClient",
"timeout": "0",
"commandargs":"",
"keys":
[
{
"scriptkey": "_BESCLIENT_GROUP_NAME",
"scriptvalue": "",
"scriptdefaultvalue": "none"
}
]
}
]
v The script package name must match the name specified in the cbscript.json
file. Create the script package by zipping together the following files (for
example, IEM_8.2.1175.0-sle11.x86_64.zip):
actionsite.afxm
The Endpoint Manager Server Masthead.
BESAgent-8.2.1175.0-sle11.x86_64.rpm
The agent installer for SUSE Linux.
install.sh
cbscript.json
v Import the script package zip file into IBM Cloud Orchestrator and keep the
default setting of Executes at virtual system creation.
v Create a pattern containing a SUSE Linux Enterprise part and add the script
package to this part.
v Optional: Configure a Computer Group, in the Endpoint Manager Server
console, to match the proposed _BESCLIENT_GROUP_NAME environment variable
value set when the virtual system is deployed.
v Deploy the pattern. Optional: Specify the value for the _BESCLIENT_GROUP_NAME
environment variable to match the Computer Group configured in the Endpoint
Manager console. By default this value is set to none and is ignored.
v The script package runs when the virtual system is deployed. To verify that the
agent is installed correctly, review the log files.
v After the agent has registered with the Endpoint Manager server, the computer
system information is displayed in the Endpoint Manager console.
v If a group has been configured and set, the computer is also displayed under the
specific group heading in the Endpoint Manager console.
v You can now perform patch management.
v Create the cbscript.json file. Make sure that the name specified in the file
matches the script package zip name, for example, IEM_8.2.1175.0-windows_setup.zip:
[
{
"name": "IEM_8.2.1175.0-windows_setup",
"version": "1.0.0",
"description": "This script package installs the TEM Client on WIN",
"command": "install.bat",
"log": "C:\\TEMP\\TEMClient",
"location": "C:\\TEMP\\TEMClient",
"timeout": "0",
"commandargs":"",
"ostype": "windows",
"keys":
[
{
"scriptkey": "_BESCLIENT_GROUP_NAME",
"scriptvalue": "",
"scriptdefaultvalue": "none"
}
]
}
]
v The script package name must match the name specified in the cbscript.json
file. Create the script package by zipping together the following files (for
example, IEM_8.2.1175.0-windows_setup.zip) :
actionsite.afxm
The Endpoint Manager Server Masthead.
setup.exe
The agent installer for Windows.
install.bat
cbscript.json
v Import the script package zip file into IBM Cloud Orchestrator and keep the
default setting of Executes at virtual system creation.
v Create a pattern containing a Windows operating system part and add the script
package to this part.
v Optional: Configure a Computer Group, in the Endpoint Manager Server
console, to match the proposed _BESCLIENT_GROUP_NAME environment variable
value set when the virtual system is deployed.
v Deploy the pattern. Optional: Specify the value for the _BESCLIENT_GROUP_NAME
environment variable to match the Computer Group configured in the Endpoint
Manager console. By default this value is set to none and is ignored.
v The script package runs when the virtual system is deployed. To verify that the
agent is installed correctly, review the log files.
v After the agent has registered with the Endpoint Manager server, the computer
system information is displayed in the Endpoint Manager console.
v If a group has been configured and set, the computer also displays under the
specific group heading in the Endpoint Manager console.
v You can now perform patch management.
Troubleshooting
If the Endpoint Manager agent fails to register with the Endpoint Manager server,
check the following issues:
v The host name of the machine where the Endpoint Manager agent is installed
must be correctly resolved by using the ping command from the Endpoint
Manager server, and vice versa. To ping the machine, use the command:
# ping ip_address
v Ensure that the agent is running, by executing the following command on the
virtual machine:
# /etc/init.d/besclient status
The IBM Cloud Orchestrator command-line interface can run in both interactive
and batch modes. For more information about initializing the command-line
interface for either batch or interactive mode, see Invoking the command-line
interface on page 737.
Related information:
Jython
Python documentation
Procedure
1. Download the command-line interface tool from http://<Central Server 3>/downloads/cli/
in your IBM Cloud Orchestrator installation.
2. Save the .zip file to your local hard disk drive.
3. Expand the contents of the .zip file to a directory on your hard disk. When
expanded, the .zip file creates a directory tree under a single top-level directory,
the deployer.cli directory.
4. Ensure that either the JAVA_HOME or the PATH environment variable is set to the
location of your JRE.
5. Optional: If you are running the Windows Server 2003 operating system or the
Windows Server 2008 operating system, perform this step.
In the deployer.cli directory, create a registry file in the lib\<version> directory
with the following line:
python.os=nt
This causes Jython to bypass the normal operating system detection logic and
treat the system as a Windows machine.
By default, the only thing in the lib directory is a <version> subdirectory that
matches the level of the product from which the CLI was downloaded. If you use
this CLI installation to communicate with products at different version levels,
then there is one subdirectory under the /lib directory for each of these
version levels and you must copy the registry file into each of these
subdirectories.
Example: \lib\3.0.0.0-12345\registry
Results
You have installed the IBM Cloud Orchestrator command-line interface in the bin
directory using the shell scripts (deployer.bat on Windows and deployer on
Linux). After the CLI is installed, you can verify the install by running
deployer.bat on Windows or deployer on Linux from the bin directory. If the
environment is set up correctly, then an informational message tells you that the
command-line interface is working and provides further details about using the
command-line interface.
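For example, on Linux, from the directory where you expanded the .zip file:
$ cd deployer.cli/bin
$ ./deployer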
What to do next
Invoke the command-line interface. For more information, see Invoking the
command-line interface.
Procedure
v Invoke the command-line interface in interactive mode. To invoke the
command-line interface in interactive mode, use the following parameters:
-u <userid> or
--userid
This optional parameter specifies the userid to authenticate to IBM
Cloud Orchestrator. If the userid parameter is not specified, the
command-line interface uses the value of the DEPLOYER_USERID
environment variable to determine the userid. When the userid is not
included on the command-line and the environment variable is not set,
you are prompted to enter a userid.
-p <password> or
--password
This optional parameter specifies the password used to authenticate to
IBM Cloud Orchestrator. If the password parameter is not specified, the
command-line interface uses the value of the DEPLOYER_PASSWORD
environment variable to determine the password. When the password is
not included on the command-line and the environment variable is not
set, you are prompted to enter a password.
Note: -p is case sensitive.
-h <hostname> or
--hostname
This required parameter indicates the host name or IP address of the
machine where the Workload Deployer component is installed. If you
specify this option, do not use the URL to access the Web interface. If
this parameter is not specified, the command-line interface uses the
value of the DEPLOYER_HOSTNAME environment variable to
determine the host name.
Note: You must specify only the host name or IP address, not the full URL used
to access the web user interface.
$ deployer -h mydeployer.mycompany.com -u username -p password
-f <script_file> <arg>*
Use this optional parameter to cause the command line to run the
specified Jython script file with the specified arguments. Any arguments
following the script file name are passed to the Jython script. Only one
-f parameter can be specified on the command line. You are running the
command-line interface in batch mode.
$ deployer -h mydeployer.mycompany.com -u joeadmin
-p password -f sampleScript.jy arg1 arg2 arg3
On Linux, you can make the shell automatically use the IBM Cloud
Orchestrator command to execute your Jython scripts. If the IBM Cloud
Orchestrator command is on your PATH, insert the following line at the
top of your script to have the shell execute it using the command-line
interface:
#!/usr/bin/env deployer
Passing commands to the command-line interface by any of these methods (on the
command line by using the -c parameter, or in a script file specified by using the -f
parameter) supports the same Jython scripting language.
Results
After completing these steps, the command-line interface is running in interactive
mode or the script or command you invoked is now running.
What to do next
You are ready to use the command-line interface. You can get help for any
command, attribute, or method on the command-line interface using the
instructions provided in the Getting help on the command-line interface. For a
list of the available objects to be used in the command-line interface, see
Command-line interface resource object reference on page 740. A set of sample
scripts that demonstrate some of the command-line interface function are located
in the <cli_install_dir>\samples directory.
The help command is a Jython function. When it is used with no parameters, it
provides a high-level overview of the command-line environment and instructions
for accessing help on more specific topics.
Procedure
v Invoke general help.
When used with no parentheses and no parameters, the help command provides
general help for using the IBM Cloud Orchestrator command-line interface.
In interactive mode, you can invoke deployer.help without the package prefix,
as shown in the following example:
>>> help
v Invoke help for the package. Help is available for the IBM Cloud Orchestrator
package.
To get help for the IBM Cloud Orchestrator package, use the following
command:
>>> help(deployer)
When invoked with a parameter, the help function provides detailed information
about the specified package, module, function, or property. Information about
invoking help for each resource is available in the reference information for that
resource.
v Invoke detailed help about a specific topic.
Pass a single parameter to the help function to get more detailed help about a
specific topic. For example, to see detailed help for how to work with
hypervisors in the command-line interface, enter the following command:
>>> help(deployer.hypervisors)
The deployer prefix is used to group all the IBM Cloud Orchestrator-specific
Jython functions into a single package to reduce the chances of name collisions
with functions and variables in your own scripts.
Results
Detailed or general help is displayed.
What to do next
You can continue to use the command-line interface guided by the information in
the help function.
AddOns object
An AddOns object represents the collection of add-ons defined to IBM Cloud
Orchestrator. Objects of this type are used to create, delete, iterate over, list and
search for add-ons on the product. To get help for the AddOns object, pass it as an
argument to the help() function, as shown in the following example:
>>> help(deployer.addons)
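For example, a short interactive sketch that iterates over the defined add-ons and
prints their names (illustrative only; the name attribute is documented below, and
the output depends on your environment):
>>> for addon in deployer.addons:
...     print addon.name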
AddOn object
An AddOn object represents a particular add-on defined in IBM Cloud Orchestrator.
Use the AddOn object to query and manipulate the add-on definition in the product.
Attributes of the add-on and relationships between the add-on and other resources
in the product are represented as Python attributes on the AddOn object. Manipulate
these Python attributes using standard Python mechanisms to make changes to the
corresponding data in the product. To get help for the AddOn object, pass it as an
argument to the help() function, as shown in the following example:
>>> help(deployer.addon)
AddOn attributes
The AddOn object has the following attributes:
acl
The access control list for this add-on. For additional help on using this
object, enter the following command:
>>> help(deployer.acl)
This field is read-only. See Environment methods on page 743 for more
information.
id
label
location
The directory, on the virtual machine, into which files for this add-on
package are to be placed.
log
The directory, on the virtual machine, that is to contain the log files
generated by this add-on.
name
The name associated with this add-on. Each add-on must have a unique
name.
timeout
The maximum amount of time to wait for this add-on to finish running on
a virtual machine. Specify the timeout as the number of milliseconds to
wait, or 0 to wait indefinitely for the add-on to complete.
type
The type of add-on. This attribute must have a string value equal to one of
the following constants:
v deployer.DISK_ADDON
v deployer.NIC_ADDON
v deployer.USER_ADDON
updated
The time the add-on was last updated, as the number of seconds since
midnight, January 1, 1970 UTC. When the add-on is displayed, this value
is shown as the date and time in the local time zone. This field is read-only.
Archive methods
The Archive object has the following methods:
get
This method retrieves the archive currently associated with the add-on.
This method has one required parameter that indicates where the add-on
archive should be saved. It can be either of the following values:
v A string containing the name of a file in which to save the archive. The
.zip file type is automatically appended to the filename if the filename
does not end in .zip.
v A Python file object. You must ensure that the file object can correctly
handle binary data.
The add-on archive is returned in a zip file format, as shown in the
following example
>>> myaddon.archive.get('/path/to/foo.zip')
__lshift__
This method is invoked implicitly when the Archive object is used as the
left argument of a left shift operator (<<). It calls set() with the right
argument of the operator, as shown in the following example:
>>> myaddon.archive << '/path/to/file'
__rshift__
This method is invoked implicitly when the Archive object is used as the
left argument of a right shift operator (>>). It calls get() with the right
argument of the operator, as shown in the following example:
>>> myaddon.archive >> '/path/to/file.zip'
set
This method sets the archive associated with the add-on. It has one
required parameter that indicates the source of the add-on archive to be
uploaded. It can be either of the following values:
v A string containing the name of a file from which to get the archive.
v A Python file object.
You must ensure that the file object can correctly handle binary data, as
shown in the following example:
>>> myaddon.archive.set('/path/to/foo')
Environment methods
The Environment object has the following methods:
isDraft
Indicates if this add-on is in draft mode.
isReadOnly
Indicates if this add-on is read-only.
makeReadOnly
Makes this add-on read-only. When the add-on is read-only, it cannot be
modified.
clone
Creates a copy of this add-on with all of the same files, fields, and settings.
The new add-on has the name provided and an empty ACL.
Related concepts:
Resources, resource collections, and methods on page 842
IBM Cloud Orchestrator manages different types of resources, for example,
patterns, virtual images, and virtual system instances. Within the command-line
interface, Jython objects are used to represent these resources and collections of
these resources. Methods control the behavior of the Jython objects.
Related information:
Jython
Python documentation
Audit object
An Audit object represents the audit logs stored on IBM Cloud Orchestrator.
To get help on the command-line interface for the Audit object, pass the name of
the object as an argument to the help() function. See the following example for
details:
>>> help(deployer.audit)
For more information about working with resource objects on the command-line
interface, see Resources on the command line on page 843.
Audit methods
You can use the following methods on an Audit object:
get("file", start=start_time, end=end_time, tz="time_zone", size=size)
This method downloads an audit log from the product in a .zip file. Use
the size parameter to specify the maximum number of audit records that
you want to download. You can use other parameters to filter your record
set, according to the time frame in which the records were logged.
Parameter descriptions:
v file - A file object or file name used to store the audit log. If a file name
is specified, then .zip is automatically appended if the specified name
does not end in .zip.
v start_time - The earliest timestamp to be included in the audit data,
specified as the number of seconds since midnight, January 1, 1970 UTC.
Floating point values can be specified to indicate fractional seconds. The
start parameter is optional.
v end_time - The latest timestamp to be included in the audit data,
specified as the number of seconds since midnight, January 1, 1970 UTC.
Floating point values can be specified to indicate fractional seconds. The
end parameter is optional.
v time_zone - The time zone of the time frame that you specify in the start
and end parameters. The tz parameter is optional.
v size - The maximum number of records to be written to the .zip file. You
can request up to 20,000 records. If you specify a greater number, the
product automatically resets your request to 20,000 records, and writes
that number of records to the .zip file.
For example:
deployer.audit.get("my.zip",start=1321391040,end=1321911000,tz="est",size=10000)
Clouds object
A clouds object represents a collection of cloud groups that are defined to IBM
Cloud Orchestrator. Objects of this type are used to delete, iterate over, list and
search for cloud groups on IBM Cloud Orchestrator.
To get help for the clouds object on the command-line interface, pass it as an
argument to the help() function, as shown in the following example:
>>> help(deployer.clouds)
For more information about working with resource objects, see Resources,
resource collections, and methods on page 842.
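For example, a short interactive sketch that iterates over the defined cloud groups
and prints their name and vendor attributes (illustrative only; the attributes are
documented below):
>>> for cloud in deployer.clouds:
...     print cloud.name, cloud.vendor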
Cloud object
A cloud object represents a particular cloud group that is defined in IBM Cloud
Orchestrator. Use the cloud object to query and manipulate the cloud group
definition in the product. Attributes of the cloud group, and relationships between
the cloud group and other resources on IBM Cloud Orchestrator, are represented as
Jython attributes on cloud objects. Manipulate these Jython attributes using
standard Jython mechanisms to change the corresponding data on the IBM Cloud
Orchestrator.
Cloud objects can have many hypervisor objects.
To get help for the cloud object on the command-line interface, pass it as an
argument to the help() function, as shown in the following example:
>>> help(deployer.cloud)
Cloud attributes
The cloud object supports the following attributes:
acl
Access control list for this cloud group. For additional help on using this
object, enter:
>>> help(deployer.acl)
address
The network address or host name, which is retrieved from the hypervisor
manager each time it is accessed.
created
Creation time of the cloud group, as number of seconds since midnight,
January 1, 1970 UTC. When the cloud group is displayed, this value is
shown as the date and time in the local time zone. This field is read-only.
currentstatus
The status of the cloud object. This field contains an eight character string
value that is generated by the product.
currentstatus_text
This attribute is a string representation of the currentstatus attribute in
the preferred language of the requester and is automatically generated by
the product. This field is read-only.
defaultcloud
Indicates if this cloud group is the default cloud group used by IBM Cloud
Orchestrator. This attribute is read-only.
description
Description of the cloud group. This field is a string and can be edited.
endpointtype
Specifies the type of endpoints managed by the cloud object. This
read-only value is determined based on the target endpoints currently
added to the Cloud object. The value of the endpoint type can be one of
the following:
v Hypervisor: The cloud object manages one or more hypervisor
endpoints.
v Pool: The cloud object manages a pool.
v Cluster: The cloud object manages a cluster.
v None: The cloud object does not manage any endpoints.
v Mixed: The cloud object manages endpoints of multiple types.
id
name
The name associated with this cloud group. Each cloud group must have a
unique name.
owner
A user object that references the owner of this cloud group. For more
information about the properties and methods supported by user objects,
enter:
>>> help(deployer.user)
type
updated
The time the cloud group was last updated, as number of seconds since
midnight, January 1, 1970 UTC. When the cloud group is displayed, this
value is shown as the date and time in the local time zone. This attribute is
read-only.
vendor The type of hypervisors this cloud group contains. Valid value is
OpenStack.
For more information about working with resource objects, see Resources,
resource collections, and methods on page 842.
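For example, as an illustrative sketch only (the cloud group name shown is an
assumption), you can look up a cloud group and read or update these attributes
as follows:
>>> mycloud = deployer.clouds["My cloud"][0]   # "My cloud" is an example name
>>> mycloud.currentstatus_text
>>> mycloud.description = "Cloud group used for development workloads"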
Cloud methods
The cloud object has the following methods associated with it:
discover()
Forces IBM Cloud Orchestrator to rediscover the network and storage to
which the hypervisor is attached. This method accepts a single optional
parameter that allows you to specify whether to opt in or opt out the
discovered network and storage.
Related concepts:
Resources, resource collections, and methods on page 842
IBM Cloud Orchestrator manages different types of resources, for example,
patterns, virtual images, and virtual system instances. Within the command-line
interface, Jython objects are used to represent these resources and collections of
these resources. Methods control the behavior of the Jython objects.
Related tasks:
Managing environment profiles on page 354
You can use environment profiles to control some aspects of your deployment. You
can use environment profiles to group related deployment configuration options
together and deploy from a single pattern.
Using the command-line interface on page 735
You can perform administrative functions in IBM Cloud Orchestrator by using the
command-line interface tool provided with the product.
Related reference:
Environment profiles on the command-line interface on page 749
You can work with the environment profiles on the IBM Cloud Orchestrator
command-line interface.
Environment profile clouds on the command-line interface on page 752
You can work with environment profile clouds on the IBM Cloud Orchestrator
command-line interface.
Environment profile cloud IP groups on the command-line interface on page 755
You can work with the environment profile cloud IP group objects on the IBM
Cloud Orchestrator command-line interface.
Related information:
Jython
Python documentation
For more information about working with resource objects, see Resources,
resource collections, and methods on page 842.
EnvironmentProfile object
An EnvironmentProfile object represents a particular environment profile defined
to IBM Cloud Orchestrator. Use the EnvironmentProfile object to query and
manipulate the environment profile definition in the product. Attributes of the
environment profile on IBM Cloud Orchestrator are represented as Jython
attributes on the EnvironmentProfile object. Relationships between the
environment profile and other resources are also represented as Jython attributes
on the EnvironmentProfile object. You can manipulate these Jython attributes
using standard Jython mechanisms to change the corresponding data on the IBM
Cloud Orchestrator.
You can work with EnvironmentProfile objects on the command line and help is
available. To get help for the EnvironmentProfile object, pass it as an argument to
the help() function, as shown in the following example:
>>> help(deployer.environmentprofile)
For more information about working with resource objects, see Resources,
resource collections, and methods on page 842.
EnvironmentProfile attributes
The EnvironmentProfile object has the following attributes:
acl
The access control list for this environment profile. For additional help on
using this object, see ACL object on page 862 or enter the following
command:
>>> help(deployer.acl)
clouds An object that manipulates the clouds and IP groups associated with an
environment profile. For additional help on using this object, see
Environment profile clouds on the command-line interface on page 752
or enter the following command:
>>> help(deployer.environmentprofileclouds)
created
The creation time of the environment profile, as number of seconds since
midnight, January 1, 1970 UTC. When the environment profile is
displayed, this value is shown as the date and time in the local time zone.
currentmessage
The message associated with the status of the environment profile.
currentmessage_text
The textual representation of the currentmessage attribute.
currentstatus
The status of the environment profile.
currentstatus_text
The textual representation of the currentstatus attribute.
description
A description of the environment profile.
environment
The environment the profile represents. The following values are valid for
the environment attribute:
v deployer.environmentprofile.ALL_ENVIRONMENT
v deployer.environmentprofile.DEVELOPMENT_ENVIRONMENT
v deployer.environmentprofile.TEST_ENVIRONMENT
v deployer.environmentprofile.QUALITY_ASSURANCE_ENVIRONMENT
v deployer.environmentprofile.PERFORMANCE_ENVIRONMENT
v deployer.environmentprofile.RESEARCH_ENVIRONMENT
v deployer.environmentprofile.PRODUCTION_ENVIRONMENT
v deployer.environmentprofile.PRE_PRODUCTION_ENVIRONMENT
id
ipsource
Indicates the source of IP addresses for this environment profile. The
following values are valid for the ipsource attribute:
deployer.environmentprofile.WEBSPHERE_DEPLOYER_IPSOURCE
IBM Cloud Orchestrator selects the IP addresses.
deployer.environmentprofile.PATTERN_DEPLOYER_IPSOURCE
The user who is deploying the pattern provides the IP address.
Important: If this option is used, the person deploying the pattern
cannot specify an IP address that is contained within the IP groups
defined in IBM Cloud Orchestrator.
memory_cap
The maximum amount of memory that deployers using this environment
profile can consume.
memory_inuse
The amount of memory that deployers are using in this environment
profile. This field is read only.
memory_reserved
The amount of memory that deployers have reserved using this
environment profile. This field is read only.
name
owner
A User object that references the owner of this environment profile. For
more information about the properties and methods supported by User
objects, enter the following command:
>>> help(deployer.user)
pcpu_cap
The maximum number of available physical CPUs that the deployers using
this environment profile can consume.
pcpu_inuse
The number of physical CPUs that deployers are using in this environment
profile. This field is read only.
pcpu_reserved
The number of physical CPUs that deployers have reserved using this
environment profile. This field is read only.
platform
The type of hypervisors this environment profile supports on deployments.
Valid value is OpenStack.
storage_cap
The maximum amount of storage that deployers using this environment
profile can consume.
storage_inuse
The amount of storage that deployers are using in this environment
profile.
storage_reserved
The amount of storage that deployers have reserved using this
environment profile. This field is read only.
updated
The time the environment profile was last updated, as number of seconds
since midnight, January 1, 1970 UTC. When the environment profile is
displayed, this value is shown as the date and time in the local time zone.
vcpu_cap
The maximum number of virtual CPUs that deployers using this
environment profile can consume.
vcpu_inuse
The number of virtual CPUs that deployers are using in this environment
profile. This field is read only.
vcpu_reserved
The number of virtual CPUs that deployers have reserved using this
environment profile.
vmname_pattern
The pattern used to generate virtual machine names. Various predefined
attributes can be included in the virtual machine name by including the
following strings in the vmname_pattern attribute:
${hostname}
Replaced with the host name of the virtual machine.
${n-counter}
Replaced with a counter of n digits. The n variable in this string is
a placeholder for the number you supply.
${vs-name}
Replaced with the name of the virtual system instance.
For example, a vmname_pattern attribute with the following value:
${vs-name} - ${hostname} - ${4-counter}
results in virtual machine names like the names shown in the following
example:
My VS - myhostname - 0017
EnvironmentProfile methods
The EnvironmentProfile object has the following method:
clone
Creates a copy of this environment profile with the same settings. The new
environment profile has the name provided as well as an empty ACL.
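For example, as an illustrative sketch only (the profile names shown are
assumptions, not values from this guide), the clone method can be used as
follows:
>>> myprofile = deployer.environmentprofiles["My profile"][0]   # example profile name
>>> newprofile = myprofile.clone("My profile copy")             # example name for the copy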
Related concepts:
Resources, resource collections, and methods on page 842
IBM Cloud Orchestrator manages different types of resources, for example,
patterns, virtual images, and virtual system instances. Within the command-line
interface, Jython objects are used to represent these resources and collections of
these resources. Methods control the behavior of the Jython objects.
Related tasks:
Managing environment profiles on page 354
You can use environment profiles to control some aspects of your deployment. You
can use environment profiles to group related deployment configuration options
together and deploy from a single pattern.
Using the command-line interface on page 735
You can perform administrative functions in IBM Cloud Orchestrator by using the
command-line interface tool provided with the product.
Related reference:
Environment profiles command-line interface reference on page 748
You can work with environment profiles on the IBM Cloud Orchestrator
command-line interface.
Related information:
Jython
Python documentation
Environment profile clouds on the command-line interface:
You can work with environment profile clouds on the IBM Cloud Orchestrator
command-line interface.
For general information about working with the command-line interface, see
Using the command-line interface on page 735.
EnvironmentProfileClouds object
The EnvironmentProfileClouds object manipulates the clouds and IP groups
associated with an environment profile. This object behaves much like a dict object
with Cloud objects as keys. See EnvironmentProfileClouds methods on page 753
for more information. References to these objects can be obtained using the clouds
attributes of EnvironmentProfile objects.
EnvironmentProfileClouds methods
The EnvironmentProfileClouds object has the following methods:
addCloud
Adds a cloud group to the list of cloud groups for an environment profile.
This method accepts the following parameters:
v A Cloud object that represents the cloud group to be added.
v An optional alias for the cloud in this environment profile. If no alias is
provided, the name of the cloud group is used.
clear
Dissociates all cloud groups from this environment profile.
__contains__
Indicates if the specified cloud group is associated with this environment
profile. This method is called automatically when you use the Python in
operator.
__delitem__
Dissociates a cloud group from this environment profile. This method is
called automatically when you use the Python del statement.
get
__getitem__
Returns an EnvironmentProfileCloud object that describes how a cloud
group is used in this environment profile. This method accepts a single
parameter that must be a Cloud object representing the cloud group about
which information is to be returned. This method is started automatically
when you access an item using the Python [] syntax.
has_key
Indicates if the specified cloud group is associated with this environment
profile. This method accepts a single parameter that must be a Cloud object
representing the cloud group about which information is to be returned.
items
__iter__
Returns an iteration over Cloud objects representing the cloud groups in
the environment profile.
iteritems
Returns an iteration over (Cloud, EnvironmentProfileCloud) tuples
representing the cloud group associated with the environment profile.
iterkeys
Returns an iteration over Cloud objects representing the cloud groups in
the environment profile.
itervalues
Returns an iteration over the EnvironmentProfileCloud objects associated
with the environment profile.
keys
__len__
Returns the number of cloud groups associated with the environment
profile.
__repr__
Returns a string representation of the cloud groups and IP groups
associated with the environment profile.
__setitem__
Associates a cloud group with the environment profile and assigns an alias
to it. This method is called automatically when you assign a value to an
item in an EnvironmentProfileClouds object. The key for the item must be
a Cloud object and the assigned value must be a string, as shown in the
following example:
>>> myep = deployer.environmentprofiles["My profile"][0]
>>> mycloud = deployer.clouds["My cloud"][0]
>>> myep.clouds[mycloud] = "alias for my cloud"
__str__
Returns a string representation of the cloud groups and IP groups
associated with the environment profile.
__unicode__
Returns a string representation of the cloud groups and IP groups
associated with the environment profile.
values Returns a list of the EnvironmentProfileCloud objects associated with the
environment profile.
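For example, as an illustrative sketch only (the profile and cloud group names
shown are assumptions), a cloud group can be added to an environment profile
and the associated cloud groups listed as follows:
>>> myep = deployer.environmentprofiles["My profile"][0]   # example profile name
>>> mycloud = deployer.clouds["My cloud"][0]               # example cloud group name
>>> myep.clouds.addCloud(mycloud, "alias for my cloud")    # the alias parameter is optional
>>> for cloud in myep.clouds:
...     print cloud.name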
EnvironmentProfileCloud object
An EnvironmentProfileCloud object is used to access and modify information
about a particular cloud group in an environment profile. See
EnvironmentProfileCloud methods and EnvironmentProfileCloud attributes on
page 755 for additional information about individual attributes and methods.
EnvironmentProfileCloud methods
The EnvironmentProfileCloud object has the following methods:
__eq__
This method is used automatically by Python to determine if two
EnvironmentProfileCloud objects are equal. That is, if they represent the
same cloud in the same environment profile.
__nonzero__
This method is used by Python whenever an EnvironmentProfileCloud
object is used in a boolean context. It always returns True.
__repr__
This method returns a string representation of the
EnvironmentProfileCloud object and the IP groups it contains.
__str__
This method returns a string representation of the
EnvironmentProfileCloud object and the IP groups it contains.
__unicode__
This method returns a string representation of the
EnvironmentProfileCloud object and the IP groups it contains.
EnvironmentProfileCloud attributes
The EnvironmentProfileCloud object has the following attributes:
alias
The alias attribute can be used to examine and modify the alias assigned to
a cloud group in an environment profile. Its value must be a string.
ipgroups
The ipgroups attribute references an EnvironmentProfileClouds object that
contains additional information about how the IP groups within the cloud
group are used by the environment profile. For additional help on using
this object, enter the following command:
>>> help(deployer.environmentprofilecloudipgroups)
Related concepts:
Resources, resource collections, and methods on page 842
IBM Cloud Orchestrator manages different types of resources, for example,
patterns, virtual images, and virtual system instances. Within the command-line
interface, Jython objects are used to represent these resources and collections of
these resources. Methods control the behavior of the Jython objects.
Related tasks:
Managing environment profiles on page 354
You can use environment profiles to control some aspects of your deployment. You
can use environment profiles to group related deployment configuration options
together and deploy from a single pattern.
Using the command-line interface on page 735
You can perform administrative functions in IBM Cloud Orchestrator by using the
command-line interface tool provided with the product.
Related reference:
Environment profiles command-line interface reference on page 748
You can work with environment profiles on the IBM Cloud Orchestrator
command-line interface.
Related information:
Jython
Python documentation
Environment profile cloud IP groups on the command-line interface:
You can work with the environment profile cloud IP group objects on the IBM
Cloud Orchestrator command-line interface.
For general information about working with the command-line interface, see
Using the command-line interface on page 735.
EnvironmentProfileCloudIPGroups object
An EnvironmentProfileCloudIPGroups object manipulates the IP groups associated
with a cloud in an environment profile. This object behaves much like a dict object
with IPGroup objects as keys. See EnvironmentProfileCloudIPGroups methods on
page 756 for additional information about methods. References to these objects can
be obtained with the ipgroups attributes of EnvironmentProfileCloud objects.
EnvironmentProfileCloudIPGroups methods
The EnvironmentProfileCloudIPGroups object has the following methods:
addIPGroup
Adds an IP group to the list of IP groups for a cloud in an environment
profile. This method accepts the following parameters:
v An IPGroup object that represents the IP group to be added
v An optional alias for the IP group in this environment profile. If no alias
is provided, the name of the IP group is used.
clear
Dissociates all IP groups from this cloud group in the environment profile.
__contains__
Indicates if the specified IP group is associated with this cloud group in
the environment profile. This method is called automatically when you use
the Python in operator.
__delitem__
Dissociates an IP group from a cloud group in this environment profile.
This method is called automatically when you use the Python del
statement.
get
__getitem__
Returns an EnvironmentProfileCloudIPGroup object that describes how an
IP group is used in this environment profile. This method accepts a single
parameter that must be an IPGroup object representing the IP group about
which information is to be returned. This method is started automatically
when you access an item using the Python [] syntax.
has_key
Indicates if the specified IP group is associated with this cloud in the
environment profile. This method accepts a single parameter that must be
an IPGroup object representing the IP group about which information is to
be returned.
items
__iter__
Returns an iteration over IPGroup objects representing the IP groups
associated with this cloud group in the environment profile.
iteritems
Returns an iteration over (IPGroup, EnvironmentProfileCloudIPGroup)
tuples representing IP groups associated with this cloud group in the
environment profile.
iterkeys
Returns an iteration over IPGroup objects representing the IP groups
associated with this cloud group in the environment profile.
itervalues
Returns an iteration over the EnvironmentProfileCloudIPGroup objects
associated with the environment profile cloud group.
keys
__len__
Returns the number of IP groups associated with the environment profile
cloud group.
__repr__
Returns a string representation of the IP groups associated with the
environment profile cloud group.
__setitem__
Associates an IP group with the environment profile cloud group and
assigns an alias to it. This method is called automatically when you assign
a value to an item in an EnvironmentProfileCloudIPGroups object. The key
for the item must be an IPGroup object and the assigned value must be a
string, as shown in the following example:
>>> myep = deployer.environmentprofiles["My profile"][0]
>>> mycloud = deployer.clouds["My cloud"][0]
>>> myipgroup = deployer.ipgroups["My ip group"][0]
>>> myep.clouds[mycloud].ipgroups[myipgroup] = "alias for my ip group"
__str__
Returns a string representation of the IP groups associated with the
environment profile cloud group.
__unicode__
Returns a string representation of the IP groups associated with the
environment profile cloud group.
values Returns a list of the EnvironmentProfileCloudIPGroup objects associated
with the environment profile cloud group.
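For example, as an illustrative sketch only (the names shown are assumptions,
and the myep and mycloud variables are assumed to be set up as in the earlier
examples), an IP group can be associated with a cloud group in an environment
profile as follows:
>>> myipgroup = deployer.ipgroups["My ip group"][0]                  # example IP group name
>>> myep.clouds[mycloud].ipgroups.addIPGroup(myipgroup, "my alias")  # the alias parameter is optional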
EnvironmentProfileCloudIPGroup object
An EnvironmentProfileCloudIPGroup object is used to access and modify
information about a particular IP group associated with a cloud group in an
environment profile. See EnvironmentProfileCloudIPGroup methods and
EnvironmentProfileCloudIPGroup attributes on page 758 for additional
information about methods and attributes.
EnvironmentProfileCloudIPGroup methods
The EnvironmentProfileCloudIPGroup object has the following methods:
__eq__
This method is used automatically by Python to determine if two
EnvironmentProfileCloudIPGroup objects are equal. That is, if they
represent the same IP group in the same cloud in the same environment
profile.
__nonzero__
This method is used by Python whenever an
EnvironmentProfileCloudIPGroup object is used in a boolean context. It
always returns True.
__repr__
This method returns a string representation of the
EnvironmentProfileCloudIPGroup object.
__str__
This method returns a string representation of the
EnvironmentProfileCloudIPGroup object.
__unicode__
This method returns a string representation of the
EnvironmentProfileCloudIPGroup object.
EnvironmentProfileCloudIPGroup attributes
The EnvironmentProfileCloudIPGroup object has the following attributes:
alias
The alias attribute can be used to examine and modify the alias assigned to
the IP group in a cloud group for an environment profile. The value for
this attribute must be a string.
Related concepts:
Resources, resource collections, and methods on page 842
IBM Cloud Orchestrator manages different types of resources, for example,
patterns, virtual images, and virtual system instances. Within the command-line
interface, Jython objects are used to represent these resources and collections of
these resources. Methods control the behavior of the Jython objects.
Related tasks:
Managing environment profiles on page 354
You can use environment profiles to control some aspects of your deployment. You
can use environment profiles to group related deployment configuration options
together and deploy from a single pattern.
Using the command-line interface on page 735
You can perform administrative functions in IBM Cloud Orchestrator by using the
command-line interface tool provided with the product.
Related reference:
Environment profiles command-line interface reference on page 748
You can work with environment profiles on the IBM Cloud Orchestrator
command-line interface.
Related information:
Jython
Python documentation
Hypervisors object
A hypervisors object represents the collection of hypervisors defined to IBM Cloud
Orchestrator. Objects of this type are used to delete, iterate over, list and search for
hypervisors on IBM Cloud Orchestrator.
You can work with hypervisors objects on the command line and help is available.
To get help for the hypervisors object, pass it as an argument to the help()
function, as shown in the following example:
>>> help(deployer.hypervisors)
For more information about working with resource objects, see Resources,
resource collections, and methods on page 842.
Hypervisor object
A hypervisor object represents a particular hypervisor defined in IBM Cloud
Orchestrator. Use the hypervisor object to query and manipulate the hypervisor
definition in the product. Attributes of the hypervisor object are represented as
Jython attributes on the hypervisor object. Relationships between the hypervisor
object and other resources on IBM Cloud Orchestrator are also represented as
Jython attributes on the hypervisor object. Manipulate these Jython attributes using
standard Jython mechanisms to change the corresponding data on IBM Cloud
Orchestrator.
You can work with hypervisors on the command line and help is available. To get
help for the hypervisors object, pass it as an argument to the help() function, as
shown in the following example:
>>> help(deployer.hypervisors)
For more information about working with resource objects, see Resources,
resource collections, and methods on page 842.
Hypervisor attributes
Hypervisors are automatically added as part of the discovery process for the
cloud group that represents their hypervisor manager. Hypervisor objects provide
the following attributes to work with them:
address
The host name or IP address (dotted decimal notation) at which the
hypervisor can be reached.
created
The creation time of the hypervisor, as number of seconds since midnight,
January 1, 1970 UTC. When the hypervisor is displayed, this value is
shown as the date and time in the local time zone. This field is read-only.
currentmessage
The message associated with the status of the hypervisor. It might, for
example, provide details about a problem if the hypervisor has been placed
in an error state. This field is read-only.
currentmessage_text
A textual description of the currentmessage value. Provides additional
status or details about what is happening or has happened to the
hypervisor resource. This field is read-only.
currentstatus
The status of the hypervisor. This field is read-only.
currentstatus_text
A textual description of the currentstatus value. This field is read-only.
desiredstatus
Indicates the status in which you want the hypervisor. Setting this value
causes IBM Cloud Orchestrator to initiate the necessary steps to get the
hypervisor to this state.
desiredstatus_text
A textual description of the desiredstatus value. This field is read-only.
endpointtype
Specifies the type of endpoint type of this hypervisor object. This read-only
value is determined based on the actual endpoint type of the resource. The
value of the endpoint type can be one of the following:
v Hypervisor: The resource is a hypervisor.
v Pool: The resource is a pool.
v Cluster: The resource is a cluster.
id
name
The name associated with this hypervisor. Each hypervisor must have a
unique name.
pvuscore
The processor value unit (PVU) score for the hypervisor. This attribute
includes the following information about the processor type:
v Vendor who created it (for example Intel)
v Brand (for example Xeon)
v Number of cores per chip (for example dual core or quad core)
This information can be derived from the PVU-table.xml file or entered
manually.
type
updated
The time the hypervisor was last updated, as number of seconds since
midnight, January 1, 1970 UTC. When the hypervisor is displayed, this
value is shown as the date and time in the local time zone. This field is
read-only.
UUID
The universally unique identifier for the hypervisor. For hypervisors that
do not have a UUID, this value will be "none".
version
The type and version of the server, as shown below:
v VMware ESX Server 4.0.0
virtualmachines
A list of virtual machines currently defined on this hypervisor.
For more information about working with resource objects, see Resources,
resource collections, and methods on page 842.
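For example, as an illustrative sketch only, you can iterate over the hypervisors
collection and display these attributes as follows:
>>> for hv in deployer.hypervisors:
...     print hv.name, hv.currentstatus_text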
Hypervisor methods
The following methods are provided for hypervisor objects:
discover()
Forces IBM Cloud Orchestrator to rediscover the network and storage to
which the hypervisor is attached. This method accepts a single optional
parameter that allows you to specify whether to opt in or opt out the
discovered network and storage.
Images object
An images object represents a collection of images in the catalog.
To get help for the images object, pass it as an argument to the help() function, as
shown in the following example:
>>> help(deployer.images)
For more information about working with resource objects, see Resources,
resource collections, and methods on page 842.
Images methods
The following methods are provided for images objects:
get()
Get an image by the image ID. It accepts a single parameter: the image ID.
getOVF()
Get the OVF of an image. It accepts a single parameter: the image ID.
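For example, as an illustrative sketch only, where <image-id> is a placeholder for
a real image ID and not a value from this guide:
>>> myimage = deployer.images.get("<image-id>")
>>> myovf = deployer.images.getOVF("<image-id>")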
Image object
An Image object represents a particular image in the catalog.
To get help for the image object, pass it as an argument to the help() function, as
shown in the following example:
>>> help(deployer.image)
For more information about working with resource objects, see Resources,
resource collections, and methods on page 842.
Image attributes
Image objects provide the following attributes to work with them:
architecture
The architecture of the image.
cloud
A reference to the cloud to which this virtual machine belongs. For more
information on the properties and methods supported by Cloud objects,
enter:
>>> help(deployer.cloud)
description
The description of the image.
hypervisor
The hypervisor of the image.
id
The image ID.
repository
The repository the image is located in.
version
The version of the image.
Image methods
The following methods are provided for image objects:
link()
IMRepository object
An IMRepository object represents an IBM Installation Manager repository. Use the
IMRepository object to manage the IBM Installation Manager repository definition.
To get help for the IMRepository object on the command-line interface, pass it as
an argument to the help() function, as shown in the following example:
>>> help(deployer.imrepository)
IMRepositories object
An IMRepositories object represents the collection of IBM Installation Manager
repositories.
To get help for the IMRepositories object on the command-line interface, pass it as
an argument to the help() function, as shown in the following example:
>>> help(deployer.imrepositories)
IMRepository attributes
categoryname
Category name for the IBM Installation Manager repository.
packageidversion
The ID and version for a software package.
IMRepositories methods
createCategory(<category name>)
Create the specified category in the Installation Manager repository.
>>> deployer.imrepositories.createCategory("Test")
deleteCategory(<category name>)
Delete the specified category in the Installation Manager repository.
>>> deployer.imrepositories.deleteCategory("Test")
listCategory()
List the category names.
>>> deployer.imrepositories.listCategory()
listPackage()
List the software packages that are in the Installation Manager repository.
>>> deployer.imrepositories.listPackage()
IPs object
An IPs object represents the collection of IP addresses defined within a particular
IP group. Objects of this type are accessed using the ips attribute of the IP group in
which they are contained, as shown in the following example:
>>> myipgroup = deployer.ipgroups["my ip group name"][0]
>>> myipgroup.ips
Objects of this type are used to create, delete, iterate over, list and search for IP
addresses in IBM Cloud Orchestrator. Unlike other types of resource collections, IP
addresses have no name attribute. When searching for IP addresses within this
collection, matching is done against the ipaddress attribute.
When you are creating IPs objects, pass the IP address as a string in dotted
decimal notation to the create() method. To create multiple IPs objects, pass a list
of these strings, as shown in the following example:
>>> myipgroup.ips.create("1.2.3.4")
>>> myipgroup.ips.create(["1.2.3.5", "1.2.3.6"])
Note: Because IPs objects do not have a name property, the search string supplied
in any search operations is matched against the IP address.
You can work with IP addresses on the command line and help is available. To get
help for the IPs object, pass it as an argument to the help() function, as shown in
the following example:
>>> help(deployer.ips)
For more information about working with resource objects, see Resources,
resource collections, and methods on page 842.
IP object
An IP object represents a particular IP address defined in IBM Cloud Orchestrator.
Use the IP object to query and manipulate the IP address definition in the product.
Attributes of the IP address and relationships between the IP address and other
resources in IBM Cloud Orchestrator are represented as Jython attributes on the IP
object. Manipulate these Jython attributes using standard Jython mechanisms to
change the corresponding data in IBM Cloud Orchestrator.
IP objects are contained in the IPGroup object.
You can work with IP addresses on the command line and help is available. To get
help for the IP object, pass it as an argument to the help() function, as shown in
the following example:
>>> help(deployer.ip)
IP attributes
Unlike other types of resource collections, IP addresses have no name attribute.
When searching for IP addresses within this collection, matching is done against
the ipaddress attribute. The IP object has the following attributes:
created
The creation time of the IP, as number of seconds since midnight, January
1, 1970 UTC. When the IP is displayed, this value is shown as the date and
time in the local time zone. This field is read-only.
currentmessage
The message associated with the status of the IP. This field is read-only.
currentmessage_text
The message text describing the current message of the IP address. This
field is read-only.
currentstatus
The status of the IP. This field is read-only.
currentstatus_text
The message text describing the status of the IP address. This field is
read-only.
id
ipaddress
The IP address associated with this IP. The IP address must be unique and
must belong to the IP group under which this IP is defined. This field is
read-only.
ipgroup
A reference to the IPgroup object that contains this IP address. For more
information about the properties and methods supported by IPgroup
objects, enter the following command:
>>> help(deployer.ipgroup)
updated
The time the IP was last updated, as number of seconds since midnight,
January 1, 1970 UTC. When the IP is displayed, this value is shown as the
date and time in the local time zone. This field is read-only.
userhostname
The host name that was entered and that is associated with this IP address.
For more information about working with resource objects, see Resources,
resource collections, and methods on page 842.
IP method
The IP object has the following method associated with it:
reset
CAUTION:
Use this method only when an IP object gets into an error state. Use this
method if an IP status is active in IBM Cloud Orchestrator, but the IP is
not active because a virtual machine has been deleted.
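For example, as an illustrative sketch only (the IP group name and IP address
shown are assumptions), an IP address in an error state can be reset as follows:
>>> myipgroup = deployer.ipgroups["my ip group name"][0]   # example IP group name
>>> myip = myipgroup.ips["192.0.2.15"][0]                  # search matches the ipaddress attribute
>>> myip.reset()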
IPgroups object
An IPgroups object represents the collection of IP groups defined to the IBM Cloud
Orchestrator. Objects of this type are used to create, delete, iterate over, list and
search for IP groups on the IBM Cloud Orchestrator.
IPgroups objects contain IPs objects and IPgroups objects have many networks.
You can work with the IPgroups object on the command line and help is available.
To get help for the IPgroups object, pass it as an argument to the help() function,
as shown in the following example:
>>> help(deployer.ipgroups)
For more information about working with resource objects, see Resources,
resource collections, and methods on page 842.
IPgroup object
An IPgroup object represents a particular IP group defined in IBM Cloud
Orchestrator. Use the IPgroup object to query and manipulate the IP group
definition in the product. Attributes of the IP group are represented as Jython
attributes on the IPgroup object. Relationships between the IP group and other
resources on the IBM Cloud Orchestrator are also represented as Jython attributes
on the IPgroup object. Manipulate these Jython attributes using standard Jython
mechanisms to change the corresponding data on the IBM Cloud Orchestrator.
You can work with the IPgroup object on the command line and help is available.
To get help for the IPgroup object, pass it as an argument to the help() function, as
shown in the following example:
>>> help(deployer.ipgroup)
IPgroup attributes
When you are creating an IP group, you must provide values for the following
attributes:
v subnetaddress
v netmask
v primarydns
You can also provide values for the following attributes:
v name
v gateway
v secondarydns
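For example, as an illustrative sketch only, and assuming that the ipgroups
collection create() method accepts a dictionary of attribute values in the same
way as other resource collections, an IP group might be created as follows (all
values shown are example values):
>>> deployer.ipgroups.create({
...     "name": "my ip group name",
...     "subnetaddress": "192.0.2.0",
...     "netmask": "255.255.255.0",
...     "primarydns": "192.0.2.2",
...     "gateway": "192.0.2.1"})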
The IPgroup object has the following attributes:
alternategateway
(Optional) The alternate gateway IP address to use for the IP group
networking.
computernameprefix
(Optional) If specified, it defines the prefix to use for the computer name.
created
The creation time of the IP group, as number of seconds since midnight,
January 1, 1970 UTC. When the IP group is displayed, this value is shown
as the date and time in the local time zone. This field is read-only.
description
(Optional) A basic description of the IP Group. This can be used to provide
more details about the usage or purpose of an IP group.
domain (Optional) The domain name to use for the IP Group network.
domainsuffixes
(Optional) A comma-separated list of domain suffixes that must be added
to the network settings of the virtual machine. For example, ibm.com or
us.ibm.com.
gateway
The default gateway associated with the IP group represented as a string
in dotted decimal notation, for example: 192.168.98.1.
hostnameprefix
(Optional) If specified, it is used as the hostname's prefix in the generated
virtual machine hostname.
id
ips
The set of IP addresses defined within this IP group for use on virtual
machines. For more information about the properties and methods
supported by IPs objects, enter the following command:
>>> help(deployer.ips)
name
The display name associated with this IP group. If the name is not
specified, it defaults to the subnet address.
netmask
The network mask associated with the subnet address of the IP group that
is represented as a string in dotted decimal notation, for example:
255.255.255.0.
networks
The hypervisor network attachments associated with this IP group.
primarydns
The primary domain name system (DNS) server used for the IP group
represented as a string in dotted decimal notation, for example:
192.168.98.2.
primarywins
(Optional) The primary WINs address to use for the virtual machine. Only
used for Windows based deployments.
protocol
Specifies the protocol to be used for the IP Group network. The value can
be either dhcp or static. If dhcp, all of the IP address based networking
properties are optional, and deployments that use this IP Group are set
up using DHCP.
secondarydns
The secondary DNS server used for the IP group represented as a string in
dotted decimal notation, for example: 192.168.98.3.
secondarywins
(Optional) The secondary WINs address to use for the virtual machine.
Only used for Windows based deployments.
updated
The time the IP group was last updated, as number of seconds since
midnight, January 1, 1970 UTC. When the IP group is displayed, this value
is shown as the date and time in the local time zone. This field is
read-only.
version
The version of the IP addresses for the IP group. The valid value is IPv4 or
IPv6.
Attention: Workloads that require IP caching must be deployed to cloud
groups with only IPv4 IP groups.
workgroup
(Optional) The Windows workgroup name to use for the virtual machine.
Only used for Windows based deployments.
For more information about working with resource objects, see Resources,
resource collections, and methods on page 842.
Related concepts:
Resources, resource collections, and methods on page 842
IBM Cloud Orchestrator manages different types of resources, for example,
patterns, virtual images, and virtual system instances. Within the command-line
interface, Jython objects are used to represent these resources and collections of
these resources. Methods control the behavior of the Jython objects.
Related information:
Jython
Python documentation
MailDelivery object
Namespace for mail settings. The MailDelivery object has the following attributes:
replytoaddress
Reply-to address (set to "" to use system administrator address)
smtpserver
The SMTP server used by IBM Cloud Orchestrator to send email. The
value is a string containing the host name or IP address of the SMTP
server. IP addresses must be specified in dotted decimal notation.
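For example, as an illustrative sketch only, and assuming that the mail settings
namespace is exposed on the command line as deployer.maildelivery (the server
address and reply-to address shown are example values):
>>> deployer.maildelivery.smtpserver = "192.0.2.25"
>>> deployer.maildelivery.replytoaddress = "cloudadmin@example.com"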
Related tasks:
Using the command-line interface on page 735
You can perform administrative functions in IBM Cloud Orchestrator by using the
command-line interface tool provided with the product.
Related information:
Jython
Python documentation
Networks object
A networks object represents the collection of networks defined to IBM Cloud
Orchestrator. Objects of this type are used to delete, iterate over, list and search for
networks on IBM Cloud Orchestrator.
Note: Networks objects are automatically created by IBM Cloud Orchestrator and
cannot be manually defined.
You can work with networks objects on the command line and help is available. To
get help for the networks object, pass it as an argument to the help() function, as
shown in the following example:
>>> help(deployer.networks)
For more information about working with resource objects, see the Resources,
resource collections, and methods on page 842 section.
Network object
A network object represents a particular network defined in IBM Cloud
Orchestrator. Use the network object to query and manipulate the network
definition in the product. Attributes of the network object are represented as Jython
attributes on the network object. Relationships between the network object and
other resources on IBM Cloud Orchestrator are also represented as Jython
attributes on the network object. Manipulate these Jython attributes using standard
Jython mechanisms to change the corresponding data on the IBM Cloud
Orchestrator.
Network objects belong to hypervisors and IP groups.
You can work with network objects on the command line and help is available. To
get help for the network object, pass it as an argument to the help() function, as
shown in the following example:
>>> help(deployer.network)
Network attributes
The network object has the following attributes:
created
The creation time of the network, as number of seconds since midnight,
January 1, 1970 UTC. When the network is shown, this value is shown as
the date and time in the local time zone. This field is read-only.
currentmessage
The message associated with the status of the network. This field is
read-only.
currentmessage_text
The message text describing the status of the network. This field is
read-only.
currentstatus
The status of the network. This field is read-only.
currentstatus_text
The text describing the status of the network. This field is read-only.
hypervisor
A reference to the hypervisor that owns this network connection. For more
information about the properties and methods supported by hypervisor
objects, enter the following command:
>>> help(deployer.hypervisor)
id
ipgroup
A reference to the IPGroup object to which this network is attached. For
more information about the properties and methods supported by IPGroup
objects, enter the following command:
>>> help(deployer.ipgroup)
name
The name associated with this network. Each network must have a unique
name.
updated
The time the network was last updated, as number of seconds since
midnight, January 1, 1970 UTC. When the network is displayed, this value
is shown as the date and time in the local time zone. This field is
read-only.
vlan
Specifies the virtual local area network (VLAN) associated with this
network. This value must be an integer value in the 0 - 4095 range,
inclusive.
Related information:
Jython
Python documentation
PatternType object
A PatternType object represents a particular pattern type defined on IBM Cloud
Orchestrator. Use the PatternType object to query and manipulate the pattern type
definition. Attributes of the pattern type and relationships between the pattern
type and other resources on IBM Cloud Orchestrator are represented as Jython
attributes on the PatternType object. Manipulate these Jython attributes using
standard Jython mechanisms to make changes to the corresponding data on IBM
Cloud Orchestrator. To get help for the PatternType object, pass it as an argument
to the help() function, as shown in the following example:
>>> help(deployer.patterntype)
PatternTypes object
A PatternTypes object represents the collection of pattern types defined to IBM
Cloud Orchestrator. Objects of this type are used to create, delete, iterate over, list
and search for pattern types on IBM Cloud Orchestrator. To get help for the
PatternTypes object, pass it as an argument to the help() function, as shown in the
following example:
>>> help(deployer.patterntypes)
PatternType attributes
The PatternType object has the following attributes:
shortname
The short name of the pattern type.
name
version
The version of the pattern type.
description
The description of the pattern type.
status The status of the pattern type.
required
The prerequisites of the pattern type.
PatternType methods
The PatternTypes and PatternType objects have the following methods:
acceptLicense
Accept the license of a given pattern type.
deployer.patterntypes.get(<shortname>, <version>).acceptLicense()
Plugin object
A Plugin object represents a particular plug-in defined in IBM Cloud Orchestrator.
Use the Plugin object to query and manipulate the plug-in definition. Attributes of
the plug-in and relationships between the plug-in and other resources in IBM
Cloud Orchestrator are represented as Jython attributes on the plug-in object.
Manipulate these Jython attributes using standard Jython mechanisms to make
changes to the corresponding data on the product.
To get help for the Plugin object, pass it as an argument to the help() function, as
shown in the following example:
>>> help(deployer.plugin)
Plugin attributes
The Plugin object has the following attributes:
create_time
The creation time of the plug-in.
creator
Creator of the plug-in.
description
The description of the plug-in.
last_modified
Time the plug-in was updated.
last_modifier
The last user who updated the plug-in.
name
Plugin methods
The Plugin object has the following methods:
list
List all plug-ins
deployer.plugins
or
deployer.plugins.list
Diagnostics object
Returns the Diagnostics object representing the diagnostics package for the IBM
Cloud Orchestrator.
Help is available on the command-line interface for the Diagnostics object. To get
help, pass the Diagnostics object as an argument to the help() function, as shown
in the following example:
>>> help(deployer.diagnostics)
The Diagnostics object has one method, the get method. The get method
downloads the diagnostics package as a compressed file. This method takes an
optional path where the file is stored; the default path is ./trace.zip, as shown in
the following examples:
>>> deployer.diagnostics.get()
>>> deployer.diagnostics.get("/some/path/diagnostics.zip")
Trace object
The Trace object returns a TraceFile object representing the running trace file on
the IBM Cloud Orchestrator.
Help is available for the Trace object on the command-line interface. To get help,
pass the Trace object as an argument to the help() function, as shown in the
following example:
>>> help(deployer.trace)
Trace methods
The Trace object has the following methods:
add
Adds a logger and optional log level to the trace file specification. Logger
names use Java package name syntax and log levels are one of the
following values:
v OFF
v SEVERE
v WARNING
v CONFIG
v INFO
v FINE
v FINER
v FINEST
The default value is OFF. The add method is shown in the following
examples:
>>> deployer.trace.add("com.ibm.ws.deployer", "FINE")
>>> deployer.trace.add("com.ibm.ws.deployer.not.interested")
remove
Removes an existing logger from the trace file specification. Logger names
use Java package name syntax, as shown in the following example:
>>> deployer.trace.remove("com.ibm.ws.deployer.not.interested")
set
Sets the log level for an existing logger in the trace file specification.
Logger names use Java package name syntax and log levels are one of the
following values:
v OFF
v SEVERE
v WARNING
v CONFIG
v INFO
v FINE
v FINER
v FINEST
The set method is shown in the following examples:
>>> deployer.trace.set("com.ibm.ws.deployer", "FINE")
>>> deployer.trace.set("com.ibm.ws.deployer", "SEVERE")
spec
Returns a map with the trace file specification for the IBM Cloud
Orchestrator. The map has key-value pairs in which the key is the package
name and the value is the log level.
tail
Prints the last <n> lines of the file, where <n> is an integer, as shown in
the following example:
>>> deployer.trace.tail()
>>> deployer.trace.tail(100)
Errors object
The Errors object returns an ErrorFile object representing the running error file
on the IBM Cloud Orchestrator.
Help is available for the Errors object on the command-line interface. To get help,
pass the Errors object as an argument to the help() function, as shown in the
following example:
>>> help(deployer.errors)
The Errors object has one method, the tail method. The tail method prints the last
<n> lines of the file, in which <n> is an integer. The tail method is shown in the
following example:
>>> deployer.errors.tail()
>>> deployer.errors.tail(100)
Scripts object
A Scripts object represents the collection of script packages defined to IBM Cloud
Orchestrator. Objects of this type are used to create, delete, iterate over, list and
search for script packages on the IBM Cloud Orchestrator.
Help is available for the Scripts object on the command-line interface. To get help,
pass the Scripts object as an argument to the help() function, as shown in the
following example:
>>> help(deployer.scripts)
For more information about working with resource objects, see Resources,
resource collections, and methods on page 842.
Script object
A Script object represents a particular script defined in IBM Cloud Orchestrator.
Use the Script object to query and manipulate the script definition in the product.
Attributes of the script and relationships between the script and other resources in
IBM Cloud Orchestrator are represented as Jython attributes on the Script object.
Manipulate these Jython attributes using standard Jython mechanisms to change
the corresponding data in IBM Cloud Orchestrator.
Help is available for the Script object on the command-line interface. To get help,
pass the Script object as an argument to the help() function, as shown in the
following example:
>>> help(deployer.script)
Script attributes
The Script object has the following attributes:
acl
The access control list for this script. This field is read-only. For additional
help about using this object, see ACL object on page 862 or enter the
following command:
>>> help(deployer.acl)
archive
The script archive object associated with this script. For more information,
see Script.Archive object on page 780. This field is read-only.
command
The command to be run for this script package. This field contains a string
value with a maximum of 4098 characters.
commandargs
The arguments passed to the command. This field contains a string value
with a maximum of 4098 characters.
created
The creation time of the script, as number of seconds since midnight,
January 1, 1970 UTC. When the script is displayed, this value is shown as
the date and time in the local timezone. This field is read-only.
currentstatus
The status of the script package. This field contains an eight character
string value that is generated by the product.
currentstatus_text
A textual representation of currentstatus in the preferred language of the
requester. This string is automatically generated by the product. This field
is read-only.
description
The description of the script package. This field contains a string value
with a maximum of 1024 characters.
environment
Manages the key/value pairs that define the environment, or parameters,
of the script. The environment property holds the script keys and default
values for the script package. It is used like a Jython dict object, as
shown in the following example:
>>> myscript.environment
{
"scriptkey1": "value for scriptkey1",
"scriptkey2": "value for scriptkey2"
}
>>> myscript.environment["scriptkey1"]
value for scriptkey1
>>> myscript.environment["foo"] = "bar"
>>> myscript.environment
{
"foo": "bar",
"scriptkey1": "value for scriptkey1",
"scriptkey2": "value for scriptkey2"
}
>>> del myscript.environment["foo"]
>>> myscript.environment
{
"scriptkey1": "value for scriptkey1",
"scriptkey2": "value for scriptkey2"
}
label
location
The directory, on the virtual machine, into which files for this script
package are to be placed. This field contains a string value with a
maximum of 4098 characters.
log
The directory on the virtual machine to hold the log files generated by this
script package. This field contains a string value with a maximum of 4098
characters.
name
The name associated with this script. Each script must have a unique
name. This field contains a string value with a maximum of 1024
characters.
owner
A user object that references the owner of this script package. For more
information about the properties and methods supported by user objects,
enter the following command:
>>> help(deployer.user)
ostype The operating system where the script package can run. Specify one of the
following values:
linux/unix
Specifies the script package is applicable to Linux or Unix systems.
windows
Specifies the script package is applicable to Windows systems.
both
timeout
The maximum amount of time to wait for this script package to finish
running on the virtual machine. Specify the timeout, as the number of
milliseconds to wait, or 0 (zero) to wait indefinitely for the script package
to complete. The value of this attribute is an integer.
updated
The time the script was last updated, as number of seconds since midnight,
January 1, 1970 UTC. When the script is displayed, this value is shown as
the date and time in the local timezone. This field is read-only.
Script methods
The Script object has the following methods:
clone
Creates a copy of this script package with all the same files, fields, and
settings. The name of the new script is provided and the acl attribute is
empty.
isDraft
Indicates if this script is in draft mode.
isReadOnly
Indicates if this script is read-only.
makeReadOnly
Makes this script read-only. When the script is made read-only, the script
cannot be modified.
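For example, as an illustrative sketch only (the script package names shown are
assumptions, not values from this guide), a script package can be copied and the
copy made read-only as follows:
>>> myscript = deployer.scripts["My script"][0]    # example script package name
>>> newscript = myscript.clone("My script copy")   # example name for the copy
>>> newscript.makeReadOnly()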
Script.Archive object
A Script.Archive object represents the archive file associated with a particular
script in IBM Cloud Orchestrator. This object provides mechanisms to query and
manipulate the script archive in the product.
Script.Archive methods
The Script.Archive object has the following methods:
get
This method retrieves the archive currently associated with the script. This
method has one required parameter that indicates where the script archive
is to be saved. It can be either of the following values:
v A string containing the name of a file in which to save the archive. If the
file name does not end in .zip, .zip is automatically appended to the
file name.
v A Jython file object, as shown in the following example:
>>> myscript.archive.get("/path/to/foo.zip")
You must ensure that the file object can correctly handle binary data.
The script archive is returned in a compressed (.zip) file format.
__lshift__
This method is started implicitly when the Archive object is used as the
left argument of a left shift operator ( << ). It calls the set() method with
the right argument of the operator. This method is shown in the following
example:
>>> myscript.archive << '/path/to/file'
__rshift__
This method is started implicitly when the Archive object is used as the
left argument of a right shift operator ( >> ). It calls the get() method with
the right argument of the operator. This method is shown in the following
example:
>>> myscript.archive >> '/path/to/file.zip'
set
This method sets the archive associated with the script. It has one required
parameter that indicates the source of the script archive to be uploaded. It
can be either of the following values:
v A string containing the name of a file from which to get the archive.
v A Jython file object.
Ensure that the file object can correctly handle binary data. This method is
shown in the following example:
>>> myscript.archive.set('/path/to/foo')
For more information about working with resource objects, see Resources,
resource collections, and methods on page 842.
You can control the user access to scripts with the ACL object. For more information
about the ACL object, see the ACL object on page 862 topic.
Related concepts:
Resources, resource collections, and methods on page 842
IBM Cloud Orchestrator manages different types of resources, for example,
patterns, virtual images, and virtual system instances. Within the command-line
interface, Jython objects are used to represent these resources and collections of
these resources. Methods control the behavior of the Jython objects.
Related reference:
ACL object on page 862
You can use the access control list (ACL) object to set and control user access for
other IBM Cloud Orchestrator resources.
Related information:
Jython
Python documentation
Snapshots object
A Snapshots object represents the collection of snapshots taken for a particular
virtual system instance. Objects of this type are accessed using the snapshots
property of the VirtualSystem object in which they are contained, as shown in the
following example:
>>> myvs = deployer.virtualsystems['my virtualsystem'][0]
>>> myvs.snapshots
Objects of this type are used to create, delete, iterate over, list and search for
snapshots on the IBM Cloud Orchestrator.
To get help on the command-line interface for the snapshots object, pass the name
of the object as an argument to the help() function. See the following example:
>>> help(deployer.snapshots)
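For example, the following minimal sketch (the virtual system name is
illustrative) iterates over the snapshots of an instance:
>>> myvs = deployer.virtualsystems['my virtualsystem'][0]
>>> for snapshot in myvs.snapshots:
...     print snapshot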
Snapshot object
A Snapshot object represents a particular snapshot defined on the IBM Cloud
Orchestrator. Use the Snapshot object to query and manipulate the snapshot
definition in the product. Attributes of the Snapshot object are represented as
Jython attributes on the Snapshot object. Relationships between the Snapshot object
and other resources on the IBM Cloud Orchestrator are also represented as Jython
attributes on the Snapshot object. Manipulate these Jython attributes using
standard Jython mechanisms to change the corresponding data on the IBM Cloud
Orchestrator.
Help is available on the command-line interface for the Snapshot object. To get
help, pass the Snapshot object as an argument to the help() function, as shown in
the following example:
>>> help(deployer.snapshot)
For more information about the command-line interface, see the Using the
command-line interface on page 735 section. For more information about working
with resources on the command-line interface, see the Resources, resource
collections, and methods on page 842 section.
You can control user access to virtual system instances using the ACL object. For
more information about the ACL object, see the ACL object on page 862 topic.
Related tasks:
Using the command-line interface on page 735
You can perform administrative functions in IBM Cloud Orchestrator by using the
command-line interface tool provided with the product.
Related information:
Jython
Python documentation
Storages object
A storages object represents the collection of hypervisor storage defined to the
IBM Cloud Orchestrator. Objects of this type are used to delete, iterate over, list
and search for storage devices on IBM Cloud Orchestrator.
You can work with storage on the command line and help is available. To get help
for the storages object, pass it as an argument to the help() function, as shown in
the following example:
>>> help(deployer.storages)
Storage object
A storage object represents a particular storage defined in IBM Cloud Orchestrator.
Use the storage object to query and manipulate the storage definition on the
product. Attributes of the storage object are represented as Jython attributes on the
storage object. Relationships between the storage object and other resources in the
IBM Cloud Orchestrator are also represented as Jython attributes on the storage
object. Manipulate these Jython attributes using standard Jython mechanisms to
change the corresponding data in IBM Cloud Orchestrator.
You can work with storage on the command line and help is available. To get help
for the storage object, pass it as an argument to the help() function, as shown in
the following example:
>>> help(deployer.storage)
Storage attributes
The storage object has the following attributes:
created
The creation time of the storage object, as number of seconds since
midnight, January 1, 1970 UTC. When the storage object is displayed, this
value is shown as the date and time in the local time zone. This field is
read-only.
hypervisors
The set of hypervisors that are attached to this storage.
hypervisorstorageid
The identifier used by hypervisors to identify this storage. It is
automatically determined so this field is read-only.
id
name
The name associated with this storage object. Each storage object must
have a unique name.
updated
The time the storage was last updated, as number of seconds since
midnight, January 1, 1970 UTC. When the storage is displayed, this value
is shown as the date and time in the local time zone. This field is
read-only.
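For example, the following minimal sketch (which assumes the collection is
accessed as deployer.storages and that at least one storage object exists) reads a
few attributes of the first storage object:
>>> stor = deployer.storages[0]
>>> stor.name
>>> stor.hypervisorstorageid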
Related information:
Jython
Python documentation
VirtualApplication object
A VirtualApplication object represents a particular virtual application instance
defined on IBM Cloud Orchestrator. Use the VirtualApplication object to query
and manipulate the virtual application instance definition. Attributes of the virtual
application instance and relationships between the virtual application instance and
other resources on IBM Cloud Orchestrator are represented as Jython attributes on
the VirtualApplication object. Manipulate these Jython attributes using standard
Jython mechanisms to make changes to the corresponding data on IBM Cloud
Orchestrator. To get help for the VirtualApplication object, pass it as an argument
to the help() function, as shown in the following example:
>>> help(deployer.virtualapplication)
VirtualApplications object
A VirtualApplications object represents the collection of virtual application
instances defined to IBM Cloud Orchestrator. Objects of this type are used to
create, delete, iterate over, list and search for virtual application instances on IBM
Cloud Orchestrator. To get help for the VirtualApplications object, pass it as an
argument to the help() function, as shown in the following example:
>>> help(deployer.virtualapplications)
VirtualApplication methods
The VirtualApplication and VirtualApplications object have the following
methods:
list
List all virtual application instances.
deployer.virtualapplications
or
deployer.virtualapplications.list
ApplicationPattern object
An ApplicationPattern object represents a particular virtual application pattern
defined on IBM Cloud Orchestrator. Use the ApplicationPattern object to query
and manipulate the virtual application pattern definition. Attributes of the virtual
application pattern and relationships between the virtual application pattern and
other resources on IBM Cloud Orchestrator are represented as Jython attributes on
the ApplicationPattern object. Manipulate these Jython attributes using standard
Jython mechanisms to make changes to the corresponding data on IBM Cloud
Orchestrator. To get help for the ApplicationPattern object, pass it as an argument
to the help() function, as shown in the following example:
>>> help(deployer.application)
ApplicationPatterns object
An ApplicationPatterns object represents the collection of virtual application
patterns defined to IBM Cloud Orchestrator. Objects of this type are used to create,
delete, iterate over, list and search for virtual application patterns on IBM Cloud
Orchestrator. To get help for the ApplicationPatterns object, pass it as an
argument to the help() function, as shown in the following example:
>>> help(deployer.applications)
ApplicationPattern attributes
The ApplicationPattern object has the following attributes:
acl
ApplicationPattern methods
The ApplicationPattern and ApplicationPatterns object have the following
methods:
list
Create an application pattern with JSON file or .zip file (the file type is
decided by the file extension). For example:
v Use specific attributes:
>>> deployer.applications.create({'name':'demoApp'})
You can use ssh-keygen to generate SSH keys and save the public key in a
file.
The <certFile> and <params> parameters are optional.
The format of <params> is:
{ node_link_id.attributeId: attributeValue, groups:{node_link_id.groupId: True/False} }.
Example:
sample=deployer.applications.get('a-b62aeddb-6b43-4421-a0b6-df41b44c5407')
env=deployer.environmentprofiles[0]
cloud = env.clouds.keys()[0]
ipgroup=env.clouds[cloud].ipgroups.keys()[0]
deployOptions = {"environment_profile" : env,
"cloud_group": cloud,
"ip_group": ipgroup,
"ip_version": "IPv4"
}
vapp = sample.deploy("env_test", deployOptions)
refresh
Call this method before getting status of the instance.
deployer.virtualapplications.get(<depl_id>).refresh()
or
deployer.virtualapplications[<index>].refresh()
getMetrics
Get monitoring metrics data of a server.
deployer.virtualapplications[<index>].monitoring.servers[<index>].getMetrics()
Sample output:
>>>virtualapplication = deployer.virtualapplications[0]
>>>servers = virtualapplication.monitoring.servers
>>> servers[0].getMetrics()
{
"CPU": {
"IO_Wait": 0.75,
"Idle_CPU": 93.17,
"System_CPU": 3.69,
"Time_Stamp": 1302279676016,
"User_CPU": 2.39
},
"DISK": {
"Blocks_Reads_Per_Second": 0,
"Blocks_Written_Per_Second": 13867,
"Time_Stamp": 1302279676016
},
"MEMORY": {
"Memory_Cache": 571.7,
"Memory_Free": 33.38,
"Memory_Free_Percent": 2,
"Memory_Total": 2000.0,
"Memory_Used": 1966.61,
"Memory_Used_Percent": 98,
"Swap_Free_Percent": 100,
"Swap_Used_Percent": 0,
"Time_Stamp": 1302279676016
},
"NETWORK": {
"Bytes_Received_per_sec": 7402,
"Bytes_Transmitted_per_sec": 4172,
"Time_Stamp": 1302279676016
},
"PaaS": {
"Private_IP": "10.102.129.79",
"Time_Stamp": 1302279676016,
"availability": "NORMAL",
"deploymentId": "d-694ca59c-a212-4b32-b083-d860a871e7f2",
"serverName": "application-was.11302175160418",
"state": "RUNNING"
}
}
monitoring roles
Get monitoring metrics data of a role.
deployer.virtualapplications[0].monitoring.roles[<index>].getMetrics(<metricType>)
Sample output:
>>>virtualapplication = deployer.virtualapplications[0]
>>>roles= virtualapplication.monitoring.roles
>>> roles[0].getMetrics()
{
"PaaS": {
"Private_IP": "10.102.129.79",
"Time_Stamp": 1302279782294,
"availability": "UNKNOWN",
"deploymentId": "d-694ca59c-a212-4b32-b083-d860a871e7f2",
"roleName": "application-was.11302175160418.WAS",
"serverName": "application-was.11302175160418",
"state": "RUNNING"
},
"WAS_JDBCConnectionPools": {
"MaxPercentUsed": 0,
"MaxWaitTime": 0,
"MinPercentUsed": 0,
"MinWaitTime": 0,
"PercentUsed": 0,
"Time_Stamp": 1302279782294,
"WaitTime": 0
},
"WAS_JVMRuntime": {
"HeapSize": 114055,
"JVMHeapUsed": 48,
"Time_Stamp": 1302279782294,
"UsedMemory": 55756
},
"WAS_TransactionManager": {
"ActiveCount": 0,
"CommittedCount": 150,
"RolledbackCount": 0,
"Time_Stamp": 1302279782294
},
"WAS_WebApplications": {
"MaxServiceTime": 0,
"MinServiceTime": 0,
"RequestCount": 0,
"ServiceTime": 0,
"Time_Stamp": 1302279782294
}
}
getLogs
List all logs that are available for downloading.
deployer.virtualapplications[<index>].vminstances().instances[<instance_index>].logging.getLogs()
Sample output:
>>> deployer.virtualapplications[0].vminstances().instances[0].logging.getLogs()
{DB2:[/home/db2inst1/sqllib/log/instance.log,
/home/db2inst1/sqllib/db2dump/stmmlog/stmm.0.log,
/home/db2inst1/sqllib/db2dump/db2inst1.nfy, /home/db2inst1/sqllib/db2dump/db2diag.log],
IWD Agent:
[/opt/IBM/maestro/agent/usr/servers/Database-db2.11319107992348/logs/Database-db2.11319107992348.DB2/trace.log,
/opt/IBM/maestro/agent/usr/servers/Database-db2.11319107992348/logs/Database-db2.11319107992348.DB2/console.log,
/opt/IBM/maestro/agent/usr/servers/Database-db2.11319107992348/logs/Database-db2.11319107992348.systemupdate/
console.log,
/0config/0config.log], OS: [/var/log/boot.log, /var/log/maillog, /var/log/messages,
/var/log/brcm-iscsi.log, /var/log/acpid, /var/log/wtmp, /var/log/yum.log, /var/log/mcelog,
/var/log/spooler, /var/log/dmesg, /var/log/secure, /var/log/cron, /var/log/rpmpkgs]}
download logs
deployer.virtualapplications[<index>].vminstances().instances[<instance_index>].logging.
download(<log file name>, <the local file path and file name to be saved>)
Sample output:
>>> deployer.virtualapplications[0].vminstances().instances[0].logging.download
("/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/logs/server1/SystemOut.log", "E:/test.log")
>>>
VirtualImages object
A VirtualImages object represents the collection of virtual images defined to IBM
Cloud Orchestrator. Objects of this type are used to create, delete, iterate over, list
and search for virtual images on the IBM Cloud Orchestrator.
You can work with virtual images on the command line and help is available. To
get help for the VirtualImages object, pass it as an argument to the help()
function, as shown in the following example:
>>> help(deployer.virtualimages)
VirtualImages attributes
progressIndicators
This boolean attribute specifies if a progress indicator displays when
uploading a virtual image from the command-line interface. The default
value of the progressIndicators attribute is false.
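For example, the following minimal sketch (the file path is illustrative) enables
the progress indicator before an upload:
>>> deployer.virtualimages.progressIndicators = True
>>> deployer.virtualimages.create('/path/to/foo.ova')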
VirtualImages methods
The VirtualImages object has the methods described for a typical resource
collection. The following methods are unique to VirtualImages as their parameters
and return values differ from what is expected:
create Imports a new virtual image or images to the IBM Cloud Orchestrator. The
attributes to import the new virtual images can be specified in any of the
following ways:
v As a string specifying the URL from which the virtual image can be
downloaded, as shown in the following example:
>>> deployer.virtualimages.create('http://server.xyz.com/path/to/foo.ova')
v As the name of a local virtual image open virtual appliance (OVA) file to
be uploaded to IBM Cloud Orchestrator, as shown in the following
example:
>>> deployer.virtualimages.create('/path/to/foo.ova')
Tip: Passing the OVA file to the product takes up more space on the
machine than specifying the URL of the OVA file. If space is an issue,
consider pointing to the OVA file with a string specifying the URL
instead.
v As a Jython file object that references the local OVA file, as shown in the
following example:
>>> deployer.virtualimages.create(open('/path/to/foo.ova', 'rb'))
This method returns a VirtualImage object for the newly created virtual
image, or a list of VirtualImage objects if multiple virtual images were
created. Because of their size, importing virtual images takes several
minutes. If the location of the OVA file was specified using a local file
name or file object, the OVA file is uploaded to the IBM Cloud
Orchestrator before this method returns. Otherwise, this method queues
the operation on the IBM Cloud Orchestrator and returns immediately. The
returned VirtualImage objects can be used to track the status of the import
process on the IBM Cloud Orchestrator.
import
The import method is an alias for the create() method and uses the same
parameters and return values.
link
Links one or more virtual images to the IBM Cloud Orchestrator image
catalog. This method accepts the following parameters:
v Cloud group ID as first attribute.
v One or more OpenStack virtual image IDs separated by a comma. To
identify the virtual image IDs, use the nova image-list command in
your OpenStack environment.
The following example shows the link method:
>>> deployer.virtualimages.link(1, '2496c17c-c303-4dd1-9008-0d16f8161b9c',
'fab6d6be-dc4f-4dbe-bd41-4c9abc42b13e',
'0adaaaba-ce86-48bf-8d62-c28a0cb0ebb1')
For more information about working with resource objects, see the Resources,
resource collections, and methods on page 842 section.
VirtualImage object
A VirtualImage object represents a particular virtual image defined to IBM Cloud
Orchestrator. Use the VirtualImage object to query and manipulate the virtual
image definition in IBM Cloud Orchestrator. Attributes of the virtual image
resource and relationships between the virtual image and other resources in IBM
Cloud Orchestrator are represented as Jython attributes on the VirtualImage object.
Manipulate these Jython attributes using standard Jython mechanisms to change
the corresponding data on the IBM Cloud Orchestrator.
You can work with a virtual image on the command line and help is available. To
get help for the VirtualImage object, pass it as an argument to the help() function,
as shown in the following example:
>>> help(deployer.virtualimage)
VirtualImage attributes
The VirtualImage object has the following attributes, all of which are read-only:
acl
advancedoptionsaccepted
This attribute specifies if the virtual image's advanced options are enabled.
The default value for this boolean attribute is false.
build
created
The creation time of the virtual image, as number of seconds since
midnight, January 1, 1970 UTC. When the virtual image displays, this
value is shown as the date and time in the local time zone.
currentmessage
The message associated with the status of the virtual image. This field
contains an eight character string value that is generated by the product.
currentmessage_text
This attribute is a string representation of currentmessage in the preferred
language of the requester. This attribute is automatically generated by the
product.
currentstatus
The status of the virtual image. This field contains an eight character string
value that is generated by the product.
currentstatus_text
This attribute is a string representation of currentstatus in the preferred
language of the requester. This attribute is automatically generated by the
product.
description
The description of the virtual image. This field contains a string value with
a maximum of 1024 characters.
hardware
The default hardware configuration for the virtual image.
id
license
The license attribute contains complete information about the licenses
defined in the virtual image. A virtual image contains some number of
licenses organized into some number of collections. Exactly one license
from each collection must be accepted before the virtual image can be
used. The value of the licenses attribute is a Python dict object. Each key
in the dict object is the string ID of a collection of licenses. Each value is a
nested Python dict object with the following keys and values:
label
licenses
An array of Python dict objects, each of which describes one
license. These dict objects have the following keys and values:
label
licenseid
The ID of the license.
text
The value of the licenses attribute is generated by the product and cannot
be set. The value of the dict object is not protected against updates, but
such changes have no effect.
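For example, the following minimal sketch (which assumes an existing
VirtualImage object named myvi and treats the attribute as the Python dict
described above) walks the nested license structure:
>>> for collectionid, collection in myvi.license.items():
...     for lic in collection['licenses']:
...         print collectionid, lic['licenseid']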
licenseaccepted
This attribute indicates if the license from this virtual image has been
accepted. IBM Cloud Orchestrator does not allow a virtual image to be
used until the license is accepted. The license from the virtual image must
be retrieved before it can be accepted. The following values are valid:
'T'
'F'
name
The display name associated with this virtual image. This attribute is
generated automatically when the virtual image is imported. This field
contains a string value with a maximum of 1024 characters.
operatingsystemdescription
This attribute specifies a textual description of the operating system used
by the virtual image. The value of this attribute is a string and might be
None if the OVA of the virtual image does not supply a value.
operatingsystemid
This read-only attribute specifies the ID of the guest operating system used
by the virtual image. The value of this attribute is an integer that
corresponds to one of the constants defined for
CIM_OperatingSystem.OSType.
operatingsystemversion
This read-only attribute specifies the version of the guest operating system
used by the virtual image. The value of this attribute is a string and might
be None if the OVA of the virtual image does not supply a value.
owner
A user object that references the owner of this virtual image. This attribute
is an integer value. For more information about the properties and
methods supported by user objects, enter the following command:
>>> help(deployer.user)
parts
The parts that can be realized by this virtual image. For more information
about the operations available for this list of parts, see Parts on the
command-line interface on page 839. You can also get help for
deployer.parts, as shown in the following example:
>>> help(deployer.parts)
pmtype The type of machine supported by this virtual image. This field contains a
string value with a maximum of eight characters.
productids
This attribute specifies all of the product IDs associated with this virtual
image.
servicelevel
The service level of the attribute. This field contains a string value with a
maximum of 1024 characters.
updated
The time the virtual image was last updated, as number of seconds since
midnight, January 1, 1970 UTC. When the virtual image is displayed, this
value is shown as the date and time in the local time zone.
version
The version of the virtual image. This field contains a string value with a
maximum of 1024 characters.
VirtualImage methods
The VirtualImage object has the following methods:
acceptLicense(licenses)
This method accepts the license presented by the virtual image. The IBM
Cloud Orchestrator cannot use a virtual image until you have accepted its
license. This method accepts a single optional parameter that allows you to
specify which license for this virtual image is being accepted. The value for
this parameter must be a Python dict object in which each key is a license
collection ID for the virtual image. Each value is the license ID for the
license to be accepted from that collection. If this parameter is not
supplied, the product behaves as if an arbitrary license from each license
collection has been accepted. The following example shows the usage of
this method:
>>> myvi.acceptLicense({ 'collection1': 'license3',
'collection2': 'license17' })
For more information about virtual image licenses, enter the following
command:
>>> help(deployer.virtualimage.license)
addProductid
Adds a new product ID to all the virtual image parts. This method accepts
the following parameters:
productid
The product ID that you want to add. This parameter is required.
licensetype
The license type for this product ID. Valid values are:
v PVU
v Server
The default value is PVU. This parameter is required.
licensecpu
The processor count limit for a server license. This parameter is
required for the server license type.
licensememory
The memory limit in GB for a server license. This parameter is
required for the server license type.
The following example shows the addProductid method:
>>> myimage[0].addProductid('5724-X89', 'PVU')
>>> myimage[0].addProductid('5724-X89', 'Server', 4, 32)
deleteProductid
Delete a product ID from all of the virtual image parts. This method
accepts the following parameters:
productid
The product ID that you want to delete. This parameter is
required.
licensetype
The license type for this product ID. Valid values are:
v PVU
v Server
The default value is PVU. This parameter is required.
The following example shows the deleteProductid method:
>>> myimage[0].deleteProductid('5724-X89', 'PVU')
For more information about working with resource objects, see the Resources,
resource collections, and methods on page 842 section.
You can control the user access to virtual images with the ACL object. For more
information about the ACL object, see the ACL object on page 862 information.
Related concepts:
Resources, resource collections, and methods on page 842
IBM Cloud Orchestrator manages different types of resources, for example,
patterns, virtual images, and virtual system instances. Within the command-line
interface, Jython objects are used to represent these resources and collections of
these resources. Methods control the behavior of the Jython objects.
Related tasks:
Chapter 6, Managing virtual images, on page 331
You can manage virtual images that can be deployed by using IBM Cloud
Orchestrator.
Related reference:
ACL object on page 862
You can use the access control list (ACL) object to set and control user access for
other IBM Cloud Orchestrator resources.
Virtual images REST API on page 985
You can use the representational state transfer (REST) application programming
interface (API) to manage virtual images.
Related information:
Jython
Python documentation
VirtualImageMappings object
A VirtualImageMappings object represents the collection of virtual image mappings
defined to IBM Cloud Orchestrator. Objects of this type are used to create, delete,
iterate over, list and search for virtual image mappings on the IBM Cloud
Orchestrator.
You can work with virtual image mappings on the command line and help is
available. To get help for the VirtualImageMappings object, pass it as an argument
to the help() function, as shown in the following example:
>>> help(deployer.virtualimagemappings)
VirtualImageMappings methods
The VirtualImageMappings object has the methods described for a typical resource
collection. The following method is unique to VirtualImageMappings as its
parameters and return values differ from what is expected:
create Creates a new virtual image mapping between pattern engine virtual
image and OpenStack image on IBM Cloud Orchestrator. The attributes to
create a new virtual image mapping can be specified as a dict object in the
following ways:
v To create a mapping between pattern engine virtual image and already
existing OpenStack image using the OpenStack image UUID:
>>> deployer.virtualimagemappings.create({'templateid': 1, 'cloudid': 2,
'region': 'RegionOne', 'uuid': 'edd897e3-cd48-4a6f-8dd2-d6c3d11c0375'})
VirtualImageMapping object
A VirtualImageMapping object represents a relation between a particular virtual
image defined to IBM Cloud Orchestrator pattern engine and an image defined in
OpenStack region. Use the VirtualImageMapping object to query and manipulate
the virtual image mapping definition in IBM Cloud Orchestrator. Attributes of the
virtual image mapping resource and relationships between the virtual image
mapping and other resources in IBM Cloud Orchestrator are represented as Jython
attributes on the VirtualImageMapping object. Manipulate these Jython attributes
using standard Jython mechanisms to change the corresponding data on the
IBM Cloud Orchestrator.
You can work with a virtual image mapping on the command line and help is
available. To get help for the VirtualImageMapping object, pass it as an argument to the help() function,
as shown in the following example:
>>> help(deployer.virtualimagemapping)
VirtualImageMapping attributes
The VirtualImageMapping object has the following attributes, all of which are
read-only:
created
The creation time of the virtual image mapping, as number of seconds
since midnight, January 1, 1970 UTC. When the virtual image mapping
displays, this value is shown as the date and time in the local time zone.
currentstatus
The status of the virtual image mapping. This field contains an eight
character string value that is generated by the product.
id
name
The display name associated with this virtual image mapping. This
attribute is generated automatically when the virtual image mapping is
created and it is the same as the virtual image name. This field contains a
string value with a maximum of 1024 characters.
region The name of the OpenStack region in which the mapped OpenStack image
is located. This field contains a string value with a maximum of 1024
characters.
templateid
The ID of the virtual image. This attribute is an integer value.
uuid
virtualimage
The nested VirtualImage object to which the OpenStack image is mapped.
This attribute value is a VirtualImage object.
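For example, the following minimal sketch (the index is illustrative) reads a few
attributes of the first mapping:
>>> mapping = deployer.virtualimagemappings[0]
>>> mapping.region
>>> mapping.uuid
>>> mapping.virtualimage.name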
VirtualMachines object
A VirtualMachines object represents the collection of virtual machines within a
virtual system instance on IBM Cloud Orchestrator. Objects of this type are used to
create, delete, iterate over, list and search for virtual machines within a virtual
system instance on IBM Cloud Orchestrator.
Help is available on the command-line interface for the VirtualMachines object. To
get help, pass the VirtualMachines object as an argument to the help() function, as
shown in the following example:
>>> help(deployer.virtualmachines)
VirtualMachine object
A VirtualMachine object represents a particular virtual machine defined on IBM
Cloud Orchestrator. Use the VirtualMachine object to query and manipulate the
virtual machine definition. Attributes of the virtual machine are represented as
Jython attributes on the VirtualMachine object. Relationships between the virtual
machine and other resources on the IBM Cloud Orchestrator are also represented
as Jython attributes on the VirtualMachine object. Manipulate these Jython
attributes using standard Jython mechanisms to change the corresponding data
on the IBM Cloud Orchestrator.
VirtualMachine attributes
The VirtualMachine object has the following attributes:
cloud
A reference to the cloud to which this virtual machine belongs. For more
information about the properties and methods supported by cloud objects,
see Cloud group command-line interface reference on page 745 or enter
the following command:
>>> help(deployer.cloud)
cpucount
The number of virtual processors defined for this virtual machine. This
value is an integer.
created
The creation time of the virtual machine, as number of seconds since
midnight, January 1, 1970 UTC. When the virtual machine is displayed,
this value is shown as the date and time in the local timezone. This value
is numeric and is automatically generated by the product.
currentmessage
The message associated with the status of the virtual machine. This field
contains an eight character string value that is generated by the product.
currentmessage_text
Specifies the textual representation of the currentmessage attribute. This
attribute is a string representation of the currentmessage attribute in the
preferred language of the requester. This status message is automatically
generated by the product.
currentstatus
The status of the virtual machine. This field contains an eight character
string value that is generated by the product.
currentstatus_text
Specifies the textual representation of the currentstatus attribute. This
attribute is a string representation of the currentstatus attribute in the
preferred language of the requester. This status message is automatically
generated by the product.
desiredstatus
The intended status of the virtual machine. This field contains an eight
character string value that is generated by the product.
displayname
The display name associated with this virtual machine in the hypervisor.
This field contains a string value with a maximum of 1024 characters.
environment
The environment property shows the environment variables defined on the
virtual machine. The value of this property is a Jython dictionary (dict)
object.
Note: This dict object is intended only for reading. You can update the
dict object, but updates are not sent back to the IBM Cloud Orchestrator.
Therefore, updates have no effect on the virtual machine.
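For example, the following minimal sketch (which assumes an existing
VirtualMachine object named myvm) prints the environment variables:
>>> for key, value in myvm.environment.items():
...     print key, value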
hardware
The hardware details for the virtual machine.
hypervisor
A reference to the hypervisor on which this virtual machine is running. For
more information on the properties and methods supported by Hypervisor
objects, enter:
>>> help(deployer.hypervisor)
If the hypervisor has been put in quiesce mode, the virtual machine can be
migrated to a new hypervisor by assigning a new Hypervisor to this
attribute, as shown in the following example:
>>> assert myvm.hypervisor.isQuiesced()
>>> myvm.hypervisor = newhypervisor
ip
ips
memory
The amount of memory allocated to this virtual machine, in megabytes.
This value is an integer.
migrationtargets
A list of hypervisors to which the virtual machine can be migrated. A
value of None indicates that the hypervisor hosting the virtual machine has
not been quiesced. An empty list indicates no suitable hypervisors were
found. The returned list is mutable, but changes to the list have no effect.
name
The display name associated with this virtual machine. This field contains
a string value with a maximum of 1024 characters.
runtimeid
The runtime ID generated by the hypervisor on which this virtual machine
is running. This field contains a string value with a maximum of 1024
characters.
scripts
The scripts that have been run on the virtual machine, both during and
after deployment.
storageid
The hypervisor storage ID of the storage on which this virtual machine
resides. This field contains a string value with a maximum of 1024
characters.
updated
The time the virtual machine was last updated, as number of seconds since
midnight, January 1, 1970 UTC. When the virtual machine is displayed,
this value is shown as the date and time in the local timezone. This value
is numeric and is automatically generated by the product.
virtualimage
A reference to the VirtualImage object from which this virtual machine
originated. For more information about the properties and methods
supported by VirtualImage objects, see Virtual images command-line
interface reference on page 792 or enter the following command:
>>> help(deployer.virtualimage)
virtualsystem
A VirtualSystem object that references the virtual system instance to which
this virtual machine belongs. For more information about the properties
and methods supported by VirtualSystem objects, see Virtual system
instances (classic) command-line interface reference on page 804 or enter
the following command:
>>> help(deployer.virtualsystem)
VirtualMachine methods
The VirtualMachine object has the following methods:
delete Deletes the resource represented by this object.
start() Starts a virtual machine.
stop() Stops a virtual machine. The virtual machine continues to reserve the
resource used by that virtual machine.
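For example, the following minimal sketch (the virtual system name and index
are illustrative) stops and restarts the first virtual machine of an instance:
>>> myvm = deployer.virtualsystems['my virtualsystem'][0].virtualmachines[0]
>>> myvm.stop()
>>> myvm.start()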
For more information about the command-line interface, see the Using the
command-line interface on page 735 section. For more information about working
with resources on the command-line interface, see the Resources, resource
collections, and methods on page 842 section.
You can control user access to virtual system instances using the ACL object. For
more information about the ACL object, see the ACL object on page 862 topic.
Related tasks:
Using the command-line interface on page 735
You can perform administrative functions in IBM Cloud Orchestrator by using the
command-line interface tool provided with the product.
Related information:
Jython
Python documentation
Virtualsysteminstances object
A virtualsysteminstances object represents the collection of virtual system
instances that are defined to the IBM Cloud Orchestrator. Objects of this type are
used to create, delete, iterate over, list and search for virtual system instances on
the IBM Cloud Orchestrator. The virtualsysteminstances object
supports methods that are not available with the VirtualSystems object.
Help is available on the command-line interface for the virtualsysteminstances
object. To get help, pass the virtualsysteminstances object as an argument to the
help() function, as shown in the following example:
>>> help(deployer.virtualsysteminstances)
Virtualsysteminstances methods
The Virtualsysteminstances object supports the following methods:
virtualsysteminstances()
List all virtual system instances.
Example:
>>> deployer.virtualsysteminstances()
virtualsysteminstances[index]
Get a specified virtual system instance by index.
Example:
>>> deployer.virtualsysteminstances[0]
virtualmachines()
List the virtual machines that are associated with a specified virtual system
instance.
Example:
>>> deployer.virtualsysteminstances.get("a-b62ae").virtualmachines()
findFixes()
List the fixes for the virtual system instance.
Example:
>>> vsysinst0 = deployer.virtualsysteminstances[0]
>>> vsysinstfixes = vsysinst0.findFixes()
"description": "",
"filename": "fix.zip",
"fixprereqs": [
{
"created": 1394115951541,
"fixid": 61,
"id": 62,
"middlewarename": "IBM WebSphere Application Server Liberty",
"middlewareversion": "8.5.5.0",
"updated": 1394115951541
}
],
"hasservicefile": "TRUE",
"id": 61,
"installtype": "WCA",
"name": "testfix",
"ownerid": 1,
"scriptid": 140,
"severity": "RM10388",
"target": "APPLICATION",
"type": "IFIX",
"updated": 1394447441222
]
},
Note: You can get the value for vmtemplate, such as SourceNode and
TargetNode, from the output of the findFixes() method. For an example,
see the entry for the findFixes() method.
delete(virtual system ID)
Delete the specified virtual system instance.
Example:
>>> deployer.virtualsysteminstances.delete("a-b62ae")
VirtualSystems object
A VirtualSystems object represents the collection of virtual system instances
defined to the IBM Cloud Orchestrator. Objects of this type are used to create,
delete, iterate over, list and search for virtual system instances on the IBM Cloud
Orchestrator.
Help is available on the command-line interface for the VirtualSystems object. To
get help, pass the VirtualSystems object as an argument to the help() function, as
shown in the following example:
>>> help(deployer.virtualsystems)
VirtualSystems methods
The VirtualSystems object has the following method:
create Creates a virtual system instance based on a pattern. This method accepts a
single parameter that describes the virtual system instance to be created.
This parameter can be any of the following:
v A dictionary (dict) object with the required and optional keys, as shown
in the following example:
>>> deployer.virtualsystems.create({'name': 'my virtual system',
...   'environmentprofile': myEnvProf, 'pattern': thepattern,
...   '*.*.password': 'mypassword'})
name
The name for the virtual system instance as a string. This key is
required.
pattern A reference to the Pattern object for the new virtual system
instance. See Virtual system patterns (classic) command-line
interface reference on page 818 or the deployer.patterns help
for more information about obtaining a Pattern object. This key
is required.
starttime
The time at which the virtual system instance is started,
expressed as the number of seconds since midnight, January 1,
1970 UTC. This value is most easily obtained using the Jython
time module, particularly the time.time() and time.mktime()
functions.
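The following minimal sketch (the variable names are illustrative, and
thepattern is assumed to be an existing Pattern object) schedules the instance to
start one hour from now:
>>> import time
>>> start = time.time() + 3600
>>> deployer.virtualsystems.create({'name': 'my virtual system',
...   'pattern': thepattern, 'starttime': start})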
For both part properties and script parameters, any of the three pieces of
the key can be an asterisk (*) to match any value for that piece. See the
following examples:
part-1.ConfigPWD_ROOT.password
Specifies the value for the root password on the part with ID 1.
*.ConfigUSER_ROOT.password
Specifies the user password for all the parts.
part-3.*.password
Specifies a value to be used for all the passwords for part 3,
including any script parameters that have a key of password.
*.*.password
Specifies a value to be used for all part properties and script
parameters with a key of password.
*.*.*
Specifies a value to be used for all part properties and script
parameters, regardless of key.
If more than one key in the dict object can be used for a given part
property or script parameter, the most specific matching key is used. A
key with a wildcard closer to the end is considered less specific. For
example, when attempting to discover a value for part-1.ConfigPWD_ROOT.password, the dict keys are considered in the
following order:
1. part-1.ConfigPWD_ROOT.password
2. *.ConfigPWD_ROOT.password
3. part-1.*.password
4. *.*.password
5. part-1.ConfigPWD_ROOT.*
6. *.ConfigPWD_ROOT.*
7. part-1.*.*
8. *.*.*
For virtual image parts, you must specify the instance flavor in the
following format:
part-<part_id>.OpenStackConfig.flavor: <flavor>
nic-<nic_number>
Indicates a particular network interface on the virtual machine.
All parts have at least one network interface, designated nic-1.
Additional network interfaces are designated in numeric order,
for example nic-2 and nic-3, if additional NIC add-ons have
been added to the part. The network interfaces are ordered the
same as the corresponding add-on pattern scripts in part.scripts.
The following keys in the dict object are used as the source of
environment profile deployment data:
part-<part_id>.cloud
Specifies a Cloud object for the cloud group to be used for the
part. The cloud group must be associated with the environment
profile.
Important: If a part represents a group of virtual machines, all
virtual machines must be in the same cloud group.
part-<part_id>.vm-<vm_number>.nic-<nic_number>.ipgroup
Specifies the IPGroup object to be used for the network interface.
The IP group must be associated with the cloud group in the
environment profile.
part-<part_id>.vm-<vm_number>.nic-<nic_number>.ipaddress
If the environment profile indicates that the user deploying the
pattern is to supply IP addresses, this key is used to supply the
IP address, or addresses, to be used for the network interface.
part-<part_id>.vm-<vm_number>.nic-<nic_number>.hostname
This key behaves like the IP address key, but it is used to
provide host names. Unlike IP addresses, host names are never
required. The semantics and syntax for single and multiple
values are identical to IP addresses. As with part properties and
script parameters, an asterisk (*) can be used to specify a value
that applies to more than one part, as shown in the following
example:
{ ...,
'environmentprofile': myenvpro,
'*.cloud': mycloud,                        # all parts to one cloud
'*.*.nic-1.ipgroup': ipgroup1,             # all built-in interfaces to ipgroup1
'part-1.vm-1.nic-1.ipaddress': '1.2.3.4',
'part-1.vm-2.nic-1.ipaddress': '1.2.3.5',
'part-3.vm-1.nic-1.ipaddress': '1.2.3.6',
'part-3.*.nic-2.ipgroup': ipgroup2,        # extra nic goes to ipgroup2
'part-3.vm-1.nic-2.ipaddress': '5.6.7.8',
'part-3.vm-1.nic-2.hostname': 'foo.mycompany.com',
... }
VirtualSystem object
A VirtualSystem object represents a particular virtual system instance defined on
IBM Cloud Orchestrator. Use the VirtualSystem object to query and manipulate
the virtual system instance definition on the IBM Cloud Orchestrator. Attributes of
the virtual system instance and relationships between the virtual system instance
and other resources on the IBM Cloud Orchestrator are represented as Jython
attributes on the VirtualSystem object. Manipulate these Jython attributes using
standard Jython mechanisms to change the corresponding data on the IBM Cloud
Orchestrator.
To get help on the command-line interface for the VirtualSystem object, pass the
VirtualSystem object as an argument to the help() function, as shown in the
following example:
>>> help(deployer.virtualsystem)
VirtualSystem attributes
The VirtualSystem object has the following attributes:
acl
The access control list for this virtual system instance. This field is
read-only. For additional help on using this object, enter the following
command:
>>> help(deployer.acl)
created
The creation time of the virtual system instance, as number of seconds
since midnight, January 1, 1970 UTC. When the virtual system instance is
displayed, this value is shown as the date and time in the local time zone.
This read-only value is numeric and is automatically generated by the
product.
currentmessage
The message associated with the status of the virtual system instance. This
read-only attribute has an eight character string value that is automatically
generated by the product.
currentmessage_text
Specifies the textual representation of the currentmessage attribute. The
currentmessage_text attribute is a string representation of the
currentmessage attribute in the preferred language of the requester. The
currentmessage attribute is automatically generated by the product.
currentstatus
The status of the virtual system instance. This read-only attribute has an
eight character string value that is automatically generated by the product.
currentstatus_text
Specifies the textual representation of the currentstatus attribute. The
currentstatus_text attribute is a string representation of the currentstatus
attribute in the preferred language of the requester. The
currentstatus_text attribute is automatically generated by the product.
desiredstatus
Indicates the status in which you want the virtual system instance to be.
Setting this value causes IBM Cloud Orchestrator to initiate the steps to get
the virtual system instance to this state.
desiredstatus_text
Specifies the textual representation of the desiredstatus attribute. The
desiredstatus_text attribute is a string representation of the
desiredstatus attribute in the preferred language of the requester. The
desiredstatus_text attribute is automatically generated by the product.
environmentprofile
An environment profile object that references the profiles used to create
this virtual system instance. For more information about the properties and
methods supported by environment profile objects, see Environment
profiles command-line interface reference on page 748 or use the
following command:
>>> help(deployer.environmentprofile)
id
name
The name associated with this virtual system instance. Each virtual system
instance must have a unique name. This field contains a string value with
a maximum of 1024 characters. When a virtual system instance is created,
its name cannot be changed. This field is read-only.
owner
A User object that references the owner of this virtual system instance. For
more information about the properties and methods supported by User
objects, enter the following command:
>>> help(deployer.user)
pattern
A reference to the Pattern object from which this virtual system instance
was created. For more information about the properties and methods
supported by Pattern objects, see Virtual system patterns (classic)
command-line interface reference on page 818 or enter the following
command:
>>> help(deployer.pattern)
snapshots
A resource collection containing the snapshots taken of this virtual system
instance. For more information about the properties and methods
supported by the Snapshots objects, see Snapshots on the command-line
interface on page 781 or enter the following command:
>>> help(deployer.snapshots)
updated
The time the virtual system instance was last updated, as number of
seconds since midnight, January 1, 1970 UTC. When the virtual system
instance is displayed, this value is shown as the date and time in the local
time zone. This value is numeric and is automatically generated by the
product. This field is read-only.
virtualmachines
A resource collection containing the virtual machines within this virtual
system instance. For more information about the properties and methods
supported by the VirtualMachines objects, see Virtual machines
command-line interface reference on page 799 or enter the following
command:
>>> help(deployer.virtualmachines)
VirtualSystem methods
The VirtualSystem object has the following methods:
createSnapshot()
Creates a snapshot of this virtual system instance. This method takes an
optional dictionary that can contain a description for the snapshot.
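For example, the following minimal sketch (which assumes an existing
VirtualSystem object named myvs; the dictionary key shown is an assumption)
creates a snapshot with a description:
>>> myvs.createSnapshot({'description': 'before applying fixes'})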
delete()
Deletes the virtual system instance. This method accepts the following
optional parameters:
deleteRecord
This boolean parameter controls whether records and logs
associated with the virtual system instance are left on the product
machine after the virtual system instance is deleted. The default
value, True, deletes all information associated with the virtual
system instance. A value of False leaves those records on the
product machine for future inspection.
ignoreErrors
This boolean parameter controls whether the product continues
attempting to delete a virtual system instance after an error is
encountered. The default value, False, causes the attempted delete
to fail if an error is encountered.
Supply values for these parameters using Python named arguments, as
shown in the following example:
>>> myvirtsystem.delete(ignoreErrors=True, deleteRecord=False)
CAUTION:
Using the ignoreErrors parameter is helpful in specific situations only,
so use this option with caution. You might know that the virtual
machines cannot be deleted and you choose to clean them up manually,
for example. Or you might know that the server hosting the virtual
machine is no longer available. Therefore, deletion would not occur if
the errors blocked it. You can use the ignoreErrors parameter in these
circumstances to force deletion of a virtual system instance, even if the
virtual machines cannot be deleted.
start() Starts this virtual system instance.
stop() Stops this virtual system instance.
For more information about the command-line interface, see the Using the
command-line interface on page 735 section. For more information about working
with resources on the command-line interface, see the Resources, resource
collections, and methods on page 842 section.
You can control user access to virtual system instances using the ACL object. For
more information about the ACL object, see the ACL object on page 862 topic.
Related tasks:
Using the command-line interface on page 735
You can perform administrative functions in IBM Cloud Orchestrator by using the
command-line interface tool provided with the product.
Related information:
Jython
Python documentation
VirtualSystemPatterns object
A VirtualSystemPatterns object represents the collection of virtual system patterns
on IBM Cloud Orchestrator. VirtualSystemPatterns objects are used to create,
iterate over, list and search for virtual system patterns.
Note: The VirtualSystemPatterns object is not supported for use with virtual
system patterns (classic). To work with virtual system patterns (classic), use the
Patterns object.
To get help for the VirtualSystemPatterns object on the command-line interface,
pass it as an argument to the help() function, as shown in the following example:
>>> help(deployer.virtualsystempatterns)
VirtualSystemPattern object
A VirtualSystemPattern object represents a single virtual system pattern or
template. VirtualSystemPattern extends the Application object, and provides all
of the Application attributes and methods, plus the additional attributes and
methods that are documented in subsequent sections.
Note: The VirtualSystemPattern object is not supported for use with virtual
system patterns (classic). To work with virtual system patterns (classic), use the
Pattern object.
To get help for the VirtualSystemPattern object on the command-line interface,
pass it as an argument to the help() function, as shown in the following example:
>>> help(deployer.virtualsystempattern)
VirtualSystemPattern attributes
patternversion
The pattern version, which can be any string. This attribute is read-only.
readonly
Boolean value that is set to true if the pattern is read-only, and false
otherwise. This attribute is read-only.
VirtualSystemPatterns methods
Create a virtual system pattern by using a Python dictionary (dict), a JSON file,
or a compressed file.
Format: deployer.virtualsystempatterns.create(file)
This example passes the method a Python dictionary (dict) that contains
the pattern application model:
>>> json={"model":{"name":"Test virtual system pattern", "patterntype":"vsys",
"version":"1.0", "patternversion":"1.0"}}
>>> deployer.virtualsystempatterns.create(json)
This example passes the method a JSON file that contains the pattern
application model:
>>> deployer.virtualsystempatterns.create("F:\\cli\\testJson.json")
If you specify a JSON or compressed file, you can also specify one or more
of these optional parameters:
name
Specifies the pattern name, which overrides the pattern name that is
specified in the model.
patternversion
Specifies the virtual system pattern version, which overrides any
pattern version that is specified in the model. The patternversion is
set to 1.0 by default if it is not specified. This parameter can be set to
any string.
replace
Include this parameter and set it to true to replace an existing virtual
system pattern with the same name and version. If this parameter is
not included, or is set to false, the operation fails if the pattern exists.
Note: The patternversion and replace parameters are not currently
supported for use with virtual application patterns.
Example:
>>> deployer.virtualsystempatterns.create("F:\\cli\\vsys.zip", name = "Test",
patternversion = "2.0", replace = True)
If you only specify a portion of the pattern name, the command returns all
patterns that have that text string as part of the pattern name.
For example, to return all patterns that include the string Red Hat in the
pattern name, run the command:
>>> deployer.virtualsystempatterns["Red Hat"]
Example:
>>> deployer.virtualsystempatterns.list({'app_name':'try1'})
If you want to export or import a virtual system pattern and include the
referenced assets, use the export_artifacts and import_artifacts
methods from the deployer module.
Update the specified virtual system pattern.
Format: deployer.virtualsystempatterns.get(ID).update(file path)
Example:
>>> deployer.virtualsystempatterns.get("a-514a41").update("C:\\sample.zip")
List the images, script packages, add-ons, and software components (plug-ins)
that are associated with a pattern.
Format: deployer.virtualsystempatterns.listAssets()
Example:
>>> pattern = deployer.virtualsystempatterns[0]
>>> pattern.listAssets()
The output of the listAssets() method is a JSON object with each type of
asset that is associated with the pattern.
Clone a virtual system pattern from an existing virtual system pattern.
Format: deployer.virtualsystempatterns.get(ID).clone(name)
Example:
>>> deployer.virtualsystempatterns.get("a-514a41").clone("clonedTest")
system pattern with this method, call the deploy method and do not
include the placement_only parameter, or set it to False.
2. Modify the placement before you deploy the virtual system pattern,
and use the modified placement for the deployment. To use this
method, the deployment must be called in two phases:
v First, call the deploy method with the placement_only parameter set
to True to generate the placement and topology. This call generates a
Placement object without deploying the pattern. You can modify this
object to change the placement for the pattern before it is deployed.
This parameter tells the system to generate a placement for the
deployment, which is returned in the response body. You can modify this
placement before you pass it to the system in the second phase to
deploy the pattern.
v Then, call the deployPlacement method and pass it a dictionary
object with these keys:
placement
This key is required. The value represents the final Placement
object. The placement settings that are specified in this object are
used for the deployment.
addon_parameters
This key is optional. Use this key to specify parameters for the
add-ons in the pattern, such as the volume ID for a Default
attach block disk add-on.
topology_parameters
This key is optional. Use this key to specify parameters for the
topology, such as the GPFS volume name.
This method accepts the following parameters:
name
Required. The name for the instance.
Cloud object or dictionary object
Required. You can use a cloud object or a dictionary object to describe
the environment profile. The environment profile dictionary object
contains these keys:
environment_profile
Required. An environment profile object.
placement_only
Optional. When placement_only is present and set to True, a
Placement object is returned without deploying the pattern. You
can modify the placement of the deployment by modifying this
object. Then, call the deployPlacement method and pass it the
modified object to use the modified settings for the deployment.
cloud_group
A Cloud object in the environment_profile. This attribute is
optional if the pattern supports placement.
ip_group
An IPGroup object in the cloud_group object. This attribute is
optional if the pattern supports placement.
ip_version
Optional. Valid values are 'IPv4' and 'IPv6'. The default value is
'IPv4'.
The environment profile dictionary format is:
{
environment_profile: <env_profile_obj> or <env_profile_id>
placement_only: True or False,
cloud_group: <cloud_group_obj> or <cloud_group_id>
ip_group: <ip_group_obj> or <ip_group_id>
ip_version: IPv4 or IPv6
}
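The two-phase flow might look like the following sketch. This is illustrative only: the pattern ID and profile name are hypothetical, the environmentprofiles collection is an assumption, and the deploy and deployPlacement calls are shown on the pattern object that is returned by get():
>>> pattern = deployer.virtualsystempatterns.get("a-514a41")
>>> profile = deployer.environmentprofiles["Test profile"]
>>> # Phase 1: generate the placement only; nothing is deployed yet
>>> placement = pattern.deploy("Placement test",
...     { 'environment_profile': profile, 'placement_only': True })
>>> # ... modify the returned Placement object here as required ...
>>> # Phase 2: deploy the pattern by using the modified placement
>>> instance = pattern.deployPlacement({ 'placement': placement })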
You can control the access control list for virtual system patterns (classic)
with the ACL object. For more information about the ACL object, see ACL
object on page 862.
Advancedoptions
You can use the Advancedoptions object to view and set the advanced
options for the virtual system pattern (classic). Different advanced options
are available, depending on the topology described by your virtual system
pattern (classic). For more information about the Advancedoptions object,
see Advancedoptions object on page 825.
Parts
Pattern_parts
Represents the virtual system pattern (classic) parts and collection of
virtual system pattern (classic) parts defined within a particular virtual
system pattern (classic) on IBM Cloud Orchestrator. For more information
about virtual system pattern (classic) parts, see Virtual system pattern
(classic) parts on the command-line interface on page 827.
Pattern_scripts
Represents the scripts and collections of scripts defined within a particular
virtual system pattern (classic) part on IBM Cloud Orchestrator. For more
information about the virtual system pattern (classic) script parts, see
Virtual system pattern (classic) scripts on the command-line interface on
page 833.
Patterns
Represents the virtual system patterns (classic) and collection of virtual
system patterns (classic) defined to IBM Cloud Orchestrator. For more
information about patterns, see Virtual system patterns (classic) on the
command-line interface on page 821.
User
copied the virtual images, add-ons, and scripts. The same virtual images with the
same name must exist in both environments before you can export and import
the virtual system patterns. Also, the same scripts and add-ons, with the same
names, must exist in both IBM Cloud Orchestrator environments.
About this task
You can export virtual system patterns from one IBM Cloud Orchestrator
environment and import them to another IBM Cloud Orchestrator environment
using the patternToPython.py script. This command-line interface (CLI) script
examines a virtual system pattern defined on IBM Cloud Orchestrator and
generates another CLI script. When run, this generated CLI script reconstructs the
original virtual system pattern on another IBM Cloud Orchestrator environment.
The reconstructed virtual system pattern contains the same properties, shown in
the following list, as the original pattern:
v Parts
v Property values and metadata
v Scripts
v Script parameter values
v Advanced options
v Add-ons
v Add-on parameter values
v Status (draft or read-only)
Procedure
1. Use the samples/patternToPython.py CLI script to export the virtual system
pattern. Use the standard CLI parameters to specify the host name, user ID,
and password to access the virtual system pattern, and the location of the
patternToPython.py CLI script. Use one of the following methods to export a
virtual system pattern:
Interactively
Specify only the file name and select the virtual system pattern
interactively, as shown in the following example:
deployer -h <hostname.com> -u <user> -p <password>
-f samples/patternToPython.py -f exported_pattern.py
Automated
Specify both the file name and virtual system pattern, as shown in the
following example:
deployer -h <hostname.com> -u <user> -p <password>
-f samples/patternToPython.py -p "My Pattern" -f exported_pattern.py
See Invoking the command-line interface on page 737 for other ways to
specify the host name, user ID, and password.
The patternToPython.py CLI script accepts the following options:
-f <filename> or --filename <filename>
Indicates that the generated CLI script is written to the specified file. If
not specified, the generated CLI script is written to standard output.
-p <pattern> or --pattern <pattern>
Specifies the name of the virtual system pattern to be exported. The
virtual system pattern name you specify must uniquely identify a
virtual system pattern. If you do not specify the virtual system pattern,
a list of virtual system patterns that are defined is shown and you are
prompted to select one.
--passwords
Includes passwords in the Python code. If the password is not
specified, password values are omitted.
Remember: If you specify the --passwords option, passwords are
stored as plain text in the generated script. Restrict read access to the
script file accordingly.
2. Check for errors. If there are any problems generating the CLI script, the
patternToPython.py script generates error messages.
3. Run the generated CLI script to import the virtual system pattern to another
IBM Cloud Orchestrator environment. Use the standard CLI parameters to
specify the host name, user ID, and password of IBM Cloud Orchestrator on
which the virtual system pattern is to be created. The following example
specifies the host name and user ID:
deployer -h <hostname.com> -u <user> -p <password> -f exported_pattern.py
See Invoking the command-line interface on page 737 for more information
about specifying the host name, user ID, and password.
Results
When you have completed these steps, you have exported the virtual system
pattern from one IBM Cloud Orchestrator environment and imported it to a second
environment.
What to do next
You can use the virtual system patterns or modify them for use on the
environment to which you copied them.
Related tasks:
Making virtual system patterns (classic) read-only on page 463
Either draft or read-only virtual system patterns can be deployed for testing or
production, but making a virtual system pattern read-only prevents further edits to
the topology definition. Making virtual system patterns read-only provides
consistent reuse in the cloud.
Deploying a virtual system pattern (classic) on page 464
You can deploy virtual system patterns to run in a cloud group. You can deploy
either draft or committed virtual system patterns for testing or production.
Configuring advanced options on page 449
When you have edited the topology of a virtual system pattern, you can configure
advanced function for the virtual system pattern.
Related reference:
Virtual system pattern (classic) editing views and parts on page 443
A virtual system pattern, that is not read-only, can be edited if you have
permission to edit it. The topology for a virtual system pattern is graphically
shown. Virtual image parts, add-ons, and script packages can be dropped onto an
editing canvas to create or change relationships between the parts that define the
topology.
For more information about working with resource objects, see Resources,
resource collections, and methods on page 842.
Pattern object
A Pattern object represents a particular IBM Cloud Orchestrator virtual system
pattern. Use the Pattern object to query and manipulate the virtual system pattern
definition. Attributes of the virtual system pattern and relationships between the
virtual system pattern and other IBM Cloud Orchestrator resources are represented
as Jython attributes on the Pattern object. Manipulate these Jython attributes using
standard Jython mechanisms to change the corresponding IBM Cloud Orchestrator
data.
To get help for the Pattern object on the command-line interface, pass it as an
argument to the help() function, as shown in the following example:
>>> help(deployer.pattern)
Pattern attributes
acl
An object for manipulating the access control list for this virtual system
pattern. For more information about the properties and methods supported
by acl objects, enter the following command:
help(deployer.acl)
since midnight, January 1, 1970 UTC. When the virtual system pattern is
displayed, this value is shown as the date and time in the local time zone.
This field is read-only.
currentmessage
The message associated with the status of the virtual system pattern. This
field is read-only.
currentmessage_text
Provides a textual description of the current message. This field is
read-only.
currentstatus
The status of the virtual system pattern.
currentstatus_text
Provides a textual description of the status. This field is read-only.
description
The description of the virtual system pattern.
id
name
The name associated with this virtual system pattern. Each virtual system
pattern must have a unique name.
owner
A User object that references the owner of this virtual system pattern. For
more information about the properties and methods supported by user
objects, enter the following command:
>>> help(deployer.user)
parts
updated
The time the virtual system pattern was last updated, as number of
seconds since midnight, January 1, 1970 UTC. When the virtual system
pattern is displayed, this value is shown as the date and time in the local
time zone. This field is read-only.
validations
The validation status and messages associated with the virtual system
pattern. The value of this attribute is a list containing the results of
validation tests run on the IBM Cloud Orchestrator. Each entry in the list is
a dict object containing the following keys and values:
status
This key provides the validation status associated with this entry.
status_text
This key provides a textual representation of the status.
message
This key provides the message that is associated with the status.
message_text
This key provides the textual representation of the message.
This value is automatically generated and cannot be changed. To get help
for the validations attribute, enter the following command:
help(deployer.pattern.validations)
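For example, the following sketch walks the validations list and prints the textual status and message of each entry, using the keys listed above:
>>> for entry in mypattern.validations:
...     print '%s: %s' % (entry['status_text'], entry['message_text'])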
validationmessage
The message associated with the validation status of the virtual system
pattern. This field is read-only.
validationmessage_text
Provides a textual description of the validation message. This attribute
provides a shortcut to access the first entry of the validations attribute.
This field is read-only.
validationstatus
The status associated with validation of the virtual system pattern. This
field is read-only.
validationstatus_text
Provides a textual description of the validation status. This attribute
provides a shortcut to access the first entry of the validations attribute.
This field is read-only.
virtualimage
The virtual image that is used to realize this virtual system pattern when
the virtual system pattern is deployed.
virtualsystems
The virtual system instances currently running this virtual system pattern.
For more information about the properties and methods supported by
virtualsystems objects, enter the following command:
help(deployer.virtualsystems)
Pattern methods
The pattern object has the following methods:
clone()
Clones this pattern object and returns a pattern object for the new virtual
system pattern, as shown in the following example:
clone(**options)
virtualimage
This deprecated parameter attempts to change all parts in the new
virtual system pattern to the specified virtual image, regardless of
the number of virtual images referenced in the original virtual
system pattern. To change the virtual image associated with parts
in the new virtual system pattern, set the virtualimage attribute on
those parts after the virtual system pattern has been cloned. If not
specified, parts in the new virtual system pattern use the same
virtual images as the corresponding parts in this virtual system
pattern. If specified, an attempt is made to switch all parts in the
new virtual system pattern to the indicated virtual image, as
shown in the following example:
>>> newpattern = mypattern.clone(name='foo', virtualimage=myvi)
isDraft()
Indicates if this virtual system pattern is in draft mode.
isReadOnly()
Indicates if this virtual system pattern is read-only.
listConfig(data={})
Returns a dictionary (dict object) of the configurable part properties and
script parameters contained in this virtual system pattern. The keys for this
dict object are described under create in Virtual system instances (classic)
command-line interface reference on page 804 and in the online help for
deployer.virtualsystems.create. The values reflect the default values
defined in the virtual system pattern.
Accepts a single optional parameter that can be used to supply a dict
object containing overrides to the default values. Specify the overrides as
described under create method in Virtual system instances (classic)
command-line interface reference on page 804. Specifying overrides is
useful to see what values are to be used during a virtual system pattern
deployment without tying up the resources to perform an actual
deployment.
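For example, the following sketch lists the default configuration, and then lists it again with a hypothetical password override; the '*.*.password' key format follows the runInCloud example later in this section:
>>> mypattern.listConfig()
>>> mypattern.listConfig({ '*.*.password': 'thepassword' })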
makeReadOnly()
Makes this virtual system pattern read-only. When the virtual system
pattern is read-only, the virtual system pattern can be deployed but not
modified.
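For example, a minimal sketch that commits a draft pattern:
>>> if mypattern.isDraft():
...     mypattern.makeReadOnly()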
runInCloud(cloud,options)
Allocates cloud resources and starts a copy of this virtual system pattern in
the cloud. This method requires the following parameters:
cloudorep
This is either a cloud object representing the cloud group in which
the virtual system pattern is to be run, or an EnvironmentProfile
object representing the environment profile to be used for the
deployment.
options A Jython dict object containing:
v A name key/value that specifies a name for the virtual system
instance to be created.
v The part property and script parameter values to be used for the
deployment. For more information, see the create method in
Virtual system instances (classic) command-line interface
reference on page 804.
v The environment profile configuration information. For more
information, see the create method in Virtual system instances
(classic) command-line interface reference on page 804.
The following example shows the runInCloud method:
>>> myvirtsys = mypattern.runInCloud(myCloud, { 'name': 'example',
...     '*.*.password': 'thepassword' })
toPython(f, **options)
This method generates a Python script on the local product machine that,
when run, reconstructs a virtual system pattern from IBM Cloud
Orchestrator. To use the generated script, the machine on which the virtual
clear() Clears all advanced options from the virtual system pattern. There are no
parameters to this method, as shown in the following example:
>>> mypattern.advancedoptions.clear()
__contains__(item)
Called implicitly by the Jython in operator, this method accepts a single
string parameter. The parameter names an advanced option and returns a
Boolean value. The boolean value indicates if the specified advanced
option is set in the virtual system pattern, as shown in the following
example:
>>> if 'security-enabled' in mypattern.advancedoptions:
...     print 'security is enabled'
getAvailableOptions()
Returns a list of strings that name all the advanced options available for
this virtual system pattern. This method has no parameters, as shown in
the following example.
>>> mypattern.getAvailableOptions()
Note: This list of strings returned includes all available advanced options,
regardless of whether they are currently set.
To see just the options that are currently set, pass the advancedoptions
property to the Jython list() function.
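For example, to compare the available options with the options that are currently set:
>>> mypattern.getAvailableOptions()
>>> list(mypattern.advancedoptions)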
__iadd__(other)
Started implicitly when you use the Jython += operator with the
advancedoptions property, as shown in the following example:
>>> mypattern.advancedoptions += 'messaging-engine-ha'
The __iadd__ method has the same behavior as the add method.
__isub__(other)
This method is started implicitly when you use the Jython -= operator with
the advancedoptions property, as shown in the following example:
>>> mypattern.advancedoptions -= ['messaging-mq-link',
...     'messaging-enabled']
The __isub__ method has the same behavior as the remove method.
__iter__()
Returns an iterator over all the advanced options (as strings) that are
currently set for this virtual system pattern, as shown in the following
example:
>>> for ao in mypattern.advancedoptions:
...     print 'advanced option %s is set' % ao
remove(other)
This method accepts a single parameter that must be either a string
specifying an advanced option or a list of these strings, as shown in the
following example:
>>> mypattern.advancedoptions.remove('messaging-enabled')
>>> mypattern.advancedoptions.remove(['sessionpersistence-db',
...     'security-enabled'])
The specified advanced options are removed from the existing advanced
options for this virtual system pattern.
Note:
v Some advanced options are grouped into sets that require at least one of
the advanced options to be set. If removing one advanced option would
cause this constraint to be violated, the default advanced option from
the set is automatically selected.
__repr__()
Returns a string representation of the advanced options. The string
representation includes short textual descriptions of each advanced option
and shows the hierarchy of the advanced options.
__str__()
Returns a string representation of the advanced options. The string
representation includes short textual descriptions of each advanced option
and shows the hierarchy of the advanced options.
__unicode__()
Returns a string representation of the advanced options. The string
representation includes short textual descriptions of each advanced option
and shows the hierarchy of the advanced options.
In addition to these methods, you can also assign a string or list of strings to the
advancedoptions property. The result is the same as if you had called the clear()
method and then the add() method with the same advanced options.
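For example, the following assignment replaces whatever advanced options are currently set with the two options shown; the option names are taken from the earlier examples and might not apply to your pattern:
>>> mypattern.advancedoptions = ['messaging-engine-ha', 'security-enabled']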
For more information about working with resource objects, see Resources,
resource collections, and methods on page 842.
Related tasks:
Working with virtual system patterns (classic) on page 434
Using a virtual system pattern, you can describe the topology of a system that you
want to deploy. Virtual system patterns provide repeatable system deployment that
can be reproduced. To build virtual system patterns, you can use parts from one or
more virtual images, add-ons, and script packages.
Related information:
Jython
Python documentation
Virtual system pattern (classic) parts on the command-line interface:
You can work with virtual system patterns (classic) by working with virtual system
pattern (classic) parts on the IBM Cloud Orchestrator command-line interface.
For information about working with the command-line interface, see Using the
command-line interface on page 735.
Pparts object
A pattern_parts object represents the collection of parts defined within a
particular virtual system pattern on the IBM Cloud Orchestrator. Objects of this
type are used to create, delete, iterate over, list and search for parts within a virtual
system pattern on the IBM Cloud Orchestrator. When creating a Ppart object, an ID
(of the part to be added to the virtual system pattern) is required. A Pparts object
has all the methods of a resourcecollection object. The create() method is
described because it has specific behavior unique to the Pparts object. The create
method creates a virtual system pattern part or parts within a virtual system
pattern. The parts to be added can be specified in any of the following ways:
v As a Part object for the part that is to be added to this virtual system pattern, as
shown in the following example:
>>> mypattern.parts.create(thepart)
v As a dictionary (dict) object with the required keys, as shown in the following
example:
>>> mypattern.parts.create({'id': thepart.id})
v As a long or int value that specifies the ID of the Part object to be used to create
a virtual system pattern part, as shown in the following example:
>>> mypattern.parts.create(thepart.id)
v As the name of a file containing a Jython expression that evaluates to one of the
values in this list.
v As the value deployer.wizard or a wizard instance created by calling
deployer.wizard(). If either of these values is supplied, the command-line
interface prompts interactively for the values to create the resource. For more
information about using the wizard to create a pattern_parts object, see
Wizard objects on the command-line interface on page 860.
v As a list of any of the previous items in the list, in which case multiple parts are
created within the virtual system pattern. This usage is shown in the following
example:
>>> mypattern.parts.create([part1, part2])
The create method returns a resource object for the new virtual system pattern
part, or a list of these objects if multiple virtual system pattern parts were created.
To get help for the pattern_parts object on the command-line interface, pass it as
an argument to the help() function, as shown in the following example:
>>> help(deployer.pattern_parts)
Ppart object
A Ppart object represents a part that has been added to a virtual system pattern in
IBM Cloud Orchestrator. Use the pattern_part object to query and modify the part
definition in the product. Attributes of the part and relationships between the part
and other resources in IBM Cloud Orchestrator are represented as Jython attributes
on the pattern_part object. Modify these Jython attributes using standard Jython
mechanisms to change the corresponding data in IBM Cloud Orchestrator.
Help is available for the virtual system pattern part object on the command-line
interface. To get help, pass it as an argument to the help() function, as shown in
the following example:
>>> help(deployer.pattern_part)
Ppart Attributes
The Ppart object has the following attributes:
count
The number of instances of this part that are to be created when the virtual
system pattern is deployed.
Note: This property is only defined for types of parts that can be
replicated at deployment time. For those types of parts, the count is
initially set to 1 and can be changed to any positive integer value. For
other types of parts, the count is None and cannot be changed.
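For example, assuming that the first part in the pattern is a type that can be replicated, you might set its count as follows:
>>> mypattern.parts[0].count
1
>>> mypattern.parts[0].count = 3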
countLocked
This attribute provides a boolean value that indicates if virtual machines,
based on this part, can be dynamically added to or removed from the
virtual system instance after the virtual system pattern has been deployed.
For parts that cannot be dynamically added or removed, the countLocked
attribute returns the None value and cannot be changed. For more
information, enter the following command:
help(deployer.pattern_part.countLocked)
description
The description of the part. This field is a string value with a maximum of
1024 characters.
id
partCaption
The label used for this part. This field is a string value with a maximum of
1024 characters.
pattern
The virtual system pattern that contains this part. For more information
about the properties and methods supported by pattern objects, enter the
following command:
>>> help(deployer.pattern)
properties
The list of properties defined for this part. Each property is a dict object
with the following properties:
key
locked
pclass
value
userConfigurable
Indicates whether the value of the property can be changed at
deployment time.
Note: This property is read-only. Changes to the dict object have no effect
on the part. To change a property of the part, use the setProperty() method
of the part.
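For example, the following sketch prints the key and value of each property of the first part; the dict objects it reads are copies, so no data is changed:
>>> for prop in mypattern.parts[0].properties:
...     print '%s = %s' % (prop['key'], prop['value'])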
scripts
A resource collection containing the scripts contained within this virtual
system pattern part. For more information about the properties and
methods supported by pattern_scripts objects, enter the following
command:
>>> help(deployer.pattern_scripts)
startsafter
The set of parts in the virtual system pattern that must start before this
part. For additional information on manipulating part startup order, enter
the following command:
>>> help(deployer.pattern_part.StartsAfter)
system pattern. Virtual system pattern parts with smaller values start
before virtual system pattern parts with larger values. This attribute is an
integer.
validations
This attribute provides the validation status and messages associated with
the virtual system pattern part. The value of this attribute is an array
containing the results of validation tests run on the IBM Cloud
Orchestrator. Each entry in the array is a dict object containing the
following keys and values:
message
This key provides the message that is associated with the status.
message_text
This key provides the textual representation of the message.
status
This key provides the validation status associated with this entry.
status_text
This key provides a textual representation of the status.
This value is automatically generated and cannot be changed. To get help
for the validations attribute, enter the following command:
help(deployer.pattern_part.validations)
virtualimage
The virtual image to be used to realize this part when the virtual system
pattern including the part is deployed. For more information about the
properties and methods supported by virtualimage objects, enter the
following command:
help(deployer.virtualimage)
Ppart Methods
The Ppart object has the following methods:
getProperty
Returns information about a particular property of a part. This method
accepts the following parameters:
key
setProperty
Sets the value and (optionally) metadata for a part property. This method
accepts the following parameters:
key
metadata
An optional dict object that can contain the userConfigurable key.
The userConfigurable key is a boolean value that indicates if users
can change the value of this property at deployment time.
pclass The pclass of the property to be retrieved. This parameter is
required.
value
mypassword, userConfigurable=False)
StartsAfter object
A list-like object that represents the parts in the pattern that must start before this
part.
StartsAfter methods
The StartsAfter object has the following methods:
__contains__
This method is invoked implicitly by Jython when you use the in operator
on a StartsAfter object. Its single parameter must be a virtual system
pattern part from the same virtual system pattern. The method returns
True or False to indicate if the specified virtual system pattern part is in
the set of virtual system pattern parts that must start before the virtual
system pattern part to which the StartsAfter object belongs. Invocation of
this method is shown in the following example:
>>> firstPart = mypattern.parts[0]
>>> secondPart = mypattern.parts[1]
>>> if firstPart in secondPart.startsafter:
...     # firstPart starts before secondPart
__delitem__
This method is invoked implicitly by Jython when you use the del
operator on a StartsAfter object. Its single parameter must be the index of
the virtual system pattern part to be removed from the set of virtual
system pattern parts that must start before the virtual system pattern part
to which the StartsAfter object belongs. Invocation of this method is shown
in the following example:
>>> del mypattern.parts[0].startsafter[0]
__getitem__
This method is invoked implicitly by Jython when you use the [] operator
on a StartsAfter object. Its single parameter must be the non-negative index
of the virtual system pattern part to be returned from the set of virtual
system pattern parts that must start before the virtual system pattern part
to which the StartsAfter object belongs. Invocation of this method is shown
in the following example:
>>> mypattern.parts[0].startsafter[0]
__iadd__
This method is invoked implicitly by Jython when you use the += operator
on the StartsAfter attribute of a virtual system pattern part. It accepts a
single parameter that can be either a virtual system pattern part or a list of
virtual system pattern parts. All virtual system pattern parts must belong to
the same virtual system pattern. All parts passed as arguments are added to
the set of virtual system pattern parts that must start before the virtual
system pattern part to which the StartsAfter object belongs.
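Invocation mirrors the equivalent operator on script startup order later in this section; a minimal sketch:
>>> firstPart = mypattern.parts[0]
>>> secondPart = mypattern.parts[1]
>>> secondPart.startsafter += firstPart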
__lshift__
This method is invoked implicitly by Jython when you use the left shift
operator (<<) on a StartsAfter object. Its single parameter can be either a
virtual system pattern part or a list of virtual system pattern parts. All
virtual system pattern parts must belong to the same virtual system
pattern. All parts passed as arguments are added to the set of virtual
system pattern parts that must start before the virtual system pattern part
to which the StartsAfter object belongs. Invocation of this method is shown
in the following example:
>>> firstPart = mypattern.parts[0]
>>> secondPart = mypattern.parts[1]
>>> secondPart.startsafter << firstPart
__repr__
Returns a string representation of the virtual system pattern parts that
must start before a particular virtual system pattern part.
__rshift__
This method is invoked implicitly by Jython when you use the right shift
operator (>>) on a StartsAfter object. Its single parameter can be either a
virtual system pattern part or a list of virtual system pattern parts. All
virtual system pattern parts must belong to the same virtual system
pattern. All parts passed as arguments are removed from the set of virtual
system pattern parts that must start before the virtual system pattern part
to which the StartsAfter belongs. Invocation of this method is shown in the
following example:
>>> firstPart = mypattern.parts[0]
>>> secondPart = mypattern.parts[1]
>>> secondPart.startsafter >> firstPart
v As a long or int value that specifies the ID of the script object to be used
to create a virtual system pattern script. This value is shown in the
following example:
>>> mypart.scripts.create(thescript.id)
This method returns a virtual system pattern script object for the new
virtual system pattern script, or a list of these objects if multiple virtual
system pattern scripts were created.
For more information about working with resource objects, see Resources,
resource collections, and methods on page 842.
For more information about the script package object, see Script package
command-line interface reference on page 777.
Pscript object
A virtual system pattern script object represents a script that has been added to a
part in a virtual system pattern in IBM Cloud Orchestrator. Use the virtual system
pattern script object to query and manipulate the script definition in the product.
Attributes of the script and relationships between the script and other resources in
IBM Cloud Orchestrator are represented as Jython attributes on the virtual system
pattern script object. Manipulate these Jython attributes using standard Jython
mechanisms to change the corresponding data in IBM Cloud Orchestrator.
To get help for the pattern_script object, on the command-line interface, pass it as
an argument to the help() function, as shown in the following example:
>>> help(deployer.pattern_script)
For more information about working with resource objects, see Resources,
resource collections, and methods on page 842.
Pscript Attributes
The pattern_script object has the following attributes:
description
The description of the script.
executionOrder
Indicates the order in which this script is to run, relative to other scripts in
the same part. Virtual system pattern scripts with larger values for the
executionOrder attribute are run before scripts with smaller values. This
value is an integer.
id
isdeletable
Indicates if this script can be deleted by a user. This attribute is read-only.
label
parameters
A list of parameters defined for this script. This attribute is read-only. Each
parameter is a dict object with the following properties:
defaultvalue
The default value assigned to the parameter.
key
locked
userConfigurable
Indicates whether the value of the parameter can be changed at
deployment time.
Changes to the dict object have no effect on the script. To change a
parameter of the script, use the setParameter() method.
ppart
The part that contains this script. This attribute is read-only. For more
information about the properties and methods supported by pattern_part
objects, enter the following command:
>>> help(deployer.pattern_part)
startsafter
The set of scripts in the virtual system pattern that must start before this
script. For additional information on manipulating script startup order,
enter the following command:
>>> help(deployer.pattern_script.StartsAfter)
Pscript Methods
The pattern_script part has the following methods:
getParameter
Returns information about a particular parameter of a script. This method
accepts the following parameters:
key
wantMetadata
An optional boolean value indicating if the method returns
metadata about the parameter (True) or just the value of the
parameter (False), as shown in the following example:
>>> mypattern.parts[0].scripts[0].getParameter('key1', True)
setParameter
Sets the value and (optionally) metadata for a script parameter. This
method accepts the following parameters:
key
defaultvalue
An optional value to be set for the parameter. If not specified, the
current value of the parameter is unchanged.
metadata
An optional dict object that can contain the following key:
userConfigurable
A boolean value that indicates if users can change the
value of this parameter at deployment time, as shown in
the following example:
>>> mypattern.parts[0].scripts[0].setParameter('key1', \
...     'new value', **{ 'userConfigurable': False })
StartsAfter object
A list-like object that represents the parts in the pattern that must start before this
part.
StartsAfter methods
The StartsAfter object has the following methods:
__contains__
This method is invoked implicitly by Jython when you use the in operator
on a StartsAfter object. Its single parameter must be a virtual system
pattern script from the same virtual system pattern. The method returns
True or False to indicate if the specified virtual system pattern script is in
the set of virtual system pattern scripts that must start before the virtual
system pattern script to which the StartsAfter object belongs. Invocation of
this method is shown in the following example:
>>> firstScript = mypattern.parts[0].scripts[0]
>>> secondScript = mypattern.parts[1].scripts[0]
>>> if firstScript in secondScript.startsafter:
...     # firstScript starts before secondScript
__delitem__
This method is invoked implicitly by Jython when you use the del
__getitem__
This method is invoked implicitly by Jython when you use the [] operator
on a StartsAfter object. Its single parameter must be the non-negative index
of the virtual system pattern script to be returned from the set of virtual
system pattern scripts that must start before the virtual system pattern
script to which the StartsAfter object belongs. Invocation of this method is
shown in the following example:
>>> mypattern.parts[0].scripts[0].startsafter[0]
__iadd__
This method is invoked implicitly by Jython when you use the += operator
on the StartsAfter attribute of a virtual system pattern script. It accepts a
single parameter that can be either a virtual system pattern script or list of
virtual system pattern scripts. All virtual system pattern scripts must
belong to the same virtual system pattern. All scripts passed as arguments
are added to the set of virtual system pattern scripts that must start before
the virtual system pattern script to which the StartsAfter object belongs.
Invocation of this method is shown in the following example:
>>> firstScript = mypattern.parts[0].scripts[0]
>>> secondScript = mypattern.parts[1].scripts[0]
>>> secondScript.startsafter += firstScript
__lshift__
This method is invoked implicitly by Jython when you use the left shift
operator (<<) on a StartsAfter object. Its single parameter can be either a
virtual system pattern script or a list of virtual system pattern scripts. All
virtual system pattern scripts must belong to the same virtual system
pattern. All scripts passed as arguments are added to the set of virtual
system pattern scripts that must start before the virtual system pattern
script to which the StartsAfter object belongs. Invocation of this method is
shown in the following example:
>>> firstScript = mypattern.parts[0].scripts[0]
>>> secondScript = mypattern.parts[1].scripts[0]
>>> secondScript.startsafter << firstScript
Part object
A Part object represents a particular part defined to IBM Cloud Orchestrator. Use
the Part object to query and manipulate the part definition. Attributes of the part
and relationships between the part and other resources in IBM Cloud Orchestrator
are represented as Jython attributes on the Part object. Manipulate these Jython
attributes using standard Jython mechanisms to change the corresponding data on
the IBM Cloud Orchestrator.
Help is available for the Part object on the command-line interface. To get help,
pass it as an argument to the help() function, as shown in the following example:
>>> help(deployer.part)
Part attributes
The Part object has the following attributes:
created
The creation time of the part, as number of seconds since midnight,
January 1, 1970 UTC. When the part is displayed, this value is shown as
the date and time in the local timezone. This value is numeric and is
automatically generated by the product. This attribute is read-only.
currentmessage
The message associated with the currentstatus of the part. This field is an
eight character string value that is automatically generated by the product.
This attribute is read-only.
currentmessage_text
Specifies the textual representation of the currentmessage attribute. This
field is a string representation of the currentmessage attribute in the
preferred language of the requester and is automatically generated by the
product. This attribute is read-only.
currentstatus
Specifies a string constant representing the currentstatus of the virtual
label
The label of this part. This field is a string value with a maximum of 1024
characters. This attribute is read-only.
name
The name of this part. This field is a string value with a maximum of 1024
characters. This attribute is read-only.
owner
A User object that references the owner of this part. For more information
about the properties and methods supported by User objects, enter the
following command:
>>> help(deployer.user)
productids
This attribute specifies all of the product IDs associated with this virtual
image part.
updated
The time the part was last updated, as number of seconds since midnight,
January 1, 1970 UTC. When the part is displayed, this value is shown as
the date and time in the local timezone. This value is numeric and is
automatically generated by the product. This attribute is read-only.
validationmessage
The message associated with the validationstatus attribute of the part.
This attribute is read-only.
validationmessage_text
The textual representation of the validationmessage attribute. This
attribute is read-only.
validationstatus
The status associated with validation of the part. This attribute is read-only.
validationstatus_text
The textual representation of the validationstatus attribute. This attribute
is read-only.
virtualimage
A reference to the virtual image that defined this part. For more
information about the properties and methods supported by virtualimage
objects, enter the following command:
>>> help(deployer.virtualimage)
Part methods
The Part object has the following methods:
isConceptual
Indicates if this part is conceptual. A conceptual part is not tied to any
particular virtual image until deployment time.
addProductid
Add a new product ID to the virtual image part. This method accepts the
following parameters:
productid
The product ID that you want to add. This parameter is required.
licensetype
The license type for this product ID. Valid values are:
v PVU
v Server
The default value is PVU. This parameter is required.
licensecpu
The CPU count limit for this server license. This parameter is
required for the server license type.
licensememory
The memory limit in GB for this server license. This parameter is
required for the server license type, as shown in the following
example:
>>> mypart[0].addProductid('5724-X89', 'PVU')
>>> mypart[0].addProductid('5724-X89', 'Server', 4, 32)
deleteProductid
Delete a product ID from the virtual image part. This method accepts the
following parameters:
productid
The product ID that you want to delete. This parameter is
required.
licensetype
The license type for this product ID. Valid values are:
v PVU
v Server
The default value is PVU. This parameter is required.
The following example shows the deleteProductid method:
>>> mypart[0].deleteProductid('5724-X89', 'PVU')
For more information about working with resource objects, see Resources,
resource collections, and methods on page 842.
Related tasks:
Working with virtual system patterns (classic) on page 434
Using a virtual system pattern, you can describe the topology of a system that you
want to deploy. Virtual system patterns provide repeatable system deployment that
can be reproduced. To build virtual system patterns, you can use parts from one or
more virtual images, add-ons, and script packages.
Related information:
Jython
Python documentation
There are additional Jython objects that represent collections of resources in IBM
Cloud Orchestrator. These resource collections (Jython objects) can be used to
perform actions such as creating a resource or searching for an existing resource.
Help is available for resource collection objects, in general, and for each type of
resource collection. You can get help for resource collections by entering the
following command:
>>> help(deployer.resourcecollection)
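For example, a resource collection can be indexed by name to search for an existing resource, or its create() method can be called to create a new resource, as in the earlier virtual system pattern examples (the pattern name and file path here are hypothetical):
>>> deployer.virtualsystempatterns["My Pattern"]
>>> deployer.virtualsystempatterns.create("F:\\cli\\vsys.zip", name = "Test")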
Methods on the Jython objects support the operations that can be performed on the
resource within the IBM Cloud Orchestrator. When you call one of these methods
within the command-line interface tool, the request is sent over HTTPS to the IBM
Cloud Orchestrator where it is run. The result passes back over the HTTPS
connection to the command-line interface tool and is shown in one of the following
ways:
v As return values from the methods
v As an updated state in the Jython objects
v As Jython exceptions (if the result indicates an error condition)
All Jython classes, objects, and fields within the command-line interface are
documented using standard Jython doc strings. The help() function provided in
the command-line interface can be used to display the doc strings, as shown in the
following examples:
>>> help(deployer.ipgroups)
An IPGroups object represents the collection of IP groups defined to the
Workload Deployer appliance. Objects of this type are used to create, delete,
iterate over, list and search for IP groups on the Workload Deployer appliance.
Additional help is available for the following methods:
__contains__, create, delete, __delitem__, __getattr__, __getitem__,
__iter__, __len__, list, __lshift__, __repr__, __rshift__, __str__,
__unicode__
>>> help(deployer.ipgroup)
An IPGroup object represents a particular IP group defined on the
Workload Deployer appliance. Use the IP group object to query and
manipulate the IP group definition on the appliance. Attributes of
the IP group and relationships between the IP group and other
resources on the Workload Deployer appliance are represented as Jython
When displayed as a string, the properties of a resource are always shown within
curly brackets. In the previous example, the deployer.self() function returns a
single user resource that represents the current user. The following properties have
simple string or numeric values:
v currentmessage
v currentmessage_text
v currentstatus
v currentstatus_text
v email
v fullname
v id
v username
The password property has a special value that indicates it can be written to, but
not displayed. The remaining properties all have a value of '(nested object)' that
indicates they have complex values which are not suitable for displaying, either
because doing this would require time-consuming data retrieval from the server or
because doing so would generate too much text. In the previous example, all these
additional properties are references to other resources or resource collections.
Resources are Jython objects, and they can be used like any other Jython objects.
Jython variables can hold references to resource objects, as shown in the following
example:
>>> me = deployer.self()
The properties of the object can be read using the Jython dot operator, as shown in
the following example:
>>> me.fullname
Administrator
>>> me.id
1L
The properties of the object can be similarly updated, as shown in the following
example:
>>> me.email
Note: When you update a resource, the change is immediately sent to the IBM
Cloud Orchestrator.
The IBM Cloud Orchestrator command-line interface keeps a local cache of the
properties of the resource that are not relationship-based. This cache is
automatically refreshed whenever any of these properties are updated, and can be
manually refreshed at any time using the refresh() method.
If you try to perform operations not allowed by the product, a Jython exception is
raised, as shown in the following example:
>>> me.id = 143
Traceback (innermost last):
File "<console>", line 1, in ?
AttributeError: can't set attribute
Some errors are detected by the IBM Cloud Orchestrator command-line interface
and others are detected by the IBM Cloud Orchestrator.
If you need more information about a particular resource, you can pass the
resource to the IBM Cloud Orchestrator command-line interface help function, as
shown in the following example:
>>> everyone = deployer.everyone()
>>> help(everyone)
Most resources defined to IBM Cloud Orchestrator automatically track the time
and date they were created and last modified. Within the IBM Cloud Orchestrator,
these timestamps are tracked as the number of milliseconds since January 1, 1970
UTC. Because the Jython time and date libraries define timestamps as the number
of seconds since this epoch, the command-line interface code converts the number
of milliseconds returned from the IBM Cloud Orchestrator to seconds. The IBM
Cloud Orchestrator command-line interface formats these timestamps in the local
timezone whenever a string representation is generated for a resource object. The
following example retrieves the first ipgroup resource and displays the created
property for it, then the entire object is displayed from the interactive mode:
>>> ipg = deployer.ipgroups[0]
>>> ipg.created
1.242243975898E9
>>> ipg
{
"created": May 13, 2009 3:46:15 PM,
"gateway": "10.0.0.1",
"id": 2,
"ips": (nested object),
"name": "ten",
"netmask": "255.0.0.0",
"networks": (nested object),
"primarydns": "10.0.0.2",
"secondarydns": "10.0.0.3",
"subnetaddress": "10.0.0.0",
"updated": May 13, 2009 3:46:15 PM
}
Many types of IBM Cloud Orchestrator resources also have status attributes that
reflect the current and intended status of the resource. The IBM Cloud Orchestrator
manages these status values and automatically adds an additional attribute with
_text appended to the name and containing a textual representation of this status,
as shown in the following example:
>>> ipg.ips[0].currentstatus
RM01017
>>> ipg.ips[0].currentstatus_text
Inactive
There are different types of relationships among the types of resources defined to
IBM Cloud Orchestrator. When an individual resource is related to other resources,
additional attributes are added to the resource object pointing to the related
resource objects. If the relationship is to a single other resource, the attribute is
named the same as the type of other resource (singular) and its value is the other
resource object. If the relationship is to multiple other resources, the name of the
attribute is the type of the other resource (plural) and its value is a resource
collection. In the following example, the relationship between an IP group and the
IP addresses it contains is represented by the ips property on the IP group object.
>>> ipg = deployer.ipgroups[0]
>>> ipg
{
"created": May 13, 2009 3:46:15 PM,
"gateway": "10.0.0.1",
"id": 2,
"ips": (nested object),
"name": "ten",
"netmask": "255.0.0.0",
"networks": (nested object),
"primarydns": "10.0.0.2",
"secondarydns": "10.0.0.3",
"subnetaddress": "10.0.0.0",
"updated": May 13, 2009 3:46:15 PM
}
>>> ipg.ips
[
{
"created": May 13, 2009 3:47:54 PM,
"currentmessage": None,
"currentmessage_text": None,
"currentstatus": "RM01017",
"currentstatus_text": "Inactive",
"id": 5,
"ipaddress": "10.1.2.3",
"ipgroup": (nested object),
"updated": May 13, 2009 3:47:54 PM
},
{
"created": May 13, 2009 3:47:54 PM,
"currentmessage": None,
"currentmessage_text": None,
"currentstatus": "RM01017",
"currentstatus_text": "Inactive",
"id": 6,
"ipaddress": "10.1.2.4",
"ipgroup": (nested object),
"updated": May 13, 2009 3:47:54 PM
}
]
Note: In the previous example the value of the ips property is a list of IP address
objects.
Conversely, an IP address is related to a single IP group. This relationship is
represented by the ipgroup property on the IP object, as shown in the following
example:
>>> ip = ipg.ips[0]
>>> ip
{
"created": May 13, 2009 3:47:54 PM,
"currentmessage": None,
"currentmessage_text": None,
"currentstatus": "RM01017",
"currentstatus_text": "Inactive",
"id": 5,
"ipaddress": "10.1.2.3",
"ipgroup": (nested object),
"updated": May 13, 2009 3:47:54 PM
}
>>> ip.ipgroup
{
"created": May 13, 2009 3:46:15 PM,
"gateway": "10.0.0.1",
"id": 2,
"ips": (nested object),
"name": "ten",
"netmask": "255.0.0.0",
"networks": (nested object),
"primarydns": "10.0.0.2",
"secondarydns": "10.0.0.3",
"subnetaddress": "10.0.0.0",
"updated": May 13, 2009 3:46:15 PM
}
The IBM Cloud Orchestrator package defines one of these types of resource
collections for each type of resource that IBM Cloud Orchestrator manages. The
previous example also shows another type of resource collection. When a resource
is related to other resources of a given type, the resources to which it is related are
represented as a resource collection. In the previous output, for example, the
networks attribute on each IP group represents the networks attached to the IP
group and the ips attribute represents the IP addresses defined within the IP
group. These types of resource collections can only be accessed through a resource
object.
Purpose
This topic describes the methods shared by all IBM Cloud Orchestrator resources.
For more information about resources, see Resources on the command line on
page 843.
The help for each type of resource indicates what methods and properties are
defined for that type of resource. To get additional help on a particular method,
use the help function with the method name appended to the resource in one of
the following ways:
v You can use the generic type of the resource, as shown in the following example:
>>> help(deployer.group)
>>> help(deployer.group.refresh)
v You can use a specific resource instance, as shown in the following example:
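A minimal sketch, assuming a group resource such as the everyone group that is retrieved elsewhere in this reference:
>>> mygroup = deployer.everyone()
>>> help(mygroup)
>>> help(mygroup.refresh)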
Properties
Each IBM Cloud Orchestrator resource is represented by a Jython object in the
command-line interface. Attributes of the resource are represented as properties of
the Jython object and can be accessed and updated using the typical Jython
mechanisms. Use the Jython dot operator or built-in getattr() function to obtain
the value of a property, as shown in the following example:
>>> ipg.name
ten
>>> getattr(ipg, 'name')
ten
The Jython dot operator or built-in setattr() function can be used to update the
value of a property, as shown in the following example:
>>> ipg.primarydns = '10.0.0.10'
>>> ipg.primarydns
10.0.0.10
>>> setattr(ipg, 'name', 'new name for ten')
>>> ipg.name
new name for ten
The Jython built-in hasattr() function can be used to determine if a resource has a
given property, as shown in the following example:
>>> hasattr(ipg, 'netmask')
1
>>> hasattr(ipg, 'undefinedproperty')
0
Different types of resources have different properties defined. See the interactive
help that describes how to use specific types of resources in the command-line
interface for more information about the properties for each type of resource.
Command-line interface resource object reference on page 740 also provides a
listing of resources and resource collections with links to more information about
them.
Methods
IBM Cloud Orchestrator resource methods include:
__contains__(item)
Indicates if this resource object has a value for the specified attribute. This
method is invoked implicitly by the Jython in operator, as shown in the
following example:
>>> 'address' in ipg
1
>>> 'foo' in ipg
0
__delattr__(name)
Removes an attribute for this resource in IBM Cloud Orchestrator. The update
is sent immediately to the product and the cached attributes are updated if the
request is successful. This method is invoked implicitly by the Jython del
operator, as shown in the following example:
>>> ipg
{
"created": May 13, 2009 3:46:15 PM,
"gateway": "10.0.0.1",
"id": 2,
"ips": (nested object),
"name": "ten",
"netmask": "255.0.0.0",
"networks": (nested object),
"primarydns": "10.0.0.2",
"secondarydns": "10.0.0.3",
"subnetaddress": "10.0.0.0",
"updated": May 13, 2009 3:46:15 PM
}
>>> del ipg.secondarydns
>>> ipg
{
"created": May 13, 2009 3:46:15 PM,
"gateway": "10.0.0.1",
"id": 2,
"ips": (nested object),
"name": "ten",
"netmask": "255.0.0.0",
"networks": (nested object),
"primarydns": "10.0.0.2",
"secondarydns": None,
"subnetaddress": "10.0.0.0",
"updated": May 14, 2009 10:11:27 AM
}
delete()
Deletes a resource from the IBM Cloud Orchestrator, as shown in the following
example:
>>> ipg in deployer.ipgroups
1
>>> ipg.delete()
>>> ipg in deployer.ipgroups
0
__eq__(other)
Indicates if another resource object represents the same resource as this object.
Note: This method tests if the two objects see the same resource on the IBM
Cloud Orchestrator. It does not check for a match between the cached
attributes of the two objects.
This method is invoked implicitly by the Jython == operator and other
situations in which Jython determines if two objects are equal, as shown in the
following example:
>>> ipg == deployer.ipgroups[0]
1
>>> ipg == deployer.ipgroups[1]
0
__hash__()
Called implicitly by Jython when a resource object is used as a key in a
dictionary (dict) or when the built-in hash() function is called with a resource
object. This method returns a hash value for the resource.
Note: If two resource objects represent the same resource, the __hash__()
functions always return the same value. You cannot make any other
assumptions about the value returned by this function.
The __hash__() function is shown in the following example:
>>> hash(ipg)
1929121056
isStatusTransient()
Indicates if the currentstatus of this resource is transient. A status is
considered transient if IBM Cloud Orchestrator eventually brings the resource
out of this state without any user interaction. This method is invoked with no
parameters and returns the True or False values.
If the resource object does not have a currentstatus property, an exception is
raised.
Note: This method examines the cached value of the currentstatus property.
If you are polling the object to check its status, then use the refresh() method
to update the currentstatus from the product, as shown in the following
example:
>>> while myvirtualsystem.refresh().isStatusTransient():
...     time.sleep(5)
refresh()
Updates the attributes of a resource. To reduce network traffic, the IBM Cloud
Orchestrator command-line interface caches a local copy of the attributes, but
not the relationships, of a resource. Use this method to update resource
attributes with current data from the IBM Cloud Orchestrator, as shown in the
following example:
>>> ipg.gateway
10.0.0.1
(change ipgroup gateway from a different CLI or GUI)
>>> ipg.gateway
10.0.0.1
>>> ipg.refresh()
{
"created": May 13, 2009 3:46:15 PM,
"gateway": "10.0.0.43",
"id": 2,
"ips": (nested object),
"name": "ten",
"netmask": "255.0.0.0",
"networks": (nested object),
"primarydns": "10.0.0.2",
"secondarydns": None,
"subnetaddress": "10.0.0.0",
"updated": May 14, 2009 10:27:23 AM
}
>>> ipg.gateway
10.0.0.43
__repr__()
This method is invoked implicitly by Jython when an expression entered in
interactive mode returns a resource object or when a resource object is passed
to the Jython repr() function. It returns a representation of the resource, as
shown in the following example:
>>> ipg
{
"created": May 13, 2009 3:46:15 PM,
"gateway": "10.0.0.1",
"id": 2,
"ips": (nested object),
"name": "ten",
"netmask": "255.0.0.0",
"networks": (nested object),
"primarydns": "10.0.0.2",
"secondarydns": "10.0.0.3",
"subnetaddress": "10.0.0.0",
"updated": May 26, 2009 10:15:56 PM
}
__str__()
Returns a string representation of this resource. Attributes representing other
resources or resource collections are shown as (nested object) to avoid
recursive loops. Attributes representing timestamps are automatically
formatted in the local time zone. This method is called implicitly when a
resource is printed, for example if a Jython expression entered in interactive
mode returns a resource. It can also be invoked explicitly by passing a resource
to the Jython str() function.
>>> ipg
{
"created": May 13, 2009 3:46:15 PM,
"gateway": "10.0.0.1",
"id": 2,
"ips": (nested object),
"name": "ten",
"netmask": "255.0.0.0",
"networks": (nested object),
"primarydns": "10.0.0.2",
"secondarydns": "10.0.0.3",
"subnetaddress": "10.0.0.0",
"updated": May 14, 2009 10:33:17 AM
}
__unicode__()
Returns a Jython Unicode string containing a string representation of the
resource.
waitFor(condition, maxWait, interval)
Waits for the specified condition to become true. If no condition is supplied, the
default behavior is to refresh the cached properties of the
resource, and wait until the resource enters a non-transient state.
Note: This default value is only useful for resources that have a
currentstatus property. For other types of resources, you must
override the default condition with one of your own.
maxWait
The maximum amount of time to wait, in seconds, for the specified
condition to become true. Fractional seconds can be specified by
supplying a floating point value. A negative value causes this method
to wait indefinitely. The default value is -1.
interval
The interval at which to check the specified condition. The interval is
specified in seconds. Fractional seconds can be specified by supplying
a floating point value. The default value is 10, which causes the
condition to be evaluated once every 10 seconds.
The waitFor() method returns the value returned by the condition the last
time it was invoked. Arbitrary conditions can be supplied to this method, but
it is most often used to make a script wait for completion of a long-running
background process in IBM Cloud Orchestrator, such as importing a virtual
image or deploying a virtual system instance, as shown in the following
example:
>>> myvs = mypattern.runInCloud(...)
>>> myvs.waitFor()
>>> # myvs is now in either a started or failed state
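To override the default condition or the timing, a call might look like the
following sketch (passing the condition positionally and using the maxWait and
interval keyword arguments is an assumption based on the parameter descriptions
above):
>>> myvs.waitFor(lambda: not myvs.refresh().isStatusTransient(), maxWait=1800, interval=30)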
Related concepts:
Resources on the command line on page 843
Any IBM Cloud Orchestrator functional object is a resource object on the
command-line interface. Within the command-line interface, Jython objects are
used to represent these resources. The IBM Cloud Orchestrator command-line
interface manages different types of resources, for example hypervisors, patterns,
virtual images, and virtual system instances.
Related information:
Jython
Python documentation
Purpose
This topic describes the methods shared by all IBM Cloud Orchestrator resource
collections. For more information about resource collections, see Resource
collections on the command line on page 847. The provided examples assume the
following IP groups are defined:
>>> deployer.ipgroups
[
{
"created": May 13, 2009 3:46:15 PM,
"gateway": "10.0.0.1",
"id": 2,
"ips": (nested object),
"name": "ten",
"netmask": "255.0.0.0",
"networks": (nested object),
"primarydns": "10.0.0.2",
"secondarydns": "10.0.0.3",
"subnetaddress": "10.0.0.0",
"updated": May 14, 2009 1:28:19 PM
},
{
"created": May 14, 2009 10:20:56 AM,
"gateway": "192.168.0.1",
"id": 6,
"ips": (nested object),
"name": "192.168.0.0",
"netmask": "255.255.255.0",
"networks": (nested object),
"primarydns": "192.168.0.2",
"secondarydns": "192.168.0.3",
"subnetaddress": "192.168.0.0",
"updated": May 14, 2009 1:28:52 PM
}
]
Methods
Resource collections methods include:
add(other)
Adds the specified resource or resources to this collection. This method accepts
a single parameter that can be either a resource or a list of resources. This
method has no return value; exceptions are raised to indicate any problems
adding the resources.
Note: This method does not create new resources. All resources passed to this
method must exist.
The following example shows the add(other) method:
>>> mygroup = deployer.groups.mygroup[0]
>>> joe = deployer.users.joe[0]
>>> mygroup.users.add(joe)
Note: For relationship-based resource collections, the left shift operator ("<<")
can be used as an alias for the add() method, as shown in the following
example:
>>> mygroup.users << joe
__contains__(item)
Indicates if this resource collection contains the specified item. Because
resource collections contain resource objects, this method returns false unless
the item parameter is a resource object of the correct type. This method is
invoked implicitly by the Jython in operator, as shown in the following
example:
>>> ipg = deployer.ipgroups[0]
>>> ipg in deployer.ipgroups
1
>>> ipg in deployer.hypervisors
0
create(other)
Creates a resource or new resources and places them in this collection. The
attributes for the new resources can be specified in any of the following ways:
v As a dict object with the required keys, as shown in the following example:
>>> deployer.groups.create({'name': 'new user group',
'description': 'description of new group'})
This method returns a resource object for the newly created resource, or a list
of resource objects if multiple resources were created.
Note: For type-based resource collections, the left shift operator, <<, can be
used as an alias for the create() method, as shown in the following example:
>>> deployer.groups << { 'name': 'new user group',
'description': 'description of new group' }
delete(other)
Deletes a resource in this collection. All information about this resource is
deleted from IBM Cloud Orchestrator. The resource to be deleted can be
specified in any of the following ways:
v As an int or long that specifies the ID of the resource to be deleted, as
shown in the following example:
>>> deployer.patterns.delete(17)
__delitem__(key)
Deletes an item from this resource collection. For resource collections based on
relationships, this removes the relationship between the resources; for resource
collections based on types, this method deletes the definition of the resource on
the IBM Cloud Orchestrator. The key parameter can be any type recognized by
the delete() or remove() methods.
The __delitem__(key) method is invoked implicitly by the Jython del
statement, as shown in the following example:
>>> for ipg in deployer.ipgroups:
...     print "id %d: %s" % (ipg.id, ipg.name)
...
id 2: ten
id 6: 192.168.0.0
>>> del deployer.ipgroups[6]
>>> for ipg in deployer.ipgroups:
...     print "id %d: %s" % (ipg.id, ipg.name)
...
id 2: ten
__getattr__(name)
Searches for a resource in the resource collection by name. Returns a list of
resource objects that matched the search criteria and returns an empty list if
none are found. For most types of resources, the string is matched against the
name of the resource.
Note: This match is a partial match; therefore, the returned list of resources can
include any name that contains the specified string anywhere in the name. If
any of the resources exactly match the specified string, they are provided at the
beginning of the returned list.
The descriptions for various types of resources specify if the __getattr__()
method searches on a different attribute. This method is invoked implicitly by
the Jython dot operator, as shown in the following example:
>>> deployer.ipgroups.te
[{
"created": May 13, 2009 3:46:15 PM,
"gateway": "10.0.0.1",
"id": 2,
"ips": (nested object),
"name": "ten",
"netmask": "255.0.0.0",
"networks": (nested object),
"primarydns": "10.0.0.2",
"secondarydns": "10.0.0.3",
"subnetaddress": "10.0.0.0",
"updated": May 14, 2009 1:28:19 PM
}]
__getitem__(key)
Returns a single resource from the collection. Unless otherwise indicated for a
specific type of resource collection, this method recognizes two types of key
values. If the value of the key parameter is an integer or long value, it is used
as an index into the collection and the specified item is returned. If the value
of the key parameter is a string, this method behaves as __getattr__(key).
This method is invoked implicitly by Jython when square brackets are used
with a resource collection, as shown in the following example:
>>> deployer.ipgroups[1].subnetaddress
192.168.0.0
>>> deployer.ipgroups['en']
[{
"created": May 13, 2009 3:46:15 PM,
"gateway": "10.0.0.1",
"id": 2,
"ips": (nested object),
"name": "ten",
"netmask": "255.0.0.0",
"networks": (nested object),
"primarydns": "10.0.0.2",
"secondarydns": "10.0.0.3",
"subnetaddress": "10.0.0.0",
"updated": May 14, 2009 1:28:19 PM
}]
__iter__()
Returns an iterator that can be used to iterate over all resources in this
collection. This method is invoked implicitly by numerous Jython idioms
including:
v list comprehensions
v the for..in construct
v filter() function
v iter() function
v map() function
v reduce() function
v zip() function
The iterator is shown in the following example:
>>> [ ipg.name for ipg in deployer.ipgroups ]
['ten', '192.168.0.0']
>>> for ipg in deployer.ipgroups:
...     print ipg.subnetaddress
...
10.0.0.0
192.168.0.0
>>> filter(lambda ipg: ipg.name != ipg.subnetaddress, deployer.ipgroups)
[{
"created": May 13, 2009 3:46:15 PM,
"gateway": "10.0.0.1",
"id": 2,
"ips": (nested object),
"name": "ten",
"netmask": "255.0.0.0",
"networks": (nested object),
"primarydns": "10.0.0.2",
"secondarydns": "10.0.0.3",
"subnetaddress": "10.0.0.0",
"updated": May 14, 2009 1:28:19 PM
}]
__len__()
Returns the number of resources in the collection. In addition to the Jython
len() function, this method is invoked implicitly when a resource collection is
used in a boolean context. A resource collection that contains no resources is
considered false; a collection that contains at least one resource is considered
true, as shown in the following example:
>>> len(deployer.ipgroups)
2
>>> if deployer.ipgroups:
...     print "true"
... else:
...     print "false"
...
true
>>> len(deployer.hypervisors)
0
>>> if deployer.hypervisors:
...     print "true"
... else:
...     print "false"
...
false
list(filt)
Returns a list of resources in this collection. An optional dict parameter can be
supplied to filter the list of resources returned. The keys and values in the dict
parameter are passed to IBM Cloud Orchestrator. The supported keys and
values within the dict parameter depend on the type of resource collection. The
list(filt) method is shown in the following example:
>>> deployer.ipgroups.list({'name': 't'})
[{
"created": May 13, 2009 3:46:15 PM,
"gateway": "10.0.0.1",
"id": 2,
"ips": (nested object),
"name": "ten",
"netmask": "255.0.0.0",
"networks": (nested object),
"primarydns": "10.0.0.2",
"secondarydns": "10.0.0.3",
"subnetaddress": "10.0.0.0",
"updated": May 14, 2009 1:28:19 PM
}]
remove(other)
Removes a resource from this collection.
Note: The resource is not deleted from IBM Cloud Orchestrator; it is just
dissociated from this collection.
The resources to be removed can be specified in any of the following ways:
v As an int or long containing the ID of the resource to be removed.
v As the resource object for the resource to be removed, as shown in the
following example:
>>> mygroup = deployer.groups.mygroup[0]
>>> joe = deployer.users.joe[0]
>>> mygroup.users.remove(joe)
Note: For relationship-based resource collections, the right shift operator, >>,
can be used as an alias for the remove() method, as shown in the following
example:
>>> mygroup.users >> joe
__repr__()
This method is invoked implicitly by Jython when an expression entered in
interactive mode returns a resource object or when a resource object is passed
to the Jython repr() function. It returns a string representation of the resource
collection, as shown in the following example:
>>> deployer.ipgroups
[
{
"created": May 13, 2009 3:46:15 PM,
"gateway": "10.0.0.1",
"id": 2,
"ips": (nested object),
"name": "ten",
"netmask": "255.0.0.0",
"networks": (nested object),
"primarydns": "10.0.0.2",
"secondarydns": "10.0.0.3",
"subnetaddress": "10.0.0.0",
"updated": May 14, 2009 1:28:19 PM
},
{
"created": May 14, 2009 2:13:59 PM,
"gateway": "192.168.0.1",
"id": 7,
"ips": (nested object),
"name": "192.168.0.0",
"netmask": "255.255.255.0",
"networks": (nested object),
"primarydns": "192.168.0.2",
"secondarydns": "192.168.0.3",
"subnetaddress": "192.168.0.0",
"updated": May 14, 2009 2:13:59 PM
}
]
__str__()
Returns a string representation of the resource collection. The returned string is
readable, though it might be a lengthy string if the resource collection contains
many resources. This method is called implicitly when a resource collection is
printed, for example if a Jython expression entered in interactive mode returns
a resource collection as was seen in the earlier example. It can also be invoked
explicitly by passing a resource collection to the Jython str() function.
>>> print str(deployer.ipgroups)
[
{
"created": May 13, 2009 3:46:15 PM,
"gateway": "10.0.0.1",
"id": 2,
"ips": (nested object),
"name": "ten",
"netmask": "255.0.0.0",
"networks": (nested object),
"primarydns": "10.0.0.2",
"secondarydns": "10.0.0.3",
"subnetaddress": "10.0.0.0",
"updated": May 14, 2009 1:28:19 PM
},
{
"created": May 14, 2009 2:13:59 PM,
"gateway": "192.168.0.1",
"id": 7,
"ips": (nested object),
"name": "192.168.0.0",
"netmask": "255.255.255.0",
"networks": (nested object),
"primarydns": "192.168.0.2",
"secondarydns": "192.168.0.3",
"subnetaddress": "192.168.0.0",
"updated": May 14, 2009 2:13:59 PM
}
]
__unicode__()
Returns a Jython Unicode string containing a string representation of the
resource collection. This method is called implicitly when a resource collection
is passed to the Jython unicode() function or used in a context that requires a
Unicode string, as shown in the following example:
>>> unicode(deployer.ipgroups)
Related concepts:
Resources on the command line on page 843
Any IBM Cloud Orchestrator functional object is a resource object on the
command-line interface. Within the command-line interface, Jython objects are
used to represent these resources. The IBM Cloud Orchestrator command-line
interface manages different types of resources, for example hypervisors, patterns,
virtual images, and virtual system instances.
Related information:
Jython
Python documentation
Usage
Use the wizard class as part of the create() method of a resource collection to
create a resource. The wizard class presents a series of prompts for information that
is used when creating a resource object.
The wizard object is supported by resource collection create methods as:
A temporary wizard object
>>> deployer.<resource_collection>.create(deployer.wizard())
After you enter the information requested by a prompt, press Enter to proceed to
the next prompt. Prompts that are not required are indicated by (optional) in the
prompt. If you do not want to enter a value for an optional prompt, press Enter to
advance to the next prompt. For some prompts, a list of possible values is
available; the prompt for these fields includes (* to select from list). If you enter *,
a list of values is displayed. Enter the number associated with the option that you
want to select.
>>> w = deployer.wizard()
>>> deployer.virtualsystems.create(w)
Enter ?? for help using the wizard.
name: MyVirtualSystem
pattern (* to select from list): *
1. MyPattern1
2. MyPattern2
3. MyPattern3
pattern (* to select from list):1
Examples
See the following example of the screen output when creating a user object using
the wizard class.
>>> w = deployer.wizard()
>>> deployer.users.create(w)
Enter ?? for help using the wizard.
username: joeuser
fullname: Joe User
password (optional):
email: [email protected]
{
"clouds": (nested object),
"currentmessage": "RM02013",
"currentmessage_text": "User has not logged in yet",
"currentstatus": "RM01062",
"currentstatus_text": "Inactive",
"email": "[email protected]",
"fullname": "Joe User",
"groups": (nested object),
"id": 2,
"parts": (nested object),
"password": (write-only),
"patterns": (nested object),
"roles": (nested object),
"scripts": (nested object),
"username": "joeuser",
"virtualimages": (nested object),
"virtualsystems": (nested object)
}
Methods
toDict
If a wizard object was explicitly constructed, use the toDict() method of the
wizard class to create a dictionary (dict) object with the required keys. The
following example shows how to use the toDict() method to create a dict
object as a continuation of the wizard example.
>>> w.toDict()
{'fullname': u'Joe User', 'email': '[email protected]', 'username': u'joeuser'}
Related concepts:
Resource collections on the command line on page 847
IBM Cloud Orchestrator manages different types of resources, for example
hypervisors, patterns, virtual images, and virtual system instances. These resources
can be collected into groups of like objects called resource collections. Within the
command-line interface, Jython objects are used to represent these resource
collections.
ACL object
You can use the access control list (ACL) object to set and control user access for
other IBM Cloud Orchestrator resources.
Purpose
Use the access control list (ACL) object to set and control user access for the
following objects:
AddOns For more information about Add-ons, see AddOn command-line interface
reference on page 740.
Clouds For more information about clouds, see Cloud group command-line
interface reference on page 745.
EnvironmentProfiles
For more information about environment profiles, see Environment
profiles on the command-line interface on page 749.
Hypervisors
For more information about hypervisors, see Hypervisor command-line
interface reference on page 758.
Patterns
For more information about patterns, see Virtual system patterns (classic)
command-line interface reference on page 818.
Scripts
For more information about scripts, see Script package command-line
interface reference on page 777.
Virtual images
For more information about virtual images, see Virtual images
command-line interface reference on page 792.
Virtual systems
For more information about virtual systems, see Virtual system instances
(classic) command-line interface reference on page 804.
ACL object
The ACL object represents the ACL associated with an IBM Cloud Orchestrator
resource. The IBM Cloud Orchestrator manages access to resources with a
hierarchical set of permissions. These permissions are represented by constants in
the IBM Cloud Orchestrator package. From the least access to greatest access, these
permissions are:
NO_PERMISSIONS
The user cannot access the resource.
READ_PERMISSION
The user can view the resource and use it in a read-only manner, but
cannot alter the resource.
UPDATE_PERMISSION
In addition to viewing and using the resource, the user is permitted to
alter the resource.
CREATE_PERMISSION
Typically applied to collections of resources, with this permission the user
can create new resources.
DELETE_PERMISSION, ALL_PERMISSIONS
The user is granted full access to the resource.
ACL objects are accessed using the acl property of the resource to which they apply,
as shown in the following example:
>>> mypattern = deployer.patterns[0]
>>> mypattern.acl
{
(user cbadmin): all
}
ACL methods
The ACL object provides the following methods:
check(entity)
Queries the IBM Cloud Orchestrator to determine what permissions the
specified user has been granted to the resource associated with this ACL.
The following example shows this method:
>>> deployer.patterns[0].acl.check(deployer.self())
__contains__(item)
Indicates if a specific permission has been defined for the specified user, as
shown in the following example:
>>> deployer.users['user1'] in deployer.virtualimages[0].acl
__delitem__(key)
Removes any explicit permissions set for the specified user for this
resource. This method is called implicitly by the Jython del statement, as
shown in the following example:
>>> user = deployer.users['user2'][0]
>>> del deployer.patterns[0].acl[user]
__getitem__(key)
Returns the permission explicitly set for the specified user for this resource.
This method is started implicitly when a user is used as an index to an
ACL, as shown in the following example:
>>> deployer.virtualimages[0].acl[deployer.everyone()]
__iter__()
This method is started implicitly when you reference an ACL object in a
context that requires iterating over all the entries. This method is also
started implicitly when you are explicitly passing the ACL object to the
Jython iter() function. The following example shows this method:
>>> for userorgroup in myvirtualsystem.acl:
...     print userorgroup.name
__len__()
Returns the number of permissions explicitly set for this resource, as
shown in the following example:
>>> len(deployer.scripts[0].acl)
refresh()
Refreshes the cached ACL entries with current data from the IBM Cloud
Orchestrator.
__repr__()
This method is started implicitly by Jython when an expression entered in
interactive mode returns an ACL or when an ACL is passed the Jython
repr() function. It returns a string representation of the resource. The
following example shows this method being implicitly started:
>>> deployer.scripts[0].acl
__setitem__(key, value)
Sets an explicit ACL for the specified user. This method is started implicitly
when you use the []= construct, as shown in the following example:
>>> myscript.acl[deployer.users['user2']] = deployer.READ_PERMISSION
The value specified inside the square brackets must be a User object. The
value to the right of the equal sign must be one of the following values:
v deployer.NO_PERMISSIONS
v deployer.READ_PERMISSION
v deployer.UPDATE_PERMISSION
v deployer.CREATE_PERMISSION
v deployer.DELETE_PERMISSION
v deployer.ALL_PERMISSIONS
__str__()
Returns a string representation of this ACL. This method is started
implicitly by Jython when a resource object is used as a value in a string
formatting operation. This method is also started implicitly by Jython
when it is passed as a parameter to the Jython str() function. The
following example shows this method:
>>> print "Here is the ACL: %s" % deployer.patterns[0].acl
>>> str(deployer.patterns[1].acl)
__unicode__()
Returns a string representation of this ACL. This method is started
implicitly by Jython when a resource object is used as a value in a string
formatting operation. This method is also started implicitly when it is
passed as a parameter to the Jython unicode() function. The following
example shows this method:
>>> print "Here is the ACL: %s" % deployer.patterns[0].acl
>>> str(deployer.patterns[1].acl)
For more information about working with resource objects, see the Resources,
resource collections, and methods on page 842 section.
Related concepts:
Resources on the command line on page 843
Any IBM Cloud Orchestrator functional object is a resource object on the
command-line interface. Within the command-line interface, Jython objects are
used to represent these resources. The IBM Cloud Orchestrator command-line
interface manages different types of resources, for example hypervisors, patterns,
virtual images, and virtual system instances.
Purpose
This topic provides a listing of utilities you can use with IBM Cloud Orchestrator
command-line interface to accomplish the following tasks:
deployer.cliversion returns an object with a string representation that is the version of
the command-line interface code, as shown in the following example:
1.0.0.0-11703
Note: This object is not a string, but can be converted to a string using the str()
command, as shown in the following example:
>>> if str(deployer.cliversion).startswith("1.0.0"):
...     print "running 1.0.0 deployer CLI"
...
running 1.0.0 deployer CLI
deployer.version returns an object with a string representation that is the version of
the deployer appliance. Note: This object is not a string, but you can convert it to a
string using str(), as shown in the following example:
>>> if str(deployer.version).find("1.0.0") >= 0:
...     print "deployer appliance is version 1.0.0"
...
deployer appliance is version 1.0.0
The deployer.waitFor() function returns the value obtained the last time the
condition was evaluated.
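For example, a script might wait for a deployed virtual system instance to leave
its transient state (a sketch; passing the condition, maxWait, and interval
arguments in this form is an assumption based on the resource waitFor() method
described earlier):
>>> myvs = mypattern.runInCloud(...)
>>> deployer.waitFor(lambda: not myvs.refresh().isStatusTransient(), maxWait=600, interval=15)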
For more information about working with resource objects, see the Resources,
resource collections, and methods on page 842 section.
Related concepts:
Resources on the command line on page 843
Any IBM Cloud Orchestrator functional object is a resource object on the
command-line interface. Within the command-line interface, Jython objects are
used to represent these resources. The IBM Cloud Orchestrator command-line
interface manages different types of resources, for example hypervisors, patterns,
virtual images, and virtual system instances.
Related information:
Jython
Python documentation
should be used by the product when generating the response data. You can
specify any of the languages supported by the product.
Authentication
The REST API only supports HTTP basic authentication. After successfully
authenticating, the server will return two cookies named zsessionid and
SimpleToken that should be included with subsequent HTTP requests that
are part of the same session. The same user IDs and passwords used to
access the GUI and the command-line interface are used to access the REST
API. The authorization of a user to perform actions on the product is
independent of the interface (GUI, command-line interface or REST API)
used to request the actions.
Content-Type
All the content included in an HTTP request body sent to the product must
be JSON encoded. You must include a "Content-Type: application/json"
header to indicate this for each request that includes any data.
X-IBM-Workload-Deployer-API-Version: 4.0.0.1
Every HTTP request to the product must include an "X-IBM-Workload-Deployer-API-Version: 4.0.0.1" header to indicate that your client expects the
REST API semantics described in this document.
domainName
When not using the default domain, the HTTP request must include in the
header "domainName:<yourDomainName>" to let the user be authenticated to
the <yourDomainName> domain.
projectName
When not using the default project, the HTTP request must include in the
header "projectName:<yourProjectName>" to let the user be authenticated
in the <yourProjectName> project.
The REST API only supports sending and receiving UTF-8 encoded data.
Ensure that your HTTP client is appropriately set to encode and decode character
data, including JSON data. All responses of REST requests in JSON format are
encoded in UTF-8.
Note: Key-value pairs that are only used by user interface clients are optional.
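The following minimal sketch shows how a client might set up these headers with
Python and the requests library (the host name, credentials, domain, and project
are placeholders, not values from this document):
import requests

# Placeholder host and credentials; replace with your IBM Cloud Orchestrator values.
base_url = "https://ico.example.com"
session = requests.Session()
session.auth = ("admin", "password")             # HTTP basic authentication
session.headers.update({
    "Content-Type": "application/json",          # required for requests that send data
    "X-IBM-Workload-Deployer-API-Version": "4.0.0.1",
    "domainName": "Default",                     # only needed when not using the default domain
    "projectName": "admin",                      # only needed when not using the default project
})

# The zsessionid and SimpleToken cookies returned after authentication are kept
# automatically by the session object and sent with subsequent requests.
response = session.get(base_url + "/orchestrator/v2/categories")
print(response.status_code, response.json())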
GET
URL pattern
https://hostname/resources/automation?parm1=value1
&parm2=value2...
Response
Return values
v 200 - OK
v 401 - Unauthorized
v 404 - Not found
human_service
implementation_type
GET
URL pattern
https://hostname/resources/automation/{id}
Response
JSON object for the specific orchestration action entry, based on the
ID provided in the query:
{ id:
name:
description:
created:
updated:
process:
process_app_id:
process_app_short_name:
process_app_name:
category:
operation_type:
apply_to_all_pattern:
event:
icon:
human_service:
human_service_app_id:
human_service_app_short_name:
human_service_app_name:
implementation_type:
ownerid:
pattern[
{
id,
patternId,
patternType
}
]
priority:
}
Return values
v 200 - OK
v 401 - Unauthorized
v 404 - Not found
Table 95. Add or update an entry in the orchestration actions REST API call
HTTP method
PUT
URL pattern
https://hostname/resources/automation/{id}
Response
Return values
DELETE
URL pattern
https://hostname/resources/automation/{id}
Response
Return values
v 204 - No content
v 401 - Unauthorized
v 404 - Not found
GET
URL pattern
https://hostname/resources/instances/{id}/
automation?parm1=value1&parm2=value2...
Response
Return values
GET
URL pattern
https://hostname/kernel/bpm/runbook/
Response
Return values
GET
URL pattern
https://hostname/kernel/bpm/runbook/runbook_id
Response
Return values
GET
URL pattern
https://hostname/kernel/bpm/humanService
Response
Return values
Get entries for a specific human service:
Use this REST call to retrieve information about a human service with an indicated
ID.
Available HTTP method
Table 101. Get information about a specific human service
HTTP method
GET
URL pattern
https://hostname/kernel/bpm/humanService/
<human_service_id>
Response
Return values
GET
URL pattern
https://hostname/kernel/bpm/task
Response
Return values
v requester - the requester of the process to which the pending activity belongs.
v serviceInstanceId - the ID of the associated virtual system instance, if the
approval is part of a triggered event or user action.
v taskDueDate - due date of the pending human activity.
v taskOverdue:
true - if the current date is later than the due date of the pending human
activity.
false - otherwise.
v taskPriority - the priority of the task as used in the underlying execution
engine.
v taskStatus - the status of the pending human activity as used in the underlying
execution engine.
v taskType - the type of human activity:
approval - for an approval request.
general - for a general human task.
v time - the time at which the process was triggered.
The following listing shows an example response with one pending task:
[
{
"relatedTo": "Sample_DeleteInstanceApproval",
"taskStatus": "Received",
"taskPriority": "Normal",
"taskOverdue": "true",
"id": "8",
"requester": "admin",
"taskDueDate": "2013-08-19T17:27:26Z",
"time": "2013-08-19T16:27:26Z",
"displayName": "Delete Instance Approval: Sample1",
"taskType": "approval",
"assignedToType": "group",
"operationContextId": "1007",
"domain": "Default",
"serviceInstanceId": null,
"assignedTo": "All Users",
"project": "admin"
}
]
Get entries for a specific Inbox item:
Use this REST call to retrieve information about an Inbox item with an indicated
ID.
Available HTTP method
Table 103. Get information about a specific Inbox item
HTTP method
GET
URL pattern
https://hostname/kernel/bpm/task/<task id>
Return values
GET
URL pattern
https://hostname/kernel/serviceInstance/{deployment-id}
Table 104. Get entries for a specific service instance entry with a specified deployment ID
REST API call (continued)
Response
Return values
v 200 - OK
v 404 - The service instance with the given ID does not exist
v 500 - Internal server error
v type - a string that is internal (non-localized). It has one of the following values:
SINGLE_IMAGE_DEPLOYMENT - for instances that are deployed through the Virtual
Images application.
TOPOLOGY - for instances that are deployed from a virtual system pattern.
APPLICATION - for instances that are deployed from a virtual application
pattern or shared service.
v status - a string that is internal (non-localized). In case of the TOPOLOGY or
SINGLE_IMAGE_DEPLOYMENT type, the same as for the virtual system state
machine. In case of other types, the same as for the virtual application state
machine.
v id - the URI to retrieve the most current version of the document.
v virtualSystemId - the URI to retrieve the virtual system if there is any that is
associated.
v virtualApplicationId - the URI to retrieve the virtual application instance if
there is any that is associated.
v virtualApplicationPatternId - the storehouse URI for the vApp pattern if there
is any that is associated.
v virtualSystemPatternId - the URI for the virtual system pattern that was used to
create the instance, if there is any that is associated.
v creator - the ID of the user who initiated the deployment of the instance. This
id is a resource id like /resources/users/1 for vSys and single image
deployments, and a storehouse id like /storehouse/admin/users/u-0 in other
cases.
v cloudGroup - the cloud group to which the virtual application instance was
deployed. The field can be empty for virtual system pattern deployments, or
single image deployments.
v params - a JSON object that contains custom parameters that were provided by
extension writers.
virtualMachine has the following attributes:
v name - a string identifier of the virtual machine. Typically, it does not match the
primary host name of the virtual machine. The identifier is unique within a
service instance.
v id - a unique string identifier of the virtual machine that is relative to the service
instance base url.
v cloudGroup - the URI of the cloud group where the virtual machine is deployed.
v networkInterfaces - network interfaces of the virtual machine.
v hostname - the primary host name of the virtual machine. If there are multiple
network interfaces, the primary host name depends on the implementation.
v virtualCpu - the number of virtual central processing units that is the number of
central processing units that the guest operating system on this virtual machine
sees.
v memory - the amount of memory that the guest operating system sees. The unit
of measurement is the mebibyte (1 MiB = 1,048,576 bytes).
v disk - the size of the primary/root disk. The unit of measurement is the
mebibyte (1 MiB = 1,048,576 bytes).
v imageId - the image that was used to create the virtual machine. The lifecycles of
the virtual machine and image are separate.
v partname - the name of the part in the associated virtual system pattern, from
which the virtual machine was instantiated.
v runtimeId - the ID of the virtual machine on the hypervisor. The format of this
string depends on the hypervisor.
v params - the custom parameters that were provided by extension writers.
networkInterface has the following attributes:
v ip - the IPv4 address in dotted decimal notation or the IPv6 address.
v hostname - a host name that should resolve to the given IP through DNS.
v ipgroup - the URI of the IP group that the address was allocated from.
role has the following attributes:
v name - the name of the role.
v type - the type of the role.
v endpoints - an array that contains an object for each endpoint.
v params - a JSON object that contains custom parameters that were provided by
extension writers.
service has the following attributes:
v name - a human-readable string that identifies the service.
v type - either a java-style package identifier or a tosca node type.
v params - contain custom parameters that were provided by extension writers. See
the details below:
It is a free-form JSON object that holds additional parameters that are required
by extension developers. The JSON object has the following limitations:
There is no support for JSON arrays, which are converted to strings.
All non-string simple types are converted to string when they are stored.
You can read and edit these objects to add data that their extension requires in
the context of a service instance. You can retrieve the data later from a Business
Process Manager process or other extension. This storage is backed by the
storehouse metadata APIs.
endpoint has the following attributes:
v name - a human-readable string that identifies the endpoint.
v URI - The URI that the endpoint points to.
The following listing shows an example response that can be retrieved by the
request:
{
"metaData": {
"cloudGroup": "\/resources\/clouds\/1",
"creator": "\/resources\/users\/2",
"id": "\/kernel\/serviceInstance\/1",
"name": "Test",
"params": {
"Hello": "World"
},
"status": "RM01005",
"type": "TOPOLOGY",
"virtualApplicationId": "",
"virtualApplicationPatternId": "",
"virtualSystemId": "\/resources\/virtualSystems\/1",
"virtualSystemPatternId": "\/resources\/patterns\/1"
},
"roles": [
],
"services": [
],
"virtualMachines": [
{
"cloudGroup": "\/resources\/clouds\/1",
"disk": 10240,
"hostname": "192-0-2-13.lightspeed.brhmal.sbcglobal.net",
"hypervisorid": "\/resources\/hypervisors\/PM-1",
"id": "1",
"memory": 2048,
"name": "965a92be-OS Node-Test-1",
"networkInterfaces": [
{
"hostname": "192-0-2-13.lightspeed.brhmal.sbcglobal.net",
"ip": "192.0.2.13",
"ipgroup": "\/resources\/ipgroups\/2"
}
],
"partname": "OS Node",
"runtimeId": "1b9417ca-6f66-42ec-a2da-661739d8e66d",
"virtualCpu": 1
}
]
}
GET
URL pattern
https://hostname/kernel/serviceInstance/{deployment-id}/
metaData/params/attribute.member.member
Response
Return values
v 200 - OK
v 500 - Internal Server Error
POST
URL pattern
https://hostname/kernel/serviceInstance/{deployment-id}/
metaData/params
Response
Return values
v 200 - OK
v 500 - Internal Server Error
Note: For this REST call, you need a Content-Type: application/json header.
The following listing shows a sample request body:
{"Hello":"World"}
DELETE
URL pattern
https://hostname/kernel/serviceInstance/{deployment-id}/
metaData/params/attribute.member.member
Response
Return values
v 200 - OK
v 500 - No query parameter is specified, or internal server error
Note: The REST call always returns a {"Status":"Ok"} response on a 200 return
value.
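The following sketch shows how these calls might be combined from a Python
client (the deployment ID, host, and credentials are placeholders; the attribute
addressing follows the attribute.member.member pattern shown above):
import json
import requests

session = requests.Session()
session.auth = ("admin", "password")                       # placeholder credentials
session.headers["X-IBM-Workload-Deployer-API-Version"] = "4.0.0.1"
session.headers["Content-Type"] = "application/json"

deployment_id = "1"                                        # placeholder deployment ID
params_url = ("https://ico.example.com/kernel/serviceInstance/%s/metaData/params"
              % deployment_id)

# Store a custom key-value pair on the service instance.
session.post(params_url, data=json.dumps({"Hello": "World"}))

# Read a single attribute back.
print(session.get(params_url + "/Hello").json())

# Remove the attribute again.
session.delete(params_url + "/Hello")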
GET
URL pattern
https://hostname/kernel/serviceInstance/{deployment-id}/
virtualMachines/{name}/params/attribute.member.member
Response
Return values
v 200 - OK
v 500 - Internal Server Error
POST
URL pattern
https://hostname/kernel/serviceInstance/{deploymentID}/
virtualMachines/{name}/params
Response
Return values
v 200 - OK
v 500 - Internal Server Error
Note: For this REST call, you need a Content-Type: application/json header.
The following listing shows a sample request body:
{"Hello":"World"}
DELETE
URL pattern
https://hostname/kernel/serviceInstance/{deploymentID}/
virtualMachines/{name}/params/attribute.member.member
Response
Return values
v 200 - OK
v 500 - No query parameter is specified, or internal server error
Note: The REST call always returns a {"Status":"Ok"} response on a 200 return
value.
GET
URL pattern
http://hostname:port/resources/instances/{instanceid}/
deploymentparameters
Response
JSON object with the deployment parameters for the virtual system
instance filtered according to the query parameters:
{
parameterclass:
parametername:
parametervalue:
partkey:
userconfigurable:
scriptpackagename:
scriptpackagetype:
}
Return values
v 200 - OK
v 400 - Not Found
v 500 - Unexpected Error
POST
URL pattern
http://hostname:port/resources/instances/{instanceid}/
deploymentparameters
Response
JSON object with the deployment parameters for the virtual system
instance that is filtered according to the query parameters:
{
parametername:
parametervalue:
partkey:
}
Return values
v 200 - OK
v 400 - Not Found
v 500 - Unexpected Error
Each linked resource has at least one link to itself, the first href property. An item
property follows with the actual resource representation. There can also be
additional links to other resources. A collection resource is a collection of linked
resources. In addition to the basic properties of a linked resource, a collection
resource also features specific properties for pagination. The following code
displays the structure of a collection resource:
{
"href": "https://host:9443/orchestrator/v2/collection",
"start": 10,
"limit": 10,
"total": 49,
"first": {
"href": "https://host:9443/orchestrator/v2/collection/?_limit=10&_start=0"
},
"previous": {
"href": "https://host:9443/orchestrator/v2/collection/?_limit=10&_start=0"
},
"next": {
"href": "https://host:9443/orchestrator/v2/collection/?_limit=10&_start=20"
},
"last": {
"href": "https://host:9443/orchestrator/v2/collection/?_limit=10&_start=39"
},
"items": [ ... ]
The start, limit and total properties enable you to display the correct number of
pages in a UI. The number of pages is the total size divided by the page size. You
can also choose to leverage the provided first, next, etc. links and can call them
directly from a UI to navigate the collection easily.
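As a minimal sketch, a client could walk such a collection by following the next
links until none remain (host, credentials, and endpoint are placeholders; the
session is assumed to carry the headers described in the previous section):
import requests

session = requests.Session()
session.auth = ("admin", "password")                       # placeholder credentials
session.headers["X-IBM-Workload-Deployer-API-Version"] = "4.0.0.1"

url = "https://ico.example.com/orchestrator/v2/categories"  # placeholder collection URL
items = []
while url:
    page = session.get(url).json()
    items.extend(page.get("items", []))
    nxt = page.get("next")
    url = nxt["href"] if nxt else None                     # follow the next link until the last page

print("retrieved %d of %d items" % (len(items), page["total"]))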
Request
Description
200 OK
201 Created
POST /collection
202 Accepted
POST /collection/{id}/
launch
204 No Content
DELETE /resource
401 Unauthorized
Any
403 Forbidden
Any
Any
Any
409 Conflict
POST /collection
Any
The following query parameters are supported:
v _start=n - start the returned list at index n (the default is 0).
v _limit=n - return at most n items (the default is 10).
v _sortby=abc - sort the returned items by the attribute abc (the default sort is by ID).
v _sort=asc | desc - sort in ascending or descending order (the default is ascending).
v _search=abc - return only items that contain the string abc.
v property=value - return only items whose property equals value.
Examples
v /orchestrator/v2/categories - Returns service catalog categories starting at
index 0 with a limit of 10, sorted by ID in ascending order.
v /orchestrator/v2/categories?_start=10 - Returns service catalog categories
starting at index 10 with a limit of 10, sorted by ID in ascending order.
v /orchestrator/v2/categories?_start=10&_limit=20 - Returns service catalog
categories starting at index 10 with a limit of 20, sorted by IDin ascending order.
v /orchestrator/v2/categories?_sortby=name - Returns service catalog categories
starting at index 0 with a limit of 10, sorted by name in ascending order.
v /orchestrator/v2/categories?_sortby=id&_search=virtual - Returns service
catalog categories containing the word "virtual" starting at index 0 with a limit
of 10, sorted by ID in ascending order.
v /orchestrator/v2/categories?name=OpenStack - Returns service catalog
categories whose name is "OpenStack" starting at index 0 with a limit of 10.
v /orchestrator/v2/categories?id=123&id=456 - Returns service catalog categories
with the ids 123 and 456 starting at index 0 with a limit of 10.
Category Response:
{
"href": "https://host:9443/orchestrator/v2/categories/4711",
"item": {
"id": 4711,
"isbuiltin": 0,
"icon": "Web Machine Category Icon:ge100_webcatalog_24",
"name": "Manage Virtual Machines",
"description": "Deploy, start, stop and virtual machines based on a single image."
}
}
Categories Response
{
"href": "https://host:9443/orchestrator/v2/categories/",
"start": 0,
"limit": 10,
"total": 9,
"first": {
"href": "https://host:9443/orchestrator/v2/categories/?_limit=10&_start=0"
},
"previous": null,
"next": null,
"last": {
"href": "https://host:9443/orchestrator/v2/categories/?_limit=10&_start=0"
},
"items": [ Category Response, ...., Category Response ]
}
Table 115.
v id - category id (Type: Number; Mandatory: no; Generated: yes). Automatically
assigned when a new category gets created.
v icon - icon name (Type: String; Mandatory: no; Generated: no).
v name - category name (Type: String; Mandatory: yes; Generated: no). Name of the
category.
v description - category description (Type: String; Mandatory: yes; Generated: no).
Description of the category.
v isbuiltin - built in (Type: Number; Mandatory: no).
Response
Category Response
Authorization
role:"admin"
GET: Get category
URL pattern
/orchestrator/v2/categories/{id}
Accepts
*
Content-Type
application/JSON
Normal Response Codes
200 OK
Error Response Codes
401 unauthorized
404 not found
500 internal server error
Response
Category Response
Authorization
no authorization
PUT: Update category
URL pattern
/orchestrator/v2/categories/{id}
Accepts
application/JSON
Content-Type
application/JSON
Normal Response Codes
200 OK
Error Response Codes
400 bad request if bad JSON was passed
401 unauthorized
404 not found
500 internal server error
Request
{
...
"name": "Manage Virtual Image",
"description": "Deploy, start, stop"
...
}
Response
Category Response
Authorization
role: "admin"
DELETE: Delete category
URL pattern
/orchestrator/v2/categories/{id}
Accepts
*
Normal Response Codes
204 no content
Error Response Codes
401 unauthorized
404 not found
500 internal server error
Authorization
role: "admin"
Offering attributes:
Attributes for the offering are displayed in this section.
Attributes
Table 116.
Attribute
Description
Type
Mandatory
Generated
Comment
id
service id
Number
no
yes
automatically assigned
when a new offering is
created
icon
icon name
String
no
no
name
offering name
String
yes
no
description
offering description
String
yes
no
description of the
offering
category
offering
Number
no
no
category id of this
offering
implementation_type
no
process_app_id
String
yes
no
process
BPM process
implementing the
offering or action
String
yes
no
defaults to
"ibm_bpm_process" if
not passed
Description
Type
Mandatory
Generated
human_service_app_id
String
no
no
human_service
Human service
implementing the User
Interface for the
offering or action
String
no
no
Comment
ownerid
no
operation_type
no
"offering",
"singleInstanceAction",
"multiInstanceAction"
instancetype
String
no
no
name of instance
provider
tags
List of
Strings
no
no
subset of tags of
instance provider
acl
List of ACL no
JSON
no
acl/domain
no
acl/project
acl/role
acl/use
acl/modify
acl/view
Offering instances:
A list of instances for the offering is described in this section.
Instances
GET: Lists offerings
URL pattern
/orchestrator/v2/offerings
Accepts
*
Content-Type
application/JSON
Normal Response Codes
200 OK
Error Response Codes
401 unauthorized
500 internal server error
Response
Offerings Response
Search Attributes
name, description
Filter Attributes
id, name, description, icon, human_service, human_service_app_id,
priority, created, updated, process, process_app_id, owner_id, category,
implementation_type, operation_type, instancetype
Authorization
role: admin or ACL with 'view' set to 'true' for given domain, project and
role
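For example, the search, filter, and sort parameters described earlier can be
combined on this endpoint (a sketch; host and credentials are placeholders, and
the item wrapper follows the linked-resource structure shown for categories):
import requests

session = requests.Session()
session.auth = ("admin", "password")                       # placeholder credentials
session.headers["X-IBM-Workload-Deployer-API-Version"] = "4.0.0.1"

# List offerings whose name or description contains "virtual", sorted by name.
response = session.get("https://ico.example.com/orchestrator/v2/offerings",
                       params={"_search": "virtual", "_sortby": "name", "_limit": 20})
for offering in response.json().get("items", []):
    print(offering["item"]["name"])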
POST: Creates an offering
URL pattern
/orchestrator/v2/offerings
Accepts
application/JSON
Content-Type
application/JSON
Normal Response Codes
201 created
Error Response Codes
400 bad request if bad JSON was passed or mandatory attributes were
missing
401 unauthorized
500 internal server error
Request
Offering Request
Response
Offering Response
Authorization
roles: "admin", "domain_admin"
POST: Creates an offering
URL pattern
/orchestrator/v2/offerings
Accepts
application/JSON
Content-Type
application/JSON
Normal Response Codes
201 created
Error Response Codes
400 bad request if bad JSON was passed or mandatory attributes were
missing
401 unauthorized
Authorization
role: admin or ACL with 'modify' set to 'true' for given domain, project and
role
DELETE: Delete an offering
URL pattern
/orchestrator/v2/offerings/{id}
Accepts
*
Normal Response Codes
204 no content
Error Response Codes
401 unauthorized
500 internal server error
Authorization
role: admin or ACL with 'modify' set to 'true' for given domain, project and
role
POST: Execute an offering
URL pattern
/orchestrator/v2/offerings/{id}/launch
Accepts
application/JSON
Content-Type
application/JSON
Normal Response Codes
202 accepted
Error Response Codes
400 bad request if bad JSON was passed
401 unauthorized
404 not found
500 internal server error
Request
Go to Launching an offering via offering API
Response
TaskResponse
Authorization
role: admin or ACL with 'use' set to 'true' for given domain, project and
role
GET: Get ACL entries for a given offering
URL pattern
/orchestrator/v2/offerings/{id}/acl
Accepts
*
Content-Type
application/JSON
Normal Response Codes
200 OK
Error Response Codes
401 unauthorized
404 not found
500 internal server error
Response
ACLs Response
Authorization
no authorization needed but result is restricted for the given domain,
project and role of the user
PUT: Update given acl for a given offering
URL pattern
/orchestrator/v2/offerings/{id}/acl
Accepts
application/JSON
Content-Type
application/JSON
Normal Response Codes
200
Error Response Codes
400 bad request if bad JSON was passed
401 unauthorized
404 not found
500 internal server error
Request
ACLs Request
Response
ACLS Response
Authorization
no authorization needed but the given ACL is adjusted to the given
domain, project and role of the user
GET: Get input parameters for a given offering
URL pattern
/orchestrator/v2/offerings/{id}/parameters
Accepts
*
Content-Type
application/JSON
Normal Response Codes
200 OK
2. The offering requires additional input data. In this case, you must decide
whether you want to use the IBM Cloud Orchestrator User Interface to gather
that data and start the process, or whether you provide the data as part of the
launch request.
Using the User Interface
If you want to use the IBM Cloud Orchestrator User Interface, perform the
following steps:
1. Issue the call to initiate the offering by performing the post request:
POST https://<IBM_Cloud_Orchestrator>:8443/orchestrator/v2/offerings/<offering-id>/launch
2. In the JSON response of that request find the redirect attribute that contains
the path to launch the IBM Cloud Orchestrator User Interface for the offering.
It looks like this: "redirect":"\/teamworks\/
executeServiceByName?processApp=SCONOVA
&serviceName=Deploy+Single+Virtual+Machine
&tw.local.operationContextId=3059"
3. Launch the URL:
https://<IBM_Cloud_Orchestrator>:8443/orchestrator/v2/offerings/<offering-id>/launch
You must also provide a POST body in that request describing the
InputParameterObject that is passed into the process. Pass the body a JSON
document in this format:
{"parm":{"OperationParameter":
"<variable type=\"Sample_BusinessObject\">
<field1 type=\"String\"><![CDATA[Hello]]><\/field1>
<field2 type=\"String\"><![CDATA[Phone]]><\/field2> <\/variable>"}}
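A minimal sketch of launching the offering with the input data supplied directly
in the request (host, offering ID, and credentials are placeholders; the body reuses
the format shown above):
import json
import requests

session = requests.Session()
session.auth = ("admin", "password")                       # placeholder credentials
session.headers.update({
    "Content-Type": "application/json",
    "X-IBM-Workload-Deployer-API-Version": "4.0.0.1",
})

offering_id = "123"                                        # placeholder offering ID
launch_url = ("https://ico.example.com/orchestrator/v2/offerings/%s/launch"
              % offering_id)

# The OperationParameter carries the serialized business object expected by the process.
body = {"parm": {"OperationParameter":
        '<variable type="Sample_BusinessObject">'
        '<field1 type="String"><![CDATA[Hello]]></field1>'
        '<field2 type="String"><![CDATA[Phone]]></field2></variable>'}}

response = session.post(launch_url, data=json.dumps(body))
print(response.status_code, response.json())               # 202 Accepted returns a task reference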
2. Use REST Interface for Business Process Manager - related resources to get
details about exposed items:
GET https://<IBM_Cloud_Orchestrator>:8443/rest/bpm/wle/v1/exposed
Use parts=all for more information about the process. Find the process with
itemID=process and processAppID=process_app_id:
{ "status":"200", "data":{ ..., "DataModel":{ ...} } }
"status" : "200",
"data" : {
"DataModel" : {
"properties" : { "message" : { "type" : "String", "isList" : false },
"returnFromRest" : { "type" : "String", "isList" : false }
},
"inputs" : {
"operationContext" : { "type" : "OperationContext", "isList" : false },
"inputParameterObject" : { "type" : "Sample_BusinessObject", "isList" : false }
},
...
"Sample_BusinessObject" : {
"properties" : { "field1" : { "isList" : false, "type" : "String },
"field2" : { "isList" : false, "type" : "String }
},
type" : "object",
"ID" : "12.2c079fa7-89a0-426c-a3c1-079be08930ac",
"isShared" : false
},
3. Within the response, search for Operation Parameter. The following is a sample
excerpt:
"OperationParameter" : "<variable type=\"MyRequest\"
<vpmoNumber type=\"Integer\"><![CDATA[116560]]><\/vpmoNumber>
<appId type=\"Integer\"><![CDATA[19073]]><\/appId>
<attuidNo type=\"String\"><![CDATA[dw945f]]><\/attuidNo>
<serverType type=\"NameValuePair\">
<name type=\"String\"><![CDATA[Test]]><\/name>
<value type=\"String\"><![CDATA[T]]><\/value>
<\/serverType>
The response contains information about the status of the request. For details on
possible values, refer to GET entries for a specific task. Sample response (excerpt)
from
REST GET https://<IBM_Cloud_Orchestrator>:8443/kernel/tasks/{id}
:
{
"updated_iso" : "2014-02-19T17:54:15+0100",
"description_message" : "PROCESS_COMPLETE",
"domain" : "Default",
"created" : 1392828461580,
"error" : { ... },
"serviceInstance" : {
"virtualMachines" : [{
"memory" : 4096,
"hypervisorid" : "\/resources\/hypervisors\/PM-1",
"hostname" : "SC-192-168-0-103.RegionOne.example.com",
....
}
],
....
},
"user" : "admin",
"parm" : {
"startPlanByPlugpointEventHandler" : "done",
"CUSTOM_PARM1": "abc",
"CUSTOM_PARM2": "xyz",
"OperationParameter" : "<variable ...<\/variable>",
"serviceInstanceId" : "282",
"plan" : { ... },
"processId" : "1356"
},
"created_iso" : "2014-02-19T17:47:41+0100",
"status_localized" : "TASKSTATUS_COMPLETED",
"error_message" : "BPM_PROCESS_COMPLETE",
"status" : "COMPLETED",
"eventTopic" : "com\/ibm\/orchestrator\/serviceinstance\/plan\/ibm_bpm_process",
"delayInSeconds" : 30,
"project" : "admin",
...
}
}
JSON Formats
Resource Type Request
{
"name" : "myprovider",
"displayname" : "My Provider",
"description" : "This is my provider",
"icon" : "Web Icon:glyphicons_266_flag",
"provider" : "com.ibm.orchestrator.core.instance.providers.myprovider.MyProvider",
"type" : "admin",
"tags" : ["enabled", "disabled"],
"detailsview" : {
"application" : "SCOABC",
"humanservice" : "Show My Provider Details"
},
"keyfields" : [{
"instanceattribute" : "displayname",
"header" : "Name"
}, {
"instanceattribute" : "description",
"header" : "Description"
}
]
}
"total": 3,
"first": "http://<hostname:port>/orchestrator/v2/instancetypes?_start=0&_limit=10",
"previous": null,
"next": null,
"last": "http://<hostname:port>/orchestrator/v2/instancetypes?_start=0&_limit=10",
"items":
[
Resource Type Response 1,...,Resource Type Response n
]
}
Instances
GET : Lists all resource types
URL pattern
/orchestrator/v2/instancetypes
Accepts
*/*
Content-Type
application/JSON
Normal Response Codes
200
Error Response Codes
500 internal server error
Response
Resource Type Response
Authorization
No authorization needed
POST: Create resource type
URL pattern
/orchestrator/v2/instancetypes/
Accepts
application/JSON
Content type
application/JSON
Normal Response Codes
201
Error Response Codes
401 unauthorized
409 conflict
500 internal server error
Request
Resource Type Request
Response
Resource Type Response
Authorization
role: admin
GET: Get one resource type
URL pattern
/orchestrator/v2/instancetypes/{name}
Accepts
*/*
Content-Type
application/JSON
Normal Response Codes
200
Error Response Codes
404 not found
500 internal server error
Response
Resource Type Response
Authorization
No authorization needed
PUT: Update a resource type
URL pattern
/orchestrator/v2/instancetypes/{name}
Accepts
application/JSON
Content-Type
application/JSON
Normal Response Codes
200
Error Response Codes
401 unauthorized
404 not found
500 internal server error
Request
Resource Type Request (partial)
Response
Resource Type Response
Authorization
role: admin
DELETE: Delete a resource type.
URL pattern
/orchestrator/v2/instancetypes/{name}
accepts
*/*
Content-Type
application/JSON
Normal Response Codes
204
Error Response Codes
401 unauthorized
404 not found
500 internal server error
Authorization
role: admin
GET : List instances of a given type.
URL pattern
/orchestrator/v2/instancetypes/{name}/instances
Accepts
*/*
Content-Type
application/JSON
Normal Response Codes
200
Error Response Codes
401 unauthorized
404 not found
500 internal server error
Response
Resource Instances Response
Authorization
Instance provider dependent. Generic Provider: Access Control Link with
view set to true for the domain, project and role you are working on.
POST: Creates an instance of a given type.
URL pattern
/orchestrator/v2/instancetypes/{name}/instances
Accepts
application/JSON
Content-Type
application/JSON
Normal Response Codes
201 created
Error Response Codes
401 unauthorized
Authorization
Instance provider dependent. Generic Provider: Access Control Link with
modify set to true for given domain, project and role you are working on.
DELETE: Deletes an instance of a given type.
URL pattern: /orchestrator/v2/instancetypes/{name}/instances/{id}
Accepts
*/*
Content-Type
application/JSON
Normal response Codes
204
Error Response Codes
401 unauthorized
404 not found
500 internal server error
Authorization
Instance provider dependent. Generic Provider: Access Control Link with
modify set to true for given domain, project and role of the user.
GET: Lists actions defined on a given instance of a given type.
URL pattern: /orchestrator/v2/instancetypes/{name}/instances/{id}/services
Accepts
*/*
Content-Type
application/JSON
Normal Response Codes
200
Error Response Codes
401 unauthorized
404 not found
500 internal server error
Authorization
Instance provider dependent. Generic Provider: Access Control Link with
view set to true on the instance and services for given domain, project and
role of the user.
POST: Launches a given action on a given instance of a given type.
URL pattern: /orchestrator/v2/instancetypes/{name}/instances/{id}/services/
{serviceid}/launch
Content-Type
application/JSON
Normal Response Codes
202 accepted
Type
Description
Displayed in
UI
id
String
Instance ID
displayname
String
Name of the
domain
asc/desc
description
String
Description of
the domain
asc/desc
icon
String
Not used
detailsURL
String
URI to display
details of the
domain
parm
String
JSON result
from
OpenStack
enabled
Boolean
Domain
enablement
status
asc/desc
domain
String
Domain ID
tags
String
Tags to control
Sortable
Filterable
Searchable
asc/desc
Group provider:
This provider lists OpenStack keystone groups.
Instance Type
group
Provider Class
com.ibm.orchestrator.core.instance.providers.openstack.OpenstackGroupProvider
Table 118.
Attribute
name
Type
Description
Displayed in
UI
id
String
Instance ID
displayname
String
Name of the
group
asc/desc
description
String
Description of
the group
asc/desc
icon
String
Not used
detailsURL
String
URI to display
details of the
group
parm
String
JSON result
from
OpenStack
domain
String
Domain ID
tags
String
Tags to control
action
availability
Sortable
Filterable
Searchable
asc/desc
Type
Description
id
String
Instance ID
displayname
String
Name of the
server
description
String
Description of
the server
icon
String
Not used
detailsURL
String
URI to display
details of the
server
Displayed in
UI
Sortable
Filterable
Searchable
asc/desc
asc/desc
Type
Description
Displayed in
UI
Sortable
Filterable
parm
String
JSON result
from
OpenStack
status
String
Server status
in OpenStack
asc/desc
openstackId
String
Server ID in
OpenStack
region
String
OpenStack
Region
updated
String
Time of last
update
created
String
Creation time
tags
String
Tags to control
action
availability
Searchable
asc/desc
asc/desc
Project provider:
This provider lists OpenStack keystone projects.
Instance Type
project
Provider Class
com.ibm.orchestrator.core.instance.providers.openstack.OpenstackProjectProvider
Table 120.
id (String): Instance ID
displayname (String): Name of the project (sortable: asc/desc)
description (String): Description of the project (sortable: asc/desc)
icon (String): Not used
detailsURL (String): URI to display details of the project
parm (String): JSON result from OpenStack
enabled (Boolean): Project enablement status (sortable: asc/desc)
domain (String): Domain ID
tags (String): Tags to control action availability
User provider:
This provider lists OpenStack keystone users.
Instance Type
user
Provider Class
com.ibm.orchestrator.core.instance.providers.openstack.OpenstackUserProvider
Table 121.
id (String): Instance ID
displayname (String): Name of the user
description (String): Description of the user
icon (String): Not used
detailsURL (String): URI to display details of the user
parm (String): JSON result from OpenStack
enabled (Boolean): User enablement status
defaultProjectId (String): Default project for this user
email (String): Email address of this user
domain (String): Domain ID
tags (String): Tags to control action availability
VM provider:
This provider lists OpenStack Nova virtual servers.
Instance Type
openstackvms
Provider Class
com.ibm.orchestrator.core.instance.providers.openstack.OpenstackVMProvider
Table 122.
id (String): Instance ID
displayname (String): Name of the server
description (String): Description of the server
icon (String): Not used
detailsURL (String): URI to display details of the server
parm (String): JSON result from OpenStack
openstackId (String): Server ID in OpenStack
status (String): Server status in OpenStack (sortable: asc/desc)
region (String): OpenStack Region
updated (String): Time of last update
created (String): Creation time
keyPair (String)
patternInstanceType (String)
patternName (String)
patternURI (String)
tags (String): Tags to control action availability
ipAddresses (String): IP addresses assigned to the server (displayed in UI)
Offering attributes:
id (String): Instance ID
displayname (String): Name of the offering
description (String): Description of the offering
icon (String): Offering icon
detailsURL (String): URI to display details of the offering
parm (String): JSON result from Catalog
type (String): Type of this service
category (String): Category of this offering
tags (String): Tags to control action availability
Action provider:
This provider lists instance actions. Actions are services that can be executed on
one or more selected instances.
Instance Type
Offering
Provider Class
com.ibm.orchestrator.core.instance.providers.catalog.CatalogActionProvider
Table 124.
id (String): Instance ID
displayname (String): Name of the action
description (String): Description of the action
icon (String): Action icon
detailsURL (String): URI to display details of the action
parm (String): JSON result from Catalog
type (String): Type of this service
category (String): Category of this offering
instancetype (String): Type of instance upon which this action can be executed
tagsAsString (String): Tags combined to a single string (displayed in UI)
tags (String): Tags to control action availability
Category attributes:
id (String): Instance ID
displayname (String): Name of the category
description (String): Description of the category
icon (String): Category icon
detailsURL (String): URI to display details of the category
parm (String): JSON result from Catalog
tags (String): Tags to control action availability
Generic provider:
This provider lists generic resources. The provider may be registered multiple
times under different instance types.
Instance Type
<not registered by default>
Provider Class
com.ibm.orchestrator.core.instance.providers.generic.GenericProvider
id (String): Instance ID
displayname (String): Name of the instance (sortable: asc/desc)
description (String): Description of the instance (sortable: asc/desc)
icon (String): Instance icon (sortable: asc/desc)
detailsURL (String): URI to display details of the instance
parm (String): JSON object used to store additional information
tags (String): Tags to control action availability
Tasks Response
[ Task Response 1,....., Task Response n]
GET: Get all tasks
URL method
/orchestrator/v2/tasks
Accepts
*/*
Content-Type
application/JSON
Normal Response Codes
200
Error Response Codes
500 internal server error
Request Parameters
expand: if set to serviceInstance, the service instance referred to by the
serviceInstanceId attribute is returned in the Task Response in the
serviceInstance parameter.
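For example, a sketch of a request that expands the service instance
information (the host name is illustrative):
GET https://hostname/orchestrator/v2/tasks?expand=serviceInstance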
Response
Tasks Response
Authorization
No authorization needed, but the output is restricted to tasks of users
within the current project. Users with the admin role can see all tasks.
POST: Create a new task
URL method
/orchestrator/v2/tasks
Accepts
*/*
Content-Type
application/JSON
Normal Response Codes
201 created
Error Response Codes
401 unauthorized
500 internal server error
Response
Tasks Response
Authorization
role: admin
GET: Get the task with a given id
URL method
/orchestrator/v2/tasks/{id}
Accepts
*/*
Content-Type
application/JSON
Normal Response Codes
200
Error Response Codes
404 not found
500 internal server error
Response
Task Response
Authorization
No authorization needed, but the output is restricted by role.
PUT: Update the task with a given id
URL pattern
/orchestrator/v2/tasks/{id}
Accepts
*/*
Content-Type
application/JSON
Normal Response Codes
200
Error Response Codes
401 unauthorized
404 not found
500 internal server error
Response
Task Response
Authorization
role: admin or in same project as task
DELETE: Delete the task with a given id
URL pattern
/orchestrator/v2/tasks/{id}
Accepts
*/*
Content-Type
application/JSON
Normal Response Codes
204
Error Response Codes
401 unauthorized
404 not found
500 internal server error
Response
Task Response
Authorization
role: admin
REST API    Endpoint
Domain      v3/domains
Project     v3/projects
User        v3/users
Group       v3/groups
Quota       v2.0/{tenant_id}/os-quotasets
Category    orchestrator/v2/categories
Offering    orchestrator/v2/offerings
Action      orchestrator/v2/offerings
Example:
https://xvm127.boeblingen.de.ibm.com:8443/orchestrator/v2/instancetypes/
project
{
"href": "https://xvm127.boeblingen.de.ibm.com:8443/
orchestrator/v2/instancetypes/project",
"item": {
"provider": "com.ibm.orchestrator.core.instance
.providers.openstack.OpenstackProjectProvider",
"detailsview": {
"application": "SCOMT",
"humanservice": "Show Project Details"
},
"keyfields": [
{
"instanceattribute": "displayname",
"header": "Name"
},
{
"instanceattribute": "description",
"header": "Description"
},
{
"instanceattribute": "enabled",
"header": "Enabled?"
}
],
"tags": [
"enabled",
"disabled"
],
"icon": "Web Icon:glyphicons_232_cloud",
"type": "admin",
"name": "project",
"description": "Show your OpenStack projects.",
"displayname": "Projects"
},
"instances": {
"href":
"https://xvm127.boeblingen.de.ibm.com:8443/orchestrator
/v2/instancetypes/project/instances"
},
"services": {
"href": "https://xvm127.boeblingen.de.ibm.com:8443/
orchestrator/v2/instancetypes/project/services"
}
}
2. Get the instance you want to manage and find the ID and name. You can also
use the API filter to search the name attribute.
HTTP Method:
GET
Example:
https://xvm127.boeblingen.de.ibm.com:8443/orchestrator/v2/instancetypes/
project/instances
{
"href": "https://xvm127.boeblingen.de.ibm.com:8443
/orchestrator/v2/instancetypes/project/instances",
"start": 0,
"limit": 10,
"total": 4,
"first": {
"href":
"https://xvm127.boeblingen.de.ibm.com:8443/orchestrator
/v2/instancetypes/project/instances?_limit=10&_start=0"
},
"previous": null,
"next": null,
"last": {
"href":
"https://xvm127.boeblingen.de.ibm.com:8443
/orchestrator/v2/instancetypes/project/instances?_limit=10&_start=0"
},
"items": [
{
"href":
"https://xvm127.boeblingen.de.ibm.com:8443/
orchestrator/v2/instancetypes/project/instances/
4ae7ade7e4724c69ab90246ea72965e6",
"item": {
"enabled": true,
"domain": "4ae7ade7e4724c69ab90246ea72965e6",
"tags": [
"enabled"
],
"icon": null,
"id": "4ae7ade7e4724c69ab90246ea72965e6",
"parm": {
"enabled": true,
"domain_id": "default",
"links": {
"self":
"http://192.0.2.35:5000/v3/projects
/4ae7ade7e4724c69ab90246ea72965e6"
},
"id": "4ae7ade7e4724c69ab90246ea72965e6",
"name": "admin",
"description": "admin Tenant"
},
"description": "admin Tenant",
"detailsURL":
"https://xvm127.boeblingen.de.ibm.com:8443/teamworks/
executeServiceByName?processApp=SCOMT&serviceName=
Show+Project+Details&tw.local.projectId=
4ae7ade7e4724c69ab90246ea72965e6&tw.local.domainId=
default&tw.local.authUser=admin&tw.local.authDomain=
Default&tw.local.authProject=admin",
"displayname": "admin"
}
},
...
]
}
3. Get the actions that are applicable to projects. Use the link in the services
attribute of the response in step 1. Get the services, find the Edit Project
action by name, and remember its ID.
HTTP method:
GET
Example:
https://xvm127.boeblingen.de.ibm.com:8443/orchestrator/v2/instancetypes/
project/services
...
{
"href": "https://xvm127.boeblingen.de.ibm.com:8443/
orchestrator/v2/instancetypes/project/services/69",
"item": {
"human_service": "Edit Project Action",
"priority": 0,
"human_service_app_name": "SCOrchestrator Multi-Tenancy Toolkit",
"implementation_type": null,
"created": 1401827772,
"human_service_app_short_name": "SCOMT",
"process_app_id": "2066.227c57b3-a5e5-4e5b-a283-c920cf9bed50",
"acl": [
...
],
"name": "Edit Project",
"ownerid": 0,
"instancetype": "project",
"process": "Edit Project Action",
"operation_type": "singleInstanceAction",
"human_service_app_id": "2066.227c57b3-a5e5-4e5b-a283-c920cf9bed50",
"tags": [
"enabled"
],
"process_app_name": "SCOrchestrator Multi-Tenancy Toolkit",
"icon": "act16_return",
"updated": 1401827772,
"id": 69,
"process_app_short_name": "SCOMT",
"description": "Edit the project details",
"category": 31
}
},
...
4. Launch the action with the ID from step 3, passing the ID of the selected
instance (project) from step 2 in the request body. The call returns a task that
is in the state NEW and has a new ID.
Note: For all actions of type "createInstance", the ID of the domain must be
passed in the "instances" array of the POST request.
HTTP method:
POST
Body:
{
"instances": ["default"]
}
Example:
https://xvm127.boeblingen.de.ibm.com:8443/orchestrator/v2/instancetypes/
project/services/69/launch
{
"updated_iso": "1970-01-01T01:00:00+0100",
"description_message": "HS_OFFERING_INVOCATION",
"domain": "Default",
"message": "Launched",
"created": 1402569924045,
"error": null,
"user": "admin",
"parm": {
"plan": {
"human_service": "Edit Project Action",
"priority": 0,
"human_service_app_name": "SCOrchestrator Multi-Tenancy
Toolkit",
"implementation_type": null,
"created": 1401827772,
"human_service_app_short_name": "SCOMT",
"process_app_id": "2066.227c57b3-a5e5-4e5b-a283-c920cf9bed50",
"acl": [
...
],
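A curl sketch of this launch call, in the style of the curl examples later in
this chapter (credentials and host name are illustrative):
curl -ku admin:passw0rd -X POST -H "Content-Type: application/json"
-d '{"instances": ["default"]}'
https://xvm127.boeblingen.de.ibm.com:8443/orchestrator/v2/instancetypes/project/services/69/launch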
5. Set the parameters of the task, which are the input for the action. Then set the
status of the task to QUEUED to queue the task for execution. In the body, the
description is set to test and the other attributes remain the same.
HTTP method:
PUT
Body:
{
"status":"QUEUED",
"parm":{"OperationParameter":"<variable type=\"Project\">\n
<name type=\"String\"><![CDATA[admin]]><\/name>\n
<description type=\"String\"><![CDATA[test]]>
<\/description>\n
<enabled type=\"Boolean\"><![CDATA[true]]><\/enabled>\n
<id type=\"String\"><![CDATA[
4ae7ade7e4724c69ab90246ea72965e6]]><\/id>\n
<domainId type=\"String\"><![CDATA[default]]>
<\/domainId>\n<\/variable>"}
}
Example:
https://xvm127.boeblingen.de.ibm.com:8443/kernel/tasks/1521
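A curl sketch of this update, assuming the same credentials as in the other
examples in this chapter (the XML in OperationParameter is the serialized
Project object shown above, abbreviated here):
curl -ku admin:passw0rd -X PUT -H "Content-Type: application/json"
-d '{"status":"QUEUED","parm":{"OperationParameter":"<variable type=\"Project\">...<\/variable>"}}'
https://xvm127.boeblingen.de.ibm.com:8443/kernel/tasks/1521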
6. Check whether the task succeeded or failed. The status first switches to
RUNNING. If the task succeeds, the status is COMPLETED. If the task fails, the
status is FAILED and an error_message is shown. In this example, the process
completed.
HTTP method:
GET
Example:
https://xvm127.boeblingen.de.ibm.com:8443/kernel/tasks/1521
{
"error_message": "CTJCO0002I: Business process instance 79
completed successfully.",
"status": "COMPLETED",
}
7. Verify that the action applied the changes to the entity by retrieving the
instance again and checking that the change is present.
HTTP method:
GET
Example:
https://xvm127.boeblingen.de.ibm.com:8443/orchestrator/v2/instancetypes/
project/instances/4ae7ade7e4724c69ab90246ea72965e6
{
"href": "https://xvm127.boeblingen.de.ibm.com:8443/orchestrator/v2
/instancetypes/project
/instances/4ae7ade7e4724c69ab90246ea72965e6",
"item": {
"enabled": true,
"domain": "4ae7ade7e4724c69ab90246ea72965e6",
"tags": [
"enabled"
],
"icon": null,
"id": "4ae7ade7e4724c69ab90246ea72965e6",
"parm": {
"enabled": true,
"domain_id": "default",
"links": {
"self": "http://192.0.2.35:5000/v3/projects
/4ae7ade7e4724c69ab90246ea72965e6"
},
"id": "4ae7ade7e4724c69ab90246ea72965e6",
"name": "admin",
"description": "test"
},
"description": "test",
"detailsURL":
"https://xvm127.boeblingen.de.ibm.com:8443
/teamworks/executeServiceByName?processApp=SCOMT&serviceName
=Show+Project+Details&tw.local.
projectId=4ae7ade7e4724c69ab90246ea72965e6&tw.local.
domainId=default&tw.local.authUser=admin&tw.local
.authDomain=Default&tw.local.authProject=admin",
"displayname": "admin"
}
}
POST
URL pattern
https://hostname/resources/services
Response
The response of the server contains the specified offering. It has the
following set of attributes:
{
category:
created:
description:
human_service:
human_service_app_id:
human_service_app_name:
human_service_app_short_name:
icon:
id:
implementation_type:
name:
operation_type:
ownerid:
process:
process_app_id:
process_app_name:
process_app_short_name:
updated:
}
Return values
GET
URL pattern
https://hostname/resources/services
Response
Return values
v updated - the time when the self-service offering was last updated, represented
as the number of milliseconds since midnight, January 1, 1970 UTC. This value
is numeric and is automatically generated by the product.
The following listing shows an example response that can be retrieved by way of
the request:
[
{
"human_service": "Sample_ReportProblem",
"implementation_type": "ibm_bpm_process",
"process_app_id": "2066.596706e1-2e92-4fb1-a2dd-e0e4bdc4f7fc",
"name": "Problem report",
"created": 1242965374865,
"updated": 1242965392870,
"ownerid": 2,
"process": "Sample_Report",
"operation_type": "service",
"human_service_app_id": "2066.596706e1-2e92-4fb1-a2dd-e0e4bdc4f7fc",
"icon": "Job Icon:ge100_job_24",
"id": 5,
"description": "Report a problem",
"category": 5
}
]
GET
URL pattern
https://hostname/resources/services/{id}[?acl=true]
Response
The response of the server contains the specified offering. It has the
following set of attributes:
{
acl:
category:
created:
description:
human_service:
human_service_app_id:
human_service_app_name:
human_service_app_short_name:
icon:
id:
implementation_type:
name:
operation_type:
ownerid:
process:
process_app_id:
process_app_name:
process_app_short_name:
updated:
}
Note: The acl attribute is only returned when the optional query
parameter acl is passed with the value true.
Table 129. Get entries for a specific self-service offering REST API call (continued)
Return values
"process_app_id": "2066.596706e1-2e92-4fb1-a2dd-e0e4bdc4f7fc",
"name": "Problem report",
"created": 1242965374865,
"updated": 1242965392870,
"ownerid": 2,
"process": "Sample_Report",
"operation_type": "service",
"human_service_app_id": "2066.596706e1-2e92-4fb1-a2dd-e0e4bdc4f7fc",
"process_app_name": "SCOrchestrator_Toolkit",
"icon": "Configuration Icon:ge100_config_24",
"id": 5,
"process_app_short_name": "SCOTLKT",
"description": "Report a problem",
"category": 5
}
"acl":
[
{
"domain": "default",
"view": true,
"role": "default",
"use": false,
"resourceType": "SCOService",
"project": "default",
"resourceId": 101,
"modify": true,
"id": 151
}
]
DELETE
URL pattern
https://hostname/resources/services/{id}
Response
Return values
HTTP method
PUT
URL pattern
https://hostname/resources/services/{id}
Response
"<attribute>": "<attribute_value>" }
The following listing shows sample content of a request body to update the
description of a self-service offering (IDs for the offerings can be retrieved
with REST GET /resources/services):
{
"description": "new description of the offering"
}
Executing a self-service offering without a human service, providing all
parameters
1. Run REST POST /resources/automation/<offering-id>
2. Retrieve the <request-id> from the response.
3. Run REST PUT /kernel/tasks/<request-id> with required parameters as
payload.
Executing a self-service offering with a human service to collect all the required
parameters through the UI
1. Run REST POST /resources/automation/<offering-id>
2. Get <human-service-url> from the response.
3. Use the URL in an iframe, for example, to display the user interface and collect
parameters (when you click Submit, the business process is kicked off).
Step 1 - Start the request by creating the operation context
Use the IBM Cloud Orchestrator interface to create the operationContext by
posting to the offering API. Note the mandatory API version and content-type
headers and the empty JSON object {} that is sent. An offering with ID 31
is used in this example:
curl -ku admin:passw0rd -H "X-IBM-Workload-Deployer-API-Version: 3.1"
-H "Content-Type: application/json"
--data-binary {} https://10.102.98.11/resources/automation/31
This is all that needs to be done for scenario A. For scenario B, search for the
request ID in the response of the call:
{..."id": "1004", ...}
or in case of scenario C (delegated UI), search for the human service URL:
{.... "redirect": "/teamworks/executeServiceByName?
processApp=SCOTLKT&serviceName=Sample_UserInterface&tw.local.operationContextId=1004",...}
For scenario C, use the URL in an iframe, for example, to display the user interface
and collect parameters (when you click Submit, the business process is kicked off).
Step 2 - Trigger the execution of the offering workflow (Business Process
Manager process)
Step 2 is only required for scenario B. This step passes the parameters to Business
Process Manager process and starts the process, use the request ID from the
previous step.
Sample call: this call passes the values Hello and Phone for a
Sample_BusinessObject consisting of two string attributes (field1 and field2):
curl -ku admin:passw0rd -X PUT -d {"status":"QUEUED", "parm":{"OperationParameter":
"<variable type=\"Sample_BusinessObject\">
<field1 type=\"String\"><![CDATA[Hello]]><\/field1>
<field2 type=\"String\"><![CDATA[Phone]]><\/field2> <\/variable>"}}
-H "Content-Type: application/json" https://10.102.98.11/kernel/tasks/1004
where status must be set to QUEUED so that the process is started, and parm must
contain the input values for the process. The values must be passed in the
serialized form of the related Business Process Manager business object, as
returned by the Business Process Manager call
tw.system.serializer.toXML(tw.local.inputParameterObject)
You may want to use "parts=all", which returns more information about the
process, such as detailed descriptions. For the bpdId, use the itemID from
above; for the processAppId, use the one above:
{ "status":"200", "data":{ ..., "DataModel":{ ...} } }
Now use the following REST call to retrieve details about the request:
GET .../kernel/tasks/{request-id}
For example:
curl -ku admin:passw0rd -H "X-IBM-Workload-Deployer-API-Version: 3.1"
-H "Content-Type: application/json" https://192.0.2.114/kernel/tasks/2472
or, equivalently:
curl -ku admin:passw0rd -H "X-IBM-Workload-Deployer-API-Version: 3.1"
-H "Content-Type:application/json" https://192.0.2.114/kernel/tasks/2472 -X GET
Among other information, the response contains information about the STATUS of
the request. For details on possible values, refer to Get entries for a specific task
on page 948. Sample response (excerpt) from REST GET /kernel/tasks/{id}:
{
"updated_iso" : "2014-02-19T17:54:15+0100",
"description_message" : "PROCESS_COMPLETE",
"domain" : "Default",
"created" : 1392828461580,
"error" : { ... },
"serviceInstance" : {
"virtualMachines" : [{
"memory" : 4096,
"hypervisorid" : "\/resources\/hypervisors\/PM-1",
"hostname" : "SC-192-168-0-103.RegionOne.example.com",
....
}
],
....
},
"user" : "admin",
"parm" : {
"startPlanByPlugpointEventHandler" : "done",
"CUSTOM_PARM1": "abc",
"CUSTOM_PARM2": "xyz",
"OperationParameter" : "<variable ...<\/variable>",
"serviceInstanceId" : "282",
"plan" : { ... },
"processId" : "1356"
},
"created_iso" : "2014-02-19T17:47:41+0100",
"status_localized" : "TASKSTATUS_COMPLETED",
"error_message" : "BPM_PROCESS_COMPLETE",
"status" : "COMPLETED",
"eventTopic" :
"com\/ibm\/orchestrator\/serviceinstance\/plan\/ibm_bpm_process",
"delayInSeconds" : 30,
"project" : "admin",
...
}
}
Once the request has finished, there may be the need to retrieve further
information about what has been done by this request. If this is needed, the
Business Process Manager process can use certain building blocks (Integration
Services) to store process specific information in the request (actually in the
operation context).
For details, see Integration services.
The two integration services of interest here are SetOperationContextParameters
and SetServiceInstanceId.
SetOperationContextParameters enables the Process Designer to easily store any
custom specific set of key/value pairs in the operation context object. These
parameters are then part of the response of the GET /kernel/tasks/{id} REST call.
The key/value pairs are added to the parm section in the response. This
capability can be used either to transfer data from the human service to the
execution process, or to store information in the operation context so that it
can be retrieved programmatically after the request has finished.
SetServiceInstanceId enables the Process Designer to store the ID of the service
instance in the operation context. This is useful if the process is about creating a
virtual service instance, and therefore as a result of this process, a reference to the
newly provisioned virtual system instance should be stored. The virtual system
instance ID as provided by the deploy pattern building block can be used to store
as the service instance ID in the operation context. If the operation context contains
a valid service instance ID, the REST call GET /kernel/tasks/{id} can be used to
get all information of the virtual service instance in the response. Therefore use GET
/kernel/tasks/{id}?expand=serviceInstance.
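For example, a sketch of the expanded call using the task from the sample
response (credentials and host name are illustrative):
curl -ku admin:passw0rd -H "X-IBM-Workload-Deployer-API-Version: 3.1"
-H "Content-Type: application/json"
"https://192.0.2.114/kernel/tasks/2472?expand=serviceInstance"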
In the sample response above the following lines are of interest:
v If SetServiceInstanceId is used, the following property will be in the response
in the parm section:
"serviceInstanceId" : "282"
v If ?expand=serviceInstance is used for the REST call, the following section will
be part of the response:
"serviceInstance" : {
"virtualMachines" : [{
"memory" : 4096,
"hypervisorid" : "\/resources\/hypervisors\/PM-1",
"hostname" : "SC-192-168-0-103.RegionOne.example.com",
These two custom parameters (CUSTOM_PARM1 and CUSTOM_PARM2 in the sample
response) were added to the key/value map in the business process by using
JavaScript code with the SetOperationContextParameters integration service.
2. Get the ID, the process_app_id, and the process value of the offering to
execute:
curl -ku admin:passw0rd -H "X-IBM-Workload-Deployer-API-Version: 3.1"
-H "Content-Type:application/json" https://192.0.2.98/resources/services -X GET
...
{
"human_service": "Stop Single vSys Instance",
"implementation_type": "ibm_bpm_process",
"created": 1390828290449,
"process_app_id": "2066.6ecd41b3-6c42-47e4-a69e-117d77f4104e",
"name": "Stop Virtual System Instance",
"ownerid": 40,
"process": "Stop Single vSys Pattern Instance",
"operation_type": "service",
"human_service_app_id": "2066.6ecd41b3-6c42-47e4-a69e-117d77f4104e",
"icon": "Cloud Icon:ge100_virtualfabric_24",
"updated": 1390828290449,
"id": 3,
"description": "",
"category": "2"
}
...
3. Create the operation context by passing the ID that you got in Step 2 and
getting the ID value from the response:
curl -ku admin:passw0rd -H "X-IBM-Workload-Deployer-API-Version: 3.1"
-H "Content-Type:application/json" -d {} https://192.0.2.98/resources/automation/3
-X POST
(example):
{
"updated_iso": "2014-02-06T16:15:34-0500",
"description_message": "Starting offering Stop Virtual System Instance. ",
"domain": "Default",
"message": "The action was submitted successfully. See the History section of this
instance to track the action progress.",
"created": 1391721334473,
"error": null,
"user": "admin",
"parm": {
"plan": {
"human_service": "Stop Single vSys Instance",
"priority": 5,
"human_service_app_name": "SCOrchestrator_Support_vSys_Toolkit",
"implementation_type": "ibm_bpm_process",
"human_service_app_short_name": "SCOVSYS",
"created": 1390828290449,
"process_app_id": "2066.6ecd41b3-6c42-47e4-a69e-117d77f4104e",
"name": "Stop Virtual System Instance",
"ownerid": 40,
"process": "Stop Single vSys Pattern Instance",
"operation_type": "service",
"human_service_app_id": "2066.6ecd41b3-6c42-47e4-a69e-117d77f4104e",
"process_app_name": "SCOrchestrator_Support_vSys_Toolkit",
"icon": "CTJCO1138Ige100_virtualfabric_24",
"event": null,
"updated": 1390828290449,
"id": 3,
"process_app_short_name": "SCOVSYS",
"description": "",
"category": "2",
"apply_to_all_pattern": 0
}
},
"created_iso": "2014-02-06T16:15:34-0500",
"status_localized": "New",
"error_message": null,
"status": "NEW",
"eventTopic": "com/ibm/orchestrator/serviceinstance/plan/ibm_bpm_process",
"delayInSeconds": 0,
"project": "admin",
"updated": 1391721334479,
"id": "1025",
"redirect": "/teamworks/executeServiceByName?processApp=SCOVSYS&serviceName=
Stop+Single+vSys+Instance&tw.local.operationContextId=1025",
"description": {
"resourceBundle": "com.ibm.orchestrator.messages.orchestratormessages",
"message": "OFFERING_INVOCATION",
"messageKey": "OFFERING_INVOCATION",
"args": [
"Stop Virtual System Instance"
]
}
}
"snapshotCreatedOn":"2013-10-29T14:55:24Z",
"display":"Stop Single vSys Pattern Instance",
"tip":true,
"branchID":"2063.44ef08ae-2760-4d2b-ab11-7de7b3475617",
"branchName":"Main",
"startURL":"/rest/bpm/wle/v1/process?action=start
&bpdId=25.f2899c1e-2884-4009-96d6-581f4bcef980
&processAppId=2066.6ecd41b3-6c42-47e4-a69e-117d77f4104e",
"topLevelToolkitAcronym":"SCOVSYS",
"topLevelToolkitName":"SCOrchestrator_Support_vSys_Toolkit",
"isDe fault":false,"ID":"2015.363"}
b. Get details about the process model, passing the itemID that you got in step
a and the process_app_id that you got in Step 2:
curl -v -ku admin:passw0rd
https://192.0.2.99:9443/rest/bpm/wle/v1/processModel/25.f2899c1e-2884-4009-96d6-581f4bcef980
?processAppId=2066.6ecd41b3-6c42-47e4-a69e-117d77f4104e&parts=dataModel -X GET
(example):
{"status":"200",
...
"inputs":{"operationContext":{"type":"OperationContext","isList":false},
"inputParameterObject":{"type":"VirtualSystem","isList":false}},
...
"VirtualSystem":{
...
"properties":{"currentstatus_text":{"isList":false,"type":"String"},
"envProfileId":{"isList":false,"type":"String"},
"currentstatus":{"isList":false,"type":"String"},
"name":{"isList":false,"type":"String"},
"id":{"isList":false,"type":"String"},
"currentmessage":{"isList":false,"type":"String"}}
...
}
5. Start the execution of the offering, passing in the ID of the Virtual System to
stop that you got from Step 1, and the task ID value that you got from Step 3:
curl -ku admin:passw0rd -X PUT -d {"status":"QUEUED",
"parm":{"OperationParameter":"<variable type=\"VirtualSystem\">
<id type=\"String\"><![CDATA[3]]><\/id><\/variable>"}}
-H "Content-Type: application/json" https://192.0.2.98/kernel/tasks/1025
{"updated_iso":"2014-02-06T17:34:35-0500",
"description_message":"Starting offering Stop Virtual System Instance. ",
"domain":"Default","created":1391726039661,"error":null,"user":"admin","parm":
{"OperationParameter":"<variable type=\"VirtualSystem\"><id type=\"String\"><![CDATA[3]]>
<\/id><\/variable>","plan":{"human_service":"Stop Single vSys Instance","priority":5,
"human_service_app_name":"SCOrchestrator_Support_vSys_Toolkit",
"implementation_type":"ibm_bpm_process","human_service_app_short_name":"SCOVSYS",
"created":1390828290449,"process_app_id":"2066.6ecd41b3-6c42-47e4-a69e-117d77f4104e",
"name":"Stop Virtual System Instance","ownerid":40,
"process":"Stop Single vSys Pattern Instance","operation_type":"service",
"human_service_app_id":"2066.6ecd41b3-6c42-47e4-a69e-117d77f4104e",
"icon":"CTJCO1138Ige100_virtualfabric_24",
"process_app_name":"SCOrchestrator_Support_vSys_Toolkit","event":null,"id":3,
"updated":1390828290449,"description":"","process_app_short_name":"SCOVSYS",
"category":"2","apply_to_all_pattern":0}}
Name
The name of the category, that is, the name that appears in the
service catalog
Description
Icon
Isbuiltin
"1 or 0" for "true or false" whether the category is provided by the
product
Create category:
Use this REST call to create a category.
Available HTTP method
Table 133. Create a category REST API call
HTTP method
POST
URL pattern
/resources/automationcategories
Response
The response of the server contains the specified offering. It has the
following set of attributes:
{
category:
created:
description:
human_service:
human_service_app_id:
human_service_app_name:
human_service_app_short_name:
icon:
id:
implementation_type:
name:
operation_type:
ownerid:
process:
process_app_id:
process_app_name:
process_app_short_name:
updated:
}
HTTP method
GET
URL pattern
/resources/automationcategories
Table 134. Get the list of categories REST API call (continued)
Response
The response of the server contains the specified offering. It has the
following set of attributes:
{
category:
created:
description:
human_service:
human_service_app_id:
human_service_app_name:
human_service_app_short_name:
icon:
id:
implementation_type:
name:
operation_type:
ownerid:
process:
process_app_id:
process_app_name:
process_app_short_name:
updated:
}
GET
URL pattern
/resources/automationcategories/8
Response
The response of the server contains the specified offering. It has the
following set of attributes:
{
category:
created:
description:
human_service:
human_service_app_id:
human_service_app_name:
human_service_app_short_name:
icon:
id:
implementation_type:
name:
operation_type:
ownerid:
process:
process_app_id:
process_app_name:
process_app_short_name:
updated:
}
Update a category:
Use this REST call to update a category.
Available HTTP method
Table 136. Update a category REST API call
HTTP method
PUT
URL pattern
/resources/automationcategories/8
Response
"<attribute>": "<attribute_value>" }
945
{
"description": "new description of category 8"
}
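A curl sketch of this update call (host name and credentials are illustrative):
curl -ku admin:passw0rd -X PUT -H "Content-Type: application/json"
-d '{"description": "new description of category 8"}'
https://hostname/resources/automationcategories/8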
Delete a category:
Use this REST call to delete a category.
Available HTTP method
Table 137. Delete a category REST API call
HTTP method
DELETE
URL pattern
/resources/automationcategories/8
Response
Tasks Response
[ Task Response 1,....., Task Response n]
GET: Get all tasks
URL method
/kernel/tasks
Accepts
*/*
Content-Type
application/JSON
Normal Response Codes
200
Response
Tasks Response
GET: Get the task with a given id
URL method
/kernel/tasks/{id}
Accepts
*/*
Content-Type
application/JSON
Normal Response Codes
200
Response
Task Response
List all currently running and recently completed tasks:
Use this REST API method to list all currently running tasks and the tasks that
completed within the last two weeks.
Available HTTP method
Table 138. List all currently running and recently completed tasks REST API call
HTTP method
GET
URL pattern
https://hostname/kernel/tasks/
Response
Return values
v 200 - OK
v 500 - Internal Server Error
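A curl sketch of this call (host name and credentials are illustrative):
curl -ku admin:passw0rd -H "Content-Type: application/json" https://hostname/kernel/tasks/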
GET
URL pattern
https://hostname/kernel/tasks/{id}
Response
Return values
v 200 - OK
v 401 - The currently logged in user is not authorized to retrieve
the task. Only Administrators and creators of the task can see
the task.
v 404 - The task does not exist.
POST
URI Pattern
/resources/environmentProfiles
Data Format
application/json
Success Codes
200
201
Error Codes
403
500
Returns the list of environment profiles that are visible to the client.
environment_text
Specifies the textual representation of environment. This is a string
representation of environment in the preferred language of the requester
and is automatically generated by the product.
id
kmsipaddress
Specifies the IP address of the KMS server in your environment.
kmsport
Specifies the port used for KMS service.
name
Specifies the display name associated with this environment profile. This
field contains a string value with a maximum of 1024 characters.
owner Specifies the uniform resource identifier (URI) of the user that owns this
environment profile. The URI is relative and should be resolved against the
URI of the owner.
platform
Specifies the type of hypervisors this environment profile supports on
deployments. Valid values are ESX, PowerVM, and zVM.
updated
Specifies the time the environment profile was last updated, represented as
the number of milliseconds since midnight, January 1, 1970 UTC. This
value is numeric and is automatically generated by the product.
vmname_pattern
Specifies the pattern used to generate virtual machine names.
URI Pattern
POST: /resources/logViewerMgr
GET: /resources/logViewerMgr/{id}
Data Format
application/json
POST /resources/logViewerMgr
Success Codes
This code is returned if log viewing for the specified log file was successfully
initialized. The Location header in the response contains a URL that can be
queried to view contents of the log file.
Error Codes
403
This code is returned if the requester has not been assigned the admin role.
500
This code is returned if the IBM Cloud Orchestrator encountered an internal
error while processing the request.
GET /resources/logViewerMgr/{id}
Success Codes
200
This code is returned if content from the log file is included in the output.
204
This code is returned if the specified startingPoint and lineCount do not
include any content from the log file.
Error Codes
403
This code is returned if the requester has not been assigned the admin role.
404
This code is returned if the specified log viewing has not been initialized
correctly.
500
This code is returned if the IBM Cloud Orchestrator encountered an internal
error while processing the request.
Response headers:
Location: https://myproduct.mycompany.com/resources/logViewerMgr/trace_%5E_trace.log
Response JSON:
{
"NEXT_CHUNK": 214,
"TAIL_CONTENT": "************ Start Display Current Environment ************
Workload Deployer version number is 1.0.0.1-11776
Java Version = J2RE 1.6.0 IBM J9 2.4 Linux x86-32 jvmxi3260-20090215_29883
(JIT enabled,AOT enabled)"
}
The TAIL_CONTENT entry in the response contains contents of the log file; the
NEXT_CHUNK value in the response can be used as the startingPoint in the next
request to retrieve subsequent content from the log file.
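For example, a follow-up request might pass the NEXT_CHUNK value as the next
starting point. This is a sketch only; the startingPoint and lineCount names are
taken from the response-code descriptions above and are assumed here to be query
parameters:
curl -ku admin:passw0rd
"https://myproduct.mycompany.com/resources/logViewerMgr/trace_%5E_trace.log?startingPoint=214&lineCount=100"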
Related tasks:
REST API reference on page 867
The representational state transfer (REST) application programming interface (API)
is provided by IBM Cloud Orchestrator.
https://localhost/resources/virtualApplications/d-65d297159063-436d-8593-d5218208f8aa/logs/virtualMachines/
Web_Application-was.11319468974926
Response content-type
application/json
Response code
200
OK
401
403
Access forbidden
500
Unexpected error
"/opt/IBM/maestro/agent/usr/servers/Web_Application-was.11319468974926/logs/
Web_Application-was.11319468974926.MONITORING/console.log",
"/opt/IBM/maestro/agent/usr/servers/Web_Application-was.11319468974926/logs/
Web_Application-was.11319468974926.WAS/trace.log",
"/opt/IBM/maestro/agent/usr/servers/Web_Application-was.11319468974926/logs/
Web_Application-was.11319468974926.WAS/console.log",
"/opt/IBM/maestro/agent/usr/servers/Web_Application-was.11319468974926/logs/
Web_Application-was.11319468974926.SSH/trace.log",
"/opt/IBM/maestro/agent/usr/servers/Web_Application-was.11319468974926/logs/
Web_Application-was.11319468974926.SSH/console.log",
"/opt/IBM/maestro/agent/usr/servers/Web_Application-was.11319468974926/logs/
Web_Application-was.11319468974926.AGENT/trace.log",
"/opt/IBM/maestro/agent/usr/servers/Web_Application-was.11319468974926/logs/
Web_Application-was.11319468974926.AGENT/console.log",
"/opt/IBM/maestro/agent/usr/servers/Web_Application-was.11319468974926/logs/console.log.0",
"/opt/IBM/maestro/agent/usr/servers/Web_Application-was.11319468974926/logs/trace.log.0",
"/opt/IBM/maestro/agent/usr/servers/Web_Application-was.11319468974926/logs/trace.log.2",
"/opt/IBM/maestro/agent/usr/servers/Web_Application-was.11319468974926/logs/install/trace.log",
"/opt/IBM/maestro/agent/usr/servers/Web_Application-was.11319468974926/logs/install/console.log",
"/opt/IBM/maestro/agent/usr/servers/Web_Application-was.11319468974926/logs/trace.log.1",
"/0config/0config.log"
]
}
/resources/virtualApplications/{virtual_application_instance_id}/logs/virtualMachines/
{virtual_machine_id}/{log_absolute_path}
https://localhost/resources/virtualApplications/d-65d297159063-436d-8593-d5218208f8aa/logs/virtualMachines/
Web_Application-was.11319468974926/opt/IBM/maestro/
agent/usr/servers/Web_Application-was.11319468974926/
logs/Web_Application-was.11319468974926.WAS/trace.log
Request headers
bytes={start}-{end}
Response code
200
OK
401
403
Access forbidden
500
Unexpected error
Response example:
[2011-10-24 16:03:53,756]
[2011-10-24 16:03:53,757]
simple under context root
[2011-10-24 16:05:53,998]
to RUNNING
https://localhost/resources/virtualApplications/d-65d297159063-436d-8593-d5218208f8aa/monitoring
Response content-type
application/json
Response code
200
OK
401
403
Access forbidden
500
Unexpected error
Response example:
{
"SERVERS":[{
"vm_id":"1",
"vm_uuid":"4217cc6c-0d82-575d-c983-4c76d221ded5",
"hypervisor_uuid":"52e11db5-2c40-e011-ae1b-00215e5d6968",
"private_ip":"192.0.2.208",
"server_name":"Web_Application-was.11319468974926",
"public_hostname":"ipas-vm-071-208.purescale.raleigh.ibm.com",
"state":"RUNNING",
"time_stamp":1319471278884,
"vm_name":"Web_Application-was.11319468974926",
"deployment_id":"d-65d29715-9063-436d-8593-d5218208f8aa",
"hypervisor_hostname":"192.0.2.31",
"availability":"NORMAL",
"public_ip":"192.0.2.208"
}
],
"application":{
"connectors":[],
"workload":"TRUE",
"application_name":"simple",
"application_id":"a-0f5985ee-d5f2-4512-b9ae-e4934a8e3ea0"
},
"ROLETYPES":[{
"roleType":"AGENT",
"template":"Web_Application-was",
"availability":"NORMAL"
},
{
"roleType":"SSH",
"template":"Web_Application-was",
"availability":"NORMAL"
},
{
"roleType":"MONITORING",
"template":"Web_Application-was",
"availability":"NORMAL"
},
{
"roleType":"WAS",
"template":"Web_Application-was",
"availability":"NORMAL"
}
],
"deployment":{
"time_stamp":1319472359280,
"platform":"ESX",
"env_profile_id":"1",
"cloud_group_id":"1",
"deployment_name":"simple",
"deployment_id":"d-65d29715-9063-436d-8593-d5218208f8aa",
"availability":"NORMAL",
"deployment_status":"RUNNING",
"vs_id":"1"
},
"version":2,
"ROLES":[{
"time_stamp":1319472359280,
"state":"RUNNING",
"private_ip":"192.0.2.208",
"role_type":"WAS",
"role_name":"Web_Application-was.11319468974926.WAS",
"display_metrics":true,
"server_name":"Web_Application-was.11319468974926",
"pattern_version":"2.0",
"pattern_type":"webapp",
"availability":"NORMAL"
}
],
"appliance":{
"runtime_env":"vm",
"appliance_type":"unknown",
"appliance_group":"",
"appliance_name":"",
"appliance_id":"unknown"
}
}
https://localhost/resources/virtualApplications/d-65d297159063-436d-8593-d5218208f8aa/monitoring/servers
Response content-type
application/json
Response code
200
OK
401
403
Access forbidden
500
Unexpected error
Response example:
[{
"vm_id":"1",
"vm_uuid":"4217cc6c-0d82-575d-c983-4c76d221ded5",
"hypervisor_uuid":"52e11db5-2c40-e011-ae1b-00215e5d6968",
"private_ip":"192.0.2.208",
"server_name":"Web_Application-was.11319468974926",
"public_hostname":"ipas-vm-071-208.purescale.raleigh.ibm.com",
"state":"RUNNING",
"time_stamp":1319471278884,
"vm_name":"Web_Application-was.11319468974926",
"deployment_id":"d-65d29715-9063-436d-8593-d5218208f8aa",
"hypervisor_hostname":"192.0.2.31",
"availability":"NORMAL",
"public_ip":"192.0.2.208"
}
]
https://localhost/resources/virtualApplications/d-65d297159063-436d-8593-d5218208f8aa/monitoring/role
Response content-type
application/json
Response code
200
OK
401
403
Access forbidden
500
Unexpected error
Response example:
[{
"time_stamp":1319472359280,
"state":"RUNNING",
"private_ip":"192.0.2.208",
"role_type":"WAS",
"role_name":"Web_Application-was.11319468974926.WAS",
"display_metrics":true,
"server_name":"Web_Application-was.11319468974926",
"pattern_version":"2.0",
"pattern_type":"webapp",
"availability":"NORMAL"
}
]
https://localhost/resources/virtualApplications/d-65d297159063-436d-8593-d5218208f8aa/monitoring/servers/
Web_Application-was.11319468974926/metrics/
Response code
200
OK
401
403
Access forbidden
500
Unexpected error
Response example:
{
"MEMORY":{
"memory_used_percent":9.97,
"time_stamp":1319537926447,
"memory_total":2368
},
"CPU":{
"time_stamp":1319537926447,
"busy_cpu":3.73
},
"DISK":{
"blocks_reads_per_second":642,
"time_stamp":1319537917993,
"blocks_written_per_second":10441
},
"NETWORK":{
"time_stamp":1319537917993,
"megabytes_received_per_sec":0.001,
"megabytes_transmitted_per_sec":0.002
}
}
https://localhost/resources/virtualApplications/d-65d297159063-436d-8593-d5218208f8aa/monitoring/roles/
Web_Application-was.11319468974926.WAS/metrics/
Response code
200
OK
401
403
Access forbidden
500
Unexpected error
Response example:
{
"WAS_WebApplications":{
"time_stamp":1319538432348,
"max_service_time":0,
"min_service_time":0,
"service_time":0,
"request_count":0
},
"WAS_TransactionManager":{
"time_stamp":1319538432348,
"rolledback_count":0,
"active_count":0,
"committed_count":12
},
"WAS_JDBCConnectionPools":{
"time_stamp":1319538432348,
"max_percent_used":0,
"min_percent_used":0,
"percent_used":0,
"wait_time":0,
"min_wait_time":0,
"max_wait_time":0
},
"WAS_JVMRuntime":{
"time_stamp":1319538432348,
"jvm_heap_used":51.387638,
"used_memory":77583,
"heap_size":150976
}
}
https://localhost/resources/patternTypes/?version=vr
Response content-type
application/json
Response code
200
OK
401
403
Access forbidden
500
Unexpected error
Response example:
[
{
"license": {
"type": "PVU",
"pid": "5725E00"
},
"status": "avail",
"licenses": [
"https://192.0.2.45:9444/storehouse/admin/patterntypes/dbaas/1.0/licenses/"
],
"shortname": "dbaas",
"version": "1.0",
"name": "DBaaS Pattern Type",
"description": "IBM Workload Deployer Pattern Type for DBaaS",
"url": "https://192.0.2.45:9444/storehouse/admin/patterntypes/dbaas/1.0/"
},
...
]
https://localhost/resources/patternTypes/
Request content-type
application/json
Request example
https://localhost/resources/patternTypes/
{patternTypesName}/{version_vrmf}
Response code
201
Created successfully
403
Access forbidden
500
Unexpected error
https://localhost/resources/patternTypes/dbaas/1.0
Response content-type
application/json
Response code
200
OK
401
403
Access forbidden
500
Unexpected error
Response example:
{
"license": {
"type": "PVU",
"pid": "5725E00"
},
"status": "avail",
"licenses": [
"https://192.0.2.45:9444/storehouse/admin/patterntypes/dbaas/1.0/licenses/"
],
"shortname": "dbaas",
"version": "1.0",
"name": "DBaaS Pattern Type",
"description": "IBM Workload Deployer Pattern Type for DBaaS",
"url": "https://192.0.2.45:9444/storehouse/admin/patterntypes/dbaas/1.0/"
}
https://localhost/resources/patternTypes/dbaas/1.0/
plugins
Response content-type
application/json
Response code
200
OK
401
403
Access forbidden
500
Unexpected error
Response example:
[
"firewall/1.0.0.0",
"webapp-license/1.0.0.0",
"tds/1.0.0.0",
"agent/1.0.0.0",
"logbackup/1.0.0.0",
"monitoring/1.0.0.0",
"ssh/1.0.0.0",
...
]
https://localhost/resources/patternTypes/webapp/1.0/
Response content-type
application/json
Request example
Request body:
{
"status": "accepted"
}
Valid status includes accepted, avail, and unavail.
Response body
{
"status": "accepted"
}
Response code
200
OK
401
403
Access forbidden
500
Unexpected error
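A curl sketch of the status update call shown above (the HTTP method is assumed
to be PUT; host name and credentials are illustrative):
curl -ku cbadmin:passw0rd -X PUT -H "Content-Type: application/json"
-d '{"status": "accepted"}' https://localhost/resources/patternTypes/webapp/1.0/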
https://localhost/resources/patternTypes/webapp/1.0.0.0
Response content-type
application/json
Response example
true or false
Response code
200
OK
401
403
Access forbidden
500
Unexpected error
https://localhost/resources/plugins/
Response content-type
application/json
Response code
200
OK
401
403
Access forbidden
500
Unexpected error
Response example:
[
{
"content_type": "application/json",
"last_modifier": "cbadmin",
"create_time": "2011-02-23T13:35:55Z",
"enabled": true,
"last_modified": "2011-02-23T13:38:48Z",
"access_rights": {
"cbadmin": "F",
"_group_:Everyone": "R"
},
"content_md5": "6DDE51DF49D718372BA1EBAFF3E71410",
"name": " waswmqq/1.0.0.0",
"creator": "cbadmin"
},
{
"content_type": "application/json",
"last_modifier": "cbadmin",
"create_time": "2011-02-23T13:36:08Z",
"enabled": true,
"last_modified": "2011-02-23T13:36:09Z",
"access_rights": {
"cbadmin": "F",
"_group_:Everyone": "R"
},
"content_md5": "83F1AAD5EFCEBB89B835A3CD2C89D6A5",
"name": "webservice/1.0.0.0",
"creator": "cbadmin"
},
...
]
Create a plug-in
POST /resources/plugins/
Table 156. Create a plug-in details
Example URL
https://localhost/resources/plugins/
Request content-type
application/binary
Request example
https://localhost/resources/plugins/firewall/1.0.0.0
Response code
201
Created successfully
401
403
Access forbidden
409
Conflict
500
Unexpected error
Response example:
{
"artifacts": "https://192.0.2.84:9444/storehouse/admin/plugins/firewall/1.0.0.0/",
"enabled": true,
"plugin": "https://192.0.2.84:9443/services/plugins/firewall/1.0.0.0",
"ETag": "\"177436A9E6C767F309C2D1D8158F8587-2011-05-14T14:13:25Z-1305382405351\"",
"patterntypes": null,
"name": "firewall/1.0.0.0"
}
Delete a plug-in
DELETE /resources/plugins/{plugin_name}/{version}
Table 157. Delete a plug-in details
Example URL
https://localhost/resources/plugins/firewall/1.0.0.0
Response content-type
application/json
Response code
200
OK
401
403
Access forbidden
409
Conflict
500
Unexpected error
https://localhost/resources/plugins/firewall/1.0.0.0
Response content-type
application/json
Response code
200
OK
401
403
Access forbidden
404
500
Unexpected error
Response example:
{
"CreateTime": "2011-05-14T14:13:25Z",
"Content-MD5": "177436A9E6C767F309C2D1D8158F8587",
"name": "firewall/1.0.0.0",
"AccessRights": {
"cbadmin": "F",
"_group_:Everyone": "R"
},
"Creator": "cbadmin",
"LastModifier": "cbadmin",
"Content-Type": "application/json",
"LastModified": "2011-05-14T14:13:25Z"
}
https://localhost/resources/sharedServices
Response content-type
application/json
Response code
200
OK
401
403
Access forbidden
500
Unexpected error
Response example:
[
{
"last_modifier": "cbadmin",
"content_type": "application/json",
"service_version": "1.0",
"app_storehouse_base_url": "https://192.0.2.130:9444/storehouse/user/applications/a-098b
587d-4f62-4ec6-a753-983d37db804f/",
"app_name": "ITM_SERVICE_LABEL",
"patterntype": "foundation",
"creator": "cbadmin",
"service_type": "External",
"service_supported_clients": "[0.0,1.0]",
"create_time": "2011-09-19T11:03:47Z",
"last_modified": "2011-09-19T11:03:47Z",
"access_rights": {
"cbadmin": "F",
"_group_:Everyone": "R"
},
"app_mgmtserver_url": "https://192.0.2.130:9443/services/applications/a-098b587d-4f6
2-4ec6-a753-983d37db804f",
"version": "2.0",
"content_md5": "93DABBAB77E0BBE9571E81FF1133F807",
"app_type": "service",
"app_id": "a-098b587d-4f62-4ec6-a753-983d37db804f",
"description": "ITMEXT_SERVICE_DESC",
"Collection": "."
},
...
]
https://localhost/resources/sharedServices/a-6d29ddbc-7005-469a-878f-b467ff57dd3f/virtualApplications
Response content-type
application/json
Response code
201
OK
400
401
409
500
Unexpected error
Response example:
{
"deployment_name":"PROXY",
"ssh_keys":[""],
"model":{
"model":{
"servicename":"proxy",
"nodes":[
{
"attributes":{
"numberOfELBInstances":2
},
"type":"PROXY",
"id":"sharedservice",
"groups":{}
}
],
"serviceversion":"2.0",
"version":"2.0",
"servicedisplayname":"proxy",
"app_type":"service",
"links":[],
"patterntype":"foundation",
"name":"PROXY",
"description":"ELB proxy Service",
"servicesupportedclients":"2.0"
}
},
"cloud_group":"1",
"ip_version":"IPv4"
}
GET
URI Pattern
/resources/version
Data Format
application/json
Success Codes
200 - Returns information about the Workload Deployer component version
installed in IBM Cloud Orchestrator. See the example for a sample of the data
returned.
Error Codes
None.
Related tasks:
REST API reference on page 867
The representational state transfer (REST) application programming interface (API)
is provided by IBM Cloud Orchestrator.
GET
URI Pattern
/resources/virtualApplianceInstances
Data Format
application/json
Success Codes
200 - Returns the list of virtual appliance instances that are visible to the
client.
Error Codes
403 - This code is returned if the requester does not have access to list
virtual appliance instances.
500 - This code is returned if IBM Cloud Orchestrator encountered an internal
error while processing the request.
GET
URI Pattern
/resources/virtualApplianceInstances/{id}
Data Format
application/json
Success Codes
200 - Returns the virtual appliance instance associated with the given ID.
Error Codes
403 - This code is returned if the requester does not have access to the
requested virtual appliance instance.
404 - This code is returned if the requested virtual appliance instance is not
defined.
500 - This code is returned if the IBM Cloud Orchestrator encountered an
internal error while processing the request.
PUT
URI Pattern
/resources/virtualApplianceInstances/{id}
Data Format
application/json
Success Codes
200 - The virtual appliance instance was successfully updated. The response
body contains a JSON representation of the current state of the virtual
appliance.
Error Codes
400 - This code is returned if there are problems parsing the JSON data in the
request.
403 - This code is returned if the requester does not have permission to update
the virtual appliance instance.
404 - This code is returned if the request references a resource that is not
defined.
500 - This code is returned if IBM Cloud Orchestrator encountered an internal
error while processing the request.
DELETE
URI Pattern
/resources/virtualApplianceInstances/{id}
Success Codes
204 - The virtual appliance instance has been deleted.
Error Codes
403 - This code is returned if the requester does not have permission to delete
the virtual appliance instance.
404 - This code is returned if the requested virtual appliance instance is not
defined.
500 - This code is returned if IBM Cloud Orchestrator encountered an internal
error while processing the request.
created
Specifies the creation time of the virtual appliance instance, represented as
the number of milliseconds since midnight, January 1, 1970 UTC.
currentmessage
Specifies the message associated with the current status of the virtual
appliance instance. This field contains an 8 character string value that is
generated by the product.
currentstatus
Specifies a string constant representing the current status of the virtual
appliance instance. This field contains an 8 character string value that is
generated by the product.
currentstatus_text
Specifies the textual representation of currentstatus. This is a string
representation of currentstatus in the preferred language of the requester
and is automatically generated by the product.
desiredstatus
Specifies the desired status of the virtual appliance instance. Setting this
value causes IBM Cloud Orchestrator to initiate the steps needed to get the
virtual system appliance to this state. This value is an 8 character string
value and it can be set to one of the following values: RM01006 (started) or
RM01011 (stopped).
id
name
Specifies the display name associated with this virtual appliance instance.
This field contains a string value with a maximum of 1024 characters.
type
updated
Specifies the time that the virtual appliance instance was last updated,
represented as the number of milliseconds since midnight, January 1, 1970
UTC.
"deploymentstatus": null,
"type": "APPLIANCE",
"updated": 1393239552497,
"description": null,
"stage": 1
},
...
]
Related tasks:
REST API reference on page 867
The representational state transfer (REST) application programming interface (API)
is provided by IBM Cloud Orchestrator.
application/json
Response code
200
OK
401
403
Access forbidden
500
Unexpected error
Response example:
[
{
"cloud": "/resources/clouds/1",
"virtual_system_id": "1",
"deployment_name": "test",
"start_time": "2012-04-12T13:38:31.832Z",
"creator": "test",
"create_time": "2012-04-12T13:38:20Z",
"status": "RUNNING",
"access_rights": {
"user1": "F",
"test": "F",
"d-fcca6175-830f-42fa-8c7b-ce144d4e9af5": "R"
},
"app_type": "application",
"app_id": "a-a5685f25-e85b-49c0-b35a-2a50659984c6",
"id": "d-fcca6175-830f-42fa-8c7b-ce144d4e9af5",
"health": "NORMAL",
"role_error": false
}
]
https://localhost/resources/virtualApplicationInstances/d7956c64e-0fac-49f2-b04e-efbc131a4cc4
Response content-type
application/json
200
OK
401
403
404
500
Unexpected error
In addition to the attributes that are used by the "Get a list of virtual application
instances" API, the following attributes are supported:
referenced_services
The shared services this virtual application instance uses.
Roles
Instances
A list of virtual machines for this virtual application instance. Some key
attributes such as IP and status can be found in each virtual machine
object.
updatable
This attribute contains one object that indicates whether the application is
updatable. It has the following format: {"can_update": false,
"can_commit_or_revert": false}. "can_update" indicates whether the
application is updatable. "can_commit_or_revert" indicates whether the
previous update can be committed or reverted.
can_resume
This attribute defines whether the virtual application can be resumed from
maintenance mode; possible values are true or false.
Response example:
{
"referenced_services": [
{
"deployment_id": "d-e8467f98-fb06-47ca-8188-33e6118520c3"
},
{
"deployment_id": "d-47147c5f-5d1e-47ad-8921-1d0d909ad568"
}
],
"deployment_name": "Sample Web Application Only",
"patterntype": "webapp",
"start_time": "2012-04-16T00:16:17.654Z",
"creator": "cbadmin",
"create_time": "2012-04-16T00:16:11Z",
"access_rights": {
"cbadmin": "F",
"d-b4402a23-e64e-4636-b4eb-85b9b01edc28": "R"
},
"status": "LAUNCHING",
"updatable": {
"can_update": false,
"can_commit_or_revert": false
},
"app_type": "application",
"version": "2.0",
"app_id": "a-10eb8a46-a6e7-4344-8b77-b782a3868bae",
"id": "d-b4402a23-e64e-4636-b4eb-85b9b01edc28",
"health": "CRITICAL",
"roles": [
{
"statuses": [
{
"status": "STARTING",
"health": "CRITICAL"
}
],
"name": "Web_Application-was.ElbServicePlaceholder"
},
{
"statuses": [
{
"status": "STARTING",
"health": "CRITICAL"
}
],
"external_uri": [
{
"ENDPOINT": "https://defaultHost:443/d-b4402a23-e64e-4636-b4eb-85b9b01edc28/webapp/"
},
{
"ENDPOINT": "http://defaultHost/d-b4402a23-e64e-4636-b4eb-85b9b01edc28/webapp/"
}
],
"name": "Web_Application-was.WAS"
}
],
"can_resume": false,
"creator_name": "cbadmin",
"instances": [
{
"master": true,
"private_ip": "192.0.2.101",
"role_count": 5,
"reboot.count": 0,
"activation-status": "RUNNING",
"public_hostname": "vm-073-101.mycompany.com",
"start_time": "2012-04-16T00:16:20.624Z",
"hypervisorHostname": "192.0.2.31",
"name": "Web_Application-was.11334535377654",
"health_url": "192.0.2.92",
"last_update": "2012-04-16T00:21:26.794Z",
"vmId": 24,
"status": "RUNNING",
"hypervisorUUId": "30c99382-6e40-e011-9cba-00215e5d6754",
"health_url_src_deployment": "d-47147c5f-5d1e-47ad-8921-1d0d909ad568",
"stopped.by": "",
"volumes": [],
"id": "Web_Application-was.11334535377654",
"uuid": "4221081a-993b-ce5c-3e57-845c393acb10",
"health": "CRITICAL",
"roles": [
{
"status": "STARTING",
"node": "Web_Application-was.11334535377654",
"last_update": "2012-04-16T00:23:18.355Z",
"health_url_src_deployment": "d-47147c5f-5d1e-47ad-8921-1d0d909ad568",
"type": "ElbServicePlaceholder",
"id": "Web_Application-was.11334535377654.ElbServicePlaceholder",
"health_url": "192.0.2.92",
"health": "CRITICAL"
},
{
"status": "STARTING",
"node": "Web_Application-was.11334535377654",
"last_update": "2012-04-16T00:24:06.451Z",
"health_url_src_deployment": "d-47147c5f-5d1e-47ad-8921-1d0d909ad568",
"external_uri": [
{
"ENDPOINT": "https://defaultHost:443/d-b4402a23-e64e-4636-b4eb-85b9b01edc28/
webapp/"
},
{
"ENDPOINT": "http://defaultHost/d-b4402a23-e64e-4636-b4eb-85b9b01edc28/
webapp/"
}
],
"type": "WAS",
"id": "Web_Application-was.11334535377654.WAS",
"health_url": "192.0.2.92",
"health": "CRITICAL"
}
],
"public_ip": "192.0.2.101"
}
],
"role_error": false
}
https://localhost/resources/virtualApplicationInstances/d-7956c64e-0fac-49f2-b04e-efbc131a4cc4
Response content-type
application/json
Response body
{"success":"true"}
Response code
200
OK
Note: If the deployment
specified by {depl_id} is not
found, a 200 response code
returns, response body:
{"success": "false"}
401
403
Access forbidden
409
500
Unexpected error
Success
True indicates that the virtual application instance is found and deleted.
False indicates that the virtual application instance is not found.
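As a sketch, deleting the virtual application instance shown in the example URL
above uses the HTTP DELETE method and returns the success indicator in the
response body:
DELETE https://localhost/resources/virtualApplicationInstances/d-7956c64e-0fac-49f2-b04e-efbc131a4cc4
{"success": "true"}
If the same request is repeated after the instance has been deleted, the
response code is still 200 and the response body is {"success": "false"}.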
Example URL
https://localhost/resources/virtualApplicationPatterns/
Response content-type
application/json
200
OK
401
403
Access forbidden
500
Unexpected error
Response example:
[
{
"content_type": "application/json",
"last_modifier": "tester",
"create_time": "2011-02-24T05:41:34Z",
"last_modified": "2011-02-24T05:41:34Z",
"access_rights": {
"tester": "F"
},
"content_md5": "661D31C9F14615539E537E9AA5CB02E9",
"app_type": "application",
"app_id": "a-faac12d0-23d7-4f57-b3cb-13ce92d5e07f",
"app_name": "untitled",
"creator": "tester"
},
...
]
https://localhost/resources/virtualApplicationPatterns/a-679a68f4-6798-424f-8039-1f682f949f45
Response content-type
application/json
Response code
200
OK
401
403
Access forbidden
404
500
Unexpected error
Response example:
{
"last_modifier": "cbadmin",
"content_type": "application/json",
"app_name": "Secured JEE web application",
"creator": "cbadmin",
"create_time": "2011-02-24T05:41:34Z",
"last_modified": "2011-02-24T05:41:34Z",
"access_rights": {
"AllUsers": "R"
},
"content_md5": "60136D4754C7CEE19E827665FE601C33",
"app_type": "application",
"app_id": "a-679a68f4-6798-424f-8039-1f682f949f45",
"description": "HitCount is a secured Java Platform, Enterprise Edition (Java EE)
web application demonstrating how to increment a counter with WebSphere Application
Server, Tivoli Directory Service, and DB2. Access HitCount via http://[IP]:9080/hitcount,
where [IP] is the IP of the deployed WebSphere Application Server virtual machine."
}
https://localhost/resources/virtualApplicationPatterns/
Response content-type
application/json
https://localhost/resources/virtualApplicationPatterns/a-4e21f6e9-2ca7-4a3a-a5cc-00f04f7b7f08
Response code
201
Created
401
403
Access forbidden
412
415
500
Unexpected error
Response example:
{
"content_type": "application/json",
"last_modifier": "tester",
"create_time": "2011-02-24T05:41:34Z",
"last_modified": "2011-02-24T05:41:34Z",
"access_rights": {
"tester": "F"
},
"content_md5": "EF7142254CD653D987E9A9E8A48C01D3",
"app_type": "application",
"app_id": "a-4e21f6e9-2ca7-4a3a-a5cc-00f04f7b7f08",
"app_name": "test",
"creator": "tester"
}
https://localhost/resources/virtualApplicationPatterns/
?source=a-679a68f4-6798-424f-8039-1f682f949f45
&app_name=testApp
Create an application named testApp from the application with ID
a-679a68f4-6798-424f-8039-1f682f949f45.
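As a sketch, the clone request is an HTTP POST to the URL shown above; the
response then describes the new pattern, as in the response example that
follows:
POST https://localhost/resources/virtualApplicationPatterns/?source=a-679a68f4-6798-424f-8039-1f682f949f45&app_name=testApp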
Response content-type
application/json
https://localhost/resources/virtualApplicationPatterns/a-fb70796e-1b13-467a-babe-b8b700bd563b
Response code
201
Created
401
403
Access forbidden
412
415
500
Unexpected error
Response example:
{
"content_type": "application/json",
"last_modifier": "tester",
"create_time": "2011-02-24T05:41:34Z",
"last_modified": "2011-02-24T05:41:34Z",
"access_rights": {
"AllUsers": "R",
"tester": "F"
},
"content_md5": "7AB9C3524906672BB0E1EA0209FF3803",
"app_type": "application",
"app_id": "a-fb70796e-1b13-467a-babe-b8b700bd563b",
"app_name": "testApp",
"creator": "tester"
}
This parameter tells the system to generate a placement for the deployment,
which is returned in response body. You can modify this placement before
you pass it to the system in the second phase to deploy the pattern.
v Then, deploy the pattern by calling the PUT REST API with the
deployPlacement operation, and pass the modified placement for the
deployment in the request body.
Notes:
v Because placement is handled by the system, you do not have to specify a cloud
group or IP group if an environment profile is specified. If the pattern cannot
use placement, then the cloud group and IP group parameters are required. For
example, if some of the plug-ins in the pattern do not require Foundation 2.1,
the application cannot use placement. In this scenario, the cloud group and IP
group are required. If you do not specify these parameters for a pattern that
cannot use placement, the deployment fails.
v If "placement_only":True is in the request body, but placement is not supported
for the pattern, that parameter is ignored by the system. The pattern is deployed
as if the placement_only was not specified, or was set to False.
Restriction: If your API version is not 5.0.0.0:
v The cloud group and IP group are required parameters.
v Deploying to an environment profile with the cloud management type set to "By
way of external network" is not supported.
Deploy the application or generate the placement if you are using the two-phase
method:
POST /resources/virtualApplicationPatterns/pattern_id/virtualApplicationInstances/
Table 170. Deploy a virtual application pattern
Example URL
http://server/resources/virtualApplicationPatterns/a-123/
virtualApplicationInstances/
Request content-type
application/json
Response headers
content-type
application/json
Response code
201
Created successfully
403
Access forbidden
500
Unexpected error
Request body example for a one-phase deployment that either uses the placement
that is determined by the system, or does not use placement:
{
"deployment_name": "My Virtual Application",
"environment_profile_id": "1",
"ssh_keys":["ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCevpm4/EYFrjQ9NkC535Whr3Yswv2xJkx
Gz44/2g5uC6385hWvEycSAyoUQ3pt6/n4BxMHxilLVrT3y9FhyGBfIkJsySvzsiMVe0shh7JWct03uCiiQ5
emoe2eaVOiYz2P5vBe9V8amTC1Is+Uv/SXFF7UuKlV7gP8hBuBNGwnN2/hI6dKtZKH2GDcJbPz9J9dFl2XQ
YoX7XnaJ3eea+UZfIvS21Gi7SF3Ff+/UdPuOumHGhw1S1POGbApFStjOWXU92p6Mz4wON+mRtWzYXGEdlXDA
QisX8yBlZdVZ6+g4HB2cv5TWvYchiAYqG6M1B5tZIr/ZYzEZVTjd4ZCQMwR auto generated key"]
}
{
"deployment_name": "Two nodes_testname",
"ssh_keys": ["ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCoh/DkvoUoio7rMUtoYFGuqFVF+07igSj
xylqhTt/8xM8jjTpBY6qngFBqyjI76ss522Gf8ubqNarqzL6cqLm7ZLI5Fz4FvwQmbux9xK9zicoZ/Hi1QAz
N9mcYt0w7zFKrx79HEye7VZMCeR5PzsATb9oI8F3frx2oS/kFTUNRyul8nTSND3Ae98dcxUUYaXCLkNuSdQs
ZV1rUjxJxQcL0010EiV5TYRtucvznIQ4U5MU2uKP7iLz2/AYojnFafIZi8xD/W/tEj9lCzvKYLiQzsfmsD/2
boziE44d+Af0MjX7DRjZDgF1LJpfnoSFS3PEjeN03jQCqq53LkkWkN5Wb iwd-generated-rsa-key-20140522",
"environment_profile_id": "1",
"placement_only": true
}
Response body example for a one-phase deployment that either uses the placement
that is determined by the system, or does not use placement:
{
"status": "RUNNING",
"deployment_id": "d-7956c64e-0fac-49f2-b04e-efbc131a4cc4",
"deployment_name": "db2",
"app_type": "application",
"app_id": "a-3761fe57-2bda-4f9b-b90c-d2c435d69cb7",
"start_time": "2011-03-25T17:02:57.878Z",
"virtual_system": {
"id":"1"
},
"instances": [
{
"status": "RUNNING",
"master": true,
"last_update": "2011-03-25T17:11:13.750Z",
"private_ip": "1xx.1x2.165.49",
"reboot.count": 0,
"stopped.by": "",
"volumes": [
],
"start_time": "2011-03-25T17:03:51.654Z",
"id": "rack9.xdblade32b04.22889.03473",
"name": "database-db2.11301072577884",
"roles": [
{
"node": "database-db2.11301072577884",
"status": "RUNNING",
"last_update": "2011-03-25T17:11:14.840Z",
"external_uri": "jdbc:db2://1xx.1x2.165.49:50000/mydb:user=appdba;
password=FgxmZv47TM8GwJD62Y1;",
"id": "database-db2.11301072577884.DB2"
}
],
"public_ip": "1x.1xx.165.49"
}
],
"role_error": false
}
Response body example for the first phase of a two-phase deployment. When
"placement_only": true is included in the request body, placement is returned in
the response body:
{
"placement": {
...
},
"deployment_url":
"https://1xx.0.0.1:9443/services/deployments/d-55e08a31-a3f0-419e-b262-3cdba2cf6be2",
"app_type": null,
"topology": [
{
"component_id": "OS Node",
"name": "OS_Node",
"parameters": [{
"placement": true,
"id": "WAS.PASSWORD",
"label": "ADMIN_USER_PWD_LABEL",
"description":"ADMIN_USER_PWD_DESCRIPTION",
"type":"string",
"displayType":"password"
}],
"scaling": {
"min": 1,
"max": 10,
"init": 2
}
}
],
"app_id": "a-3712cdf6-c919-45e7-b010-b032cc03e35b",
"creator_name": "cbadmin",
"deployment_name": "Two nodes_testname",
"role_error": false,
"deployment_id": "d-55e08a31-a3f0-419e-b262-3cdba2cf6be2",
"creator": "cbadmin"
}
https://localhost/resources/virtualApplications/d-55e08a31-a3f0-419e-b262-3cdba2cf6be2
Request content-type
application/json
Specify the updated placement in the request body.
Response code
200
OK
401
403
Access forbidden
404
412
A specified parameter is
not valid. For example,
the JSON file is not valid.
500
Unexpected error
Request example:
{
"operation": "deployPlacement",
"topology_parameters": {},
"addon_parameters": {},
"placement": { //required
"vm-templates": [{
"locations": [{
"name": "1721665121",
"cloud_groups": [{
"name": "esxset15",
"instances": [{
"new_instances": 1,
"nics": [{
"ip_groups": [{
"name": "172",
"new_instances": 1,
"purpose": "data"
}],
"name": "management",
"purpose": "data"
}]
}]
}],
"new_instances": 1
},
{
"name": "1721665123",
"cloud_groups": [{
"name": "esxset16",
"instances": [{
"new_instances": 1,
"nics": [{
"ip_groups": [{
"name": "172_2",
"purpose": "data",
"new_instances": 1
}],
"name": "management",
"purpose": "data"
}]
}]
}],
"messages": ["CWZKS6401E: 1721665123 is missing image IBM OS Image
for Red Hat Linux Systems:2.1.0.0."],
"new_instances": 1
}],
"environment_profile": "MyTest",
"name": "Web_Application-was",
"new_instances": 2
}],
"version": "5.0.0.0"
}
}
https://localhost/resources/virtualApplicationPatterns/a-cdaac959-672c-4df7-a648-b333a3843422
Request content-type
application/json
Response code
200
OK
401
403
Access forbidden
404
412
500
Unexpected error
Response example:
{
"content_type": "application/json",
"last_modifier": "tester",
"create_time": "2011-02-24T05:41:34Z",
"last_modified": "2011-02-24T05:41:34Z",
"access_rights": {
"tester": "F"
},
"content_md5": "5B8F7E6CF56F7CE804788C0086589AFF",
"app_type": "application",
"app_id": "a-fb70796e-1b13-467a-babe-b8b700bd563b",
"name": "App for Testing",
"locked": "false",
"creator": "tester"
}
Example URL
https://localhost/resources/virtualApplicationPatterns/a-cdaac959-672c-4df7-a648-b333a3843422
Response content-type
application/json
Response code
200
OK
Note: If an application
specified by {appID} is not
found, the 200 response code
returns, response body:
{"success": "false"})
401
403
Access forbidden
409
Conflict
500
Unexpected error
GET /resources/virtualImages (application/json)
Success codes:
200
Returns the list of virtual images that are visible to the client.
Error codes:
403
This code is returned if the requester does not have access to list virtual
images.
500
This code is returned if IBM Cloud Orchestrator encountered an internal error
while processing the request.

GET /resources/virtualImages/{id} (application/json)
Success codes:
200
Returns the virtual image associated with the given ID.
Error codes:
403
This code is returned if the requester does not have access to the requested
virtual image.
404
This code is returned if the requested virtual image is not defined.
500
This code is returned if IBM Cloud Orchestrator encountered an internal error
while processing the request.

GET /resources/clouds/{cloud_id}/images (application/json)
Success codes:
200
Error codes:
500
This code is returned if IBM Cloud Orchestrator encountered an internal error
while processing the request.

GET /resources/clouds/{cloud_id}/images/{image_id} and
/resources/clouds/{cloud_id}/images/{image_id}.ovf (application/json)
Success codes:
200
Returns the selected image of the selected cloud group. If you specify a .ovf
suffix, the response is an OVF XML file. See the example for a sample of the
data returned.
Error codes:
500
This code is returned if IBM Cloud Orchestrator encountered an internal error
while processing the request.

GET /resources/templates (application/json)
Success codes:
200
Returns the list of virtual images that are visible to the client.
Error codes:
403
This code is returned if the requester does not have access to list virtual
images.
500
This code is returned if the IBM Cloud Orchestrator encountered an internal
error while processing the request.

POST /resources/templates (application/json)
Success codes:
201
The resource has been created successfully.
Error codes:
500
This code is returned if IBM Cloud Orchestrator encountered an internal error
while processing the request.
build
Specifies the build number associated with the virtual image. This string
value is supplied by the provider of the virtual image and cannot be
changed.
created
Specifies the creation time of the virtual image, represented as the number
of milliseconds since midnight, January 1, 1970 UTC. The value is numeric
and is automatically generated by the product.
currentmessage
Specifies the message associated with the current status of the virtual
image. This field contains an 8 character string value that is generated by
the product.
currentmessage_text
Specifies the textual representation of currentmessage in the preferred
language of the requester and is automatically generated by the product.
currentstatus
Specifies a string constant representing the current status of the virtual
image. This field contains an 8 character string value that is generated by
the product.
currentstatus_text
Specifies the textual representation of currentstatus in the preferred
language of the requester and is automatically generated by the product.
description
Specifies the description of the virtual image. This string value is supplied
by the provider of the virtual image and cannot be changed.
hypervisortype
Specifies OpenStack as the type of hypervisor on which this virtual image
can run.
id
name
Specifies the name of the virtual image. This string value is supplied by
the provider of the virtual image and cannot be changed.
operatingsystemdescription
Specifies a textual description of the operating system contained within the
virtual image. The string value for this optional attribute is supplied by the
provider of the virtual image and cannot be changed.
operatingsystemid
Specifies the numeric ID of the operating system contained within the
virtual image. The ID is one of the values described in the common
information model (CIM) specification and is supplied by the provider of
the virtual image.
operatingsystemid_text
Specifies a textual representation of operatingsystemid.
operatingsystemversion
Specifies the version of the operating system contained within the virtual
image. The string value for this optional attribute is supplied by the
provider of the virtual image and cannot be changed.
owner Specifies the uniform resource identifier (URI) of the user that owns this
virtual image. The URI is relative and should be resolved against the URI
of the virtual image.
updated
Specifies the time that the virtual image was last updated, represented as
the number of milliseconds since midnight, January 1, 1970 UTC. This
value is numeric and is automatically generated by the product.
version
Specifies the version of the virtual image. This string value is supplied by
the provider of the virtual image and cannot be changed.
}
],
"relatedtype": "TEMPLATES",
"relatedid": 1,
"niccount": 1,
"created": 1339571317669,
"pcpu": 0,
"updated": 1339571317669,
"id": 1
},
"created": 1339571308504,
"licenseaccepted": "T",
"operatingsystemversion": "10",
"linked_cloudid": 1,
"currentstatus": "RM01027",
"published": "T",
"currentmessage": "",
"build": "",
"servicelevel": "0",
"editionstatus": "RM01028",
"parenttemplateeditionid": 0,
"pmtype": "ESX",
"updated": 1339575604458,
"description": "A virtual machine",
"parenttemplateid": 0
},
....
]
where url is the address from which the image can be downloaded or a path to
the locally stored OVA file.
GET /resources/virtualSystems/{id}/virtualMachines (application/json)
Success codes:
200
Error codes:
403
This code is returned if the requester does not have access to list virtual
machines on the virtual system instance.
500
This code is returned if IBM Cloud Orchestrator encountered an internal error
while processing the request.

GET /resources/virtualSystems/{id}/virtualMachines/{id} (application/json)
Success codes:
200
Returns the virtual machine associated with the given ID.
Error codes:
403
This code is returned if the requester does not have access to the requested
virtual system instance.
404
This code is returned if the requested virtual machine is not defined.
500
This code is returned if IBM Cloud Orchestrator encountered an internal error
while processing the request.
PUT /resources/virtualSystems/{id}/virtualMachines (application/json)
Success codes:
201
The virtual machine(s) have been defined in IBM Cloud Orchestrator. The
virtual machines are started when the product is able to do so. The response
body contains a list of URIs for the new virtual machines. Relative URIs are
resolved against the URI used for this request. The URI of the first virtual
machine is also included in the HTTP Location header of the response.
Error codes:
400
This code is returned if there are problems parsing the JSON data in the
request.
403
This code is returned if the requester does not have access to list virtual
machines on the virtual system instance.
500
This code is returned if IBM Cloud Orchestrator encountered an internal error
while processing the request.

PUT /resources/adoptUnmanagedVM (application/json)
Success codes:
200
The virtual system instance was successfully updated. The response body is
empty.
Error codes:
400
This code is returned if there are problems parsing the JSON data in the
request.
403
This code is returned if the requester does not have permission to import the
unmanaged virtual machine.
404
This code is returned if the request references a resource that is not
defined.
500
This code is returned if IBM Cloud Orchestrator encountered an internal error
while processing the request.
cloud
Specifies the uniform resource identifier (URI) of the cloud group in which
this virtual machine was started. The URI is relative and should be
resolved against the URI of the virtual machine.
cpucount
Specifies the number of virtual CPUs assigned to this virtual machine. This
value is an integer.
created
Specifies the creation time of the virtual machine, represented as the
number of milliseconds since midnight, January 1, 1970 UTC. This value is
numeric and is automatically generated by the product.
currentmessage
Specifies the message associated with the current status of the virtual
machine. This field contains an 8 character string value that is generated
by the product.
currentmessage_text
Specifies the textual representation of currentmessage. This is a string
representation of currentmessage in the preferred language of the requester
and is automatically generated by the product.
currentstatus
Specifies a string constant representing the current status of the virtual
machine. This field contains an 8 character string value that is generated
by the product.
currentstatus_text
Specifies the textual representation of currentstatus. This is a string
representation of currentstatus in the preferred language of the requester
and is automatically generated by the product.
displayname
Specifies the display name used on the hypervisor for this virtual machine.
This field contains a string value with a maximum of 1024 characters.
hypervisor
Specifies the URI of the hypervisor on which this virtual machine is
running. The URI is relative and should be resolved against the URI of the
virtual machine.
hypervisormachineid
Specifies the ID assigned to the virtual machine by the hypervisor. This
field contains a string value with a maximum of 1024 characters.
id
nics
Specifies a list of the URIs of the IP addresses used by this virtual machine.
URIs are relative and resolved against the URIs of the virtual machine.
memory
Specifies the amount of memory allocated to this virtual machine,
represented in megabytes. This value is an integer.
name
Specifies the display name associated with this virtual machine. This field
contains a string value with a maximum of 1024 characters.
runtimeid
Specifies the runtime ID generated by the hypervisor on which this virtual
machine is running. This field contains a string value with a maximum of
1024 characters.
storageid
Specifies the hypervisor storage ID of the storage on which this virtual
machine resides. This field contains a string value with a maximum of 1024
characters.
updated
Specifies the time the virtual system instance was last updated, represented
as the number of milliseconds since midnight, January 1, 1970 UTC. This
value is numeric and is automatically generated by the product.
Response JSON:
[
"/resources/virtualSystems/5/virtualMachines/16",
"/resources/virtualSystems/5/virtualMachines/17"
]
Related tasks:
REST API reference on page 867
The representational state transfer (REST) application programming interface (API)
is provided by IBM Cloud Orchestrator.
Example URL
http://server/resources/virtualSystemPatterns/a-123/
virtualSystemInstances/
Request content-type
application/json
201
Created successfully
403
Access forbidden
500
Unexpected error
Request body:
{
"deployment_name": "simple",
"cloud_group": "1",
"ip_group": "1",
"environment_profile_id": "1",
"model": {
"nodes": [
{
"attributes": {
"ResumableOnError": true,
"SensitiveEnable": false
},
"id": "Debug component"
}
]
},
"selected_oslist": {
"Linux": "*"
},
"ssh_keys": ["ssh-rsa ..."]
}
application/json
Response code
200
OK
401
403
Access forbidden
500
Unexpected error
Response example:
[
{
"cloud":"/resources/clouds/1",
"virtual_system_id":"1",
"deployment_name": "test",
"start_time": "2012-04-12T13:38:31.832Z",
"creator": "test",
"create_time": "2012-04-12T13:38:20Z",
"status": "RUNNING",
"access_rights": {
"user1": "F",
"test": "F",
"d-fcca6175-830f-42fa-8c7b-ce144d4e9af5": "R"
},
"app_type": "application",
"app_id": "a-a5685f25-e85b-49c0-b35a-2a50659984c6",
"id": "d-fcca6175-830f-42fa-8c7b-ce144d4e9af5",
"health": "NORMAL",
"role_error": false
}
]
https://localhost/resources/virtualSystemInstances/d-7956c64e-0fac-49f2-b04e-efbc131a4cc4
Response content-type
application/json
Response code
200
OK
401
403
404
500
Unexpected error
Response example:
{
"referenced_services": [
{
"deployment_id": "d-e8467f98-fb06-47ca-8188-33e6118520c3"
},
{
"deployment_id": "d-47147c5f-5d1e-47ad-8921-1d0d909ad568"
}
],
"deployment_name": "Sample Web Application Only",
"patterntype": "webapp",
"start_time": "2012-04-16T00:16:17.654Z",
"creator": "cbadmin",
"create_time": "2012-04-16T00:16:11Z",
"access_rights": {
"cbadmin": "F",
"d-b4402a23-e64e-4636-b4eb-85b9b01edc28": "R"
},
"status": "LAUNCHING",
"updatable": {
"can_update": false,
"can_commit_or_revert": false
},
"app_type": "application",
"version": "2.0",
"app_id": "a-10eb8a46-a6e7-4344-8b77-b782a3868bae",
"id": "d-b4402a23-e64e-4636-b4eb-85b9b01edc28",
"health": "CRITICAL",
"roles": [
{
"statuses": [
{
"status": "STARTING",
"health": "CRITICAL"
}
],
"name": "Web_Application-was.ElbServicePlaceholder"
},
{
"statuses": [
{
"status": "STARTING",
"health": "CRITICAL"
}
],
"external_uri": [
{
"ENDPOINT": "https://defaultHost:443/d-b4402a23-e64e-4636-b4eb-85b9b01edc28/
webapp/"
},
{
"ENDPOINT": "http://defaultHost/d-b4402a23-e64e-4636-b4eb-85b9b01edc28/webapp/"
}
],
"name": "Web_Application-was.WAS"
}
],
"can_resume": false,
"creator_name": "cbadmin",
"instances": [
{
"master": true,
"private_ip": "192.0.2.101",
"role_count": 5,
"reboot.count": 0,
"activation-status": "RUNNING",
"public_hostname": "vm-073-101.mycompany.com",
"start_time": "2012-04-16T00:16:20.624Z",
"hypervisorHostname": "192.0.2.31",
"name": "Web_Application-was.11334535377654",
"health_url": "192.0.2.92",
"last_update": "2012-04-16T00:21:26.794Z",
"vmId": 24,
"status": "RUNNING",
"hypervisorUUId": "30c99382-6e40-e011-9cba-00215e5d6754",
"health_url_src_deployment": "d-47147c5f-5d1e-47ad-8921-1d0d909ad568",
"stopped.by": "",
"volumes": [],
"id": "Web_Application-was.11334535377654",
"uuid": "4221081a-993b-ce5c-3e57-845c393acb10",
"health": "CRITICAL",
"roles": [
{
"status": "STARTING",
"node": "Web_Application-was.11334535377654",
"last_update": "2012-04-16T00:23:18.355Z",
"health_url_src_deployment": "d-47147c5f-5d1e-47ad-8921-1d0d909ad568",
"type": "ElbServicePlaceholder",
"id": "Web_Application-was.11334535377654.ElbServicePlaceholder",
"health_url": "192.0.2.92",
"health": "CRITICAL"
},
{
"status": "STARTING",
"node": "Web_Application-was.11334535377654",
"last_update": "2012-04-16T00:24:06.451Z",
"health_url_src_deployment": "d-47147c5f-5d1e-47ad-8921-1d0d909ad568",
"external_uri": [
{
"ENDPOINT": "https://defaultHost:443/d-b4402a23-e64e-4636-b4eb-85b9b01edc28/
webapp/"
},
{
"ENDPOINT": "http://defaultHost/d-b4402a23-e64e-4636-b4eb-85b9b01edc28/
webapp/"
}
],
"type": "WAS",
"id": "Web_Application-was.11334535377654.WAS",
"health_url": "192.0.2.92",
"health": "CRITICAL"
}
],
"public_ip": "192.0.2.101"
}
],
"role_error": false }
In addition to the attributes that are used by the "Get a list of virtual system
instances" API, the following attributes are supported:
referenced_services
The shared services this virtual system instance uses.
Roles
Instances
A list of virtual machines for this virtual system instance. Some key
attributes such as IP and status can be found in each virtual machine
object.
updatable
This attribute contains one object that indicates whether the system is
updatable. It has the following format: {"can_update": false,
"can_commit_or_revert": false}. "can_update" indicates whether the system
can be updated. "can_commit_or_revert" indicates whether the previous update
can be committed or reverted.
can_resume
This attribute defines whether the virtual system can be resumed from a
maintenance mode, possible values are true or false.
GET /resources/virtualSystems (application/json)
Success codes:
200
Returns the list of virtual system instances that are visible to the client.
Error codes:
403
This code is returned if the requester does not have access to list virtual
system instances.
500
This code is returned if IBM Cloud Orchestrator encountered an internal error
while processing the request.
POST /resources/virtualSystems (application/json)
Success codes:
201
The virtual system instance has been created and is included in the response
body. The URI of the new virtual system instance is included in the Location
header of the response.
Error codes:
400
This code is returned if there are problems parsing the JSON data in the
request.
403
This code is returned if the requester does not have permission to create
virtual system instances.
500
This code is returned if the IBM Cloud Orchestrator encountered an internal
error while processing the request.

GET /resources/virtualSystems/{id} (application/json)
Success codes:
200
Returns the virtual system instance associated with the given ID.
Error codes:
403
This code is returned if the requester does not have access to the requested
virtual system instance.
404
This code is returned if the requested virtual system instance is not
defined.
500
This code is returned if the IBM Cloud Orchestrator encountered an internal
error while processing the request.
PUT /resources/virtualSystems/{id} (application/json)
Success codes:
200
The virtual system instance was successfully updated. The response body
contains a JSON representation of the current state of the virtual system.
Error codes:
400
This code is returned if there are problems parsing the JSON data in the
request.
403
This code is returned if the requester does not have permission to update the
virtual system instance.
404
This code is returned if the request references a resource that is not
defined.
500
This code is returned if the IBM Cloud Orchestrator encountered an internal
error while processing the request.

DELETE /resources/virtualSystems/{id}
Success codes:
204
The virtual system instance has been deleted.
Error codes:
403
This code is returned if the requester does not have permission to delete the
virtual system instance.
404
This code is returned if the requested virtual system instance is not
defined.
500
This code is returned if the IBM Cloud Orchestrator encountered an internal
error while processing the request.
endtime
Specifies the time the virtual system instance is to be stopped, represented
as the number of milliseconds since midnight, January 1, 1970 UTC. This
attribute is optional. If not specified, the virtual system instance will run
until it is manually stopped.
environmentProfile
This attribute specifies the URI of the environment in which to deploy the
new virtual system instance. The environmentProfile attribute must be
specified at this level.
name
pattern
Specifies the URI of the pattern to be used for the new virtual system.
virtualimage
Specifies the URI of the virtual image to be used for single image
deployment. 'X-IBM-Workload-Deployer-API-Version' in the header must
be 4.0 or higher to enable this attribute. Either pattern or virtualimage
can be used as attributes.
parts
Specifies a list containing one map per part contained in the pattern.
Because you are using the environmentProfile attribute, you also use
one of the cloud and ipGroup pairs provided in the environment profile.
The environment profile provides the valid cloud and IP group attributes
for that profile:
cloud
Specifies the URI of the cloud in which to deploy the new virtual
system.
ipGroup
Specifies the URI of the IP Group in which to deploy the new
virtual system.
ipAddress
Specifies the IP address of the virtual machine in the virtual system
instance. This attribute is required when the value of the "IP addresses
provided by" setting in the environment profile is pattern deployer.
Important: If the environment profile indicates that the pattern
deployer provides the IP address, then you cannot specify an IP
address that is contained within the IP groups that are already
defined in IBM Cloud Orchestrator.
hostname
Specifies the hostname of the virtual machine in the virtual system
instance. This attribute is optional when the value of the "IP addresses
provided by" setting in the environment profile is pattern deployer.
The following example shows how the ipGroup, ipAddress, and hostname
attributes might be specified:
"nics": [[{
"ipGroup": "/resources/ipGroups/1",
"hostname": "myhostname.mycompany.com",
"ipAddress": "1.1.1.2"
},
{
"ipGroup": "/resources/ipGroups/1",
"hostname": "myhostname1.mycompany.com",
"ipAddress": "1.1.1.3"
}]]
label
description
Specifies a textual description of the part.
flavor Specifies the predefined size of the part in terms of CPU and
memory.
properties
Specifies a list containing one map per property defined for the
part.
scripts Specifies a list containing one map per script defined for the part.
The map for each part property contains the following attributes:
description
Specifies a textual description of the property.
key
label
validValues
For properties that are only allowed to have certain values, the
validValues attribute contains a list of the allowable values.
value
Specifies the default value for the property. The type of this value
depends on the property's type.
label
parameters
Specifies a list containing one map per parameter defined for the
script.
The map for each parameter contains the following attributes:
key
value
The default value for the parameter. All parameters have string
values with a maximum length of 4098 characters.
starttime
Specifies the time the virtual system instance is to be started, represented
as the number of milliseconds since midnight, January 1, 1970 UTC. This
attribute is optional. If not specified, the virtual system instance starts as
soon as possible.
"key": "numvcpus",
"label": "Virtual CPUs",
"pclass": "HWAttributes",
"type": "integer",
"validValues": ["1","2","4"],
"value": "1"
},
{
"description": "Memory size required in megabytes",
"key": "memsize",
"label": "Memory size (MB)",
"pclass": "HWAttributes",
"type": "integer",
"value": "3072"
},
{
"description": "This is the cell name of the profile",
"key": "cell_name",
"label": "Cell name",
"pclass": "ConfigWAS",
"type": "string",
"value": "DeployerCell"
},
{
"description": "This is the node name of the profile",
"key": "node_name",
"label": "Node name",
"pclass": "ConfigWAS",
"type": "string",
"value": "DeployerNode"
},
{
"description": "List of feature packs",
"key": "augment_list",
"label": "Feature packs",
"pclass": "ConfigWAS",
"type": "string",
"validValues": ["sca","none"],
"value": "none"
},
{
"description": "This is the root password for the system",
"key": "password",
"label": "Password (root)",
"pclass": "ConfigPWD_ROOT",
"type": "string",
"value": "root-password"
},
{
"description": "This is the password for the system and
WebSphere account (virtuser)",
"key": "password",
"label": "Password (virtuser)",
"pclass": "ConfigPWD_USER",
"type": "string",
"value": "virtuser-password"
}],
"scripts": [{
"description": "Test script",
"id": 1,
"label": "test script",
"parameters": [{
"key": "key1",
"value": "value1"
},
{
"key": "key2",
}],
"pattern": "/resources/patterns/1",
"starttime": 1250000000000
}
Response JSON:
{
"created": 1245361773378,
"currentmessage": null,
"currentmessage_text": null,
"currentstatus": "RM01036",
"currentstatus_text": "Queued",
"desiredstatus": "",
"desiredstatus_text": null,
"id": 13,
"name": "sample virtual system instance",
"owner": "/resources/users/1",
"pattern": "/resources/patterns/1",
"updated": 1245361773378
}
Note: Key-value pairs that are only used by user interface clients are optional.
A virtual system instance has the following attributes:
created
Specifies the creation time of the virtual system instance, represented as the
number of milliseconds since midnight, January 1, 1970 UTC. This value is
numeric and is automatically generated by the product.
currentmessage
Specifies the message associated with the current status of the virtual
system instance. This is an 8 character string value that is generated by the
product.
currentmessage_text
Specifies the textual representation of currentmessage. This is a string
representation of currentmessage in the preferred language of the requester
and is automatically generated by the product.
currentstatus
Specifies a string constant representing the current status of the virtual
system instance. This is an 8 character string value that is automatically
generated by the product.
currentstatus_text
Specifies the textual representation of currentstatus. This is a string
representation of currentstatus in the preferred language of the requester
and is automatically generated by the product.
desiredstatus
Specifies the desired status of the virtual system instance. Setting this value
causes IBM Cloud Orchestrator to initiate whatever steps are needed to get
the virtual system instance to this state. This value is an 8 character string
value that can only be set to one of the following values: 'RM01006' (started),
'RM01011' (stopped), or 'RM01020' (snapshot).
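For example, to ask IBM Cloud Orchestrator to stop the virtual system
instance, a minimal update request body (a sketch) sets only this attribute:
{
"desiredstatus": "RM01011"
}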
desiredstatus_text
Specifies the textual representation of desiredstatus. This is a string
representation of desiredstatus in the preferred language of the requester
and is automatically generated by the product.
id
Specifies the ID of the virtual system instance. This value is numeric and is
automatically generated by the product.
name
Specifies the display name associated with this virtual system instance.
This field contains a string value with a maximum of 1024 characters.
owner Specifies the URI of the user that owns this virtual system instance. The
URI is relative and should be resolved against the URI of the owner.
pattern
Specifies the URI of the pattern used to create this virtual system instance.
The URI is relative and should be resolved against the URI of the pattern.
updated
Specifies the time the virtual system instance was last updated, represented
as the number of milliseconds since midnight, January 1, 1970 UTC. This
value is numeric and is automatically generated by the product.
Response JSON:
{
"created": 1245356439153,
"currentmessage": "RM07028",
"currentmessage_text": "The virtual system instance has
been deployed and is ready to use",
"currentstatus": "RM01010",
"currentstatus_text": "Started",
"desiredstatus": "RM01011",
"desiredstatus_text": null,
"id": 9,
"name": "test virtual system instance",
"owner": "/resources/users/1",
"pattern": "/resources/patterns/6",
"updated": 1245357249316
}
Related tasks:
REST API reference on page 867
The representational state transfer (REST) application programming interface (API)
is provided by IBM Cloud Orchestrator.
https://localhost/resources/virtualSystemPatterns
Response content-type
application/json
Response code
200
OK
403
Access forbidden
500
Unexpected error
Response example:
[
{
"app_mgmtserver_url": "https://1xx.0.0.1:9443/services/applications/
a-1326e132-5d9e-4830-86d3-1ccb72b29c46",
"last_modifier": "admin",
"app_type": "application",
"app_storehouse_base_url": "https://1xx.0.0.1:9444/storehouse/user/applications/
a-1326e132-5d9e-4830-86d3-1ccb72b29c46/",
"patterntype": "vsys",
"app_name": "BaseImageWithScalingPolicy",
"creator": "admin",
"version": "1.0",
"patternversion": "1.0",
"last_modified": "2014-02-20T20:15:09Z",
"content_sha2": "c63f76b66b35a718a79b795aaf4fb7b11f600f425cf56c11ef8bd4039d5084837fb0
c8823369b973936e0735be064d8dd581893678328667c4091ac876127715",
"description": "",
"create_time": "2014-02-20T20:15:09Z",
"content_md5": "ce633115cf9ca5981c70ba740b8fd992",
"app_id": "a-1326e132-5d9e-4830-86d3-1ccb72b29c46",
"access_rights": {
"admin": "F",
"_group_:Everyone": "R"
},
"content_type": "application/json"
},
{
"app_mgmtserver_url": "https://1xx.0.0.1:9443/services/applications/
a-14415000-1eb6-4c41-af91-af06d0928296",
"last_modifier": "admin",
"app_type": "application",
"app_storehouse_base_url": "https://1xx.0.0.1:9444/storehouse/user/applications/
a-14415000-1eb6-4c41-af91-af06d0928296/",
"patterntype": "vsys",
"app_name": "was",
"creator": "admin",
"version": "1.0",
"patternversion": "1.0",
"last_modified": "2014-02-20T20:15:22Z",
"content_sha2": "8f5d03d74ea23afda652a367ec85db080690274fb058cf10c1b4ca8d117ccf5c742b4
cd9f8be756d7d3ef05d73af691169dbc1653f84e74ac51d4bf15ffd17c8",
"description": "",
"create_time": "2014-02-20T20:15:22Z",
"content_md5": "85217ef4ce99c65ad0b76697ba5af240",
"app_id": "a-14415000-1eb6-4c41-af91-af06d0928296",
"access_rights": {
"admin": "F",
"_group_:Everyone": "R"
},
"content_type": "application/json"
},
{
"app_mgmtserver_url": "https://1xx.0.0.1:9443/services/applications/
a-7cc55c0b-5656-4e79-93ad-d6935dea71b3",
"last_modifier": "admin",
"app_type": "application",
"app_storehouse_base_url": "https://1xx.0.0.1:9444/storehouse/user/applications/
a-7cc55c0b-5656-4e79-93ad-d6935dea71b3/",
"patterntype": "vsys",
"app_name": "Database",
"creator": "admin",
"version": "1.0",
"patternversion": "1.0",
"last_modified": "2014-02-20T20:15:13Z",
"content_sha2": "398c13c39e49ca8eede91141f4cb1c927a5924652dfe052987c11ca1148f3069e6f
ce89830c53fc989e72f6ff4ffc2f0db527c7bfda3a33ba343e91b24ff9c04",
"description": "",
"create_time": "2014-02-20T20:15:13Z",
"content_md5": "2078bd11bbbdbabc57f855e658e8b31e",
"app_id": "a-7cc55c0b-5656-4e79-93ad-d6935dea71b3",
"access_rights": {
"admin": "F",
"_group_:Everyone": "R"
},
"content_type": "application/json"
},
{
"app_mgmtserver_url": "https://1xx.0.0.1:9443/services/applications/
a-9af3ab60-c60c-4e9e-9f3a-1c31543becde",
"last_modifier": "admin",
"app_storehouse_base_url": "https://1xx.0.0.1:9444/storehouse/user/applications/
a-9af3ab60-c60c-4e9e-9f3a-1c31543becde/",
"app_type": "application",
"patterntype": "vsys",
"app_name": "mytestpattern",
"creator": "admin",
"version": "1.0",
"patternversion": "1.0",
"last_modified": "2014-02-19T20:54:28Z",
"content_sha2": "d491baa605f5138e6be87f8efdaa849e5bc3bc0de5522e7977bb6e3be18e964f6fc1
6ef12f613fef419e22fdd081e84fd335ee9",
"description": "",
"create_time": "2014-02-19T20:51:35Z",
"content_md5": "423b79a64893a46817e14cd223fa80aa",
"app_id": "a-9af3ab60-c60c-4e9e-9f3a-1c31543becde",
"access_rights": {
"admin": "F"
},
"locked": "false",
"readonly": false,
"content_type": "application/json"
},
]
https://localhost/resources/virtualSystemPatterns/a-1326e132-5d9e-4830-86d3-1ccb72b29c46
Response content-type
application/json
Response code
200
OK
403
Access forbidden
500
Unexpected error
Response example:
{
"app_mgmtserver_url": "https://1xx.0.0.1:9443/services/applications/
a-9af3ab60-c60c-4e9e-9f3a-1c31543becde",
"model": {
"nodes": [
{
"id": "OS Node",
"attributes": {
"ConfigPWD_USER.password": "<xor>LzsLChvLTs=",
"HWAttributes.memsize": 2048,
"HWAttributes.numvcpus": 1,
"ConfigPWD_ROOT.password": "<xor>LzsLChvLTs="
},
"type": "image:OS Node:IBM OS Image for Red Hat Linux Systems:2.0.0.4:148"
}
],
"description": "",
"name": "mytestpattern",
"app_type": "application",
"patterntype": "vsys",
"links": [
],
"locked": false,
"readonly": false,
"patternversion": "1.0",
"version": "1.0"
},
"last_modifier": "admin",
"app_type": "application",
"app_storehouse_base_url": "https://1xx.0.0.1:9444/storehouse/user/applications/
a-9af3ab60-c60c-4e9e-9f3a-1c31543becde/",
"layers": [
{
"id": "layer",
"nodes": [
"OS Node"
]
}
],
"patterntype": "vsys",
"tooling": {
"nodes": [
{
"id": "OS Node",
"location": {
"y": "141px",
"x": "349px"
},
"ismini": false
}
],
"links": [
]
},
"app_name": "mytestpattern",
"version": "1.0",
"patternversion": "1.0",
"creator": "admin",
"content_sha2": "d491baa605f5138e6be87f8efdaa849e5bc3bc0de5522e7977bb6e3be18e964f6
fc16ef12f613fef419e22fdd081e84fd335ee954a4d",
"last_modified": "2014-02-19T20:54:28Z",
"description": "",
"create_time": "2014-02-19T20:51:35Z",
"content_md5": "423b79a64893a46817e14cd223fa80aa",
"access_rights": {
"admin": "F"
},
"app_id": "a-9af3ab60-c60c-4e9e-9f3a-1c31543becde",
"locked": "false",
"content_type": "application/json",
"readonly": false
}
https://localhost/resources/virtualSystemPatterns/
?source=a-679a68f4-6798-424f-8039-1f682f949f45
&app_name=testSys
Create a virtual system named testSys from application
with ID a-679a68f4-6798-424f-8039-1f682f949f45
Response content-type
application/json
https://localhost/resources/virtualSystemPatterns/a-fb70796e-1b13-467a-babe-b8b700bd563b
Request content-type
application/json
Table 182. Create a virtual system pattern from an existing virtual system pattern
(clone) (continued)
Response headers
content-type
application/json
Response body
Response code
201
Created
403
Access forbidden
404
Not found
500
Unexpected error
http://server/resources/virtualSystemPatterns/a-123/
virtualSystemInstances/
Request content-type
application/json
Response headers
content-type
application/json
Response code
201
Created successfully
403
Access forbidden
500
Unexpected error
Request body example for a one-phase deployment that either uses the placement
that is determined by the system, or does not use placement:
{
"deployment_name": "My Virtual System",
"environment_profile_id": "1",
"ssh_keys":["ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCevpm4/EYFrjQ9NkC535Whr3Yswv2xJk
xGz44/2g5uC6385hWvEycSAyoUQ3pt6/n4BxMHxilLVrT3y9FhyGBfIkJsySvzsiMVe0sh
h7JWct03uCiiQ5emoe2eaVOiYz2P5vBe9V8amTC1Is+Uv/SXFF7UuKlV7gP8hBuBNGwnN2
/hI6dKtZKH2GDcJbPz9J9dFl2XQYoX7XnaJ3eea+UZfIvS21Gi7SF3Ff+/UdPuOumHGhw1
S1POGbApFStjOWXU92p6Mz4wON+mRtWzYXGEdlXDAQisX8yBlZdVZ6+g4HB2cv5TWvYchi
AYqG6M1B5tZIr/ZYzEZVTjd4ZCQMwR auto generated key"]
}
Response body example for a one-phase deployment that either uses the placement
that is determined by the system, or does not use placement:
{
"status": "RUNNING",
"deployment_id": "d-7956c64e-0fac-49f2-b04e-efbc131a4cc4",
"deployment_name": "db2",
"app_type": "application",
"app_id": "a-3761fe57-2bda-4f9b-b90c-d2c435d69cb7",
"start_time": "2011-03-25T17:02:57.878Z",
"virtual_system": {
"id":"1"
},
"instances": [
{
"status": "RUNNING",
"master": true,
"last_update": "2011-03-25T17:11:13.750Z",
"private_ip": "1xx.102.165.49",
"reboot.count": 0,
"stopped.by": "",
"volumes": [
],
"start_time": "2011-03-25T17:03:51.654Z",
"id": "rack9.xdblade32b04.22889.03473",
"name": "database-db2.11301072577884",
"roles": [
{
"node": "database-db2.11301072577884",
"status": "RUNNING",
"last_update": "2011-03-25T17:11:14.840Z",
"external_uri": "jdbc:db2://1xx.102.165.49:50000/mydb:user=appdba;
password=FgxmZv47TM8GwJD62Y1;",
"id": "database-db2.11301072577884.DB2"
}
],
"public_ip": "1xx.1xx.165.49"
}
],
"role_error": false
}
Response body example for the first phase of a two-phase deployment. When
"placement_only": true is included in the request body, placement is returned in
the response body:
{
"placement": {
...
},
"deployment_url": "https://1xx.0.0.1:9443/services/deployments/
d-55e08a31-a3f0-419e-b262-3cdba2cf6be2",
"app_type": null,
"topology": [
{
"component_id": "OS Node",
"name": "OS_Node",
"parameters": [{
"placement": true,
"id": "WAS.PASSWORD",
"label": "ADMIN_USER_PWD_LABEL",
"description":"ADMIN_USER_PWD_DESCRIPTION",
"type":"string",
"displayType":"password"
}],
"scaling": {
"min": 1,
"max": 10,
"init": 2
}
}
],
"app_id": "a-3712cdf6-c919-45e7-b010-b032cc03e35b",
"creator_name": "cbadmin",
"deployment_name": "Two nodes_testname",
"role_error": false,
"deployment_id": "d-55e08a31-a3f0-419e-b262-3cdba2cf6be2",
"creator": "cbadmin"
}
topology_parameters
Optional. Define the topology parameters that are needed for deployment as a
key-value map. The key format is topologyname.parameterid. For example,
WAS.PASSWORD.
addon_parameters
Optional. Define the add-on parameters that are needed for deployment as a
key-value map.
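For example, a sketch of these two maps (the values are illustrative; the key
follows the format described above):
"topology_parameters": {
"WAS.PASSWORD": "myNewPassword"
},
"addon_parameters": {}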
Table 184. Deploy a virtual system pattern - second phase with modified placement
Example URL
https://localhost/resources/virtualSystemPatterns/acdaac959-672c-4df7-a648-b333a3843422
Request content-type
application/json
Specify the updated placement in the request body.
Response code
200
OK
401
403
Access forbidden
404
412
500
Unexpected error
Request example:
{
"operation": "deployPlacement",
"topology_parameters": {},
"addon_parameters": {},
"placement": {
"vm-templates": [{
"locations": [{
"name": "1721665121",
"cloud_groups": [{
"name": "esxset15",
"instances": [{
"new_instances": 1,
"nics": [{
"ip_groups": [{
"name": "172",
"new_instances": 1,
"purpose": "data"
}],
"name": "management",
"purpose": "data"
}]
}]
}],
"new_instances": 1
},
{
"name": "1721665123",
"cloud_groups": [{
"name": "esxset16",
"instances": [{
"new_instances": 1,
"nics": [{
"ip_groups": [{
"name": "172_2",
"purpose": "data",
"new_instances": 1
}],
"name": "management",
"purpose": "data"
}]
}]
}],
"messages": ["CWZKS6401E: 1721665123 is missing image IBM OS Image for
Red Hat Linux Systems:2.1.0.0."],
"new_instances": 1
}],
"environment_profile": "MyTest",
"name": "Web_Application-was",
"new_instances": 2
}],
"version": "5.0.0.0"
}
}
https://localhost/resources/virtualSystemPatterns/acdaac959-672c-4df7-a648-b333a3843422
Request content-type
application/json
Specify the updated properties in the request body.
Response code
200
OK
401
403
Access forbidden
404
412
A specified parameter is
not valid. For example,
the JSON file is not valid.
500
Unexpected error
Request example:
{
"content_type": "application/json",
"last_modifier": "tester",
"create_time": "2011-02-24T05:41:34Z",
"last_modified": "2011-02-24T05:41:34Z",
"access_rights": {
"tester": "F"
},
"content_md5": "5B8F7E6CF56F7CE804788C0086589AFF",
"app_type": "application",
"app_id": "a-fb70796e-1b13-467a-babe-b8b700bd563b",
https://localhost/resources/virtualSystemPatterns/acdaac959-672c-4df7-a648-b333a3843422
Response content-type
application/json
Response code
200
OK
401
403
Access forbidden
409
Conflict
500
Unexpected error
Table 187. REST API for virtual system patterns (classic)

GET /resources/patterns (application/json)
Success codes: 200
Error codes: 403, 500

GET /resources/patterns/{id} (application/json)
Success codes: 200
Error codes: 403, 404, 500
id
name
Specifies the name of the pattern. This field is a string value with a
maximum of 1024 characters.
owner Specifies the uniform resource identifier (URI) of the user that owns this
pattern. The URI is relative and should be resolved against the URI of the
pattern.
parts
Specifies a list containing one map per part contained in the pattern.
The map for each part contains the following attributes:
count
The number of virtual machines that are created from this part
when the pattern is deployed. A value of null indicates that the
part can only be used to construct a single virtual machine. Parts
that can be used to construct multiple virtual machines will have a
positive integer value for this attribute.
description
Specifies a textual description of the part.
id
label
properties
Specifies a list containing one map per property defined for the
part.
scripts Specifies a list containing one map per script defined for the part.
virtualimage
Specifies the uniform resource identifier (URI) of the virtual image
associated with the part. The URI is relative and should be
resolved against the URI of the pattern.
The map for each part property contains the following attributes:
description
Specifies a textual description of the property.
key
label
validValues
For properties that are only allowed to have certain values, the
validValues attribute contains a list of the allowable values.
value
Specifies the default value for the property. The type of this value
depends on the property's type.
label
parameters
Specifies a list containing one map per parameter defined for the
script.
The map for each parameter contains the following attributes:
key
value
The default value for the parameter. All parameters have string
values with a maximum length of 4098 characters.
updated
Specifies the time the pattern was last updated, represented as the number
of milliseconds since midnight, January 1, 1970 UTC. This value is numeric
and is automatically generated by the product.
virtualsystems
Specifies the list of URIs of the virtual system instances using this pattern.
The URIs are relative and should be resolved against the URI of the pattern
that contains them.
{
"status": "RM01001",
"message": "RM06000"
}
],
"id": 2,
"updated": 1369145895473,
"counter": 0,
"description": null
}
Note: Key-value pairs that are only used by user interface clients are optional.
Related tasks:
REST API reference on page 867
The representational state transfer (REST) application programming interface (API)
is provided by IBM Cloud Orchestrator.
Log file locations and nodes:
v /var/log/scoui.log, /var/log/scoui.trc (Central Server 2)
v /var/log/httpd (Central Server 2)
v OpenStack: /var/log/nova, /var/log/glance, /var/log/cinder, /var/log/heat
  (Region Server); /var/log/keystone (Central Server 2); /var/log/ceilometer,
  /var/log/qpid, /var/log/neutron (Neutron Server)
v DB2: collected with su - db2inst1 -c db2support (Central Server 1)
v Installer: /var/log/cloud-deployer (Deployment Server)
v Deployment Service: /var/log/ds (Deployment Server)
v HTTP Server: /var/log/httpd (Central Server 2)
v Hyper-V Compute (Hyper-V Server)
v haproxy: /var/log/haproxy.log
v /var/ibm/tivoli/common/eez/logs, /var/log/ico_monitoring,
  /var/log/pcg/pcg.log (Central Server 2)
v /opt/ibm/BPM/v8.5/profiles/Node1Profile/logs (Central Server 2)
v /var/ibm/InstallationManager/logs (Central Server 2)
v Workload Deployer (Central Server 3):
  /drouter/ramdisk2/mnt/raid-volume/raid0/logs/error/*
  /drouter/ramdisk2/mnt/raid-volume/raid0/logs/trace/*
  /drouter/ramdisk2/mnt/raid-volume/raid0/usr/servers/fileserver/*
  /drouter/ramdisk2/mnt/raid-volume/raid0/usr/servers/kernelservices/logs/*
  /drouter/ramdisk2/mnt/raid-volume/raid0/usr/servers/storehouse/logs/*
The dynamic list of compute nodes distinguishes between KVM and VMware
nodes. The components that are assumed to exist on the additional compute nodes
must have a specific name:
v computeNode: KVM compute host
v vmNode: VMware vCenter host
v esxNode: ESX hypervisor host
Procedure
On Central Server 1 or the Deployment Server, open the command line and run
the following script as root:
python pdcollect.py [options]
-p COMPONENTLIST, --components=COMPONENTLIST
Lists the components to be scanned for the log files. The COMPONENTLIST
format is component1,component2,component3,...
-s STARTDATE, --start=STARTDATE
Defines the first date of the log sequence. The STARTDATE format is
YYYY-MM-DD.
-e ENDDATE, --end=ENDDATE
Defines the day after the last day of the log sequence. The ENDDATE format is
YYYY-MM-DD.
--version
Shows the pdcollect tool version and exits.
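For example, to collect the logs of selected components for the first days of
March 2015 (the component names in this call are only illustrative; use the
components available in your environment), you might run:
python pdcollect.py -p nova,keystone -s 2015-03-01 -e 2015-03-06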
A disclaimer is displayed and logged first to alert you that the data that is
gathered and stored may be confidential and could contain passwords.
Results
As output, a zip file with the following name is created:
PDCollectlog_<date and time>_<hostname>.zip
Procedure
1. Create a new SSH user on all the Central Servers and Region Servers:
a. On each of the Central Servers and Region Servers:
v Create a new user <yourmechid> and set the password:
useradd -m <yourmechid>
passwd <yourmechid> #enter at the prompt <yourmechpwd>
b. On Central Server 1:
v Generate the SSH keys for <yourmechid> and copy the public key to all IBM Cloud
Orchestrator Servers:
su - <yourmechid> -c "ssh-keygen -q -t rsa -N -f ~/.ssh/id_rsa"
v Here $i stands for the IP address of each IBM Cloud Orchestrator server
including Central Server 1:
[root@cs-1] su <yourmechid>
[yourmechid@cs-1] scp ~/.ssh/id_rsa.pub $i:~/.ssh/authorized_keys
Note: Make sure that you accept the server key and enter the password of
<yourmechid> when prompted.
c. Verify that <yourmechid> on Central Server 1 can SSH to all the IBM Cloud
Orchestrator Servers including Central Server 1 without interruption:
su - <yourmechid> -c "ssh <yourmechid>@$SCO_server_ip"
pdcollect.py pdcollect.py.org
pdcollect_nonroot.py pdcollect.py
-R <yourmechid>:<yourmechidgroup> /home/<yourmechid>/
e. Modify the file pdcollect.py, and replace "yourmechid" with the new user
name:
# User which is used to execute remote commands
SSH_USER = "yourmechid"
3. On each of the IBM Cloud Orchestrator servers, add the user <yourmechid> in
the sudo list:
a. Create a sudoer file named <yourmechid> and place it in /etc/sudoers.d:
The content of the file <yourmechid> is as follows:
Note: Replace <yourmechid> with your new user name.
# sudoers additional file for /etc/sudoers.d/
# IMPORTANT: This file must have no ~ or . in its name and file permissions
# must be set to 440!!!
# this file is for the SAAM mech-ID to call the SCO control scripts
Defaults:<yourmechid> !requiretty
# scripts found in control script directory
# adapt the directory names to the mech id!
# allow for
<yourmechid> ALL = (root) NOPASSWD:/bin/su - db2inst1 -c db2support, (root) \
NOPASSWD:/bin/find, (root) NOPASSWD:/bin/su, (root) NOPASSWD:/usr/bin/tee, (root) \
NOPASSWD:/bin/netstat, (root) NOPASSWD:/bin/chmod, (root) NOPASSWD:/bin/rm, (root) \
NOPASSWD:/usr/bin/zip, (root) NOPASSWD: /bin/cp
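As the comments in the file state, the sudoers file must have 440 permissions. One
way to set the permissions and validate the syntax of the file (assuming the file
name <yourmechid>) is:
chmod 440 /etc/sudoers.d/<yourmechid>
visudo -c -f /etc/sudoers.d/<yourmechid>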
v To increase the logging level for the Self-service user interface, edit the
/opt/ibm/ccs/scui/etc/log4j.properties file on Central Server 2, and replace
all occurrences of INFO with TRACE. Restart the user interface with the following
command:
service scui restart
v To increase the logging level for the OpenStack Nova components, edit the
/etc/nova/nova.conf file on the Region Servers and add the following line in the
[DEFAULT] section:
default_log_levels=amqplib=WARN,sqlalchemy=WARN,boto=WARN,suds=INFO,keystone=INFO,eventlet.wsgi.server=WARN,
smartcloud=DEBUG,nova=DEBUG
You can change the logging setting for individual Nova components to WARN,
INFO or DEBUG. After changing the logging level, you must restart the Nova
components. For information about starting IBM Cloud Orchestrator services, see
Managing the services on page 223.
v To increase the logging level for the OpenStack Glance component, edit the
/etc/glance/glance*.conf files on the Region Server and change the Debug
value to True. After changing the logging level, you must restart Glance. For
information about starting IBM Cloud Orchestrator services, see Managing the
services on page 223.
v To increase the logging level for the OpenStack Keystone component, edit the
/etc/keystone/keystone.conf file on Central Server 2 and change level=WARNING
to level=DEBUG in the [logger_keystone] section. After changing the logging
level, you must restart the Keystone component by running the service
openstack-keystone restart command.
v To set the logging level for the Workload Deployer component, see Workload
Deployer log files on page 1031.
v The Public Cloud Gateway component uses log4j logging. The
log4j.properties file is located in the /opt/ibm/pcg/etc directory. For more
information about the properties in the log4j.properties file, see the
documentation on the Log4j web site at http://logging.apache.org/log4j.
v To set the logging level for the Business Process Manager component, log on to
the WebSphere Integrated Solutions Console on the Central Server 2 and click
Troubleshooting > Logs and trace to access the Logging and Tracing panel.
Procedure
1. Navigate to PATTERNS > Deployer Administration > Troubleshooting and
expand the Logging section. By expanding the Logging section, you have
access to the available logs by using the log viewer, and you can also
download the available logs to your file system for additional review.
2. Click one of the following links to open a new web page and view the log files:
v View current error file to view the error log.
v View current trace file to view the trace log.
a. Click Pause to stop new log entries from being appended. This action is
only available if the log viewer is accepting new entries.
b. Click Restart for new entries to be appended. This action is only available if
the log viewer is not accepting new entries because it is paused.
c. Click Clear to clear all the data from the log viewer. This action is available
whether the log viewer is accepting new entries or if it is not accepting new
entries.
3. Click Download log files to save all the available logs to your file system in
.zip format. If you need to view information regarding events that have already
happened, then you must use this link. A window is presented allowing you to
open the compressed file or save it to your file system. The compressed file
includes the current error.log file and the current trace.log file in their
entirety, and the available archived versions of these logs. You can download
all files with a single click.
4. Expand Configure trace levels to view or modify the trace levels. A set of
default classes is defined as the trace string to be included in the logs. The
level of trace for these classes can be modified and new classes can be added.
The trace levels provided are based on Java Logging convention and
WebSphere Application Server levels. The complete list of trace levels is listed
later in this section, ordered in ascending order of severity:
v FINE: The trace information is a general trace plus method entry / exit /
return values.
v FINER: The trace information is a detailed trace.
v FINEST: The trace information is an even more detailed trace that includes all
the detail that is needed to debug problems.
v ALL: All events are logged. If you create custom levels, ALL includes your
custom levels and can provide a more detailed trace than FINEST.
v SEVERE: The task cannot continue, but the component can still function.
v WARNING: Potential or impending error.
v INFO: General information outlining the overall task progress.
v OFF: No events are logged.
Increasing logging will decrease performance. You might need advice from IBM
Customer Support if you want to change the trace levels.
a. Add a trace string. Click Add trace string and enter in a valid trace string.
The trace level for a new trace string is set to INFO by default.
b. Remove a trace string. Click the remove icon next to a trace string to remove
that trace string.
c. Modify a trace level. Click the <trace_string> and select a new trace level in
the drop-down menu. Click Save to commit the new trace level for the
specified trace string.
Results
After you have completed these steps, you have reviewed all the available log
data.
Related reference:
Example script to configure the trace levels on page 405
This script package sets a trace specification level (example
"com.ibm.ws.ibm.*=info") on all servers in a cell. It can be included on either a
stand-alone pattern part or a Deployment Manager pattern part. Users can specify
the trace specification during deployment.
Product limitations
Review the following list of limitations of IBM Cloud Orchestrator.
v The network adapter of type E1000E is not supported by IBM Cloud
Orchestrator. You cannot deploy images that contain this type of network
adapter.
v For OpenStack, the service users (for example, nova, cinder, glance, heat,
ceilometer) must not be renamed and must be enabled. Also, the service project
must not be renamed and must remain enabled. IBM Cloud Orchestrator has
additional requirements: the admin administrator user and the admin project
(admin tenant) must not be renamed or disabled.
v Virtual System Patterns do not support flavors with a disk size of 0, regardless
of the type of hypervisor used. For Virtual System Pattern deployments, use a
flavor with a disk size that is equal to or greater than the disk size of the image.
v In a VMware region, after deploying a virtual machine, if you rename the virtual
machine by using vCenter and then you change the flavor by using the
Self-service user interface, the name of the virtual machine is set back to the
original name.
v If an image has been imported to Glance with one of the following procedures:
using the OpenStack command line interface to import disk image to Glance,
as described in Adding images to your OpenStack environment on page
341;
using the single disk OVA image that was imported to the Workload
Deployer component and the image was checked out to the specified region;
the first provisioning of the virtual machine to the VMware hypervisor might:
be slow (for example, might take more than 1 hour) when using single image
deployment of Workload Deployer patterns;
fail due to timeout (by default set to 1 hour) for the virtual machine
registration phase in the OpenStack when deploying by using Workload
Deployer patterns.
The problem does not occur after the first provisioning. If the provisioning fails
due to timeout, you can trigger the provisioning manually to allow the transfer
of the disk image to the target datastore.
v PowerVC limitations:
The Power NPIV feature requires that all of the hosts in a given system pool
have NPIV-capable Fibre Channel adapters.
Instances or other resources cannot be moved from one project to another.
Each PowerVC Region Server has one availability zone only.
Because of an OpenStack limitation, in the Administration user interface,
PowerVC hypervisors are always displayed with an active status. This
limitation might cause IBM Cloud Orchestrator to try to deploy virtual
machines to an inactive hypervisor.
Workload Deployer does not support pLinux images.
AIX does not support cloud-init; this prevents AIX images deployed via
Single Server Deployment or Heat from being able to leverage password
change or ssh key injection functionality at deploy time.
Shared Storage Pool based images do not support disk resize or extension.
Shared Volume Controller FlashCopy operations can only occur serially per
image; if a copy or extending operation is in progress on a boot volume, you
cannot invoke a new operation to make changes to the volume. You can check
the Shared Volume Controller UI to see whether a FlashCopy is in progress
on the target volume.
The option to specify user SSH keys is not supported if deploying virtual
system classic on Power regions.
v Hyper-V limitations:
Virtual system patterns and virtual system patterns (classic) are not
supported.
Discovery and on-boarding of existing instances into IBM Cloud Orchestrator
is not supported for Hyper-V.
Resizing an instance will temporarily make the instance invisible in Hyper-V
manager. It will be re-added once the resize is finished.
Resizing the disk to a smaller size will result in an error.
Hypervisor errors
You can receive error messages for hypervisors defined to IBM Cloud Orchestrator
under certain circumstances.
Solution
disk_available_least for hypervisor can be a negative number to indicate
the over commitment of hypervisor disk space.
Because the qcow2 disk format is used for the virtual machines in the KVM
hypervisor, the whole size of the disk is not allocated from the beginning, to
save disk space. The disk_available_least value comes from the following
equation:
disk_available_least = free_disk_gb - disk_overcommit_size
disk_overcommit_size =
virtual size of disks of all instances - used disk size of all instances
When the instances on the hypervisor have overcommitted more disk space than
the free disk space, disk_available_least is a negative number.
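For example, with purely illustrative numbers: if free_disk_gb is 100 GB and the
instances have a total virtual disk size of 300 GB, of which 150 GB is actually
used, then:
disk_overcommit_size = 300 - 150 = 150 GB
disk_available_least = 100 - 150 = -50 GB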
The free_ram_mb value for a hypervisor can be a negative number to indicate the
overcommitment of hypervisor memory. The default memory overcommit ratio is 1.5,
which means that overall you can use memory_mb * 1.5 of memory. The default
CPU overcommit ratio is 16, which means that overall you can use vcpus * 16
virtual CPUs.
To configure the overcommit ratios, you must modify the following attributes in
nova.conf and restart the openstack-nova-scheduler and the openstack-nova-compute
services.
Note: This configuration is effective for KVM, PowerVC, and VMware regions.
# virtual CPU to Physical CPU allocation ratio (default: 16.0)
cpu_allocation_ratio=16.0
# virtual ram to physical ram allocation ratio (default: 1.5)
ram_allocation_ratio=1.5
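For example, after editing nova.conf, the restart on the Region Server might look
like the following (the service names are the ones used elsewhere in this guide):
service openstack-nova-scheduler restart
service openstack-nova-compute restart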
Image errors
There are some known errors that might occur when managing images in IBM
Cloud Orchestrator.
Causes
This error might happen if you specified the wrong product key for the Windows
operating system of the image you are deploying.
Resolving the problem
Configure the virtual image part in the pattern by specifying the right product key
for the Windows operating system and deploy the virtual image again.
graphic device for the instance if the VNC is disabled. Therefore, if the image
depends on the graphic device during boot time, it hangs and does not boot up.
You imported a virtual image, but an image with the same name is not displayed
in the image list.
Causes
The problem occurs if you are importing a virtual image with the catalogeditor
role and another user already imported an image with the same name.
Resolving the problem
Ask a user with the admin role to perform one of the following actions:
v Grant access to the existing virtual image to the project to which your
user belongs.
v Register the new virtual image with a different name and then grant
access to the virtual image to the project to which your user belongs.
Instance errors
There are some known errors that might occur when managing instances in IBM
Cloud Orchestrator.
Solution:
Install openstack-nova-console*.rpm that can be found in your IBM Cloud
Orchestrator package and then run:
/etc/init.d/openstack-nova-consoleauth restart
This indicates that there is not enough RAM to host one more virtual machine
that requires 2 GB.
Deployment errors
There are some known errors that might occur when deploying virtual system
patterns and images in IBM Cloud Orchestrator.
where
v IPADDR is the value passed from IBM Cloud Orchestrator as
${partname.ipaddr}
Solution
To solve this problem, perform the following steps:
1. All networks created in OpenStack must be created with --project
<project id> parameter specified. If a network is to be shared across
multiple users, the easiest way is to define a Public project and include
all users in that project.
Because in a multi-tenancy scenario each project must have its own
network created and assigned, make sure that your project has one
network attached. For example, if project003 has network 192.0.2.0/24
attached, then the members of project003 can deploy successfully.
Verify that the specified network ID belongs to the project that you
currently use, by using the following commands:
[root@SVT-CIL-NEW ~]# keystone tenant-list
+----------------------------------+------------+---------+
|                id                |    name    | enabled |
+----------------------------------+------------+---------+
| 1f9f8b62052046ee97763f4eb88288e3 |  service   |   true  |
| 3c8b192caab1499aa4aeb2dcf4280a12 |   admin    |   true  |
| c7ea7db95d2241c383f2f5995b31fa19 | project003 |   true  |
+----------------------------------+------------+---------+
[root@SVT-CIL-NEW ~]# nova-manage network list
id  IPv4          IPv6  start address  DNS1       DNS2       VlanID  project                            ...
1   192.0.2.0/24  None  192.0.2.10     192.0.2.2  192.0.2.2  4090    c7ea7db95d2241c383f2f5995b31fa19   ...
To associate the network with the project, run the nova-manage network modify
<x.x.x.x/yy> <project_ID> command, where
v <x.x.x.x/yy> is the network to be modified.
v <project_ID> is the ID of the project to be associated.
For example:
nova-manage network modify 192.0.2.0/24 c7ea7db95d2241c383f2f5995b31fa19
Solution
To resolve the problem, set the disk size of the image flavor to 0. The flavor then
uses the default size of the image to be deployed.
Alternatively, set the disk size of the flavor to a value that is equal to, or greater
than, the size of the image to be deployed. To identify the correct value, use the
nova image-show image_ID | grep disk command, as shown in the following
example:
nova image-show 4223c31a-4fcf-1747-b3e3-478d44510201 | grep disk
| metadata customization.disksize.hard disk 1
| {"category": "Storage Settings", "name": "DiskSize.Hard disk 1",
"classification": {"id": "STORAGE", "label": "Storage"},
"rules":
Note: Line breaks and indents have been inserted in the command output, to
make the example easier to read.
In this example, the disk size required to create an instance of this image is 12288.
Symptoms
The deployment history ends with the following message:
Virtual machine could not be registered
date/time
In addition, when browsing the Virtual Machines section of the failed instance, you
see that the name of the virtual machine contains just the first octet of the IP
address instead of the host name, for example d_192 when the IP address of the
machine is like 192.0.*.*.
Causes
The DNS entry is missing in the forward lookup zone for the IP address which has
been assigned to the virtual machine. You can find the exact address in the
hypervisor tools (vCenter or OpenStack).
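A quick way to check whether the forward lookup entry exists (the host name and
DNS server here are placeholders, not values from this example) is:
nslookup <vm_hostname> <dns_server_ip>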
Symptoms
The user is not properly created on the newly provisioned Power virtual machine
even if the add user script has run and is marked as successfully completed on the
virtual machine details.
Causes
This is an issue with the add-on script being used, which is not properly handling
the error condition and always completing with success.
Cause
The actual deployment of a virtual machine failed since an error occurred while
placing the virtual machine in the cloud. This error message hides the actual root
cause of the problem.
Security limitations
Check the known security limitations that might expose your IBM Cloud
Orchestrator environment to risks.
| e6c343c911c844c9b6fb492166e3945a |
| e6c343c911c844c9b6fb492166e3945a |
| be9192ecbe7c4306beebf8efaf2b49b9 |
| 18877e517fdd4643921ceb3b354c928b |
| 18877e517fdd4643921ceb3b354c928b |
| be9192ecbe7c4306beebf8efaf2b49b9 |
| 6c3ba418b10b4c62a195d9aafd1c3412 |
| 18877e517fdd4643921ceb3b354c928b |
| be9192ecbe7c4306beebf8efaf2b49b9 |
| 2b84a9bc5c6f4338aba4d638072d5624 |
| be9192ecbe7c4306beebf8efaf2b49b9 |
+----------------------------------+
Solution:
1. Copy /root/keystonerc from central-server-2. Change the OS_REGION_NAME
variable to the corresponding region name.
2. Source the copied keystonerc:
source keystonerc
Symptoms
The Default add disk add-on failed or the device requested in the Default raw
disk add-on is not present in the fdisk -l output.
Problem
The problem occurs because in Linux there is only one default gateway, which
means that even if the network packet can reach the second NIC, the response
packet still uses the default gateway. At that point, the response packet is not able
to reach the sender.
Solution
The solution is to manually add another routing table by performing the following
steps:
1. Determine which NIC has the default gateway and which NIC needs an
additional routing table. Run the command:
ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1454 qdisc pfifo_fast state UP qlen 1000
link/ether fa:16:3e:cd:c3:17 brd ff:ff:ff:ff:ff:ff
inet 192.0.1.145/24 brd 192.0.1.255 scope global eth0
inet6 fe80::f816:3eff:fecd:c317/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1454 qdisc pfifo_fast state UP qlen 1000
link/ether fa:16:3e:f8:a4:f2 brd ff:ff:ff:ff:ff:ff
inet 192.0.2.4/24 brd 192.0.2.255 scope global eth1
inet6 fe80::f816:3eff:fef8:a4f2/64 scope link
valid_lft forever preferred_lft forever
Now, the virtual machine has two NICs: eth0 has 192.0.1.145, eth1 has
192.0.2.4.
Check the route table:
route -n
Kernel IP routing table
Destination     Gateway     Genmask         Flags Metric Ref    Use Iface
169.254.169.254 192.0.2.3   255.255.255.255 UGH   0      0        0 eth1
192.0.1.0       0.0.0.0     255.255.255.0   U     0      0        0 eth0
192.0.2.0       0.0.0.0     255.255.255.0   U     0      0        0 eth1
169.254.0.0     0.0.0.0     255.255.0.0     U     1002   0        0 eth0
169.254.0.0     0.0.0.0     255.255.0.0     U     1003   0        0 eth1
0.0.0.0         192.0.2.1   0.0.0.0         UG    0      0        0 eth1
So the NIC eth1 has a default gateway and can be reached from the outside,
while eth0 does not have a gateway, so it cannot be reached from other
networks.
2. You must add another routing table for eth0. Use the following command (eth0 is
the name of the routing table, or you can provide your own meaningful name):
echo "1 eth0" >> /etc/iproute2/rt_tables
Problem
A user cannot access the Administration user interface after successful installation.
This is not a common error and occurs only if the node runs out of space.
Solution
Check if there is enough space on Central Server 2 and if the user has created
any soft link to manage the space on the partition. Then, make sure that the
targeted folder has write permissions. The installer requires write permission on
directories, such as /tmp, /var/tmp, /usr/tmp, /var/www. If you have mounted
these directories on the target directory, make sure that they have write permission
by group, for example 777 permission on /tmp, /var/tmp.
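A minimal check of the free space and the permissions on the directories mentioned
above might look like the following:
df -h /tmp /var/tmp /var/www
ls -ld /tmp /var/tmp /usr/tmp /var/www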
Problem Determination
If you get nothing when running nova list and nova hypervisor-list, check the
following files and services:
1. /var/log/nova/compute.log:
2014-08-05 23:40:38.779 10710 ERROR suds.client [-] <?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:ns0="urn:vim25" xmlns:ns1="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
<ns1:Body>
<ns0:Login>
<ns0:_this type="SessionManager">SessionManager</ns0:_this>
<ns0:userName>root</ns0:userName>
<ns0:password>object00!</ns0:password>
</ns0:Login>
</ns1:Body>
</SOAP-ENV:Envelope>
2014-08-05 23:40:38.782 10710 CRITICAL nova.virt.vmwareapi.driver [-] Unable to connect
to server at 172.19.4.9, sleeping for 60 seconds
2014-08-05 23:40:38.782 10710 TRACE nova.virt.vmwareapi.driver Traceback (most recent call
last):
2014-08-05 23:40:38.782 10710 TRACE nova.virt.vmwareapi.driver
File "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py", line 1067,
in _create_session 2014-08-05 23:40:38.782 10710 TRACE nova.virt.vmwareapi.driver
password=self._host_password)
2014-08-05 23:40:38.782 10710 TRACE nova.virt.vmwareapi.driver
File "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vim.py", line 203,
in vim_request_handler 2014-08-05 23:40:38.782 10710 TRACE nova.virt.vmwareapi.driver details)
2014-08-05 23:40:38.782 10710 TRACE nova.virt.vmwareapi.driver
VimFaultException: Cannot complete login due to an incorrect user name or password.
2. /etc/init.d/openstack-nova-compute status:
[root@sco24-a28-node2 init.d]# ./openstack-nova-compute status
openstack-nova-compute (pid 10710) is running...
Restart openstack-nova-compute
You must restart openstack-nova-compute:
service openstack-nova-compute restart
Causes:
The API or user interface only lists 1000 resources. This is an intentional limit, as
larger result sets require greater cost to derive and manage.
Solution:
If you want to see larger result sets, increase the maximum number of
instances returned in a single response by setting the osapi_max_limit property in
the /etc/nova/nova.conf file in the Compute Nodes of the related region. This
impacts several interfaces, including the Nova list interface, the Horizon
administrative user interface, and the IBM Cloud Orchestrator user interface. To
manage all instances in a region, the recommended setting is the maximum
number of instances for the region plus a growth buffer. For example, if the region
can contain 2000 instances, and a 10% growth buffer is desired, a limit of 2200
should be used.
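A minimal sketch of the corresponding nova.conf change, using the 2200 value
from the example above:
[DEFAULT]
osapi_max_limit=2200
After changing the value, restart the Nova services. For information about starting
IBM Cloud Orchestrator services, see Managing the services on page 223.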
Problem
When you use the Self-service user interface to start an instance on a KVM
Region by clicking PATTERNS > Instances > Virtual System Instances, the
instance hangs in Launching status. However, in the nova list command output,
the image has ACTIVE status.
Causes
The KVM Compute Node hardware clock is configured as LOCAL instead of UTC.
Solution
Complete the following steps on each KVM Compute Node in the region:
1. Log on to the KVM Compute Node as a root user.
2. Edit the /etc/adjtime file to ensure that the hardware clock is configured as
UTC, as shown in the following example:
619.737272 1411131873 0.000000
1411131873
UTC
3. Make sure that the current system clock is synchronized with the time server.
For example, you can use one of the following commands to synchronize the
system clock:
v sntp -sS ntp_host
v ntpd -gq; ntp-wait
v rdate -s ntp_host
v ntpdate ntp_host
4. Save the current system clock to hardware by running the following command:
hwclock --systohc
Problem:
When you use the key file to create a node, the node shows the UNAVAILABLE state.
If you check the deployment server log file you will find the following error:
[Errno 13] Permission denied: u/home/marty/.ssh/id_rsa execute
/usr/lib/python2.6/site-packages/ds/engine/deploy/node_create/_080_check_resources.py:59
Solution:
This is because the deployment server will use the Heat user to access the key file
provided. Make sure that the Heat user has access to the key file.
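A quick way to verify the access, assuming that the Deployment Service runs as
the heat system user (this user name is an assumption) and using the key file
path from the error message above:
sudo -u heat test -r /home/marty/.ssh/id_rsa && echo readable || echo "not readable"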
Workaround
As a workaround, you must specify the availability zone when you create this
aggregate, or must reset the metadata after updating the aggregate.
Problem
The default landing page for the Administration user interface is the hypervisor
page. This page is not accessible for a Public Cloud Gateway region.
Solution
Change the default landing page to the domain page. To change the landing page
on Central-Server2, complete the following steps:
1. Edit the Administration user interface configuration file: /etc/openstack-dashboard/local_settings.
2. Update the following line:
HORIZON_CONFIG['user_home'] = lambda user: '/admin/hypervisors' if user.is_superuser else '/project'
to:
HORIZON_CONFIG['user_home'] = lambda user: '/admin/domains' if user.is_superuser else '/project'
3. Restart the Administration user interface: service httpd restart.
Symptom
When you deploy a VMware virtual system pattern in a multi-language
environment, you get the following error message: TRACE nova.api.openstack
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe7 in position 28:
ordinal not in range(128).
Problem Description
Environment:
Platform & Locale: zh_CN.utf8
vCenter: S.Chinese Windows 2008 R2
After the installation of IBM Cloud Orchestrator, the following tasks are
performed:
Using the command glance image-show XXX, you can see that many properties are
in Simplified Chinese, which is consistent with vCenter.
Solution
This issue happens due to support issues for a multi-language environment. There
are two different default string encoding settings in Python:
* Python 2.x: ASCII
* Python 3.x: UTF-8
This problem happens because ASCII is set as the default encoder in the
environment. If you want to change Python 2.x to use UTF-8 by default, you must
add the following three lines into the service daemon script:
import sys
reload(sys)
sys.setdefaultencoding("UTF-8")
Note:
v The above line number might be different, but the basic principle for the change
is to add those three lines following the line import sys closely.
v If you decide to change the default encoding, you must be aware that
unpredictable errors might happen, as the default encoding form affects the
translation between Python and the outside world, and also all internal
conversions between 8-bit strings and Unicode.
Cause
Check that the openstack-keystone component is online on Central Server 2 by
running the following command on Central Server 1:
/opt/ibm/orchestrator/scorchestrator/SCOrchestrator.py \
--status --components=openstack-keystone
Solution
If the Keystone process is not online, restart it by running the following command
on Central Server 2:
/opt/ibm/orchestrator/scorchestrator/SCOrchestrator.py \
--start --components=openstack-keystone
Check that users can log in to the IBM Cloud Orchestrator user interfaces.
-check status
Shared Volume Controller FlashCopy operations can only occur serially per
image; if it is doing a copy or extending operation on a boot volume, you
cannot invoke a new operation to make changes to the volume. You can
check the Shared Volume Controller UI to see whether a flashcopy is in
progress to the target volume or the current target volume is being
extended:
2014-11-11 01:24:58.006 2095 TRACE nova.openstack.common.loopingcall
ResizeError: Get error: PVCExpendvdiskFCMapException: Flashcopy is in
progress for boot volume, volume size didnt change. Please try again later.
(HTTP 409) (Request-ID: req-29350427-32b3-4721-baca-504c5216b041)
during resizing the instance in the PowerVC
Changing the PowerVC username and password on the Region Server if the
PowerVC username or password changes
1. On the Region Server, edit the following file:
/etc/powervc/powervc.conf
After these three commands are executed the status of the virtual machine
in the PowerVC UI should change from warning to OK, but this may take
several minutes.
PowerVC intermittent volume attach issues on Workload Deployer
Intermittently, Workload Deployer returns the pattern status of failed
against an AIX deployment that had a disk attached. The error returned as
part of the disk attach script in Workload Deployer will state the following
or similar:
*** basic validation of input
*** validation done
*** searching for uninitialized disk of size 1GiB
new pvs =
pvtouse=
ERROR : no free pv found of size 1
However, the final status of the virtual machine itself is green. What has
happened is that PowerVC returned that the status of the volume was
in-use when in reality it had not been attached yet. This caused the
Workload Deployer verification of the attachment to fail. In all cases where
this issue occurs, the volume is correctly attached and the only side effect
is that the overall status of the pattern is failed. The virtual machine is
fully functional.
rstrip error in nova-powervc.log and some actions such as starting or stopping
the virtual machine may not be responsive
In the event that the following error is in the nova-powervc.log:
2014-12-17 02:57:10.348 26453 ERROR powervc.nova.driver.compute.manager [-]
Exception: Exception: NoneType object has no attribute rstrip
stop and start the services on the PowerVC Region Server using
SCOrchestrator.py.
PowerVC Region Server does not support secure connection to PowerVC server
with mixed-case or uppercase host name
This problem affects only secure connections; insecure connections are
unaffected. The following error is displayed:
Host "hostname" does not match x509 certificate contents: CommonName "hostNAME", subjectAltName "DNS:hostNAME, DNS IP"
The DB2 diagnostic logs will then contain error entries for RAINMAKE and/or
STORHOUS databases indicating the problem and providing solution information. At
the same time, the Workload Deployer component will not be able to start
successfully, reporting database connectivity errors in the trace logs.
Sample error entry from the DB2 diagnostic log showing inconsistent state for the
database RAINMAKE:
2014-10-01-11.46.34.244536-240 E560597E650          LEVEL: Error
PID     : 28925                TID : 140323141445376 PROC : db2sysc 0
INSTANCE: db2inst1             NODE : 000            DB   : RAINMAKE
APPHDL  : 0-3751               APPID: 172.21.29.101.44732.141001163839
AUTHID  : DB2INST1             HOSTNAME: cil021029097.cil021029.ibm.com
EDUID   : 16732                EDUNAME: db2agent (RAINMAKE) 0
FUNCTION: DB2 UDB, base sys utilities, sqledint, probe:2535
MESSAGE : SQL1015N The database is in an inconsistent state.
DATA #1 : String, 91 bytes
Crash Recovery is needed.
Issue RESTART DATABASE on this node before issuing this command.
SQLSTATE=55025
Sample error message from the Workload Deployer component trace log in case of
problems with inconsistent database:
CWZCO1014E: Could not retrieve resources: [pdq][0][2.7.116]
CWPZC9001E: Could not obtain Connection from org.apache.commons.dbcp.PoolingDataSource;
Caused by: com.ibm.db2.jcc.am.SqlException: DB2 SQL Error: SQLCODE=-1015, SQLSTATE=55025,
SQLERRMC=null, DRIVER=4.14.113 com.ibm.pdq.runtime.exception.DataRuntimeException:
[pdq][0][2.7.116] CWPZC9001E: Could not obtain Connection from
org.apache.commons.dbcp.PoolingDataSource;
Caused by: com.ibm.db2.jcc.am.SqlException: DB2 SQL Error: SQLCODE=-1015, SQLSTATE=55025,
SQLERRMC=null, DRIVER=4.14.113
The Workload Deployer component logs location when the error message from
above could be present:
/drouter/ramdisk2/mnt/raid-volume/raid0/logs/trace/trace.log
/drouter/ramdisk2/mnt/raid-volume/raid0/usr/servers/kernelservices/logs/trace.log.*
/drouter/ramdisk2/mnt/raid-volume/raid0/usr/servers/storehouse/logs/trace.log.*
Solution:
In case of database inconsistent state, you must run the following commands on
the DB2 node:
su - db2inst1 -c "db2 connect to rainmake; db2 restart database rainmake"
su - db2inst1 -c "db2 connect to storhous; db2 restart database storhous"
Log tracing
In Business Process Manager, tracing is switched off by default. If you must
troubleshoot or debug any issues, switch tracing on.
2. Run the following command on Central Server 2 from the directory where the file
has been saved:
<path to BPM profile>/bin/wsadmin.sh -host `uname -n` -username admin
-password <passw0rd> -f disableLogging.py
2. Run the following command on Central Server 2 from the directory where the file
has been saved:
<path to BPM profile>/bin/wsadmin.sh -host `uname -n` -username admin
-password <passw0rd> -f enableLogging.py
Solution:
1. Open the Business Process Manager Process Designer.
2. Open Process Apps or Toolkits, for example:
SCOrchestrator_Support_vSys_Toolkit (SCOVSYS)
3.
3. Log in to IBM Cloud Orchestrator and wait for the new regions to
display. Once the images are displayed, delete the old
EC2-US-WEST_NORTHERN-CA and EC2-US-WEST_OREGON cloud groups and
hypervisors. You can also delete any registered images that belong to
these regions.
4. Log in to the Central Server 2 and run the following command:
keystone endpoint-list
Solution:
The credentials used to connect to non-IBM supplied OpenStack have read but not
write permissions. The Public Cloud Gateway requires credentials with
permissions to deploy instances.
Quota troubleshooting
Resolve any quota issues that you encounter when using the Public Cloud
Gateway.
Default quota is not defined large enough
Without any customization, a default quota definition exists in
the config.json. There are situations in which this default quota definition
is too small.
Resolution: Either create a project level quota in the Quota tab of the
Project page in the Administration user interface or increase the default
quota definition in config.json.
Project quota definition is too small
Resolution: If a project level quota definition exists, the values can only be
changed in the Quota tab of the Project page in the Administration user
interface.
Existing virtual machine instances already consume more resources than the
quota allows
Resolution: Count the number of instances, cores, RAM, and volume usage
and update the corresponding quota values. Volumes is the sum of the VM
instance volume and the additional disks. There is a gigabytes value in the
quotas which defines the largest possible virtual machine instance.
It is possible that you have reached this limit.
Too many key-pairs already exist
Resolution: Key pairs are stored on a per-project basis, post-fixed with the
project ID. Sum up all the key pairs that have the same project ID and
adjust the quota definition for key pairs accordingly.
More storage is already consumed than is defined in the quota
Resolution: Volumes are the sum of the virtual machine instance volume
and the additional disks. Adjust the quota definition for volumes
accordingly.
Provisioning failed even though quota has not been reached yet
There might be situations where the capacity of the region (EC2, SoftLayer,
or NIOS) is already exhausted before the defined quotas are reached.
Resolution:
v Check if you have set the quota for the region and projects higher than
the capacity of the region. Either lower the quota for the related projects
or increase the capacity of the region.
Problem
The Public Cloud Gateway startup fails with Unable generate admin token
and HybridUnauthorizedException errors.
During startup of Public Cloud Gateway an admin token is generated based on the
following configuration information in the etc directory of the Public Cloud
Gateway:
* admin.json
* config.json
Admin.json content:
{
"auth":{
"passwordCredentials":{
"username":"xxxx",
"password":"yyyyy"
},
"tenantName":"zzzz",
"domainName": "ddddd"
}
}
The username must be a user ID which has admin rights. The password must be
encrypted via the encryptPassword.sh. The tenantName must be set to the tenant
name of the admin user. The domainName is optional and defaults to the Default
domain. Set domainName to the domain of the admin user.
Note: Required if the user is in a non-default domain.
Resolution
Make sure the values in admin.json match to your admin userid in the system.
Config.json content:
"auth":{
"provider":"keystone",
"service_url":"http://KeystoneHost:5000",
"admin_url":"http://KeystoneHost:35357"
}
If you manually changed the content of the service_url or the admin_url, the
admin token cannot be created.
Make sure that the KeystoneHost is set to the host name where keystone is
installed in your IBM Cloud Orchestrator environment. During installation the
values are configured based on your topology selection.
Symptom
The problem occurs when configuring privateNetworkOnly according to what is
documented in Configuring subnets and security groups in a non-default VPC
region on page 702.
During provisioning, the result is not as expected, whether a public IP address is
assigned or not.
Cause
Late in 2014, Amazon added new out-of-the-box support for auto-assign public
IP address on the subnet configuration for default and non-default VPCs.
If you see a Modify Auto-Assign Public IP button on the subnets page in your
VPC Dashboard, you might face the problem.
If Auto-assign Public IP is set to yes, this setting always takes precedence
over what you configured in the Public Cloud Gateway.
Solution
The Auto-assign Public IP flag on the subnet used for provisioning must be set to
no for default or non-default VPC.
Setting the Auto-assign Public IP flag on the subnet to yes always takes precedence
over the related Public Cloud Gateway configuration.
Related concepts:
Configuring subnets and security groups in a non-default VPC region on page
702
You can configure subnets and security groups in a non-default VPC region.
2. Make sure that you performed all the image setup steps for the
deployment scenario that you chose.
3. Check the password rules that are active on the base image and ensure
that the provided password is compliant.
4. Make sure that cloud-init or cloudbase-init is installed and
configured, and that the Public Cloud Gateway provided extensions are
installed.
5. For Windows provisioning, make sure that all the required ports are
reachable from the Workload Deployer node.
Accessibility features
The following list includes the major accessibility features in IBM Cloud
Orchestrator:
v Keyboard-only operation
v Interfaces that are commonly used by screen readers
v Keys that are discernible by touch but do not activate just by touching them
v Industry-standard devices for ports and connectors
v The attachment of alternative input and output devices
Note: The default configuration of JAWS screen reader does not read tooltips.
JAWS users must enable their current mode to read tooltips by selecting Utilities >
Settings Center > Speech Verbosity > Verbosity Level > Configure Verbosity
Levels.
User documentation is provided in HTML and PDF format. Descriptive text is
provided for all documentation images.
The knowledge center, and its related publications, are accessibility-enabled.
Notices
This information was developed for products and services that are offered in the
USA.
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user's responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not grant you
any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive, MD-NC119
Armonk, NY 10504-1785
United States of America
For license inquiries regarding double-byte character set (DBCS) information,
contact the IBM Intellectual Property Department in your country or send
inquiries, in writing, to:
Intellectual Property Licensing
Legal and Intellectual Property Law
IBM Japan Ltd.
19-21, Nihonbashi-Hakozakicho, Chuo-ku
Tokyo 103-8510, Japan
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions, therefore, this statement may not apply
to you.
This information could include technical inaccuracies or typographical errors.
Changes are periodically made to the information herein; these changes will be
incorporated in new editions of the publication. IBM may make improvements
and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.
Any references in this information to non-IBM websites are provided for
convenience only and do not in any manner serve as an endorsement of those
websites. The materials at those websites are not part of the materials for this IBM
product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact:
IBM Corporation
2Z4A/101
11400 Burnet Road
Austin, TX 78758
U.S.A.
Such information may be available, subject to appropriate terms and conditions,
including in some cases, payment of a fee.
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement or any equivalent agreement
between us.
Any performance data contained herein was determined in a controlled
environment. Therefore, the results obtained in other operating environments may
vary significantly. Some measurements may have been made on development-level
systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been
estimated through extrapolation. Actual results may vary. Users of this document
should verify the applicable data for their specific environment.
Information concerning non-IBM products was obtained from the suppliers of
those products, their published announcements or other publicly available sources.
IBM has not tested those products and cannot confirm the accuracy of
performance, compatibility or any other claims related to non-IBM products.
Questions on the capabilities of non-IBM products should be addressed to the
suppliers of those products.
All statements regarding IBM's future direction or intent are subject to change or
withdrawal without notice, and represent goals and objectives only.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which
illustrate programming techniques on various operating platforms. You may copy,
modify, and distribute these sample programs in any form without payment to
IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating
platform for which the sample programs are written. These examples have not
been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or
imply reliability, serviceability, or function of these programs. The sample
programs are provided "AS IS", without warranty of any kind. IBM shall not be
liable for any damages arising out of your use of the sample programs.
Each copy or any portion of these sample programs or any derivative work, must
include a copyright notice as follows:
Portions of this code are derived from IBM Corp. Sample Programs.
Copyright IBM Corp. 2013, 2015. All rights reserved.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of
International Business Machines Corp., registered in many jurisdictions worldwide.
Other product and service names might be trademarks of IBM or other companies.
A current list of IBM trademarks is available on the web at www.ibm.com/legal/
copytrade.shtml.
Applicability
These terms and conditions are in addition to any terms of use for the IBM
website.
Personal use
You may reproduce these publications for your personal, noncommercial use
provided that all proprietary notices are preserved. You may not distribute, display
or make derivative work of these publications, or any portion thereof, without the
express consent of IBM.
Commercial use
You may reproduce, distribute and display these publications solely within your
enterprise provided that all proprietary notices are preserved. You may not make
derivative works of these publications, or reproduce, distribute or display these
publications or any portion thereof outside your enterprise, without the express
consent of IBM.
Rights
Except as expressly granted in this permission, no other permissions, licenses or
rights are granted, either express or implied, to the publications or any
information, data, software or other intellectual property contained therein.
IBM reserves the right to withdraw the permissions granted herein whenever, in its
discretion, the use of the publications is detrimental to its interest or, as
determined by IBM, the above instructions are not being properly followed.
You may not download, export or re-export this information except in full
compliance with all applicable laws and regulations, including all United States
export laws and regulations.
IBM MAKES NO GUARANTEE ABOUT THE CONTENT OF THESE
PUBLICATIONS. THE PUBLICATIONS ARE PROVIDED "AS-IS" AND WITHOUT
WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING
BUT NOT LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY,
NON-INFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
Glossary
This glossary includes terms and definitions for
IBM Cloud Orchestrator.
The following cross-references are used in this
glossary:
v See refers you from a term to a preferred
synonym, or from an acronym or abbreviation
to the defined full form.
v See also refers you to a related or contrasting
term.
To view glossaries for other IBM products, go to
www.ibm.com/software/globalization/
terminology (opens in new window).
A
account code
A code that uniquely identifies an
individual, billing, or reporting entity
within chargeback and resource
accounting.
account code conversion table
An ASCII text file that contains the
definitions that are required to convert
the identifier values defined by the
account code input field to the
user-defined output account codes.
account report
A report that is used to show account
level information for usage and charge.
audit data
A data record that contains information
about specific types of user activity,
security events, and configuration
changes in the product and in the cloud.
availability zone
A logical group of OpenStack Compute
hosts. It provides a form of physical
isolation and redundancy from other
availability zones, such as by using
separate power supply or network
equipment.
B
Bill program
A program that performs cost extensions
within SmartCloud Cost Management and
summarizes cost and resource utilization
by account code. The Bill program uses
the rate code table that is assigned to the
client to determine the amount to be
charged for each resource consumed.
building block
The model of an image that is created by
combining models of a base operating
system and software bundles. Each
building block contains a semantic and
functional model that describes the
contents of the components, for example,
the installed products, supported
operating systems, prerequisites, and
requirements.
business object
A software entity that represents a
business entity, such as an invoice. A
business object includes persistent and
nonpersistent attributes, actions that can
be performed on the business object, and
rules that the business object is governed
by.
business process
A defined set of business activities that
represent the required steps to achieve a
business objective. A business process
includes the flow and use of information
and resources.
C
chargeback identifier
A label, which is often tied to an
algorithm or set of rules, that is not
guaranteed to be unique, but is used to
identify and distinguish a specific
chargeback item or chargeback entity
from others.
cloud group
A collection of hypervisors from a single
vendor. See availability zone.
compute node
A node that runs a virtual machine
instance, which provides a wide range of
services, such as providing a development
environment or performing analytics.
consolidation process
A process during which the data
collectors process the nightly accounting
and storage files that were created by the
data collection scripts and produce an
output CSR file.
conversion mapping
An entry in a mapping table which allows
you to map identifiers to accounts or
other identifiers.
custom node
A virtual image part that provides an
unconfigured node for a pattern that has
a deployment manager or a control node
as its base.
E
exception file
A file that contains a list of records with
identifier names that do not have a
matching Parameter IdentifierName
attribute value.
exception processing
A process in which the system writes all
records that do not match an entry in the
account code conversion table to an
exception file.
H
human service
An activity in the business process
definition that creates an interactive task
that the process participants can perform
in a web-based user interface.
hypervisor
Software or a physical device that enables
multiple instances of operating systems to
run simultaneously on the same
hardware.
K
kernel The part of an operating system that
contains programs for such tasks as
input/output, management and control of
hardware, and the scheduling of user
tasks.
parameter (parm)
A value or reference passed to a function,
command, or program that serves as
input or controls actions. The value is
supplied by a user or by another program
or process.
service operation
A custom operation that can be run in the
context of the data center. These
operations are typically administrative
operations and are used to automate the
configuration. Service operations can also
be used to enhance the catalog of
available services with extra functionality.
parm
See parameter.
performance counter
A utility that provides a way for software
to monitor and measure processor
performance.
primary key
In a relational database, a key that
uniquely identifies one row of a database
table.
process application
A container in the Process Center
repository for process models and
supporting implementations. A process
application typically includes business
process definitions (BPDs), the services to
handle implementation of activities and
integration with other systems, and any
other items that are required to run the
processes. Each process application can
include one or more tracks.
proration
A process that distributes the overall or
individual resources of an account and
the cost of those resources across multiple
accounts at a specified percentage.
proration table
An ASCII text file that defines the
identifier values and rate codes that are
used in the proration process.
R
rate code
The identifier of a rate that is used to link
a resource unit or volume metric with its
charging characteristics.
rate group
A group of rate codes that is used to
create rate subtotals in reports, graphs,
and spreadsheets.
registry
A repository that contains access and
configuration information for users,
systems, and software.
shared service
A predefined virtual application pattern
that is deployed and shared by multiple
application deployments in the cloud,
including virtual applications, virtual
systems, and virtual appliances.
software bundle
A collection of software installation files,
configuration files, and metadata that can
be deployed on a virtual machine
instance.
T
toolkit
A container where artifacts can be stored
for reuse by process applications or other
toolkits.
V
virtual application
A complete set of platform resources that
fulfill a business need, including web
applications, databases, user registries,
messaging services, and transaction
processes. A virtual application is defined
by a virtual application pattern. See also
virtual application pattern.
virtual application pattern
A pattern that defines the resources that
are required to support virtual
applications, including web applications,
databases, user registries, and more.
These patterns are the deployment unit
for a virtual application. See also virtual
application.
virtual machine (VM)
An instance of a data-processing system
that appears to be at the exclusive
disposal of a single user, but whose
functions are accomplished by sharing the
resources of a physical data-processing
system.
Printed in USA