IBM Redbooks
November 2023
SG24-8551-00
Note: Before using this information and the product it supports, read the information in “Notices” on
page ix.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .x
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi
5.1.2 Using the URI modules to interact with PowerVC API services . . . . . . . . . . . . . 243
5.2 IBM Power Virtual Server (PowerVS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
5.2.1 Using the IBM Cloud collection for PowerVS . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
5.2.2 Using the URI module for PowerVS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
Notices
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation, registered in many jurisdictions worldwide. Other product and service names might be
trademarks of IBM or other companies. A current list of IBM trademarks is available on the web at “Copyright
and trademark information” at https://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
AIX®, DB2®, DS8000®, IBM®, IBM Cloud®, IBM Cloud Pak®, IBM Consulting™, IBM FlashSystem®, IBM Instana™, IBM Security®, IBM Sterling®, IBM Z®, Instana®, Passport Advantage®, POWER®, PowerHA®, PowerVM®, QRadar®, Rational®, Redbooks®, Redbooks (logo)®, SoftLayer®, Sterling™, System z®, SystemMirror®, Turbonomic®, WebSphere®
Intel, Intel Xeon, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks
of Intel Corporation or its subsidiaries in the United States and other countries.
The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive
licensee of Linus Torvalds, owner of the mark on a worldwide basis.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Ansible, OpenShift, and Red Hat are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in
the United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
Preface
This IBM Redbooks publication will help you install, tailor and configure an automation
environment using Ansible in an IBM Power server environment. Ansible is a versatile and
easy to use IT automation platform that makes your applications and systems easier to deploy
and maintain. With Ansible you can automate almost anything: code deployment, network
configuration, server infrastructure deployment, security and patch management, and cloud
management. Ansible is implemented in an easy-to-use, human-readable language (YAML) and
uses SSH to connect to the managed systems, so there are no agents to install on the remote
systems.
Ansible is an Open Source solution that is gaining market share in the automation workspace
as it can help you automate almost anything. This IBM Redbooks publication will show you
how to integrate Ansible to manage all aspects of your IBM Power infrastructure, including
server hardware, the hardware management console, PowerVM, PowerVC, AIX, IBM i, and
Linux on Power. We provide guidance on where to run your Ansible automation controller
nodes, demonstrate how the controller can be installed on any operating system supported on
IBM Power, and show you how to set up your IBM Power infrastructure components so that they
can be managed using Ansible.
This publication is intended for anyone who is interested in automation using Ansible,
whether they are just getting started or are Ansible experts who want to understand how to
integrate IBM Power into their existing environment.
Authors
This book was produced by a team of specialists from around the world in conjunction with
IBM Redbooks.
Tim Simon is an IBM Redbooks® Project Leader in Tulsa, Oklahoma, USA. He has over 40
years of experience with IBM primarily in a technical sales role working with customers to
help them create IBM solutions to solve their business problems. He holds a BS degree in
Math from Towson University in Maryland. He has worked with many IBM products and has
extensive experience creating customer solutions using IBM Power, IBM Storage, and IBM
System z throughout his career. He currently lives in Tulsa, Oklahoma where he enjoys
spending time with his grandchildren.
Jose Martin Abeleira is a Senior Systems and Storage Administrator at DGI (Uruguay Taxes
Collection Agency). A former IBMer, he is a Gold Redbooks Author, Certified Consulting IT
Specialist, and IBM Certified Systems Expert Enterprise Technical Support for IBM AIX and
Linux in Montevideo, Uruguay. He worked with IBM for 8 years and has 18 years of AIX
experience. He holds an Information Systems degree from Universidad Ort Uruguay. His
areas of expertise include IBM Power, AIX, UNIX, and Linux, Live Partition Mobility (LPM),
IBM PowerHA® SystemMirror®, and SAN and storage on the IBM DS line, V7000, Hitachi HUSVM,
and G200/G400/G370/E590. He teaches Systems Administration in the Systems Engineering
program at the Universidad Catolica del Uruguay, and Infrastructure Administration in the
Computer Technologist program created jointly by the Universidad de la República (Uruguay),
Universidad del Trabajo del Uruguay, and Universidad Tecnológica.
Shahid Ali is a Cloud Solution Lead for the MEA region. At the time of this publication, he is based
in Riyadh, Saudi Arabia, leading hybrid multi-cloud solutions in the region. He is an experienced
Enterprise Architect who joined IBM about five years ago and has 28 years of experience as an
architect and consultant. Before joining IBM, he provided consultancy services for some of the
largest projects in Saudi Arabia, in the Ministries of Interior, Education, and Labor and related
organizations. These projects produced nationwide
solutions for fingerprinting, country-wide secure networks, smart ID cards, e-services portals,
enterprise resource planning systems, and massive open online courses platforms. He has
several IBM and industry certifications and is also a member of IBM Academy of Technology.
Vijaybabu Anaimuthu is a Technical Consultant at IBM Systems Experts Labs in India. He holds
a bachelor's degree (BE) in Electrical and Electronics Engineering from Anna University, Chennai.
He has over 15 years of experience working with customers designing and deploying solutions on
IBM Power server and AIX. He focuses on areas such as IT Infrastructure Enterprise Solutions,
technical enablement and implementations relative to IBM Power servers, Enterprise Pools,
performance, and automation. His areas of expertise include capacity planning, migration
planning, system performance and automation.
Sambasiva Andaluri (Sam) is an experienced Developer turned Solution Architect Leader with
over 30 years of experience. For the past decade, he has been a pre-sales and post-sales solution
architect for trading systems at Fidessa, a pre-sales solution architect at AWS and as an SRE
onboarding ISVs for Google marketplace at a partner. He brings multifaceted experience to the
table, is a continuous learner, and is a strong supporter of STEM. In his free time, he coaches K-12
students for FIRST LEGO league competitions, inspiring the young minds to take up STEM
careers.
Marcelo Avalos Del Carpio is a Cloud Architect at Kyndryl Consult in Uruguay with over 9 years
of experience in Information Technology. A former IBM leader, he has specialized in deploying IBM
technical solutions for key accounts across South America and North America. He holds an
Electronic Systems Engineering degree from Escuela Militar de Ingeniería, Bolivia, and a master's
in Project Management from GSPM UCI, Costa Rica. He is certified by The Open Group, and
specializes in IT infrastructure, cloud platforms, and DevOps, drawing from frameworks such as
PMI, ITIL, and TOGAF.
Thomas Baumann is Senior Systems Engineer and Managing Director of ACP IT Consulting
GmbH (formerly: tiri GmbH) in Hamburg, Germany, which is an IBM Business Partner and a Red
Hat Premier Partner. He has over 30 years of experience in computer technology, and is also a
trainer for IBM Software, Ansible Automation, Linux/Cloud, and Security/Threat Management. His
main focus is creating smart and cool solutions which fit the customer needs, always from the
lens of restorability, workability and operability.
Ivaylo Bozhinov is a Power Systems subject matter expert (SME) at IBM Bulgaria. His main area
of expertise is solving complex hardware and software issues on IBM Power Systems products,
IBM AIX, VIOS, HMC, IBM i, PowerVM®, and Linux on Power Systems servers. He has been with
IBM since 2015 providing reactive break-fix, proactive, preventative, and cognitive support.
Carlo Castillo is a Client Services Manager for IBM Power for Right Computer Systems, an IBM
Business Partner and Red Hat partner in the Philippines. He has over 32 years of experience in
pre-sales and post-sales support, designing full IBM infrastructure solutions, creating pre-sales
configurations, performing IBM Power installation, implementation and integration services, and
providing post-sales services and technical support for customers, as well as conducting
presentations at customer engagements and corporate events. He was the very first IBM-certified
AIX Technical Support engineer in the Philippines in 1999. As training coordinator during RCS'
tenure as an IBM Authorized Training Provider from 2007 to 2014, he also administered the IBM
Power Systems curriculum, and conducted IBM training classes covering AIX, PureSystems,
PowerVM, and IBM i. He holds a degree in Computer Data Processing Management from the
Polytechnic University of the Philippines.
Rafael Cezario is a Senior Solutions Engineer at Blue Trust, an IBM business partner in Brazil.
Previously he was an employee of IBM where he worked as a Pre-Sales technical resource on
IBM Power servers. He has 19 years of IT experience having worked on various infrastructure
projects including design, implementation, demonstration, installation and integration of the
solutions. He has worked with a wide variety of existing software on the IBM Power platform
such as PowerVM implementations including SEA and vNIC, PowerVC, PowerSC, OpenShift,
Ansible, and NIM Server to list a few. During his career at IBM, he served as a consultant for
large clients with IBM Power and AIX and performed pre-sales and post-sales activities as well
as doing presentations and demonstrations for clients. He has worked in several areas of
infrastructure during his career and has become certified in several technologies, including
Cisco CCNA, Nutanix NCA, and IBM AIX. He holds a degree in Electrical Engineering with a
specialization in Telecommunications from the Instituto de Ensino Superior de Brasília (IESB).
Stuart Cunliffe is a solution engineer within IBM Technology Expert Labs in the UK,
specializing in IBM Power systems and helping customers extract the most value from their
Power infrastructure. He has worked for IBM since graduating from Leeds Metropolitan
University in 1995, and has held roles in the IBM Demonstration Group, Global Technology
Services (GTS) System Outsourcing, eBusiness hosting, and ITS. A key area of his expertise
is helping customers design and deliver automation across their IBM Power estate with
solutions involving tools such as Red Hat Ansible, HashiCorp Terraform and IBM PowerVC.
Munshi Hafizul Haque is a Senior Platform Consultant at Red Hat in Kuala Lumpur, Malaysia.
Munshi is an experienced technologist in engineering, design, and architecture of PaaS and
cloud infrastructures. At the time of this publication, he is part of the Red Hat Consulting
Services team where he helps organizations adopt automation, container technology and
DevOps practices. Before that, he worked for IBM as a senior consultant with IBM Systems
Lab Services in Petaling Jaya, Malaysia, where he took part in various projects with different
people in different ASEAN countries, and as a specialist in IBM Power Systems and
associated enterprise edition technology.
Subha Hari is a Senior Delivery Consultant from IBM Technology Expert Labs (Sustainability
Software) in Bangalore, India. She has over 19 years of experience, primarily in Performance
Testing of the IBM Sterling™ Order Management suite of applications, Production Performance
Health Checks, sizing, and HA/DR activities. She holds a master's degree (Master of Computer
Applications) from Bharathidasan University, Trichy, India. Subha has led various initiatives on
automation using Ansible, Python, and shell scripting. Her areas of expertise include pre-sales,
performance testing/benchmarking, upgrade and modernization of IBM Sterling suite of
products.
Osman Omer is a Senior IT Managing Consultant based in Qatar. He is in his 20th year at IBM;
he spent the first half of his tenure in Rochester, Minnesota, where he worked as a software
engineer for 8 years before joining Lab Services. His entire IBM career has focused on IBM
systems management, cloud solutions, and automation services. His first project was
porting IBM i to be managed by HMC, then IBM i OS enablement for system management,
tooling, Systems Director, VMControl and PowerVC. As a Lab Services consultant, he enables
IBM customers on the products he used to develop. After transitioning to Qatar, he became an
integral member of the MEA (Middle East and Africa) team that owns cloud and automation
services delivery in the region. He is currently acting as the EMEA Power Services Delivery
Practice Leader in addition to his consulting and leadership responsibilities. Osman holds a
master's degree in Computer Science from South Dakota State University.
Rosana Ramos is a Security Architect at IBM Systems BISO Organization. She holds a
bachelor's degree in Computer Engineering from Universidad de Guadalajara México and a
master’s in Computer Science from Universidad Autonoma de Guadalajara. She has more than
10 years of experience in Linux and UNIX system administration and has specialized in the
implementation of security best practices and system hardening. She is certified as CISSP, CEH,
CRISC and by the Open Group as Master Certified Technical Specialist.
Prashant Sharma is an IBM Power Brand Technical Specialist based in Singapore. He holds a
degree in Information Technology from University of Teesside, England. He has over 12 years of
experience in IT Infrastructure Enterprise Solutioning, pre-sales, client, and partner consultation,
technical enablement and implementations relative to IBM Power servers, IBM i and IBM Storage.
Stephen Tremain has been with IBM for 17 years, and currently works as a Software Engineer
at IBM Security® - Australia Development Lab on the Gold Coast in Queensland, Australia.
Before joining IBM, Stephen worked as a UNIX System Administrator with an investment bank
for 10 years, and also worked in the education and research sectors. Stephen graduated from
the University of New England in Australia, with a BS and a Graduate Diploma in Agricultural
Sciences.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, IBM Redbooks
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Also in this chapter, we discuss IBM Power, IBM’s powerful and robust midrange server
platform. We provide an overview of IBM Power as one of the leading enterprise server
architectures in the market and show how your IBM Power servers can be automated using
the same Ansible tools that you may already be using for other servers, storage devices, and
networking components in your environment.
We also explore how Ansible automation can reshape the IT management landscape
providing an end-to-end automation platform to configure systems, deploy software, and
orchestrate workflows on IBM Power. We delve into provisioning, patch management,
security, configuration, business continuity, disaster recovery, application development, and
much more, showcasing how Ansible and IBM Power harness the combined potential of
cutting-edge technology and a highly-configurable automation platform.
Automation in everyday life has been a staple for many years, for example:
– Automatic dishwashers do our dishes and automatic washers and dryers clean our
clothes.
– Robotic machines do repetitive tasks in manufacturing.
– Machines automatically “call home” when an error is detected.
– Features in your automobile automatically check and report on safety issues and can
even allow the car to drive itself.
Automation is designed to handle simple, repetitive tasks that consume time and energy in everyday
life, making our lives easier. Automation has also become a necessity in today’s
information technology world, as the number of components that need to be managed is
increasing exponentially.
Automation and orchestration are closely related concepts, and at times the line between them can be
blurry. The biggest differentiator is that orchestration takes a set of automated tasks and
groups them together, checking values before and after each task completes, checking the
result of each task (was it successful or not), and adding intelligence to the workflow by
adapting the steps based on the results of each step.
Another way to say it is that automation is a subset of orchestration – you cannot orchestrate
manual, non-automated tasks – however multiple automation tasks strung together are not
orchestration unless they include programmatic control over the process based on the results
of each task.
Automation and orchestration are not meant to replace the role of the system administrator,
but aim to help us in creating more reliable automated tasks, and give us time to focus on
innovation, problem solving, or studying new technologies, instead of day to day manual
tasks.
Puppet is an automated administrative engine for your Linux, UNIX, and Windows systems that
performs administrative tasks (such as adding users, installing packages, and updating
server configurations) based on a centralized specification. It lets you manage and automate
infrastructure and complex workflows with reusable blocks of self-healing infrastructure as
code, and quickly deploy infrastructure to support your evolving business needs at scale
with model-driven and task-based configuration management.
HashiCorp Terraform is an infrastructure as code tool that lets you define both cloud and
on-premises resources in human-readable configuration files that you can version, reuse,
and share. You can then use a consistent workflow to provision and manage all of your
infrastructure throughout its lifecycle. Terraform can manage low-level components like
compute, storage, and networking resources, as well as high-level components like DNS
entries and SaaS features.
In August 2023, HashiCorp announced that future versions of Terraform will be covered
by the Business Source License (BSL), whereas earlier versions were open source
under the Mozilla Public License (MPL) v2.0. As a result of this announcement, the
Linux Foundation announced the formation of OpenTofu, an open source alternative to
Terraform for infrastructure provisioning.
Ansible is an open-source, cross-platform tool for resource provisioning automation that
DevOps professionals popularly use for continuous delivery of software code by taking
advantage of an “infrastructure as code” approach. Over the years, the Ansible automation
platform has evolved to deliver sophisticated automation solutions for operators,
administrators, and IT decision-makers across various technical disciplines. It is a leading
enterprise automation solution with a flourishing open source community and has become a
de facto standard for IT automation. It runs on several UNIX-like platforms, can manage both
UNIX-like and Microsoft Windows systems, and uses a descriptive language to define system
settings.
Because of the broad acceptance of the Ansible platform, its open source design, and its
wide support for many devices and platforms it has started to become a dominant tool in the
market. However, it is also common to use some of the other automation tools in conjunction
with Ansible to do more complex automation – for example many companies use Ansible in
cooperation with Terraform to provide automatic provisioning of their infrastructure.
What is Ansible
Ansible is an open source application designed to manage IT automation. It can configure
systems, deploy software, and orchestrate advanced workflows to support application
deployment, system updates, and more.
Ansible’s main strengths are simplicity and ease of use. It also has a strong focus on security
and reliability, featuring minimal moving parts. It uses OpenSSH for transport (other
transports and connections are supported as alternatives) and uses a human-readable
language that is designed for getting started quickly without a lot of training.
All of the automation tasks are executed on the Ansible controller, which is connected to the
Ansible client hosts – generally using SSH, although other transports can be used. On the
Ansible controller, one or more Ansible collections are installed. These collections consist of
modules, plugins and roles which define the types of actions that Ansible can execute on the
client node. These actions can be executed individually through the Ansible CLI.
Executing actions individually through Ansible is useful, but is not likely to save you time and
effort as you manage a large number of devices. Ansible provides a method to automate a
workflow which consists of multiple sequential tasks through a playbook. An Ansible playbook
is a file that contains a set of instructions that Ansible can use to automate tasks on remote
hosts. Playbooks are written in YAML, a human-readable data format. A playbook
typically consists of one or more plays, each of which is a collection of tasks that run in sequence. Each task is a
single instruction that Ansible can execute, such as installing a package, configuring a
service, or copying a file.
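As a simple illustration of this structure, the following minimal playbook contains one play with two tasks. It is only a sketch: the webservers inventory group, the package name, and the file paths are examples that you would replace with values from your own environment.

---
- name: Configure web servers
  hosts: webservers            # illustrative inventory group
  become: true
  tasks:
    - name: Ensure the Apache HTTP server package is installed
      ansible.builtin.package:
        name: httpd
        state: present

    - name: Copy the site configuration file to the server
      ansible.builtin.copy:
        src: files/site.conf
        dest: /etc/httpd/conf.d/site.conf
        mode: '0644'

Running ansible-playbook site.yml against an inventory that defines the webservers group executes the two tasks, in order, on every host in that group.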
Why Ansible
Ansible is an open-source IT automation engine that may help you save time at work while
simultaneously improving the scalability, consistency, and dependability of your IT
infrastructure. It is designed for IT professionals who require it for application deployment,
system integration, in-service coordination, and anything else that an IT administrator or
network manager routinely accomplishes.
In contrast to more simplistic management tools, Ansible users can leverage Ansible
automation for installing software, automating daily tasks, provisioning infrastructure,
improving security and compliance, and sharing automation across the entire enterprise. It is
highly scalable and easy to use. Ansible enables the rapid configuration of an entire network
of devices without the requirement for programming expertise.
In summary, Ansible is a powerful tool that can be simple to use and install. It provides a
broad range of services – by using Ansible playbooks you can check a system, change a
system, install new features, and anything you can do when connected directly to the
machine. Using Ansible will provide the productivity benefits of automation and allow you to
better manage your ever increasingly complex IT environments – improving the scalability,
consistency, and dependability of the applications and systems that drive your business.
The Ansible project is a collection of open source projects and community contributed
modules, collections and roles, along with even more community contributions through
Ansible Galaxy. You can share and contribute collections, roles and playbooks through
Ansible Galaxy.
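As a brief sketch of how this sharing works in practice, collections and roles from Ansible Galaxy are commonly declared in a requirements.yml file; the content names and the version constraint shown here are examples only.

---
collections:
  - name: ibm.power_aix        # IBM Power AIX collection published on Ansible Galaxy
    version: ">=1.0.0"
  - name: community.general

roles:
  - name: geerlingguy.ntp      # example community role

Commands such as ansible-galaxy collection install -r requirements.yml and ansible-galaxy role install -r requirements.yml then download the listed content to the controller.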
Red Hat offers an enterprise level, fully supported Ansible solution as Red Hat Ansible
Automation Platform. Red Hat Ansible Automation Platform consists of projects from Ansible
such as AWX, Ansible Core, and others. It also includes curated, certified and validated
Ansible collections and roles for partners such as IBM, Juniper, Cisco, and public cloud
providers.
AWX: Free, community-supported open source software that provides a GUI and API for
managing community Ansible.
Red Hat Ansible Automation Platform (RHAAP): A subscription-based enterprise product that
combines more than 20 community projects into a fully supported automation platform for your
enterprise.
The offerings to choose from are Ansible-core, Community Ansible, AWX, and Red Hat Ansible
Automation Platform.
Which method you use to procure Ansible is determined by your business requirements. If
your automation environment is small and not business critical, it would be acceptable to use
the community supported versions. However, if you are supporting business critical
environments, it is important to consider the benefits of a supported enterprise product. You
should consider an enterprise solution if you:
– Require enhanced security.
– Are embarking on an IT transformation initiative.
– Are ready to expand automation to include more people, teams, and use cases.
– Need flexibility to adapt to changing business requirements with proven, innovative
solutions.
– Want to prioritize automation objectives over managing automation infrastructure.
When teams want to scale automation objectives at an organizational level, Ansible Automation
Platform (AAP) is a better choice, given its support for developer tooling and its flexible
deployment options across multiple data centers, clouds, and edge locations. Additionally, AAP
provides guaranteed SLA support for compatibility, upgrades, and security vulnerabilities. You
can also scale your automation spend more efficiently and transparently with an enterprise solution.
Ansible Automation Platform provides a more comprehensive solution for larger organizations
with more complex automation needs.
Table 1-2 compares Community Ansible and AWX with Red Hat Ansible Automation Platform
capability by capability. With the community offerings, teams must effectively manage disparate
community tools and should have the desire to contribute to and understand open source
development models.
In addition, Ansible Automation Platform includes Event-Driven Ansible, which reduces manual
efforts by connecting sources of events with corresponding actions via rules. You design
rulebooks and Event-Driven Ansible recognizes the specified event, matches it with the
appropriate action, and automatically executes it. It helps your teams stay focused on
high-value work.
End-to-end automation
Ansible Automation Platform is the strategic automation solution. It is no longer just an
upstream command line Ansible package with support, and it is not just a graphical user
interface for Ansible. It is a proven enterprise platform used by every Fortune 500 company in
the airline, government, and military sectors to create, manage, and scale automation
strategies.1
Red Hat Ansible Automation Platform is a single automation platform for multiple use cases
as below:
– Hybrid cloud: Automate cloud-native environments and manage infrastructure and
services across public, private, and hybrid clouds with certified integrations.
– Edge: Standardize configuration and deployment across your entire IT landscape, from
datacenter to cloud to edge environments, with a single, consistent automation
platform.
– Networks: Manage entire network and IT processes across physical networks,
software-defined networks, and cloud-based networks, all the way to edge locations.
1 https://www.redhat.com/en/topics/automation/why-choose-red-hat-for-automation
Automation controller
Automation controller is a distributed system, where different software components can be
co-located or deployed across multiple compute nodes. Automation controller is the control
plane for automation, and includes a user interface, browsable API, role-based access
control (RBAC), job scheduling, integrated notifications, graphical inventory management,
CI/CD integrations, and workflow visualizer functions. Manage inventory, launch and schedule
workflows, track changes, and integrate into reporting, all from a centralized user interface
(UI) and RESTful application programming interface (API).
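Because every function of the UI is also exposed through the REST API, jobs can be launched programmatically. The following hedged sketch uses the ansible.builtin.uri module to launch a job template through the controller API; the controller host name, the job template ID, and the credential variables are placeholders, and it assumes that basic authentication is enabled on the controller.

---
- name: Launch a job template through the automation controller API
  hosts: localhost
  gather_facts: false
  tasks:
    - name: POST to the job template launch endpoint
      ansible.builtin.uri:
        url: "https://controller.example.com/api/v2/job_templates/42/launch/"
        method: POST
        user: "{{ controller_user }}"
        password: "{{ controller_password }}"
        force_basic_auth: true
        validate_certs: false        # only acceptable in a lab environment
        status_code: 201
      register: launch_result

    - name: Show the identifier of the job that was started
      ansible.builtin.debug:
        msg: "Launched controller job {{ launch_result.json.job }}"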
In the installer, node types of control, hybrid, execution, and hop are provided as abstractions
to help the user design the topology that is appropriate for their use case. Figure 1-5 on
page 13 shows the anatomy of the automation operation using the automation controller.
2 https://www.ansible.com/blog/peeling-back-the-layers-and-understanding-automation-mesh
Automation hub
Automation hub enables you to discover and use new certified automation content from Red
Hat Ansible and Certified Partners. On Ansible automation hub, you can discover and
manage Ansible Collections, which are supported automation content developed by Red Hat
and its partners for use cases such as cloud automation, network automation, and security
automation.
Private automation hub provides both disconnected and on premises solutions for
synchronizing collections and execution environment images from Red Hat cloud automation
hub. You can also use other sources such as Ansible Galaxy or other container registries to
provide content to your private automation hub. Private automation hubs can integrate into
your enterprise directory and your CI/CD pipelines. Figure 1-6 shows the development cycle
for an automated execution environment.
Execution Environment
An automation execution environment is a container image used to execute Ansible
playbooks and roles. Automation execution environments provide a defined, consistent, and
portable way to build and distribute your automation environment between development and
production. Execution environments give Ansible Automation Platform administrators the
ability to provide and manage the right automation environments that meet the needs of
different teams, such as networking and cloud teams. They also enable automation teams to
define, build, and update their automation environments themselves. Execution environments
provide a common language to communicate automation dependency between automation
developers, architects, and platform administrators. Figure 1-7 on page 14 details the
automation execution environment.
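As a hedged sketch, execution environments are typically described in a small ansible-builder definition file similar to the following; the dependency file names are placeholders for files that you supply, and the exact schema depends on the ansible-builder version you use.

---
version: 1
dependencies:
  galaxy: requirements.yml       # collections to bake into the image
  python: requirements.txt       # Python packages those collections need
  system: bindep.txt             # operating system packages

Running ansible-builder build against this definition produces a container image that can be pushed to a private automation hub and then referenced by automation controller job templates.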
Automation mesh
Automation mesh is an overlay network intended to ease the distribution of work across a
large and dispersed collection of workers through nodes that establish peer-to-peer
connections with each other using existing networks. Automation mesh makes use of unique
node types to create both the control and execution plane. This is shown in Figure 1-8.
Control plane
The control plane consists of hybrid and control nodes.
Hybrid nodes: The default node type for control plane nodes, responsible for automation
controller runtime functions like project updates, management jobs and ansible-runner
task operations. Hybrid nodes are also used for automation execution.
Control nodes: The control nodes run project and inventory updates and system jobs, but
not regular jobs. Execution capabilities are disabled on these nodes.
Execution plane
The execution plane executes automation on behalf of the control plane and has no control
functions. It consists of execution and hop nodes.
Execution nodes: run jobs under ansible-runner with podman isolation. This node type is
similar to isolated nodes. This is the default node type for execution plane nodes.
Hop nodes: Similar to a jump host, hop nodes will route traffic to other execution nodes.
Hop nodes cannot execute automation.
Peer relationships: Define the node-to-node connections between controller and execution
nodes.
More information can be found in the Red Hat Ansible Automation Platform (RHAAP)
documentation.
Event-driven automation helps connect data, analytics, and service requests to automated
actions so that activities, such as responding to an outage or adjusting some aspect of an IT
system, can take place in a single, rapid motion. Automating in an “if-this-then-that”
fashion helps IT teams manage how and when to target specific actions. Figure 1-9 shows a
typical event-driven automation environment.
What is an IT event?
An event refers to any detectable occurrence that has significance for the management of IT
infrastructure or the delivery of an IT service. Events are often identified by third party
monitoring tools, and typically indicate significant occurrences or changes of state in
applications, hardware, software, cloud instances, or other technologies.
3 https://www.redhat.com/en/topics/automation/what-is-event-driven-automation
For example, a system outage can trigger an event that automatically executes a specific
action, such as logging a trouble ticket, gathering facts needed for troubleshooting, or
performing a reboot. Since these actions are predefined and automated, they can be
performed more quickly than if the required steps were done manually.
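The following Event-Driven Ansible rulebook is a hedged sketch of such a rule: it listens on a webhook source from the ansible.eda collection and runs a remediation playbook when an alert reports a service as down. The port, the condition expression, and the playbook name are illustrative and must be adapted to your monitoring tool's payload.

---
- name: Remediate service outages automatically
  hosts: all
  sources:
    - ansible.eda.webhook:       # event source plugin from the ansible.eda collection
        host: 0.0.0.0
        port: 5000
  rules:
    - name: Restart the service when an outage alert arrives
      condition: event.payload.status == "down"
      action:
        run_playbook:
          name: playbooks/restart_service.yml

A rulebook such as this is run by the ansible-rulebook engine, or activated through Event-Driven Ansible in Ansible Automation Platform.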
Event-driven automation can help teams move from a reactive to a proactive approach to IT
management and streamline IT actions with full end-to-end automation. Solutions with
event-handling capabilities extend the use of automation across domains, processes, and
geographies, which advances automation maturity by ensuring operational consistency,
resilience, and efficiency.
Automated tuning and capacity management: Ongoing tuning and capacity management
are necessary for many IT functions, such as managing web applications and monitoring
storage pools. For some teams, tuning occurs thousands or tens of thousands of times per
month, making it time-consuming when done manually. Event-driven automation can
respond to these types of events based on predetermined rules to address things like low
storage capacity and trigger automatic adjustments.
Scaling automation: As with tuning, it can be burdensome to manually scale applications'
storage, processing, and network bandwidth to meet user demand. For example, an
event-driven automation solution can monitor buffer pools, automatically adjusting sizes as
limits are reached.
For more information, see “What is event-driven automation?” in the Red Hat documentation.
Ansible
Ansible is known for its focus on orchestration and configuration management. It excels at
automating tasks that involve the setup, configuration, and management of systems,
applications, and networks. Ansible uses a declarative approach, where you define the
desired state of your systems, and Ansible takes care of bringing them to that state. Ansible
playbooks, written in YAML, encapsulate these declarative configurations and automation
workflows.
Configuration management
Ansible shines when it comes to ensuring consistency across a variety of systems. Its
idempotent nature ensures that tasks are only executed if necessary, reducing the risk of
unintended changes.
Ad-hoc commands
Ansible allows for quick and flexible execution of ad-hoc commands across multiple servers.
This feature is particularly useful for tasks that require immediate attention or investigation.
Terraform
Terraform, on the other hand, is purpose-built for provisioning and managing infrastructure. It
employs a declarative language to define the infrastructure's desired state, creating a clear
separation between the “what” and the “how.” Terraform's configuration files, written in
HashiCorp Configuration Language (HCL), describe the infrastructure resources, their
dependencies, and relationships.
Infrastructure provisioning
Terraform's strength lies in its ability to create and manage infrastructure resources across
various cloud providers and on-premises environments. It excels at managing the lifecycle of
resources, from creation to updates and destruction.
State management
Terraform maintains a state file that records the current state of the infrastructure. This state
file allows Terraform to determine what changes are necessary to reach the desired state and
helps prevent accidental changes.
Complementary roles
Ansible and Terraform are not mutually exclusive; in fact, they often work best when used in
tandem. Terraform excels at setting up the infrastructure, ensuring resources are created and
managed accurately, while Ansible takes charge of configuring and maintaining the systems
running on that infrastructure. This synergistic approach maximizes the strengths of both
tools while minimizing their respective weaknesses. Understanding that the two tools can
work well together, Red Hat has created two certified collections to help you to better
integrate the two tools.
The Terraform Collection for Ansible Automation Platform automates the management and
provisioning of infrastructure as code using the Terraform CLI tool within Ansible playbooks
and Execution Environment runtimes. The Ansible provider for Terraform allows your
Terraform workflows to integrate with your Ansible workflows by collecting the build results to
populate an Ansible inventory for further automation with Ansible.
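As a hedged sketch of the first integration path, the following task calls an existing Terraform project from a playbook by using the terraform module in the cloud.terraform collection. The project path is a placeholder, and the module parameters should be verified against the collection documentation for your release.

---
- name: Apply a Terraform project from Ansible
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Run terraform apply for the network project
      cloud.terraform.terraform:
        project_path: /opt/terraform/network    # placeholder project directory
        state: present
      register: tf_result

    - name: Show the Terraform outputs
      ansible.builtin.debug:
        var: tf_result.outputs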
In conclusion, Ansible and Terraform offer distinct yet complementary features in the world of
Infrastructure as Code. While Ansible excels at orchestration and configuration management,
Terraform focuses on provisioning and managing infrastructure. By understanding their
differences and overlaps, DevOps practitioners can harness the power of both tools to create
robust, automated, and efficient infrastructure management.
1.3.5 Provisioning
Automated infrastructure provisioning is the first step in automating the operational life cycle
of your applications. From traditional servers to the latest serverless or function-as-a-service
environments, Red Hat Ansible Automation Platform (RHAAP) can provision cloud platforms,
virtualized hosts and hypervisors, applications, network devices, and bare-metal servers. It
can then connect these deployed nodes to storage, add them to a load balancer, patch them
for security, or perform any number of other operational tasks executed by separate teams.
Ansible Automation Platform is the single platform in your process pipeline for deploying
infrastructure and connecting it, simplifying the deployment and day-to-day management of
your infrastructure. Consider the following components of your infrastructure that can be
provisioned by Ansible:
Bare metal: Underneath virtualization and cloud platforms is bare metal, and you still
need to provision it depending on the situation. Ansible Automation Platform integrates
with many datacenter management tools to both invoke and enact the provisioning steps
required.
Virtualized: From hypervisors to virtual storage and virtual networks, you can use Ansible
Automation Platform to simplify the experience of cross platform management. The large
selection of available integrations gives you flexibility and choice to manage your diverse
environment.
Networks: Ansible's network automation capabilities allow users to configure, validate,
and ensure continuous compliance for physical network devices. Ansible Automation
Platform can easily provision across multi-vendor environments, often replacing manual
processes.
Storage: Ansible Automation Platform can provision and manage storage in your
infrastructure. Whether it's software-defined storage, cloud-based storage, or even
hardware storage appliances, you can find a module to benefit from Ansible's common,
powerful language.
Public cloud: Ansible Automation Platform is packaged with hundreds of modules
supporting services on the largest public cloud platforms. Compute, storage, and
networking modules allow playbooks to directly provision these services. Ansible can even
act as an orchestrator of other popular provisioning tools.
Private cloud: One of the easiest ways to deploy, configure, and orchestrate an OpenStack
private cloud is by using Ansible Automation Platform. It can be used to provision the
underlying infrastructure, install services and applications, add compute hosts, and more. A
simple provisioning example is shown after this list.
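Because PowerVC exposes OpenStack-compatible APIs, one hedged way to sketch provisioning on IBM Power is with the openstack.cloud collection. The cloud profile, image, flavor, and network names below are placeholders that must exist in your own PowerVC environment; other interfaces, such as the IBM Power collections discussed later in this book, can be used instead.

---
- name: Provision a new AIX partition through PowerVC
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Create the virtual machine
      openstack.cloud.server:
        cloud: powervc           # named cloud profile defined in clouds.yaml
        name: aix-web01
        image: AIX73_base        # placeholder image name
        flavor: medium           # placeholder compute template
        network: prod-net        # placeholder network
        wait: true
      register: new_vm

    - name: Show the identifier of the new partition
      ansible.builtin.debug:
        msg: "Created server with ID {{ new_vm.server.id }}"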
Figure 1-10 shows actions that can be improved by automation; as you can see, it covers the
full lifetime of your IT assets.
Provisioning with Ansible Automation Platform allows you to seamlessly transition into
configuration management, orchestration, and application deployment using the same
simple, human-readable automation language. For more information on provisioning with
Ansible see https://www.redhat.com/en/technologies/management/ansible/provisioning.
Ever wonder how you can apply patches on your systems, restart, and continue working with
minimal downtime? Ansible can also function as a simple management tool to make patch
management easy. Complicated administration tasks that take hours to complete can be
managed easily with Ansible.
While configuration management deals with maintaining the integrity and consistency of your
system’s components, patch management concentrates on updating and applying patches to
these components. Through the use of packaging modules and playbooks, Ansible effectively
minimizes the amount of time required to patch your systems. Whenever you receive alerts
for Common Vulnerabilities and Exposure (CVE) notifications or Information Assurance
Vulnerability Alerts (IAVA), Ansible enables you to act swiftly in response to any potential
dangers to your infrastructure.
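As a minimal sketch of this approach (assuming RHEL-family managed nodes and that the needs-restarting utility from dnf-utils is installed), the following playbook updates all packages and reboots a host only when the update requires it. The inventory group name is an example.

---
- name: Apply operating system patches
  hosts: rhel_servers            # illustrative inventory group of RHEL-family hosts
  become: true
  tasks:
    - name: Update all packages to the latest available level
      ansible.builtin.dnf:
        name: "*"
        state: latest

    - name: Check whether a reboot is required
      ansible.builtin.command: needs-restarting -r
      register: reboot_check
      changed_when: false
      failed_when: reboot_check.rc not in [0, 1]

    - name: Reboot the server only when the check says it is needed
      ansible.builtin.reboot:
      when: reboot_check.rc == 1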
To successfully monitor and manage compliance for your business's infrastructure, you'll
need to:
Assess: Identify systems that are non-compliant, vulnerable, or unpatched.
Organize: Prioritize remediation actions by effort, impact, and issue severity.
Remediate: Quickly and easily patch and reconfigure systems that require action.
Report: Validate that changes were applied and report change results.
These best practices can help you stay abreast of any regulatory changes and keep your
systems compliant:
1. Regular system scans: Daily monitoring can help you identify compliance issues, as well
as security vulnerabilities, before they impact business operations or result in fees or
delays.
2. Deploy automation: As the size of your infrastructure grows and changes, it becomes
more challenging to manage manually. Using automation can streamline common tasks,
improve consistency, and ensure regular monitoring and reporting, which then frees you
up to focus on other aspects of your business.
3. Consistent patching and patch testing: Keeping systems up to date can boost security,
reliability, performance, and compliance. Patches should be applied once a month to keep
pace with important issues, and patching can be automated. Patches for critical bugs and
defects should be applied as soon as possible. Be sure to test patched systems for
acceptance before placing them back into production.
4. Connect your tools: Distributed environments often contain different management tools for
each platform. Integrate these tools via application programming interfaces (APIs). This
allows you to use your preferred interfaces to perform tasks in other tools. Using a smaller
number of interfaces streamlines operations and improves visibility into the security and
compliance status of all systems in your environment.
Some of the security and compliance tools that can help are:
Proactive scanning: Automated scanning can ensure systems are monitored at regular
intervals and alert you to issues without expending much staff time and effort.
Actionable insight: Information that is tailored to your environment can help you more
quickly identify which compliance issues and security vulnerabilities are present, which
systems are affected, and what potential impacts you can expect.
Customizable results: The ability to define business context to reduce false positives, manage
business risk, and provide a more realistic view of your security and compliance status is
ideal.
Prescriptive, prioritized remediation: Prescriptive remediation instructions eliminate the
need to research actions yourself, saving time and reducing the risk of mistakes.
Prioritization of actions based on potential impact and the systems affected helps you make the
most of limited patching windows.
Intuitive reporting: Generating clear, intuitive reports about which systems are patched,
which need patching, and which are non-compliant with security and regulatory policies
increases auditability and helps you gain a better understanding of the status of your
environment.
Selecting the right automation technologies is key for rapid implementation across the data
center and network software systems in hybrid environments.
Red Hat
Red Hat has an end-to-end software stack for automation and management that includes:
– Red Hat Enterprise Linux
– Red Hat Ansible Automation
– Red Hat Satellite
– Red Hat Insights
IBM
IBM offers:
– IBM QRadar®
– IBM PowerSC
– IBM Instana™
– IBM Turbonomic®
An important aspect of configuration management is the assurance that when you run a
playbook, you will get the results you expect: the resulting configuration will match what is
defined in the playbook. Ansible handles this by ensuring that playbooks are idempotent in
nature. Idempotent means that a playbook can be run over and over again with the results
being the same each time it is executed. This ensures that when you run a playbook to change
a configuration parameter, the resulting configuration will match what is defined in the
playbook.
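As a brief illustration (the file path and setting are examples only), the following task declares a desired line in a configuration file. The first run reports a change; every later run finds the system already in the desired state and makes no change.

- name: Ensure remote root login is disabled
  ansible.builtin.lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^PermitRootLogin'
    line: 'PermitRootLogin no'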
Ansible is a valuable tool for enhancing business continuity and disaster recovery (BCDR)
efforts by automating various tasks and processes related to system recovery, data backup,
and infrastructure provisioning. Using Ansible, new servers and instances can be created,
either in the same site for high availability or in another site for disaster recovery, ensuring the
infrastructure is ready when needed. In the event of increased demand during a disaster,
Ansible helps to scale resources dynamically to handle the load and maintain business
operations.
You can apply CI/CD to many components and assets within your organization, including
applications, platforms, infrastructure, networking, and automation code. Automation is at the
core of CI/CD pipelines. By definition, CI/CD pipelines require automation. While it is possible
to manually execute each step in your development workflow, automation maximizes the
value of your CI/CD pipeline. It ensures consistency across development, test, and production
environments and processes, allowing you to build more reliable pipelines.
Ansible automates the major stages of continuous integration, delivery, and deployment
(CI/CD) pipelines – becoming the activating tool of DevOps methodologies. The automation
technology you choose can affect the effectiveness of your pipeline. Ideal automation
technologies include these key features and capabilities:
– Ansible offers a simple solution for deploying applications. It gives you the power to
deploy multi-tier applications reliably and consistently, all from one common framework.
You can configure key services as well as push application files from a single common
system.
– Rather than writing custom code to automate your systems, your team writes simple
task descriptions that even the newest team member can understand – saving not only
up front costs, but making it easier to react to change over time.
Ansible allows you to write playbooks that describe the desired state of your systems, and
then it does the hard work of getting your systems to the desired state. Playbooks make your
installations, upgrades, and day-to-day management repeatable and reliable.
IBM Power is built to be scalable and powerful, while also providing flexible virtualization and
management features. IBM Power supports a wide range of open-source tools, including
Ansible.
Many of the most mission-critical enterprise workloads are run on IBM Power. The core of the
global IT infrastructure, encompassing the financial, retail, government, health care, and
every other sector in between, is comprised of IBM Power systems, which are renowned for
their industry-leading security, reliability, and performance attributes. For enterprise
applications, including databases, application and web servers, ERP, and AI many clients use
IBM Power.
IBM Power is completely prepared for the cloud. Whether you're using
Kubernetes and Red Hat OpenShift to modernize enterprise applications, creating a private
cloud environment within your data center with adaptable pay-as-you-go services, using IBM
Cloud to launch applications as needed, or creating a seamless hybrid management
experience across your multicloud landscape, IBM Power delivers whatever hybrid multicloud
approach you choose.
The modern data center consists of a combination of on-premises and off-premises, multiple
platforms, such as IBM Power, IBM Z®, and x86. The applications range from monolithic to
cloud-native – inherently some combination of bare metal, virtual machines, and containers.
An effective hybrid cloud management solution must account for all of these factors. IBM and
Red Hat are uniquely positioned to best accommodate the applications that you’re running
today and the modernized applications of tomorrow, wherever they are.
IBM Power delivers one of the highest availability ratings among servers4
IBM Power delivers 99.999% availability, giving 25% less downtime than comparable
offerings, due to built-in recovery and self-healing functions for redundant components.
Organizations are also able to switch from an earlier Power server to the current generation
while applications continue to run, giving you high availability and minimal downtime when
migrating.
IBM Power is consistently rated as one of the most secure systems in the
market5
For the fourth straight year, IBM Power has been rated as one of the most secure systems in
2022, with only 2.7 minutes or less of unplanned outages due to security issues. This puts
IBM Power:
2x more secure than comparable HPE Superdome servers,
6x compared to Cisco UCS servers,
16x compared to Dell PowerEdge servers,
20x compared to Oracle x86 servers,
and up to more than 60x compared to unbranded white box servers.
Security breaches were also detected immediately or within the first 10 minutes in 95% of the
IBM Power systems that were surveyed. This results in better chances that a business will
suffer little to no downtime, nor will they be susceptible to damaged, compromised, or stolen
data.
Current IBM Power systems can range from scale-out servers that start with 4 cores and
32GB of memory on the IBM Power S1014 to enterprise systems with up to 240 cores and
64TB of memory on the IBM Power E1080.
Note: IBM’s full lineup of server models based on the latest Power processors can be
found here.
The Power processor also provides security within the system itself through Transparent
Memory Encryption, where data is encrypted by cryptography engines located in the
processor core, right where memory is located. This gives you four times the speed of
average encryption.
Power also features reliability, availability, and serviceability (RAS) capabilities such as
advanced recovery, diagnostics, and Open Memory Interface (OMI) attached advanced
memory DIMMs that deliver 2X better reliability and availability than industry standard
DIMMs.
The latest version of Power processors is the Power10, built on a 7nm design that is 50%
faster than its predecessor and 33% more energy efficient.
Power10 chip benefits are the result of important evolutions of many of the components that
were in previous IBM POWER® chips. Several of these important Power10 processor
improvements are listed in Table 1-3.
7 https://www.ibm.com/it-infrastructure/resources/power-performance/e1080/
Figure 1-12 shows the Power10 processor die with several functional units that are labeled.
Sixteen SMT8 processor cores are shown, but the dual-chip module (DCM) with two Power10
processors provides 12, 18, or 24 cores for Power E1050 server configurations.
Figure 1-12 The Power10 processor chip (die photo courtesy of Samsung Foundry)
PowerVM also provides IBM Power other advanced features, including, but not limited to:
Micro-partitioning
Allows a partition/VM to initially occupy as little as 0.05 processing units, or one-twentieth
of a single processor core, and allows adjustments as small as one hundredth (0.01) of a
processor core. This allows tremendous flexibility in the ability to adjust your resources
according to the exact needs of your workload.
Shared Processor Pools
Allows for effective overall utilization of system resources by automatically applying only
the required amount of processor resource needed by each partition. The hypervisor can
automatically and continually adjust the amount of processing capacity allocated to each
partition/VM based on system demand. You can set a shared processor partition so that, if
a VM requires more processing capacity than its assigned number of processing units, the
VM can use unused processing units from the shared processor pool.
Virtual I/O Server (VIOS)
VIOS provides the ability to share storage and network resources across several VMs
simultaneously, thereby avoiding excessive costs by configuring the precise amount of
hardware resources needed by the system.
Live Partition Mobility (LPM)
LPM brings the ability to move running VMs across different physical systems without
disrupting the operating system and applications running within them.
Shared Storage Pools (SSP)
SSP provides distributed storage resources to Virtual I/O Servers in a cluster.
Dynamic LPAR operations (DLPAR)
Introduces the ability to dynamically allocate additional resources, such as available cores
and memory, to a VM without stopping the application.
Performance and Capacity Monitoring
Supports gathering of important statistics to provide an administrator information
regarding physical resource distribution among VMs and continuous monitoring of
resource utilization levels, ensuring they are evenly distributed and optimally used.
Remote Restart
Allows for quick recovery in your environment by allowing you to restart a VM on a different
physical server when an error causes an outage.
Note: For a more comprehensive description of PowerVM and its capabilities, you may
refer to the IBM Redbooks publication Introduction to IBM PowerVM, SG24-8535.
Note: Software maps detailing which operating system versions are supported on which
specific IBM Power server models (including previous generations of IBM Power) can be
found in these IBM Support pages.
8 https://www.sap.com/about/benchmark.html
9 https://www.spec.org/cpu2017/results/
For the 14th straight year, IBM Power delivers the top reliability results, better than any
Intel x86 platform, and only exceeded by the IBM Z. IBM Power also reported fewer data
breaches (one) in the same period compared to x86 platforms.11
As shown by the list of supported operating systems in Table 1-4 on page 29, IBM Power
delivers the ability to run a wide variety of AIX, IBM i, or Linux workloads simultaneously,
giving you flexibility in virtualization that is unmatched by any x86 offering.
Both Power and x86 architectures are established, mature foundations for modern workloads.
But Power stands out for its efficiency, deeply integrated virtualization, highly dependable
availability and reliability, and its unparalleled capability to support enterprise-class
workloads without the massive infrastructure that x86 hardware would require to do the
same work.
The IBM Power collections, developed by IBM, are available at the links shown in Table 1-5.
Note that Ansible support for PowerVC is provided through the Ansible Collection for
OpenStack.
10 https://www.ibm.com/it-infrastructure/resources/power-performance/e1080/#5
11 https://itic-corp.com/itic-2022-global-server-reliability-results/
The Red Hat Automation Hub interface showing these collections is shown in Figure 1-13.
The following sections discuss the benefit of using Ansible across these different
environments, as well as providing detail on the specific collection contents.
Introduction
A major benefit of Linux is that it is open source. The software is unencumbered by licensing
fees and its source code is freely available. A wide variety of Linux distributions are available
for almost every computing platform.
12 Management Insight Technologies. “The State of Linux in the Public Cloud for Enterprises,” February 2018.
https://www.redhat.com/en/resources/state-of-linux-public-cloud-solutions-ebook
Ubuntu
IBM and Canonical have collaborated to make the Ubuntu distribution available on IBM Power.
The availability of Ubuntu strengthens the Linux operating system choices available
on IBM Power technology. Support for Ubuntu is available directly from Canonical.
Red Hat OpenShift Container Platform runs on Red Hat Enterprise Linux CoreOS (RHCOS)
which represents the next generation of single-purpose container operating system
technology by providing the quality standards of Red Hat Enterprise Linux (RHEL) with
automated, remote upgrade features. RHCOS is the only supported operating system for
OpenShift Container Platform control plane (master) nodes. While RHCOS is the default
operating system for all cluster machines, you can create compute (worker) nodes that use
RHEL as their operating system.
Ansible supports Red Hat OpenShift Container Platform as either a control node or a client
node.
Note: Table 1-6 on page 32 just represents a small selection of Ansible modules available.
A much more comprehensive list of Ansible modules and collections is described in this
document.
Ansible accesses Linux hosts in the inventory via SSH from the Ansible controller node, with
authentication via either password, or public and private key pairs. We discuss how to choose
a controller node and install Ansible in Chapter 3, “Getting started with Ansible” on page 101.
No special modules or collections are required to access a Linux host. For example, to
access a Linux host named sles15sp5 from a controller node using a user name and
password, it is as simple as shown in Example 1-1.
Alternatively, to set up password-less authentication using SSH public and private keys, you
can use the ‘ssh-copy-id’ command, as shown in Example 1-2, to copy a preexisting public
key to a Linux host.
$ ssh ansible@sles15sp5
Last login: Mon Aug 7 16:36:31 2023 from 192.168.115.249
ansible@sles15sp5:~>
Once password-less logins to your Linux hosts are configured, access via Ansible becomes
simpler, as shown in Example 1-3.
Your Ansible command can be further simplified by specifying ‘ansible_user’ in either the
inventory or an ansible.cfg file, as illustrated in the sketch that follows.
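A minimal sketch of both options is shown here (the host name sles15sp5 is taken from the earlier example; everything else is illustrative). Note that in ansible.cfg the equivalent setting is named remote_user:

# Inventory file: set ansible_user per host (or per group)
[power-linux]
sles15sp5 ansible_user=ansible

# ansible.cfg: set a default remote user for all hosts
[defaults]
remote_user = ansible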
Login options
A lot of actions you will want Ansible to perform will require superuser, or root access. There
are several options that will allow you to acquire the right authorization for those operations.
Whether the root user may log in over SSH at all is controlled by the PermitRootLogin setting
in the SSH daemon configuration file (/etc/ssh/sshd_config). A value of ‘yes’ will allow root
logins using a password, while a setting of ‘no’ will deny access. The settings
‘prohibit-password’ and ‘without-password’ will allow root logins, but only by SSH key pair,
and not password. The option ‘forced-commands-only’ will allow password-less login by root,
but only to run predefined commands.
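For example, a one-line excerpt from /etc/ssh/sshd_config that permits root logins with SSH keys only (a sketch; adjust to your security policy) would be:

# /etc/ssh/sshd_config (excerpt)
PermitRootLogin prohibit-password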
Your organization’s security policy may not permit direct logins as the root user via SSH. In
this case you will need to log in as a regular user and then use the ‘sudo’ or ‘su’ command to
perform tasks that require superuser access, also called privilege escalation.
Using sudo
To use sudo, the user issuing the sudo command must be listed in the /etc/sudoers file on the
target host. The operating system groups ‘sudo’ or ‘wheel’ are commonly used for this type of
authorization. An example of a /etc/sudoers file allowing the members of the ‘sudo’ group to
run any command as root is show in Example 1-4.
Example 1-4 Sample of /etc/sudoers file allowing root access for members of sudo group
# Allow members of group sudo to execute any command
%sudo ALL=(ALL:ALL) ALL
Using su
The ‘su’ command is another way to gain superuser privileges. In this case the user provides
the root password to obtain a privilege escalation, rather than their own password. The use of
‘sudo’ over ‘su’ is preferred due to greater granularity of control via sudo.
Privilege escalation
To utilize privilege escalation in Ansible, you use the ‘become’ keyword. You can use the
become keyword in a playbook, an ansible.cfg file or on the command line.
Note: Privilege escalation using become is not limited to the root user. You can specify
another user to become with the ‘become_user’ option. For example, you might use
‘become_user: apache’ to perform tasks as the web server owner.
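As an illustration of the playbook form, the following sketch (the host group and commands are hypothetical) runs one task as root through play-level become and another task as the apache user through a task-level become_user:

---
- hosts: power-linux
  become: true
  tasks:
    - name: Run a command as root (play-level become)
      ansible.builtin.command: whoami

    - name: Run a command as the apache user (task-level become_user)
      ansible.builtin.command: whoami
      become_user: apache

On the command line, the equivalent options for the ansible and ansible-playbook commands are --become (or -b) and --become-user.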
Red Hat Enterprise Linux, SUSE Linux Enterprise Server and Ubuntu all use their own
separate package management systems, and in turn have separate Ansible modules to install
packages.
– Red Hat Enterprise Linux: ansible.builtin.yum
– SUSE Linux Enterprise Server: community.general.zypper
– Ubuntu: ansible.builtin.apt
So based on what we have discussed so far in this section, we write our simple playbook to
do the following:
– Execute our playbook on the hosts in the ‘power-linux’ inventory group
– Log in to the Linux hosts as the user ‘ansible’
– Use ‘sudo’ to become the ‘root’ user
– Install the ‘tcpdump’ package using the generic ‘ansible.builtin.package’ module
The package module ensures that the package is installed if it is not already present, by
using the ‘state’ keyword with the value of ‘present’.
---
- hosts: power-linux
  remote_user: ansible
  become: true
  tasks:
    - name: install tcpdump package
      ansible.builtin.package:
        name: tcpdump
        state: present
Our inventory file contains the host definitions shown in Example 1-6.
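A minimal sketch of such a group, consistent with the host names that appear in the play recap later in this section (addresses and connection variables omitted), might be:

[power-linux]
rhel7ppc64
rhel8ppc64
rhel9ppc64
sles15ppc64
ubuntu20ppc64
ubuntu22ppc64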
PLAY [power-linux]
*******************************************************************************************
*************************************
changed: [ubuntu22ppc64]
ok: [rhel7ppc64]
ok: [rhel9ppc64]
changed: [rhel8ppc64]
changed: [sles15ppc64]
PLAY RECAP
*******************************************************************************************
*************************************
rhel7ppc64 : ok=2 changed=0 unreachable=0 failed=0 skipped=0
rescued=0 ignored=0
rhel8ppc64 : ok=2 changed=1 unreachable=0 failed=0 skipped=0
rescued=0 ignored=0
rhel9ppc64 : ok=2 changed=0 unreachable=0 failed=0 skipped=0
rescued=0 ignored=0
sles15ppc64 : ok=2 changed=1 unreachable=0 failed=0 skipped=0
rescued=0 ignored=0
ubuntu20ppc64 : ok=2 changed=0 unreachable=0 failed=0 skipped=0
rescued=0 ignored=0
ubuntu22ppc64 : ok=2 changed=1 unreachable=0 failed=0 skipped=0
rescued=0 ignored=0
The output from the command, as seen in Example 1-8 on page 36, shows that only the target
hosts rhel8ppc64, sles15ppc64, and ubuntu22ppc64 have a status value of ‘changed’, and
therefore were the only hosts that required the tcpdump package to be installed.
We can further simplify our command-line by putting options in an ansible.cfg file, rather
than specifying them each time we run a playbook.
An example ansible.cfg file you may use when running a playbook as a non-root user, and
prompting for user and sudo password is shown in Example 1-9.
Example 1-9 Sample ansible.cfg file for playbook using become for privilege escalation
[defaults]
inventory = inventory
remote_user = ansible
ask_pass = true
[privilege_escalation]
become = true
become_method = sudo
become_user = root
become_ask_pass = true
An example ansible.cfg file that you may use when running a playbook to use SSH keys to
login into target hosts directly as root is shown in Example 1-10.
Example 1-10 Sample ansible.cfg file for password-less logins as the root user
[defaults]
inventory = inventory
remote_user = root
ask_pass = false
Updating packages
A common use for Ansible playbooks is to ensure compliance by keeping packages updated
with the latest security patches.
Example 1-11 Using generic package module to update all packages to latest
- name: Update all packages to latest available
ansible.builtin.package:
name: '*'
state: latest
The generic package module may be fine for simple playbooks, but there are times when you
require more fine-grained control of the package updates. The next section introduces the
ansible.builtin.dnf module, which provides additional capabilities.
For example, you may wish to upgrade a list of packages, but only if they are already installed.
The task code to do this would look something like Example 1-12.
Example 1-12 Upgrade a list of packages, but do not install them if not present
- name: Update required packages
dnf:
name:
- firefox
- curl
- python3
update_only: yes
state: latest
Full documentation for the dnf module can be found in the dnf collection description.
Once the community.general collection is installed, the zypper module options are similar to
those of the dnf module. Full documentation for the zypper module can be found here.
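For instance, a minimal sketch of a task that updates all installed packages on a SUSE host with the zypper module (assuming the community.general collection is available on the controller) is:

- name: Update all packages to the latest available version with zypper
  community.general.zypper:
    name: '*'
    state: latest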
The full documentation for the apt module can be found here.
For example, the Ubuntu packages intel-microcode and amd64-microcode are not present on
ppc64 platforms. If you wanted to explicitly update these packages to the latest version in
your playbook, you could use the tasks in Example 1-14 to update the packages on x86_64
platforms, but not generate an error on ppc64 platforms.
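A sketch of the idea, using a when condition on the gathered architecture fact together with the apt module's only_upgrade option (this is an illustration, not the book's exact Example 1-14), might look like the following:

- name: Update microcode packages on x86_64 hosts only
  ansible.builtin.apt:
    name:
      - intel-microcode
      - amd64-microcode
    state: latest
    only_upgrade: true
  when: ansible_architecture == "x86_64"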
Conversely, packages from the Linux on IBM Power repository will not be present on other
architectures. For more information see IBM Linux on Power Tools.
Ansible accesses AIX hosts in the inventory via SSH, with authentication via either password,
or public and private key pairs.
All nodes need to be enabled to run open-source packages and have Python 3 installed.
Beginning with AIX 7.3, Python 3 is automatically preinstalled with the operating system.
The controller node must also have Ansible version 2.9 or higher installed.
All of the packages can be downloaded from the AIX Toolbox for Linux Applications. It is
recommended to use the dnf package manager.
No special modules or collections are required to access an AIX host. For example, to access
an AIX host named aix7.3, using a username and password, it is as simple as shown in
Example 1-15.
Alternatively, to set up password-less authentication using SSH public and private keys, you
can use the ‘ssh-copy-id’ command, as shown in Example 1-2 on page 33, to copy a
preexisting public key to an AIX host.
Once the key exchange is done, you can include the private key in your controller node’s
inventory file and execute the command directly, without the need to use a password, as
shown in Example 1-16.
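A sketch of such an inventory entry (the group name and key path are illustrative; the host name is the one used above) might be:

[aix]
aix7.3 ansible_user=root ansible_ssh_private_key_file=~/.ssh/id_rsa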
Ansible for IBM i bridges existing gaps by providing a versatile solution that aligns with
modern requirements. It equips businesses with tools to efficiently automate tasks and
manage cloud workload migration. As organizations seek to extract optimum value from their
IBM i environments, Ansible offers a practical approach, fostering innovation and adaptability
in a dynamic digital landscape.
The acquisition of Red Hat by IBM has sparked an interest in Ansible as a tool for automating
IBM i processes and tasks. As the adoption of Ansible gains momentum, organizations are
seeking IT professionals with expertise in both open-source practices and Ansible's
capabilities. This intersection creates opportunities for the evolution of the IBM i platform,
unlocking avenues for innovation and adaptability.
June 2020
In this initial phase, Ansible for IBM i set its foundation by focusing on core operational
aspects:
PTF and LPP Management: Ansible brought the ability to manage Program Temporary
Fixes (PTFs) and Licensed Program Products (LPPs) efficiently through automation.
Open-Source Package Management: The roadmap introduced support for managing
open-source packages, streamlining installation and updates.
Object Management: Automation was extended to manage objects on the IBM i platform,
promoting a more streamlined and consistent approach.
PASE Support: Ansible embraced the Portable Application Solutions Environment (PASE),
enabling more versatile scripting and automation.
Work Management Runtime: Ansible started facilitating work management runtime
operations, enhancing control over system resources.
Device Management: Automation was expanded to include device management, ensuring
smoother handling of hardware resources.
IASP Support: The introduction of support for Independent Auxiliary Storage Pools
(IASPs) contributed to enhanced data storage capabilities.
September 2020
Building on the foundational elements, Ansible for IBM i continued to evolve with a focus on
broader capabilities:
Advanced Fix Management: Ansible extended its fix management capabilities, allowing
more advanced and targeted fixes.
Basic network configuration: Automation now covered basic network configuration tasks,
ensuring network settings were managed effectively.
Work management: Work management capabilities were further refined, enhancing
resource allocation and workload management.
Security management: Ansible incorporated security management features, contributing
to robust security practices across the platform.
Message handling: Automation was introduced to handle message handling tasks,
ensuring timely and efficient communication.
2021
As Ansible for IBM i matured, it began addressing more strategic aspects:
Solution and product configuration: The focus shifted towards solution and product
configuration, enabling more comprehensive setup and customization.
SQL services bundles: Ansible introduced bundles for SQL services, streamlining
database-related operations.
System health bundles: The roadmap incorporated system health bundles, contributing to
proactive system monitoring and maintenance.
2022
As the landscape continued to evolve, Ansible for IBM i looked ahead:
Manage in Hybrid Cloud: Ansible aimed to offer seamless management capabilities in
hybrid cloud environments, ensuring consistency across on-premises and cloud
deployments.
Application Management in Cloud: The focus expanded to include application
management in cloud environments, enabling efficient deployment and maintenance.
Enhance Existing Functions: Ansible continued to enhance its existing functions, refining
automation processes and expanding its capabilities.
Upcoming releases
While specific details about this release are not provided, it is indicated that more use cases
and functionalities will be integrated, further enriching Ansible's offerings for IBM i users.
Note: The integration of Red Hat Ansible Automation with IBM Power Systems reflects a
commitment to the automation processes, improving operational agility, and enabling
enterprises to confidently manage their IBM i workloads. This collaboration provides a
powerful tool set for organizations seeking to navigate the complexities of enterprise
automation with consistency, transparency, and advanced skills development. For more
details and insights into this integration, further information can be accessed through the
provided links:
Red Hat Ansible Integration with IBM Power Systems
Red Hat Ansible Automation Hub for IBM Power Systems.
Note: To explore additional Ansible roles tailored for IBM i, you can find a comprehensive
collection. This repository hosts a variety of roles designed to facilitate IBM i-specific tasks,
providing a valuable resource for enhancing your automation capabilities on the platform.
Note: Finding these plugins is straightforward. You can explore them through the following
links:
Ansible Galaxy - IBM Power i Plugins.
GitHub repository for Ansible IBM i Plugins.
In Table 1-13, you find a selection of Ansible plugins specifically designed for IBM i. These
plugins cover various functionalities, such as copying files, interacting with IBM DB2® on IBM
i, utility functions, fetching data, and rebooting operations.
Note: To explore practical examples, IBM i users can refer to the samples available here
In the context of IBM i, your inventory file can define various groups such as “IBM Power
Virtualization Center” and “IBM i systems”. This provides a clear structure for managing and
orchestrating tasks across different systems. Each group is associated with specific
attributes, including connection details and authentication credentials.
Example 1-17 shows an inventory setup where you have two groups: “powervc_servers” and
“IBM i”. The “powervc_servers” group includes details about the Power Virtualization Center
servers, specifying the SSH host, username, password, and Python interpreter. The “IBM i”
group, on the other hand, outlines the connection information for your IBM i system.
Example 1-17 Sample Ansible inventory configuration for IBM i and PowerVC
[powervc_servers]
powervc ansible_ssh_host=your_powervc_ip ansible_ssh_user=your_user
ansible_ssh_pass=your_password
ansible_python_interpreter="python3"
[ibmi]
source ansible_ssh_host=your_source_ibmi_ip
ansible_ssh_user=your_user ansible_ssh_pass=your_password
By using this approach, you can efficiently manage and automate tasks across a diverse set
of systems within your infrastructure. This ensures that Ansible can interact with the specified
hosts, simplifying the process of configuration management and orchestration.
Power HMC Ansible content helps administrators include Power HMC as part of their
automation strategies through the Ansible ecosystem. Using Power HMC Ansible content in
IT automation helps maintain a consistent and convenient management interface for multiple
Power HMCs and Power servers.
Power HMC Ansible content modules can be leveraged for Power HMC patch
management, logical partition management, Power server management, password policy
configuration, and HMC-based Power systems dynamic inventory building.
Table 1-15 Specific Ansible modules for the IBM Power Hardware Management Console (HMC)

Module              Minimum Ansible version   Description
hmc_command_stack   2.9                       Fix for show ldap details in configure ldap action
The ansible-power-hmc collection requires that you be running HMC V10, HMC V9R1 or later,
or HMC V8R8.7.0 or later. It supports Ansible 2.9 or later and requires Python 3.
This is a perfect case for Ansible management as you need to ensure that all of your VIOS
LPARs are maintained and updated as required as new updates come out. In addition, if you
are building a new bare metal environment, it is helpful to be able to ensure that the VIOS
image used in the new server is the level that you want. Starting with VIOS 4.1, VIOS will be
enabled for Ansible out of the box, making your Ansible setup significantly easier.
The power-vios collection provides the functions you need to manage your VIOS environment
such as install a new VIOS, manage the security parameters on your VIOS, backup and
restore VIOS images, and upgrade VIOS code.
Architectural overview
Within the IBM Cloud environment, the IBM Power Virtual Server resides as a distinct entity.
Picture it as a specialized collocation site within one of our IBM SoftLayer® or cloud data
centers. Here, you find a segregated enclosure housing all the POWER equipment. A
significant advantage of IBM Power Virtual Servers is the consistency of their architecture
with on-premises Power Systems. This means that if you are configuring Power Systems
on-premises today, you can rest assured that it is a similar process to set up an IBM Power
Systems Virtual Server. To illustrate the architecture refer to Figure 1-14 on page 54.
For example, redundant Virtual I/O servers and NPIV attached storage, as seen in
on-premises setups, are mirrored in Power virtual server environments. Redundant SAN
fabric, Hardware Management Console (HMC), and PowerVC configurations are all
consistent between on-premises and cloud deployments.
Infrastructure as a Service
Transitioning POWER resources to a cloud data center is not as straightforward as a simple
relocation. This is where the concept of a collocation (COLO) site comes into play. The COLO
serves as the foundation for Infrastructure-as-a-Service (IaaS). In this context, IaaS
encompasses everything beneath the virtual machine layer. This includes the PowerVM
hypervisor, firmware, Virtual I/O server, HMC, PowerVC, network switches, and SAN
switches, all of which are integral parts of the IaaS.
So, how do you manage this environment if you cannot directly access PowerVC or the SAN
switches? You interface with a service layer that implements the open service broker
framework. This framework is a standard across various cloud portals, such as GCP and
Azure. Essentially, it is a means of managing services in the cloud.
This service layer offers multiple interfaces, including a command line and a REST API. While
the interface may change, you retain the same core capabilities you are accustomed to
on-premises. For instance, if you have scripts that interact with the HMC today, you will need
to adapt them to the IBM Cloud command line. However, the functionality you rely on remains
available.
Specifically, when considering Ansible, it permits us to provision AIX and IBM i instances within
IBM PowerVS. Here is how to go about it.
Configuration parameters
Users have the flexibility to set the following parameters:
pi_name: The name assigned to the Virtual Server Instance.
sys_type: The type of system on which to create the VM (e.g., s922, e880, any).
pi_image: The name of the VM image (users can retrieve available images).
proc_type: The type of processor mode in which the VM will run (shared or dedicated).
processors: The number of vCPUs to assign to the VM (as visible within the guest
operating system).
memory: The amount of memory (in GB) to assign to the VM.
pi_cloud_instance_id: The cloud_instance_id for this account.
ssh_public_key: The value of the SSH public key to be authorized for SSH access.
3. Export your desired IBM Cloud region to the IC_REGION environment variable: export
IC_REGION=<REGION_NAME_HERE>
4. Export your desired IBM Cloud zone to the IC_ZONE environment variable: export
IC_ZONE=<ZONE_NAME_HERE>
Create resources
To create all resources and test public SSH connections to the VM, run the create playbook:
Using Ansible for automation in these application environments can reduce the amount of
time your support staff spends on basic tasks such as installing or modifying your application
environment, and allows you to build a consistent and secure environment for your
application instances, while letting your support staff concentrate on tasks that are more
important and that can drive new business opportunities for your company. Figure 1-15 shows
how Ansible can be used across your infrastructure to install and maintain your applications.
There are many application environments that can be managed by Ansible. We will describe
how Ansible can make your team more productive when managing two of the most common
applications that run on IBM Power systems.
The Oracle Database is known for its ability to manage and process large amounts of data
quickly and efficiently. As a result, Oracle Database is widely used across different industries,
from finance to healthcare, and is trusted by organizations of all sizes to store and manage
critical business data. Oracle Database has established itself as a reliable and versatile
database management system that can meet the complex data needs of modern enterprises.
Its performance capabilities, scalability, security features, and integration with cloud
technologies make it a top choice for organizations.
One of the common platforms for running Oracle databases and applications is IBM Power
running the AIX operating system. With automation becoming the norm in IT operations,
installation and administration of Oracle Database on AIX is no exception. Ansible
collections now exist for installation operations, both on a single-node machine and on
Oracle RAC, in addition to a collection for automating DBA administrative operations.
There is also performance and service degradation, increased security exposure, and poor
end-user experience. This not only affects your ROI, but it also distracts your SAP teams from
more strategic, high-priority projects.
Figure 1-16 illustrates the breadth of function that can be provided by Ansible for automating
your SAP landscape.
Red Hat Ansible Automation Platform eliminates these common obstacles, with an intuitive
interface and ready-to-use, trusted content that is custom-built for SAP migrations. Ansible
can also be integrated with SAP to streamline operations, improve efficiency, and reduce
manual tasks. With Ansible Automation Platform, manual tasks that used to take days can be
done in hours or even minutes. By consolidating on a single, unified platform, your teams can
more easily share automation content and workflows, and scale as your organization evolves
and uncovers new automation use cases. Firstly, let's understand what SAP is. SAP stands
for Systems, Applications, and Products in Data Processing. It is a software suite that covers
various business processes such as finance, logistics, human resources, and more. SAP is
widely used by organizations of all sizes and industries to manage their operations effectively.
SAP installations can be quite complex and require skilled administrators to manage and
maintain them. This usually involves performing various tasks such as system configuration,
installation of patches, managing user accounts, monitoring system performance, and more.
Managing SAP systems can be complex and time-consuming, but with the power of Ansible,
organizations can revolutionize their SAP operations. By automating tasks such as system
configuration, software deployment, monitoring, and scaling, Ansible simplifies SAP
management, enhances efficiency, and drives agility in your SAP landscape. With the Red
Hat Ansible Automation Platform, you can streamline your SAP operations, reduce manual
effort, ensure consistency, and improve overall productivity and reliability of your SAP
environment.
One option for building your environment is to use the automation base package, Ansible
Core (previously called Ansible Engine or Ansible Base), and drive your automation from its
command-line interface (CLI).
Alternatively, the automation environment can be built with Red Hat Ansible Automation
Platform (RHAAP) together with Ansible Core. Automation Platform has a graphical user
interface (GUI) and also extends Ansible functionality with additional management
capabilities. For more details refer to section 1.3.2, “Red Hat Ansible Automation Platform” on
page 10.
You should consider several things before deciding how to set up your automation
environment. These considerations will help you to choose the right Ansible products, as you
define and build the proper architecture that will meet your business requirements. Consider
the following:
Computing resource availability and cost
Understand the number of systems that will be managed and the required availability of
your Ansible management infrastructure. This will impact the number of Ansible systems
required to manage your environment, including the required resources (CPU, Memory,
Disk etc).
Administrative environment
Is a command line interface (CLI) acceptable, or do you need a graphical user interface
(GUI) for ease of use?
Security and compliance
Understand your security and compliance requirements for user authentication and
access control. Do you need to separate automation into multiple management domains
to comply with security and compliance guidelines?
High availability and scalability
Do you need high availability in your automation? If you are monitoring and managing
critical business functions, the answer is probably yes. Be careful to design and build the
automation so that it will scale to manage additional environments or handle growth in the
number of machines and tasks being run.
Once you have considered all of these factors, you can plan and design the automation
environment based on your requirements in these areas. More detail on designing your
Ansible environment, including reference architectures, is shown in section 3.1, “Architecting
your Ansible environment” on page 102.
Controller functions
A controller is the machine or set of machines that run the Ansible tools (ansible-playbook,
ansible, ansible-vault, and others) and automation tasks. Depending on your solution, the
controller will run the Ansible CLI or it can use a graphical user interface. Using a GUI
often simplifies the management of your client inventory, job templates, and workflow
templates as well as simplifying how you launch jobs, schedule workflows, and track and
report changes. The requirements for being an Ansible controller are discussed in section
3.2, “Choosing the Ansible controller node” on page 119.
Client Functions
The machine or set of machines being managed by Ansible. They are also referred
to as 'hosts' or managed nodes and can be servers, network appliances, or other managed
devices. Ansible does not need to be installed on the managed nodes, as the Ansible control
nodes are used to run the automation tasks. The prerequisites for being a managed node are
discussed in section 3.4, “Preparing your systems to be Ansible clients” on page 146.
The communication between Ansible controllers to managed nodes or the target devices can
be different depending on the target devices. For example:
– Linux/Unix hosts use SSH by default,
– Windows hosts use WinRM over HTTP/HTTPS by default.
– Network devices use CLI or XML over SSH.
– Appliance or web-based services could use RESTful API over HTTP/HTTPS.
For Ansible, nearly every YAML file starts with a list. Each item in the list is a list of key/value
pairs, commonly called a “hash” or a “dictionary”. So, we need to know how to write lists and
dictionaries in YAML:
All members of a list are lines beginning at the same indentation level starting with a "- " (a
dash and a space) as shown in Example 2-1.
A dictionary is represented in a simple key: value form (the colon must be followed by a
space) as shown in Example 2-2.
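Combining both forms, a short illustrative YAML fragment (generic values, not the book's numbered examples) looks like this:

# A list: each member starts with "- " at the same indentation level
packages:
  - tcpdump
  - curl
# A dictionary: simple key: value pairs
server:
  name: aixlpar1
  state: running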
In the playbook in Example 2-3, the lines name and hosts are simple key-value pairs.
Whereas the line tasks is a list. A list can contain other objects which are defined with
key-value pairs organized in a hierarchy.
Variables
You can define variables in Ansible to make your playbooks more dynamic and reusable.
Variables can be defined at different levels, including playbook, group, and host variables. In
Example 2-4 the vars section defines a playbook-level variable http_port.
In this playbook http_ports is a variable which is a list of ports. The loop keyword is used to
iterate over the list and generate configuration files for each port.
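A minimal sketch of that pattern (the host group, port list, and template file name are illustrative) could be:

---
- name: Generate one configuration file per HTTP port
  hosts: webservers
  vars:
    http_ports:
      - 8080
      - 8081
  tasks:
    - name: Create a listener configuration file for each port
      ansible.builtin.template:
        src: listen.conf.j2
        dest: "/etc/httpd/conf.d/listen_{{ item }}.conf"
      loop: "{{ http_ports }}"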
2.2.2 Jinja2
Jinja is a modern and widely used template engine for Python. It was developed by Armin
Ronacher, the creator of the Flask web framework, and was first released in 2008. Jinja2 is
used for generating dynamic content, such as HTML, XML, JSON, YAML and other
text-based formats, by incorporating data from various sources into predefined templates.
Jinja2 was created as a successor to the original Jinja template engine. The original Jinja was
inspired by the Django template system but was designed to be more flexible and extensible.
However, it had some limitations, and Jinja2 was developed to address these shortcomings
and provide a more powerful, feature-rich, and robust templating engine.
Jinja2 quickly gained popularity within the Python community due to its simplicity, readability,
and performance. It became the default template engine for Flask, a popular web framework,
which contributed to its widespread adoption.
Jinja2 offers several features and benefits that make it a valuable tool in various contexts:
Template Inheritance: Jinja2 supports template inheritance, allowing you to create a base
template with common structure and blocks. Child templates can
extend the base template and override specific blocks, promoting code
reuse and maintainability.
Variables: You can easily insert variables into templates using double curly
braces {{ ... }}. This enables dynamic content generation by
substituting placeholders with actual data from variables.
Control Structures: Jinja2 provides control structures like if, for, and macros, which allow
you to create conditional logic, loops, and reusable template
fragments.
Filters: Filters allow you to modify variables before displaying them in
templates. Filters can format dates, convert strings to uppercase, sort
lists, and perform various other operations on data.
Extensibility: Jinja2 can be extended with custom filters, tests, and extensions,
making it highly adaptable to specific requirements and use cases.
Jinja templating with YAML makes Ansible a powerful automation platform that enhances
customizability, readability and re-usability of templates.
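As a small illustration of Jinja2 substitution and a filter inside an Ansible task (the variable names and values are illustrative):

- name: Show a Jinja2 substitution and the upper filter
  ansible.builtin.debug:
    msg: "Deploying {{ app_name | upper }} on port {{ http_port }}"
  vars:
    app_name: payroll
    http_port: 8080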
If you want to manage your servers and applications with Ansible, there should be a way to
define a list of them. This list of targets is called an inventory in Ansible. It can be a very
simple static inventory – similar to a list of ingredients you want to buy today in a shop – or it
can be a more complex static inventory with groups and additional variables.
The inventory can also be a dynamic inventory where you get the list from third party provider
like IBM Cloud, PowerVC or Power HMC.
Inventory components
Beyond identifying remote hosts, an inventory can be much more than just a list of servers. It
can also include information about groups, variables, and more. Let's dive into these key
components:
Hosts These represent the individual machines or servers you want to manage with
Ansible. Each host typically has a unique name or IP address associated
with it.
Groups Organizing hosts into groups makes it easier to manage and perform
operations on specific sets of machines. For example, you could have groups
like webservers, database-servers, or even staging and production groups.
Variables An inventory allows you to define variables at both the host and group level.
These variables can be used to customize and parameterize your playbooks.
It helps in making your automation scripts more flexible and reusable.
Ansible inventory plays a crucial role in managing and automating your IT infrastructure. It
helps Ansible identify the target systems, organize them into groups, define variables, and
execute tasks accordingly. The inventory file empowers you to automate tasks based on your
specific needs by providing a clear and structured overview of your environment. Whether you
are a system administrator, a DevOps engineer, or simply curious about automation,
mastering the Ansible inventory is an essential skill to have in your toolkit.
The simplest inventory is a single file with a list of hosts and groups. The default location for
this file is /etc/ansible/hosts but you can specify a different inventory file at the command
line using the -i <path> option, or in the configuration file using inventory.
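A minimal sketch of such a file in INI format (host and group names are illustrative) is:

[webservers]
web01.example.com
web02.example.com

[aix]
aixlpar1 ansible_user=root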
Ansible inventory plugins support a range of formats and sources to make your inventory
flexible and customizable. As your inventory expands, you may need more than a single file to
organize your hosts and groups. Here are three options beyond the /etc/ansible/hosts file:
1. You can create a directory with multiple inventory files. See “Organizing inventory in a
directory” on page 69. These can use different formats (YAML, ini, and so on).
2. You can pull inventory dynamically. For example, you can use a dynamic inventory plugin
to list resources in one or more cloud providers. See 2.3.2, “Overview of dynamic
inventory” on page 70.
3. You can use multiple sources for inventory, including both dynamic inventory and static
files.
You can also combine multiple inventory source types in an inventory directory. This can be
useful for combining static and dynamic hosts and managing them as one inventory. The
inventory directory shown in Example 2-6 combines an inventory plugin source, a dynamic
inventory script, and a file with static hosts.
You can also configure the inventory directory in your ansible.cfg file.
Dynamic inventory is a way of defining the list of servers to manage by using a special inventory
plugin. You can find the list of plugins available to you by using the ansible-doc command as
shown in Example 2-7.
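For instance, all of the inventory plugins known to your installation can be listed with a command such as:

# ansible-doc -t inventory -l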
We usually have several inventory plugins which are delivered with Ansible. They start with
ansible.builtin. In this particular case there are two additional plugins provided by installed
collections:
– ibm.power_hmc.powervm_inventory provided by the ibm.power_hmc collection
– openstack.cloud.openstack provided by openstack.cloud collection.
Just like with static inventories, you can use the ansible-inventory command to test and to
show your inventory. In the case of dynamic inventories, it makes even more sense to test
your configuration before you run your playbook as shown in Example 2-8.
"vio1": {
"ansible_host": "10.17.19.113"
},
"vio2": {
"ansible_host": "10.17.19.114"
},
"linux": {
"ansible_host": "10.17.19.13"
}
}
},
"all": {
"children": [
"ungrouped",
"P10-9080-HEX"
]
},
"P10-9080-HEX": {
"hosts": [
"aixlpar1",
"aixlpar2",
"vio1",
"vio2",
"linux"
]
}
}
From the output in Example 2-8 on page 70 we see that the dynamic inventory defined using
the file hmc1.power_hmc.yml found several AIX, Linux and Virtual I/O Server LPARs.
If we look into the configuration file, we find information on how to connect to the HMC and
filters which determine which systems and LPARs we would like to see in the output. This is
shown in Example 2-9.
Example 2-10 shows a fairly common situation where you want to get a list of LPARs directly
from an HMC. Using powervm_inventory from the ibm.power_hmc collection, you can dynamically
define variables and assign servers to groups.
password: abcd1234
system_filters:
SystemName: 'P10-9080-HEX'
filters:
PartitionState: 'running'
compose:
ansible_host: PartitionName
ansible_python_interpreter: "'/opt/freeware/bin/python3' if 'AIX' in
OperatingSystemVersion or 'VIOS' in OperatingSystemVersion else
'/QOpenSys/pkgs/bin/python3' if 'IBM i' in OperatingSystemVersion else '/usr/bin/python3'"
ansible_user: "'root' if 'AIX' in OperatingSystemVersion or 'Linux' in
OperatingSystemVersion else 'ansible' if 'VIOS' in OperatingSystemVersion else 'qsecofr'"
groups:
AIX: "'AIX' in OperatingSystemVersion"
Linux: "'Linux' in OperatingSystemVersion"
IBMi: "'IBM i' in OperatingSystemVersion"
VIOS: "'VIOS' in OperatingSystemVersion"
In this example we define groups of hosts according to their operating systems. All Virtual I/O
Servers will be assigned to the group VIOS, all IBM i LPARs will be assigned to the group ‘IBM
i’, all Linux LPARs will be assigned to the group Linux and all AIX LPARs will be assigned to the
group AIX.
Similarly, we dynamically define variables for our hosts. We want to connect to AIX and Linux
LPARs as user root and to IBM i LPARs as user qsecofr. That is why we set the variable
ansible_user depending on the LPAR’s operating system. We also change the path to the
Python interpreter based on the LPAR’s operating system.
If you use IBM PowerVC to manage your LPARs, you can use the openstack.cloud.openstack
inventory plugin to get the list of the LPARs in PowerVC. The easiest way to use the inventory
plugin is to set variables from /opt/ibm/powervc/powervcrc and then create an inventory file
with one line in it as shown in Example 2-11.
If you don’t want to set environment variables, you can create a file named clouds.yaml in the
same directory with authentication parameters as shown in Example 2-12. It will be used by
the OpenStack inventory plugin automatically.
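A minimal sketch of such a clouds.yaml file (the URL, project, and credentials are placeholders; handle certificate verification according to your environment) might look like this:

clouds:
  powervc:
    auth:
      auth_url: https://powervc.example.com:5000/v3
      username: admin
      password: your_password
      project_name: ibm-default
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
    verify: false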
You can get more information about the inventory plugins using the ansible-doc command:
# ansible-doc -t inventory ibm.power_hmc.powervm_inventory
# ansible-doc -t inventory openstack.cloud.openstack
It is also possible to extend the current playbook with newly provisioned hosts using dynamic
inventory.
Example 2-13 shows a playbook consisting of 2 plays, one play targets the localhost and the
second play targets the dynamic group "powervms".
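A minimal sketch of this two-play pattern, using ansible.builtin.add_host with illustrative names and addresses, is:

---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Add newly provisioned hosts to the in-memory group powervms
      ansible.builtin.add_host:
        name: "{{ item.ip }}"
        groups: powervms
        ansible_connection: local
      loop:
        - { ip: 2.4.5.6, name: vm02 }
        - { ip: 3.4.5.6, name: vm03 }

- hosts: powervms
  tasks:
    - name: gather_facts
      ansible.builtin.setup:
        gather_subset: min
    - ansible.builtin.debug:
        msg: "{{ ansible_hostname }} {{ ansible_default_ipv4.address }}"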
Note that for documentation reasons we set ansible_connection: local; in production you may
omit this line if you wish. The output from Example 2-13 is shown in Example 2-14.
PLAY [localhost]
*******************************************************************************************
**
changed: [localhost] => (item={'ip': '2.4.5.6', 'name': 'vm02'}) => {"add_host": {"groups":
["powervms"], "host_name": "2.4.5.6", "host_vars": {"ansible_connection": "local"}},
"ansible_loop_var": "item", "changed": true, "item": {"ip": "2.4.5.6", "name": "vm02"}}
changed: [localhost] => (item={'ip': '3.4.5.6', 'name': 'vm03'}) => {"add_host": {"groups":
["powervms"], "host_name": "3.4.5.6", "host_vars": {"ansible_connection": "local"}},
"ansible_loop_var": "item", "changed": true, "item": {"ip": "3.4.5.6", "name": "vm03"}}
PLAY [powervms]
*******************************************************************************************
***
TASK [gather_facts]
******************************************************************************************
Saturday 30 September 2023 14:02:13 +0200 (0:00:05.427) 0:00:05.544 ****
Saturday 30 September 2023 14:02:13 +0200 (0:00:05.427) 0:00:05.543 ****
ok: [2.4.5.6]
ok: [3.4.5.6]
ok: [1.2.3.4]
TASK [debug]
*******************************************************************************************
******
Saturday 30 September 2023 14:02:14 +0200 (0:00:00.805) 0:00:06.349 ****
Saturday 30 September 2023 14:02:14 +0200 (0:00:00.805) 0:00:06.348 ****
ok: [1.2.3.4] => {
"msg": "vm9810 192.168.98.10"
}
ok: [2.4.5.6] => {
"msg": "vm9810 192.168.98.10"
}
ok: [3.4.5.6] => {
"msg": "vm9810 192.168.98.10"
}
PLAY RECAP
*******************************************************************************************
********
1.2.3.4 : ok=3 changed=0 unreachable=0 failed=0 skipped=0
rescued=0 ignored=0
2.4.5.6 : ok=3 changed=0 unreachable=0 failed=0 skipped=0
rescued=0 ignored=0
3.4.5.6 : ok=3 changed=0 unreachable=0 failed=0 skipped=0
rescued=0 ignored=0
localhost : ok=1 changed=1 unreachable=0 failed=0 skipped=0
rescued=0 ignored=0
If you want to make changes to a local inventory file instead of using an in-memory inventory,
you can use the lineinfile or template module to populate the inventory file. To make Ansible
use the most current file, there is a meta directive that will reread and reload the inventory.
See Example 2-15 for a playbook that uses these two actions.
tasks:
- name: create dynamic inventory
local_action:
module: lineinfile
path: inventories/powervms.ini
regexp: ^{{ item.name }}
insertafter: "[powervms]"
line: "{{ item.name }} ansible_host={{ item.ip }} ansible_connection=local"
loop:
- ip: 1.2.3.4
name: vm01
- ip: 2.4.5.6
name: vm02
- ip: 3.4.5.6
name: vm03
- meta: refresh_inventory
- hosts: powervms
tasks:
- name: wait for reachability
wait_for_connection:
delay: 5
timeout: 240
- name: gather_facts
setup:
gather_subset: min
- debug:
msg: "{{ ansible_hostname }} {{ansible_default_ipv4.address}}"
Example 2-16 shows the output from the playbook in Example 2-15.
PLAY [localhost]
*******************************************************************************************
**
TASK [meta]
*******************************************************************************************
*******
Saturday 30 September 2023 14:14:36 +0200 (0:00:00.709) 0:00:00.750 ****
Saturday 30 September 2023 14:14:36 +0200 (0:00:00.709) 0:00:00.750 ****
PLAY [powervms]
*******************************************************************************************
***
ok: [vm02]
ok: [vm01]
TASK [gather_facts]
******************************************************************************************
Saturday 30 September 2023 14:14:43 +0200 (0:00:05.398) 0:00:07.577 ****
Saturday 30 September 2023 14:14:43 +0200 (0:00:05.398) 0:00:07.576 ****
ok: [vm03]
ok: [vm02]
ok: [vm01]
TASK [debug]
*******************************************************************************************
******
Saturday 30 September 2023 14:14:44 +0200 (0:00:00.541) 0:00:08.118 ****
Saturday 30 September 2023 14:14:44 +0200 (0:00:00.541) 0:00:08.117 ****
ok: [vm01] => {
"msg": "vm9810 192.168.98.10"
}
ok: [vm02] => {
"msg": "vm9810 192.168.98.10"
}
ok: [vm03] => {
"msg": "vm9810 192.168.98.10"
}
PLAY RECAP
*******************************************************************************************
********
localhost : ok=1 changed=1 unreachable=0 failed=0 skipped=0
rescued=0 ignored=0
vm01 : ok=4 changed=0 unreachable=0 failed=0 skipped=0
rescued=0 ignored=0
vm02 : ok=4 changed=0 unreachable=0 failed=0 skipped=0
rescued=0 ignored=0
vm03 : ok=4 changed=0 unreachable=0 failed=0 skipped=0
rescued=0 ignored=0
Each task leverages Ansible modules to achieve expected outcomes. A module is a reusable,
standalone script that Ansible runs, either locally or remotely. Modules interact with the local
machine, an API, or a remote system to perform specific tasks like changing a database
password or spinning up a cloud instance. We will discuss each of these in this section.
Consider installing an add-on for your text editor to help you write clean YAML syntax in your
playbooks. If your preferred editor is vi (or vim), then you may add these lines to ~/.vimrc so
that pressing the TAB key inserts two spaces instead of a literal TAB character:
autocmd FileType yaml setlocal ts=2 sts=2 sw=2 expandtab
autocmd FileType yml setlocal ts=2 sts=2 sw=2 expandtab
A playbook is what drives Ansible automation. The following concepts are important to
understand when building playbooks for your environment:
Playbook - A text file containing a list of one or more plays to run in a specific order, from
top to bottom, to achieve an overall goal.
Play - An ordered list of tasks that maps to managed nodes in an inventory. This is the top
level specification for a group of tasks. Defined in the play are the hosts it will execute on
(the inventory) and control behaviors such as fact gathering or privilege level. Multiple
plays can exist within a single Ansible playbook and may execute on different hosts.
Task - The application of a module to perform a specific unit of work. A play combines a
sequence of tasks to be applied, in order, to one or more hosts selected from your
inventory.
Module - Parametrized components or programs with internal logic, representing a single
step to be done on the target machine. The modules “do” things in Ansible.
Plugins - Pieces of code that augment Ansible’s core functionality. These are often
provided by a manufacturer for their specific devices. Ansible uses a plugin architecture to
enable a rich, flexible, and expandable feature set.
The playbook uses indentation with space characters to indicate the structure of its data.
YAML does not place strict requirements on how many spaces are used for the indentation,
but there are two basic rules:
Data elements at the same level in the hierarchy (such as items in the same list) must
have the same indentation.
Items that are children of another item must be indented more than their parents
Note: Only the space character can be used for indentation. TAB characters are not
allowed.
Start of playbook
A playbook starts with a line consisting of three dashes (---) as a starting document marker
and may end with three dots (...) as an end-of-document marker (the ... is optional and in
practice is often omitted).
Between these markers, the playbook contains a list of plays. Each item in a YAML list
starts with a single dash followed by a space. Example 2-17 shows an example playbook
which is designed to capture the oslevel from the system.
tasks:
- name: Gather LPP Facts
shell: "oslevel -s"
register: output_oslevel
Order in plays
Please note the order in plays is always:
– pre_tasks:
– roles:
– tasks:
– handlers:
You may change the order using include_roles, import_roles, include_tasks, or import_tasks.
You can also use the directive tasks_from: while including tasks.
Good practice is to use import when you deal with logical “units”. For example, separate a long
list of tasks into subtask files that are imported from a main.yml. The include keyword is used
to make decisions based on dynamically gathered facts as shown here:
- include_tasks: taskrun_{{ ansible_os_family | lower }}.yml
For more detail on using import or include see “Including and importing other playbooks” on
page 81.
Verifying playbooks
You may want to verify your playbooks to catch syntax errors and other problems before you
run them. The ansible-playbook command offers several options for verification, including
--check, --diff, --list-hosts, --list-tasks, and --syntax-check. Use the command shown in
Example 2-18 on page 80 to check the playbook for syntax errors using --syntax-check.
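For instance, assuming a playbook file named site.yml (an illustrative name), a syntax check can be run as follows:

$ ansible-playbook site.yml --syntax-check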
Ansible tasks
In the realm of IT automation and configuration management, Ansible shines as a powerful
tool that allows you to define and execute tasks across a multitude of systems. Ansible tasks
form the core building blocks of automation, enabling you to specify the desired state of your
infrastructure and applications in a declarative manner. Figure 2-2 shows how tasks relate to
rest of the tooling within Ansible ecosystem.
Tasks are the fundamental building blocks in Ansible, and there is a plethora of plugins available.
Plugins are further divided into modules and are also grouped together as collections. The
plugins are of different categories, and Ansible categorizes the plugins based on what they do.
For example, the become plugin allows you to run a task as a superuser or as a specific
user.
The plugin ecosystem is vast and includes Ansible's own built-in plugins, as well as
community-contributed plugins and vendor plugins from companies such as Cisco, IBM, and
AWS. This Ansible document provides a list of plugins and their modules.
Writing tasks
Each task in Ansible consists of three components, which are defined in YAML format.
The task starts with a name – which is free-form text – to describe what is being done. It is
best practice to provide a good description that matches the task.
You then call a module, which is referenced in the format namespace.collection.module and
is followed by its arguments or parameters.
In Example 2-19 on page 81, you see get_url, which is a builtin module in the ansible
namespace. For Ansible builtin modules, you may omit the namespace (ansible) and collection
(builtin). The get_url task also shows the parameters that you need to pass, such as url, dest,
mode, and validate_certs.
Example 2-19 Example playbook for IBM AIX that downloads and configures yum package manager,
and then installs MariaDB
---
- name: Install MariaDB open source relational database
hosts: ansible-vms
tasks:
- name: Download 'yum.sh' script
get_url:
url: https://public.dhe.ibm.com/aix/freeSoftware/aixtoolbox/ezinstall/ppc/yum.sh
dest: /tmp/yum.sh
mode: 0755
validate_certs: False
It is a good practice to include only the tasks that apply to a specific target OS or target
hardware. To get better readability of playbooks, you may use the pattern shown in Example 2-21.
Variables
Ansible uses variables to manage differences between systems. With Ansible, you can execute
tasks and playbooks on multiple different systems with a single command. To represent the
variations among those different systems, you can create variables with standard YAML syntax,
including lists and dictionaries. You can define these variables in your playbooks, in your
inventory, in re-usable files or roles, or at the command line. You can also create variables
during a playbook run by registering the return value or values of a task as a new variable.
Not all strings are valid Ansible variable names. A variable name can only include letters, numbers,
and underscores. Python keywords or playbook keywords are not valid variable names. A variable
name cannot begin with a number.
Variable names can begin with an underscore. In many programming languages, variables that
begin with an underscore are private. This is not true in Ansible. Variables that begin with an
underscore are treated exactly the same as any other variable. Do not rely on this convention for
privacy or security. Figure 2-3 gives examples of valid and invalid variable names.
The vars_nginx.yml file would look like Example 2-22 on page 82.
Referencing variables
After you define a variable, use Jinja2 syntax to reference it. Jinja2 variables use double curly
braces. For example, the expression My amp goes to {{ max_amp_value }} demonstrates the
most basic form of variable substitution. You can use Jinja2 syntax in playbooks as seen in
Example 2-24.
When you want to display a debug message with a variable, then you would use a double-quoted
string with the variable name embedded in double braces as you can see in Example 2-25.
Variables can be concatenated between the double braces by using the tilde operator ~, as
shown in Example 2-26.
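The following short sketch (this is not the book's Examples 2-24 through 2-26; the variable names and values are illustrative) combines these usages in one small play:
---
- name: Illustrate variable referencing
  hosts: localhost
  gather_facts: false
  vars:
    max_amp_value: 11
    package_name: mariadb
    package_version: "10.6"
  tasks:
    - name: Use a variable inside a double-quoted string
      ansible.builtin.debug:
        msg: "My amp goes to {{ max_amp_value }}"
    - name: Concatenate two variables with the tilde operator
      ansible.builtin.debug:
        msg: "{{ package_name ~ '-' ~ package_version }}"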
Registering variables
You can create variables from the output of an Ansible task with the task keyword register. You
can use registered variables in any later tasks in your play. Remember that each Ansible
module returns results in JSON format. To use these results, you create a registered variable
using the register clause when invoking a module as shown in Example 2-27.
- debug:
    var: os_disk
Example 2-28 on page 84 shows how to capture the output of the whoami command to a
variable named logon.
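As a minimal sketch of this pattern (the command and the variable names are illustrative, and this is not the book's Example 2-28):
    - name: Capture the output of the whoami command
      ansible.builtin.command: whoami
      register: logon

    - name: Show the captured standard output
      ansible.builtin.debug:
        var: logon.stdout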
Execution control
When you execute Ansible tasks, the results are easily identified as successful or
unsuccessful because, by default, they are displayed in green, yellow, or red.
Green - a task executed as expected and no change was made.
Yellow - a task executed as expected and made a change.
Red - a task failed to execute successfully.
To create a role, you need to follow a standard directory structure with eight main directories:
tasks, handlers, vars, defaults, meta, library, module_utils, and lookup_plugins. Each
directory contains a main.yml file that holds the relevant content for that directory. For
example, the tasks/main.yml file contains the main list of tasks that the role executes, and the
vars/main.yml file contains the variables associated with the role. You can also use other files
and directories within each directory as needed.
To use a role in a playbook, you can either include it at the play level using the roles keyword,
or import it at the task level using the import_role or include_role modules. You can also
pass parameters to the role using the vars keyword or the args keyword. Additionally, you can
specify role dependencies in the meta/main.yml file, which means that Ansible will
automatically run those roles before the current role.
Ansible roles are a powerful feature that can help with your configuration management and
automate your deployments. You can also use Ansible Galaxy to find and share roles created
by other users. For further information, take a look at Ansible Roles.
4. The core tasks of your role are defined in the tasks/main.yml file. This is where the
automation happens. Example 2-30 is an excerpt depicting the deletion of an IBM i VM
using the PowerVC module:
5. The meta/main.yml file is your role's calling card. It provides crucial metadata for users
who interact with your role. Example 2-31 is a snippet that showcases the role's
author, description, company, supported platforms, and more:
Role Variables
--------------
Example Playbooks
----------------
```
---
- name: Delete a vm
  hosts: powervc
  tasks:
    - include_role:
        name: delete_vm_via_powervc
      vars:
        vm_name: 'itso0x'
        vm_state: 'absent'
...
```
License
-------
Apache-2.0
Note: When publishing the created role, it is important to adhere to the formatting
guidelines for the README.md file. Utilizing triple backticks is crucial to ensure proper
readability on GitHub. For more information, see GitHub Code Blocks, which documents
creating and highlighting code blocks.
Roles allow you to organize your automation work into smaller, more manageable units that
can be easily shared and reused. You can use roles to abstract common functionality, such as
installing a web server or updating a database, and then use them in multiple playbooks or
even multiple times within one playbook.
To use roles in your playbooks, you need to follow a defined directory structure that Ansible
recognizes. Each role must have at least one of the following directories:
tasks: The main list of tasks that the role executes.
handlers: Handlers, which may be used within or outside this role.
library: Modules, which may be used within this role.
files: Files that the role deploys.
templates: Templates that the role deploys.
vars: Other variables for the role.
defaults: Default variables for the role. These variables have the lowest priority of any
variables available, and can be easily overridden by any other variable,
including inventory variables.
meta: Metadata for the role, including role dependencies.
Each directory must contain a main.yml file (or main.yaml or main) that contains the relevant
content for that directory. You can also use other YAML files in some directories to organize
your tasks or variables better.
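As an illustration, initializing a role with ansible-galaxy init webserver (the role name webserver is hypothetical) produces a layout similar to the following:
webserver/
  defaults/main.yml
  files/
  handlers/main.yml
  meta/main.yml
  tasks/main.yml
  templates/
  tests/
  vars/main.yml
  README.md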
Note: For more insights into sharing and reusing roles across multiple playbooks, you can
explore the following resources:
Ansible Playbook Reuse Guide:
https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_reuse.html
Sharing and Reusing Roles in Ansible:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html
Playbook Reuse with Ansible Roles:
https://runebook.dev/en/docs/ansible/user_guide/playbooks_reuse_roles
Role dependencies
Role dependencies let you automatically pull in other roles when using a role, ensuring that
the target computer is in a predictable condition before running your tasks. For example, you
can have a common role that installs some packages and updates the system, and you want
to run it before any other role.
To define role dependencies, you need to create a meta/main.yml file inside your role
directory with a dependencies block. Example 2-33 shows an example of dependencies as
follows:
This means that before running the current role, Ansible will first run the common role and
then the sshd role, but only if the environment variable is set to 'production'. You can also
pass variables to the dependent roles using the same syntax.
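As an illustration only (this is not the book's Example 2-33; the role names and the variable used in the condition are assumptions that mirror the description above), such a meta/main.yml could look like the following sketch:
---
dependencies:
  - role: common
  - role: sshd
    when: environment == 'production'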
Role dependencies are executed before the roles that depend on them, and they are only
executed once per playbook run. If two roles state the same one as their dependency, it is
only executed the first time.
For example, if you have three roles: role1, role2 and role3, and both role1 and role2 depend
on role3, the execution order is as follows:
role3 → role1 → role2
You can override this behavior by setting allow_duplicates: true in the meta/main.yml file of
the role that is listed as a dependency. This will make Ansible run the role every time it is
listed as a dependency. For example, if you set allow_duplicates: true for role3, the execution
order becomes:
role3 → role1 → role3 → role2
Role dependencies are a feature of Ansible that can help you reuse your roles and simplify
your playbooks. However, use them with caution and avoid creating circular dependencies or
complex dependency chains that can make your roles hard to maintain and debug.
Note: For more information about role dependencies, you can refer to the official Ansible
documentation or the tutorial as follows: Ansible reuse roles or Role dependencies
A collection can also have dependencies on other collections, which are specified in a
meta/requirements.yml file. This allows you to reuse existing content from other sources and
avoid duplication.
Note: For more detailed information, please refer to the following websites:
Ansible User Guide on Using Collections:
https://docs.ansible.com/ansible/latest/user_guide/collections_using.html
Ansible Galaxy Guide on Creating Collections:
https://galaxy.ansible.com/docs/contributing/creating_collections.html
Ansible Community General Collection on GitHub:
https://github.com/ansible-collections/community.general
playbook. Registered variables are variables that are created by registering the output of a
task.
Use error handling techniques to deal with failures and unexpected situations. For
example, you can use the ignore_errors, failed_when, and changed_when keywords, or a
block with rescue and always sections, to control the behavior of your tasks when an error
occurs, as shown in the sketch below.
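As an illustration of these keywords (the commands, script paths, and messages are assumptions, not part of any example in this book):
    - name: Attempt a command but continue on failure
      ansible.builtin.command: /usr/local/bin/optional-check.sh
      ignore_errors: true

    - name: Treat any error output as a failure
      ansible.builtin.command: /usr/local/bin/health-report.sh
      register: report
      failed_when: report.stderr != ""
      changed_when: false

    - name: Group tasks with rescue and always sections
      block:
        - ansible.builtin.command: /usr/local/bin/risky-step.sh
      rescue:
        - ansible.builtin.debug:
            msg: "The risky step failed, running recovery"
      always:
        - ansible.builtin.debug:
            msg: "This always runs"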
For more role details, the ansible-galaxy info command provides comprehensive information:
ansible-galaxy info itso.linux_common
Variables can be passed using the vars or vars_files keywords. For instance see
Example 2-36.
Example 2-36 Applying 'itso.linux_common' role with custom variables in Ansible playbook
- hosts: all
  roles:
    - role: itso.linux_common
      vars:
        var_name: value
Editing files such as README and metadata is essential. Uploading requires an account on the
Galaxy website and an API key. Building and uploading involve commands shown in
Example 2-38.
Note: Ansible Galaxy stands as a valuable asset for harnessing the potential of Ansible
automation through its extensive collection of roles and collections. Empowered by the
ansible-galaxy command line tool, users gain the ability to explore, install, and contribute
to these automation components, fostering collaboration and efficiency. For more in-depth
insights, you can refer to the following resources:
Ansible Galaxy's guide on creating roles:
https://galaxy.ansible.com/docs/contributing/creating_role.html
Ansible Galaxy's user guide:
https://docs.ansible.com/ansible/latest/galaxy/user_guide.html
In this section we explore how to version and document your playbooks and roles, so that you
can maintain them easily and collaborate with others effectively.
To version your playbooks and roles, you should use a version control system (VCS) such as
Git, which is a popular and widely used tool for managing code repositories. Git allows you to
create snapshots of your code at any point in time, called commits, and to switch between
different versions or branches of your code, called checkout.
Using Git, you can also push your code to a remote repository, such as GitHub or Bitbucket,
where you can store it safely and share it with others. You can also pull code from a remote
repository to update your local copy with the latest changes.
To push the local repository to GitHub, you should first create a new repository at
https://github.com/new. Your repository can be either private or public. The commands in
Example 2-40 show how you would push your local repository to a remote GitHub repository you
have created, named ‘my-first-ansible-repo’, as the GitHub user ‘username’.
Please note that the password for the ‘git push’ command should be a personal access token
created at https://github.com/settings/tokens.
Now that you have your code stored in both a local and remote git repository, any changes to
existing files or newly created files can be added to the repositories with the git ‘status’,
‘add’, ‘commit’ and ‘push’ commands. A sample session is shown in Example 2-41 where a
new playbook named ‘install-pkg.yaml’ and the updated playbook ‘update-hosts.yaml’ are
committed to the repository and pushed to the previously created remote repository.
Untracked files:
(use "git add <file>..." to include in what will be committed)
install-pkg.yaml
no changes added to commit (use "git add" and/or "git commit -a")
Modifications and additions to the local copy of the repository can be committed to the remote
repository by the methods shown in Example 2-41 on page 95. It is good practice to regularly
run ‘git pull’ on your local repository, as shown in Example 2-43, to pull down other
contributors’ changes.
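As a brief illustration of this routine (the file names are those used in the examples above; the commands are generic Git usage, not the book's Example 2-41):
# git status
# git add install-pkg.yaml update-hosts.yaml
# git commit -m "Add install-pkg playbook and update update-hosts"
# git push
# git pull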
Branches can later be merged with the original repository, if appropriate, by a ‘git merge’,
usually initiated by a pull request submitted to the original repository administrator. The pull
request will inform the repository administrator that there are committed changes to a branch
that you wish to merge with the original repository. The administrator may or may not merge
the committed changes of the branch to the original repository.
Note: For comprehensive guidance on using Git with Ansible, consult this resource:
Ansible Best Practices - Content Organization:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_best_practices.html#content-organization
To adeptly document your Ansible playbooks and roles, comments and metadata files serve
as vital tools. Comments, while ignored by Ansible, provide human-readable context.
Metadata files, structured as YAML files, house key-value pairs elucidating code
characteristics.
For effective documentation via comments and metadata files, consider these steps:
Utilize comments within playbooks and roles to explain tasks, handlers, variables, or
templates, alongside assumptions and dependencies.
Employ metadata files in roles to provide information such as role name, description,
author, license, platforms, dependencies, tags, variables, examples, and more.
Use the ansible-doc command to display documentation that is generated from the
documentation embedded in modules and plugins.
Utilize the ansible-galaxy command to upload roles to Ansible Galaxy, a public
repository of roles which can be downloaded by anyone.
Note: For more information and detailed guidance, please explore the following resources:
Ansible Best Practices - Task and Handler Organization for a Role
https://docs.ansible.com/ansible/latest/user_guide/playbooks_best_practices.html#task-and-handler-organization-for-a-role
Ansible Developing Modules - Documenting Modules
https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_documenting.html
Ansible Galaxy - Creating a Role
https://galaxy.ansible.com/docs/contributing/creating_role.html
Check mode
You can use the --check flag when running a playbook or a role to see what changes would be
made without actually applying them. This can help you spot any errors or inconsistencies in
your code before executing it. Check mode does not run scripts or commands, so for some
tasks you need to disable check mode by setting check_mode: false.
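For example, you might run ansible-playbook site.yml --check --diff (the playbook name is illustrative) to preview changes, and mark a purely informational task so that it still executes during check mode, as in this sketch:
    - name: Collect the current service status even in check mode
      ansible.builtin.command: systemctl status sshd
      check_mode: false
      changed_when: false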
Modules
You can use certain modules that are useful for testing, such as assert, fail, debug, uri,
shell or command. These modules can help you verify the state or output of your target hosts,
check for certain conditions or values, display messages or variables, or run arbitrary
commands or scripts.
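A small sketch of such verification tasks (the host port, URL, and expected values are assumptions for illustration):
    - name: Verify that the application answers on its health endpoint
      ansible.builtin.uri:
        url: http://localhost:8080/health
        status_code: 200

    - name: Assert that the gathered facts match expectations
      ansible.builtin.assert:
        that:
          - ansible_distribution == "RedHat"
          - ansible_python.version.major == 3
        fail_msg: "The target host does not match the expected configuration"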
Linting
You can use ansible-lint to check your playbooks and roles for syntax errors, formatting
issues, best practices violations, or potential bugs. Ansible-lint can also be integrated with
other tools such as editors, IDEs, CI/CD pipelines, or pre-commit hooks.
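For example, running the linter against a single playbook or an entire project directory is one command (the file name is illustrative):
# ansible-lint site.yml
# ansible-lint .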
Integration testing
You can use integration testing to test your playbooks and roles against different
configurations or environments. For example, you can use Vagrant, Docker, or cloud VMs to
create isolated test hosts with different operating systems or versions. You can also use
inventory files or variables to define different parameters for your test hosts, such as package
versions or service states.
Dry run
You can use the --diff flag when running a playbook or a role in check mode to see the
differences between the current state and the desired state of your target hosts. This can help
you verify that the changes are correct and complete before applying them.
Idempotence
You can run your playbook or role multiple times on the same target host to check that it is
idempotent. This means that it does not make any changes on subsequent runs unless the
state of the host has changed externally. Idempotence ensures that your playbook or role is
consistent and reliable.
Verbose
The verbose option allows you to see more details about what Ansible is doing when it
executes your playbooks and roles. You can use different levels of verbosity, from -v to -vvvv,
to increase the amount of information displayed. The verbose option can help you debug your
Ansible code, identify errors, and check the results of your tasks.
Notifications
You can use handlers to trigger notifications when certain tasks make changes on your target
hosts. For example, you can use handlers to restart a service, reload a configuration file, or
send an email alert. Handlers can help you validate that your changes have taken effect on
your target hosts.
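A typical handler definition and notification might look like the following sketch (the template and service names are assumptions):
  tasks:
    - name: Deploy the nginx configuration file
      ansible.builtin.template:
        src: nginx.conf.j2
        dest: /etc/nginx/nginx.conf
      notify: Restart nginx

  handlers:
    - name: Restart nginx
      ansible.builtin.service:
        name: nginx
        state: restarted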
Reports
You can use callbacks or plugins to generate reports on the execution of your playbooks or
roles. For example, you can use callbacks to display statistics, summaries, logs, or graphs of
your playbook or role runs. You can also use plugins to send reports to external systems or
services, such as Slack, email, or webhooks. Reports can help you validate that your
playbooks or roles have run successfully and without errors.
Note: For more detailed insights about Testing and Validating playbooks and roles, please
refer to the following sources:
Five Questions for Testing Ansible Playbooks and Roles:
https://www.ansible.com/blog/five-questions-testing-ansible-playbooks-roles
Ansible Testing Strategies:
https://docs.ansible.com/ansible/latest/reference_appendices/test_strategies.html
Introduction to Ansible Playbooks:
https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_intro.html
We discuss the different architectures that you might want to consider when designing your
Ansible automation environment, depending on your business requirements. We also provide
guidance on choosing the right server (or it might be more than one server) to be the Ansible
controller. We then show you how to install the Ansible controller in your IBM Power
environment and how to prepare your IBM Power based LPARs to be Ansible clients.
In this simple case the architecture for your Ansible installation may look like Figure 3-1.
Figure 3-1 Using Ansible Core or Ansible Community to manage your IBM Power estate
Your first step is to choose the right server for your Ansible controller node. The Ansible
controller node is a server where you install Ansible and develop your playbooks. There are
only two requirements for this controller node server:
1. You can install Ansible on it
2. The server has SSH access to the IBM Power servers and other devices that you want to
manage.
Satisfying the first requirement is very easy because Ansible can be installed on any
supported operating system which runs on IBM Power. For more information about choosing
an operating system for Ansible controller node, see section 3.2, “Choosing the Ansible
controller node” on page 119.
Ansible primarily works using SSH connections to the managed devices although there are
some exceptions. For example, in the IBM Power world, for both IBM PowerVC and the IBM
Power Hardware Management Console (HMC) Ansible works by using the REST APIs
provided by those products.
If you plan to automate your Linux, AIX or IBM i LPARs, you will need SSH access to them.
Ansible does support the use of SSH gateways (jump servers or bastion hosts) for access to
devices with additional security requirements.
Even in the simplest case, you should consider using some sort of source control system like
Git to track the history of changes in the playbook. While it is not necessary, it is always a
good decision and it will make development more transparent and traceable. Even the best
of us have problems remembering what and why they did something a year or two ago.
After choosing your Ansible controller node and deciding where to save your future work, it is
time to install Ansible. You will find the steps to install Ansible on your preferred operating
system in 3.3, “Installing your Ansible control node” on page 119.
Get started
Once you have installed Ansible on your controller node, you are ready to automate your first
task. Consider the following when you choose this first task:
Choose a simple task which you do regularly.
Keep it simple. Avoid perfectionism.
Automate step by step. Don’t try to develop the whole playbook at once.
Measure the outcome. How much time did you save?
Speak to your colleagues about your success. Spread the word about Ansible.
Tip: In this simple architecture you will use either Ansible Core or Ansible Community for
your Ansible controller. Both of these products are supported by the Ansible community
and not supported by any vendor. While it is easy to start with them, you need to consider
if you will need a better and more reliable support option. As you move into more
production environments you will want to consider using a vendor supported product such
as Red Hat Ansible Automation Platform which provides support for Ansible. Ansible
Automation Platform is discussed in section 3.1.2, “Scaling up – Ansible Automation
Platform” on page 103.
If you have experience with base Ansible Core and have several team members, you may
want to go the next level and use Ansible Automation Platform. This is especially true if you
have several administrators who develop the playbooks and a number of users who only run
the playbooks.
Ansible Automation Platform provides Role-Based Access Control (RBAC) for your Ansible
environment which enables you to define the rights of your automation users – which
playbooks they can run and what parameters they may use in those playbooks. It also defines
the rights of automation developers, defining which projects they are working on.
In addition, your users will be delighted with the nice graphical user interface of Ansible
Automation Platform, which removes the complexity of running single commands and
remembering the appropriate parameters to use in their Ansible playbook.
Ansible Automation Platform is fully supported by Red Hat. If you are architecting an
automation platform for an enterprise, support for Ansible is one of the most important
considerations, and you need to clarify your requirements before starting. Ansible Core and
Ansible Community are open-source projects which come with only community support. Even
though it is very easy to install Ansible on almost every imaginable operating system, the only
support you can get for it is from the community through the projects' code repositories on
GitHub. What would be the impact if your enterprise automation solution had an issue and you
were dependent on community members, many of whom do not understand IBM Power, to find
a solution to your problem? That is why it makes sense to have a support contract with Red
Hat for any enterprise automation environment.
Currently, Red Hat supports Ansible Automation Platform on IBM Power running Red Hat
Enterprise Linux as a “Technology Preview”. This means that Red Hat can’t guarantee the
stability of all features on IBM Power, but will attempt to resolve any issues that customers
may experience. More about technology preview support can be found in this Red Hat
document. Similar to IBM, Red Hat does not make forward-looking statements and so has
not stated when Ansible Automation Platform on IBM Power will be fully supported, but you
can expect it in the near future. When running Ansible Automation Platform on Red Hat
Enterprise Linux, you can manage systems running any Linux distribution, AIX or IBM i as
supported use cases, however Red Hat Ansible Automation Platform can only be installed on
Red Hat Enterprise Linux.
You will need at least 40GB of free space in /var on your server to proceed with the
installation. You may need even more if you have a lot of playbooks and different execution
environments. To get a suitable performance for your installation, the storage utilized should
be capable of a minimum of 1500 IOPS.
When using Automation Controller, you can store all your playbooks locally on the server
where the Automation Controller is installed by creating one or more subdirectories under
/var/lib/awx/projects/ and copy your playbooks there. While this is easy if you have one
automation developer, as you add additional automation developers – all working on different
projects – you may need several different directories and you will need to manage file
permissions for all of the users and projects. A flat filesystem containing files does not have
any source control mechanism built in, so we recommend the use of Github, Gitlab or some
similar software to store your playbooks source code centrally and enable collaboration
between automation developers. Figure 3-2 shows the use of Ansible Automation Controller
managing an IBM Power infrastructure.
Figure 3-2 Using Ansible Automation Controller to manage IBM Power infrastructure
As you can see in Figure 3-2, the architecture is almost the same as the one using Ansible
Core. However, it enables you to support more developers and users of your automation
projects, and it provides Red Hat support.
Reference architectures
This section provides some sample Ansible architectures with different combinations of
Ansible automation components designed to meet different availability requirements for your
department or business. These Ansible automation components can be deployed on a single
system in some environments or may require more than one system as you build out your
Ansible Automation Controller or Platform.
This architecture separates the database system to an external database provider. This is
shown in Figure 3-6.
Figure 3-7 Reference architecture 5 - External database server with high-availability (HA) except
database server
This architecture adds high availability to Reference architecture 5 - External database server
with high-availability (HA) except Database server. It is shown in Figure 3-8.
Figure 3-8 Reference architecture 6 - External database server with full high-availability
This architecture adds separate execution zones to Reference architecture 6 - with External
database Server with full high-availability (HA). This is shown in Figure 3-9.
Figure 3-9 Reference architecture 7 - External database Server with full high-availability (HA) and
separate execution node
This architecture adds a disaster recovery environment. Automation can be managed from
either the primary location or the disaster recovery location. This is shown in Figure 3-10.
Figure 3-10 Reference architecture 8 - External database server with full high-availability (HA) and
disaster recovery (DR) - Independent Operation
Figure 3-11 Reference architecture 9 - External database server with full high-availability (HA) and
disaster recovery (DR) - Joint Operation
Conclusion
Looking at the above reference architectures will give you some ideas on how to design your
automation environment. You can create your supported architecture with the help of the
reference architectures provided here, choosing the components that meet your requirements
best. However, there are some further things to consider when adding and integrating
additional solution components.
In an enterprise environment, all source code must be saved in some source control repository. It
enables you to track changes to the code and see who did what. It also enables you to separate
projects and teams. The same concepts apply to automation source code – your playbooks and
roles. Your Windows administration team has nothing to do with AIX or IBM i automation. On the
other hand AIX operations usually have little interference with Microsoft SQL Server or Sharepoint
resources managed by other teams.
Some of the most common source control management tools used in enterprises are Github
Enterprise and Gitlab Enterprise. They are based on the open source git project, and you can
use git on your Linux, AIX, or IBM i server to work with them.
After a change to a source code is committed, the new code must be tested. This applies to
automation even more, because if someone made a small mistake in the automation code, it
can cause problems across your whole application deployment or infrastructure. The testing
can be as simple as doing a syntax check or can involve more complex integration testing
where the whole infrastructure is built and the application is deployed into a special testing
environment. Source control management tools like Github Enterprise and Gitlab Enterprise
have their own set of continuous integration (CI) tools, but if you wish to use other tools to
manage the source code, you may also use the open source tool Jenkins to build your
integration pipeline.
Another very important part of the process is to check that the source repository does not
contain any passwords, tokens or other secrets. Your secrets must be stored in Ansible
Automation Platform or in some other vault tool like Hashicorp Vault, but not in the source
code. Modern source code management tools like Github and Gitlab can integrate with all
common security tools to automate source code scanning.
When the whole testing process is completed successfully, the code can be deployed into your
production infrastructure. It can be done automatically, using continuous delivery, if you
choose. When using Ansible Automation Platform, we usually get a new version of a project
after synchronizing it. Just as a side note you may want to automate Ansible Automation
Platform the same way you automate your other applications.
In the simplest form of deployment we used just one Ansible Controller to manage all nodes. In
the enterprise-grade deployment you install Ansible Automation Controllers according to your
infrastructure requirements. You may have separate Controllers for each stage (development,
test, production) or you may install them based on your network configuration - separate
Controllers for office network, for “normal” servers, for high priority servers, for DMZ servers,
and so on. It makes your architecture more complex but it is easier to control which projects
access which resources.
With Event-Driven Automation, you develop playbooks for each use case – to configure a new
server, to create a user on a server, to expand a filesystem – and then connect your external
systems like PowerVC (in case of provisioning), ticketing system (in case of new user creation)
or monitoring (in case of filesystem) to EDA. Within EDA you define the policies and rules and
EDA will run the appropriate playbooks when a defined event happens, so you can build a fully
automated enterprise. More use-cases for EDA can be found in section 1.3.3, “Event-Driven
automation” on page 15.
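As a hedged illustration of how such a policy can be expressed, an Event-Driven Ansible rulebook might look similar to the following sketch (the event source, condition, and playbook name are assumptions, not a tested configuration):
---
- name: Expand a filesystem when monitoring raises an alert
  hosts: all
  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  rules:
    - name: Run the filesystem expansion playbook
      condition: event.payload.alert == "filesystem_full"
      action:
        run_playbook:
          name: expand_filesystem.yml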
In any case, don’t look at your architecture as something permanent. Automation evolves and
automation practices evolve as well. Your environment is live, be prepared to live with it and
enhance it every time you require a new feature or a new integration, and be prepared to
eliminate unnecessary or unused features.
However, the most important decision is not the software components you use. The most
important step is to create an “automation first” environment. Before doing any task on your
systems, take a step back, take your time and think – can I automate this function using
Ansible? The obvious answer is often, yes. Then automate the task and let the job be done
by Ansible. When you get to this point you no longer need root or QSECOFR privileges on
your systems. All you need is that your systems are connected to your automation platform
and you can execute Ansible playbooks there.
Before choosing the right controller node for Ansible, you must answer a simple question:
Which systems do you plan to manage with Ansible? Consider the following cases:
If you only want to manage your IBM i database, the answer is very easy. Use your IBM i
as the Ansible controller.
If you want to manage your AIX environment, you may want to install Ansible on your
network installation manager (NIM) server. NIM is usually the central point of the AIX
infrastructure and already has access to all AIX servers, and many times SSH connections
between the NIM server and the NIM clients are already in place.
If you have SAP HANA on IBM Power, or other Linux applications on IBM Power, you may
want to use an existing Linux on Power LPAR to install Ansible. This choice has one big
advantage. Ansible is developed under Linux and with Linux first in mind. As of the time of
writing this Redbook you can install Ansible Core 2.15 on Linux on Power, but Ansible
Core 2.14 is the latest version available on AIX. While most modules and collections
support Ansible 2.9 or later, if you have something specific that requires a newer Ansible
version, your only choice is Linux.
Of course, you may use Linux on x86 for your Ansible controller node. This is usually an obvious
choice if your environment already has Ansible on an x86 Linux server. Then you don’t
need to install anything; you can use the existing server.
No matter which Ansible controller node you choose, it must have SSH access to all of the
systems you want to manage. Check that it has such access or ensure that the connection
can be made through firewalls and security zones.
It probably doesn’t make sense to install your Ansible controller node in AWS to manage AIX
servers on-premise or in IBM Cloud PowerVS. You may want to have your Ansible controller
node as close to the managed servers as possible.
If you place your Ansible controller node in the DMZ together with other servers, it will simplify
the connection between the Ansible controller and the target hosts, but it also might be
difficult to upload your playbooks and roles to it. It might be easier to install Ansible on a
server outside of the DMZ and use some jump host for playbook execution. This is probably a
topic of discussion in a meeting with your network and IT security teams.
Now that you have chosen which server to use for your Ansible controller node and have
consulted with your IT security and network teams and you have access from your Ansible
controller node to all your systems by SSH, you are ready to install Ansible!
orchestration. There are two components to consider when using Ansible, the first is the
control node which executes the playbooks and manages the automation, and the second is
the managed node or client which is the device being automated (often called the target
machine or device). As we have discussed, Ansible is agentless which means that it can
communicate with machines or devices without requiring that an application or service be
installed on that managed node. This is one of the main differences between Ansible and
other similar applications like Puppet, Chef, CFEngine and Salt.
The Ansible controller is often called Ansible Engine (old name) or Ansible Core (new name).
Ansible Core provides a command-line interface (CLI) to manage your Ansible automation
environment. For some administrators, the CLI based approach is intimidating and they are
looking for a graphical user interface (GUI) instead. For GUI based management, you can
choose to use the Red Hat Ansible Automation Platform which provides the GUI interface to
Ansible Core and also provides additional management capabilities.
Ansible Core is available and supported on all operating systems supported by IBM Power –
AIX, IBM i, and Linux on Power. Red Hat Ansible Automation Platform is also available for
IBM Power environments, but it is only supported on Linux on Power. Currently, the Linux on
Power support for Ansible Automation Platform is in Technical Preview; full support is
expected to be announced in the near future.
2. Verify the current status of the products and attached subscriptions for the system as
shown in Example 3-2.
3. Verify the RPM repository is configured and enabled using the command shown in
Example 3-3.
4. Install the ansible-core rpm in the system using the following command:
# dnf install ansible-core python3-virtualenv vim
Note: Make sure your system is connected to the right rpm repository. If the system is
directly connected to the internet, then make sure the subscription is configured and the
correct repository is enabled. For more information about using subscription manager, refer
to https://access.redhat.com/solutions/253273.
For example:
# ansible-console -i hosts --limit all -u root
3. Verify the inventory file name and location as shown in Example 3-5.
Install any additional python libraries or modules depending on requirements. Use the
following command:
# dnf install python3-pyOpenSSL python3-winrm python3-netaddr python3-psutil
python3-setuptools
Note: If any specific python libraries or modules are not available or not shipped with
RHEL OS in rpm format, they can be installed via pip (the Python package manager). You
can create a virtual environment, similar to a virtual machine or Linux chroot, that provides an
isolated structure of lightweight directories separated from the actual operating system
Python directories, allowing you to use different versions of Python modules, files, or
configurations, as shown in the short sketch after this note.
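As an illustration, a Python virtual environment for Ansible might be created and activated as follows (the directory name and the sample package are assumptions):
# python3 -m venv ~/ansible-venv
# source ~/ansible-venv/bin/activate
# pip install --upgrade pip
# pip install requests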
Create a ~/.vimrc file to customize the vim editor configuration to use the 2 space
indentation for yaml file editing as shown in Example 3-8.
Generate and copy the ssh key from Ansible Automation Controller node to managed
nodes using the following commands:
# ssh-keygen
# ssh-copy-id [email protected]
Figure 3-13 Select version and architecture for Ansible Automation Platform package download
The download list will be available once you select the Red Hat Ansible Automation
Platform version and the architecture from the product software download page.
Figure 3-14 List of Ansible Automation Platform package bundles that can be downloaded
Now download the bundle package. An example file name could be:
ansible-automation-platform-setup-bundle-2.4-1.2-ppc64le.tar.gz
2. Copy that tar file in the system and extract the files as shown in Example 3-9.
3. Go to the extracted directory and configure the inventory file with a vim editor for the
all-in-one installation scenario. This process is shown in Example 3-10.
# vim inventory
# grep -v ^# inventory |grep -v ^$
[automationcontroller]
bs-rbk-lnx-1.power-iaas.cloud.ibm.com node_type=hybrid
[automationcontroller:vars]
peers=execution_nodes
[execution_nodes]
[automationhub]
[automationedacontroller]
[database]
[sso]
[all:vars]
admin_password='Redhat123'
pg_host=''
pg_port=5432
pg_database='awx'
pg_username='awx'
pg_password='Redhat123'
pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL
registry_url='registry.redhat.io'
registry_username=''
registry_password=''
receptor_listener_port=27199
automationedacontroller_admin_password=''
automationedacontroller_pg_host=''
automationedacontroller_pg_port=5432
automationedacontroller_pg_database='automationedacontroller'
automationedacontroller_pg_username='automationedacontroller'
automationedacontroller_pg_password=''
sso_keystore_password=''
sso_console_admin_password=''
Note: The legacy execution environment (ee_29_enabled=true) is not supported for Power
Systems. If ee_29_enabled is set to true, you will receive errors as shown in Figure 3-15.
4. Run the Red Hat Ansible Automation Platform setup script to start the installation. This is
shown in Figure 3-16.
Note: The default minimum RAM size is 8GiB. This can be modified for non-production or
a testing environment by changing the default configuration file located at:
collections/ansible_collections/ansible/automation_platform_installer/roles/pre
flight/defaults/main.yml
To adjust the minimum RAM size, modify the required_ram entry before continuing the
installation. As an example:
required_ram: 4000
6. When you log in the first time, you will need to configure the subscription manager and
activate the subscription. In a disconnected or restricted environment (that is no internet
access from the system), you must first create a manifest file, allocate the Red Hat
software subscriptions with Ansible Automation Platform to the manifest, and then export
the manifest to enable you to download the manifest file that you just created.
Uploading the manifest is shown in Figure 3-18.
More information on creating and using a Red Hat Satellite manifest can be found at
https://www.redhat.com/en/blog/how-create-and-use-red-hat-satellite-manifest.
If the system is directly connected to the internet, you can use a Red Hat software
subscription username and password for the activation as seen in Figure 3-19.
Figure 3-19 Ansible Automation Platform subscription activate - using username password
7. Once you log in, you need to select the appropriate subscription from the list. An example is
shown in Figure 3-20. Click the Next button on the User and Automation Analytics screen
and finally click the Submit button on the End User License Agreements screen.
8. Now the Red Hat Ansible Automation Platform is ready for further integration and
configuration for you to start automating your environment. See Figure 3-21 on page 129
for details.
Some of the resources that can be created in the Ansible Automation Platform are:
– Templates (see Job Templates and Workflow Job Templates)
– Credentials
– Projects
– Inventories
– Hosts
– Organizations
– Users
– Teams
Also you can configure and integrate third party services that you require. Some example
services are:
– Enhanced and Simplified Role-Based Access Control and Auditing: Configure
role-based access control (RBAC). Automation controller allows for the granting of
permissions to perform a specific task (such as to view, create, or modify a file) to
different teams or explicit users through RBAC.
– Backup and Restore: The ability to backup and restore your system has been
integrated into the Ansible Automation Platform setup playbook, making it easy for you
to backup and replicate your instance as needed. Configure cron jobs and use
setup.sh script for backup and restore.
– Integrated Notifications: Configure stackable notifications for job templates, projects,
or entire organizations, and configure different notifications for job start, job success,
job failure, and job approval (for workflow nodes). Notifications can be integrated with
Email, Grafana, Slack or other tools.
– Authentication Enhancements: The Automation controller supports LDAP, SAML,
token-based authentication. Configure a feasible authentication method.
For more details on post configuration refer to the Automation Controller User Guide v4.4.
Also, remember to modify your vim configuration to replace tabs with an indentation of two
spaces.
[uninstall]
require-virtualenv = true
EOF
2. Upgrade pip, and other necessary Python libraries, and install requirements for the most
used Ansible collections as shown in Example 3-12.
3. Create a default ansible.cfg for this environment, configure the default hosts.ini, and
populate it with at least the Ansible controller itself (localhost). See Example 3-13.
4. To make it convenient to get into the virtualenv while logging in as user ansible, the source
and export lines can be added to the .bashrc (or .profile) of the user.
5. Adjust vimrc to make VIM recognize the yaml/yml indentation. This is shown in
Example 3-14.
# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=
source ~/ansible-venv/bin/activate
export PYTHONPATH=$( ls -1d ~/ansible-venv/lib/python*/site-packages )
If you have previously installed open source tools on AIX, the installation of Ansible will be
familiar and will look similar to the installation in a Linux on Power LPAR. However, if you have
limited experience with open source deployments on AIX, you need to understand how to use the
open source installation methodology.
Ansible on AIX is delivered as a part of IBM AIX Toolbox for Open Source Software. All software
delivered through IBM AIX Toolbox for open source applications is packaged using RPM format
which is the same format used in Linux. As this is not the AIX-native BFF package format the
installation procedure is slightly different.
It is highly recommended that you use dnf to install any open source tools in your AIX environment.
The dnf command is a package manager for RPM packages. This is an updated version of the
yum command which you may have seen in a Linux environment. While RPM files can be installed
without a package manager, using dnf has two big advantages – it can automatically resolve
dependencies and then install them from package repositories. Without the dnf package manager,
you would be forced to manually determine any package dependencies and then install them. You
can find a useful and detailed guide to installing dnf here.
In our testing scenarios, we ran the installs on both AIX 7.2 and AIX 7.3. The process applies to
both versions.
The script downloads the newest rpm.rte package and a bundle of RPM packages to be installed.
There are many packages in the bundle but the most important for our case are Python 3.9 and
DNF itself.
5. If the installation takes a very long time, it might be failing at the rpm.rte installation. In that
case, open the tar file, extract rpm.rte, and update it through smit or installp from the CLI.
You can find the latest bundles here. The steps to follow are:
1. There are two bundles – one for AIX 7.1 and 7.2 and another one for AIX 7.3. Choose the
correct bundle for your version of IBM AIX.
2. After downloading the bundle, unpack it to a temporary directory.
3. In the temporary directory you will find the script dnf_install.sh. Run ./dnf_install.sh -y. It
will run for a while and set up dnf if everything is okay.
If you search for Ansible in the repositories, you find three references to it as seen in
Example 3-18.
The package you want to install is called ansible.noarch. The package ansible-core.noarch is
the base package of Ansible, providing Ansible Core 2.14.2 at the time of writing this
publication.
The package ansible.noarch provides some additional collections you usually need to work
with Ansible. If you install the package ansible.noarch, it will automatically install the package
ansible-core.noarch.
Example 3-19 shows the command to install Ansible and the resulting output.
Transaction Summary
========================================================================================================
Install 5 Packages
The same applies to Ansible collections. You can install them globally into
/usr/share/ansible/collections, locally for your user, or just for one project.
Note: We recommend that you create configuration files and install collections on a project
basis.
3. Make sure the correct locale files are installed. Ansible requires the UTF-8 locale. The
command to validate your installed locale files is shown in Example 3-21.
Note: Not having the correct locale file will cause Ansible commands to fail with the
following error:
# /opt/freeware/bin/ansible
ERROR: Ansible requires the locale encoding to be UTF-8; Detected ISO8859-1.
Example 3-23
# ansible --version
ansible [core 2.14.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/.ansible/plugins/modules',
'/usr/share/ansible/plugins/modules']
ansible python module location =
/opt/freeware/lib/python3.9/site-packages/ansible
ansible collection location =
/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/freeware/bin/ansible
As you can see from the example, Ansible core Version 2.14.2 was installed and the Python
version being used is Python3 Version 3.9.16.
For example:
# ansible-console -i hosts --limit all -u root
Next Steps
Now that you have installed Ansible, the next steps are to set up the configuration appropriate
for your environment. The following steps are recommended:
1. Generate the configuration file using the command below:
# ansible-config init --disabled -t all > ansible.cfg
2. Display the effective configuration and the configuration file location, in terms of current
working location or directory as shown in Example 3-24.
3. Verify the inventory file name and location as shown in Example 3-25.
[tags]
[runas_become_plugin]
[su_become_plugin]
[sudo_become_plugin]
[callback_tree]
[ssh_connection]
[winrm]
[inventory_plugins]
[inventory_plugin_script]
[inventory_plugin_yaml]
[url_lookup]
[powershell]
[vars_host_group_vars]
Example 3-27 Test Ansible functionality with ad-hoc command (sshpass must be installed with dnf)
atilio-ansiblerh73:/>ansible all -i hosts -m shell -a "hostname" -u root -k
SSH password:
hugo-rhel8-ansible | CHANGED | rc=0 >>
hugo-rhel8-ansible
Install any additional python libraries or modules depending on requirements. Use the
following command:
# dnf install python3-pip
Note: If any specific Python libraries or modules are not available or not shipped with the AIX
OS in rpm format, they can be installed via pip (the Python package manager). You can
create a virtual environment, similar to a virtual machine or Linux chroot, that provides an
isolated structure of lightweight directories separated from the actual operating system
Python directories, allowing you to use different versions of Python modules, files, or
configurations.
Create a ~/.vimrc file to customize the vim editor configuration to use the 2 space
indentation for yaml file editing as shown in Example 3-28.
Generate and copy the ssh key from the Ansible Automation Controller node to the
managed nodes using the following commands:
# ssh-keygen
# ssh-copy-id [email protected]
Now your AIX based Ansible controller is ready for use for automation tasks.
If you don’t have direct access, you can download the collection on another server and then
copy it to your Ansible controller node as shown in Example 3-30.
The collection documentation has a demo inventory file that you can look at. However, for our
test environment our inventory file looks like what is shown in Example 3-31.
It is important to install YUM before initiating Ansible. This installation can be performed with or
without an Internet connection. To install YUM without an Internet connection, a one-time
bootstrap process is used. More information can be found at YUM installation.
After YUM is installed, navigate to the directory where it resides, issue cd /Qopensys/pkgs/bin/,
then yum -h.
Note: Packages can be managed through SSH terminal commands or IBM Access Client
Solution.
Note: By establishing these components and their integration, Ansible on IBM i gains a
powerful foundation, ready for versatile automation and management tasks.
If desired, you can create a local repository using reposync and createrepo, generating a
complete copy of the remote repository. Details on this process can be found here.
2. Internet Access from IBM i system: If your IBM i system has Internet access, ensure
that Python v3.6+ is installed. Log in to an SSH terminal and execute yum install ansible.
Next, execute ansible --version. The first command installs Ansible and the second allows
you to verify the installation by checking its version.
3. Installation using IBM i Access Client Solutions: Ansible can be conveniently installed
through IBM ACS. For detailed step-by-step instructions, please refer to the section titled
Installation using IBM i Access Client Solutions (ACS). This approach offers a
user-friendly method to set up Ansible on your IBM i platform.
Example 3-33 Parameters for the Ansible configuration file for IBM i
[defaults]
inventory =
~/.ansible/collections/ansible_collections/ibm/power_ibmi/playbooks/hosts_ibmi.ini
library = ~/.ansible/collections/ansible_collections/ibm/power_ibmi/plugins/action
Note: Part of the Ansible configuration process involves setting up the ansible.cfg file.
While the default location for this file is typically /etc/ansible on various platforms,
including Ansible Controller on IBM i, it is important to create one if it does not exist.
The following procedure guides you through the process of installing the IBM i collection from
Ansible Galaxy, a repository of Ansible content contributed by the broader Ansible community.
1. Install the IBM i collection from Ansible Galaxy, the designated package manager for
Ansible. Example 3-34 displays the command.
2. Check the installation path of the collections in the IFS (Integrated File System) using the
cd command, issue:
cd /home/qsecofr/.ansible/collections/ansible_collections/ibm/power_ibmi
3. Display the content of power_ibmi directory using the long listing command ls -l
4. Navigate back to the user's home directory by issuing the following command:
cd /home/qsecofr
5. Create a .ssh directory in the user's home directory, issue the command:
mkdir -p /home/qsecofr/.ssh
6. Verify the creation of the new directory as seen in Example 3-36.
7. Generate an SSH key pair for the Ansible controller on IBM i and its managed hosts. Enter
the following command, pressing Enter three times without providing a passphrase or
changing the default location of the key. The results are shown in Example 3-37.
8. Confirm the generated content using the ls -la command as shown in Example 3-38.
Example 3-38 Displaying the public and private rsa key pair
# ls -la
total 52
drwxr-sr-x 2 qsecofr 0 12288 Sep 22 20:47 .
drwxr-sr-x 7 qsecofr 0 24576 Sep 22 17:31 ..
-rw------- 1 qsecofr 0 2602 Sep 22 20:47 id_rsa
-rw-r--r-- 1 qsecofr 0 565 Sep 22 20:47 id_rsa.pub
9. Before copying the SSH key to the managed hosts, install sshpass, a tool that facilitates
password authentication in both interactive and non-interactive modes. Use the following
command shown in Example 3-39 to install sshpass.
Dependencies Resolved
===========================================================================================
========================
Package Arch Version
Repository Size
===========================================================================================
========================
Installing:
sshpass ppc64 1.06-1 ibm
30 k
Transaction Summary
===========================================================================================
========================
Install 1 Package
Installed:
sshpass.ppc64 0:1.06-1
Complete!
For an IBM i system with Ansible Controller installed, Table 3-4 outlines recommended
hardware specifications based on the number of managed nodes:
However, there are some basic setup considerations for each of the LPARs that will be Ansible
clients. The obvious one is to ensure that SSH is installed and available. All supported operating
systems that run on IBM Power support SSH, but there are some considerations for ensuring that
it is installed correctly which can vary by operating system. Additionally, Ansible requires Python
be installed, and again this process differs depending on the operating system used in your client.
The next sections describe the recommended actions to prepare your Ansible client based on the
operating system used.
The following tasks will help avoid delays caused by dns resolution, or ssh timeouts:
1. In /etc/ssh/sshd_config, make the following changes:
Uncomment GSSAPIAuthentication no
Uncomment GSSAPICleanupCredentials yes
Uncomment UseDNS No
2. Add your Ansible-Core node to the /etc/hosts file if no dns is setup
3. Verify your python3 install as shown in Example 3-40.
4. Create a user for connection or set up ssh keys if you want to use passwordless access.
First, to avoid delays caused by dns resolution or ssh timeouts, check the points below:
1. Modify /etc/ssh/sshd_config
Uncomment GSSAPIAuthentication no
Uncomment GSSAPICleanupCredentials yes
Uncomment UseDNS No
2. Add your Ansible-Core node to the /etc/hosts file if no dns is setup, check your
/etc/netsvc.conf
3. Create a user for connection (we used ansible) or set up ssh keys if you want to use
passwordless access.
4. Verify your python3 install. Considerations for the installation of python3 are discussed in
“Python installation considerations”.
To start, you can verify if python3 is installed using the dnf command shown in Example 3-41.
Starting with AIX 7.3, Python is a standard part of the AIX distribution. You can check if your
specific AIX installation has python installed by using lslpp command shown in Example 3-42
on page 148.
On AIX versions earlier than 7.3, we recommend the use of python from the AIX Toolbox for
Open Source Software. We described the process of DNF installation in the section 3.3.2, “AIX
as an Ansible controller” on page 132. Python is installed together with DNF.
Depending on how you installed Python, the main python binary can be either
/opt/freeware/bin/python3 or /usr/bin/python3. For the sake of simplicity and standardization,
you should choose one standard path to access python3 across your whole environment. If you
use /usr/bin/python3 you will not need to make any other changes in your Ansible playbooks,
but if you choose another location, then your future playbooks or inventories must define
the variable ansible_python_interpreter with the full path to python3 for your clients.
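For example, if you standardize on the AIX Toolbox interpreter, the variable can be set once for a whole inventory group instead of in every playbook. The group and host names in this fragment are placeholders used only for illustration:
[aix]
aixlpar01
aixlpar02

[aix:vars]
ansible_python_interpreter=/opt/freeware/bin/python3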
Wherever you install python, be sure that you have updated your PATH settings. It is also
recommended that you create links to python in /usr/bin/ as shown in Example 3-43.
Note: To determine if these Licensed Program Products (LPPs) are already installed, you
can use the following SQL queries from a 5250 terminal:
STRSQL
select * from QSYS2.SOFTWARE_PRODUCT_INFO where product_id = '5733SC1';
select * from QSYS2.SOFTWARE_PRODUCT_INFO where product_id = '5770DG1';
If they are not installed, you can download them from this site. For download and
installation instructions see this IBM Support Site.
2. Check the open source packages and ensure that Python 3.6 or later is available by issuing
the following commands:
yum search python | grep 3
python --version
Note: If they are not installed, install the required Python packages:
yum install python3 python3-itoolkit python3-ibm_db
3. To automatically start SSH after an Initial Program Load (IPL), issue the following
command:
system "chgtcpsvr svrspcval(*sshd) autostart(*yes)"
4. Make sure the home directory exists for the user defined as the Ansible user. To create a
home directory on the IBM i LPAR to store the user's SSH-related objects, issue the
command:
mkdir -p /home/<user>/.ssh
where <user> needs to be changed to your user ID.
5. Transfer the previously generated public key shown in Example 3-38 on page 144, named
id_rsa.pub, from the Ansible Controller on IBM i to the managed nodes or endpoints.
Example 3-44 illustrates the process.
Example 3-44 Transferring the public key from control node to the managed node
# ssh-copy-id [email protected]
/QOpenSys/pkgs/bin/ssh-copy-id: INFO: Source of key(s) to be installed:
"/HOME/QSECOFR/.ssh/id_rsa.pub"
The authenticity of host '192.168.1.100 (192.168.1.100)' can't be established.
ECDSA key fingerprint is SHA256:lno5PMGRhgupc23tpStiFRE4cPxVEmpZ/dmN1kfJ1RQ.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/QOpenSys/pkgs/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter
out any that are already installed
/QOpenSys/pkgs/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are
prompted now it is to install the new keys
[email protected]'s password: “type the password”
sh: test: argument expected
Number of key(s) added: 1
6. Ensure that the authorized SSH key has been successfully transferred to the managed
node. To confirm, perform a passwordless login to the managed host as shown in
Example 3-45.
This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
qsecofr@MANAGED:~#
Note: This inventory can be edited to fit your needs. The file named hosts_ibmi_ini in this
directory can be utilized for this purpose.
Check Inventory configuration: To verify your current inventory configuration, use the
following command:
ansible-inventory --list -y
Tip: Building a comprehensive inventory ensures that Ansible can effectively manage the
specified nodes and endpoints. This foundation is essential for orchestrating automation
and configuration tasks across your infrastructure.
Good practice: For the purpose of demonstration and general understanding in this
chapter, we have utilized the QSECOFR user profile. However, it is strongly advised
against using the QSECOFR user profile in conjunction with SSH on IBM i. This
recommendation applies to both the Ansible controller and the managed nodes or
endpoints. It is recommended to create a new user profile with comparable authority levels.
For this new user profile, ensure the creation of a HOME directory on the IBM i system.
Additionally, configure appropriate permissions for the user's profile HOME directory by
executing the chmod 755 command. Modify the ownership of the HOME directory to match
the SSH user. Integrate the HOME directory into the user profile.
In alignment with Ansible's prerequisites for generating and copying public keys, it is crucial
to establish a dedicated directory named .ssh within the user's HOME directory. Configure
permissions for this .ssh directory using the chmod 700 command to ensure the appropriate
level of security.
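As an illustration of the steps in this note, the following sequence creates a dedicated profile and prepares its home directory. The profile name ANSIBLE is only an example; the CRTUSRPRF command is run from a 5250 session, and the remaining commands are run from a PASE shell:
CRTUSRPRF USRPRF(ANSIBLE) USRCLS(*SECOFR) HOMEDIR('/home/ansible') TEXT('Ansible automation user')
mkdir -p /home/ansible/.ssh
chmod 755 /home/ansible
chmod 700 /home/ansible/.ssh
chown -R ansible /home/ansible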
Furthermore, even the Ansible controller or control node itself can be incorporated as a
managed host within the inventory file. This functionality enhances the efficiency of executing
routine tasks. The subsequent examples shed light on the practical applications of Ansible
Ad-Hoc commands:
Verifying the readiness of all inventory hosts for management by the Ansible controller can
be done by running the command shown in Example 3-47.
On the other hand, playbooks are ideal for more intricate configurations or orchestration
scenarios, often involving a series of tasks that collectively execute a larger action using
modules. In the context of IBM i modules, common core modules such as find are supported.
Playbooks are particularly well-suited for various DevOps practices. Some typical scenarios
where Playbooks excel include IBM i configuration management, orchestrating IBM i
deployments, and managing tasks after deployments have taken place.
Example 3-49 on page 153 provides a comprehensive playbook sample for IBM i that
showcases various YAML syntax features.
Note: You can ensure consistent and dependable automation outcomes by grasping the
principle of idempotency and its implementation within Ansible modules on IBM i. This
awareness permits administrators to utilize automation confidently while safeguarding
against unintended consequences, ensuring the stability and integrity of your systems.
To harness the power of these modules, it is essential to understand their parameters and data
types. Each module is accompanied by its own set of parameters, each tailored to cater to
distinct functionalities. These parameters dictate the behavior and configuration of the module
during execution. Whether you are creating, modifying, or managing resources on the IBM i,
mastering the usage of these parameters ensures efficient and accurate task execution.
For comprehensive guidance on the usage of these modules, referring to the official
documentation is crucial. This documentation, accessible at IBM i modules, provides in-depth
insights into each module's capabilities, parameter details, and usage examples. By exploring
the documentation, IBM i administrators and developers can unlock the full potential of
Ansible's power_ibmi modules, making way for smoother, more efficient administration and
deployment processes.
However, it is important to recognize that the ibmi_cl_command module has specific boundaries.
Unlike a conventional 5250 emulator, this module does not facilitate interaction with menus or
commands in the same way. Instead, its primary purpose is to efficiently execute CL commands
in a controlled and automated manner.
A significant feature of this module is its adaptability regarding user context. The module can be
configured to run either as the user establishing the SSH connection, usually an administrator, or
as a designated user using the “become_user” and “become_user_password” parameters. This
adaptability ensures tasks are executed within the desired user context, accommodating various
operational scenarios.
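As a brief sketch of how these parameters fit together (the library name, user profile, and password variable below are placeholders and are not taken from this chapter), a task that runs a CL command under a designated user profile might look like the following:
- name: Create a library while running as a designated user profile
  ibm.power_ibmi.ibmi_cl_command:
    cmd: CRTLIB LIB(DEMOLIB) TEXT('Library created by Ansible')
    become_user: ANSIBLE
    become_user_password: "{{ ibmi_become_password }}"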
A standout feature of the object_find module is its ability to conduct searches using a
comprehensive range of parameters. These parameters permit attributes such as object age,
size, name, type, and library, among others. This versatility grants administrators and
operators the means to precisely pinpoint objects within the IBM i environment, catering to a
multitude of scenarios and requirements.
Behind the scenes, the object_find module relies on the integration of IBM i's
QSYS2.OBJECT_STATISTICS and QSYS2.SYS_SCHEMAS views. This integration forms the
backbone of the module's efficiency, allowing it to swiftly retrieve pertinent information and
present it in a coherent and actionable manner.
By using the object_find module, administrators and operators can swiftly navigate and
extract valuable insights from their IBM i systems. The module's ability to execute nuanced
searches enhances the management and automation of tasks, providing a versatile solution
to address diverse operational needs. Example 3-51 shows an example of the module
described on this section.
One of the core strengths of the ibmi_sql_query module is its seamless execution of SQL
queries. By interfacing with DB2 for i, users can tap into a treasure trove of insights housed
within tables, views, and various SQL Services. These insights span crucial aspects of system
functioning, offering a comprehensive view of system health, resource allocation, and
performance metrics.
Furthermore, the ibmi_sql_query module extends its functionality by allowing users to specify
an expected_row_count parameter. This added feature permits administrators to fine-tune their
querying tasks, enabling them to define expectations for query results. If the actual result set
does not meet the defined expectations, the module can be configured to trigger a task failure,
providing an automated quality assurance mechanism.
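A hedged sketch of this behavior is shown below; the view queried and the row count of 1 are illustrative assumptions, and the task fails if the result set does not contain exactly one row:
- name: Verify that the system status view returns exactly one row
  ibm.power_ibmi.ibmi_sql_query:
    sql: "SELECT * FROM QSYS2.SYSTEM_STATUS_INFO"
    expected_row_count: 1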
By integrating the capabilities of SQL querying into Ansible workflows, the ibmi_sql_query
module bolsters the toolkit for IBM i administrators and operators. This module's ability to delve
into the inner workings of DB2 for i enhances the precision of system monitoring, diagnostics,
and optimization. As a result, Ansible users can navigate the intricacies of their IBM i
environments with greater efficiency and depth of insight. Example 3-52 displays a sample using
the SQL query module.
The Virtual I/O Server (VIOS) is part of the PowerVM hardware feature. The VIOS is software
that is located in a logical partition running in your IBM Power server and provides virtualization
functionality for the other LPARs running in that server. The VIOS provides virtual Small
Computer Serial Interface (SCSI) target, virtual Fibre Channel, Shared Ethernet Adapter, and
PowerVM Active Memory Sharing capability to client logical partitions within the system. The
VIOS also provides the Suspend/Resume feature to AIX, IBM i, and Linux client logical partitions
within the system. You can use the VIOS to perform the following functions:
– Sharing of physical resources between logical partitions on the system
– Creating logical partitions without requiring additional physical I/O resources
– Creating more logical partitions than there are I/O slots or physical devices available with
the ability for logical partitions to have dedicated I/O, virtual I/O, or both
– Maximizing use of physical resources on the system
– Helping to reduce the storage area network (SAN) infrastructure
The Virtual I/O Server is configured and managed through a command-line interface. All aspects
of Virtual I/O Server administration can be accomplished through the command-line interface,
including the following:
– Device management (physical, virtual, logical volume manager (LVM))
– Network configuration
– Software installation and update
– Security
– User management
– Maintenance tasks
Starting with VIOS 4.1.0.10, python is included in the VIOS image. If you are using VIOS
levels earlier than that you will need to set up python. Example 3-54 shows that a simple
Ansible command is unsuccessful because python is not natively installed on the VIOS prior
to version 4.1.0.10.
Example 3-54 Ansible command failure because Python is not installed on the VIOS
[root@ansible-AAP-redbook playbooks]# cat facts.yml
---
- hosts: all
  remote_user: ansible
  tasks:
    - name: print facts
      debug:
        var: ansible_facts
[root@ansible-AAP-redbook playbooks]# ansible-playbook -i vios_inventory.yml facts.yml
--ask-pass
SSH password:
[root@ansible-AAP-redbook playbooks]
We opted to do option 2 and we created the playbook dnf-vios.yml to do the install. This is
shown in Example 3-55.
# cat dnf-vios.yml
- hosts: all
  remote_user: ansible
  gather_facts: no
  become: true
  become_method: su
  tasks:
    - name: download dnf_aixtoolbox.sh
      get_url:
        url: https://public.dhe.ibm.com/aix/freeSoftware/aixtoolbox/ezinstall/ppc/dnf_aixtoolbox.sh
        dest: /tmp/dnf_aixtoolbox.sh
        mode: 0755
      delegate_to: localhost
    - name: copy dnf_aixtoolbox.sh to vios
      raw: "scp -p /tmp/dnf_aixtoolbox.sh {{ ansible_user }}@{{ inventory_hostname }}:/tmp/dnf_aixtoolbox.sh"
      delegate_to: localhost
    - name: execute dnf_aixtoolbox.sh on vios
      raw: "chmod 755 /tmp/dnf_aixtoolbox.sh ; /tmp/dnf_aixtoolbox.sh -y"
PLAY [all]
*******************************************************************************************
*****
changed: [vio1]
changed: [vio1]
PLAY RECAP
*******************************************************************************************
*****
Now we have dnf installed on our VIOS, which is required to install Python.
There will be multiple VIOS in your environment as it is suggested for high availability that
there be at least two VIOS per server frame – acting as an active/active failover solution. If
you have additional requirements for separation of environments for security, there may be
additional VIOS on your server and there will almost certainly be multiple servers in your
environment so there will be many VIOS to manage.
We need to be sure we can access all of the VIOS LPARs. We can manually create the user
on each VIOS, or we just can use the power of Ansible. We chose to do it with Ansible and we
created a shell script as shown in Example 3-56 for this purpose.
if { $argc != 3 } {
send_error "Usage : $argv@ ssh-connection ssh-password user-password\n"
exit 1
log_user 1
set timeout 60
send "oem_setup_env\r"
send "exit\r"
send "exit\r"
The script shown above will be used as part of our playbook to run the user creation on every
VIOS server.
We have the script, now we’ll build an inventory to run that playbook as shown in
Example 3-57.
The playbook we used to create the ansible user on each VIOS is shown in Example 3-58.
Now that we have our users created and our inventory built, we can install Python on all of the
partitions with a script similar to the one we used for creating the users. This is shown in Example 3-59.
if { $argc != 3 } {
send_error "Usage : $argv@ ssh-connection ssh-password user-password\n"
exit 1
log_user 1
set timeout 60
send "oem_setup_env\r"
Now that our VIOS clients are ready for use, we’ll walk through a set of use cases for the
VIOS collection. These include:
– VIOS upgrade
– VIOS backup
– VIOS mapping
– VIOS hardening
First we need to validate that our collection is installed and ready. In Example 3-60 we
validate the versions of the collections that are installed.
# /root/.ansible/collections/ansible_collections
Collection Version
----------------- -------
community.general 6.6.0
ibm.aix *
ibm.power_aix 1.6.4
ibm.power_hmc 1.8.0
ibm.power_vios 1.2.3
[root@ansible-AAP-redbook ~]#
VIOS upgrade
With our collections installed and ready, we'll start using them to write some powerful Ansible
scripts for our servers. Example 3-61 shows a playbook used to update the VIOS software.
- hosts: all
  remote_user: ansible
  gather_facts: no
  collections:
    - ibm.power_aix
    - ibm.power_vios
  roles:
The magic behind the playbook is in the roles. The following example shows details of the
update_vios role.
However, the vios-update playbook shown in Example 3-61 is not complete as it should
handle a couple more issues:
– Commit can fail with RC = 19 or 20 which means “Already committed”.
– After the update, you should reboot your VIOS.
You can just manually execute the updateios playbook on the command line and ignore
some return codes as shown in Example 3-63.
Backup VIOS
Now let us look at some playbooks to back up your VIOS. These run viosbr, which is basically
an mksysb equivalent for VIOS servers. The Ansible playbook shown in Example 3-64 will
execute viosbr and direct the output to an NFS share location.
tasks:
  - name: get current date
    ansible.builtin.shell: "date +%Y%m%d"
    register: ourdate
    delegate_to: localhost
VIOS mapping
We can use an Ansible playbook to get facts about our VIOS servers, such as the
vios-facts.yml script shown below:
tasks:
  - name: get mapping facts
    ibm.power_vios.mapping_facts:
  - name: print facts
    ansible.builtin.debug:
      var: ansible_facts
VIOS hardening
Our last example script, Example 3-66, is written to set the security level for the VIOS.
As you saw in this chapter, there is a lot you can do to manage your VIOS using a mix
of the HMC, AIX, and VIOS collections.
Ansible will be of more help if you are deploying your OpenShift cluster on PowerVS, on
PowerVM managed by PowerVC, or on Power servers managed by KVM. These are the
scenarios we will describe in this section. The environment is shown in Figure 3-22 on
page 165.
You must have an IBM Cloud account to deploy PowerVS resources. To ensure that your
PowerVS instance is capable of deploying OpenShift clusters, make sure that your account
has the proper permissions and validate that you have created the appropriate security
certificates.
Detailed prerequisites
This section helps you ensure that your IBM Cloud account is set up and that you will be able
to create the PowerVS instance for your OpenShift cluster. The following steps will help you set
up your environment:
1. Validate that you have an IBM Cloud account to create your Power Systems Virtual Server
instance. If you don’t already have one, you can create an account at:
https://cloud.ibm.com.
2. Create an IBM Cloud account API key. For information on setting up your API key
reference IBM Power Virtual Server Guide for IBM AIX and Linux, SG24-8512 or see the
IBM Cloud documentation.
3. After you have an active IBM Cloud account, you can create a Power Systems Virtual
Server service, you can reference IBM Power Virtual Server Guide for IBM AIX and Linux,
SG24-8512 for more details.
4. Next, request an OpenShift pull secret. Download the secret from
https://cloud.redhat.com/openshift/install/power/user-provisioned. You will need to
place the file in the install directory and name it pull-secret.txt. You will also need a RHEL
subscription ID and password.
The PowerVS instance will require some special network permissions to ensure inbound
access is allowed for TCP ports for ssh (22) and to allow outbound access for http (80), https
(443), and OC CLI (6443). This is only required when using a Cloud instance or a remote VM
so that you can connect to it using SSH and run the installer.
1. Create an install directory where all the configurations, logs and data files will be stored.
[root@ansible-AAP-redbook ~]# mkdir ocp-install-dir && cd ocp-install-dir
2. Download the script on your system and change the permission to execute as shown in
Example 3-67.
3. Running the script without parameters displays the help information for the script as
shown in Example 3-68.
Usage:
openshift-install-powervs [command] [<args> [<value>]]
Available commands:
setup Install all the required packages/binaries in current directory
variables Interactive way to populate the variables file
create Create an OpenShift cluster
destroy Destroy an OpenShift cluster
output Display the cluster information. Runs terraform output [NAME]
access-info Display the access information of installed OpenShift cluster
help Display this information
Where <args>:
-var Terraform variable to be passed to the create/destroy command
-var-file Terraform variable file name in current directory. (By default using
var.tfvars)
-flavor Cluster compute template to use eg: small, medium, large
-force-destroy Not ask for confirmation during destroy command
-ignore-os-checks Ignore operating system related checks
-ignore-warnings Warning messages will not be displayed. Should be specified first,
before any other args.
-verbose Enable verbose for terraform console messages
-all-images List all the images available during variables prompt
-trace Enable tracing of all executed commands
-version, -v Display the script version
Environment Variables:
IBMCLOUD_API_KEY IBM Cloud API key
RELEASE_VER OpenShift release version (Default: 4.13)
ARTIFACTS_VERSION Tag or Branch name of ocp4-upi-powervs repository (Default: main)
[root@ansible-AAP-redbook ocp-install-dir]#
4. Set up your environment by exporting the IBM Cloud API Key and RHEL Subscription
Password as shown in Example 3-69.
Advanced Usage
Before running the script, you may choose to override some environment variables depending
on your requirements. By default OpenShift version 4.12 is installed. If you want to install
4.11, then export the following variables:
$ export RELEASE_VER="4.11"
ARTIFACTS_VERSION: Tag/Branch (eg: release-4.11, v4.11, main) of
ocp4-upi-powervs repository. Default is "main".
$ export ARTIFACTS_VERSION="release-4.11"
Non-interactive mode
You can avoid the interactive mode by having the required input files available in the install
directory. The required input files are:
1. SSH key files (filename: id_rsa & id_rsa.pub)
2. Terraform vars file (filename: var.tfvars). An example is shown in Example 3-70.
You can also pass a custom Terraform variables file using the option `-var-file <filename>`
to the script. You can also use the option `-var "key=value"` to pass a single variable. If the
same variable is given more than once then precedence will be from left (low) to right (high).
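For instance, combining these options, an invocation such as the following creates a cluster using a custom variables file, a single variable override, and a predefined compute flavor. The file name custom.tfvars and the variable shown are placeholders only:
$ ./openshift-install-powervs create -var-file custom.tfvars -var "cluster_id_prefix=rb" -flavor small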
For a flowchart of how the script and process works, please take a look at Figure 3-23 on
page 169.
The automation code for this process is provided by the ocp4-upi-powervm project, which
delivers Terraform-based automation to help deploy OpenShift Container Platform (OCP) 4.x
on PowerVM systems managed by PowerVC. This project leverages the same Ansible
playbook internally for OCP deployment on PowerVM LPARs managed via PowerVC.
If you are not using PowerVC, but instead are using standalone PowerVM LPAR management,
then this guide walks you through the process of using the Ansible playbook to set up a helper
node (bastion) to simplify the OCP deployment.
PowerVC Prerequisites
As part of the install process, you will need to create a Red Hat CoreOS (RHCOS) image and a
RHEL 8.2 (or later) image in PowerVC. The RHEL 8.x image will be installed on the bastion
node and the RHCOS image is installed on the bootstrap, master, and worker nodes.
– For RHCOS image creation, follow the steps documented in this document.
– For RHEL image creation follow the steps mentioned in this document. You may either
create a new image from ISO or use a similar method to that used for the RHCOS option.
Compute Templates
You'll need to create compute templates for bastion, bootstrap, master and worker nodes. The
recommended LPAR configurations are:
Bootstrap: 2 vCPUs, 16 GB RAM, 120 GB disk
Master: 2 vCPUs, 32 GB RAM, 120 GB disk
Worker: 2 vCPUs, 32 GB RAM, 120 GB disk
Bastion: 2 vCPUs, 16 GB RAM, 200 GB disk
PowerVM LPARs by default use SMT8, so with 2 vCPUs the number of logical CPUs as seen
by the operating system will be 16 (2 vCPUs x 8 SMT).
All further instructions provided here assume that you are in the code directory
ocp4-upi-powervm.
Start Install
Run the following commands from within the directory:
$ terraform init
$ terraform apply -var-file var.tfvars
If using environment variables for sensitive data, then do the following, instead.
$ terraform init
$ terraform apply -var-file var.tfvars -var user_name="$POWERVC_USERNAME" -var
password="$POWERVC_PASSWORD" -var
rhel_subscription_username="$RHEL_SUBS_USERNAME" -var
rhel_subscription_password="$RHEL_SUBS_PASSWORD"
Now wait for the installation to complete. It may take around 40 minutes to complete
provisioning.
install_status = COMPLETED
master_ips = [
"192.168.25.147",
"192.168.25.176",
]
oc_server_url = https://test-cluster-9a4f.mydomain.com:6443
storageclass_name = nfs-storage-provisioner
web_console_url = https://console-openshift-console.apps.test-cluster-9a4f.mydomain.com
worker_ips = [
"192.168.25.220",
"192.168.25.134",
]
If you are using a wildcard domain name like nip.io or xip.io, then etc_hosts_entries is empty as
shown in Example 3-74.
cluster_id = test-cluster-9a4f
etc_hosts_entries =
install_status = COMPLETED
master_ips = [
"192.168.25.147",
"192.168.25.176",
]
oc_server_url = https://test-cluster-9a4f.16.20.34.5.nip.io:6443
storageclass_name = nfs-storage-provisioner
web_console_url =
https://console-openshift-console.apps.test-cluster-9a4f.16.20.34.5.nip.io
worker_ips = [
"192.168.25.220",
"192.168.25.134",
]
This information can be retrieved anytime by running the following command from the root
folder of the code:
$ terraform output
In case of any errors, you'll have to re-apply. Please refer to known issues to get more details on
potential issues and workarounds.
Post Install
Once the deployment is completed successfully, you can safely delete the bootstrap node. This
step is optional but recommended so as to free up the resources used. To delete the bootstrap
node change the count value to 0 in bootstrap map variable and re-run the apply command.
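As a hedged illustration only (the memory and processor values below are placeholders; the relevant change is the count value), the bootstrap entry in var.tfvars would be edited to read similar to the following before re-running terraform apply:
bootstrap = { memory = "16", processors = "0.5", "count" = 0 }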
Tip: For Linux and Mac, the hosts file is located at /etc/hosts, and for Windows it is located at
c:\Windows\System32\Drivers\etc\hosts.
The general format is shown in Example 3-75. The entries for your installation were printed at
the end of your successful install. Alternatively, you can retrieve them anytime by running terraform
output from the install directory. Append these values to the hosts file.
Cluster Access
Once your cluster is up and running, you can log in to the cluster using the OpenShift login
credentials stored on the bastion host. The location will be printed at the end of a successful
install, or you can retrieve it anytime by running terraform output from the install directory.
An example is shown in Example 3-76.
Note: Ensure you securely store the OpenShift cluster access credentials. If desired delete
the access details from the bastion node after securely storing them elsewhere.
You can copy the access details to your local system, or a secure password vault:
$ scp -r -i data/id_rsa [email protected]:~/openstack-upi/auth/\*
Clean up
If you are finished with your cluster and want to destroy it, do so by running:
terraform destroy -var-file var.tfvars
This ensures that all resources are properly cleaned up. Do not manually clean up your
environment unless both of the following are true:
– You know what you are doing.
– Something went wrong with an automated deletion.
This involves installing Terraform and creating install images for RHCOS and RHEL.
LibVirt Prerequisites
KVM virtualization relies on libvirt to work, so as prerequisites for installing OpenShift on
KVM, you’ll need to follow the steps detailed at:
https://ocp-power-automation.github.io/ocp4-upi-kvm/docs/libvirt-host-setup/
All further instructions assume you are in the code directory ocp4-upi-kvm.
Start Install
Run the following commands from within the directory.
$ terraform init
$ terraform apply -var-file var.tfvars
If using environment variables for sensitive data, then do the following, instead.
$ terraform init
$ terraform apply -var-file var.tfvars -var
rhel_subscription_username="$RHEL_SUBS_USERNAME" -var
rhel_subscription_password="$RHEL_SUBS_PASSWORD"
Now wait for the installation to complete. It may take around 40 minutes to complete
provisioning.
Example 3-78
bastion_ip = 192.168.61.2
bastion_ssh_command = ssh [email protected]
bootstrap_ip = 192.168.61.3
cluster_id = test-cluster-9a4f
etc_hosts_entries =
192.168.61.2 api.test-cluster-9a4f.mydomain.com
console-openshift-console.apps.test-cluster-9a4f.mydomain.com
integrated-oauth-server-openshift-authentication.apps.test-cluster-9a4f.mydomain.com
oauth-openshift.apps.test-cluster-9a4f.mydomain.com
prometheus-k8s-openshift-monitoring.apps.test-cluster-9a4f.mydomain.com
grafana-openshift-monitoring.apps.test-cluster-9a4f.mydomain.com
bolsilludo.apps.test-cluster-9a4f.mydomain.com
install_status = COMPLETED
master_ips = [
"192.168.61.4",
"192.168.61.5",
"192.168.61.6",
]
oc_server_url = https://api.test-cluster-9a4f.mydomain.com:6443/
storageclass_name = nfs-storage-provisioner
web_console_url = https://console-openshift-console.apps.test-cluster-9a4f.mydomain.com
worker_ips = []
These details can be retrieved anytime by running the following command from the root folder
of the code:
$ terraform output
Post Install
After installation you need to do the following:
– Once the deployment is completed successfully, you can safely delete the bootstrap node.
This step is optional but recommended so as to free up the resources used. The process
is shown in “Clean up” on page 173.
– Configure the DNS as explained in “Create API and Ingress DNS Records” on page 172.
8. Next click on “User Properties” to enable remote access as shown in Figure 3-25.
9. Depending on your security requirements, you may need to modify the access privileges
that define which resources (frames and LPARs) can be managed by your Ansible user.
This information is defined in the user roles; for each role we can define access to a
reduced set of frames, and on those frames, a reduced set of machines. To illustrate this,
in Figure 3-26 we created a role that is only allowed to manage a reduced set of frames
and LPARs.
10.If you define a new role after creating the user, you can just modify the user and select the
new role as shown in Figure 3-27.
At this point, we have set up a user with a set of permissions to run our Ansible scripts,
either through SSH or through the IBM Power HMC collection.
Along with the installation guide, we are providing some hints and tips that we collected
during our testing.
1. Prior to running the install, make sure that you have set up python3.9 as the default for python
and pip3 as shown in Example 3-79.
2. Run ansible-galaxy.
If you are behind a proxy you might need to run the ansible command with the
--ignore-certs option. The collection installation progress will be output to the console.
Note the location of the installation so that you can review other content included with the
collection, such as the sample playbook. This is shown in Example 3-80.
If you want to install into a specific directory, you can run the install with the -p parameter to
specify the install directory:
[root@ansible-AAP-redbook ~]# ansible-galaxy collection install
ibm.power_hmc --ignore-certs -p /home/ansible/collections
If everything finished successfully, you can check the install directory (instead of /root, the path
will include the home directory of the user you used to run the install).
[root@ansible-AAP-redbook ~]# pwd
/root/.ansible/collections/ansible_collections/ibm/power_hmc/
Example 3-81 shows the content of the install directory in our example install.
Example 3-82 shows an example of using the Ansible built in shell to run cli commands on the
HMC.
[hmcs:vars]
ansible_user=ansible
hmc_password=xxxxxx
tasks:
  - name: Get Information about the Frames Installed on the HMC
    ansible.builtin.shell:
      cmd: "sshpass -p {{ hmc_password }} ssh {{ ansible_user }}@{{ inventory_hostname }} lssyscfg -r sys"
    register: config
[root@ansible-AAP-redbook playbooks]#
In our playbook execution, the inventory has three HMCs but we only created the user with
special permissions on one of them, hmc3. When we ran the playbook, the others failed because
the user does not exist on them. The output from the execution is shown in Example 3-83.
TASK [Get Information about the Frames Installed on the HMC] *******************
changed: [hmc3]
fatal: [hmc5]: FAILED! => {"changed": true, "cmd": "sshpass -p XXXXXX ssh ansible@hmc5
lssyscfg -r sys", "delta": "0:00:01.867500", "end": "2023-09-18 15:42:07.700107", "msg":
"non-zero return code", "rc": 5, "start": "2023-09-18 15:42:05.832607", "stderr": "",
"stderr_lines": [], "stdout": "", "stdout_lines": []}
fatal: [hmc4]: FAILED! => {"changed": true, "cmd": "sshpass -p XXXXXX ssh ansible@hmc4
lssyscfg -r sys", "delta": "0:00:01.843471", "end": "2023-09-18 15:42:07.787177", "msg":
"non-zero return code", "rc": 5, "start": "2023-09-18 15:42:05.943706", "stderr": "",
"stderr_lines": [], "stdout": "", "stdout_lines": []}
TASK [Save the Output, LPAR List for each Frame] *******************************
changed: [hmc3]
[root@ansible-AAP-redbook playbooks]#
This playbook was just an example of using the ssh connection to run cli commands. There
are many other options using the HMC collection that provide an extended set of features.
The dynamic inventory plugin facilitates the consolidation of the partitions into various groups,
and the groups can be further fine-tuned according to partition properties. That means you can
now dynamically group and manage these partitions according to data center policy.
An example use case can be to apply specific patches only to a targeted version of AIX
partitions by using the respective group created by this dynamic inventory management plugin.
The playbook shown in Example 3-84 is an example of dynamically creating a group
definition from the LPARs being managed. The playbook dynamically creates a group of all
the running AIX partitions with OS version 7.2. This can be used as an inventory input to
perform patching or for OS upgrades.
Example 3-85
- name: Upgrade the HMC from 910 to 950
  hosts: hmcs
  collections:
    - ibm.power_hmc
  connection: local
  vars:
    curr_hmc_auth:
      username: <hmc_username>
      password: <hmc_password>
  tasks:
    - name: Update the 910 HMC with 9.1.910.6 PTF
      hmc_update_upgrade:
        hmc_host: '{{ inventory_hostname }}'
        hmc_auth: '{{ curr_hmc_auth }}'
        build_config:
          location_type: ftp
          hostname: <FTP_Server_IP>
          userid: <FTP_Server_uname>
          passwd: <FTP_Server_pwd>
          build_file: HMC9.1.910.6/2010170040/x86_64/MH01857-9.1.910.6-2010170040-x86_64.iso
        state: updated
In this snippet of a playbook there are two tasks defined utilizing the hmc_update_upgrade
module. The first task uses a PTF image stored on an FTP server to update the HMC
running the 9.1.910 level, and the second task upgrades the HMC to the 9.2.950 level using an
image stored on an NFS server. Note that to update an HMC the task state should be defined as
updated, and to upgrade an HMC the task state should be defined as upgraded. Both states
support NFS, FTP, SFTP, and Disk (keeping the image on the controller node) HMC image
repositories.
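A hedged sketch of that second task follows; the NFS location_type value, server address, and image path are assumptions used only for illustration, so check the module documentation for the exact NFS-related build_config parameters:
    - name: Upgrade the HMC to the 9.2.950 level using an image stored on an NFS server
      hmc_update_upgrade:
        hmc_host: '{{ inventory_hostname }}'
        hmc_auth: '{{ curr_hmc_auth }}'
        build_config:
          location_type: nfs
          hostname: <NFS_Server_IP>
          build_file: <path_to_the_HMC_9.2.950_image_on_the_NFS_export>
        state: upgraded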
Additional examples can be found on this blog. More examples and samples are included in
the collection and can be found at https://ibm.github.io/ansible-power-hmc/index.html.
For application development, automation can help you build a modern continuous integration
and continuous delivery (CI/CD) pipeline, which provides a management structure
around your development cycle and allows you to be more agile by integrating application
changes quickly and safely.
For application and database environments like Oracle and SAP, there are often multiple
instances of those environments that need to be deployed and maintained, often doing
repetitive tasks like building virtual machines and applying updates to your existing
environments. Automation using Ansible can help your IT specialists perform those repetitive
functions in an efficient, managed and repeatable way – freeing up time to do other tasks that
provide better value to your business.
Ansible can also automate the installation and management of many middleware and
application products such as Oracle Database, SAP NetWeaver and SAP HANA workloads,
IBM DB2 databases, and Red Hat OpenShift deployments.
4.2.2 IBM AIX, IBM i, and Linux on Power Collections for Ansible
The IBM Power Systems AIX collection provides modules that can be used to manage
configurations and deployments of Power AIX systems. Similar functions are provided by the
Power IBM i collection and the Linux on Power collection. The content in these collections
helps to manage applications and workloads running on IBM Power platforms as part of an
enterprise automation strategy within the Ansible ecosystem. These collections are available
from Ansible Galaxy with community support or through Red Hat Ansible Automation Platform
with full support from Red Hat and IBM.
To show the power and utility of using Ansible to manage your applications in an IBM Power
environment, we provide three use cases:
– 4.3, “Deploying a simple Node.js application” on page 187 shows you how you can
automate application deployment in an AIX environment
– 4.4, “Orchestrating multi-tier application deployments” on page 188 points you to a
tutorial on orchestrating a two tier application with an application tier and a database tier.
– 4.5, “Continuous integration and deployment pipelines with Ansible” on page 188 shows
how you can use Ansible to automate a CI/CD environment for an application in an
IBM i environment.
vars:
  checkout_dir: 'tmp_nodejs'
tasks:
  - name: Install Node js
    command: /QOpensys/pkgs/bin/yum install nodejs10 -y
    ignore_errors: true
  - name: npm i
    shell: "(/QOpensys/pkgs/lib/nodejs10/bin/npm i --scripts-prepend-node-path)"
    args:
      warn: false
      chdir: '{{ checkout_dir }}/nodejs/mynodeapp'
      executable: /usr/bin/sh
The playbook first validates that the required infrastructure components (git and yum) are
available to retrieve and install the application. It then retrieves the application from the git
repository and has Ansible start the application on the server.
From an application orchestration standpoint, many prominent players have emerged in the
Kubernetes space, such as Helm, the Operator Framework, Kustomize, Automation Broker,
and the Ansible Kubernetes module.
IBM has developed a step by step tutorial showing the deployment and orchestration of a
sample WordPress application along with its dependent MySQL database using Ansible. The
tutorial can be found at the following link:
https://developer.ibm.com/tutorials/orchestrate-multi-tier-application-deployments-on-kubernetes-using-ansible-pt-3/
For IBM i users, Ansible emerges as a versatile solution for crafting CI/CD pipelines. It
interfaces with Version Control Solutions such as GitHub, facilitating tasks ranging from code
validation to final deployment. Ansible's utility spans beyond Continuous Testing, merging into
the broader domain of development processes. This adaptability benefits both native
applications and open-source software deployment on the IBM i platform.
The foundation of a CI/CD pipeline through Ansible takes shape as a series of interconnected
stages. Starting with the Development Environment, Ansible playbooks automate tasks such
as provisioning IBM i virtual machines, code retrieval, build and deployment, and branching
for new development cycles. This initial phase lays the groundwork for efficient development
practices.
In the second stage, Developers play a hands-on role in code modifications and unit testing.
In this phase, developers make code changes, conduct unit tests on the development IBM i
system, commit code alterations to specific branches, and create pull requests. This phase
demonstrates the collaborative essence of the development process.
As the CI/CD pipeline progresses, Ansible again takes center stage during the Testing phase.
Ansible playbooks orchestrate the setup of new IBM i virtual machines for testing, the
installation of dependencies, cloning development branches, and the execution of
comprehensive tests. Furthermore, Ansible assists in pull request approval and automated
merge processes, ensuring code integration. Upon test completion, Ansible manages
environment cleanup for ongoing efficiency.
The final phase, Deployment, encapsulates the core objective of delivering refined software to
production environments. Ansible playbooks manage cloning the latest code from master
branches to build systems, directing the build process, facilitating deployment to production
systems, and maintaining environment hygiene through cleanup procedures. This phase
marks the realization of the CI/CD pipeline, where developed and tested code becomes
accessible to end-users.
Figure 4-1 shows a diagram describing the four stages described above for CI/CD using
Ansible:
To display some sample code showing how to automate your CI/CD processes with Ansible,
go to “Full cycle of the CI/CD process on IBM i with Ansible” on page 346.
Note: Ansible's role within the CI/CD pipeline significantly streamlines IBM i application
development. Its automation capabilities extend across development environment creation,
manual developer tasks, testing orchestration, and deployment management. By aligning
with contemporary CI/CD practices, Ansible contributes to advancing IBM i software
development with heightened speed, dependability, and agility.
The single-node collection supports both JFS-based installation and ASM-based installation
while the Oracle RAC collection supports a multi-layered installation process. Such layers
include infrastructure provisioning with PowerVC, and grid setup configuration and
installation, as well as setup and installation of the database binaries.
If you are not using PowerVC to help you manage your environment, you can still use the
collection to install and setup the grid and database installation. Doing so requires manual
setup of the nodes as per Oracle RAC requirements.
The Oracle DBA collection provides a set of tools to enable database administrators to
automate a wide range of their administrative activities, such as patching the database
servers, creating new database instances, managing users, jobs, and tablespaces amongst a
wide range of operations.
For specifics on the use of each of these collections see the following:
4.6.2, “Automating deployment of a single node Oracle database with Ansible” on page 191.
4.6.3, “Automating deployment of Oracle RAC with Ansible” on page 195.
4.6.4, “Automating Oracle DBA operations” on page 206
Working together to provide services for their common clients, IBM and Oracle have
established an alliance that shows a shared commitment to the success of those clients. As a
result of this alliance, IBM and Oracle have over 80,000 joint clients that are provided with
enhanced hardware and software support. This is enabled via an in-depth certification of
Oracle database on AIX as a collective effort. The alliance provides an award-winning
services practice, with a diamond partnership, as a result of extensive technology
collaboration and cooperative client support.
For further details on why IBM Power Systems and AIX platform better suit hosting Oracle
database, you can refer to Oracle on IBM Power Systems, SG24-8485.
The Ansible power_aix_oracle galaxy collection contains artifacts (roles, vars, config files and
playbooks) that are usable for the single-node Oracle database installation. Those artifacts
have a number of prerequisites that need to be met prior to using the collection.
d. Disks in c on page 191 above should be clean of any old header data.
To check the header info use the following command:
lquerypv -h /dev/hdiskX
If you need to clear the disk’s pvid use this command:
chdev -l hdiskX -a pv=clear
To clear the header data, use this command:
dd if=/dev/zero of=/dev/hdiskX bs=1024k count=100
2. The Oracle database version currently supported on AIX is 19c. All power_aix_oracle
collection artifacts assume this version for the installation process. It can be downloaded
from Oracle edelivery website or Oracle Technology Network (OTN).
3. The power_aix_oracle collection uses the power_aix collection for some of the
configuration requirements, hence it needs to be installed as well on the Ansible server.
Note: If the LPAR has not been bootstrapped for Ansible before, it may be a good idea to
do so now.
Example 4-2 Installing the ibm.power_aix_oracle collection from Ansible Galaxy website
ansible-galaxy collection install ibm.power_aix_oracle
Process install dependency map
Starting collection install process
Installing 'ibm.power_aix_oracle:1.1.1' to
'/root/.ansible/collections/ansible_collections/ibm/power_aix_oracle'
Once the collection is installed successfully, all components such as roles, variables, and
playbooks are saved in the directory specified in the last output line of Example 4-2. You may
copy that directory to a working directory to update it with your variables and other
parameters to avoid changing the installed source.
Figure 4-3 shows the commands to copy the collection and then shows the contents of the
collection.
Figure 4-3 Copy the collection’s directory into a working directory and show its contents
The heavy lifting of the collection is done by the roles. There are three key roles which are
stored in the roles directory as seen in Figure 4-3 on page 192.
Example 4-3 A playbook that installs Oracle DB and creates an instance based on the vars file
- hosts: all
  gather_facts: yes
  vars_files: vars/oracle_params.yml
  roles:
    - role: preconfig
      tags: preconfig
    - role: oracle_install
      tags: oracle_install
    - role: oracle_createdb
      tags: oracle_createdb
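Assuming the playbook above is saved in your working copy of the collection and that your inventory contains the target AIX LPAR (the file name oracle_install.yml and the inventory name used here are placeholders), it would typically be run as follows:
ansible-playbook -i inventory oracle_install.yml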
To check on future updates of the collection, you may refer to its documentation site.
Figure 4-4 shows a high level overview of Oracle RAC installation on existing infrastructure.
Figure 4-5 on page 196 shows the same overview for both infrastructure provisioning and
Oracle RAC software installation automation.
1
https://www.oracle.com/qa/database/real-application-clusters/
Figure 4-5 Oracle RAC deployment topology with IBM PowerVC automating the infrastructure layer
Oracle RAC installation, on AIX and elsewhere, offers a wide range of complexities both at the
infrastructure layer setup and in the software installation requirements. At the infrastructure
layer the complexities encompass setting up the AIX nodes on hosts that meet the RAC
requirements. These requirements include setting kernel tunable parameters, setting network
attributes, setting shared disks attributes, and setting up ssh password-less access among
others. It is a tedious, repetitive and error-prone process when done manually. Similarly, the
manual process of the grid and database software installation is interactive and requires the
user's attentive presence for hours.
Luckily, Ansible automation comes to the rescue. The Ansible Oracle RAC collection
(ibm.power_aix_oracle_rac_asm), available in Ansible Galaxy and GitHub, simplifies the
installation of Oracle RAC 19c on the AIX operating system running on IBM Power servers by
automating both the infrastructure setup operation and the software installation and
configuration operation. It contains playbooks and a number of supporting roles and other
artifacts that automate both layers.
The infrastructure layer automation via the collection requires IBM PowerVC. If your
environment is not equipped with PowerVC, you can prepare the infrastructure manually and
then use the collection for the Oracle RAC software installation. Otherwise, you can use the
collection to automate both layers of the process.
Once the collection is installed successfully, all of its contents will be stored in the directory
shown in the last line of Example 4-4’s output above. Copy that directory to your workspace
and work with it. This way, the original collection’s directory will remain as an unchanged
reference.
Figure 4-6 shows the contents of the collection's directory after copying it to the workspace.
Figure 4-6 Copy the Oracle RAC installation collection directory to the workspace and show its
contents
Note: While the hdisks in items 1 and 2 are expected to be deployed by PowerVC as hdisk0
and hdisk1, the requirement is that each one of them has the same hdisk number
on both nodes. However, they do not have to be hdisk0 and hdisk1 respectively.
3. The OS installed on the image should be AIX 7.2 TL4 SP1 or newer or AIX 7.3.
4. The following filesets need to be installed on the AIX version prior to the process of
installing Oracle RAC software. While they may be installed on the nodes after they are
created, the process becomes much simpler if they are installed in the source LPAR
before capturing it as the PowerVC image:
– bos.adt.base
– bos.adt.lib
– bos.adt.libm
– bos.perf.libperfstat
– bos.perf.perfstat
– bos.perf.proctools
– bos.loc.utf.EN_US
– bos.rte.security
– bos.rte.bind_cmds
– bos.compat.libs
– xlC.aix61.rte
– xlC.rte
– rsct.basic.rte
– rsct.compat.clients.rte
– xlsmp.msg.EN_US.rte
– xlfrte.aix61
– openssh.base.client
– expect.base
– perl.rte
– Java8_64.jre
– dsm
5. The unzip RPM should be installed on the image. You can download the latest version
from here.
6. Update the image section in the vars/powervc.yml file in the collection with the image,
image_aix_version and image_password with the latter set to the AIX root password
value.
Note: While the Oracle RAC collection can support up to an 8-node cluster, it has been
extensively tested for a 2-node cluster. If you intend to use the collection to provision a
cluster with more than two nodes, then all variables files must be reviewed to update the
variables’ values accordingly.
Also note that while this collection documents using a single IP address for RAC scan
service, this is for testing purposes only. Oracle recommends that you use three IP
addresses for this service.
Note: When using PowerVC for provisioning the nodes, ensure that both nodeX_net_ports
variables in vars/powervc.yml file list the public port, private 1 and virtual 2 ports in this
sequence which will guarantee using en0, en1 and en2 respectively.
7. Update the network section of the vars/powervc.yml file, supplying the network names
and IP addresses in the variables named according to the function defined in Table 4-1 on
page 199 above.
8. Also update the vars/powervc.yml file with the following additional variables:
– DNS server and domain.
The DNS server should be capable of forward and reverse name resolution for all 5 IP
addresses labeled Yes in the last column of Table 4-1 on page 199.
– NTP server.
The time server is needed to keep the cluster operational. Alternatively, you will need
to ensure that the date and time are synchronized between the two nodes.
– NFS server and its export directory as well as the directory to be used as NFS mount
point in the nodes.
Oracle RAC installation binaries including the grid, database, OPatch and RU should
be stored in subdirectories under that export directory in the NFS server. Check to see
if a node deployed from the PowerVC image would be able to successfully mount the
export directory from the NFS server.
Disks of each diskgroup should be of the same size, characteristics and similar IO speed.
Namely, these four diskgroups are:
1. The OCRVOTE diskgroup stores OCR (Oracle Cluster Registry) and Voting disks
information.
2. The GIMR (Grid Infrastructure Management Recovery) diskgroup contains a multi-tenant
database (MGMTDB) with one pluggable database.
3. The ACFS (ASM Cluster File System) diskgroup is used for staging the Oracle database
home binaries.
4. The DATA diskgroup is used for staging the database files.
The Oracle RAC installer expects each disk to have the same hdisk number in all of the RAC
nodes.
Note: When using the PowerVC playbook from the collection to provision the nodes, it
guarantees that each has the same hdisk number across all nodes. It does so by creating
them one at a time and after creating each one, it attaches it to all nodes and then runs
cfgmgr to ensure it captured the next available sequential number for the hdisk in all nodes
before moving on to the next disk.
Table 4-2 on page 201 cross matches the above diskgroups and their corresponding variable
names, disks count and each disk’s size as set as default in the disks list of the
vars/powervc.yml file. It also shows the corresponding hdisk number as set in the
diskgroups variable in the vars/powervc_rac.yml file.
Diskgroup   Disk name variable            Disk count   hdisk numbers     Disk size (GB)
GIMR        “{{racName}}-GIMRx”           2            hdisk6, hdisk7    40
ACFS        “{{racName}}-ACFS-DBHome”     1            hdisk8            75
DATA        “{{racName}}-DBDiskx”         2            hdisk9, hdisk10   10
Keep in mind the following considerations as you update these variables in their variables file:
The diskgroups variable in the vars/powervc_rac.yml file lists the hdisk number of these
disks. With a PowerVC nodes’ deployment, the image has taken hdisk0 and hdisk1 as
described in “Setup PowerVC image for Oracle RAC” on page 198, consequently hdisk
number of each of these ASM disks would be as shown in the ‘hdisk number’ column in
Table 4-2 above. If you change the count of any of the diskgroups then you will need to
update the hdisk numbers in that diskgroups variable in the vars/powervc_rac.yml file.
You may change the disks’ sizes to meet your requirements. Ensure that all disks of a
given diskgroup have the same size.
The vol_size_GB variable in vars/powervc_rac.yml is set to 75 GB based on the ACFS
disk size being 75 GB. If you change that disk’s size in vars/powervc.yml then update that
variable in vars/powervc_rac.yml file as well.
Note: When using the PowerVC playbook from the collection to provision the nodes, it
guarantees that all non-rootvg disks' headers are clear and have no PVID because it carves
them fresh from the storage subsystem.
For manual setup of the nodes, use chdev -l hdiskX -a pv=clear to clear the PVID. Then
use lquerypv -h /dev/hdiskX to check if the header is clear. If it is not, then use dd
if=/dev/zero of=/dev/hdiskX bs=1024k count=100 to clear it.
powervc_add_nodes_to_inventory
This role updates the inventory file with the nodes and additional parameters to set it up
for Ansible management and then prepares the environment for execution of the second
playbook that is responsible for grid and database software installation.
When you build the infrastructure manually, installation of the grid and database software
requires updating the vars/rac.yml file with the values of the required variables and including
only the vars/rac.yml file in the playbook. If you automate the infrastructure provisioning with
PowerVC as described in the previous section, then you would have populated those variables
in the vars/powervc.yml file, which are then referenced in the vars/powervc_rac.yml file. In this
case you would need to include both the vars/powervc.yml file and the vars/powervc_rac.yml
file in the playbook.
Option 1 is shown in Figure 4-4 on page 195. Option 2 is shown in Figure 4-5 on page 196.
4. Ensure the hostnames, virtual and scan IP addresses are added correctly to the DNS
server.
5. If no NTP server is configured for the nodes, then ensure the clock on the cluster nodes
are in sync.
6. NFS servers are needed for the AIX filesets installation as well as for staging the Oracle
19c software.
7. Update vars/rac.yml file to reflect the correct values for all the concerned variables.
8. Review the inventory and ansible.cfg files and ensure the nodes are added correctly to the
inventory file with the correct name of the nodes and the group containing them.
9. Update the install_and_configure_Oracle_RAC.yml playbook shown in Example 4-5 below
as follows:
a. Uncomment the hosts: line and set the field by specifying the inventory group name
per step 8.
b. Uncomment the first variables file (named vars/rac.yml) to have its variables included
in this execution.
10.Run the playbook as follows: ansible-playbook install_and_configure_Oracle_RAC.yml.
The output is shown in Example 4-5.
Troubleshooting Note: If any of the nodes is rebuilt (or otherwise its SSH identity
has changed) after being added to the Ansible server's ~/.ssh/known_hosts, then entries
in that file need to be cleared out prior to any subsequent attempt to install Oracle
RAC software on these nodes. This is true for installation option 1 as well.
This precautionary step would prevent running into a “WARNING: POSSIBLE DNS
SPOOFING DETECTED!” which would cause password-less ssh connection to fail and
subsequently failure of the playbook execution on node1 and therefore incomplete
software installation process.
---
# powervc_build_AIX_RAC_nodes.yml
- name: Display the input name prefix and count of VMs to be built
debug:
msg: "Creating nodes {{racName}}1 and {{racName}}2 for this dual-node Oracle RAC."
- name: define the network ports based on the networks and IP addresses to be used.
import_role: name=powervc_create_network_ports
- import_role: name=powervc_obtain_token
- include_role: name=powervc_create_and_multiattach_asm_volumes
with_items: "{{ disks }}"
- name: Now the nodes are good to go, add them to the inventory file to be managed by
Ansible
import_role: name=powervc_add_nodes_to_inventory
# Importing the playbook to be used for installing and configuring the Oracle RAC.
- import_playbook: install_and_configure_Oracle_RAC.yml
8. Notice that the last step in it invokes the software installation playbook which is shown in
Example 4-6.
You need to update the software installation playbook by uncommenting the last variables
file (namely - vars/powervc_rac.yml) as the variables defined in that file are needed in the
playbook.
9. Run operation 1’s playbook (which will now perform both operations) as follows:
ansible-playbook powervc_build_AIX_RAC_nodes.yml -e racName=myorac.
In that case, you can run the infrastructure playbook the same way as in step 7 on page 204,
with the only exception of commenting out the last line in that playbook (shown in
Example 4-6 on page 205) to avoid importing the software installation playbook.
Then, when you are ready to do the software installation, follow these two steps:
1. Uncomment the middle variables file in the software installation playbook (shown in
Example 4-6 on page 205), which reads - vars/powervc.yml, so that both the second and
third variables files that have PowerVC in their names are active.
2. Run the software installation playbook with racName as a variable as follows:
ansible-playbook install_and_configure_Oracle_RAC.yml -e racName=myorac
As you continue to work with the collection, you might find it useful to refer to the collection
documentation for newer release updates. Furthermore, the collection's GitHub issues page is a
good resource for resolving any issues should they arise.
Note: The Oracle RAC automation collection stops at the software installation.
To create database instances and manage them, see section 4.6.4, "Automating
Oracle DBA operations" on page 206.
Prerequisites
The following software must be installed on the Ansible controller host.
Python 3.6 or later (Python can be installed using dnf install python3).
cx_Oracle
This is a Python module that makes the connection to the database using sys privileges.
More information on cx_Oracle can be found here.
To install cx_Oracle online, use one of the following commands:
– As root: python -m pip install cx_Oracle --upgrade
– As a non-root user: python -m pip install cx_Oracle --upgrade --user
For an offline installation, do the following:
Note: If there are multiple Python versions, the Python version that was used to install
cx_Oracle must be used for running the playbooks. To verify which Python version was used,
see Example 4-7.
Example 4-7 shows how to validate which version of Python was used to install the cx_Oracle
package, and therefore which version must be used to run the playbooks.
Example 4-7 Determine the version of Python used to install Cx_oracle package
$ pip3.9 show cx_Oracle
Name: cx-Oracle
Version: 8.3.0
Summary: Python interface to Oracle
Home-page: https://oracle.github.io/python-cx_Oracle
Author: "Anthony Tuininga",
Author-email: "[email protected]",
License: BSD License
Location: /home/ansible/local/lib/python3.9/site-packages
Requires:
Required-by:
As we can see from Example 4-7, cx_Oracle is installed in the python3.9 site-packages
directory. So, python3.9 must be used as the Python interpreter to run the playbooks.
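If more than one interpreter is present on the controller, one way to pin the playbooks to the interpreter that has cx_Oracle installed is the ansible_python_interpreter variable, or the equivalent ansible.cfg setting. The paths below are only an illustration and must match your environment:

# Inventory entry (the DBA playbooks typically run against localhost)
localhost ansible_connection=local ansible_python_interpreter=/usr/bin/python3.9

# Or globally in ansible.cfg
[defaults]
interpreter_python = /usr/bin/python3.9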
Getting Started
To use the Oracle DBA operations collection, Ansible version 2.9 or later must be installed on
RHEL 8.x or later for either Linux on Power or x86-64. You can download this collection from
any of the following public repositories:
– Galaxy: https://galaxy.ansible.com/ui/repo/published/ibm/power_aix_oracle_dba/
– Github: https://github.com/IBM/ansible-power-aix-oracle-dba
1. Install the collection using this command:
$ ansible-galaxy collection install ibm.power_aix_oracle_dba
2. Download and extract the Oracle Instant Client software from the Oracle site. When you arrive
at the download site, click "other platforms" as shown in Figure 4-7 to get the option for
downloading the Linux on Power client.
3. On the next page, select Linux on Power Little Endian as shown in Figure 4-8.
Create Database
The role "oradb_create" is used to create databases. It can be used to create a non-container
database (non-CDB) or a container database (CDB), either as a single instance or in RAC. In this
example we're going to create a RAC container database (CDB) called "devdb" with one PDB called "devpdb".
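The collection ships its own playbook and variable files for this task; as a rough sketch only (the target host, connection method, and variable file names are assumptions for illustration, so check the collection's readme for the authoritative names), invoking the role could look like this:

- hosts: orahost                       # placeholder; the collection's playbook defines the real target
  gather_facts: false
  vars_files:
    - vars/create-db-vars.yml          # assumed variables file describing devdb/devpdb
    - vars/vault.yml                   # sys password used by the role via cx_Oracle
  roles:
    - role: ibm.power_aix_oracle_dba.oradb_create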
8. Execute the following command to run the playbook as shown in Example 4-12.
PLAY RECAP
*******************************************************************************************
rac93 : ok=11 changed=4 unreachable=0 failed=0 skipped=3
rescued=0 ignored=0
In the following example we're going to create two database users (testuser1 & testuser2) in a
pluggable database DEVPDB running in a container database and grant privileges to the
users.
1. There are two files which need to be updated:
a. {{ collection_dir }}/power_aix_oracle_dba/playbooks/vars/manage-users-vars.yml: This
contains the database hostname, database port number, and the path to the Oracle client.
b. {{ collection_dir }}/power_aix_oracle_dba/playbooks/vars/vault.yml: This contains the sys
password, which will be used by cx_Oracle to connect to the database with SYSDBA
privilege.
2. Update the common variables file
{{collection_dir}}/power_aix_oracle_dba/playbooks/vars/manage-users-vars.yml
as shown in Example 4-14.
- users:
- schema: testuser1 # Username to be created.
default_tablespace: users # Default tablespace to be assigned to the user.
service_name: devpdb # Database service name.
schema_password: oracle3 # Password for the user.
grants_mode: enforce # enforce|append.
grants:
- connect # Provide name of the privilege as a list to
grant to the user.
- resource
state: present # present|absent|locked|unlocked [present: Creates user,
# absent: Drops user]
# Multiple users can be created with different attributes as shown below.
- users:
- schema: testuser2
default_tablespace: users
service_name: devpdb
schema_password: oracle4
grants_mode: enforce
grants:
- connect
state: present # present|absent|locked|unlocked [present: Creates user,
# absent: drops user]
5. Check the user names in the database before creating them as shown in Example 4-17.
no rows selected
PLAY RECAP
*******************************************************************************************
*********************************
localhost : ok=1 changed=1 unreachable=0 failed=0 skipped=0
rescued=0 ignored=0
8. Check the user names in the database after creating them as shown in Example 4-20
where we can see the testuser1 & testuser2 are created in the PDB database.
USERNAME
---------------
TESTUSER2
TESTUSER1
To run other playbooks, refer to the readme files in ansible-power-aix-oracle-dba/docs for
each corresponding DB admin task.
Figure 4-9 helps you understand how automation can help you over the life cycle of your SAP
environments.
When looking at the end-to-end SAP S/4HANA installation process, we can break down the
process into four major blocks:
1. Server provisioning
This is the most variable block, dependent on the infrastructure. This can be done using
Ansible alone or in concert with other tools such as Terraform. As SAP can be installed
across a wide variety of infrastructure and cloud environments, the server provisioning is
highly dependent on the infrastructure chosen for the SAP environment.
2. Basic OS Setup
SAP has spent a lot of effort in understanding how to install the base operating system for
the servers running the different SAP components. For SAP HANA, the operating system
is Linux, either SUSE or RHEL, and there are specific documented settings defined in
multiple SAP notes. Likewise, there are documented settings for NetWeaver installations.
3. HANA Installation and Configuration
4. S/4 installation and configuration
These projects are open source and community supported, and they may have more up-to-date
content compared to the Red Hat Enterprise Linux System Roles for SAP, as it takes some
time to integrate new content from this GitHub project into the supported Red Hat product.
The SAP LinuxLab has a wide variety of projects and tools, but in this publication we focus
on those projects that assist in automating tasks in the SAP environment, including
installation (Day 0) operations as well as Day 1 and Day 2 operations. The
current list of SAP LinuxLab projects relevant to automation is provided in Table 4-3.
Both of these options are described more completely in the following sections.
The Red Hat Enterprise Linux subscription provides support for RHEL System Roles. The
roles provided by the System Roles for SAP are:
– sap_general_preconfigure (was named sap-preconfigure in earlier versions)
– sap_netweaver_preconfigure (was named sap-netweaver-preconfigure previously)
– sap_hana_preconfigure (was named sap-hana-preconfigure previously)
The RHEL System Roles for SAP, just like the RHEL System Roles, are installed and run from
a central node or control node. The control node connects to one or more managed nodes
and performs installation and configuration steps on them. It is recommended that you use
the latest major release of RHEL on the control node (RHEL 8) and use the latest version of
the roles either from the rhel-system-roles-sap RPM or from Red Hat Automation Hub. The
RHEL System Roles for SAP and Ansible packages do not need to be installed on the
systems that are being managed. Table 4-4 shows the supported combinations of managed
systems and control nodes for the current version of the Linux System Roles for SAP.
Note: For control nodes running RHEL 7.8, RHEL 7.9, or RHEL 8.1, you can use the
previous versions of rhel-system-roles-sap which are in Tech Preview support status.
Please find the instructions for these versions here.
For control nodes running RHEL 8.2 or RHEL 8.3, you can use version 2 of
rhel-system-roles-sap which is fully supported. Please find the instructions for this version
here.
The System Roles for SAP support multiple hardware architectures for the managed nodes
including x86_64 for Intel compatible nodes, ppc64le for IBM Power nodes, and s390x for IBM
Z Systems.
Important: The System Roles for SAP are designed to be used right after the initial
installation of a managed node. Do not run these roles against an SAP or other production
system. The role will enforce a certain configuration on the managed node, which might
not be intended. Starting with version 3, the roles support an Assert parameter for
validating existing systems. See “Assert Parameter” on page 218 for more information.
Before applying the roles on a managed node, verify that the RHEL release on the
managed node is supported by the SAP software version that you are planning to install.
sap_general_preconfigure: Install software and perform all configuration steps which are
required for the installation of SAP NetWeaver or SAP HANA. (Used for: SAP NetWeaver and
SAP HANA.)
To prepare a managed node for running SAP HANA you would run both the
sap_general_preconfigure role and the sap_hana_preconfigure role. Likewise, to prepare a
node to run SAP NetWeaver you would run sap_general_preconfigure role and the
sap_netweaver_preconfigure role. Table 4-6 shows the SAP Notes that are implemented by
each of the system roles.
sap_netweaver_preconfigure: RHEL 7: SAP Note 2526952 (tuned profiles only); RHEL 8: SAP Note 2526952 (tuned profiles only)
sap_hana_preconfigure: RHEL 7: Install required packages as per the documents "SAP HANA 2.0 running on RHEL 7.x" and "SAP HANA SPS 12 running on RHEL 7.x", which are attached to SAP Note 2009879; ppc64le only: SAP Note 2055470. RHEL 8: Install required packages for SAP HANA as mentioned in SAP Note 2772999; ppc64le only: SAP Note 2055470.
Assert Parameter
Starting with version 3, the rhel-system-roles-sap package supports running the roles in
assert mode. In assert mode, managed nodes are not modified; instead, the roles report the
compliance of a node with the applicable SAP notes.
When running playbooks that use assert mode against previous versions of the roles, the
assert parameters are ignored, which can modify the managed nodes instead of checking
them. Ensure that version 3 of the package is used. In addition, check that the playbooks you
are using call the roles from the correct location, which is /usr/share/ansible/roles by
default.
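As an illustration only, a read-only compliance check could look like the following sketch. The assert variable names shown here are assumptions based on the roles' documented behavior and should be verified against the README of your installed version:

- hosts: hana-p11                              # managed node validated in the next steps
  become: true
  vars:
    sap_general_preconfigure_assert: true      # assumed parameter name; verify in the role README
    sap_hana_preconfigure_assert: true         # assumed parameter name; verify in the role README
  roles:
    - sap_general_preconfigure
    - sap_hana_preconfigure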
2. Validate that you can connect to the managed node using ssh without a password.
# ssh hana-p11 uname -a
3. Create a yml file named sap-hana.yml with the content shown in Example 4-21.
Most of the initial configuration and provisioning activity (Day 0 activities) are done with the
Terraform modules and templates which support a wide variety of infrastructures, both
on-premises and in the cloud. These options include support for on-premises Power servers
and IBM PowerVS cloud instances.
This Ansible Collection executes various SAP Software installations for different SAP solution
scenarios, including:
SAP HANA installations via SAP HANA database lifecycle manager (HDBLCM)
– Install SAP HANA database server, with any SAP HANA Component (e.g. Live Cache
Apps, Application Function Library etc.)
– Configure Firewall rules and Hosts file for SAP HANA database server instance/s
– Apply license to SAP HANA
– Configure storage layout for SAP HANA mount points (i.e. /hana/data, /hana/log,
/hana/shared)
– Install SAP Host Agent
The collection is designed for Linux operating systems. It has not been tested or adapted for
SAP NetWeaver Application Server instances on IBM AIX or Windows Server. It supports
Red Hat Enterprise Linux 7 and above and SLES 15 SP3 and above.
Restriction: The collection does not support SLES prior to version 15 SP3 because:
– firewalld, which was added in SLES15 SP3, is used within the Ansible Collection.
– SELinux is used within the Ansible Collection. Full support for SELinux was provided
as of SLES 15 SP3.
This collection provides the Ansible roles described in Table 4-7. There are no custom
modules.
Important: In general the “preconfigure” and “prepare” roles are prerequisites for the
corresponding installation roles. The logic has been separated to support a flexible
execution of the different steps.
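As an illustrative sketch only, a playbook that first runs the preconfigure roles and then installs SAP HANA with the collection's sap_hana_install role might look like the following. The inventory group and the variable names are assumptions for illustration and must be checked against the collection's README:

- hosts: hana_hosts                        # assumed inventory group
  become: true
  collections:
    - community.sap_install
  vars:
    # Illustrative variable names only; consult the sap_hana_install README for the exact names.
    sap_hana_install_sid: "RHE"
    sap_hana_install_instance_number: "00"
    sap_hana_install_software_directory: "/software/hana"
    sap_hana_install_master_password: "NotARealPassword1"
  roles:
    - sap_general_preconfigure             # "preconfigure" prerequisite
    - sap_hana_preconfigure
    - sap_hana_install                     # HDBLCM-based installation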
The collection consists of several Ansible roles which combine several Ansible modules into a
workflow. These roles, which are shown in Table 4-8, can be used within a playbook for
specific tasks.
sap_fapolicy: Updates the fapolicyd service for generic, SAP NetWeaver, or SAP HANA related UIDs.
sap_firewall: Updates the firewalld service for generic, SAP NetWeaver, or SAP HANA related ports.
In addition to the Roles, there are additional Ansible Modules provided by the collection.
These modules, shown in Table 4-9, can be called directly within a playbook.
Name Summary
Example Scenarios
This section provides some example scenarios using the functions provided by the
community.sap_operations Ansible collection.
sap_control
This Ansible Role executes basic SAP administration tasks on Linux operating systems,
including:
• Start/Stop/Restart of SAP HANA Database Server
• Start/Stop/Restart of SAP NetWeaver Application Server
• Automatic discovery and Start/Stop/Restart of multiple SAP HANA Database
Servers or SAP NetWeaver Application Servers
The specific control function is defined using the sap_control_function parameter which
can be any of the following:
• restart_all_sap
• restart_all_nw
• restart_all_hana
• restart_sap_nw
• restart_sap_hana
• stop_all_sap
• start_all_sap
• stop_all_nw
• start_all_nw
• stop_all_hana
• start_all_hana
• stop_sap_nw
• start_sap_nw
• stop_sap_hana
• start_sap_hana
Executions specifying all will automatically detect any System IDs and corresponding
Instance Numbers. To specify a specific SAP system you would provide the SAP system
SID as a parameter.
To restart all SAP systems you would input:
sap_control_function: "restart_all_sap"
To stop a specific SAP HANA database you would input:
sap_control_function: "stop_sap_hana"
sap_sid: "HDB"
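Embedded in a play, these parameters could be used with the role roughly as follows. This is a sketch only; the host group name is a placeholder and the role is the community.sap_operations.sap_control role described above:

- hosts: sap_hosts                            # assumed inventory group
  become: true
  tasks:
    - name: Restart all SAP systems found on the host
      ansible.builtin.include_role:
        name: community.sap_operations.sap_control
      vars:
        sap_control_function: "restart_all_sap"

    - name: Stop only the HDB SAP HANA database
      ansible.builtin.include_role:
        name: community.sap_operations.sap_control
      vars:
        sap_control_function: "stop_sap_hana"
        sap_sid: "HDB"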
sap_hana_sr_takeover
This role can be used to ensure, control and change SAP HANA System Replication. The
role assumes that the SAP HANA System Replication was configured using the
community.sap_install.sap_ha_install_hana_hsr role.
The variables shown in Table 4-10 are mandatory for running this role unless a default
value is specified.
Table 4-10 Required variables
Variable Name Description
The playbook shown in Example 4-22 shows how to implement this role. The assumption
is that there are two systems set up for SAP HSR, hana1 and hana2, with SID RHE and
instance 00. The playbook ensures that hana1 is the primary and hana2 is the secondary.
The role will do nothing if hana1 is already the primary and hana2 the secondary. The role
will fail if hana1 is not configured for system replication and is not in sync.
Additional documentation and examples are available as part of the collection documentation
at https://github.com/sap-linuxlab/community.sap_operations. In the roles directory in
the GitHub location, there is a subdirectory for each role which includes a readme file
providing the specifics of how that role operates and its requirements.
Note: SAP software installation media must be obtained from SAP directly, and requires
valid license agreements with SAP in order to access these files.
When an SAP User ID is enabled as part of an SAP Universal ID, then the sap_launchpad
Ansible collection must use:
– The SAP User ID
– The password for login with the SAP Universal ID
If an SAP Universal ID is used, then the recommendation is to check and reset the SAP User ID
'Account Password' in the SAP Universal ID Account Manager, which will help to avoid any
potential conflicts. Example 4-23 provides an example playbook to download specific SAP
software using the sap_launchpad.software_center_download module. Note that in this
playbook, the user is prompted to enter the SAP user ID and password. The playbook could
be modified to use variables to enter these values.
Example 4-23 Sample playbook to download software from the SAP software center
---
- hosts: all
collections:
- community.sap_launchpad
pre_tasks:
- name: Install Python package manager pip3 to system Python
yum:
name: python3-pip
state: present
- name: Install Python dependencies for Ansible Modules to system Python
pip:
name:
- urllib3
- requests
- beautifulsoup4
- lxml
In Example 4-24, we show an example playbook that downloads a list of files which are
defined using the maintenance planner. The playbook prompts for your SAP user credentials
and a specific maintenance planner transaction name which has been previously created.
collections:
- community.sap_launchpad
# pre_tasks:
# - debug:
# msg:
# - "{{ sap_maintenance_planner_basket_register.download_basket }}"
Infrastructure as Code provides us the ability to not only deploy or destroy new workloads, but
also to resize, rebalance, and migrate those workloads to different infrastructure. In this
chapter we introduce the concept of Infrastructure as Code and describe the capabilities
that Ansible provides to deliver IaC on IBM Power.
Advantages of PowerVC
PowerVC offers a range of benefits tailored to IBM Power environments:
Expandability: Attach volumes or additional networks to VMs.
Flexibility: Import and export existing systems and volumes between on-premises and
off-premises locations.
Efficiency: Take snapshots of VMs and clone them for quick replication.
Seamless Migration: Migrate running VMs using Live Partition Mobility (LPM).
Continuity: Restart VMs remotely in the event of a server failure.
Simplified Management: Streamline Power Systems virtualization administration.
Agility: Adapt swiftly to changing business requirements.
Dynamic Resource Management: Create, resize, and adjust CPU and memory resources
for VMs.
When we deploy a new VM using IBM PowerVC it performs all of the required tasks. These
include:
Creating the VM profile on the HMC, including all network and storage interfaces
Creating the appropriate SAN zoning
Creating the VM on the storage controller
Creating the root and non-root storage volumes
Updating the VIO Server to map the VM to its new volumes
Booting the new VM
Likewise, when we delete a VM, all the resources created by IBM PowerVC are removed
cleanly.
With IBM PowerVC there are two options to choose from to work with Ansible:
Using the OpenStack modules
Making RESTful API calls using the URI module
As IBM PowerVC is built on OpenStack we are able to use a number of the cloud modules
provided by the OpenStack community.
You can download the OpenStack Cloud collection either by using ansible-galaxy from the
command line, or via a requirements.yml file. Example 5-1 shows using ansible-galaxy to
download the collection.
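That command typically takes the following form:

$ ansible-galaxy collection install openstack.cloud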
Example 5-2 shows the requirements.yml that you can use to download the collection.
Example 5-2 Example requirements.yml file to download the OpenStack Cloud collection
collections:
- name: openstack.cloud
source: https://galaxy.ansible.com
In order for Ansible to run the OpenStack Cloud modules, you must first install the OpenStack
SDK on your Ansible controller. This is described in the 'Read Me' section of the collection
page in Galaxy. The command shown in Example 5-3 will install the SDK.
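The SDK install is typically a pip installation of the openstacksdk package, for example:

$ pip install openstacksdk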
You can verify that the modules have been installed correctly by using the ansible-doc
command. An example of viewing the documentation for the OpenStack Cloud image info
module is shown in Example 5-4.
Table 5-1 on page 232 shows some of the OpenStack Cloud modules relevant to IBM
PowerVC.
To authenticate from the command line, create a clouds.yaml file which contains the
information about the cloud environments that Ansible needs to connect to. In this case it
would be our IBM PowerVC server. The OpenStack modules will look for the clouds.yaml file
in the following directories:
– current directory
– ~/.config/openstack
– /etc/openstack
It will use the first one it finds. The contents of an example clouds.yaml file are shown in
Example 5-5.
region_name: RegionOne
cacert: "./powervc.crt"
The first line is the name of your cloud and is for reference only; it does not have to match the
real name. The cloud name allows us to define authentication methods for multiple OpenStack
clouds and refer to them individually within our playbooks.
The 'auth_url' is the IP address of your IBM PowerVC server, and the remaining auth settings
are specific to your PowerVC environment, such as the project, user ID, and password.
You also need to have a copy of the CA certificate file from the PowerVC server that you reference
in the 'cacert' line. This can be found on your PowerVC server; the default location is
'/etc/pki/tls/certs/powervc.crt'.
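Example 5-5 shows only the last two lines of the file; a complete minimal clouds.yaml, with placeholder values, typically looks like the sketch below. The cloud name powervc_cloud matches the name used in the playbooks later in this section:

clouds:
  powervc_cloud:
    auth:
      auth_url: https://<powervc_ip_or_hostname>:5000/v3
      project_name: ibm-default
      username: <powervc_user>
      password: <powervc_password>
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
    cacert: "./powervc.crt"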
Example 5-6 Example playbook to list PowerVC images using the openstack.cloud.image_info module
$ cat PowerVC_list_images.yml
---
- name: List available PowerVC Images
hosts: localhost
gather_facts: false
tasks:
- name: Retrieve list of all AIX images
openstack.cloud.image_info:
cloud: powervc_cloud
register: image_results
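      # The second task (referenced in the text below) displays selected fields from the
      # registered results. This is a sketch only; depending on the collection version, the
      # registered key may be image_results.images or image_results.openstack_image, and
      # json_query requires the community.general collection (see the note after Example 5-7).
      - name: Show name, ID, OS distribution and status of images
        debug:
          msg: "{{ image_results.images | json_query('[].{id: id, name: name, os_distro: os_distro, status: status}') }}"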
In Example 5-6 we call a task using the 'openstack.cloud.image_info' module, pointing at the
cloud 'powervc_cloud'. This cloud name has to match the entry in our clouds.yaml
authentication file defined earlier. We then register the results and output them in the second
task. We can now run the playbook to list our IBM PowerVC images as shown in
Example 5-7.
Example 5-7 Output from running the playbook to list PowerVC images using the openstack.cloud.image_info module
$ ansible-playbook PowerVC_list_images.yml
PLAY [List available PowerVC Images]
****************************************************************************
TASK [Retrieve list of all AIX images]
****************************************************************************
ok: [localhost]
TASK [Show name, ID, OS distribition and status of images]
****************************************************************************
ok: [localhost] => {
"msg": [
{
"id": "f62a76dd-4742-445f-aa5c-f3f447dd778e",
"name": "RHCOS-4.12.17",
"os_distro": "coreos",
"status": "active"
},
{
"id": "0930d057-dc7e-415f-97cd-1fe36ecdcdbd",
"name": "RHEL v9.1",
"os_distro": "rhel",
"status": "active"
},
{
"id": "d51d8cfd-c83b-4ec6-9464-8d4215259546",
"name": "AIX 7.3",
"os_distro": "aix",
"status": "active"
},
{
"id": "c64ff508-3a81-4679-a9e4-29acc3f96430",
"name": "IBM i v7.3",
"os_distro": "ibmi",
"status": "active"
}
]
}
Note: You may have to install the community.general collection to parse the data using
'json_query'. This is shown in Example 5-8.
Note that in Example 5-9 on page 234 we passed the openstack.cloud.server module a few
key variables to allow it to build the VM. A minimal task sketch follows the list below.
cloud - The name of the PowerVC cloud defined in clouds.yml
state - present (if the VM does not exist, create it)
name - name of the new VM (in this example we’ve used ‘aix-vm-1’)
image - name or ID of the PowerVC image to use (obtained from PowerVC)
flavor - name or ID of the PowerVC compute flavor to use (obtained from PowerVC)
network - name of the PowerVC network to use (obtained from PowerVC)
key_name - name of the SSH key pair to inject into the new VM (obtained from PowerVC)
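The sketch below shows how these variables fit together in a single task; the values are placeholders taken from the images and flavors listed earlier, and the exact playbook is the one shown in Example 5-9:

- name: Create AIX VM aix-vm-1 via PowerVC (sketch)
  openstack.cloud.server:
    cloud: powervc_cloud
    state: present
    name: aix-vm-1
    image: "AIX 7.3"               # image name or ID from PowerVC
    flavor: xtiny                  # compute template (flavor) name from PowerVC
    network: net-mgmt              # placeholder network name
    key_name: ssh-key              # SSH key pair defined in PowerVC
    timeout: 600
  register: vm_create_results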
When we run this task, Ansible will use the openstack.cloud.server module to connect to the
IBM PowerVC cloud and create the VM using the name provided. The results for running our
playbook are shown in Example 5-10.
Example 5-10 Output from create VM using OpenStack modules on IBM PowerVC
PLAY [Connect to PowerVC/Openstack and build VM]
*****************************************************************************
We can also see the new VM being created on the IBM PowerVC UI, as shown in Figure 5-1.
Note: In Example 5-10 we have allowed PowerVC to assign an IP address from its IP pool.
If any of the variables passed are incorrect, the module will fail without creating the VM. For
example if you passed the module an incorrect image name, it would fail with a message
similar to that shown in Example 5-11.
In Example 5-12 we only had to pass the openstack.cloud.server module three variables to
destroy the VM (a task sketch follows the list):
cloud - The name of the PowerVC cloud defined in clouds.yaml
state - absent (if the VM exists, remove it)
name - name of the existing VM to destroy (in this example we’ve used ‘aix-vm-1’)
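A corresponding task sketch, again with placeholder values, is:

- name: Destroying VM aix-vm-1 using PowerVC (sketch)
  openstack.cloud.server:
    cloud: powervc_cloud
    state: absent
    name: aix-vm-1
  register: vm_destroy_results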
As the VM is managed by IBM PowerVC, by default when it is destroyed all of its resources,
including the storage volumes, the SAN zones, and its IP allocations, are also removed.
When we use the openstack.cloud.server module to destroy an existing VM we simply get the
message that the status was changed, as shown in Example 5-13.
We can also see the VM being destroyed on the IBM PowerVC UI, as shown in Figure 5-2.
If we try to destroy a VM that does not exist, the openstack.cloud.server module does not
by default return an error, as we stated the VM should be of 'state: absent'. It simply tells us
the status has not changed (false), as shown in Example 5-14.
Example 5-14 Message showing attempt to destroy a VM that does not exist via PowerVC
TASK [Destroying VM aix-vm-1 using PowerVC]
*******************************************************************************
ok: [localhost]
The output collected shows a large amount of information that PowerVC was able to retrieve
about the VM and is shown in Example 5-16.
"OS-EXT-IPS-MAC:mac_addr": "fa:5f:75:e3:yy:xx",
"OS-EXT-IPS:type": "fixed",
"addr": "x.x.x.x",
"version": 4
}
]
},
"admin_password": null,
"attached_volumes": [
{
"attachment_id": null,
"bdm_id": null,
"delete_on_termination": true,
"device": null,
"id": "5119dfc1-8fc2-4a70-943d-da6266d71f9b",
"location": null,
"name": null,
"tag": null,
"volume_id": null
},
{
"attachment_id": null,
"bdm_id": null,
"delete_on_termination": false,
"device": null,
"id": "25d70a6e-0923-491c-bb87-47b484b11c16",
"location": null,
"name": null,
"tag": null,
"volume_id": null
}
],
"availability_zone": "Default Group",
"block_device_mapping": null,
"compute_host": "828422A_XXXXXXX",
"config_drive": "",
"created_at": "2023-06-28T09:09:50Z",
"description": "aix-vm-1",
"disk_config": "MANUAL",
"fault": null,
"flavor": {
"description": null,
"disk": 0,
"ephemeral": 0,
"extra_specs": {
"powervm:availability_priority": "127",
"powervm:dedicated_proc": "false",
"powervm:enable_lpar_metric": "true",
"powervm:enforce_affinity_check": "false",
"powervm:max_mem": "4096",
"powervm:max_proc_units": "0.5",
"powervm:max_vcpu": "1",
"powervm:min_mem": "2048",
"powervm:min_proc_units": "0.1",
"powervm:min_vcpu": "1",
"powervm:proc_units": "0.1",
"powervm:processor_compatibility": "default",
"powervm:secure_boot": "0",
"powervm:shared_proc_pool_name": "DefaultPool",
"powervm:shared_weight": "128",
"powervm:srr_capability": "true",
"powervm:uncapped": "true"
},
"id": "xtiny",
"is_disabled": null,
"is_public": true,
"location": null,
"name": "xtiny",
"original_name": "xtiny",
"ram": 4096,
"rxtx_factor": null,
"swap": 0,
"vcpus": 1
},
"flavor_id": null,
"has_config_drive": "",
"host_id": "6e82dcb4ed92b0e70c305e2ee1021f0019d3bd88e9dd910b5a81xxxx",
"host_status": "UP",
"hostname": "aix-vm-1",
"hypervisor_hostname": "XXXXX",
"id": "371aa5fe-b5c2-4660-978b-09b323a49f66",
"image": {
"architecture": null,
"checksum": null,
"container_format": null,
"created_at": null,
"direct_url": null,
"disk_format": null,
"file": null,
"has_auto_disk_config": null,
"hash_algo": null,
"hash_value": null,
"hw_cpu_cores": null,
"hw_cpu_policy": null,
"hw_cpu_sockets": null,
"hw_cpu_thread_policy": null,
"hw_cpu_threads": null,
"hw_disk_bus": null,
"hw_machine_type": null,
"hw_qemu_guest_agent": null,
"hw_rng_model": null,
"hw_scsi_model": null,
"hw_serial_port_count": null,
"hw_video_model": null,
"hw_video_ram": null,
"hw_vif_model": null,
"hw_watchdog_action": null,
"hypervisor_type": null,
"id": "71c5ddb5-f4f9-431b-917d-e0c0df581xxx",
"instance_type_rxtx_factor": null,
"instance_uuid": null,
"is_hidden": null,
"is_hw_boot_menu_enabled": null,
"is_hw_vif_multiqueue_enabled": null,
"is_protected": null,
"kernel_id": null,
"location": null,
"locations": null,
"metadata": null,
"min_disk": null,
"min_ram": null,
"name": null,
"needs_config_drive": null,
"needs_secure_boot": null,
"os_admin_user": null,
"os_command_line": null,
"os_distro": null,
"os_require_quiesce": null,
"os_shutdown_timeout": null,
"os_type": null,
"os_version": null,
"owner": null,
"owner_id": null,
"properties": {
"links": [
{
"href":
"https://x.x.x.x:8774/6a01a6c6f13c40f79b7ff55xxxx70a371/images/71c5ddb5-f4f9-431b-917d-e0c0
xxx",
"rel": "bookmark"
}
]
},
"ramdisk_id": null,
"schema": null,
"size": null,
"status": null,
"store": null,
"tags": [],
"updated_at": null,
"url": null,
"virtual_size": null,
"visibility": null,
"vm_mode": null,
"vmware_adaptertype": null,
"vmware_ostype": null
},
"image_id": null,
"instance_name": "aix-vm-1-371aa5fe-00000b9e",
"is_locked": false,
"kernel_id": "",
"key_name": "ssh-key",
"launch_index": 0,
"launched_at": "2023-06-28T09:13:22.000000",}
],
"max_count": null,
"metadata": {
"enforce_affinity_check": "false",
"hostname": "aix-vm-1",
"move_pin_vm": "false",
"original_host": "828422A_xxxxx",
},
"min_count": null,
"name": "aix-vm-1",
"networks": null,
"power_state": 1,
"progress": 100,
"project_id": "6a01a6c6f13c40f79b7ff5552170axxx",
"ramdisk_id": "",
"reservation_id": "r-4n2mezi3",
"root_device_name": "/dev/sda",
"scheduler_hints": null,
"security_groups": null,
"server_groups": null,
"status": "ACTIVE",
"tags": [],
"task_state": null,
"terminated_at": null,
"trusted_image_certificates": null,
"updated_at": "2023-08-15T13:08:46Z",
"user_data": null,
"user_id":
"0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9",
"vm_state": "active",
"volumes": [
{
"attachment_id": null,
"bdm_id": null,
"delete_on_termination": true,
"device": null,
"id": "5119dfc1-8fc2-4a70-943d-da6266d71f9b",
"location": null,
"name": null,
"tag": null,
"volume_id": null
},
{
"attachment_id": null,
"bdm_id": null,
"delete_on_termination": false,
"device": null,
"id": "25d70a6e-0923-491c-bb87-47b484b11c16",
"location": null,
"name": null,
"tag": null,
"volume_id": null
}
]
Display only VMs hosted on a specific IBM Power server via PowerVC
Using the output shown in Example 5-16 on page 237 we are able to select which VMs we
want to display by filtering on items such as status, network, image or hosted IBM Power
server. We are also able to select which values we display in our output. In Example 5-17 we
use the openstack.cloud.server_info module to retrieve information about VMs on a certain
IBM Power server, and display the name, status and memory allocation of those VMs.
Example 5-17 Display name, status and memory of all VMs on a specific IBM Power Server
- name: Collect information of all VMs on PowerServer1 via PowerVC
openstack.cloud.server_info:
cloud: powervc_cloud
filters:
compute_host: "{{ Server_serial_number }}"
register: vm_on_host_results
In Example 5-19 we show an example of using this module to create a 10GB storage volume.
Example 5-19 Create a 10GB storage volume using the OpenStack Cloud collection
- name: Create a new {{ new_disk_size }}GB volume, called {{ new_disk_name }} using storage
template {{ storage_template }}
openstack.cloud.volume:
cloud: powervc_cloud
state: present
name: "{{ new_disk_name }}"
size: "{{ new_disk_size }}"
volume_type: "{{ storage_template }}"
register: volume_create_information
Note: The ‘size’ of the volume is in GB, and the ‘volume_type’ refers to the PowerVC
storage template to use.
Once we have created the new storage volume we can attach it to an existing VM. We do this
using the OpenStack Cloud server volume module. The documentation for that module can
be found here:
In Example 5-20 we show an example of using this module to attach the volume to an existing
VM in PowerVC.
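A minimal sketch of such an attach task (the exact playbook is shown in Example 5-20) could look like the following; the variable names mirror those used in Example 5-19:

- name: Attach volume {{ new_disk_name }} to VM {{ vm_name }} via PowerVC (sketch)
  openstack.cloud.server_volume:
    cloud: powervc_cloud
    state: present
    server: "{{ vm_name }}"
    volume: "{{ new_disk_name }}"
  register: volume_attach_information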
We can combine both tasks in the same playbook to first create, then attach the new storage
to an existing VM, as shown in Example 5-21 and Figure 5-3.
Example 5-21 Output showing create and attach a new storage volume via PowerVC
PLAY [Connected to PowerVC/Openstack VM, create new disk and attach to VM]
********************************************************************************
TASK [Create a new 10GB volume, called data_volume_1 using storage template V7K1 Secondary
Pool] *****************************************************************
changed: [localhost]
The results can be seen in the PowerVC user interface as shown in Figure 5-3.
5.1.2 Using the URI modules to interact with PowerVC API services
Another method of automating IBM PowerVC using Ansible is to take advantage of the REST
(Representational State Transfer) APIs IBM PowerVC provides. The OpenStack software has
industry-standard interfaces that are released under the terms of the Apache License. IBM
PowerVC interfaces are a subset of OpenStack northbound APIs.
In this section we will be covering the following IaC options using the URI module utilizing
PowerVC API services:
Authentication
Creating a new VM
Destroying an existing VM
Showing resource information
Resizing an online VM
A number of interfaces were added or extended to enhance the capabilities that are
associated with the IBM Power platform REST APIs.
APIs use a common set of methods which we will use to perform operations on IBM
PowerVC.
POST - Create operation
GET - Read operation
PUT - Update operation
DELETE - Delete operation
PowerVC APIs are provided by a number of specialized inter-operable services. Each service
is accessible on a distinct port number and provides a set of APIs that run specialized
functions that are related to that service. The services are shown in Table 5-2.
OpenStack Projects
PowerVC services
The OpenStack APIs as shown in Table 5-2 include the ability to read, create, update and
delete IBM PowerVC resources including VMs, networks, storage, key pairs, images and
projects. The reference documentation can be found at the OpenStack organization site.
The IBM PowerVC APIs (along with references to the OpenStack APIs) are documented in
the PowerVC documentation.
Each OpenStack and IBM PowerVC API service uses a unique port. Some of the key API
ports are shown in Table 5-3.
To access IBM PowerVC APIs via Ansible, we can use the 'uri' module, which is part of
ansible-core (ansible.builtin.uri).
To do this we have to perform an API POST to the PowerVC server with the following
information:
API URL of the PowerVC server (IP or hostname)
Keystone authentication port (default 5000)
PowerVC user name and password
Tenant/Project name
Domain name (only ‘default’ is supported)
Example 5-22 shows Ansible URI module authenticating with IBM PowerVC, obtaining the
information required, setting a fact to store the authorization token, then displaying the token.
Example 5-22 Obtaining the authorization token from PowerVC using URI module
- name: Connect to PowerVC and collect auth token
uri:
url: https://{{ powervc_host }}:{{ auth_port }}/v3/auth/tokens
method: POST
body: '{"auth":{
"scope":{
"project":{
"domain":{
"name":"Default"},
"name":"ibm-default"}},
"identity":{
"password":{
"user":{
"domain":{
"name":"Default"},
"password":"{{ PowerVC_password }}",
"name":"{{ PowerVC_ID }}"}},
"methods":["password"]}}}'
body_format: json
use_proxy: no
validate_certs: no
status_code: 201
register: auth
- name: Set Auth Token
set_fact:
auth_token: "{{ auth.x_subject_token }}"
- name: Display Auth Token
debug:
var: auth_token
Although we wouldn’t normally display the token, we do it in this case to demonstrate that
Ansible has been able to authenticate with the PowerVC server. The output is shown in
Example 5-23.
Now that we have the fact set – in Example 5-23 we called it ‘auth_token’ – we can perform
API operations against our PowerVC environment using Ansible.
There are a number of other optional values you can supply including availability zone (host
group or name of server), key_name (SSH key pair name) and network fixed IP (specific IP).
These values are detailed in the OpenStack API compute (nova) documentation.
In Example 5-24 we show how to build a new VM using the URI module via IBM PowerVC’s
API nova service.
Example 5-24 Create a new VM on PowerVC using the URI module and API
- name: Connect to PowerVC with token and create a new VM
uri:
url: https://{{ powervc_host }}:{{ nova_port }}/v2.1/{{ project_id }}/servers
method: POST
use_proxy: no
validate_certs: no
return_content: no
body: '{
"server": {
"name": "{{ new_vm_name }}",
"imageRef": "{{ image_UID }}",
"flavorRef": "{{ flavor_UID }}",
"availability_zone": "{{ host_group_name }}",
"networks": [{
"uuid": "{{ network_UID }}"
}]
}
}'
body_format: json
headers:
Accept: "application/json"
Content-Type: "application/json"
OpenStack-API-Version: "compute 2.46"
User-Agent: "python-novaclient"
X-Auth-Token: "{{ auth_token }}"
X-OpenStack-Nova-API-Version: "2.46"
status_code: 202
register: vm_create
Note: In Example 5-24 on page 246 you need to pass the project id, image id, flavor id and
network id.
Note: When we created a new VM we passed the VM name to the API, however when
deleting an existing VM we have to use the VMs unique ID.
We discuss how to retrieve a project ID in “Collect project ID using the project name from
PowerVC using URI module” on page 248.
We discuss how to retrieve a VSI ID in “Collect VM ID using the VSI name from PowerVC
using URI module” on page 249.
In Example 5-25 we show how to destroy an existing VM using the URI module via the IBM
PowerVC API nova service.
Example 5-25 Destroy an existing VM on PowerVC using the URI module and API
- name: Connect to PowerVC with token and destroy a VM
uri:
url: https://{{ powervc_host }}:{{ nova_port }}/v2.1/{{ project_id }}/servers/{{
vm_id }}
method: DELETE
use_proxy: no
validate_certs: no
return_content: no
headers:
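      # The remaining lines of this task are elided in this draft; as a sketch, they would mirror
      # the header set used in Example 5-24, with 204 as the expected status code for a
      # successful nova server delete.
      Accept: "application/json"
      Content-Type: "application/json"
      X-Auth-Token: "{{ auth_token }}"
    status_code: 204
  register: vm_destroy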
Collect project ID using the project name from PowerVC using URI module
In Example 5-26 we show the URI module connecting to the PowerVC ‘projects’ API service
to retrieve all project information, using the authorization token collected in Example 5-23 on
page 246. We then filter that information to select just the project we are interested in
(ibm-default in this case). Finally we set a fact called ‘project_id’ containing just the ID of our
selected project and display that ID.
Example 5-27 Output from using URI module to retrieve a project ID from a project name
TASK [Collect ID of chosen project in array format]
******************************************************************************
ok: [localhost]
ok: [localhost]
We can now use that project ID variable in future PowerVC API Ansible playbooks such as
creating a new VM.
Collect VM ID using the VSI name from PowerVC using URI module
In Example 5-28 we show the URI module connecting to the PowerVC ‘nova’ API service to
retrieve information about all the VMs (servers), using the authorization token collected in
Example 5-26 on page 248 and the project id collected in Example 5-27 on page 248. We
then filter that information to select just the VM we are interested in. Finally we set a fact
called ‘vm_id’ containing just the ID of our selected VM and display that ID.
- name: Show VM ID
debug:
var: vm_id
Note: In Example 5-28 we have had to use the ‘project_id’ in the API URL and the VM
name has been passed as a variable {{ vm_name }}.
Example 5-29 Output from using URI module to retrieve a VM ID from a VM name
TASK [Collect ID of chosen VM in array format]
*******************************************************************************
ok: [localhost]
ok: [localhost]
We can now use that VM ID variable in future PowerVC API Ansible playbooks such as
destroying an existing VM or performing PowerVC operations against that VM.
In this section we introduce the VM ‘action’ API service that allows us to perform a number of
different actions against an existing VM including online resizing. The options are
documented in the IBM PowerVC documentation. In Example 5-30 we pass the VM action
API service the new values for required CPU and memory.
Example 5-30 Resize an active VM using URI module and PowerVC API services
- name: "Connect to PowerVC with token and resize VM {{ vm_name }} to {{
new_total_proc_units }} procesors, and {{ new_total_memory_mb }}MB"
uri:
url: https://{{ powervc_host }}:{{ nova_port }}/v2.1/{{ project_id }}/servers/{{
vm_id }}/action
method: POST
use_proxy: no
validate_certs: no
return_content: no
body: {
"resize": {
"flavor": {
"vcpus": "{{ new_vcpus }}",
"disk": "0",
"extra_specs": {
"powervm:proc_units": "{{ new_total_proc_units }}",
},
"ram": "{{ new_total_memory_mb }}"
}
}
}
body_format: json
headers:
Accept: "application/json"
Content-Type: "application/json"
OpenStack-API-Version: "compute 2.46"
User-Agent: "python-novaclient"
X-Auth-Token: "{{ auth_token }}"
X-OpenStack-Nova-API-Version: "2.46"
status_code: 202
register: vm_resize_details
The VM we created in “Creating VMs with IBM PowerVC using URI module” on page 246 was
assigned 1 vCPU, 0.5 entitled cores and 4GB of memory, as we can see from the PowerVC
UI in Figure 5-4.
Figure 5-4 PowerVC UI showing VM resource before resize using API services
Example 5-31 Output showing resize of online VM using URI module and PowerVC API services
TASK [Show current CPU and memory allocation for VM aix-vm-1]
*******************************************************************************
ok: [localhost] => {
"current_vm_spec_details": {
"CPUs": "0.50",
"Memory": 4096,
"name": "aix-vm-1",
"vCPUs": 1
}
}
TASK [Connect to PowerVC and resize VM aix-vm-1 to 0.75 processors, and 6144MB]
******************************************************************************
ok: [localhost]
TASK [Connect to PowerVC with token and wait for VM aix-vm-1 to be in state
'VERIFY_RESIZE'] ***************************************************************
FAILED - RETRYING: [localhost]: Connect to PowerVC with token and wait for VM aix-vm-1 to
be in state 'VERIFY_RESIZE' (6 retries left).
FAILED - RETRYING: [localhost]: Connect to PowerVC with token and wait for VM aix-vm-1 to
be in state 'VERIFY_RESIZE' (5 retries left).
ok: [localhost]
TASK [Connect to PowerVC and collect new CPU and memory information for VM aix-vm-1 after
the resize]
******************************************************************************
ok: [localhost]
"new_vm_spec_details": {
"CPUs": "0.75",
"Memory": 6144,
"name": "aix-vm-1",
"vCPUs": 1
}
}
The output in Example 5-31 on page 251 shows that the VM reported 0.5 CPU entitlement
and 4 GB of memory before the resize, and 0.75 CPU entitlement and 6 GB of memory after
the resize.
We can also see the resize being performed in the PowerVC UI in Figure 5-5.
Once the resize has completed, we can verify the new values on the PowerVC UI as shown in Figure 5-6.
Note: Within PowerVS, virtual machines (VMs) are referred to as virtual server instances
(VSIs).
Note: The IBM Cloud PowerVS modules generate Terraform code to perform the actions
against the PowerVS API services. They currently require Terraform v0.10.20 to be
installed. The Terraform resources and data sources they call can be found in:
https://registry.terraform.io/providers/IBM-Cloud/ibm/latest/docs
Example 5-33 Create a new VSI using the IBM Cloud Collection module ibm_pi_instance
- name: Create a POWER Virtual Server Instance
ibm.cloudcollection.ibm_pi_instance:
state: available
pi_cloud_instance_id: "{{ pi_cloud_instance_id }}"
ibmcloud_api_key: "{{ ibmcloud_api_key }}"
id: "{{ pi_instance.resource.id | default(omit) }}"
region: "{{ region }}"
pi_memory: "{{ memory }}"
pi_processors: "{{ processors }}"
pi_instance_name: "{{ vsi_name }}"
pi_proc_type: "{{ proc_type }}"
pi_image_id: "{{ image_dict[image_name_to_be_created] }}"
pi_volume_ids: []
pi_network_ids:
- "{{ pi_network.id }}"
pi_key_pair_name: "{{ pi_ssh_key.pi_key_name }}"
pi_sys_type: "{{ sys_type }}"
pi_replication_policy: none
pi_replication_scheme: suffix
pi_replicants: "1"
pi_storage_type: "{{ disk_type }}"
register: pi_instance_create_output
Note: The 'state' option for the ibm_pi_instance module to ensure a VSI exists is
'available'.
Example 5-34 Destroy a VSI using the IBM cloud collection ibm_pi_instance module
- name: Destroy a POWER Virtual Server Instance
ibm.cloudcollection.ibm_pi_instance:
state: absent
pi_cloud_instance_id: "{{ pi_cloud_instance_id }}"
ibmcloud_api_key: "{{ ibmcloud_api_key }}"
id: "{{ pi_instance.resource.id | default(omit) }}"
region: "{{ region }}"
register: pi_instance_destroy_output
Just like OpenStack, IBM PowerVS has a large set of API services that allow us to manage
resources such as Virtual Server Instances (VSIs), images, storage volumes, key pairs,
networks, snapshots, VPNs etc. These APIs are documented in the IBM Cloud
documentation.
PowerVS services use regional endpoints over both public and private networks. To target the
public service you need to replace {region} with the prefix that represents the geographic
area where the public facing service is located in the URL shown in Example 5-35 on
page 255. Currently these are us-east (Washington DC), us-south (Dallas, Texas), eu-de
(Frankfurt, Germany), lon (London, UK), tor (Toronto, Canada), syd (Sydney, Australia), and
tok (Tokyo, Japan).
To target the private service you need to replace {region} with the prefix that represents the
geographic area where the private facing service is located in the URL shown in Example 5-36.
Currently these are us-east (Washington DC), us-south (Dallas, Texas), eu-de (Frankfurt,
Germany), eu-gb (London, UK), ca-tor (Toronto, Canada), au-syd (Sydney, Australia), jp-tok
(Tokyo, Japan), jp-osa (Osaka, Japan), br-sao (Sao Paolo, Brazil), and ca-mon (Montreal,
Canada).
All the IBM Cloud PowerVS API methods are also documented, along with the API service
URL, the required parameters and the response body. For example, to obtain information
about all the VSIs, the request is documented at:
https://cloud.ibm.com/apidocs/power-cloud#pcloud-pvminstances-getall
Example 5-37 shows an example request to retrieve all VSIs within IBM PowerVS.
IBM Cloud API keys are associated with a user's identity and can be used to access the cloud
platform and APIs, depending on the access that is assigned to the user. The API access key
is created using https://cloud.ibm.com/iam/apikeys.
Note: When creating an IBM Cloud API key, record the key in a safe location as you will be
unable to retrieve the contents after it has been created.
In Example 5-38 on page 256 we show how to obtain the auth token using the URI module
along with the IBM Cloud API key.
Example 5-38 Obtain the IAM access token (auth token) using URI module and IBM Cloud API key
- name: Obtain IBM Cloud PowerVS authorization token using IBM Cloud API key
hosts: localhost
gather_facts: no
vars:
- auth_data: "grant_type=urn:ibm:params:oauth:grant-type:apikey&apikey="
api_key: "xxxxxxxxxxxxxxxxxxxxxxxxxx"
tasks:
- name: Get IAM access token
uri:
url: "https://iam.cloud.ibm.com/identity/token"
method: POST
force_basic_auth: true
validate_certs: yes
headers:
content-type: "application/x-www-form-urlencoded"
accept: "application/json"
body: "{{ auth_data }}{{ api_key|trim }}"
body_format: json
register: iam_token_request
- name: Set auth token fact
set_fact:
auth_token: "{{ iam_token_request.json.access_token }}"
- name: Show token
debug:
var: auth_token
In Example 5-38 we point to the IBM Cloud identity endpoint, and in the body of the API POST we pass
two values:
Authorization Data (grant_type=urn:ibm:params:oauth:grant-type:apikey&apikey=)
IBM API key (trimmed)
The last task in the playbook outputs the ‘auth_token’ fact, which we populated. Normally this
would be hidden but we have included it so we can confirm it has been created correctly.
This ‘auth_token’ can now be used within the Ansible playbook to perform actions against the
IBM PowerVS API services.
Before we perform an action against the IBM PowerVS API services associated with our IBM
Power resources, we need to know the Power Systems Virtual Server Instance ID (also
known as the Cloud Resource Name or CRN). We can obtain our CRN using the ibmcloud
command line or via the IBM Cloud UI as shown in Example 5-40.
Example 5-40 Obtain our Cloud Resource Name (CRN) via the ibmcloud command line
% ibmcloud resource service-instance "Power Virtual Server-London 06" --id
Retrieving service instance Power Virtual Server-London 06 in all resource groups under
account XXX YYYYY's Account as [email protected]...
crn:v1:bluemix:public:power-iaas:lon06:a/abcdefghijklmnopqrstuvwxyzabcdef:121d5ee5-b87d-4a0
e-86b8-aaff422135478::
As well as the CRN tenant ID and cloud instance ID shown above, we also have to define the
following CRN values:
Example 5-41 CRN values required to connect to PowerVS London 04 API services
crn:
version: "v1"
cname: "bluemix"
ctype: "public"
service_name: "power-iaas"
location: "lon04"
tenant_id: “abcdefghijklmnopqrstuvwxyzabcdef”
cloud_instance_id: “121d5ee5-b87d-4a0e-86b8-aaff422135478”
Note: In the CRN values shown in Example 5-42, the tenant ID and cloud instance ID are those
collected previously in Example 5-41 on page 257. Location must be one of the locations
listed in the 'ibmcloud catalog locations' command line output, e.g. fra01, fra02, lon04,
lon06, dal10, dal12, wdc06, wdc07, mon01, tor01, osa21, sao01, sao04, syd04, syd05 and
tok04.
Example 5-42 Retrieve all VSI names in PowerVS using URI module and APIs
- name: Collect information about all the VSI's in this cloud instance
uri:
url: "https://{{ region }}.power-iaas.cloud.ibm.com/pcloud/{{ api_version
}}/cloud-instances/{{ crn.cloud_instance_id }}/pvm-instances"
method: GET
headers:
Authorization: "Bearer {{ auth_token }}"
CRN: "crn:{{ crn.version }}:{{ crn.cname }}:{{ crn.ctype }}:{{ crn.service_name
}}:{{ crn.location }}:a/{{ crn.tenant_id }}:{{ crn.cloud_instance_id }}::"
Content-Type: application/json
register: pvs_existing_vsi_results
We can see the output of our VSI retrieval request in Example 5-43.
Example 5-43 Display the names of all PowerVS VSIs using URI module and APIs
TASK [Collect information about all the VSI's in this cloud instance]
************************************************************************
ok: [localhost]
Example 5-44 Retrieve all VSI images within our PowerVS environment using URI module and APIs
- name: Collect information about all the images in this cloud instance
uri:
url: "https://{{ region }}.power-iaas.cloud.ibm.com/pcloud/{{ api_version
}}/cloud-instances/{{ crn.cloud_instance_id }}/images"
method: GET
headers:
Authorization: "Bearer {{ auth_token }}"
CRN: "crn:{{ crn.version }}:{{ crn.cname }}:{{ crn.ctype }}:{{ crn.service_name
}}:{{ crn.location }}:a/{{ crn.tenant_id }}:{{ crn.cloud_instance_id }}::"
Content-Type: application/json
register: pvs_images_results
Example 5-46 Retrieve all networks within our PowerVS environment using URI module and APIs
- name: Collect information about all the networks within PowerVS environment
uri:
url: "https://{{ region }}.power-iaas.cloud.ibm.com/pcloud/{{ api_version
}}/cloud-instances/{{ crn.cloud_instance_id }}/networks"
method: GET
headers:
Authorization: "Bearer {{ auth_token }}"
CRN: "crn:{{ crn.version }}:{{ crn.cname }}:{{ crn.ctype }}:{{ crn.service_name
}}:{{ crn.location }}:a/{{ crn.tenant_id }}:{{ crn.cloud_instance_id }}::"
Content-Type: application/json
register: pvs_network_results
The output of the network retrieval can be seen in Example 5-47 on page 260.
In Example 5-48 we use the URI module and the PowerVS ‘pvm-instances’ API service to
retrieve information about all the VSIs. We then filter that information and collect just the ID of
the VM we are interested in {{ vsi_name }}.
The output from the VSI ID retrieval can be seen in Example 5-49
Create a new VSI in IBM PowerVS using URI module and API services
In this section we will create a new PowerVS VSI using the authorization token, along with the
image name and network name collected in the previous sections.
The syntax required to create a new VSI using a POST HTTP method is documented at:
https://cloud.ibm.com/apidocs/power-cloud#pcloud-pvminstances-post
In Example 5-24 on page 246 we showed an example where the content of the URI POST
was contained within the body. In Example 5-50 we define a variable called {{ vsi_info }} which
contains all the required information such as VSI name, image, network etc. We then use that
variable in the body section of the URI module.
Example 5-50 Create a new VSI within PowerVS using URI module and API services
Variable definition
vsi_info:
serverName: "aix-vsi-1"
imageID: "7300-01-01"
processors: 1
procType: "shared"
memory: 4
sysType: "s922"
storageType: "tier3"
networkIDs:
- "public-192_168_xxx_xxx-VLAN_2044"
We can see the VSI being built on the PowerVS UI in Figure 5-7.
Destroy a VSI in IBM PowerVS using URI module and API services
In this section we will destroy an existing PowerVS VSI using the authorization token and the
VSI name. To destroy an existing VSI, you are required to pass its ID along with the DELETE
HTTP method. As we only know the VSI name, we have to obtain its ID first.
The syntax required to destroy a VSI using a DELETE HTTP method is documented at:
https://cloud.ibm.com/apidocs/power-cloud#pcloud-pvminstances-delete
In Example 5-51 we show how to destroy a VSI in PowerVS by passing the VSI name – {{
vsi_name }}. This is then converted into the VSI ID which is used for the deletion command.
Example 5-51 Destroy a PowerVS VSI using URI module and API
- name: Collect information about all the VSI's in this cloud instance
uri:
url: "https://{{ region }}.power-iaas.cloud.ibm.com/pcloud/{{ api_version
}}/cloud-instances/{{ crn.cloud_instance_id }}/pvm-instances"
method: GET
headers:
Authorization: "Bearer {{ auth_token }}"
CRN: "crn:{{ crn.version }}:{{ crn.cname }}:{{ crn.ctype }}:{{ crn.service_name
}}:{{ crn.location }}:a/{{ crn.tenant_id }}:{{
crn.cloud_instance_id }}::"
Content-Type: application/json
register: pvs_existing_vsi_results
The output from running the playbook shown in Example 5-51 on page 262 is shown in
Example 5-52, where the name and ID of the VSI are displayed before destroying it.
Example 5-52 Output of destroying a VSI in PowerVS using URI module and API
TASK [Collect ID of chosen VSI in array format]
*******************************************************************************
ok: [localhost]
We can see the VSI delete tasks recorded in the event logs on the PowerVS UI in Figure 5-8.
In Example 5-53 we show an example of using the URI module and API services to resize
that VSI to 0.75 cores and 6GB of memory.
Example 5-53 Resize a PowerVS VSI using URI module and API services
- name: "Resize VSI {{ vsi_name }} to 0.75 cores and 6GB of memory"
uri:
url: "https://{{ region }}.power-iaas.cloud.ibm.com/pcloud/{{ api_version
}}/cloud-instances/{{ crn.cloud_instance_id }}/pvm-instances/{{ vsi_id }}"
method: PUT
status_code: 202
body_format: json
body: '{
"processors": 0.75,
"memory": 6
}'
headers:
Authorization: "Bearer {{ auth_token }}"
CRN: "crn:{{ crn.version }}:{{ crn.cname }}:{{ crn.ctype }}:{{ crn.service_name
}}:{{ crn.location }}:a/{{ crn.tenant_id }}:{{ crn.cloud_instance_id }}::"
Content-Type: application/json
register: vsi_resize_details
During the resize, we can see the status change on the PowerVS UI, as shown in
Figure 5-10.
The output from the playbook also shows us the resize taking place as seen in Example 5-54.
TASK [Show existing CPU and memory allocation for VSI aix-vsi-1]
*****************************************************************************
ok: [localhost] => {
"current_vsi_spec_details": {
"CPUs": 0.5,
"Memory": 4,
"name": "aix-vsi-1",
"vCPUs": 1
}
}
TASK [Show new CPU and memory allocation for VSI aix-vsi-1]
**********************************************************************************
ok: [localhost] => {
"new_vsi_spec_details": {
"CPUs": 0.75,
"Memory": 6,
"name": "aix-vsi-1",
"vCPUs": 1
}
}
PLAY RECAP *******************************************************************************
localhost : ok=16 changed=0 unreachable=0 failed=0 skipped=2 rescued=0
ignored=0
The output from the PowerVS UI confirms the resize was successful as shown in Figure 5-11.
Figure 5-11 PowerVS UI showing VSI post resize using API services
In the following sections we discuss using Ansible automation in your IBM Power environment
– whether using IBM AIX, IBM i, or Linux on Power – to assist you in managing the following
functions within your environment:
– Storage
– Security and compliance
– Fixes or Upgrades
– Configuration and Tuning
Storage
The storage tasks discuss how to manage and maintain how your data is stored in your
servers. This involves things like monitoring file systems to ensure that they do not run out of
space, monitoring the performance of the storage to ensure it is meeting business
requirements, and generally managing a logical volume manager (LVM) and local filesystems
(FS) at the operating system (OS) level. Using Ansible, you can automate the storage
related tasks with playbooks that are already available for your operating system, or create
your own playbooks accordingly. Some of the functions your Ansible playbooks can do are:
– Create a file system
– Remove a file system
– Mount a file system
– Unmount a file system
– Create LVM volume groups
Security and compliance
Organizations that you work with may require companies in their supply chain to prove
compliance using an independent third-party validation exercise. Ansible Automation Platform
can be an optimal solution for an organization to automate regulatory compliance, security
configuration and remediation across systems and within containers. An organization can use
existing playbook roles that are already available in a community repository or they can
develop their own playbooks and roles to meet their specific business requirements.
There are two roles that need to be considered when you design your security playbooks.
These roles can be run in separate playbooks or they can be combined into a single
playbook. These roles are:
– scanning playbook roles: This role is designed to scan systems based on the
requirements set by the business and generate a report file as both a list of system
updates required in your systems, and a proof of compliance.
– remediation playbook roles: This role applies the appropriate system setting and
applies the required changes to the systems based on their business requirements or
industry or governmental requirements for each operating system.
Fixes or Upgrades
Keeping your system up-to-date involves:
– planning and configuring how and when security updates are installed
– applying changes introduced by newly updated packages or filesets
– keeping track of security advisories.
As security vulnerabilities are discovered, the affected software must be updated in order to
limit any potential security risks. Keeping your system up-to-date requires a patch
management solution to manage and install updates. Updates can fix issues that have been
discovered, improve the performance of existing features, or add new features to software.
Fixes and patch management solutions for each of the supported operating systems for IBM
Power are discussed in the OS related sections later in this chapter.
Configuration and Tuning
Due to the scale and complexity of most enterprise environments, IT teams now use
automation to define and maintain the desired state of their various systems. For more
information see the Red Hat documentation on Configuration Management.
For more details on the Red Hat Enterprise Linux (RHEL) System Roles refer to this link.
The first step is to install the rhel-system-roles package on the Ansible controller node. This is
done using the following command:
yum install rhel-system-roles -y
Note: The blivet API packages are also needed; blivet is the Python interface that can be
used to create scripts for storage administration. The blivet packages can be installed with
the yum command. The needed packages are blivet-data and python3-blivet.
https://access.redhat.com/solutions/3776171
https://access.redhat.com/solutions/3776211
6.2.1 Storage
In this section we show some simple tasks as examples of things that you might want to
automate in your storage environment on your Linux on IBM Power VMs.
Create a new file system and logical volume using the RHEL System Role for storage
In this section we will create an Ansible playbook that will allow us to create a new file system
in your RHEL virtual machine. To do this follow the steps below:
1. Create a playbook using the RHEL System Role called rhel-system-roles.storage. This is
shown in Example 6-1.
# cat create_lvm_filesystem_playbook1.yaml
---
- hosts: all
  vars:
    storage_pools:
      - name: myvg
        disks:
          - /dev/mapper/360050768108201d83800000000008e08p1
          - /dev/mapper/360050768108201d83800000000008e08p2
        volumes:
          - name: mylv1
            size: 1 GiB
            fs_type: xfs
            mount_point: /opt/mount1
  roles:
    - rhel-system-roles.storage
2. Check the inventory file for the list of target systems and then run the playbook created in
Example 6-1 on page 271. The inventory file used in this example is the hosts file. The
process is shown in Example 6-2.
# ls -la
total 52
-rw-r--r--. 1 root root 39216 Aug 27 15:30 ansible.cfg
-rw-r--r--. 1 root root 298 Aug 27 15:35 create_lvm_filesystem_playbook1.yaml
-rw-r--r--. 1 root root    16 Aug 27 15:30 hosts
-rw-r--r--. 1 root root 319 Aug 27 15:54 resize_lvm_filesystem_playbook1.yaml
-rw-r--r--. 1 root root 320 Aug 27 16:14 resize_lvm_filesystem_playbook2.yaml
# cat hosts
bs-rbk-lnx-1.power-iaas.cloud.ibm.com
# ansible-playbook create_lvm_filesystem_playbook1.yaml
TASK [rhel-system-roles.storage : Set the list of pools for test verification]
*********************************************************
ok: [bs-rbk-lnx-1.power-iaas.cloud.ibm.com]
PLAY RECAP
*******************************************************************************
**********************************************
bs-rbk-lnx-1.power-iaas.cloud.ibm.com : ok=21 changed=3 unreachable=0
failed=0 skipped=11 rescued=0 ignored=0
3. Verify the storage configuration using the commands shown in Example 6-3.
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync
Convert
mylv1 myvg -wi-ao---- 1.00g
As we have verified in Example 6-3, the playbook has created a new volume group and
logical volume. It also created an xfs file system and persistently mounted it at the
/opt/mount1 directory.
Resize an existing LVM file system using the RHEL System Role for
storage
In this section we will create an Ansible playbook that will resize an existing LVM based file
system. The first step is to extend the existing volume group which was created in
Example 6-2 on page 272.
Note: Only attempt one change at a time. That is, don't try to extend the volume group
together with resizing the existing file system in a single playbook.
storage_pools:
  - name: myvg
    disks:
      - /dev/mapper/360050768108201d83800000000008e08p1
      - /dev/mapper/360050768108201d83800000000008e08p2
      - /dev/mapper/360050768108201d83800000000008e08p2
    volumes:
      - name: mylv1
        size: 1 GiB
        fs_type: xfs
        mount_point: /opt/mount1
roles:
  - rhel-system-roles.storage
2. Copy some files to the /opt/mount1 mount point to validate that it is available and then run
the second playbook as shown in Example 6-5.
# ansible-playbook resize_lvm_filesystem_playbook1.yaml
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync
Convert
mylv1 myvg -wi-ao---- 1.00g
# ls -l /opt/mount1/
total 8
-rw-r--r--. 1 root root 146 Aug 27 22:17 fstab
-rw-r--r--. 1 root root 225 Aug 27 22:17 hosts
# cat /opt/mount1/hosts
127.0.0.1 localhost localhost.localdomain localhost4
localhost4.localdomain4
::1 localhost localhost.localdomain localhost6
localhost6.localdomain6
192.168.159.133 bs-rbk-lnx-1.power-iaas.cloud.ibm.com bs-rbk-lnx-1
As we have verified in Example 6-6, the playbook has expanded the existing volume
group, but the logical volume, file system, and persistent mount point remain the same
and the data in the file system is still accessible.
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
mylv1 myvg -wi-ao---- 10.00g
# df -h |grep '/opt/mount1'
/dev/mapper/myvg-mylv1 10G 106M 9.9G 2% /opt/mount1
# cat /opt/mount1/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.159.133 bs-rbk-lnx-1.power-iaas.cloud.ibm.com bs-rbk-lnx-1
As we have verified in Example 6-8, the playbook has expanded the existing logical
volume along with the file system, and the data is still accessible.
Additional storage operations can be performed using Ansible playbooks. These allow you to:
– Remove a file system
– Unmount a file system
– Remove LVM volume groups
– Remove logical volumes
For more information on how to use Ansible for these functions, refer to the following link:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/a
utomating_system_administration_by_using_rhel_system_roles/intro-to-rhel-system
-roles_automating-system-administration-by-using-rhel-system-roles
In addition, the README file for the role has more details on how to use the
rhel-system-roles.storage role. It is located on your controller system at:
/usr/share/ansible/roles/rhel-system-roles.storage/README.md
# tree -d rhel-hardening-scanning/
rhel-hardening-scanning/
└── roles
├── rhel8hardening
│ ├── defaults
│ ├── files
│ │ └── pam.d
│ ├── handlers
│ ├── tasks
│ └── templates
└── rhel8scanning
├── defaults
├── files
│ └── pam.d
├── tasks
└── templates
defaults Contains the default variables for the role and defines all the
required variables.
handlers Contains a list of tasks that run only when a change is made on a
machine, and run only after all the tasks in a particular play have been
completed.
files Contains all the files that the role deploys.
templates Contains all the configuration template files that the role deploys.
tasks Contains the list of tasks that the role executes; the main list of
tasks is in the file called main.yml.
Note: For more information on Ansible roles, see the Playbook Guide. For assistance in
developing a role, see Developing an Ansible Role.
The list of files under the sub-directory of rhel-hardening-scanning project directory is shown
in Example 6-10.
Example 6-10 Listing of files under the sub-directory of rhel-hardening-scanning project directory
# tree -f rhel-hardening-scanning/
rhel-hardening-scanning
├── rhel-hardening-scanning/ansible.cfg
├── rhel-hardening-scanning/hosts
├── rhel-hardening-scanning/playbook-rhel8hardening.yml
├── rhel-hardening-scanning/playbook-rhel8scanning.yml
└── rhel-hardening-scanning/roles
├── rhel-hardening-scanning/roles/rhel8hardening
│ ├── rhel-hardening-scanning/roles/rhel8hardening/defaults
│ │ └── rhel-hardening-scanning/roles/rhel8hardening/defaults/main.yml
│ ├── rhel-hardening-scanning/roles/rhel8hardening/files
│ │ ├── rhel-hardening-scanning/roles/rhel8hardening/files/chrony.conf
│ │ ├── rhel-hardening-scanning/roles/rhel8hardening/files/pam.d
│ │ │ ├── rhel-hardening-scanning/roles/rhel8hardening/files/pam.d/password-auth
│ │ │ ├── rhel-hardening-scanning/roles/rhel8hardening/files/pam.d/su
│ │ │ └── rhel-hardening-scanning/roles/rhel8hardening/files/pam.d/system-auth
│ │ └── rhel-hardening-scanning/roles/rhel8hardening/files/rsyslog.conf
│ ├── rhel-hardening-scanning/roles/rhel8hardening/handlers
│ │ └── rhel-hardening-scanning/roles/rhel8hardening/handlers/main.yml
│ ├── rhel-hardening-scanning/roles/rhel8hardening/tasks
│ │ ├── rhel-hardening-scanning/roles/rhel8hardening/tasks/main.yml
│ │ ├── rhel-hardening-scanning/roles/rhel8hardening/tasks/prerequisite.yml
│ │ ├── rhel-hardening-scanning/roles/rhel8hardening/tasks/section_A.yml
│ │ ├── rhel-hardening-scanning/roles/rhel8hardening/tasks/section_B.yml
│ │ ├── rhel-hardening-scanning/roles/rhel8hardening/tasks/section_C.yml
│ │ ├── rhel-hardening-scanning/roles/rhel8hardening/tasks/section_D.yml
│ │ ├── rhel-hardening-scanning/roles/rhel8hardening/tasks/section_E.yml
│ │ ├── rhel-hardening-scanning/roles/rhel8hardening/tasks/section_F.yml
│ │ └── rhel-hardening-scanning/roles/rhel8hardening/tasks/section_G.yml
│ └── rhel-hardening-scanning/roles/rhel8hardening/templates
│ └── rhel-hardening-scanning/roles/rhel8hardening/templates/login.defs.j2
└── rhel-hardening-scanning/roles/rhel8scanning
├── rhel-hardening-scanning/roles/rhel8scanning/defaults
│ └── rhel-hardening-scanning/roles/rhel8scanning/defaults/main.yml
├── rhel-hardening-scanning/roles/rhel8scanning/files
│ ├── rhel-hardening-scanning/roles/rhel8scanning/files/pam.d
│ │ ├── rhel-hardening-scanning/roles/rhel8scanning/files/pam.d/password-auth
│ │ ├── rhel-hardening-scanning/roles/rhel8scanning/files/pam.d/su
│ │ └── rhel-hardening-scanning/roles/rhel8scanning/files/pam.d/system-auth
│ └── rhel-hardening-scanning/roles/rhel8scanning/files/rsyslog.conf
├── rhel-hardening-scanning/roles/rhel8scanning/tasks
│ ├── rhel-hardening-scanning/roles/rhel8scanning/tasks/main.yml
│ ├── rhel-hardening-scanning/roles/rhel8scanning/tasks/postreport.yml
│ ├── rhel-hardening-scanning/roles/rhel8scanning/tasks/prerequisite.yml
│ ├── rhel-hardening-scanning/roles/rhel8scanning/tasks/section_A-report.yml
│ ├── rhel-hardening-scanning/roles/rhel8scanning/tasks/section_B-report.yml
│ ├── rhel-hardening-scanning/roles/rhel8scanning/tasks/section_C-report.yml
│ ├── rhel-hardening-scanning/roles/rhel8scanning/tasks/section_D-report.yml
│ ├── rhel-hardening-scanning/roles/rhel8scanning/tasks/section_E-report.yml
│ ├── rhel-hardening-scanning/roles/rhel8scanning/tasks/section_F-report.yml
│ └── rhel-hardening-scanning/roles/rhel8scanning/tasks/section_G-report.yml
└── rhel-hardening-scanning/roles/rhel8scanning/templates
└── rhel-hardening-scanning/roles/rhel8scanning/templates/report.html.j2
Example 6-11 shows how to run one of the provided playbooks to scan a system from the
Ansible controller node using the Ansible command line.
# ls -l
total 32
-rw-r--r--. 1 root root 19971 Aug 26 12:00 ansible.cfg
-rw-r--r--. 1 root root  1031 Aug 28 21:25 hosts
-rwxrwxrwx. 1 mhaque mhaque 123 Aug 26 11:43 playbook-rhel8hardening.yml
-rwxrwxrwx. 1 mhaque mhaque 125 Aug 26 11:58 playbook-rhel8scanning.yml
drwxrwxr-x. 4 mhaque mhaque 49 Aug 26 11:42 roles
# cat hosts
135.90.72.133
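A minimal invocation, assuming the hosts inventory file and the playbook names shown in the listing above, looks like this:
# ansible-playbook -i hosts playbook-rhel8scanning.yml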
Figure 6-1 shows the screen for Job Templates and Workflow Job Templates configuration.
One benefit of the Ansible Automation Platform – beyond the GUI interface provided – is the
additional management functions provided, such as the ability to require approval before
executing any sensitive playbooks that may change system settings. Figure 6-2
shows defining a Workflow Job Template which is configured to require approval.
Figure 6-4 shows an example of the bottom most part of the report file.
The sample playbooks for security and compliance for Linux on IBM Power System are freely
available in the git repository for your use at:
https://github.com/IBMRedbooks/SG248551-Using-Ansible-for-Automation-in-IBM-Power-
Environments
Red Hat Insights is a software-as-a-service (SaaS) offering that is included with your Red Hat
Enterprise Linux subscription. It includes several capabilities to help with various aspects of
management. The Patch capability can help customers understand which advisories are
applicable in their environments, and can help automate the process of patching via Ansible
playbooks.
For example, if a Red Hat Security Advisory were issued, you could go into the Insights Patch
dashboard to see a list of systems in your environment that are impacted. With a few clicks
from within the Patch dashboard, you can generate an Ansible Playbook that can automate
the advisory installation. Figure 6-5 shows an example architecture for managing fixes and
updates.
Figure 6-5 A sample architecture diagram for RHEL patch management automation
If you have Red Hat Smart Management, the Cloud Connector functionality lets you run the
Ansible playbook right from the Insights web interface. Smart Management, Satellite and
Cloud Connector are not required for use with Insights, and if you are in an environment
without Red Hat Satellite you can still utilize Insights Patch and generate Ansible playbooks
that can be downloaded and manually run.
For more information on getting started with the Red Hat Insights patch capability and how to
download Ansible playbooks refer to this Red Hat document.
Prerequisites
One of the following two options needs to be in place to allow pulling patches and upgrades
from the Red Hat repositories.
1. Use the Red Hat Satellite with the Red Hat Insights patch capability to enable and manage
a standard operating environment for the patches or fixes repository. For more information
see this Red Hat patch management document.
2. Alternatively, you can provide an individual Red Hat Enterprise Linux subscription and
connect with Red Hat Insights.
We have downloaded an Ansible playbook from the Red Hat Insights web console
(https://console.redhat.com/insights/inventory) for advisory patches as seen in
Figure 6-6.
Figure 6-6 Creating a playbook (remediations) to apply patches from Red Hat Insights
After customizing some of the variables, the playbook shown in Example 6-13 is executed.
# Reboots a system if any of the preceding plays sets the 'insights_needs_reboot' variable to true.
# The variable can be overridden to suppress this behavior.
- name: Reboot system (if applicable)
  hosts: "bs-rbk-lnx-1.power-iaas.cloud.ibm.com"
  become: true
  gather_facts: false
  vars:
  tasks:
    - when:
        - insights_needs_reboot is defined
        - insights_needs_reboot
      block:
        - name: Reboot system
          shell: sleep 2 && shutdown -r now "Ansible triggered reboot"
          async: 1
          poll: 0
          ignore_errors: true
Note: Using customized yum/dnf commands will help create a customized list of rpms for
RHEL patching when a RHEL minor version upgrade is not supported by a running
application or is restricted by the application. Some example yum/dnf commands are
shown here:
– yum --bugfix check-update
– yum --security check-update
– yum --advisory check-update
– yum --sec-severity=Important check-update
– yum --sec-severity=Critical check-update
– yum check-update --cve CVE-2008-0947
– yum check-update --bz 1305903
– yum updateinfo list
– yum updateinfo list security all
– yum updateinfo list bugfix all
– yum info-sec
The system has been patched with selective advisory rpms and was successfully rebooted as
some of the rpms required a system reboot.
The Ansible template engine uses the Jinja2 template language
(https://jinja.palletsprojects.com/), a popular template language for the Python ecosystem.
Jinja2 allows you to interpolate variables and
expressions with regular text by using special characters such as {{ and {%. By doing this, you
can keep most of the configuration file as regular text and inject logic only when necessary,
making it easier to create, understand, and maintain template files.
Note: For more information on how to create dynamic configuration files using Ansible
templates refer to https://www.redhat.com/sysadmin/ansible-templates-configuration.
Jinja2 templates are files that use variables to include static values and dynamic values. One
powerful thing about a template is that you can have a basic data file but use variables to
generate values dynamically based on the destination host. Ansible processes templates
using Jinja2.
Example 6-14 shows how to create a file called index.html.j2 for an Apache web server.
Example 6-15 then shows how to use the template created in Example 6-14 in an Ansible
playbook.
Note: For more information on managing Apache web servers using jinja2 templates
and filters refer to: https://www.redhat.com/sysadmin/manage-apache-jinja2-ansible.
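As a minimal sketch of the pattern described in Examples 6-14 and 6-15, a template file and the task that deploys it could look like the following; the file content, the webservers host group, and the destination path are illustrative assumptions rather than the book's exact examples.
# index.html.j2 – rendered per host from gathered facts
<html>
  <body>
    <h1>Welcome to {{ ansible_facts['fqdn'] }}</h1>
    <p>This server runs {{ ansible_facts['distribution'] }} {{ ansible_facts['distribution_version'] }}.</p>
  </body>
</html>

# playbook that renders the template on each managed host
- name: Deploy index.html from a Jinja2 template
  hosts: webservers
  tasks:
    - name: Render index.html.j2 into the Apache document root
      ansible.builtin.template:
        src: index.html.j2
        dest: /var/www/html/index.html
        mode: '0644'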
In Example 6-16 on page 286 we show how to create a playbook file (~/opening-a-port.yml)
which will allow incoming HTTPS traffic to the local host.
Example 6-16 firewall to allow incoming HTTPS traffic to the local host
# cat ~/opening-a-port.yml
---
- name: Configure firewalld
  hosts: managed-node-01.example.com
  tasks:
    - name: Allow incoming HTTPS traffic to the local host
      include_role:
        name: rhel-system-roles.firewall
      vars:
        firewall:
          - port: 443/tcp
            service: http
            state: enabled
            runtime: true
            permanent: true
value: 65536
kernel_settings_sysfs:
- name: /sys/class/net/lo/mtu
value: 65000
kernel_settings_transparent_hugepages: madvise
6.3.1 Storage
Storage management is a very common task that many administrators face on a daily basis. You
can easily automate storage management tasks with Ansible and hand your playbooks to your
operations team to free up more time for important tasks.
When an Ansible playbook starts, its first task is usually to collect some information about the
running system (gathering facts). The information Ansible has collected is available through
the variable ansible_facts. This variable has some parts regarding storage configuration. First
of all, there is a list called ansible_facts.devices with the information about all devices on the
system, including storage devices. You can also find a list of all mounted file systems in
ansible_facts.mounts. Your volume groups are listed in ansible_facts.vgs.
One of the first problems almost every young administrator has on AIX is the absence of “df
-h” (human readable output of all mounted filesystems). Let us solve the problem with
Ansible. As we know, Ansible stores the information about mounted filesystems in
ansible_facts.mounts. We only need to print this information as shown in Example 6-19.
Example 6-19 Print information about mounted file systems in human readable format
---
- name: print information about mounted file systems in human readable format
  hosts: all
  gather_facts: true
  tasks:
    - name: print mounted filesystems
      ansible.builtin.debug:
        msg: "{{ item.device }} {{ item.size_total | ansible.builtin.human_readable }} {{ item.size_available | ansible.builtin.human_readable }} {{ item.mount }}"
      loop: "{{ ansible_facts.mounts | sort(attribute='mount') }}"
      loop_control:
        label: "{{ item.device }}"
Remember that Ansible can be run on several hosts in parallel, so you effectively get a
distributed “df -h” command.
When working with storage you want to understand how you get your disks from a Virtual I/O
Server. You can find the information by analyzing the ansible_facts.devices tree. If you find disks
of type “Virtual SCSI Disk Drive”, you are using VSCSI. If you find disks of type “MPIO IBM
2145 FC Disk” or similar, you are using NPIV. Example 6-20 shows using Ansible facts to
differentiate NPIV and VSCSI disks in a SAN attached IBM DS8000®.
tasks:
  - name: find VSCSI disks
    ansible.builtin.set_fact:
      vscsi_disks: "{{ ansible_facts.devices | dict2items | community.general.json_query(q) }}"
    vars:
      q: "[?value.type == 'Virtual SCSI Disk Drive'].{ name: key }"
  - name: find NPIV disks
    ansible.builtin.set_fact:
      npiv_disks: "{{ ansible_facts.devices | dict2items | community.general.json_query(q) }}"
    vars:
      q: "[?contains(value.type, 'IBM 2145')].{ name: key }"
  - name: VSCSI disks on the system
    ansible.builtin.debug:
      var: vscsi_disks
  - name: NPIV disks on the system
    ansible.builtin.debug:
      var: npiv_disks
One of the very common tasks in AIX system administration is to create a volume group on a
new disk. How can you find out which disk is new? With Ansible it is easy. You start cfgmgr to
find new disks and then re-collect the information about devices. The difference between the two
fact sets is your new disk. Example 6-21 demonstrates this capability.
tasks:
  - name: find all existing hdisks
    ansible.builtin.set_fact:
      existing_disks: "{{ ansible_facts.devices | dict2items | community.general.json_query(q) }}"
    vars:
      q: "[?starts_with(key, 'hdisk')].{ name: key }"
  - name: search for new disks
    ansible.builtin.command:
      cmd: cfgmgr
    changed_when: false
  - name: renew facts
    ansible.builtin.gather_facts:
    ignore_errors: true
  - name: get new list of disks
    ansible.builtin.set_fact:
      disks_after_cfgmgr: "{{ ansible_facts.devices | dict2items | community.general.json_query(q) }}"
    vars:
      q: "[?starts_with(key, 'hdisk')].{ name: key }"
  - name: get new disks
    ansible.builtin.set_fact:
      new_disks: "{{ disks_after_cfgmgr | ansible.builtin.difference(existing_disks) }}"
  - name: print new disks
    ansible.builtin.debug:
      var: new_disks
The resulting output from running the playbook is shown in Example 6-22.
PLAY RECAP
*******************************************************************************************
***********************************************
localhost : ok=7 changed=0 unreachable=0 failed=0 skipped=0
rescued=0 ignored=0
After you find the disk, you may want to set some attributes for it like reserve policy or
hcheck_mode. This can be done using the ibm.power_aix.devices module as shown in
Example 6-23.
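As a minimal sketch, setting a reserve policy and a health-check mode on a newly discovered disk could look like the following; the disk name and attribute values are illustrative and should be checked against the lsattr output for your device.
tasks:
  - name: set attributes on the new disk
    ibm.power_aix.devices:
      device: hdisk6
      attributes:
        reserve_policy: no_reserve
        hcheck_mode: nonactive
      chtype: reset
Once the attributes are set, a volume group can be created on the disk with the ibm.power_aix.lvg module, as the following task shows.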
tasks:
  - name: create volume group
    ibm.power_aix.lvg:
      vg_name: datavg
      pvs: hdisk6
      vg_type: scalable
      pp_size: 256
      state: present
The created volume group is automatically activated and ready for further work like creating
logical volumes or file systems.
If you use the variable new_disks created in Example 6-22 on page 289 instead of entering
the name manually, you must build a string from the variable as shown in Example 6-25
on page 291.
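A minimal sketch of that approach is shown below, assuming new_disks holds the list of dictionaries produced in Example 6-21 and that the pvs parameter accepts a space-separated string of disk names:
- name: create volume group on the newly found disks
  ibm.power_aix.lvg:
    vg_name: datavg
    pvs: "{{ new_disks | map(attribute='name') | join(' ') }}"
    state: present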
A similar method can be used to delete an unneeded volume group. You need only two
attributes – the volume group’s name and state – as shown in Example 6-26.
The volume group will be deleted even if it is open (varied on), but it can’t be deleted if there are
allocations (logical partitions) in it. In this case you must collect LVM-related information,
unmount all filesystems which are located on the volume group, remove all logical volumes
and then delete the volume group as shown in Example 6-27.
tasks:
  - name: gather LVM facts
    ibm.power_aix.lvm_facts:
  - name: "get logical volumes on {{ vgname }}"
    ansible.builtin.set_fact:
      lvols: "{{ ansible_facts.LVM.LVs | dict2items | community.general.json_query(q) }}"
    vars:
      q: "[?value.vg == '{{ vgname }}'].{ name: key, mount: value.mount_point }"
  - name: unmount all filesystems
    ibm.power_aix.mount:
      state: umount
      mount_over_dir: "{{ item.mount }}"
      force: true
    loop: "{{ lvols }}"
  - name: remove all logical volumes
    ibm.power_aix.lvol:
      lv: "{{ item.name }}"
      state: absent
    loop: "{{ lvols }}"
  - name: delete volume group
    ibm.power_aix.lvg:
      vg_name: "{{ vgname }}"
      state: absent
Another common task is when you want to expand a volume group by adding new disks to it
or to shrink it by removing unneeded disks. It is the same procedure as for creating and
deleting a volume group. Adding a disk is demonstrated in Example 6-28.
Remember that the disk must be empty before removing it. If you need to move all logical
volumes to another disk, use the migratepv command. Currently, there is not a special module or
role to free up a disk. However, you can use ansible.builtin.command, which passes any
command to be run as if from the CLI. This is shown in Example 6-30.
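As a minimal sketch, assuming illustrative disk names, the data can be moved off a disk before it is removed from the volume group:
tasks:
  - name: move all logical volumes from hdisk5 to hdisk6
    ansible.builtin.command:
      cmd: migratepv hdisk5 hdisk6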
Using this same logic, if a logical volume name is specified with the state set to absent,
the logical volume will be deleted. This is shown in Example 6-32.
Sometimes you need to change already existing logical volumes. For example, a common
failure during filesystem expansion occurs if the logical volume was sized too small. This
failure is shown in Example 6-33.
You can change the maximum allocation or any other value by using the extra_opts attribute of
ibm.power_aix.lvol. This can be seen in Example 6-34.
Example 6-34 Set maximum allocation for logical volume using extra_opts
- name: set maximum allocation for logical volume
  ibm.power_aix.lvol:
    vg: datavg
    lv: lv01
    size: 1G
    extra_opts: "-x 512"
    state: present
Note: In this case, you must specify the volume group name and the size of the logical volume
even if they don’t change. To see which options you can use in the extra_opts attribute,
refer to the chlv command.
To create a new filesystem when you have an existing logical volume, you can specify the
logical volume name as shown in Example 6-35.
If you don’t have a prepared logical volume you must specify a volume group name to be used
to create the new logical volume and include the size of the logical volume that will be created
as shown in Example 6-36.
auto_mount: true
permissions: rw
attributes: agblksize=4096,logname=INLINE,size=1G
state: present
After you have created a filesystem, you should mount it as shown in Example 6-37.
If you want to change the mount point of an existing filesystem, there is no special module for
this; you can’t change it using the ibm.power_aix.filesystem module. To do this, you must
unmount the filesystem, execute the chfs command, and then mount the filesystem at the new
location.
tasks:
  - name: unmount filesystem
    ibm.power_aix.mount:
      state: umount
      mount_over_dir: /old_mount
      force: true
  - name: change mount point
    ansible.builtin.command:
      cmd: chfs -m /new_mount /old_mount
  - name: mount filesystem
    ibm.power_aix.mount:
      state: mount
      mount_dir: /new_mount
Very often as an AIX administrator you need to change the size of a filesystem. One of the
biggest advantages of AIX is that you can change your filesystem configuration dynamically.
Example 6-39 shows how to expand an existing filesystem.
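A minimal sketch of such an expansion, assuming an illustrative filesystem name and target size, follows; the same attributes syntax appears in Example 6-41 below.
- name: expand /lv02 to 2 GB
  ibm.power_aix.filesystem:
    filesystem: /lv02
    state: present
    attributes: "size=2G"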
Currently there is a known bug in the ibm.power_aix.filesystem module. If there was an error
during chfs execution and no parameters of the original filesystem were changed, you will
often still get a returned status of OK instead of FAILED. One of the reasons why the
chfs fails can be the maximum allocations value on the underlying logical volume, as we saw in
“Working with logical volumes” on page 292. In this case we need to first find the new value
for the maximum allocation and set it before changing the size of the filesystem as
demonstrated in Example 6-41.
Example 6-41 Expand filesystem and change the underlying logical volume
---
- name: expand filesystem
  hosts: all
  gather_facts: false
  vars:
    fs: /lv02
    size: 10G
  tasks:
    - name: get LVM facts
      ibm.power_aix.lvm_facts:
    - name: find logical volume for the filesystem
      ansible.builtin.set_fact:
        lvol: "{{ ansible_facts.LVM.LVs | dict2items | community.general.json_query(q) | first }}"
      vars:
        q: "[?value.mount_point == '{{ fs }}'].{ lvname: key, vgname: value.vg, mount: value.mount_point, lps: value.LPs }"
    - name: find pp size for the logical volume
      ansible.builtin.set_fact:
        ppsize: "{{ ansible_facts.LVM.VGs | dict2items | community.general.json_query(q) | first | human_to_bytes | int }}"
      vars:
        q: "[?key == '{{ lvol.vgname }}'].value.pp_size"
    - name: recalculate new size in bytes
      set_fact:
        newsize: "{{ size | human_to_bytes | int }}"
    - name: find new max lp alloc
      set_fact:
        maxlp: "{{ ((newsize | int) / (ppsize | int)) | round(0, 'ceil') | int }}"
    - name: set max lp to logical volume
      ibm.power_aix.lvol:
        vg: "{{ lvol.vgname }}"
        lv: "{{ lvol.lvname }}"
        size: "{{ lvol.lps }}"
        extra_opts: "-x {{ maxlp }}"
        state: present
    - name: expand filesystem
      ibm.power_aix.filesystem:
        filesystem: "{{ fs }}"
        state: present
        attributes: "size={{ size }}"
If you want to remove an existing filesystem, you should unmount it first as shown in
Example 6-42.
Important: Please bear in mind that AIX automatically deletes the underlying logical
volume with all data on it if you remove a filesystem.
tasks:
  - name: unmount filesystem
    ibm.power_aix.mount:
      mount_over_dir: /lv02
      state: umount
  - name: delete filesystem
    ibm.power_aix.filesystem:
      filesystem: /lv02
      state: absent
One of the common tasks in many environments is to add new disks, create a
new volume group on those disks, and then create a new filesystem using 100% of the
disk space. This can be done by combining the code from the previous tasks to automate the whole
workflow as shown in Example 6-43.
Example 6-43 Create a filesystem on a new disk using 100% of its space
---
- name: create new filesystem on a new disk
  hosts: all
  gather_facts: true
  vars:
    vgname: vgora1
    lvname: lvora1
    fsname: /ora1
  tasks:
    - name: find hdisks
      ansible.builtin.set_fact:
        existing_disks: "{{ ansible_facts.devices | dict2items | community.general.json_query(q) }}"
      vars:
        q: "[?starts_with(key, 'hdisk')].{ name: key }"
    - name: search for new disks
      ansible.builtin.command:
        cmd: cfgmgr
      changed_when: false
    - name: renew facts
      ansible.builtin.gather_facts:
      ignore_errors: true
    - name: get new list of disks
      ansible.builtin.set_fact:
mode: 0755
state: directory
Now you can compare how much time it would take to execute all of these commands
manually and how much time it takes to run the Ansible playbook. Also consider that by
changing the inventory files and variables, this playbook can be reused multiple times.
6.3.2 Security
Security is a really big topic with a lot of nuances. It is impossible to describe the whole set of
different configuration options which can be set in AIX to make it secure. We will go through
some of them. You will find some more information about fixes, updates and general
configuration tuning in the sections 6.3.3, “Fixes” on page 303 and 6.4, “Day 2 operations in
IBM i environments” on page 309.
All possible values for the attributes section can be found in this chuser description. These are
standard AIX attributes you usually use with the mkuser command.
During user creation you can set the user’s password, but it must be encrypted first because it is
copied one-to-one into /etc/security/passwd. If you want the user to change the password
after the first login, add the attribute change_passwd_on_login to the task.
Let’s create a simple password reset task for an AIX user. We generate a new password for the
user and set the flag that the user must change the password at the first login. At the end we
print the newly generated password. This is shown in Example 6-45 on page 299.
tasks:
  - name: generate random password
    ansible.builtin.set_fact:
      newpw: "{{ lookup('ansible.builtin.password', '/dev/null', chars=['ascii_lowercase', 'ascii_uppercase', 'digits', '.,-:_'], length=8) }}"
  - name: encrypt password
    ansible.builtin.shell:
      cmd: "echo \"{smd5}$(echo \"{{ newpw }}\" | openssl passwd -aixmd5 -stdin)\""
    changed_when: false
    register: newpw_enc
  - name: password reset
    ibm.power_aix.user:
      name: "{{ username }}"
      state: modify
      password: "{{ newpw_enc.stdout }}"
      change_passwd_on_login: true
  - name: show the generated password
    ansible.builtin.debug:
      msg: "The new password for {{ username }} is {{ newpw }}"
If you don’t need the user anymore, you can set the state to absent and the user will be
deleted as shown in Example 6-46.
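A minimal sketch of that deletion, using the same module and variable as in the previous example, looks like this:
- name: delete user
  ibm.power_aix.user:
    name: "{{ username }}"
    state: absent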
In a similar way you can create and delete groups - see Example 6-47 for creating a group.
Another very common action on AIX systems is to add and to remove members from different
groups. This is also easy to implement with Ansible and the ibm.power_aix.group module. Adding
members to a group is shown in Example 6-49.
tasks:
  - name: add members to group
    ibm.power_aix.group:
      name: security
      state: modify
      user_list_action: add
      user_list_type: members
      users_list: "{{ newmembers }}"

tasks:
  - name: remove members from group
    ibm.power_aix.group:
      name: security
      state: modify
      user_list_action: remove
      user_list_type: members
      users_list: "{{ rmmembers }}"
Note: Please note the inconsistency in the naming. The attributes user_list_action and
user_list_type are in the singular (user without an s at the end), but the attribute users_list is in
the plural (users with an s at the end).
If you need to configure IP filter without Ansible, you need to go through several smitty menus
or learn the command and parameters to set the filters using the command line. With Ansible
you define a variable with your filter configuration and use one task to activate it as shown in
Example 6-51 on page 301.
If you need to add a new rule, you can add it into the list and run the playbook again. The rule
will be added. If you want to disable IP filter, you remove all filters, set it to allow by default and
then close ipsec devices. This is shown in Example 6-52.
tasks:
  - name: Remove all user-defined and auto-generated filter rules
    ibm.power_aix.mkfilt:
      ipv4:
        default: permit
        force: yes
        rules:
          - action: remove
            id: all
  - name: stop IPSec devices
    ibm.power_aix.devices:
If you want to be sure that IP filter rules are loaded in the correct order, you may remove all
rules before loading them again.
Important: Please be careful changing IP filter configuration. Any mistake in the rules can
lead to lost network connectivity.
Understand that it is aixpert which is doing the work this time, not Ansible. That’s why if you
apply the configuration twice, Ansible will run aixpert twice. It will not change your
configuration the second time and it may even be faster than the first run. But it is aixpert
which checks and applies the security settings, not Ansible.
You can check every time if the configuration is still intact by using check mode as shown in
Example 6-54.
In case the configuration was changed, you can re-apply it as shown in Example 6-55.
Example 6-55 Check and re-apply AIX security settings if they were changed
- name: check applied settings
  ibm.power_aix.aixpert:
    mode: check
  ignore_errors: true
  register: aixpert_check
- name: re-apply security settings
  ibm.power_aix.aixpert:
    mode: apply
    level: medium
  when: aixpert_check.rc == 1
If you want to restore your previous settings, you can do it. This is possible as aixpert saves
the old configuration and you can “undo” all the changes it made, as shown in
Example 6-56.
6.3.3 Fixes
The topic of fixes, especially emergency or interim fixes, is a point of disagreement for many
AIX administrators. Many administrators live according to the rule “if it works, don’t break it”.
Unfortunately what many administrators misunderstand is the first part of the rule - “if it
works”.
If a security issue was found in some AIX component, then it doesn’t work as it was designed
to anymore. You don’t “break” it, you fix it. With Ansible it is easy to fix these security issues. If
your environment supports Live Update, it can even be done while your systems are online
and does not require any downtime.
If your systems do not have access to the Internet, then you are not able to automatically
check and install fixes, and you need to provide a method of acquiring the
appropriate fixes somewhere in your environment. You need to have at least one server
where you can download fixes which can then be distributed to the other systems.
If you do have access to the Internet, but only through a proxy, you will need to configure your
proxy settings. Often it is as easy as exporting the https_proxy variable in your environment,
but sometimes it requires more complex work depending on the specifics of your
environment. Setting up proxy configurations can be automated with Ansible, but it is beyond
the scope of this discussion, which is why we built our examples assuming that you have
access to the Internet.
Note: During our tests we sometimes saw messages like KeyError: 'message'. It seems
that the FLRT service occasionally fails and delivers answers which are not understood by
Ansible. Please repeat the task; the next time it usually runs without any problems.
Example 6-57 shows how to generate a report on which fixes are available for your system.
Example 6-57 Generate report about available security fixes for AIX
- name: generate report about available fixes for the system
  ibm.power_aix.flrtvc:
    apar: sec
    verbose: true
    check_only: true
  register: flrtvc_out
- name: print the report
  ansible.builtin.debug:
    msg: "{{ flrtvc_out.meta['0.report'] | join('\n') }}"
If you want to see the report in a better format, you should export
ANSIBLE_STDOUT_CALLBACK=debug or set it in the command as you call
ansible-playbook as shown in Example 6-58.
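For example, assuming an illustrative playbook name, the callback can be set for a single run like this:
# ANSIBLE_STDOUT_CALLBACK=debug ansible-playbook flrtvc_report.yml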
Installing fixes
To automatically install the fixes you use the same command you used to check for fixes, but
you need to remove “check_only” as shown in Example 6-59.
Ansible will download all of the fixes into /var/adm/ansible. As long as you have enough space
in rootvg this is not a problem as Ansible will automatically expand the filesystem if it requires
more space. You can choose a different location for temporary files by setting the attribute
path to another directory.
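A minimal sketch of the installation task, based on Example 6-57 with check_only removed and an illustrative download path, looks like this:
- name: download and install available security fixes
  ibm.power_aix.flrtvc:
    apar: sec
    path: /var/tmp/flrtvc
    verbose: true
  register: flrtvc_out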
Example 6-60 Copy interim fixes from Ansible controller node to remote AIX server and install them
---
- name: find, copy and install security fixes
  hosts: all
  gather_facts: false
  vars:
    local_fixes_dir: /var/adm/fixes
    remote_fixes_dir: /var/tmp/fixes
  tasks:
    - name: find all fixes
      ansible.builtin.set_fact:
Sometimes, especially before performing an AIX update, you want to remove all fixes. Where
previously you needed to execute many commands or a separate script, now you can do it with only
two tasks in an Ansible playbook.
tasks:
  - name: find all installed fixes
    ibm.power_aix.emgr:
      action: list
    register: emgr
  - name: uninstall fix
    ibm.power_aix.emgr:
      action: remove
      ifix_label: "{{ item.LABEL }}"
    loop: "{{ emgr.ifix_details }}"
    loop_control:
      label: "{{ item.LABEL }}"
Let’s assume we have an AIX server with AIX 7.3 and want to update it to AIX 7.3 TL1 SP2.
The server is registered as a NIM client and has access to NIM resources. Example 6-62
shows an example of how you can update AIX using NIM’s lpp_source resource. It checks
that the AIX server is registered as a NIM client and validates that it does not already have the
update.
Example 6-62 Update AIX server using NIM lpp_source from NIM server
---
- name: update AIX server using NIM
  gather_facts: false
  hosts: nim
  vars:
    client: aix73
    aixver: 7300-01-02-2320
    reboot: false
  tasks:
    - name: check if client is defined
      ansible.builtin.command:
        cmd: lsnim {{ client }}
      changed_when: false
      register: registered
      ignore_errors: true
    - name: stop if the client is not registered
      meta: end_play
      when: registered.rc != 0
    - name: get client version
      ansible.builtin.command:
        cmd: /usr/lpp/bos.sysmgt/nim/methods/c_rsh {{ client }} '( LC_ALL=C /usr/bin/oslevel -s)'
      changed_when: false
      register: oslevel
    - name: stop if the client is already updated
      meta: end_play
      when: oslevel.stdout == aixver
    - name: update client
      ansible.builtin.command:
        cmd: nim -o cust -a lpp_source={{ aixver }}-lpp_source -a fixes=update_all -a accept_licenses=yes {{ client }}
    - name: reboot client
      ibm.power_aix.reboot:
      when: reboot
Of course you can do it in the other direction - from the NIM client, as shown in Example 6-63.
Example 6-63 Update AIX using NIM’s lpp_source from NIM client
---
- name: update AIX server using NIM
  gather_facts: false
  hosts: aix73
  vars:
    aixver: 7300-01-02-2320
    reboot: false
  tasks:
    - name: check if NIM client is configured
      ansible.builtin.command:
        cmd: nimclient -l master
      changed_when: false
      register: registered
      ignore_errors: true
    - name: stop if the client is not registered
      meta: end_play
      when: registered.rc != 0
    - name: get client version
      ansible.builtin.command:
        cmd: oslevel -s
      changed_when: false
      register: oslevel
    - name: stop if the client is already updated
      meta: end_play
      when: oslevel.stdout == aixver
    - name: update client
      ansible.builtin.command:
        cmd: nimclient -o cust -a lpp_source={{ aixver }}-lpp_source -a fixes=update_all -a accept_licenses=yes
    - name: reboot client
      ibm.power_aix.reboot:
      when: reboot
In a similar way you can update any software which is packed in lpp_source on your NIM
server.
Most AIX settings are stored in so-called stanza files. They have sections, attributes, and
values. You can change the values of attributes by using the chsec module as shown in
Example 6-64.
Of course this is not the only way to change AIX settings. You can use standard
ansible.builtin.template and ansible.builtin.copy modules to set AIX settings in the
configuration files.
Security is not the only reason to automate AIX configuration. Another reason can be
maintaining performance baselines. An application vendor like Oracle or SAP can define
some values you have to set up on AIX to achieve better performance. An AIX administrator
usually does it by using tunables commands like vmo, no, schedo or by setting device
attributes, all of which is possible with Ansible. Example 6-65 shows how we can set
reserve_policy to no_reserve and queue_depth to 24 for each disk we find in the system.
tasks:
  - name: find hdisks
    ansible.builtin.set_fact:
      disks: "{{ ansible_facts.devices | dict2items | community.general.json_query(q) }}"
    vars:
      q: "[?starts_with(key, 'hdisk')].key"
  - name: set hdisk attributes
    ibm.power_aix.devices:
      device: "{{ item }}"
      attributes:
        reserve_policy: no_reserve
        queue_depth: 24
      chtype: reset
    loop: "{{ disks }}"
Example 6-66 shows how we can set some popular network tunables using Ansible.
In a similar way you can set other tunables like vmo, schedo, ioo, and nfso.
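As a minimal sketch of one way to do this, the standard AIX no command can be driven from ansible.builtin.command; the tunable names and values below are illustrative assumptions, not prescribed settings.
tasks:
  - name: set popular network tunables permanently
    ansible.builtin.command:
      cmd: "no -p -o {{ item.name }}={{ item.value }}"
    loop:
      - { name: tcp_sendspace, value: 262144 }
      - { name: tcp_recvspace, value: 262144 }
      - { name: rfc1323, value: 1 }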
It is not possible to provide detailed descriptions of all of the options available, nor can we
describe each use case, as every environment is different. We have tried to provide a
little guidance and an initial impression of what can be done on AIX using Ansible.
Remember, Ansible has a very vibrant ecosystem. Every month you see new features in
Ansible collections and Ansible itself. If you can’t find some feature or module, please report it.
Create an issue on GitHub or a topic on the IBM community site. You will help yourself and
others to better automate by doing so.
6.4.1 Storage
In the following section, we explore storage management in the context of IBM i.
This segment aims to explore and demystify key storage-related tasks and configurations,
harnessing the power of Ansible automation. Utilizing Ansible's capabilities, we aim to
streamline and enhance the storage management experience for IBM i users, offering
efficient solutions to common challenges.
The os_volume module interacts seamlessly with designated clouds, improving operations by
providing default authentication values. Users can specify target cloud environments,
ensuring secure communication between Ansible and the cloud.
Notably, this module allows granular control over volume size (gigabytes), accommodating
precise resource allocation for IBM i. Volume naming enhances organization, and the module
supports volume type specification, tailoring resources for various workloads.
Incorporating the os_volume module into Ansible simplifies storage management, promoting
efficient provisioning and configuration. This exemplifies the dynamic storage landscape
required by IBM i environments. Refer to Example 6-67 on page 310 for a playbook
showcasing volume creation.
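A minimal sketch of such a volume creation follows; the cloud name, volume name, and size are illustrative assumptions, and the play assumes a clouds.yaml entry pointing at the PowerVC endpoint.
- name: create a volume for an IBM i workload
  hosts: localhost
  tasks:
    - name: create a 20 GB volume through the configured cloud
      os_volume:
        cloud: powervc
        state: present
        size: 20
        display_name: ibmi_data_vol01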
With its core functionality, os_server_volume presents administrators with the option to define
the desired state of the resource, whether present or absent. By accommodating
named clouds, this module permits precise targeting of cloud environments for the operation.
Default authentication values simplify setup and bolster secure communication between
Ansible and the cloud.
The module's efficacy lies in its capacity to associate volumes with specific IBM i virtual
machines. By providing the name of the target virtual machine and the volume, administrators
can quickly attach storage resources, enabling the VM to access the necessary data. This
module exemplifies the synergy between Ansible's automation capabilities and the storage
demands of IBM i. Example 6-68 presents a sample playbook to attach a volume to an IBM i VM
as follows:
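A minimal sketch of such an attachment is shown below; the cloud, server, and volume names are illustrative assumptions.
- name: attach a volume to an IBM i virtual machine
  hosts: localhost
  tasks:
    - name: attach ibmi_data_vol01 to the IBM i VM
      os_server_volume:
        cloud: powervc
        state: present
        server: ibmi_vm01
        volume: ibmi_data_vol01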
Note: For the effective utilization of the os_server_volume and os_volume modules, a
prerequisite is the presence of IBM PowerVC as the orchestrator for your IBM i virtual
machines. This integration emphasizes the significance of IBM PowerVC in bolstering the
flexibility and resilience of your IBM i infrastructure.
The os_iasp_volume role plays a crucial role in this configuration, efficiently orchestrating
volume integration. The playbook first checks the IASP's existence and creates it if needed,
showcasing Ansible's adaptability.
IASPs offer distinct benefits, enabling isolated disk unit management. The playbook
demonstrates Ansible's integration with os_iasp_volume, effortlessly configuring
non-configured disks into the IASP. This integration optimizes storage alignment, bolstering
overall efficiency.
The playbook highlights Ansible's excellence in configuring IASP volumes. Administrators can
expertly manage storage, ensuring a resilient landscape meeting evolving IBM i demands.
Example 6-69 presents a sample setting up IASP volumes.
Digital signing emerges as a critical practice for ensuring the authenticity and integrity of
software objects. This becomes especially pertinent when objects traverse the Internet or
reside on media that could be susceptible to unauthorized modifications. The use of digital
signatures, managed through mechanisms like the Verify Object Restore (QVFYOBJRST)
system value and the Check Manager tool, aids in detecting any unauthorized alterations.
Single sign-on (SSO) amplifies user convenience by allowing access to multiple systems with
a single set of credentials. IBM facilitates SSO through Network Authentication Service (NAS)
and Enterprise Identity Mapping (EIM), both utilizing the Kerberos protocol for user
authentication. User profiles serve as a versatile tool to enforce role-based access control
and personalize user experiences within the system. Group profiles extend this concept by
centralizing authority assignments for groups of users.
Resource security is implemented through the concept of authorities, governing the ability to
access objects. The system offers finely grained authority definitions, including subsets such
as *ALL, *CHANGE, *USE, and *EXCLUDE. This mechanism applies not only to files,
programs, and libraries but also to any object within the system.
Encryption stands relevant for security, with IBM i enabling data encryption at the ASP and
Database Column levels. However, encryption operations can be carefully managed to
mitigate performance implications. Security audit journals provide a means to monitor
security effectiveness, allowing selected security-related events to be logged for review.
In the context of Ansible for IBM i, the integration of security measures is accommodated
through a range of purpose-built modules. These modules permit diverse requirements,
including security and compliance checks, enabling administrators to configure, verify, and
optimize security settings. Ansible for IBM i offers a robust ecosystem that allows
administrators to ensure security compliance by referencing the CIS IBM i Benchmark
documentation and employing regularly updated security compliance playbooks. This
dynamic framework contributes to the creation of secure IBM i environments that align with
modern security paradigms.
Currently, the focus of these playbooks centers around security compliance checks. These
playbooks are initially presented as basic examples, with plans to expand their contents
based on security compliance suggestions outlined in the CIS IBM i Benchmark
documentation.
Note: For more detailed guidance on implementing security practices on IBM i systems
using Ansible, explore the provided use cases and security management resources
available at GitHub repository IBM i Security Management
To stay up-to-date, it is recommended to regularly review this directory under the 'devel'
branch.
This part explains the playbooks that are involved in this use case as
follows:
1. main.yml: This playbook serves as an entry point, orchestrating the execution of all other
playbooks contained within this directory. Running this playbook will initiate the execution
of the entire suite.
2. manage_system_values.yml: The purpose of this playbook is to verify security-related
system values against recommendations from the CIS IBM i Benchmark documentation.
This playbook offers two separate YAML files for checking and remediating, along with
three distinct modes of operation.
a. system_value_check.yml: Conducts a compliance check on system values by
comparing them with the expected values.
b. system_value_remediation.yml: Provides remediation options based on user input,
allowing remediation to be performed after a comprehensive review of the report.
3. manage_user_profiles.yml: This playbook leverages the 'ibmi_user_compliance_check'
module and the 'ibmi_sql_query' module to assess user profile settings.
a. user_profile_check.yml: Performs a compliance check on user profiles.
b. user_profile_remediation.yml: Offers suggestions for remediation and carries out
remediation actions based on user input.
4. manage_network_settings.yml: This playbook verifies a single network attribute setting by
invoking the “Retrieve Network Attributes (RTVNETA)” command.
5. manage_object_authorities.yml: This playbook validates object authorities. It currently
offers a basic example utilizing the 'ibmi_object_authority' module.
Additional Information
For a comprehensive understanding of how to execute a playbook dedicated to Secure
Compliance for IBM i, it is recommended that you refer to Appendix A of the IBM Redbooks
publication IBM Power Systems Cloud Security Guide: Protect IT Infrastructure In All Layers,
REDP-5659. That section presents a detailed use case specifically focusing on security
compliance for IBM Power Systems using Red Hat Ansible. Within the section titled “Security
and Compliance with Red Hat Ansible for IBM i,” you will find a thorough walk through of the
configuration process.
The section covers essential elements such as configuring the Ansible configuration file and
the inventory file. It then proceeds to demonstrate the execution of the Ansible playbook. It is
relevant to note that the playbook contains prompts related to specific checks for the
managed node, remediation, or both. Additionally, prompts regarding the level of security
definitions are also present, including Level 1 for Corporate and Enterprise Environment and
Level 2 for High Security and Sensitive Data Environment.
During the execution of the playbook, you encounter fail messages within certain tasks. It is
essential to interpret these messages as false-positive results. The playbook generates
JSON files as final reports. These files serve as comprehensive reports to assess the
outcomes on the managed IBM i node. The generated JSON files are stored in the /tmp
directory.
In particular, the JSON report provides a systematic overview of the security management
system values report. This detailed report sheds light on the security aspects analyzed during
the execution of the playbook, offering insights into the compliance status of the IBM i
environment.
The mechanism centers on IBM Fix Central, an online portal that establishes an Internet
connection with multiple IBM i instances. These instances consist of two parts:
1. The first instance is the “PTF and Image Repository IFS.” This is where the acquired fixes
are stored and where automatic detection of new PTF groups is enabled.
2. The second instance is the “PTF and PTF Group Catalog.” This stores information about
downloaded fixes and assembles catalogs with new media.
This dynamic use case, equipped with comprehensive functionalities, is available for
download and adaptation. For more information, refer to GitHub repository: Fix Management
The playbook disables the default fact collection through the gather_facts: false parameter and uses the ibm.power_ibmi collection to execute its tasks. The become_user_name and become_user_password variables are employed for privilege escalation.
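A minimal sketch of such a play header, with a hypothetical inventory group and variable values, might look like the following; the credential handling shown is an assumption, not the book's exact playbook:

---
- hosts: ibmi_nodes                  # hypothetical group of managed IBM i nodes
  gather_facts: false                # default fact collection is disabled
  collections:
    - ibm.power_ibmi
  vars:
    become_user_name: 'QSECOFR'                          # privilege escalation user (assumption)
    become_user_password: '{{ vault_become_password }}'  # ideally supplied from Ansible Vault
  tasks:
    # ... SQL-based analysis tasks follow here ...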
The playbook orchestrates a series of tasks that offer in-depth insights into network
communications:
1. Review most data transfer connections: This task employs the ibmi_sql_query module
to retrieve connections transferring substantial data (over 1 GB). The retrieved results are
registered, providing crucial metrics about data flow. The subsequent debug task
showcases these results, aiding administrators in evaluating resource utilization and
potential bottlenecks.
2. Analyze remote IP address detail for password failures: Utilizing SQL queries, this
task identifies and counts occurrences of failed password attempts from remote IP
addresses within the past 24 hours. The gathered information is registered and presented
through the debug task. This analysis assists administrators in detecting potential security
threats and unauthorized access attempts.
3. Review TCP/IP routes: This task utilizes the ibmi_sql_query module to pinpoint TCP/IP
routes with inactive local binding interfaces. By registering and displaying the details of
these routes, administrators can identify and rectify potential network configuration issues
that might impact communication reliability.
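For example, the first of these tasks could be expressed along the following lines. The SQL text, column names, and registered variable are illustrative and should be verified against the QSYS2.NETSTAT_INFO service on your IBM i release:

- name: Review connections transferring more than 1 GB of data
  ibm.power_ibmi.ibmi_sql_query:
    sql: >-
      SELECT REMOTE_ADDRESS, REMOTE_PORT, LOCAL_PORT,
             BYTES_SENT_REMOTELY, BYTES_RECEIVED_LOCALLY
      FROM QSYS2.NETSTAT_INFO
      WHERE BYTES_SENT_REMOTELY + BYTES_RECEIVED_LOCALLY > 1073741824
  register: large_transfers

- name: Display the connections that move the most data
  debug:
    var: large_transfers.row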
With default fact gathering disabled via gather_facts: false, the playbook harnesses the ibm.power_ibmi collection for execution.
The playbook orchestrates a series of tasks for thorough message handling optimization:
1. Analyze next IPL status: This task employs the ibmi_sql_query module to examine
history log messages since the last Initial Program Load (IPL) to predict the nature of the
next IPL. It assesses if it will be normal or abnormal based on specific messages. The
results are registered, and an assert task ensures the next IPL is predicted to be normal,
enhancing system predictability and stability.
2. Examine system operator inquiry messages with replies: This task employs SQL
queries to retrieve system operator inquiry messages and their associated replies from the
message queue 'QSYSOPR'. The gathered information is registered, offering
administrators insights into system operator interactions and responses, thereby
promoting efficient communication and issue resolution.
3. Examine system operator inquiry messages without replies: By analyzing system
operator inquiry messages that have not received replies, this task enhances message
handling efficiency. SQL queries extract relevant data from the 'QSYSOPR' message
queue, and the results are registered and presented through the debug task.
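A hedged sketch of the third task might look like this. The SQL is an illustrative query against the QSYS2.MESSAGE_QUEUE_INFO service, and the reply correlation performed by the real playbook is omitted here:

- name: Examine inquiry messages on the QSYSOPR message queue
  ibm.power_ibmi.ibmi_sql_query:
    sql: >-
      SELECT MESSAGE_ID, MESSAGE_TIMESTAMP, MESSAGE_TEXT
      FROM QSYS2.MESSAGE_QUEUE_INFO
      WHERE MESSAGE_QUEUE_LIBRARY = 'QSYS'
        AND MESSAGE_QUEUE_NAME = 'QSYSOPR'
        AND MESSAGE_TYPE = 'INQUIRY'
  register: qsysopr_inquiries

- name: Display the inquiry messages found
  debug:
    var: qsysopr_inquiries.row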
This playbook utilizes a series of critical tasks geared towards efficient work management, all of which are key for maintaining system health; a sketch of one such query follows the list:
1. Scheduled job evaluation: The playbook employs the ibmi_sql_query module to assess
job schedule entries that are no longer effective due to explicit holding or scheduling
limitations. The inspection, centered on the QSYS2.SCHEDULED_JOB_INFO view,
targets 'HELD' and 'SAVED' status entries. Results, emphasizing maintained efficiency,
are stored under the job_schedule_status variable.
2. Job queue and temporary storage analysis: The playbook capitalizes on SQL queries
to uncover jobs awaiting execution within job queues (QSYS2.JOB_INFO). Furthermore, it
scrutinizes the top four consumers of temporary storage based on memory pool usage,
isolating jobs with temporary storage exceeding 1GB. These analyses contribute to
optimized system resource allocation.
3. Lock contention evaluation: Through SQL queries, the playbook identifies jobs
encountering excessive lock contention. By querying the QSYS2.ACTIVE_JOB_INFO
view, it isolates jobs with combined database and non-database lock waits surpassing
2000. Insights are essential for maintaining operations.
4. QTEMP resource utilization inspection: The playbook evaluates host server jobs
utilizing more than 10MB of QTEMP storage. With qsys2.active_job_info, jobs meeting this
criterion are identified, enabling efficient resource allocation.
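The scheduled job evaluation in the first task, for instance, can be sketched as follows. The SQL mirrors the commonly documented QSYS2.SCHEDULED_JOB_INFO query, while the surrounding task structure is an assumption:

- name: Find job schedule entries that are no longer in effect
  ibm.power_ibmi.ibmi_sql_query:
    sql: >-
      SELECT *
      FROM QSYS2.SCHEDULED_JOB_INFO
      WHERE STATUS IN ('HELD', 'SAVED')
  register: job_schedule_status

- name: Report the held or saved job schedule entries
  debug:
    var: job_schedule_status.row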
IBM continues to develop new content and improve existing content within these collections. If we take a look at the IBM Power AIX collection on Ansible Galaxy (for example), we can see the version release cycle in Figure 7-1.
7.1.1 Working closely with the IBM Power collections and their contents
You can see what is included in the Ansible IBM Power collections and sample playbooks on the IBM GitHub pages:
ibm.power_aix https://github.com/IBM/ansible-power-aix
ibm.power_ibmi https://github.com/IBM/ansible-for-i
ibm.power_hmc https://github.com/IBM/ansible-power-hmc
ibm.power_vios https://github.com/IBM/ansible-power-vios
Within the GitHub repositories you can see the code used to supply the collection, including the README, the modules, and some sample playbooks.
You can also report problems or request enhancements within the ‘Issues’ section of the GitHub repository for the collection. You will see three options: ‘bug report’, ‘custom issue template’, and ‘feature request’, as shown in Figure 7-2.
You can also contribute to the collections by creating your own fork from the repository,
making your changes to your fork and raising a pull request. This way the development team
will see your proposed changes and either merge them into the collection or reject them.
What is new in Ansible Automation Platform v2.4, including the Technical Preview announcement, can be seen at the following link:
https://www.ansible.com/blog/whats-new-in-ansible-automation-platform-2.4
Details on the general availability of Ansible Automation Platform can be found on this Red
Hat Blog.
Not only does this allow you to run the Ansible Controller (formerly Tower) on IBM Power, but also all the other components that make up Ansible Automation Platform, including execution environments, Event-Driven Ansible, and automation hub.
The Ansible extension provides smart auto-completion, syntax highlighting, validation, documentation references, integration with ansible-lint, diagnostics, goto definition support, and command windows to run the ansible-playbook and ansible-navigator tools for both local and execution-environment setups.
Figure 7-3 Installing the Ansible extension for Visual Studio Code
The next step is to open the folder that will contain your Ansible files using the Explorer icon in
the top left as shown in Figure 7-4.
The first time you open an Ansible file (either .yaml or .yml), there are a few steps that you need to complete:
1. Define which Python environment the Ansible extension should use, by clicking on the
Python version indicator - which is located on the right hand of the Status Bar, as shown in
Figure 7-5.
2. Associate the .yaml or .yml files with the Ansible file type, by clicking on the language
indicator - which is located on the right hand of the Status Bar, and selecting Ansible from
the drop down as shown in Figure 7-6 on page 322. The language indicator will probably
be set to YAML before associating with the Ansible file type.
The Ansible extension should now recognize YAML files as Ansible language, offer syntax
checking, documentation links, and other contextual aids to help you write Ansible code.
Note: If you have the ansible-lint package installed, this will automatically be integrated for
syntax checking and code validation.
If you are working in a larger environment, you will probably not write, test and run your
Ansible code on the same workstation where you installed Visual Studio Code. In this case,
you will need to install the Remote SSH extension.
Once installed, you can open the VS Code Command Palette by pressing F1 and searching for ‘remote’, as shown in Figure 7-8. Use either ‘Remote-SSH: Connect to Host...’ or ‘Remote-SSH: Add New SSH Host’ to enter the hostname and user credentials for the remote machine you will use to test and run your Ansible code.
Syntax highlighting
Ansible module names, module options, and keywords are recognized and displayed in distinctive colors so that the developer can see whether the language syntax matches the intended purpose. Default colors change depending on the color theme used. A sample in the Visual Studio Dark theme is shown in Figure 7-9.
Validation
The Ansible extension provides feedback regarding syntax as you type and any potential
problems are shown in the ‘Problems’ tab of the integrated terminal, as shown in Figure 7-10
on page 324.
Documentation reference
Hovering over a module name, module option, or keyword shows a brief description of the item as a ‘tooltip’, as shown in Figure 7-13. You can display a full definition by right-clicking the item and selecting ‘Go to definition’; the full definition appears in a separate tab. Alternatively, you can select ‘Peek’ from the menu to display the definition as a pop-up.
The ‘Source Control’ icon on the left taskbar shows an overview of changed files that may need to be updated in your GitHub repository. Clicking the icon opens the Source Control view, which allows you to easily commit changes with a message and push them to your repository, as shown in Figure 7-15.
Clicking the ‘Views and More Actions’ menu in the top right of the Source Control view allows many more Git operations, such as clone, branch, and configuring a remote repository, as shown in Figure 7-16 on page 327.
A full discussion of using Git in Visual Studio Code is beyond the scope of this book, but more information can be found from various sources on the Internet, including:
https://code.visualstudio.com/docs/sourcecontrol/overview
https://www.ibm.com/garage/method/practices/code/visual-studio/
7.2.3 IBM watsonx Code Assistant for Red Hat Ansible Lightspeed
IBM watsonx Code Assistant for Red Hat Ansible Lightspeed is a joint project between IBM
and Red Hat that offers access to Ansible content recommendations through the use of
natural language automation descriptions. This project is accessible through the integration of an IBM AI cloud service operated by Red Hat and the Ansible Visual Studio Code plugin, and is offered to the Ansible community to use, without cost. This service uses, among other data,
roles and collections that are available through the community website, Ansible Galaxy.
IBM watsonx Code Assistant for Red Hat Ansible Lightspeed has been released and is available for use by Red Hat customers. At this point IBM watsonx Code Assistant for Red
Hat Ansible Lightspeed does not write complete playbooks, but can generate syntactically
correct and contextually relevant content using natural language requests written in plain
English text.
Getting Started
To enable IBM watsonx Code Assistant for Red Hat Ansible Lightspeed, you need the Visual Studio Code Ansible extension from Red Hat discussed in 7.2.2, “Visual Studio Code” on page 320.
You will also need a Red Hat login.
Once you have Visual Studio Code and the Ansible extension installed, go to the Settings
panel for the Ansible extension as shown in Figure 7-17.
Then click on the Ansible icon (the letter A) in the left taskbar to display the Ansible
Lightspeed Login panel as shown in Figure 7-18.
Click on the Connect button in the ‘Ansible Lightspeed Login’ panel and you will be redirected
to the IBM watsonx Code Assistant for Red Hat Ansible Lightspeed login web page. Follow the prompts to log in with your Red Hat credentials as shown in Figure 7-19.
Once authenticated, accept the Terms & Conditions to enable IBM watsonx Code Assistant
for Red Hat Ansible Lightspeed.
The final step is to authorize VS Code to interact with IBM watsonx Code Assistant for Red
Hat Ansible Lightspeed extension by sending prompts and receiving code suggestions as
shown in Figure 7-20. You should then see in the left taskbar that you are logged into IBM
watsonx Code Assistant for Red Hat Ansible Lightspeed.
Figure 7-20 Authorize Ansible to interact with Ansible Lightspeed with Watson Code Assistant
Using IBM watsonx Code Assistant for Red Hat Ansible Lightspeed
To use IBM watsonx Code Assistant for Red Hat Ansible Lightspeed to get code
recommendations for Ansible tasks, open a valid Ansible YAML file in the code editor. Check
the bottom status bar of VS Code to ensure that the YAML file is recognized as Ansible
language, and that Lightspeed is enabled.
Enter a task name and a description of what you want the task to do. Press Enter at the end
of the line, and you should receive a code suggestion as shown in Figure 7-21 on page 330.
The code suggestion will be shown in a gray font. Review the suggested code and either
press Tab to accept the recommendation, or press Esc or Enter to dismiss it.
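To illustrate the interaction (this is a hypothetical prompt and suggestion, not output captured from the service), a playbook might look like this after accepting a recommendation:

---
- name: Demonstrate an Ansible Lightspeed prompt
  hosts: all
  tasks:
    # Typing this task name and pressing Enter triggers a Lightspeed suggestion
    - name: Install the nginx package
      # A suggestion similar to the following may appear in gray; press Tab to accept it
      ansible.builtin.package:
        name: nginx
        state: present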
The source of the code suggestion is shown in the ‘Ansible: Lightspeed Training Matches’ tab
of the panel below the code editor window as shown in Figure 7-22.
Note: The panel below the code editor shows various tabs, including Problems, Output, Debug Console, and the integrated terminal. It can be toggled on and off with Command-J (Mac) or Control-J (Windows/Linux).
If a recommendation is accepted, and then further edits are performed, then the act of
changing the recommendation to something else will be considered a modification of the
recommendation. This will tell Red Hat and IBM that the recommendation required extra
action in order to meet the intended use. This information will be used for context in training
the model for similar prompts in the future.
The telemetry data is first anonymized and then sent whenever you switch to a different file in Visual Studio Code or create a new Ansible task in the same Ansible Playbook. For more
information see Getting a Recommendation.
Some example tasks to try out in IBM watsonx Code Assistant for Red Hat Ansible
Lightspeed are shown in Example 7-1.
Given the nature of deep learning technology, as well as the kinds of content used to train Lightspeed and generated by it, it is not possible to identify the specific training data inputs that contributed to particular Lightspeed output recommendations. Nevertheless, Lightspeed
includes a feature to help users that are interested in understanding possible origins of
generated content recommendations. When Lightspeed generates a recommendation, it will
attempt to find items in the training dataset that closely resemble the recommendation. In
such cases, Lightspeed will display licensing information and a source repository link for the
training data matches in a panel interface in the VS Code extension.
This feature may enable users to ascertain open source license terms that are associated with related training data. This feature has been implemented even though it is believed to be unlikely that the training data used in fine-tuning, or the output recommendations themselves, are generally protected by copyright, or that the output reproduces training data content controlled by copyright licensing terms.
Red Hat does not claim any copyright or other intellectual property rights in the suggestions
generated by the IBM watsonx Code Assistant for Red Hat Ansible Lightspeed service. For
more information see Matching Recommendation to Training Data.
A.1 Introduction
In today's rapidly evolving business landscape, IBM i customers face the pressing need to
modernize their applications and stay ahead of the game. As you plan your IT environment,
we understand the top concerns that you grapple with, from “cobbling together” various tools
to “force-fitting” IBM i native file systems. Many are still reliant on outdated technologies which
lack automated change control and project builds, and grapple with monolithic, non-modular
designs and ancient source editors.
At IBM, we are committed to guiding the market towards “IBM i Next Gen Apps” – applications
that can quickly respond to business needs through DevOps, CI/CD, and Agile
methodologies. With a focus on encapsulating processes and data, we help you create
assets for the business by blending technology to achieve the best fit for purpose. Moreover,
we enable you to easily incorporate new technologies, even if they are not currently
“in-house.”
To get to “IBM i Next Gen”, you need to address various challenges, such as:
– Converting fixed-format RPG to free format
– Understanding and managing high volumes of code
– Refactoring mega-programs into modules
– Ensuring intelligent builds amidst spaghetti code.
Exposing embedded logic as services and adopting a “service consumption” mindset and
tools are crucial steps forward. Utilizing modern tools such as Git for common source code
management can be transformative.
Fortunately, IBM i offers a range of modernization technologies to bridge the gap. Modern
RPG and its integration with contemporary development tools helps address the talent gap.
Connectivity with cloud-based and containerized applications via REST APIs facilitates smooth
communication between systems. We provide ISV and Open Source tools to modernize old
source code and fully adopt DevOps practices.
One such tool is IBM i Modernization Engine for Lifecycle Integration (Merlin) – an innovative
set of OpenShift-based tools designed to guide and assist software developers in
modernizing IBM i applications and development processes. Running in OpenShift
containers, IBM i Merlin permits you to unlock the value of hybrid cloud and provides a
multi-platform DevOps implementation. The framework simplifies the adoption of DevOps and
CI/CD practices, while utilizing technologies that promote services-based software through
RESTful interface connections and enterprise message technologies.
The Merlin platform includes IBM i VM management, which provisions, manages, and deletes
IBM i virtual machines through PowerVC or PowerVS in IBM Cloud. One of the actions
available to run on the IBM i server is Enable Ansible environment, which installs the yum, python, and Ansible packages. With IBM i Merlin, we pave the way to the future of
application development, propelling you towards the realms of efficiency, agility, and
innovation.
Merlin embraces modern, industry-standard tooling and standardization, making it accessible to the younger generation already familiar with these tools.
Central to Merlin's impact is the inclusion of the RPG converter, a pivotal tool enabling the
modernization of core RPG code. With this advancement, RPG becomes more appealing and
user-friendly, enticing new developers into the fold.
Moreover, Merlin champions cloud infrastructure migration, providing agile Dev and test
environments. This paradigm shift offers productivity gains, opening doors to new resources
accessible from any location with top-notch security measures.
Figure A-1 shows an overview of the IBM i Modernization Engine for Lifecycle Integration GUI. This interface has been meticulously crafted to enhance the capabilities of
IBM i users, allowing interaction with hybrid cloud work tools. These tools facilitate modern
development and deployment of IBM i native applications, utilizing standardized cloud
methods. The GUI showcases Merlin's commitment to providing accessible and efficient
solutions for application modernization and integration in the dynamic IT landscape.
IBM i Merlin strategically emphasizes adopting modern tools and processes such as DevOps,
cloud services, and hybrid cloud solutions – propelling the IBM i ecosystem into a new era of
efficiency and adaptability. Through the integration of container-based tools, clients gain the
agility to keep pace with the ever-evolving demands of the market.
IBM i Merlin takes center stage in the Red Hat OpenShift conversation, positioning IBM i
directly at the forefront of modernization discussions. Its containerized architecture opens
doors for clients to use the potential of hybrid cloud and multi-platform DevOps
implementation to their advantage.
A crucial factor in the development of IBM i Merlin was the active involvement of IBM i
customer advisory councils, ensuring that the solution aligns with the specific needs and
aspirations of clients. Furthermore, expert minds in the IBM i modernization domain stand
ready to assist clients in adopting IBM i Merlin, ensuring an effortless and successful
transition.
With IBM i Merlin as your ally, embark on a journey of transformation, bridging the gap
between existing systems and cutting-edge technologies. Confidently step into the future of
the IBM i ecosystem, and let IBM i Merlin be your guide to a brighter, more agile tomorrow.
The key benefits of Merlin include:
– Transformation of RPG code from fixed to free format for enhanced modularity and readability through refactoring.
– Use of GitHub, GitLab, Bitbucket, or Gitbucket to enhance source control efficiency and facilitate efficient branching processes for improved development workflows.
– Merlin's capabilities for impact analysis, program understanding, data usage analysis, and program flow visualization, ensuring informed decisions and application integrity.
– Merlin's browser-centric IDE with features such as outline view, tokenization, content assist, code formatting, and language understanding.
In 2003, ARCAD achieved integration with WDSC (predecessor of RDi), laying the
groundwork for their future endeavors. Four years later, in 2007, they introduced the RPG
Free Form converter, revolutionizing RPG development on IBM i. Their commitment to
innovation was recognized in 2012 when ARCAD was awarded the prestigious IBM Rational®
Innovation Award.
The collaboration deepened further in 2013 with ARCAD licenses becoming available in IBM
Passport Advantage®, completing RTC integration. In 2016, ARCAD smoothly integrated with
Urbancode, enhancing their DevOps capabilities for IBM i. The following year, in 2017,
ARCAD Observer and RPG converter found their way into the e-config Channel, expanding
accessibility to these powerful tools.
Note: IBM and ARCAD Software have had a long-standing partnership. ARCAD had previously created plugins for the architecture being designed. To deliver the best value to the market as fast as possible, IBM chose to work with ARCAD to deliver a product with integrated RPG modernization and impact analysis.
The following individual components empower Merlin to excel in both problem-solving and
modernization:
Eclipse Theia
At the heart of Merlin's development environment resides Eclipse Theia, an open-source
iteration of Microsoft's original Visual Studio Code (VS Code). This dynamic platform
offers a versatile and user-friendly integrated development environment (IDE) for crafting
and refining IBM i applications. For further information about the IDE, see “Summary of the Integrated Development Environment (IDE)” on page 339.
Eclipse Che
A pivotal element in Merlin's infrastructure, Eclipse Che provides the workspace server responsible for crafting, managing, and orchestrating the IDE within a Kubernetes
environment. This integration ensures a fluid and efficient development process which is
further enhanced by Kubernetes orchestration.
ARCAD Transformer
Integral to the Merlin ecosystem, Transformer – formerly known as ARCAD Converter –
exemplifies the powerful software that facilitates the conversion and transformation of
existing code into more contemporary and efficient forms. This essential tool simplifies the
modernization process and contributes to the advancement of IBM i applications.
ARCAD Builder
A cornerstone of Merlin's capabilities, ARCAD's build management software (Builder)
empowers developers with advanced tools for efficiently assembling and managing
application components. This software promotes consistency, reliability, and efficient
deployment practices throughout the development lifecycle.
ARCAD Observer
Within Merlin's toolkit, the Observer part of ARCAD emerges as a noteworthy software
component. This tool provides comprehensive insights into the application development
and deployment processes, offering valuable visibility and control over critical aspects of
the development lifecycle.
Integration with Git
Merlin integrates with Git, a widely adopted version control system, enhancing
collaboration and code management. This integration streamlines code repository
operations and aligns Merlin's capabilities with modern development practices.
Jenkins
A crucial component, Jenkins facilitates continuous integration and continuous
deployment (CI/CD) pipelines, a central focus of Merlin's development approach. IBM has
incorporated Jenkins as part of Merlin, enhancing the platform's ability to automate and
optimize the application delivery pipeline.
Red Hat OpenShift
Merlin finds its home and operational foundation within Red Hat OpenShift, a robust
container platform. OpenShift empowers Merlin to be installed, managed, and executed
efficiently, supporting its role as a transformative solution for IBM i modernization.
IBM Cloud Pak foundational services
IBM Cloud Pak® foundational services constitute the cornerstone of the Cloud Pak
ecosystem, encompassing critical tools such as Certificate Manager for efficient certificate
administration, enabling secure connections, and License Manager for centralized
software entitlement tracking, enhancing licensing efficiency. These services underscore a
commitment to a robust and secure cloud environment.
Note: Customers do not need to pay extra to acquire the ARCAD functions. These
functions are fully integrated into the Merlin product. As a result, developers have complete
access to “Fixed to Free” format conversion, an integrated impact analysis tool, and the
capability to utilize intelligent build support directly within the IDE. These valuable features,
powered by ARCAD, are fully integrated and included as essential components of the
Merlin solution.
These integrated capabilities include:
– Integrated without disruption with Code Ready Workspaces.
– Robust project explorer facilitating efficient IBM i environment and source management.
– Intelligent build with integrated compile feedback, defined metadata, and a comprehensive joblog explorer.
– Git integration for utilizing Git-based tooling, encompassing actions such as pull, push, and merge.
– Efficient Git repository setup, enabling code migration from a previous library to Git.
– Conversion of code to fully free form, utilizing deep expertise in transitioning source control from existing tooling to Git-centric workflows.
– ARCAD dependency-based build, addressing the complexities of IBM i applications through automated tooling and processes.
– ARCAD impact analysis, offering valuable insights into application linkages, data usage, and code flow visualization.
Key attributes:
– A modernization platform guiding IBM i applications toward hybrid DevOps.
– Exposing IBM i native functions via RESTful interfaces and centralizing IBM i connections management.
– Facilitating the use of tools for DevOps and services-based software implementation.
This process is visually depicted in Figure A-3, where the OperatorHub under Red Hat
OpenShift for Merlin is displayed.
Figure A-3 OperatorHub integration with Red Hat OpenShift for Merlin
In Figure A-4 you can observe the extensive capabilities of the Merlin platform. These include the Merlin tool lifecycle, authentication, certification management, user management, monitoring, inventory management, credential management, IBM i VM management, and the IBM i software installer.
Moreover, the Merlin layer itself showcases the Merlin platform's composition, which includes
both the GUI and engine. The Merlin tools are further depicted, incorporating the IDE, CI/CD,
and other essential elements.
Key points:
1. User-friendly GUI for REST services: Merlin's platform provides an intuitive interface for
launching RESTful service creation.
2. Develop RESTful services: Create RESTful services for IBM i programs and data on IBM
i systems in the initial release.
3. Native IBM i execution: RESTful services continue to run natively on IBM i.
4. Wide language support: Support for RPG, COBOL, and program/service program
(PGM/SRVPGM).
5. Data Integration: Integrate data stored in DB2 for i into your RESTful services.
Figure A-5 shows the creation of a Web Services server based on IBM i objects including
RPG and COBOL programs, alongside SQL statements.
Merlin incorporates specialized Ansible playbooks for PowerVC and PowerVS environments.
While not the mainstream approach due to often static IBM i LPAR structures, administrators
can enable this functionality. This process involves:
1. Initiate the process by crafting a PowerVC template within the Inventory. Figure A-6
shows a visual depiction of the Inventory interface, which facilitates this essential step in
the workflow.
2. In this step, you configure the PowerVC credentials, enabling Merlin to securely
communicate with the PowerVC instance. This involves providing the necessary
authentication details to establish a connection. Figure A-7 shows editing the Inventory.
3. In this step, leverage Merlin's intuitive GUI to initiate VM provisioning, which works with a Merlin template. This process simplifies the creation of virtual machines, facilitating the deployment of IBM i instances on PowerVC or PowerVS. Follow the on-screen instructions as shown in Figure A-8 and input the necessary details to customize your VM's configuration.
4. After completing the configuration, a dedicated Merlin menu becomes available within the GUI, offering access to the VM provisioning process. This feature transforms Merlin into a central hub that permits developers to provision PowerVC or PowerVS VMs tailored for modernization projects. New VMs integrate dynamically into the Inventory for use by 'IBM i Developer' or 'CI CD' services. As shown in Figure A-9 on page 344, go to Provision -> Deploy Virtual Machine.
Figure A-9 Navigating the Deploy Virtual Machine menu in Merlin's GUI
Note: Similar steps apply to PowerVS provisioning, requiring an IBM Cloud API key for credentials. Refer to Manage IBM i Servers.
Within the Merlin pod, a set of Ansible playbooks and the Ansible engine facilitate internal
setup, initialization of the Merlin-IBM i environment, and PowerVS or PowerVC provisioning.
Note: It is worth noting that Merlin does not incorporate Terraform. This signifies that
Terraform is not utilized as an automation tool for provisioning IBM i VMs within Merlin.
Instead, the automation tool employed is Ansible.
Merlin version 1.0 introduces an Ansible controller integrated into the Merlin engine. This
controller orchestrates:
– Internal playbooks designed for IBM i VM provisioning using PowerVC or PowerVS through the use of OpenStack and IBM Cloud modules. This includes two key playbooks: VM Provisioning and VM Destroy.
– A default set of six playbooks which are referred to as 'actions' in Merlin.
Administrators are required to execute these actions in a sequential manner to prepare the
target IBM i LPAR for efficient management by Merlin. These actions pave the way for
development with Merlin, build processes, and CI/CD practices.
The six actions provided by Merlin are:
i. Enabling Ansible: Installs essential packages such as yum, python, and Ansible on
the IBM i server.
ii. Validating PTF Level: Verifies the PTF (Program Temporary Fix) level using Merlin.
Figure A-10 Six actions on IBM i performed from Merlin by the administrator.
Note: In future releases, based on the evolving product roadmap which is subject to potential changes, additional playbooks will be introduced. These forthcoming playbooks will address tasks such as PTF management, Security and Compliance management, and more, using Ansible to effectively manage IBM i environments. However, such additional playbooks are not included in version 1.0, reflecting Merlin's aim to offer user-friendly and supported automation even for those with limited Ansible knowledge or skills.
Note: By following these guidelines, your organization can make significant strides towards
a more efficient and innovative future.
However, for the IBM i ecosystem, these attempts resulted in a force-fit scenario, as the
unique requirements of the IBM i platform needed to be integrated within these existing
frameworks. Figure A-11 illustrates the complexities and challenges faced by those striving to
align their processes. This diagram showcases the DevOps landscape prevalent before
Merlin's introduction, emphasizing the need for a more tailored solution.
Figure A-11 DevOps MVP Architecture and integration diagram before Merlin
Figure A-12 on page 347 depicts the architecture that was constructed. It is important to note that the primary focus is on continuous integration (CI), although continuous deployment (CD) requires additional workflows.
Figure A-12 End-to-end CI/CD process diagram for IBM i with Ansible
Now examine the playbooks central to this use case. The set of playbooks, along with an
inventory file and the Ansible configuration file, is outlined in Example A-1.
Example A-1 Set of playbooks to run full cycle of the CI/CD process on IBM i
|-- add_build_system.yml
|-- ansible.cfg
|-- build.yml
|-- cleanup.yml
|-- git_clone.yml
|-- hosts.ini
|-- main.yml
|-- post_build_actions.yml
|-- provision_vars.yml
|-- provision_vm.yml
`-- put_code.yml
The core playbook for CI/CD is labeled as main.yml. Example A-2 offers a clear insight into the upcoming execution process.
Example A-2 Excerpt of the main.yml playbook
  vars_prompt:
    - name: git_branch
      prompt: "Enter a git branch"
      private: no
  collections:
    - ibm.power_ibmi
  tasks:
    - set_fact:
        build_lib: "BUILD_{{ build_number }}"
    - set_fact:
        build_path: "/tmp/{{ build_lib }}"
        local_workspace: '~/workspace/{{ build_lib }}'
    - block:
        - name: Step 1 - clone source code from git
          include: git_clone.yml
        - block:
            - name: include provision related vars if provision is true
              include_vars: provision_vars.yml
      always:
        - name: Step 6 - cleanup on demand
          include: cleanup.yml
          when: cleanup
...
The steps outlined in the main.yml (as shown in Example A-2 on page 347) call distinct YAML
files. These steps are:
1. Clone source code: When exploring the realm of CI/CD, the underlying objective remains
consistent. Essentially, you initiate a pipeline with an input – often your source code – and
as this section progresses, an outcome is generated, which can be a packaged program.
Consider tasks akin to a “git clone.” Consequently, it is likely that the initial step in your
pipeline predominantly involves cloning.
The subsequent YAML file, dedicated to this task, follows a sequence:
– If a local workspace already exists on the system, it is removed.
– Following this, a new local workspace is created at the localhost.
– Subsequently, the action involves cloning a git repository at the localhost, where the
localhost functions as the IBM i control node in this context.
The requisites for this action include the repository URL, the previously created local
workspace, and the designated git branch.
Notably, the variables pertaining to this process are specified within the main.yml file. The YAML file dedicated to the cloning process is shown in Example A-3.
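A minimal sketch of such a clone step, assuming variable names such as git_repo_url, local_workspace, and git_branch defined in main.yml, might resemble the following:

---
# Illustrative clone step; the real git_clone.yml may differ in detail
- name: Remove the local workspace if it already exists
  ansible.builtin.file:
    path: "{{ local_workspace }}"
    state: absent
  delegate_to: localhost

- name: Create a new local workspace on the IBM i control node
  ansible.builtin.file:
    path: "{{ local_workspace }}"
    state: directory
  delegate_to: localhost

- name: Clone the Git repository into the local workspace
  ansible.builtin.git:
    repo: "{{ git_repo_url }}"       # repository URL (assumed variable name)
    dest: "{{ local_workspace }}"
    version: "{{ git_branch }}"
  delegate_to: localhost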
b. Provision virtual machine: This YAML file introduces the inclusion of a PowerVC host
into the in-memory inventory, effectively supplanting the manual approach of editing
the inventory file for host addition. In the process of provisioning, the os_server
compute instance from OpenStack is employed. Post-provisioning, an additional task is
executed to sift through the output, thereby revealing the IP address of the freshly
provisioned IBM i virtual machine. This is shown in Example A-5.
Example A-5 Playbook that introduces a PowerVC host into the in-memory inventory
---
# Add powervc host to in-memory inventory
- name: Add PowerVC host {{ powervc_host }} to Ansible in-memory inventory
  add_host:
    name: 'powervc'
    ansible_user: '{{ powervc_admin }}'
    ansible_ssh_pass: '{{ powervc_admin_password }}'
    ansible_ssh_extra_args: -o StrictHostKeyChecking=no
    ansible_python_interpreter: /usr/bin/python3
    ansible_ssh_host: '{{ powervc_host }}'
  no_log: true
3. Add build system: This YAML file adds the deployed IBM i VM to the in-memory inventory. Notably, the inventory is populated with values through the utilization of set_fact. The IP address of the IBM i VM, obtained from the register during deployment (vm_info.server.accessIPv4), is a key inclusion.
Additionally, values from variables such as ansible_ssh_user and ansible_ssh_pass,
which are defined in the hosts.ini file, are integrated. Subsequently, the known_hosts
module plays a role in adding or removing the host key (SSH) for the deployed IBM i VM.
This key is essential for the control node to manage the new virtual machine via Ansible. It
is relevant that the term “non-fixed” refers to a new VM, which requires verification of the
program (PGM).
Another crucial module comes into play: wait_for_connection. This module monitors the
new VM's status until it successfully establishes an SSH connection. It is important to
recall that the managed node necessitates Python 3 and associated packages.
Consequently, these prerequisites are installed using the raw module. The structure of the
YAML file is shown in Example A-6.
Example A-6 Playbook that adds the deployed IBM i VM to the in-memory inventory
---
- block:
    - name: set_fact for non-fixed build environment
      set_fact:
        build_system_ip: "{{ vm_info.server.accessIPv4 }}"
        build_system_user: '{{ hostvars["non-fixed"]["ansible_ssh_user"] }}'
        build_system_pass: '{{ hostvars["non-fixed"]["ansible_ssh_pass"] }}'
- name: remove existing entry for vm in case ssh header change occurs.
  known_hosts:
    name: "{{ build_system_ip }}"
    path: ~/.ssh/known_hosts
    state: absent
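The remainder of this step, as described above, can be sketched as follows. The host alias matches the delegate_to targets used in the later examples, while the package list and interpreter path are assumptions about a typical IBM i open source environment:

- name: Add the new IBM i VM to the in-memory inventory as build_system
  add_host:
    name: 'build_system'
    ansible_ssh_host: "{{ build_system_ip }}"
    ansible_ssh_user: "{{ build_system_user }}"
    ansible_ssh_pass: "{{ build_system_pass }}"
    ansible_python_interpreter: /QOpenSys/pkgs/bin/python3

- name: Wait until an SSH connection to the new VM can be established
  wait_for_connection:
    sleep: 10
    timeout: 1800
  delegate_to: 'build_system'

- name: Install Python 3 and related packages with the raw module
  raw: /QOpenSys/pkgs/bin/yum install -y python3 python3-itoolkit python3-ibm_db
  delegate_to: 'build_system'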
4. Put code: This YAML file orchestrates a sequence of tasks essential for code deployment.
Initially, the ibmi_cl_command module is employed to establish a library at the new virtual
machine. This operation is delegated to the IBM i controller node. The subsequent task
centers around the .netrc file, housing login credentials for authorized access to the
newly created IBM i VM. This file resides in the home directory of the IBM i controller
node.
A set of tasks ensues within a block structure, focusing on path creation. The ansible.builtin.file module is employed in this process to enable the creation of directories, guided by the specified variables from main.yml. Notably, this approach encompasses subdirectories as required, ensuring a comprehensive directory structure.
The ensuing task is centered on the transfer of the 'C' program from the IBM i control node
to the new IBM i virtual machine. To facilitate this transfer and eliminate interactive prompt
passwords, the task utilizes the sshpass utility. The structure of the YAML file is provided in
Example A-7.
Example A-7 Playbook for orchestrating the tasks for code deployment
---
- name: Create {{ build_lib }} on {{ build_system_ip }}
  ibmi_cl_command:
    cmd: CRTLIB {{ build_lib }}
  delegate_to: "build_system"
- block:
    - name: Create {{ build_path }} on remote IBM i
      ansible.builtin.file:
        path: "{{ build_path }}"
        state: "directory"
      delegate_to: "build_system"
5. Build: This YAML file orchestrates the execution of crucial tasks within the build process. Here, the ibmi_cl_command module comes into play, specifically for invoking the Create Bound C++ Program (CRTBNDCPP) command, thereby initiating the ILE C++ compiler. This
operation incorporates the utilization of specific variables, as defined in main.yml, for the
purpose of parameterizing the IBM i command. Notably, build_lib and build_path are
among the variables employed.
Of significance is the use of the source stream file (SRCSTMF), which accommodates the
program's source code. This code is initially cloned from the Git repository and
subsequently transferred to the newly created IBM i VM. Specifically, the program source
file named sendMsg.c is involved in this process. Upon compilation, an ILE C++ program
object named SENDMSG is generated.
Example A-8 shows the structure of the YAML file providing insight into the build process.
Example A-8 Playbook to orchestrate the execution of tasks within the build process
---
- block:
    - name: call CL command to build application
      ibm.power_ibmi.ibmi_cl_command:
        cmd: CRTBNDCPP PGM({{ build_lib }}/SENDMSG) SRCSTMF('{{ build_path }}/sendMsg.c')
      when: build_with_stmfs
      delegate_to: 'build_system'
...
6. Post-build actions: In this section, the focus shifts to the execution of essential tasks following the build process. The pivotal task is the invocation of the SENDMSG program, followed by registering its output. This output is then systematically filtered to present the result of the program invocation.
It is noteworthy to highlight the utilization of a conditional 'when' directive within this context.
Specifically, the 'when' directive is employed to evaluate a predefined condition, denoted
as 'true' within main.yml. This conditional assessment serves as the enabling factor for the
execution of the task program, ensuring its activation in the appropriate scenario.
The structure and sequence of tasks pertaining to post-build actions are outlined in
Example A-9.
Example A-9 Playbook for running built programs with Stream Files (STMFs)
---
- name: run PGM built with STMFs
  ibm.power_ibmi.ibmi_cl_command:
    cmd: CALL {{ build_lib }}/SENDMSG
    joblog: true
  register: callpgm
  when: build_with_stmfs
- name: PGM output
  debug:
    var: callpgm.job_log[0].MESSAGE_TEXT
...
7. Cleanup: In this YAML file (shown in Example A-10), the process involves the removal of
the local workspace and directories from the newly created IBM i VM. Additionally, it
encompasses the deletion of the IBM i VM, which was utilized to test the program. Notably,
the pause module, coupled with a prompt, is employed to ensure that the cleanup tasks
proceed only upon pressing the Enter key.
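A hedged sketch of the cleanup logic described here, with assumed variable and host names and with OpenStack authentication details omitted, might look like this:

---
- name: Confirm before cleaning up
  pause:
    prompt: "Press Enter to remove the workspace, build directories, and the test VM"

- name: Remove the local workspace from the control node
  ansible.builtin.file:
    path: "{{ local_workspace }}"
    state: absent
  delegate_to: localhost

- name: Remove the build directory from the new IBM i VM
  ansible.builtin.file:
    path: "{{ build_path }}"
    state: absent
  delegate_to: 'build_system'

- name: Delete the IBM i virtual machine used for the build
  os_server:
    cloud: "{{ powervc_cloud }}"     # assumed clouds.yaml entry on the PowerVC host
    name: "{{ vm_name }}"            # assumed variable holding the VM name
    state: absent
  delegate_to: 'powervc'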
Note: Before the advent of Merlin, DevOps environments often involved intricate setups
using tools such as Jenkins and Ansible. While effective for various platforms, integrating
IBM i requirements posed unique challenges. The DevOps MVP architecture overview,
explored through steps such as clone source code, provisioning, and more, highlights the
complexities faced in harmonizing processes. This retrospective underscores Merlin's role
in offering a more tailored and efficient IBM i DevOps solution.
The focal point of attention revolves around DevOps, a pivotal paradigm driving the
continuous integration and delivery (CI/CD) pipeline. Within this domain, a meticulously
crafted suite of tools has emerged, centered on Git and Jenkins. These tools are purposefully
designed to strengthen the IBM i platform by facilitating a dynamic approach to continuous
development. This involves the automated compilation of code segments extracted from RPG
or COBOL applications, subsequently deploying them to diverse IBM i endpoints. The
orchestration of these tasks is expertly managed by Jenkins. For a visual depiction, consult
Figure A-13.
Figure A-13 Enhancing IBM i platform with Git and Jenkins in DevOps CI/CD pipeline
Furthermore, the collaborative partnership between ARCAD and IBM has yielded a deep
understanding of the specific needs and intricacies of the IBM i platform. This knowledge has
been instrumental in fine-tuning the integration of ARCAD's solutions with Merlin, ensuring a
harmonious alignment with IBM i requirements. The robustness of this collaboration is
exemplified by ARCAD's suite of plugins that interact with Merlin's capabilities, facilitating a
well-integrated and efficient development experience. The resulting synergy between
ARCAD's expertise and Merlin's capabilities empowers organizations to achieve optimized
DevOps practices and realize the full potential of their IBM i investments. Refer to Figure A-14
for a visual representation of this enriching collaboration.
Figure A-14 Enriching the DevOps landscape: ARCAD's integration with Merlin
Note: The OpenShift environment can be situated on an IBM Power Systems server or any
location compatible with current OpenShift implementations. Additionally, OpenShift can be
hosted within a cloud instance, such as IBM Cloud (IBM Power Virtual Servers), or within
any cloud platform that accommodates OpenShift environments. Clients who already have
workloads functioning in the cloud can effortlessly extend their operations by integrating
Merlin into an OpenShift environment within the cloud.
Merlin is specifically designed for Red Hat OpenShift containers, applicable to both IBM
Power Systems (ppc64) and x86 architectures.
Note: While some familiarity with Git and Jenkins is essential, in-depth knowledge of Linux
or OpenShift is not a prerequisite for the role.
In the depicted scenario, a Merlin administrator assumes a pivotal role, wielding direct access
to the Merlin platform GUI while orchestrating a spectrum of activities tailored for IBM i users.
The scope of responsibilities entails crucial operations to ensure the platform's efficient
functionality for IBM i users. These include, but are not limited to:
– The installation and deployment of Merlin tools, encompassing essential components such as the IDE, CI/CD functionalities, and more.
In the depicted scenario, a Merlin user enjoys direct access to the Merlin platform and tools. This user engages with Merlin to achieve the following objectives:
– Access and utilize the deployed IDE, CI/CD, and other functionalities.
– Ably manage inventory, user profiles, and credentials for targeted systems.
– Independently oversee the lifecycle of Merlin tools, subject to the requisite authority.
– Skillfully create RESTful services on designated IBM i systems.
The integrated CI/CD tooling streamlines the development workflow and enhances efficiency. Importantly, Merlin provides a user-friendly GUI tailored specifically for IBM i, ensuring a smooth experience. Refer to Figure A-18 for a visual representation of these capabilities.
Key features:
– Simplified Jenkins complexity: Merlin shields you from the intricacies of Jenkins, enabling you to concentrate on your IBM i CI/CD processes.
– Choice of deployment: You can choose to deploy without a Jenkins server, utilize your own Jenkins instance, or take advantage of the provided Jenkins server integrated with ARCAD plugins.
– Flexible profile management: Private and public profiles offer a robust means to generate Jenkins pipelines dynamically. These profiles can be easily shared among Merlin users, promoting collaboration and consistent practices.
In Figure A-19, you can observe the representation of the many developers who are still utilizing PDM.
Yet, the primary aim is to assist IBM i customers seeking to adopt Git as their source control repository while embracing a modern browser-based development arena. Thus, IBM crafted a code-ready workspace, harnessing Eclipse Theia and Che along with an array of code-editing capabilities.
What adds significance is the integration of ARCAD's longstanding tools that have played a
pivotal role in development pursuits over the years. Features such as effortless RPG
conversion to modern free format and immediate access to an impact analysis tool are now
essential aspects rather than mere afterthoughts within this innovative framework.
Figure A-20 demonstrates the IDE, setting up your development environment, the
incorporation of rich editing capabilities, and building and compiling your project. Designed for
contemporary developers, the offering includes natural Git integration, intelligent Build
functions, self-contained projects, code comprehension, and integrated impact analysis.
Note: BOB refers to the “Build on the Board” feature. BOB is a tool provided by ARCAD
that facilitates the automated compilation and build process for RPG and COBOL
applications on the IBM i platform. It allows developers to initiate the build process directly
from a visual board or interface, helping to streamline the development workflow and
enhance efficiency. The BOB tool is designed to integrate with DevOps practices and
continuous integration processes, allowing for faster and more automated application
builds.
While RDi caters to creating and updating native ILE applications on IBM i, Merlin brings a
holistic Continuous Integration/Continuous Deployment (CI/CD) ecosystem centered around
Jenkins. It serves as more than just an Integrated Development Environment (IDE), offering a
comprehensive suite of tools and plugins to facilitate modern development practices. Merlin
equips developers with features such as Fixed to Free conversion, integration with Git-based
source control, and real-time application impact analysis. Furthermore, it integrates with
automated build and deployment pipelines, streamlining the entire development lifecycle.
A key distinction lies in the modernization capabilities of Merlin. Code crafted within Merlin
can still be modified using RDi. While SEU (Source Entry Utility) can also be used for further
code modification, Merlin's support for the latest RPG versions emphasizes a shift towards contemporary coding approaches, encouraging developers to adopt newer paradigms for enhanced efficiency and sustainability.
This robust integration extends its benefits to both the Integrated Development Environment
(IDE) and the Continuous Integration/Continuous Delivery (CI/CD) process. Users can initiate
actions such as impact analysis directly from the IDE, while in the CI/CD pipeline, objects are
automatically built along with their dependencies, all thanks to the consistently maintained
metadata repository. This automated process eliminates the necessity of manually managing
make files, a task that can become unwieldy in enterprise-level settings. Importantly, the
Transformer RPG component serves as the initial step in the modernization journey. It
enables transitioning code to a contemporary standard – fully free-format RPG – before
engaging the broader range of modern tools and coding capabilities offered by Merlin.
This intricate integration and comprehensive enhancement underscore Merlin's pivotal role as
a potent modernization engine, focused on lifecycle integration.
Figure A-21 IBM Merlin - ARCAD - powered tooling for enhanced development
Developer Merlin
The Developer Merlin environment provides a range of capabilities to enhance the
development process:
Connections: Includes features such as inventory management, credentials management,
and template setup, enabling efficient access to the resources needed for development
tasks.
Tools: Developer Merlin offers a suite of tools, including those that have been deployed
and configured for specific tasks. Of particular importance is the IBM i Developer tool,
which empowers developers with seamless integration. Within this tool, you can perform
actions such as right-clicking to run applications, streamlining the development workflow.
Create workspace: This function allows developers to establish a dedicated workspace
tailored to their requirements, ensuring an organized and efficient development
experience.
Refer to Figure A-22 for an illustrative portrayal of the initial workspace within IBM i Developer, providing a visual representation of the starting point for developers using Merlin.
Development Flow
Below are the key principles that define the Merlin development flow:
Inspired by GitFlow: The development flow model takes inspiration from the GitFlow
methodology, a well-established branching strategy. This approach provides a structured
framework for managing code changes, releases, and collaboration among developers.
Adaptable: The development flow is designed to be adaptable, accommodating various
project requirements and team dynamics. It can be tailored to suit the specific needs of the
development team, making it versatile and flexible.
Master and development - no direct changes: The primary development branches, namely
'Master' and 'Development,' are kept free from direct changes. Instead, developers work
on feature branches or other specialized branches, ensuring that the main development
branches remain stable and reliable.
Other Branches: The development flow includes several types of branches that serve
distinct purposes:
– Feature branches are created for developing new features or functionalities. These
branches allow developers to work on isolated changes without affecting the main
codebase.
– Release branches are used to prepare the codebase for a new release. They are ideal
for bug fixes, last-minute adjustments, and testing before a release.
– HotFix branches are created to address critical issues in the production environment.
They enable swift fixes without interrupting ongoing development efforts.
Branch – ARCAD version: Each branch created within the development flow is
accompanied by an associated ARCAD version. This version management ensures
proper tracking and integration of changes, providing clear visibility into the status and
progress of development activities.
Note: Git, a distributed version control system, offers several compelling advantages for
source code management:
Line-level visibility of changes: Unlike traditional change management systems, Git
provides a granular view of changes at the line level. This allows developers to precisely
track modifications, enhancing transparency.
Enhanced management of concurrent development: Git's decentralized nature allows
multiple developers to work on different branches simultaneously, facilitating smoother
collaboration and concurrent development efforts.
Explicit merges: When two changes are merged into the same codebase, Git makes
this process explicit. This ensures that changes are intentionally combined, reducing
the risk of accidental conflicts.
Controlled commits: Git's commit process includes conflict checks, enabling developers
to review and manage potential conflicts before finalizing changes. This enhances code
quality and reduces integration challenges.
Offline usage: Git's offline capabilities enable developers to track local changes even
when disconnected from a network. This flexibility supports productivity in various work
environments.
Incredible traceability: Git's version control offers unparalleled traceability, allowing you
to track the history of changes, contributors, and decisions made throughout the
development lifecycle.
Merlin preferences
Within Merlin, user preferences are meticulously designed to enhance the development
experience. These preferences are tailored to integrate with ARCAD's tools, ensuring a
cohesive workflow.
1. Builder:
– Port: 5252: This preference configures the communication port for Builder, ensuring interaction between components.
2. IBM i Developer:
– Build settings: The preferences for IBM i Developer encompass a range of build
settings. These include options related to Build on Build (BOB).
– Formatting options are available, allowing developers to tailor their development
environment to their coding style and preferences.
3. Customizable color scheme: One of the user-friendly preferences provided by Merlin is the
ability to modify the color scheme. This customization feature empowers developers to
create a coding environment that is visually pleasing and conducive to their individual
needs.
4. Provide the SSH URL of the Git repository you intend to clone, establishing the
connection between Merlin and the Git repository.
5. Upon the successful completion of the clone process, your source code becomes visible
and accessible within the Merlin workspace. This integration streamlines version control
and source code management, enhancing your development workflow.
6. To begin the process of creating a branch, locate the “Feature/xxxx” section, where the mapping between Git and ARCAD, labeled as “awrkvertyp,” is defined, as shown in Figure A-24.
7. To create the branch, press F1 and select “git create branch”, as shown in Figure A-25.
8. Further, in the bottom left corner, click on “master” to proceed with the branch creation process. Figure A-26 is displayed.
9. After making changes to your local repository, use the push command to upload your
committed changes to the remote Git repository. This synchronizes the changes you
made on your local machine with the online repository, ensuring that other team members
can access your updates.
10. As a result of pushing your changes, the remote Git repository is updated with the latest
changes you committed. This allows other team members to access and work with the
most recent version of the codebase.
11. A webhook is a mechanism that allows real-time communication between different
systems. In the context of Git and ARCAD Builder, you can set up a webhook to notify
Builder about certain events in the remote repository. This integration is enabled by
copying the GitHub webhook from Builder's webhook processing tool, known as “smee.”
To use this feature, ensure that webhook processing is activated in Builder, and use the provided webhook link (for example, https://smee.io/IzfhozWff1rfGlOt) to establish the connection, as shown in Figure A-27.
12. Upon using the check version command, an automatic commit is generated in your local repository. By pulling into your local repository, you retrieve the latest commits from the remote repository. This process ensures that you are always working with the most up-to-date code and information, promoting collaboration and reducing potential conflicts.
13. To incorporate the necessary IBM i views, right-click CHE, as illustrated in Figure A-28. The following views are available:
Library lists: Easily configure library lists, allowing you to access the necessary libraries
and resources for your projects without the hassle of manual setup.
Object libraries: Access and organize object libraries efficiently, simplifying the
management of your IBM i resources.
My queries: Utilize built-in query functionalities to retrieve specific information from your
IBM i system, enhancing your ability to gather relevant data for your projects.
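The Merlin user interface drives standard Git operations behind the scenes. The following minimal command-line sketch summarizes the equivalent Git workflow for steps 6 through 12; the branch name, commit message, and the smee target URL and port are assumptions that depend on your environment:

   # Steps 6-8: create the feature branch from master
   git checkout master
   git checkout -b feature/xxxx

   # Steps 9-10: commit local changes and push them to the remote repository
   git add .
   git commit -m "Apply changes for feature/xxxx"
   git push origin feature/xxxx

   # Step 12: pull to retrieve the latest commits from the remote repository
   git pull origin feature/xxxx

   # Step 11: forward GitHub webhook events to Builder through the smee channel
   # (the target URL is an assumption; use the values configured in Builder)
   npx smee-client --url https://smee.io/IzfhozWff1rfGlOt --target http://localhost:5252/webhook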
ARCAD view
The ARCAD view is a user interface provided by the ARCAD software suite that offers a consolidated, organized view of various aspects of the software development lifecycle, from which you can efficiently manage your development tasks and processes:
Sites: Gain a comprehensive overview of your different development environments or
locations, allowing you to organize and navigate your projects effectively.
Builds: Track the progress of builds, ensuring a clear understanding of the current status of
your development efforts.
Versions: Easily access and manage different versions of your projects. Each version is
linked to a specific branch, providing a way to navigate between various stages of
development.
Prompting
Prompting in the context of Merlin refers to the interactive assistance provided to developers
during various stages of application development. It offers guidance and suggestions as
developers write code, aiding in the creation of accurate and efficient programs. Prompting
enhances the development experience by reducing errors, improving consistency, and
increasing productivity.
Changed source
Changed source in the Merlin platform is tracked through a “Modified” flag, which indicates
that alterations have been made to the code. These changes are managed within the
platform's source control system.
Git compare
Merlin offers a Git compare feature that allows developers to efficiently analyze differences
between various versions of source code. This tool enhances collaboration by providing an
intuitive visual representation of changes, aiding in code review, error identification, and
maintaining code quality throughout the development lifecycle.
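Outside the Merlin editor, the same kind of comparison can be sketched with standard Git commands (the file and branch names below are hypothetical):

   # Compare the working copy of a source member with the last committed version
   git diff CUSTMAINT.RPGLE

   # Compare two branches to review differences before a merge
   git diff master..feature/custmaint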
Git process
As part of the Git process within Merlin, developers can conveniently stage changes and
commit them locally. This ensures that modifications are organized and tracked effectively
before they are pushed to the shared repository, contributing to a structured and controlled
development workflow.
Tokenization
Tokenization in the context of Merlin refers to the process of categorizing and highlighting
different elements in your code using appropriate colors. This visual differentiation helps you
quickly identify and distinguish between various components within your codebase, leading to
improved readability and ease of understanding.
Code Formatting
Merlin's code formatting keeps code consistent and organized for readability. You can customize formatting preferences so that automatic formatting aligns with your coding style. Right-click the selected code and choose “Reformat” to instantly apply the chosen rules. This maintains uniformity and enhances comprehension.
Refactoring
Merlin's refactoring support enhances code structure and readability without sacrificing functionality. When you rename a symbol, both the code and the model are updated for consistency. Press “Shift+Enter” to preview the changes before committing them; the model updates automatically to reflect your modifications. This approach ensures accurate and efficient code improvement.
Content Assist
Content Assist in Merlin provides intelligent suggestions as you code; press “Ctrl+Space” to invoke it. It works for both the language and the model, enhancing accuracy and speed. The live problem view identifies and highlights issues, allows direct navigation to them, and updates automatically as you fix them.
SQL
SQL in Merlin brings advanced features for efficient database interaction and management.
Tokenization: Clearly divides SQL into meaningful elements for easy comprehension and
editing.
Formatting: Ensures consistent and readable SQL code by automatically applying
formatting rules.
Code collapse: Organizes SQL blocks, making it simpler to navigate and focus on relevant
sections.
Embedded SQL: Integrate SQL statements within host languages, enhancing database
interaction within application code.
Furthermore, there are specific cases in which operation codes cannot be converted:
TIME: If the result field length is equal to 14 characters.
SCAN, CHECK, CHECKR: When the result field is an array.
BITON, BITOFF: When factor 2 is a named constant.
POST: When the result field (data structure name) is used.
MOVE, MOVEL: When the factor 2/result is a varying-length field.
MOVEA: When the field is defined as a CONST parameter.
KLIST, KFLD: When located or used in a COPY clause.
CALL, PARM: For CALL operations, the indicator “LR” (positions 75-76) and the CALL
Pgm(idx) syntax are not converted.
GOTO, TAG: GOTO within a sub-procedure and TAG in the “Main” program, or when
GOTO and TAG do not comply with structured programming. Also, when TAG is used by
WHENEVER GOTO (SQL).
Table A-4 Supported versions of IBM i and OpenShift Container Platform for Merlin installation

IBM i     OpenShift Container Platform
7.3       4.8
7.4       4.9
7.5       4.10
Table A-5 Resource requirements for the Merlin tools

Name                   CPU request   CPU limit   Memory request   Memory limit   Note
IBM i Developer Tool   0.5 (a)       2.7 (b)     1.5G (a)         3G (b)         The resource is per each instance (c)
IBM i CI/CD            0.5 (a)       1 (b)       1G (a)           2G (b)         The resource is per each instance (c)

a. Request signifies the minimum required amount.
b. Limit signifies the maximum anticipated utilization.
c. When an administrator installs either IBM i Developer or IBM i CI/CD Tools within an OpenShift project, it corresponds to one instance.
IBM i requirements
Prerequisites for the IBM i environment encompass the following:
IBM i 7.3 or a later release, with the latest HTTP PTF Group applied.
Rational Development Studio (5770-WDS), which is required for the compilers that convert source code into object code.
Entitlement
Clients can acquire Merlin through IBM Passport Advantage, enabling a smooth and efficient
acquisition process. Upon purchase, users authenticate via their IBM ID on Passport
Advantage. Subsequently, a designated entitlement key associated with the acquired product
is activated within the IBM Marketplace. This entitlement key, coupled with an active paid
entitlement, grants customers access to the container images available within the Entitled
Registry.
Price: Merlin follows a “per-developer” pricing model, aligning with its deployment within
the Red Hat OpenShift Container Platform (OCP). Utilizing the inherent license monitoring
mechanism of OCP, Merlin employs the VPC (Virtual Processor Core) framework. To
secure Merlin entitlement, customers can place an order for 1 VPC unit per developer,
resulting in the creation of an individual CodeReady workspace for each developer. This
offering is available at a rate of $4500.00 per VPC.
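For example, under this model a team of ten developers would need ten VPC entitlements, which at the rate quoted above corresponds to 10 x $4,500.00 = $45,000.00; confirm current pricing terms with IBM or your Business Partner.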
Tip: For further information about installing Merlin, see Installing Merlin.
Related publications
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide additional information about the topic in this
document. Note that some publications referenced in this list might be available in softcopy
only.
Introduction to IBM PowerVM, SG24-8535
Deploying SAP Software in Red Hat OpenShift on IBM Power Systems, REDP-5619
IBM Power Systems Cloud Security Guide: Protect IT Infrastructure In All Layers,
REDP-5659
Oracle on IBM Power Systems, SG24-8485
You can search for, view, download or order these documents and other Redbooks,
Redpapers, Web Docs, draft and additional materials, at the following website:
ibm.com/redbooks