Red Hat OpenShift DO280 Student Guide
The contents of this course and all its modules and related materials, including handouts to audience members, are ©
2024 Red Hat, Inc.
No part of this publication may be stored in a retrieval system, transmitted or reproduced in any way, including, but
not limited to, photocopy, photograph, magnetic, electronic or other record, without the prior written permission of
Red Hat, Inc.
This instructional program, including all material provided herein, is supplied without any guarantees from Red Hat,
Inc. Red Hat, Inc. assumes no liability for damages or legal action arising from the use or misuse of contents or details
contained herein.
If you believe Red Hat training materials are being used, copied, or otherwise improperly distributed, please send
email to [email protected] or phone toll-free (USA) +1 (866) 626-2994 or +1 (919)
754-3700.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, JBoss, OpenShift, Fedora, Hibernate, Ansible, RHCA, RHCE,
RHCSA, Ceph, and Gluster are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United
States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS® is a registered trademark of Hewlett Packard Enterprise Development LP or its subsidiaries in the United
States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is a trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open
source or commercial project.
The OpenStack word mark and the Square O Design, together or apart, are trademarks or registered trademarks
of OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's
permission. Red Hat, Inc. is not affiliated with, endorsed by, or sponsored by the OpenStack Foundation or the
OpenStack community.
Introduction  xi
Red Hat OpenShift Administration II: Configuring a Production Cluster  xi
Orientation to the Classroom Environment  xii
Performing Lab Exercises  xx
Lab: Enable Developer Self-Service  274
Summary  283
7. Manage Kubernetes Operators  285
Kubernetes Operators and the Operator Lifecycle Manager  286
Quiz: Kubernetes Operators and the Operator Lifecycle Manager  290
Install Operators with the Web Console  292
Guided Exercise: Install Operators with the Web Console  297
Install Operators with the CLI  308
Guided Exercise: Install Operators with the CLI  315
Lab: Manage Kubernetes Operators  324
Summary  332
8. Application Security  333
Control Application Permissions with Security Context Constraints  334
Guided Exercise: Control Application Permissions with Security Context Constraints  337
Allow Application Access to Kubernetes APIs  341
Guided Exercise: Allow Application Access to Kubernetes APIs  346
Cluster and Node Maintenance with Kubernetes Cron Jobs  351
Guided Exercise: Cluster and Node Maintenance with Kubernetes Cron Jobs  358
Lab: Application Security  366
Summary  374
9. OpenShift Updates  375
The Cluster Update Process  376
Quiz: The Cluster Update Process  387
Detect Deprecated Kubernetes API Usage  389
Quiz: Detect Deprecated Kubernetes API Usage  396
Update Operators with the OLM  398
Quiz: Update Operators with the OLM  403
Quiz: OpenShift Updates  405
Summary  409
10. Comprehensive Review  411
Comprehensive Review  412
Lab: Cluster Self-service Setup  414
Lab: Secure Applications  431
Lab: Deploy Packaged Applications  446
Document Conventions
This section describes various conventions and practices that are used
throughout all Red Hat Training courses.
Admonitions
Red Hat Training courses use the following admonitions:
References
These describe where to find external documentation that is relevant to
a subject.
Note
Notes are tips, shortcuts, or alternative approaches to the task at hand.
Ignoring a note should have no negative consequences, but you might
miss out on something that makes your life easier.
Important
Important sections provide details of information that is easily missed:
configuration changes that apply only to the current session, or
services that need restarting before an update applies. Ignoring these
admonitions will not cause data loss, but might cause irritation and
frustration.
Warning
Do not ignore warnings. Ignoring these admonitions will most likely
cause data loss.
Inclusive Language
Red Hat Training is currently reviewing its use of language in various areas to help remove any
potentially offensive terms. This is an ongoing process and requires alignment with the products
and services that are covered in Red Hat Training courses. Red Hat appreciates your patience
during this process.
Introduction
Course Objectives
Audience
Prerequisites
A Red Hat OpenShift Container Platform (RHOCP) 4.14 single-node (SNO) bare metal User
Provisioned Infrastructure (UPI) installation is used in this classroom. Infrastructure systems for
the RHOCP cluster are in the ocp4.example.com DNS domain.
All student computer systems have a standard user account, student, with student as the
password. The root password on all student systems is redhat.
Classroom Machines
The primary function of bastion is to act as a router between the network that connects the
student machines and the classroom network. If bastion is down, then other student machines
do not function properly, or might even hang during boot.
The utility system acts as a router between the network that connects the RHOCP cluster
machines and the student network. If utility is down, then the RHOCP cluster does not
function properly, or might even hang during boot.
For some exercises, the classroom contains an isolated network. Only the utility system and
the cluster are connected to this network.
Several systems in the classroom provide supporting services. The classroom server hosts
software and lab materials for the hands-on activities. The registry server is a private Red Hat
Quay container registry that hosts the container images for the hands-on activities. Information
about how to use these servers is provided in the instructions for those activities.
The master01 system serves as the control plane and compute node for the RHOCP cluster.
The cluster uses the registry system as its own private container image registry and GitLab
server. The idm system provides LDAP services to the RHOCP cluster for authentication and
authorization support.
Students use the workstation machine to access a dedicated RHOCP cluster, for which they
have cluster administrator privileges.
The cluster API is available at https://api.ocp4.example.com:6443.
The RHOCP cluster has a standard user account, developer, which has the developer
password. The administrative account, admin, has the redhatocp password.
Classroom Registry
The DO280 course uses a private Red Hat Quay container image registry that is accessible only
within the classroom environment. The container image registry hosts the container images that
students use in the hands-on activities. By using a private container image registry, the classroom
environment is self-contained to not require internet access.
The following list maps the public container image repositories that are used in this course to
their classroom repositories.
Public repository -> Classroom repository
quay.io/jkube/jkube-java-binary-s2i:0.0.9 -> registry.ocp4.example.com:8443/jkube/jkube-java-binary-s2i:0.0.9
quay.io/openshift/origin-cli:4.12 -> registry.ocp4.example.com:8443/openshift/origin-cli:4.12
quay.io/redhattraining/books:v1.4 -> registry.ocp4.example.com:8443/redhattraining/books:v1.4
quay.io/redhattraining/builds-for-managers -> registry.ocp4.example.com:8443/redhattraining/builds-for-managers
quay.io/redhattraining/do280-beeper-api:1.0 -> registry.ocp4.example.com:8443/redhattraining/do280-beeper-api:1.0
quay.io/redhattraining/do280-payroll-api:1.0 -> registry.ocp4.example.com:8443/redhattraining/do280-payroll-api:1.0
quay.io/redhattraining/do280-product:1.0 -> registry.ocp4.example.com:8443/redhattraining/do280-product:1.0
quay.io/redhattraining/do280-product-stock:1.0 -> registry.ocp4.example.com:8443/redhattraining/do280-product-stock:1.0
quay.io/redhattraining/do280-project-cleaner:v1.0 -> registry.ocp4.example.com:8443/redhattraining/do280-project-cleaner:v1.0
quay.io/redhattraining/do280-project-cleaner:v1.1 -> registry.ocp4.example.com:8443/redhattraining/do280-project-cleaner:v1.1
quay.io/redhattraining/do280-show-config-app:1.0 -> registry.ocp4.example.com:8443/redhattraining/do280-show-config-app:1.0
quay.io/redhattraining/do280-stakater-reloader:v0.0.125 -> registry.ocp4.example.com:8443/redhattraining/do280-stakater-reloader:v0.0.125
quay.io/redhattraining/exoplanets:v1.0 -> registry.ocp4.example.com:8443/redhattraining/exoplanets:v1.0
quay.io/redhattraining/famous-quotes:2.1 -> registry.ocp4.example.com:8443/redhattraining/famous-quotes:2.1
quay.io/redhattraining/famous-quotes:latest -> registry.ocp4.example.com:8443/redhattraining/famous-quotes:latest
quay.io/redhattraining/gitlab-ce:8.4.3-ce.0 -> registry.ocp4.example.com:8443/redhattraining/gitlab-ce:8.4.3-ce.0
quay.io/redhattraining/hello-world-nginx:latest -> registry.ocp4.example.com:8443/redhattraining/hello-world-nginx:latest
quay.io/redhattraining/hello-world-nginx:v1.0 -> registry.ocp4.example.com:8443/redhattraining/hello-world-nginx:v1.0
quay.io/redhattraining/loadtest:v1.0 -> registry.ocp4.example.com:8443/redhattraining/loadtest:v1.0
quay.io/redhattraining/php-hello-dockerfile -> registry.ocp4.example.com:8443/redhattraining/php-hello-dockerfile
quay.io/redhattraining/php-ssl:v1.0 -> registry.ocp4.example.com:8443/redhattraining/php-ssl:v1.0
quay.io/redhattraining/php-ssl:v1.1 -> registry.ocp4.example.com:8443/redhattraining/php-ssl:v1.1
quay.io/redhattraining/scaling:v1.0 -> registry.ocp4.example.com:8443/redhattraining/scaling:v1.0
quay.io/redhattraining/todo-angular:v1.1 -> registry.ocp4.example.com:8443/redhattraining/todo-angular:v1.1
quay.io/redhattraining/todo-angular:v1.2 -> registry.ocp4.example.com:8443/redhattraining/todo-angular:v1.2
quay.io/redhattraining/todo-backend:release-46 -> registry.ocp4.example.com:8443/redhattraining/todo-backend:release-46
quay.io/redhattraining/do280-roster:v1 -> registry.ocp4.example.com:8443/redhattraining/do280-roster:v1
quay.io/redhattraining/do280-roster:v2 -> registry.ocp4.example.com:8443/redhattraining/do280-roster:v2
quay.io/redhattraining/wordpress:5.7-php7.4-apache -> registry.ocp4.example.com:8443/redhattraining/wordpress:5.7-php7.4-apache
registry.access.redhat.com/rhscl/httpd-24-rhel7:latest -> registry.ocp4.example.com:8443/rhscl/httpd-24-rhel7:latest
registry.access.redhat.com/rhscl/mysql-57-rhel7:latest -> registry.ocp4.example.com:8443/rhscl/mysql-57-rhel7:latest
registry.access.redhat.com/rhscl/nginx-18-rhel7:latest -> registry.ocp4.example.com:8443/rhscl/nginx-18-rhel7:latest
registry.access.redhat.com/rhscl/nodejs-6-rhel7:latest -> registry.ocp4.example.com:8443/rhscl/nodejs-6-rhel7:latest
registry.access.redhat.com/rhscl/php-72-rhel7:latest -> registry.ocp4.example.com:8443/rhscl/php-72-rhel7:latest
registry.access.redhat.com/ubi7/nginx-118:latest -> registry.ocp4.example.com:8443/ubi7/nginx-118:latest
registry.access.redhat.com/ubi8/httpd-24:latest -> registry.ocp4.example.com:8443/ubi8/httpd-24:latest
registry.access.redhat.com/ubi8:latest -> registry.ocp4.example.com:8443/ubi8:latest
registry.access.redhat.com/ubi8/nginx-118:latest -> registry.ocp4.example.com:8443/ubi8/nginx-118:latest
registry.access.redhat.com/ubi8/nodejs-10:latest -> registry.ocp4.example.com:8443/ubi8/nodejs-10:latest
registry.access.redhat.com/ubi8/nodejs-16:latest -> registry.ocp4.example.com:8443/ubi8/nodejs-16:latest
registry.access.redhat.com/ubi8/php-72:latest -> registry.ocp4.example.com:8443/ubi8/php-72:latest
registry.access.redhat.com/ubi8/php-73:latest -> registry.ocp4.example.com:8443/ubi8/php-73:latest
registry.access.redhat.com/ubi8/ubi:8.0 -> registry.ocp4.example.com:8443/ubi8/ubi:8.0
registry.access.redhat.com/ubi8/ubi:8.4 -> registry.ocp4.example.com:8443/ubi8/ubi:8.4
registry.access.redhat.com/ubi8/ubi:latest -> registry.ocp4.example.com:8443/ubi8/ubi:latest
registry.access.redhat.com/ubi9/httpd-24:latest -> registry.ocp4.example.com:8443/ubi9/httpd-24:latest
registry.access.redhat.com/ubi9/nginx-120:latest -> registry.ocp4.example.com:8443/ubi9/nginx-120:latest
registry.access.redhat.com/ubi9/ubi:latest -> registry.ocp4.example.com:8443/ubi9/ubi:latest
registry.redhat.io/redhat-openjdk-18/openjdk18-openshift:1.8 -> registry.ocp4.example.com:8443/redhat-openjdk-18/openjdk18-openshift:1.8
registry.redhat.io/redhat-openjdk-18/openjdk18-openshift:latest -> registry.ocp4.example.com:8443/redhat-openjdk-18/openjdk18-openshift:latest
registry.redhat.io/rhel8/mysql-80:1-211.1664898586 -> registry.ocp4.example.com:8443/rhel8/mysql-80:1-211.1664898586
registry.redhat.io/rhel8/mysql-80:latest -> registry.ocp4.example.com:8443/rhel8/mysql-80:latest
registry.redhat.io/rhel8/postgresql-13:1-7 -> registry.ocp4.example.com:8443/rhel8/postgresql-13:1-7
registry.redhat.io/rhel8/postgresql-13:latest -> registry.ocp4.example.com:8443/rhel8/postgresql-13:latest
registry.redhat.io/ubi8/ubi:8.6-943 -> registry.ocp4.example.com:8443/ubi8/ubi:8.6-943
Machine States
active: The virtual machine is running and available. If the virtual machine just started, then it might still be starting services.
stopped: The virtual machine is shut down. On starting, the virtual machine boots into the same state that it was in before shutdown. The disk state is preserved.
Classroom Actions
CREATE: Create the ROLE classroom. Creates and starts all the necessary virtual machines for this classroom.
CREATING: The ROLE classroom virtual machines are being created. Creation can take several minutes to complete.
DELETE: Delete the ROLE classroom. Deletes all virtual machines in the classroom. All saved work on those systems' disks is lost.
Machine Actions
OPEN CONSOLE: Connect to the system console of the virtual machine in a new browser tab. You can log in directly to the virtual machine and run commands, when required. Normally, log in to the workstation virtual machine only, and from there, use ssh to connect to the other virtual machines.
ACTION > Shutdown: Gracefully shut down the virtual machine, and preserve disk contents.
ACTION > Power Off: Forcefully shut down the virtual machine, and still preserve disk contents. This action is equivalent to removing the power from a physical machine.
ACTION > Reset: Forcefully shut down the virtual machine and reset the associated storage to its initial state. All saved work on that system's disks is lost.
At the start of an exercise, if you are instructed to reset a single virtual machine node, then click
ACTION > Reset for that specific virtual machine only.
At the start of an exercise, if you are instructed to reset all virtual machines, then click ACTION >
Reset on every virtual machine in the list.
To return the classroom environment to its original state at the start of the course, you can click
DELETE to remove the entire classroom environment. After the lab is deleted, click CREATE to
provision a new set of classroom systems.
Warning
The DELETE operation cannot be undone. All completed work in the classroom
environment is lost.
To adjust the timers, locate the two + buttons at the bottom of the course management page.
Click the auto-stop + button to add another hour to the auto-stop timer. Click the auto-destroy +
button to add another day to the auto-destroy timer. Auto-stop has a maximum of 11 hours,
and auto-destroy has a maximum of 14 days. Be careful to keep the timers set while you are
working, so that your environment is not unexpectedly shut down. Be careful not to set the timers
unnecessarily high, which could waste your subscription time allotment.
• A guided exercise is a hands-on practice exercise that follows a presentation section. It walks
you through a procedure to perform, step by step.
• A quiz is typically used when checking knowledge-based learning, or when a hands-on activity is
impractical for some other reason.
• An end-of-chapter lab is a gradable hands-on activity to help you to check your learning. You
work through a set of high-level steps, based on the guided exercises in that chapter, but the
steps do not walk you through every command. A solution is provided with a step-by-step walk-
through.
• A comprehensive review lab is used at the end of the course. It is also a gradable hands-on
activity, and might cover content from the entire course. You work through a specification of
what to do in the activity, without receiving the specific steps to do so. Again, a solution is
provided with a step-by-step walk-through that meets the specification.
To prepare your lab environment at the start of each hands-on activity, run the lab start
command with a specified activity name from the activity's instructions. Likewise, at the end of
each hands-on activity, run the lab finish command with that same activity name to clean up
after the activity. Each hands-on activity has a unique name within a course.
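For example, the command sequence for a gradable activity named updates-rollout, an activity name that appears later in this introduction, looks like the following sketch (the grade action applies only to gradable activities):

[student@workstation ~]$ lab start updates-rollout
[student@workstation ~]$ lab grade updates-rollout
[student@workstation ~]$ lab finish updates-rollout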
The action is a choice of start, grade, or finish. All exercises support start and finish.
Only end-of-chapter labs and comprehensive review labs support grade.
start
The start action verifies the required resources to begin an exercise. It might include
configuring settings, creating resources, confirming prerequisite services, and verifying
necessary outcomes from previous exercises. You can perform an exercise at any time, even
without performing preceding exercises.
grade
For gradable activities, the grade action directs the lab command to evaluate your work, and
shows a list of grading criteria with a PASS or FAIL status for each. To achieve a PASS status
for all criteria, fix the failures and rerun the grade action.
finish
The finish action cleans up resources that were configured during the exercise. You can
perform an exercise as many times as you want.
The lab command supports tab completion. For example, to list all exercises that you can start,
enter lab start and then press the Tab key twice.
The lab script copies the necessary files for each course activity to the workspace directory.
For example, the lab start updates-rollout command does the following tasks:
• /tmp/log/labs: This directory contains log files. The lab script creates a unique log file for
each activity. For example, the log file for the lab start updates-rollout command is
/tmp/log/labs/updates-rollout.
The lab start commands usually verify whether the Red Hat OpenShift Container Platform
(RHOCP) cluster is ready and reachable. If you run the lab start command right after creating
the classroom environment, then you might get errors when the command verifies the cluster API
or the credentials. These errors occur because the RHOCP cluster might take up to 15 minutes
to become available. A convenient solution is to run the lab finish command to clean up the
scenario, wait a few minutes, and then rerun the lab start command.
Important
In this course, the lab start scripts normally create a specific RHOCP project
for each exercise. The lab finish scripts remove the exercise-specific RHOCP
project.
If you are retrying an exercise, then you might need to wait before running the lab
start command again. The project removal process might take up to 10 minutes to
be fully effective.
Chapter 1
Declarative Resource Management

Goal: Deploy and update applications from resource manifests that are parameterized for
different target environments.
Resource Manifests
Objectives
• Deploy and update applications from resource manifests that are stored as YAML files.
An application in a Kubernetes cluster often consists of multiple resources that work together.
Each resource has a definition and a configuration. Many of the resource configurations share
common attributes that must match to operate correctly. Imperative commands configure each
resource, one at a time. However, using imperative commands has some issues:
• Impaired reproducibility
• Lacking version control
• Lacking support for GitOps
Rather than imperative commands, declarative commands that use resource manifests are the
preferred way to manage resources. A resource manifest is a file, in JSON or YAML
format, with resource definition and configuration information. Resource manifests simplify the
management of Kubernetes resources, by encapsulating all the attributes of an application in a file
or a set of related files. Kubernetes uses declarative commands to read the resource manifests and
to apply changes to the cluster to meet the state that the resource manifest defines.
The resource manifests are in YAML or JSON format, and thus can be version-controlled. Version
control of resource manifests enables tracing of configuration changes. As such, adverse changes
can be rolled back to an earlier version to support recoverability.
Resource manifests ensure that applications can be precisely reproduced, typically with a single
command to deploy many resources. The reproducibility from resource manifests supports the
automation of the GitOps practices of continuous integration and continuous delivery (CI/CD).
Given a new or updated resource manifest, Kubernetes provides commands that compare the
intended state that is specified in the resource manifest to the current state of the resource.
These commands then apply transformations to the current state to match the intended state.
Imperative Workflow
An imperative workflow is useful for developing and testing. The following example uses the
kubectl create deployment imperative command to create a deployment for a MySQL
database.
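The following is a minimal sketch of such a command, assuming the classroom copy of the MySQL 8.0 image; the exact image and option values in the course might differ:

[user@host ~]$ kubectl create deployment db-pod --port 3306 \
  --image registry.ocp4.example.com:8443/rhel8/mysql-80:latest
deployment.apps/db-pod created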
In addition to using verbs that reflect the action of the command, imperative commands use
options to provide the details. The example command uses the --port and the --image options
to provide the required details to create the deployment.
The use of imperative commands affects applying changes to live resources. For example, the
pod from the previous deployment would fail to start due to missing environment variables. The
following kubectl set env deployment imperative command resolves the problem by adding
the required environment variables to the deployment:
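A sketch of such a command, assuming the environment variable names that the MySQL image expects; the exact names and values are assumptions:

[user@host ~]$ kubectl set env deployment/db-pod \
  MYSQL_USER=user1 MYSQL_PASSWORD=mypa55w0rd MYSQL_DATABASE=items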
Executing this kubectl set env deployment command changes the deployment resource
named db-pod, and provides the extra needed variables to start the container. A developer
can continue building out the application, by using imperative commands to add components,
such as services, routes, volume mounts, and persistent volume claims. With the addition of each
component, the developer can run tests to ensure that the component correctly executes the
intended function.
Imperative commands are useful for developing and experimenting. With imperative commands,
a developer can build up an application one component at a time. When a component is added,
the Kubernetes cluster provides error messages that are specific to the component. The process is
analogous to using a debugger to step through code execution one line at a time. Using imperative
commands usually provides clearer error messages, because an error occurs after adding a
specific component.
However, long command lines and a fragmented application deployment are not ideal for
deploying an application in production. With imperative commands, changes are a sequence of
commands that must be maintained to reflect the intended state of the resources. The sequence
of commands must be tracked and kept up to date.
Although manifest files can also use the JSON syntax, YAML is generally preferred and is more
popular. To continue the debugging analogy, debugging an application that is deployed from
manifests is similar to trying to debug a full, completed running application. It can take more effort
to find the source of the error, especially when the error is not a result of manifest errors.
• Use imperative commands with the --dry-run=client option to generate manifests that
correspond to the imperative command.
The kubectl explain command provides the details for any field in the manifest. For example,
use the kubectl explain deployment.spec.template.spec command to view field
descriptions that specify a pod object within a deployment manifest.
To create a starter deployment manifest, use the kubectl create deployment command to
generate a manifest by using the --dry-run=client option:
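A sketch of such a command, which produces a manifest similar to the one that follows (the output file name is illustrative):

[user@host ~]$ kubectl create deployment hello-openshift \
  --image quay.io/redhattraining/hello-world-nginx:v1.0 \
  --dry-run=client -o yaml > deployment.yaml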
The --dry-run=client option prevents the command from creating resources in the
cluster.
The following example shows a minimal deployment manifest file, not production-ready, for the
hello-openshift deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
...output omitted...
creationTimestamp: null
labels:
app: hello-openshift
name: hello-openshift
spec:
replicas: 1
selector:
matchLabels:
app: hello-openshift
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: hello-openshift
spec:
containers:
- image: quay.io/redhattraining/hello-world-nginx:v1.0
name: hello-world-nginx
resources: {}
status: {}
When using imperative commands to create manifests, the resulting manifests might contain fields
that are not necessary for creating a resource. For example, the following manifest removes the
empty and null fields. Removing unnecessary fields can significantly
reduce the length of the manifests, and in turn reduce the overhead to work with them.
Additionally, you might need to further customize the manifests. For example, in a deployment,
you might customize the number of replicas, or declare the ports that the deployment provides.
The following notes explain the additional changes:
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: resource-manifests
labels:
app: hello-openshift
name: hello-openshift
spec:
replicas: 2
selector:
matchLabels:
app: hello-openshift
template:
metadata:
labels:
app: hello-openshift
spec:
containers:
- image: quay.io/redhattraining/hello-world-nginx:v1.0
name: hello-world-nginx
ports:
- containerPort: 8080
protocol: TCP
You can create a manifest file for each resource that you manage. Alternatively, add each of the
manifests to a single multi-part YAML file, and use a --- line to separate the manifests.
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: resource-manifests
annotations:
...output omitted...
---
apiVersion: v1
kind: Service
metadata:
namespace: resource-manifests
labels:
app: hello-openshift
name: hello-openshift
spec:
...output omitted...
Using a single file with multiple manifests versus using manifests that are defined in multiple
manifest files is a matter of organizational preference. The single file approach has the advantage
of keeping together related manifests. With the single file approach, it can be more convenient to
change a resource that must be reflected across multiple manifests. In contrast, keeping manifests
in multiple files can be more convenient for sharing resource definitions with others.
After creating manifests, you can test them in a non-production environment, or proceed to
deploy the manifests. Validate the resource manifests before deploying applications in the
production environment.
Declarative Workflows
Declarative commands use a resource manifest instead of adding the details to many options
on the command line. To create a resource, use the kubectl create -f resource.yaml
command. Instead of a file name, you can pass a directory to the command to process all the
resource files in a directory. Add the --recursive=true or -R option to recursively process
resource files that are provided in multiple subdirectories.
The following example creates the resources from the manifests in the my-app directory. In this
example, the my-app directory contains the example-deployment.yaml and service/
example-service.yaml files from the previous examples.
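A sketch of the command, assuming the directory layout from the preceding description:

[user@host ~]$ kubectl create -R -f my-app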
Updating Resources
The kubectl apply command can also create resources with the same -f option that is
illustrated with the kubectl create command. However, the kubectl apply command can
also update a resource.
Updating resources is more complex than creating resources. The kubectl apply command
implements several techniques to apply the updates without causing issues.
The kubectl apply command writes the contents of the configuration file to the
kubectl.kubernetes.io/last-applied-configuration annotation. The kubectl
create command can also generate this annotation by using the --save-config option.
The kubectl apply command uses the last-applied-configuration annotation to
identify fields that are removed from the configuration file and that must be cleared from the live
configuration.
Although the kubectl create -f command can create resources from a manifest, the
command is imperative and thus does not account for the current state of a live resource.
Executing kubectl create -f against a manifest for a live resource gives an error. In contrast,
the kubectl apply -f command is declarative, and considers the difference between the
current resource state in the cluster and the intended resource state that is expressed in the
manifest.
For example, to update the container's image from version v1.0 to latest, first update the
YAML resource manifest to specify the new tag on the image. Then, use the kubectl apply
command to instruct Kubernetes to create a version of the deployment resource by using the
updated image version that is specified in the manifest.
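For example, assuming the manifest from the earlier examples is saved as example-deployment.yaml, a minimal sketch looks like the following:

[user@host ~]$ kubectl apply -f example-deployment.yaml
deployment.apps/hello-openshift configured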
YAML Validation
Before applying the changes to the resource, use the --dry-run=server and the --
validate=true flags to inspect the file for errors.
• The --dry-run=server option submits a server-side request without persisting the resource.
• The --validate=true option uses a schema to validate the input and fails the request if it is
invalid.
Any syntax errors in the YAML are included in the output. Most importantly, the --dry-
run=server option prevents applying any changes to the Kubernetes runtime.
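A sketch of a server-side dry run against the same manifest (the file and resource names are assumed from the previous example):

[user@host ~]$ kubectl apply -f example-deployment.yaml \
  --dry-run=server --validate=true
deployment.apps/hello-openshift configured (server dry-run)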
The output line that ends in (server dry-run) provides the action that the resource file
would perform if applied.
Note
The --dry-run=client option prints only the object that would be sent to the
server. The cluster resource controllers can refuse a manifest even if the syntax is
valid YAML. In contrast, the --dry-run=server option sends the request to the
server to confirm that the manifest conforms to current server policies, without
creating resources on the server.
Comparing Resources
Use the kubectl diff command to review differences between live objects and manifests.
When updating resource manifests, you can track differences in the changed files. However, many
manifest changes, when applied, do not change the state of the cluster resources. A text-based
diff tool would show all such differences, and result in a noisy output.
In contrast, using the kubectl diff command might be more convenient to preview changes.
The kubectl diff command emphasizes the significant changes for the Kubernetes cluster.
Review the differences to validate that manifest changes have the intended effect.
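A sketch of the command and of the kind of output that the following notes describe; the resource names and values are illustrative:

[user@host ~]$ kubectl diff -f example-deployment.yaml
...output omitted...
-  generation: 1
+  generation: 2
...output omitted...
-      - image: quay.io/redhattraining/hello-world-nginx:v1.0
+      - image: quay.io/redhattraining/hello-world-nginx:latest
...output omitted...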
The line that starts with the - character shows that the current deployment is on generation 1.
The following line, which starts with the + character, shows that the generation changes to 2
when the manifest file is applied.
The image line, which starts with the - character, shows that the current image uses the
v1.0 version. The following line, which starts with the + character, shows a version change to
latest when the manifest file is applied.
Kubernetes resource controllers automatically add annotations and attributes to the live resource
that make the output of other text-based diff tools misleading, by reporting many differences
that have no impact on the resource configuration. Extracting manifests from live resources
and making comparisons with tools such as the diff command reports many differences of no
value. Using the kubectl diff command confirms that a live resource matches a resource
configuration that a manifest provides. GitOps tools depend on the kubectl diff command to
determine whether anyone changed resources outside the GitOps workflow. Because the tools
themselves cannot know all details about how any controllers might change a resource, the tools
defer to the cluster to determine whether a change is meaningful.
Update Considerations
When using the oc diff command, recognize when applying a manifest change does not
generate new pods. For example, if an updated manifest changes only values in a secret or a
configuration map, then applying the updated manifest does not generate new pods that
use those values. Because pods read secret and configuration maps at startup, in this case
applying the updated manifest leaves the pods in a vulnerable state, with stale values that are not
synchronized with the updated secret or with the configuration map.
In deployments with a single replica, you can also resolve the problem by deleting the pod.
Kubernetes responds by automatically creating a pod to replace the deleted pod. However, for
multiple replicas, using the oc rollout command to restart the pods is preferred, because the
pods are stopped and replaced in a smart manner that minimizes downtime.
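A sketch of such a restart, assuming a deployment named hello-openshift:

[user@host ~]$ oc rollout restart deployment/hello-openshift
deployment.apps/hello-openshift restarted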
This course covers other resource management mechanisms that can automate or eliminate some
of these challenges.
Applying Changes
The kubectl create command attempts to create the specified resources in the manifest
file. Using the kubectl create command generates an error if the targeted resources are
already live in the cluster. In contrast, the kubectl apply command compares three sources to
determine how to process the request and to apply changes.
If the specified resource in the manifest file does not exist, then the kubectl apply command
creates the resource. If any fields in the last-applied-configuration annotation of the
live resource are not present in the manifest, then the command removes those fields from the
live configuration. After applying changes to the live resource, the kubectl apply command
updates the last-applied-configuration annotation of the live resource to account for the
change.
When creating a resource, the --save-config option of the kubectl create command
produces the required annotations for future kubectl apply commands to operate.
To patch an object from a snippet, use the oc patch command with the -p option and the
snippet. The following example updates the hello deployment to have a CPU resource request of
100m with a JSON snippet:
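A sketch of such a command, assuming that the container in the hello deployment is named hello; the exact snippet in the course might differ:

[user@host ~]$ oc patch deployment hello -p \
  '{"spec":{"template":{"spec":{"containers":[{"name":"hello","resources":{"requests":{"cpu":"100m"}}}]}}}'
deployment.apps/hello patched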
To patch an object from a patch file, use the oc patch command with the --patch-file
option and the location of the patch file. The following example updates the hello deployment to
include the content of the ~/volume-mount.yaml patch file:
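A sketch of the command:

[user@host ~]$ oc patch deployment hello --patch-file ~/volume-mount.yaml
deployment.apps/hello patched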
The contents of the patch file describe mounting a persistent volume claim as a volume:
spec:
template:
spec:
containers:
- name: hello
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html/
volumes:
- name: www
persistentVolumeClaim:
claimName: nginx-www
This patch results in the following manifest for the hello deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello
...output omitted...
spec:
...output omitted...
template:
...output omitted...
spec:
containers:
...output omitted...
name: server
...output omitted...
volumeMounts:
- mountPath: /usr/share/nginx/html/
name: www
- mountPath: /etc/nginx/conf.d/
name: tls-conf
...output omitted...
volumes:
- configMap:
defaultMode: 420
name: tls-conf
name: tls-conf
- persistentVolumeClaim:
claimName: nginx-www
name: www
...output omitted...
The patch applies to the hello deployment regardless of whether the www volume mount exists.
The oc patch command modifies existing fields in the object that are specified in the patch:
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello
...output omitted...
spec:
...output omitted...
template:
...output omitted...
spec:
containers:
...output omitted...
name: server
...output omitted...
volumeMounts:
- mountPath: /usr/share/nginx/www/
name: www
- mountPath: /etc/nginx/conf.d/
name: tls-conf
...output omitted...
volumes:
- configMap:
defaultMode: 420
name: tls-conf
name: tls-conf
- persistentVolumeClaim:
claimName: deprecated-www
name: www
...output omitted...
The www volume already exists. The patch replaces the existing data with the new data.
References
For more information, refer to the OpenShift CLI Developer Command Reference
section in the OpenShift CLI (oc) chapter in the Red Hat OpenShift Container
Platform 4.14 CLI Tools documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/cli_tools/index#cli-developer-commands
For more information, refer to the Using Deployment Strategies section in the
Deployments chapter in the Red Hat OpenShift Container Platform 4.14 Building
Applications documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/building_applications/index#deployment-strategies
For more information about the oc patch command, refer to the oc patch section
in the OpenShift CLI Developer Command Reference chapter in the Red Hat
OpenShift Container Platform 4.14 CLI Tools documentation at
https://access.redhat.com/documentation/en-us/
openshift_container_platform/4.14/html-single/cli_tools/index#oc-patch
Guided Exercise
Resource Manifests
Deploy and update an application from resource manifests that are stored as YAML files in
a Git server.
Outcomes
• Deploy applications from resource manifests that are stored as YAML files in a GitLab
repository.
Instructions
1. Log in to the OpenShift cluster and create the declarative-manifests project.
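A sketch of the commands, using the developer credentials and API URL that the classroom description provides:

[student@workstation ~]$ oc login -u developer -p developer https://api.ocp4.example.com:6443
...output omitted...
[student@workstation ~]$ oc new-project declarative-manifests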
...output omitted...
3.2. List the commits, branches, and tags on the Git repository.
4.1. Switch to the v1.0 branch, which contains the YAML manifests for the first version of
the application.
4.4. List the deployments and pods. The exoplanets pod can go into a temporary crash
loop backoff state if it attempts to access the database before it becomes available.
Wait for the pods to be ready. Press Ctrl+C to exit the watch command.
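A sketch of the command; the exact resource list in the course output might differ:

[student@workstation declarative-manifests]$ watch oc get deployments,pods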
4.6. Open the route URL in the web browser. The application version is v1.0.
http://exoplanets-declarative-manifests.apps.ocp4.example.com/
The new version changes the image that is deployed to the cluster. Because
the change is in the deployment, the new manifest produces new pods for the
application.
5.4. List the deployments and pods. Wait for the application pod to be ready. Press
Ctrl+C to exit the watch command.
5.6. Open the route URL in the web browser. The application version is v1.1.0.
http://exoplanets-declarative-manifests.apps.ocp4.example.com/
6.2. View the differences between the currently deployed version of the application and
the updated resource manifests.
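A sketch of the command, assuming the manifests are in the current directory of the cloned repository:

[student@workstation declarative-manifests]$ oc diff -f .
...output omitted...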
Although the secret is updated, the deployed application pods are not changed.
These non-updated pods are a problem, because the pods load secrets and
configuration maps at startup. Currently, the pods have stale values from the previous
configuration, and therefore could crash.
7. Force the exoplanets application to restart, to flush out any stale configuration data.
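One possible way to do this, following the rollout approach that this chapter describes (a sketch; the exercise's own sub-steps might use a different method):

[student@workstation declarative-manifests]$ oc rollout restart deployment/exoplanets
deployment.apps/exoplanets restarted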
7.4. List the pods. The exoplanets pod can go into a temporary crash loop backoff
state if it attempts to access the database before it becomes available. Wait for the
application pod to be ready. Press Ctrl+C to exit the watch command.
7.5. Use the oc get deployment command with the -o yaml option to view the
last-applied-configuration annotation.
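A sketch of the command:

[student@workstation declarative-manifests]$ oc get deployment exoplanets -o yaml
...output omitted...
    kubectl.kubernetes.io/last-applied-configuration: |
...output omitted...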
7.6. Open the route URL in the web browser. The application version is v1.1.1.
http://exoplanets-declarative-manifests.apps.ocp4.example.com/
[student@workstation declarative-manifests]$ cd
[student@workstation ~]$
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
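A sketch of the command, assuming the activity name matches the project name; the exact name comes from the activity's instructions:

[student@workstation ~]$ lab finish declarative-manifests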
Kustomize Overlays
Objectives
• Deploy and update applications from resource manifests that are augmented by Kustomize.
Kustomize
When using Kubernetes, multiple teams use multiple environments, such as development, staging,
testing, and production, to deploy applications. These environments run the same applications
with minor configuration differences.
Many organizations deploy a single application to multiple data centers for multiple teams and
regions. Depending on the load, the organization needs a different number of replicas for every
region. The organization might need various configurations that are specific to a data center or
team.
All these use cases require a single set of manifests with multiple customizations at multiple levels.
Kustomize can support such use cases.
Base
A base directory contains a kustomization.yaml file. The kustomization.yaml file has a
resources field that lists all the resource files to include. As the name implies, all resources in the base directory
are a common resource set. You can create a base application by composing all common resources
from the base directory.
base
├── configmap.yaml
├── deployment.yaml
├── secret.yaml
├── service.yaml
├── route.yaml
└── kustomization.yaml
The base directory has YAML files to create configuration map, deployment, service, secret, and
route resources. The base directory also has a kustomization.yaml file, such as the following
example:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- configmap.yaml
- deployment.yaml
- secret.yaml
- service.yaml
- route.yaml
Overlays
Kustomize overlays are declarative YAML artifacts, or patches, that override the general settings
without modifying the original files. The overlay directory contains a kustomization.yaml file.
The kustomization.yaml file can refer to one or more directories as bases. Multiple overlays
can use a common base kustomization directory.
The following example shows the directory structure of the frontend-app directory containing
the base and overlay directories:
frontend-app
├── base
│   ├── configmap.yaml
│   ├── deployment.yaml
│   ├── secret.yaml
│   ├── service.yaml
│   ├── route.yaml
│   └── kustomization.yaml
└── overlays
    ├── development
    │   └── kustomization.yaml
    ├── testing
    │   └── kustomization.yaml
    └── production
        ├── kustomization.yaml
        └── patch.yaml
The following kustomization.yaml file, in the development overlay directory, deploys the base resources to the dev-env namespace:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: dev-env
resources:
- ../../base
Kustomize provides fields to set values for all resources in the kustomization file, such as a
common namespace (namespace) and common labels (commonLabels).
You can customize for multiple environments by using overlays and patching. The patches
mechanism has two elements: patch and target.
You can use JSON Patch and strategic merge patches. See the references section for further
information about both patch formats.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: test-env
patches:
- patch: |-
- op: replace
path: /metadata/name
value: frontend-test
target:
kind: Deployment
name: frontend
- patch: |-
- op: replace
path: /spec/replicas
value: 15
target:
kind: Deployment
name: frontend
resources:
- ../../base
commonLabels:
env: test
The patch field defines op, path, and value keys. In this example, the name changes
to frontend-test.
The target field specifies the kind and name of the resource to apply the patch. In this
example, you are changing the frontend deployment name to frontend-test.
The commonLabels field adds the env: test label to all resources.
The patches mechanism also provides an option to include patches from a separate YAML file by
using the path key.
The following example shows a kustomization.yaml file that uses a patch.yaml file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: prod-env
patches:
- path: patch.yaml
target:
kind: Deployment
name: frontend
options:
allowNameChange: true
resources:
- ../../base
commonLabels:
env: prod
The patches field lists the patches that are applied by using a production kustomization file.
The path field specifies the name of the patching YAML file.
The target field specifies the kind and name of the resource to apply the patch. In this
example, you are targeting the frontend deployment.
The allowNameChange field enables kustomization to update the name by using a patch
YAML file.
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend-prod
spec:
replicas: 5
The metadata.name field in the patch file updates the frontend deployment name to
frontend-prod if the allowNameChange field is set to true in the kustomization YAML
file.
The spec/replicas field in the patch file updates the number of replicas of the
frontend-prod deployment.
The kubectl apply command applies configurations to the resources in the cluster. If resources
are not available, then the kubectl apply command creates resources. The kubectl apply
command applies a kustomization with the -k flag.
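A sketch of the command, assuming an overlay in the overlays/development directory:

[user@host ~]$ kubectl apply -k overlays/development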
Kustomize Generators
Configuration maps hold non-confidential data by using key-value pairs. Secrets are similar
to configuration maps, but secrets hold confidential information such as usernames and
passwords. Kustomize has configMapGenerator and secretGenerator fields that generate
configuration map and secret resources.
The configuration map and secret generators can include content from external files in the
generated resources. By keeping the content of the generated resources outside the resource
definitions, you can use files that other tools generated, or that are stored in different systems.
Generators help to manage the content of configuration maps and secrets, by taking care of
encoding and including content from other sources.
The following example adds a configuration map by using the configMapGenerator field in the
staging kustomization file. The hello application deployment has two environment variables to
refer to the hello-app-configmap configuration map.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: hello-stage
resources:
- ../../base
configMapGenerator:
- name: hello-app-configmap
literals:
- msg="Welcome!"
- enable="true"
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello
labels:
app: hello
name: hello
spec:
...output omitted...
spec:
containers:
- name: hello
image: quay.io/hello-app:v1.0
env:
- name: MY_MESSAGE
valueFrom:
configMapKeyRef:
name: hello-app-configmap
key: msg
- name: MSG_ENABLE
valueFrom:
configMapKeyRef:
name: hello-app-configmap
key: enable
You can view and deploy all the resources and customizations that the kustomization YAML file
in the development directory defines.
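A sketch of the command that renders the development overlay (the directory path is an assumption); part of its output follows:

[user@host ~]$ kubectl kustomize overlays/development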
...output omitted...
valueFrom:
configMapKeyRef:
key: msg
name: hello-app-configmap-9tcmf95d77
- name: MSG_ENABLE
valueFrom:
configMapKeyRef:
key: enable
name: hello-app-configmap-9tcmf95d77
...output omitted...
The following example updates the msg literal in the kustomization file:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: hello-stage
resources:
- ../../base
configMapGenerator:
- name: hello-app-configmap
literals:
- msg="Welcome Back!"
- enable="true"
The kubectl apply -k command applies kustomization. Kustomize appends a new hash to the
configuration map name, which creates a hello-app-configmap-696dm8h728 configuration
map. The new configuration map triggers the generation of a new hello-55bc55ff9-hrszh
pod.
You can generate a configuration map from a .properties file by using the files key, or
from a .env file by using the envs key, with the file name as the value. You can also create a
configuration map from a literal key-value pair by using the literals key.
The following example shows a kustomization.yaml file with the configMapGenerator field.
...output omitted...
configMapGenerator:
- name: configmap-1
files:
- application.properties
- name: configmap-2
envs:
- configmap-2.env
- name: configmap-3
literals:
- name="configmap-3"
- description="literal key-value pair"
The following example shows the application.properties file that is referenced in the
configmap-1 key.
Day=Monday
Enable=True
The following example shows the configmap-2.env file that is referenced in the configmap-2
key.
Greet=Welcome
Enable=True
Run the kubectl kustomize command to view details of resources and customizations that the
kustomization YAML file defines:
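A sketch of the command, assuming the kustomization.yaml file is in the current directory; part of its output follows:

[user@host ~]$ kubectl kustomize .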
...output omitted...
Day=Monday
Enable=True
kind: ConfigMap
metadata:
name: configmap-1-5g2mh569b5
---
apiVersion: v1
data:
Enable: "True"
Greet: Welcome
kind: ConfigMap
metadata:
name: configmap-2-92m84tg9kt
---
apiVersion: v1
data:
description: literal key-value pair
name: configmap-3
kind: ConfigMap
metadata:
name: configmap-3-k7g7d5bffd
---
...output omitted...
Secret Generator
A secret resource has sensitive data such as a username and a password. You can generate the
secret by using the secretGenerator field. The secretGenerator field works similarly to the
configMapGenerator field. However, the secretGenerator field also performs the base64
encoding that secret resources require.
The following example shows a kustomization.yaml file with the secretGenerator field:
...output omitted...
secretGenerator:
- name: secret-1
files:
- password.txt
- name: secret-2
envs:
- secret-mysql.env
- name: secret-3
literals:
- MYSQL_DB=mysql
- MYSQL_PASS=root
Generator Options
Kustomize provides a generatorOptions field to alter the default behavior of Kustomize
generators. The configMapGenerator and secretGenerator fields append a hash suffix to
the name of the generated resources.
Workload resources such as deployments do not detect any content changes to configuration
maps and secrets. Any changes to a configuration map or secret do not apply automatically.
Because the generators append a hash, when you update the configuration map or secret, the
resource name changes. This change triggers a rollout.
In some cases, the hash is not needed. Some operators observe the contents of the configuration
maps and secrets that they use, and apply changes immediately. For example, the OpenShift
OAuth operator applies changes to htpasswd secrets automatically. You can disable this feature
with the generatorOptions field.
You can also add labels and annotations to the generated resources by using the
generatorOptions field.
...output omitted...
configMapGenerator:
- name: my-configmap
literals:
- name="configmap-3"
- description="literal key-value pair"
generatorOptions:
disableNameSuffixHash: true
labels:
type: generated-disabled-suffix
annotations:
note: generated-disabled-suffix
You can use the kubectl kustomize command to render the changes to verify their effect.
...output omitted...
note: generated-disabled-suffix
labels:
type: generated-disabled-suffix
name: my-configmap
The my-configmap configuration map is without a hash suffix, and has a label and annotations
that are defined in the kustomization file.
References
Declarative Management of Kubernetes Objects Using Kustomize
https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/
Guided Exercise
Kustomize Overlays
Deploy and update an application by applying different Kustomize overlays that are stored in
a Git server.
Outcomes
• Deploy an application by using Kustomize from provided files.
Instructions
1. Clone the v1.1.0 version of the application. Because this repository uses Git branches to
represent application versions, you must use the v1.1.0 branch.
Clone the repository from the following URL:
https://git.ocp4.example.com/developer/declarative-kustomize.git
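A sketch of the commands; the exercise might instead clone first and then switch branches:

[student@workstation ~]$ git clone -b v1.1.0 \
  https://git.ocp4.example.com/developer/declarative-kustomize.git
[student@workstation ~]$ cd declarative-kustomize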
2.1. Use the tree command to review the structure of the repository.
...output omitted...
3 directories, 10 files
The repository has a kustomization.yaml file at the root, which uses two
other bases.
The name and values of the credentials that are stored in the secret.
Key-value pairs of configuration data that are stored in the configuration map.
3.1. Log in to the OpenShift cluster as the developer user with the developer
password.
...output omitted...
3.3. Use the oc apply -k command to deploy the application with Kustomize.
3.4. Use the watch command to wait until the workloads are running.
replicaset.apps/database-55d6c77787 1 1 1 57s
replicaset.apps/exoplanets-d6f57869d 1 1 1 57s
NAME                                  HOST/PORT                                                 ...
route.route.openshift.io/exoplanets   exoplanets-declarative-kustomize.apps.ocp4.example.com    ...
4. Change to the v1.1.1 version of the application and examine the changes.
4.2. Use the git show command to display the last commit.
imagePullPolicy: Always
livenessProbe:
httpGet:
The v1.1.1 version updates the application to the v1.1.1 image in the base/
exoplanets/deployment.yaml file.
5. Deploy the updated application and verify that the URL now displays the v1.1.1 version.
5.2. Use the watch command to wait until the application redeploys.
NAME                                  HOST/PORT                                                 ...
route.route.openshift.io/exoplanets   exoplanets-declarative-kustomize.apps.ocp4.example.com    ...
6. Change to the v1.1.2 version of the application and examine the changes.
6.2. Use the git show command to display the last commit.
The v1.1.2 version updates the base kustomization. This update changes the
password that the database uses. This change is possible because the sample
application re-creates the database on startup.
6.4. Extract the contents of the secret. The name of the secret can change in your
environment. Use the output from a previous step to learn the name of the secret.
service/database unchanged
service/exoplanets unchanged
deployment.apps/database configured
deployment.apps/exoplanets configured
route.route.openshift.io/exoplanets configured
Because the password is different, Kustomize creates a secret with a new name. Kustomize also
updates the two deployments that reference the secret so that they use the new secret.
7.2. Use the watch command to wait until the application redeploys.
NAME                                  HOST/PORT                                                 ...
route.route.openshift.io/exoplanets   exoplanets-declarative-kustomize.apps.ocp4.example.com    ...
7.5. Examine the secret. Use the name of the secret from a previous step.
8. Change to the v1.1.3 version of the application and examine the changes.
8.2. Use the git show command to display the last commit.
index 0000000..a025aa0
--- /dev/null
+++ b/overlays/production/patch-replicas.yaml
@@ -0,0 +1,6 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: exoplanets
+spec:
+  replicas: 2
9.2. Use the watch command to wait until the application redeploys.
NAME                                  HOST/PORT                                                 ...
route.route.openshift.io/exoplanets   exoplanets-declarative-kustomize.apps.ocp4.example.com    ...
Press Ctrl+C to exit the watch command. After you run the command, the
application has two replicas.
Note
Unchanged resources are not restarted.
10.1. Use the oc delete -k command to delete the resources that Kustomize manages.
[student@workstation declarative-kustomize]$ cd
[student@workstation ~]$
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Lab
Outcomes
• Deploy an application by using Kustomize from provided files.
Instructions
1. Clone the v1.1.0 version of the application from the https://
git.ocp4.example.com/developer/declarative-review.git URL. Because this
repository uses Git branches to represent application versions, you must use the v1.1.0
branch.
2. Examine the first version of the application.
3. Log in to the OpenShift cluster as the developer user with the developer password.
Deploy the base directory of the repository to a new declarative-review project.
Verify that the v1.1.0 version of the application is available at http://exoplanets-
declarative-review.apps.ocp4.example.com.
4. Change to the v1.1.1 version of the application and examine the changes.
5. Deploy the updated application and verify that the URL now displays the v1.1.1 version.
6. Examine the overlay in the overlays/production path.
7. Deploy the production overlay to a new declarative-review-production project.
Verify that the v1.1.1 version of the application is available at http://exoplanets-
declarative-review-production.apps.ocp4.example.com with two replicas.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Solution
Outcomes
• Deploy an application by using Kustomize from provided files.
Instructions
1. Clone the v1.1.0 version of the application from the https://
git.ocp4.example.com/developer/declarative-review.git URL. Because this
repository uses Git branches to represent application versions, you must use the v1.1.0
branch.
2.1. Use the tree command to review the structure of the repository.
5 directories, 15 files
The exoplanets base defines resources to deploy an application that uses the
database.
The repository has a kustomization.yaml file at the root, which uses two other
bases.
3. Log in to the OpenShift cluster as the developer user with the developer password.
Deploy the base directory of the repository to a new declarative-review project.
Verify that the v1.1.0 version of the application is available at http://exoplanets-
declarative-review.apps.ocp4.example.com.
3.1. Log in to the OpenShift cluster as the developer user with the developer password.
...output omitted...
3.3. Use the oc apply -k command to deploy the application with Kustomize.
3.4. Use the watch command to wait until the workloads are running.
4. Change to the v1.1.1 version of the application and examine the changes.
4.2. Use the git show command to display the last commit.
5. Deploy the updated application and verify that the URL now displays the v1.1.1 version.
5.2. Use the watch command to wait until the application redeploys.
- ../../base/
patches:
- path: patch-replicas.yaml
  target:
    kind: Deployment
    name: exoplanets
This patch increases the number of replicas of the deployment, so that the production
deployment can handle more users.
7.3. Use the watch command to wait until the workloads are running.
NAME HOST/PORT
route.../exoplanets exoplanets-declarative-review-production.apps.ocp4.example.com
[student@workstation declarative-review]$ cd
[student@workstation ~]$
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Summary
• Imperative commands perform actions, such as creating a deployment, by specifying all
necessary parameters as command-line arguments.
• In the declarative workflow, you create manifests that describe resources in the YAML or JSON
formats, and use commands such as kubectl apply to deploy the resources to a cluster.
• Kubernetes provides tools, such as the kubectl diff command, to review your changes
before applying them.
• You can use Kustomize to create multiple deployments from a single base code with different
customizations.
• The kubectl command integrates Kustomize into the apply subcommand and others.
• Bases and overlays can reuse and modify resources that are defined in other bases and overlays.
Chapter 2
Deploy Packaged Applications
OpenShift Templates
Objectives
• Deploy and update applications from resource manifests that are packaged as OpenShift
templates.
OpenShift Templates
A template is a Kubernetes custom resource that describes a set of Kubernetes resource
configurations. Templates can have parameters. You can create a set of related Kubernetes
resources from a template by processing the template, and providing values for the parameters.
Templates have varied use cases, and can create any Kubernetes resource. You can create a list of
resources from a template by using the CLI or, if a template is uploaded to your project or to the
global template library, by using the web console.
The template resource is a Kubernetes extension that Red Hat provides for OpenShift. The Cluster
Samples Operator populates templates (and image streams) in the openshift namespace. You
can opt out of adding templates during installation, and you can restrict the list of templates that
the operator populates.
You can also create templates from scratch, or copy and customize a template to suit the needs of
your project.
Discovering Templates
The templates that the Cluster Samples Operator provides are in the openshift namespace.
Use the following oc get command to view a list of these templates:
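A command similar to the following lists the templates; the output varies by cluster:
[student@workstation ~]$ oc get templates -n openshift
...output omitted...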
...output omitted...
Parameters:
    Name:          APPLICATION_NAME
    Display Name:  Application Name
    Description:   Specifies a name for the application.
    Required:      true
    Value:         cache-service
...output omitted...
    Name:          APPLICATION_PASSWORD
    Display Name:  Client Password
    Description:   Sets a password to authenticate client applications.
    Required:      false
    Generated:     expression
    From:          [a-zA-Z0-9]{16}

Message: <none>

Objects:
    Secret              ${APPLICATION_NAME}
    Service             ${APPLICATION_NAME}-ping
    Service             ${APPLICATION_NAME}
    StatefulSet.apps    ${APPLICATION_NAME}
The value field provides a default value that you can override.
The object labels are applied to all resources that the template creates.
The objects section lists the resources that the template creates.
In addition to using the oc describe command to view information about a template, the
oc process command provides a --parameters option to view only the parameters that a
template uses. For example, use the following command to view the parameters that the cache-
service template uses:
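A possible invocation looks like the following:
[student@workstation ~]$ oc process --parameters cache-service -n openshift
...output omitted...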
Use the -f option to view the parameters of a template that are defined in a file:
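For example, assuming that the template is saved in a my-cache-service.yaml file:
[student@workstation ~]$ oc process --parameters -f my-cache-service.yaml
...output omitted...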
Use the oc get template template-name -o yaml -n namespace command to view the
manifest for the template. The following example retrieves the template manifest for the cache-
service template:
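A command along these lines retrieves the manifest:
[student@workstation ~]$ oc get template cache-service -o yaml -n openshift
...output omitted...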
In the template manifest, examine how the template creates resources. The manifest is also a
good resource for learning how to create your own templates.
Using Templates
The oc new-app command has a --template option that can deploy the template resources
directly from the openshift project. The following example deploys the resources that are
defined in the cache-service template from the openshift project:
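A possible invocation follows; the APPLICATION_USER parameter and its value are assumptions
for illustration:
[student@workstation ~]$ oc new-app --template=cache-service -p APPLICATION_USER=my-user
...output omitted...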
Using the oc new-app command to deploy the template resources is convenient for
development and testing. However, for production use, consume templates in a way that supports
tracking resources and configuration. For example, the oc new-app command can only create
resources; it cannot update existing resources.
You can use the oc process command to apply parameters to a template, to produce manifests
to deploy the templates with a set of parameters. The oc process command can process both
templates that are stored in files locally, and templates that are stored in the cluster. However, to
process templates in a namespace, you must have write permissions on the template namespace.
For example, to run oc process on the templates in the openshift namespace, you must have
write permissions on this namespace.
Note
Unprivileged users can read the templates in the openshift namespace by
default. Those users can extract the template from the openshift namespace and
create a copy in a project where they have wider permissions. By copying a template
to a project, they can use the oc process command on the template.
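For instance, a minimal sketch that passes a single parameter with the -p option, assuming that
APPLICATION_USER is the only required parameter without a default value:
[student@workstation ~]$ oc process cache-service -n openshift -p APPLICATION_USER=my-user -o yaml
...output omitted...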
The previous example uses the -p option to provide a parameter value to the only required
parameter without a default value.
Use the -f option with the oc process command to process a template that is defined in a file:
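For example, assuming a local my-cache-service.yaml file:
[student@workstation ~]$ oc process -f my-cache-service.yaml -p APPLICATION_USER=my-user -o yaml
...output omitted...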
Use the -p option with key=value pairs with the oc process command to use parameter
values that override the default values. The following example passes three parameter values to
the my-cache-service template, and overrides the default values of the specified parameters:
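A possible invocation, using the same parameter names that appear in the parameters file later in
this section:
[student@workstation ~]$ oc process my-cache-service -o yaml \
  -p TOTAL_CONTAINER_MEM=1024 -p APPLICATION_USER=cache-user -p APPLICATION_PASSWORD=my-secret-password
...output omitted...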
Instead of specifying parameters on the command line, place the parameters in a file. This option
cleans up the command line when many parameter values are required. Save the parameters
file in a version control system to keep records of the parameters that are used in production
deployments.
For example, instead of using the command-line options in the previous examples, place the key-
value pairs in a my-cache-service-params.env file. Add the key-value pairs to the file, with
each pair on a separate line:
TOTAL_CONTAINER_MEM=1024
APPLICATION_USER='cache-user'
APPLICATION_PASSWORD='my-secret-password'
The corresponding oc process command uses the --param-file option to pass the
parameters as follows:
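A possible form of the command:
[student@workstation ~]$ oc process my-cache-service --param-file=my-cache-service-params.env -o yaml
...output omitted...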
Generating a manifest file is not required to use templates. Instead, pipe the output of the oc
process command directly to the input for the oc apply -f - command. The oc apply
command creates live resources on the Kubernetes cluster.
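For example, a sketch that combines both commands:
[student@workstation ~]$ oc process my-cache-service --param-file=my-cache-service-params.env | oc apply -f -
...output omitted...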
Because templates are flexible, you can use the same template to create different resources by
changing the input parameters.
To compare the results of applying a different parameters file to a template against the live
resources, pipe the manifest to the oc diff -f - command. For example, given a second
parameter file named my-cache-service-params-2.env, use the following command:
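A possible form of the command:
[student@workstation ~]$ oc process my-cache-service --param-file=my-cache-service-params-2.env | oc diff -f -
...output omitted...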
+ memory: 2Gi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
In this case, the configuration change increases the memory usage of the application. The output
shows that the second generation uses 2Gi of memory instead of 1Gi.
After verifying that the changes are what you intend, you can pipe the output of the oc process
to the oc apply -f - command.
Managing Templates
For production usage, make a customized copy of the template, to change the default values of
the template to suitable values for the target project. To copy a template into your project, use the
oc get template command with the -o yaml option to copy the template YAML to a file.
The following example copies the cache-service template from the openshift project to a
YAML file named my-cache-service.yaml:
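A command similar to the following performs the copy:
[student@workstation ~]$ oc get template cache-service -o yaml -n openshift > my-cache-service.yaml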
After creating a YAML file for a template, consider making the following changes to the template:
• Give the template a new name that is specific to the target use of the template resources.
• Apply appropriate changes to the parameter default values at the end of the file.
You can create the template in a namespace other than the namespace in the original manifest,
if you have permission to create template resources there. However, creating the template in a
different project without changing the template's namespace field to match the target namespace
gives an error. Optionally, you can also delete the namespace field from the metadata section of
the template resource.
After you have a YAML file for a template, use the oc create -f command to upload the
template to the current project. In this case, the oc create command is not creating the
resources that the template defines. Instead, the command is creating a template resource in
the project. Using a template that is uploaded to a project clarifies which template provides the
resource definitions of a project. After uploading, the template is available to anyone with access
to the project.
The following example uploads a customized template that is defined in the my-cache-
service.yaml file to the current project:
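A possible invocation:
[student@workstation ~]$ oc create -f my-cache-service.yaml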
Use the -n namespace option to upload the template to a different project. The following
example uploads the template that is defined in the my-cache-service.yaml file to the
shared-templates project:
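A possible invocation:
[student@workstation ~]$ oc create -f my-cache-service.yaml -n shared-templates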
Use the oc get templates command to view a list of available templates in the project:
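For example:
[student@workstation ~]$ oc get templates
...output omitted...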
References
For more information, refer to the Understanding Templates section in the Using
Templates chapter in the Red Hat OpenShift Container Platform 4.14 Images
documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/images/index#templates-overview_using-templates
For more information, refer to the OpenShift CLI Developer Command Reference
section in the OpenShift CLI (oc) chapter in the Red Hat OpenShift Container
Platform 4.14 CLI Tools documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/cli_tools/index#cli-developer-commands
Guided Exercise
OpenShift Templates
Deploy and update an application from a template that is stored in another project.
Outcomes
• Deploy and update an application from a template.
This command ensures that all resources are available for this exercise.
Instructions
1. Log in to the OpenShift cluster as the developer user with the developer password.
2. Examine the available templates in the cluster, in the openshift project. Identify an
appropriate template to deploy a MySQL database.
2.1. Use the get command to retrieve a list of templates in the cluster, in the openshift
project.
2.2. Use the oc process --parameters command to view the parameters of the
mysql-persistent template.
All the required parameters have either default values or generated values.
3.3. Use the watch command to verify that the pods are running. Wait for the mysql-1-
deploy pod to show a Completed status. Press Ctrl+C to exit the watch
command.
+--------------------+
| information_schema |
| performance_schema |
| sampledb |
+--------------------+
pod "query-db" deleted
The query-db pod uses the mysql command from the mysql-80 image to send
the SHOW DATABASES; query. The --rm option deletes the pod after execution
terminates.
4.2. Use oc get templates to view the available templates in the packaged-
templates project.
4.3. Use the oc process --parameters command to view the parameter of the
roster-template template.
4.4. Use the oc process command to generate the manifests for the roster-
template application resources, and use the oc apply command to create the
resources in the Kubernetes cluster.
You must use the same database credentials that you used in an earlier step to
configure the database, so that the application can access the database.
4.5. Use the oc get pods command to confirm that the application is running.
4.7. Open the application URL in the web browser. The header confirms the use of
version 1 of the application.
http://do280-roster-packaged-templates.apps.ocp4.example.com
4.8. Enter your information in the form and save it to the database.
5. Deploy an updated version of the do280/roster application from the custom
roster-template template. Use version 2 of the application and do not overwrite the
data in the database.
5.1. Create a text file named roster-parameters.env with the following content:
MYSQL_USER=user1
MYSQL_PASSWORD=mypasswd
IMAGE=registry.ocp4.example.com:8443/redhattraining/do280-roster:v2
Using a parameter file also makes it easier to track changes in version control.
5.2. Use the oc process command and the oc diff command to view the changes in
the new manifests when compared to the live application.
- value: "true"
- image: registry.ocp4.example.com:8443/redhattraining/do280-roster:v1
+ value: "False"
+ image: registry.ocp4.example.com:8443/redhattraining/do280-roster:v2
imagePullPolicy: IfNotPresent
name: do280-roster-image
ports:
The IMAGE parameter changes the image that the template uses.
5.3. Use the oc process command to generate the manifests for the roster-
template application objects, and use the oc apply command to create the
application objects. With the changes from a previous step, you use the IMAGE
variable to use a different image for the update and omit the INIT_DB variable.
5.4. Use watch to verify that the pods are running. Wait for the mysql-1-deploy pod to
show a Completed status. Press Ctrl+C to exit the watch command.
5.5. Open the application URL in the web browser. The route is unchanged, so you can
refresh the previous browser page if the page is still open. The header confirms the
use of version 2 of the application. The data that is pulled from the database is
unchanged.
http://do280-roster-packaged-templates.apps.ocp4.example.com
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Helm Charts
Objectives
• Deploy and update applications from resource manifests that are packaged as Helm charts.
Helm
Helm is an open source application that helps to manage the lifecycle of Kubernetes applications.
Helm introduces the concept of charts. A chart is a package that describes a set of Kubernetes
resources that you can deploy. Helm charts define values that you can customize when deploying
an application. Helm includes functions to distribute charts and updates.
Many organizations distribute Helm charts to deploy applications. Often, Helm is the supported
mechanism to deploy a specific application.
However, Helm does not cover every need for managing certain kinds of applications. Operators
provide a more complete model that can handle the lifecycle of more complex applications. For
more details about operators, refer to Kubernetes Operators and the Operator Lifecycle Manager.
Helm Charts
A Helm chart defines Kubernetes resources that you can deploy. A chart is a collection of files
with a defined structure. These files include chart metadata (such as the chart name or version),
resource definitions, and supporting material.
Chart authors can use the template feature of the Go language for the resource definitions. For
example, instead of specifying the image for a deployment, charts can use user-provided values
for the image. By using values to choose an image, cluster administrators can replace a default
public image with an image from a private repository.
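For example, a resource definition in a chart might reference user-provided values with Go
template syntax similar to the following hypothetical snippet (the image.repository and
image.tag keys are assumptions, not taken from a specific chart):
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"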
sample/
├── Chart.yaml
├── templates
│   └── example.yaml
└── values.yaml
The Chart.yaml file contains chart metadata, such as the name and version of the chart.
The templates directory contains files that define application resources such as
deployments.
Helm charts can contain hooks that Helm executes at different points during installations and
upgrades. Hooks can automate tasks for installations and upgrades. With hooks, Helm charts
can manage more complex applications than purely manifest-based processes. Review the chart
documentation to learn about the chart hooks and their implications.
Charts
Charts are the packaged applications that the helm command deploys.
Releases
A release is the result of deploying a chart. You can deploy a chart many times to the same
cluster. Each deployment is a different release.
Versions
A Helm chart can have many versions. Chart authors can release updates to charts, to adapt
to later application versions, introduce new features, or fix issues.
You can use and refer to charts in various ways. For example, if your local file system contains a
chart, then you can refer to that chart by using the path to the chart directory. You can also use a
path or a URL that contains a chart that is packaged in a tar archive with gzip compression.
The show values subcommand displays the default values for the chart. The output is in YAML
format and comes from the values.yaml file in the chart.
Chart resources use the values from the values.yaml file by default. You can override these
default values. You can use the output of the show values command to discover customizable
values.
Always refer to the documentation of the chart before installation to learn about prerequisites,
extra installation steps, and other information.
Helm charts can contain Kubernetes resources of any kind. These resources can be namespaced
or non-namespaced. Like normal resource definitions, namespaced resources in charts can define
or omit a namespace declaration.
Most Helm charts that deploy applications do not create a namespace, and namespaced resources
in the chart omit a namespace declaration. Typically, when deploying a chart that follows this
structure, you create a namespace for the deployment, and Helm creates namespaced resources
in this namespace.
After deciding the target namespace, you can design the values to use. Inspect the
documentation and the output of the helm show values command to decide which values to
override.
You can define values by writing a YAML file that contains them. This file can follow the structure
from the output of the helm show values command, which contains the default values. Specify
only the values to override.
Consider the following output from the helm show values command for an example chart:
image:
  repository: "sample"
  tag: "1.8.10"
  pullPolicy: IfNotPresent
Create a values.yaml file without the image key if you do not want to override any image
parameters. Omit the pullPolicy key to override the tag key but not the pull policy. For
example, the following YAML file would override only the image tag:
image:
  tag: "1.8.10-patched"
Besides the YAML file, you can override specific values by using command-line arguments.
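For example, the --set option of the helm install and helm upgrade commands (described
next) overrides a single value; the release name, chart, and values here are placeholders:
[student@workstation ~]$ helm install my-release do280-repo/etherpad --values values.yaml --set image.tag=1.8.18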
The final element to prepare a chart deployment is choosing a release name. You can deploy
a chart many times to a cluster. Each chart deployment must have a unique release name for
identification purposes. Many Helm charts use the release name to construct the name of the
created resources.
With the namespace, values, and release name, you can start the deployment process. The helm
install command creates a release in a namespace, with a set of values.
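A possible invocation follows; the release name, namespace, and values file are placeholders, and
the --dry-run option previews the result without creating resources:
[student@workstation ~]$ helm install my-release do280-repo/etherpad \
  --namespace my-namespace --values values.yaml --dry-run
...output omitted...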
NOTES:
The application can be accessed via port 1234.
...output omitted...
A list of the resources that the helm install command would create
Additional information
Note
You define values to use for the installation with the --values values.yaml
option. In this file, you override the default values from the chart that are defined in
the values.yaml file that the chart contains.
Often, chart resource names include the release name. In the example output of the helm
install command, the service account is a combination of the release name and the -sa text.
Chart authors can provide installation notes that use the chart values. In the same example, the
port number in the notes reflects a value from the values.yaml file.
If the preview looks correct, then you can run the same command without the --dry-run option
to deploy the resources and create the release.
Releases
When the helm install command runs successfully, besides creating the resources, Helm
creates a release. Helm stores information about the release as a secret of the helm.sh/
release.v1 type.
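For example, a command such as the following lists the release secrets in the current project:
[student@workstation ~]$ oc get secrets --field-selector type=helm.sh/release.v1
...output omitted...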
Inspecting Releases
Use the helm list command to inspect releases on a cluster.
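For example:
[student@workstation ~]$ helm list
...output omitted...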
Similarly to kubectl commands, many helm commands have the --all-namespaces and
--namespace options. The helm list command without options lists releases in the current
namespace. If you use the --all-namespaces option, then it lists releases in all namespaces. If
you use the --namespace option, then it lists releases in a single namespace.
Warning
Do not manipulate the release secret. If you remove the secret, then Helm cannot
operate with the release.
Upgrading Releases
The helm upgrade command can apply changes to existing releases, such as updating values or
the chart version.
Important
By default, this command automatically updates releases to use the latest version of
the chart.
The helm upgrade command uses similar arguments and options to the helm install
command. However, the helm upgrade command interacts with existing resources in the cluster
instead of creating resources from a blank state. Therefore, the helm upgrade command can
have more complex effects, such as conflicting changes. Always review the chart documentation
when using a later version of a chart, and when changing values. You can use the --dry-run
option to preview the manifests that the helm upgrade command uses, and compare them to
the running resources.
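A possible invocation, with placeholder release and values file names:
[student@workstation ~]$ helm upgrade my-release do280-repo/etherpad --values values.yaml --dry-run
...output omitted...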
Helm records each change to a release as a new revision. You can review the revision history by using the helm history command:
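For example, with a placeholder release name:
[student@workstation ~]$ helm history my-release
...output omitted...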
You can use the helm rollback command to revert to an earlier revision:
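For example, to roll back a release to revision 1 (placeholder release name and revision):
[student@workstation ~]$ helm rollback my-release 1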
Rolling back can have greater implications than upgrading, because upgrades might not be
reversible. If you keep a test environment with the same upgrades as a production environment,
then you can test rollbacks before performing them in the production environment to find
potential issues.
Helm Repositories
Charts can be distributed as files, archives, or container images, or by using chart repositories.
The helm repo command provides subcommands, such as add, list, update, and remove, to
work with chart repositories.
This command and other repository commands change only local configuration, and do not
affect any cluster resources. The helm repo add command updates the ~/.config/helm/
repositories.yaml configuration file, which keeps the list of configured repositories.
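For example, the following command adds the classroom repository that the exercises in this
chapter use:
[student@workstation ~]$ helm repo add do280-repo http://helm.ocp4.example.com/charts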
When repositories are configured, other commands can use the list of repositories to perform
actions. For example, the helm search repo command lists all available charts in the configured
repositories:
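For example:
[student@workstation ~]$ helm search repo
...output omitted...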
By default, the helm search repo command shows only the latest version of a chart. Use
the --versions option to list all available versions. By default, the install and upgrade
commands use the latest version of the chart in the repository. You can use the --version
option to install specific versions.
References
Using Helm
https://helm.sh/docs/intro/using_helm/
Helm Charts
https://helm.sh/docs/topics/charts/
Guided Exercise
Helm Charts
Deploy and update an application from a chart that is stored in a catalog.
Outcomes
• Deploy an application and its dependencies from a Helm chart.
Instructions
1. Add the classroom Helm repository at the following URL and examine its contents.
http://helm.ocp4.example.com/charts
1.1. Use the helm repo list command to list the repositories that are configured for
the student user.
If the do280-repo repository is present, then continue to the next step. Otherwise,
add the repository.
1.2. Use the helm search command to list all the chart versions in the repository.
The etherpad chart has the 0.0.7 and 0.0.6 versions. This chart is a copy of a chart
from the https://github.com/redhat-cop/helm-charts repository.
image:
  repository: etherpad
  name:
  tag:
  pullPolicy: IfNotPresent
...output omitted...
route:
  enabled: true
  host: null
  targetPort: http
...output omitted...
resources: {}
...output omitted...
You can configure the image, the replica count, and other values. By default, the chart
creates a route. You can customize the route with the route.host key.
With the default configuration, the chart uses the docker.io/etherpad/
etherpad:latest image. The classroom environment is designed for offline use.
Use the registry.ocp4.example.com:8443/etherpad/etherpad:1.8.18
image from the local registry instead.
image:
  repository: registry.ocp4.example.com:8443/etherpad
  name: etherpad
  tag: 1.8.18
route:
  host: development-etherpad.apps.ocp4.example.com
2.3. Log in to the cluster as the developer user with the developer password.
2.6. Get the route to verify that you customized the route correctly.
Note
The route in this example uses 'edge' TLS termination. TLS termination is explained
later in this course.
3.1. Use the helm list command to verify the installed version.
3.2. Use the helm search command to verify that the repository contains a later
version.
3.3. Use the helm upgrade command to upgrade to the latest version of the chart.
3.4. Use the helm list command to verify the installed version.
image:
  repository: registry.ocp4.example.com:8443/etherpad
  name: etherpad
  tag: 1.8.18
route:
  host: etherpad.apps.ocp4.example.com
4.3. Install the 0.0.7 version of the etherpad chart to the packaged-review-
production project.
Use the values.yaml file that you edited in a previous step. Use production as
the release name.
4.4. Verify the deployment by opening a web browser and navigating to the application
URL. https://etherpad.apps.ocp4.example.com
This URL corresponds to the host that you specified in the values.yaml file. The
application welcome page appears in the production URL.
5. Reconfigure the production deployment to sustain heavier use. Change the number of
replicas to 3.
5.2. Edit the values.yaml file. Add a replicaCount key with the 3 value.
image:
  repository: registry.ocp4.example.com:8443/etherpad
  name: etherpad
  tag: 1.8.18
route:
  host: etherpad.apps.ocp4.example.com
replicaCount: 3
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Lab
Outcomes
• Deploy an application and its dependencies from resource manifests that are packaged as
a Helm chart.
Instructions
1. Log in to the cluster as the developer user with the developer password. Create the
packaged-review and packaged-review-prod projects.
2. Add the classroom Helm repository at the http://helm.ocp4.example.com/charts
URL and examine its contents. Use do280-repo for the name of the repository.
3. Install the 0.0.6 version of the etherpad chart on the packaged-review namespace,
with the test release name. Use the registry.ocp4.example.com:8443/etherpad/
etherpad:1.8.17 image in the offline classroom registry.
Create a values-test.yaml file with the image repository, name, and tag.
Field               Value
image.repository    registry.ocp4.example.com:8443/etherpad
image.name          etherpad
image.tag           1.8.17
Field               Value
image.tag           1.8.18
5. Using version 0.0.6, create a second deployment of the chart in the packaged-review-
prod namespace, with the prod release name. Copy the values-test.yaml file to the
values-prod.yaml file, and set the route host.
Field               Value
route.host          etherpad.apps.ocp4.example.com
Access the application in the route URL to verify that it is working correctly.
https://etherpad.apps.ocp4.example.com
6. Add limits to the etherpad instance in the packaged-review-prod namespace. The
chart values example contains comments that show the required format for this change. Set
limits and requests for the deployment in the values-prod.yaml file. Use version 0.0.7
of the chart.
Field                       Value
resources.limits.memory     256Mi
resources.requests.memory   128Mi
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Solution
Outcomes
• Deploy an application and its dependencies from resource manifests that are packaged as
a Helm chart.
Instructions
1. Log in to the cluster as the developer user with the developer password. Create the
packaged-review and packaged-review-prod projects.
1.1. Log in to the cluster as the developer user with the developer password.
2.1. Use the helm repo list command to list the repositories that are configured for the
student user.
If the do280-repo repository is present, then continue to the next step. Otherwise,
add the repository.
2.2. Use the helm search command to list all the chart versions in the repository.
The etherpad chart has versions 0.0.6 and 0.0.7. This chart is a copy of a chart from
the https://github.com/redhat-cop/helm-charts repository.
3. Install the 0.0.6 version of the etherpad chart on the packaged-review namespace,
with the test release name. Use the registry.ocp4.example.com:8443/etherpad/
etherpad:1.8.17 image in the offline classroom registry.
Create a values-test.yaml file with the image repository, name, and tag.
Field               Value
image.repository    registry.ocp4.example.com:8443/etherpad
image.name          etherpad
image.tag           1.8.17
image:
  repository: etherpad
  name:
  tag:
  pullPolicy: IfNotPresent
...output omitted...
route:
  enabled: true
  host: null
  targetPort: http
...output omitted...
resources: {}
...output omitted...
The resource requests and limits for this workload. This value is set by default to
{}, which indicates that it is an empty map.
3.3. With the default configuration, the chart uses the docker.io/etherpad/
etherpad:latest container image.
This image is not suitable for the classroom environment. Use the
registry.ocp4.example.com:8443/etherpad/etherpad:1.8.17 container
image instead.
Create a values-test.yaml file with the following content:
image:
  repository: registry.ocp4.example.com:8443/etherpad
  name: etherpad
  tag: 1.8.17
• Use the values-test.yaml file that you created in the previous step.
• Use test as the release name.
3.5. Use the helm list command to verify the installed version of the etherpad chart.
3.6. Verify that the pod is running and that the deployment is ready.
3.7. Verify that the pod executes the specified container image.
3.9. Open a web browser and navigate to the following URL to view the application page.
https://test-etherpad-packaged-review.apps.ocp4.example.com
Field       Value
image.tag   1.8.18
4.1. Edit the values-test.yaml file and update the image tag value:
image:
  repository: registry.ocp4.example.com:8443/etherpad
  name: etherpad
  tag: 1.8.18
4.2. Use the helm search command to verify that the repository contains a more recent
version of the etherpad chart.
4.3. Use the helm upgrade command to upgrade to the latest version of the chart.
4.4. Use the helm list command to verify the installed version of the etherpad chart.
4.5. Verify that the pod is running and that the deployment is ready.
4.6. Verify that the pod executes the updated container image.
4.7. Reload the test-etherpad application welcome page in the web browser.
5. Using version 0.0.6, create a second deployment of the chart in the packaged-review-
prod namespace, with the prod release name. Copy the values-test.yaml file to the
values-prod.yaml file, and set the route host.
Field        Value
route.host   etherpad.apps.ocp4.example.com
Access the application in the route URL to verify that it is working correctly.
https://etherpad.apps.ocp4.example.com
image:
  repository: registry.ocp4.example.com:8443/etherpad
  name: etherpad
  tag: 1.8.18
route:
  host: etherpad.apps.ocp4.example.com
5.4. Install the 0.0.6 version of the etherpad chart on the packaged-review-prod
namespace.
Use the values-prod.yaml file that you edited in the previous step. Use prod as the
release name.
5.5. Use the helm list command to verify the installed version of the etherpad chart.
5.6. Verify that the pod is running and that the deployment is ready.
5.7. Verify that the pod executes the specified container image.
5.8. Verify the deployment by opening a web browser and navigating to the application
URL. This URL corresponds to the host that you specified in the values-prod.yaml
file. The application welcome page appears in the production URL.
https://etherpad.apps.ocp4.example.com
Field                       Value
resources.limits.memory     256Mi
resources.requests.memory   128Mi
6.1. Edit the values-prod.yaml file. Configure the deployment to request 128 MiB of
RAM, and limit RAM usage to 128 MiB.
image:
  repository: registry.ocp4.example.com:8443/etherpad
  name: etherpad
  tag: 1.8.18
route:
  host: etherpad.apps.ocp4.example.com
resources:
  limits:
    memory: 256Mi
  requests:
    memory: 128Mi
6.2. Use the helm upgrade command to upgrade to the latest version of the chart.
6.3. Verify that the pod is running and that the deployment is ready.
6.4. Examine the application pod from the production instance of the application to verify
the configuration change.
6.5. Examine the pod of the test instance of the application in the packaged-review
namespace. This deployment uses the values from the values-test.yaml file
that did not specify resource limits or requests. The pod in the packaged-review
namespace does not have a custom resource allocation.
6.6. Use the helm list command to verify the installed version of the etherpad chart.
6.7. Reload the application welcome page in the web browser. The deployment continues
working after you add the limits.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Summary
• Use templates to deploy workloads with parameterization.
• Use the oc process command and the oc apply -f - command to deploy template
resources to the Kubernetes cluster.
• Provide parameters to customize the template with the -p or --param-file arguments to the
oc command.
• View Helm charts with the helm show chart chart-reference and helm show values
chart-reference commands.
• Use the helm history release-name command to view the history of a release.
• Use the helm repo add repo-name repo-url command to add a Helm repository to the
~/.config/helm/repositories.yaml configuration file.
• Use the helm search repo command to search repositories in the ~/.config/helm/
repositories.yaml configuration file.
Chapter 3
Authentication and Authorization
Goal: Configure authentication with the HTPasswd identity provider and assign roles to users and groups.
Objectives
• Configure the HTPasswd identity provider for OpenShift authentication.
User
In the OpenShift Container Platform architecture, users are entities that interact with the API
server. The user resource represents an actor within the system. Assign permissions by adding
roles to the user directly or to the groups that the user is a member of.
Identity
The identity resource keeps a record of successful authentication attempts from a specific
user and identity provider. Any data about the source of the authentication is stored on the
identity.
Service Account
In OpenShift, applications can communicate with the API independently, even when regular user
credentials cannot be acquired. To preserve the integrity of a regular user's credentials, those
credentials are not shared with applications; service accounts are used instead. With service
accounts, you can control API access without borrowing a regular user's credentials.
Group
Groups represent a specific set of users. Users are assigned to groups. Authorization policies
use groups to assign permissions to multiple users at the same time. For example, to grant 20
users access to objects within a project, it is better to use a group instead of granting access
to each user individually. OpenShift Container Platform also provides system groups or virtual
groups that are provisioned automatically by the cluster.
Role
A role defines the API operations that a user has permissions to perform on specified resource
types. You grant permissions to users, groups, and service accounts by assigning roles to
them.
User and identity resources are usually not created in advance. OpenShift usually creates these
resources automatically after a successful interactive login with OAuth.
If the request does not present an access token or certificate, then the authentication layer
assigns it the system:anonymous virtual user and the system:unauthenticated virtual
group.
Identity Providers
The OpenShift OAuth server can be configured to use many identity providers. The following list
includes the most common identity providers:
HTPasswd
Validates usernames and passwords against a secret that stores credentials that are
generated by using the htpasswd command.
Keystone
Enables shared authentication with an OpenStack Keystone v3 server.
LDAP
Configures the LDAP identity provider to validate usernames and passwords against an
LDAPv3 server, by using simple bind authentication.
OpenID Connect
Integrates with an OpenID Connect identity provider by using an Authorization Code Flow.
The OAuth custom resource must be updated with your chosen identity provider. You can define
multiple identity providers, of the same or different kinds, on the same OAuth custom resource.
To create additional users and grant them different access levels, you must configure an identity
provider and assign roles to your users.
Note
In the classroom environment, the utility machine stores the kubeconfig file
at /home/lab/ocp4/auth/kubeconfig. Use the ssh lab@utility command
from the workstation machine to access the utility machine.
To use the kubeconfig file to authenticate oc commands, you must copy the file to your
workstation and set the absolute or relative path to the KUBECONFIG environment variable. Then,
you can run any oc command that requires cluster administrator privileges without logging in to
OpenShift.
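For example, assuming that you copied the file to /home/student/auth/kubeconfig (the path is a
placeholder):
[student@workstation ~]$ export KUBECONFIG=/home/student/auth/kubeconfig
[student@workstation ~]$ oc get nodes
...output omitted...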
The OpenShift installer dynamically generates a unique kubeadmin password for the cluster. The
installation logs provide the kubeadmin credentials to log in to the cluster. The cluster installation
logs also provide the login, password, and the URL for console access.
...output omitted...
INFO The cluster is ready when 'oc login -u kubeadmin -p shdU_trbi_6ucX_edbu_aqop'
...output omitted...
INFO Access the OpenShift web-console here:
https://console-openshift-console.apps.ocp4.example.com
INFO Login to the console with user: kubeadmin, password: shdU_trbi_6ucX_edbu_aqop
Note
In the classroom environment, the utility machine stores the password for the
kubeadmin user in the /home/lab/ocp4/auth/kubeadmin-password file.
Warning
If you delete the kubeadmin secret before you configure another user with
cluster admin privileges, then you can administer your cluster only by using the
kubeconfig file. If you do not have a copy of this file in a safe location, then you
cannot recover administrative access to your cluster. The only alternative is to
destroy and reinstall your cluster.
Warning
Do not delete the kubeadmin user at any time during this course. The kubeadmin
user is essential to the course lab architecture. If you deleted this user, you would
have to delete the lab environment and re-create it.
Important
Use the -c option only when creating a file. The -c option replaces all file content if
the file already exists.
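For example, commands along these lines create the file and then add another entry; the file path,
user names, and passwords are placeholders. The -B option selects the bcrypt hashing algorithm,
and the -b option reads the password from the command line:
[student@workstation ~]$ htpasswd -c -B -b /tmp/htpasswd manager redhat123
[student@workstation ~]$ htpasswd -b /tmp/htpasswd developer developer123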
Delete credentials.
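For example, the -D option deletes an entry from the file (placeholder path and user):
[student@workstation ~]$ htpasswd -D /tmp/htpasswd manager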
Important
A secret that the HTPasswd identity provider uses requires adding the htpasswd=
prefix before specifying the path to the file.
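For example, assuming that the credentials file is at /tmp/htpasswd, a command similar to the
following creates the secret in the openshift-config namespace, where OpenShift expects
identity provider secrets:
[student@workstation ~]$ oc create secret generic htpasswd-secret \
  --from-file htpasswd=/tmp/htpasswd -n openshift-config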
By default, the oc extract command saves each key within a configuration map or secret as a
separate file. Alternatively, you can redirect all the data to a single file or send it to standard output.
To extract data from the htpasswd-secret secret to the /tmp/ directory, use the following
command. The --confirm option replaces the file if it exists.
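A possible form of the command:
[student@workstation ~]$ oc extract secret/htpasswd-secret -n openshift-config \
  --to /tmp/ --confirm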
After updating the secret, the OAuth operator redeploys pods in the openshift-
authentication namespace. Monitor the redeployment of the new OAuth pods by running the
following command:
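For example:
[student@workstation ~]$ watch oc get pods -n openshift-authentication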
Test additions, changes, or deletions to the secret after the new pods finish deploying.
Managing users with the HTPasswd identity provider might suffice for a proof-of-concept
environment with a small set of users. However, most production environments require a more
powerful identity provider that integrates with the organization's identity management system.
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_htpasswd_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpasswd-secret
This provider name is prefixed to provider user names to form an identity name.
Controls how mappings are established between provider identities and user objects. With
the default claim value, you cannot log in with different identity providers.
An existing secret that contains data that is generated by using the htpasswd command.
Then, open the resulting file in a text editor and make the needed changes to the embedded
identity provider settings.
After completing modifications and saving the file, you must apply the new custom resource by
using the oc replace command.
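For example, assuming that the modified custom resource is saved in an oauth.yaml file:
[student@workstation ~]$ oc replace -f oauth.yaml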
You must remove the password from the htpasswd secret, remove the user from the local
htpasswd file, and then update the secret.
Identity resources include the name of the identity provider. To delete the identity resource for the
manager user, find the resource and then delete it.
References
For more information about identity providers, refer to the Understanding Identity
Provider Configuration chapter in the Red Hat OpenShift Container Platform 4.14
Authentication and Authorization documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/authentication_and_authorization/index#understanding-identity-
provider
Guided Exercise
Outcomes
• Create users and passwords for HTPasswd authentication.
The command ensures that the cluster API is reachable, the httpd-tools package is
installed, and that the authentication settings are configured to the installation defaults.
Instructions
1. Add an entry for two users, new_admin and new_developer. Assign the new_admin user
the redhat password, and assign the new_developer user the developer password.
1.2. Add the new_developer user with the developer password to the ~/DO280/
labs/auth-providers/htpasswd file. The password for the new_developer
user is hashed with the MD5 algorithm, because no algorithm was specified and MD5
is the default hashing algorithm.
2. Log in to OpenShift and create a secret that contains the HTPasswd users file.
...output omitted...
2.3. Assign the new_admin user the cluster-admin role. Ignore a warning that the user
is not found.
Note
When you execute the oc adm policy add-cluster-role-to-user
cluster-admin new_admin command, a naming collision occurs with an
existing cluster role binding object. Consequently, the system creates a new object and
appends -x to the name, where x is an incrementing numeral that starts at 0.
To view the new cluster role binding, use the oc get clusterrolebinding |
grep ^cluster-admin command to list all cluster role bindings that begin with
cluster-admin. Then, run oc describe on the listed item with the highest -x
value to view the details of your new binding.
3. Update the HTPasswd identity provider for the cluster so that your users can authenticate.
Configure the custom resource file and update the cluster.
3.1. Export the existing OAuth resource to a file named oauth.yaml in the ~/DO280/
labs/auth-providers directory.
Note
For convenience, an oauth.yaml file that contains the completed custom resource
file is downloaded to ~/DO280/solutions/auth-providers.
apiVersion: config.openshift.io/v1
kind: OAuth
...output omitted...
spec:
  identityProviders:
  - ldap:
...output omitted...
    type: LDAP
  - htpasswd:
      fileData:
        name: localusers
    mappingMethod: claim
    name: myusers
    type: HTPasswd
3.3. Apply the custom resource that was defined in the previous step.
Note
Authentication changes require redeploying pods in the openshift-
authentication namespace.
Use the watch command to examine the status of workloads in the openshift-
authentication namespace.
A few minutes after you ran the oc replace command, the redeployment starts.
Wait until new pods are running. Press Ctrl+C to exit the watch command.
Provided that the secret was created correctly, you can log in by using the HTPasswd
identity provider.
4. Log in as the new_admin and as the new_developer user to verify the HTPasswd user
configuration.
4.1. Log in to the cluster as the new_admin user to verify that the HTPasswd
authentication is configured correctly. The authentication operator takes some time
to load the configuration changes from the previous step.
Note
If the authentication fails, then wait a few moments and try again.
...output omitted...
4.2. Use the oc get nodes command to verify that the new_admin user has the
cluster-admin role.
4.3. Log in to the cluster as the new_developer user to verify that the HTPasswd
authentication is configured correctly.
...output omitted...
4.4. Use the oc get nodes command to verify that the new_developer and
new_admin users do not have the same level of access.
...output omitted...
Note
You might see additional users from previously completed exercises.
Note
You might see additional identities from previously completed exercises.
5. As the new_admin user, create an HTPasswd user named manager with a password of
redhat.
5.1. Extract the file data from the secret to the ~/DO280/labs/auth-providers/
htpasswd file.
5.4. You must update the secret after adding additional users. Use the oc set data
secret command to update the secret. If the command fails, then wait a few
moments for the oauth operator to finish reloading, and rerun the command.
5.5. Use the watch command to examine the status of workloads in the openshift-
authentication namespace.
A few minutes after you ran the oc set data command, the redeployment starts.
Wait until new pods are running. Press Ctrl+C to exit the watch command.
Note
If the authentication fails, then wait a few moments and try again.
...output omitted...
6. Create an auth-providers project, and then verify that the new_developer user
cannot access the project.
...output omitted...
...output omitted...
7.2. Extract the file data from the secret to the ~/DO280/labs/auth-providers/
htpasswd file.
7.3. Generate a random user password and assign it to the MANAGER_PASSWD variable.
7.4. Update the manager user to use the stored password in the MANAGER_PASSWD
variable.
7.6. Use the watch command to examine the status of workloads in the openshift-
authentication namespace.
A few minutes after you ran the oc set data command, the redeployment starts.
Wait until new pods are running. Press Ctrl+C to exit the watch command.
...output omitted...
Note
If the authentication fails, then wait a few moments and try again.
...output omitted...
8.2. Extract the file data from the secret to the ~/DO280/labs/auth-providers/
htpasswd file.
8.5. Use the watch command to examine the status of workloads in the openshift-
authentication namespace.
A few minutes after you ran the oc set data command, the redeployment starts.
Wait until new pods are running. Press Ctrl+C to exit the watch command.
8.6. Log in as the manager user. If the login succeeds, then try again until the login fails.
...output omitted...
8.10. List the current users to verify that you deleted the manager user.
8.11. Display the list of current identities to verify that you deleted the manager identity.
8.12. Extract the secret and verify that only the new_admin and new_developer users
are displayed. Using --to - sends the secret to STDOUT rather than saving it to a
file.
...output omitted...
9.3. Edit the resource in place to remove the identity provider from OAuth:
Delete all the lines after the ldap identity provider definition on line 34. Your file
should match the following example:
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
name: cluster
spec:
identityProviders:
- ldap:
...output omitted...
type: LDAP
# Delete all lines below
- htpasswd:
fileData:
name: localusers
mappingMethod: claim
name: myusers
type: HTPasswd
Save your changes, and then verify that the oc edit command applied those
changes:
oauth.config.openshift.io/cluster edited
9.4. Use the watch command to examine the status of workloads in the openshift-
authentication namespace.
A few minutes after you ran the oc edit command, the redeployment starts. Wait
until new pods are running. Press Ctrl+C to exit the watch command.
Note
You might see additional identities from previously completed exercises.
Note
You might see additional users from previously completed exercises.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• Define role-based access controls and apply permissions to users.
Note
Authorization is a separate step from authentication.
Authorization Process
The authorization process is managed by rules, roles, and bindings.
Rule: An allowed action on a set of resources, such as creating users.
Role: A set of rules. Users and groups can be associated with multiple roles.
Binding: An assignment of users or groups to a role.
RBAC Scope
Red Hat OpenShift Container Platform (RHOCP) defines two groups of roles and bindings
depending on the user's scope and responsibility: cluster roles and local roles.
Cluster RBAC: Roles and bindings that apply across all projects.
Local RBAC: Roles and bindings that are scoped to a given project. Local role
bindings can reference both cluster and local roles.
Note
This two-level hierarchy enables reuse across multiple projects through the cluster
roles, and enables customization inside individual projects through local roles.
Authorization evaluation uses both the cluster role bindings and the local role
bindings to allow or deny an action on a resource.
For example, to change a regular user to a cluster administrator, use the following command:
For example, to change a cluster administrator to a regular user, use the following command:
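As a sketch, assuming a user named developer (the user name here is only an example), the two
operations look like the following:

# Grant cluster administrator privileges to a regular user.
oc adm policy add-cluster-role-to-user cluster-admin developer

# Remove cluster administrator privileges from that user.
oc adm policy remove-cluster-role-from-user cluster-admin developer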
Rules are defined by an action and a resource. For example, the create user rule is part of the
cluster-admin role.
You can use the oc adm policy who-can command to determine whether a user can execute
an action on a resource. For example:
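A sketch of one such check; the verb and resource type here are only illustrative:

# List which users and groups are allowed to delete user resources.
oc adm policy who-can delete user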
Default Roles
OpenShift ships with a set of default cluster roles that can be assigned locally or to the entire
cluster. You can modify these roles for fine-grained access control to OpenShift resources. Other
required steps are outside the scope of this course.
admin: Users with this role can manage all project resources, including
granting other users access to the project.
basic-user: Users with this role have read access to the project.
cluster-admin: Users with this role have superuser access to the cluster resources.
These users can perform any action on the cluster, and have full
control of all projects.
cluster-status: Users with this role can access cluster status information.
cluster-reader: Users with this role can access or view most of the objects but
cannot modify them.
edit: Users with this role can create, change, and delete common
application resources on the project, such as services and
deployments. These users cannot act on management resources
such as limit ranges and quotas, and cannot manage access
permissions to the project.
self-provisioner: Users with this role can create their own projects.
view: Users with this role can view project resources, but cannot modify
project resources.
The admin role gives a user access to project resources such as quotas and limit ranges, and also
the ability to create applications.
The edit role gives a user sufficient access to act as a developer inside the project, but working
under the constraints that a project administrator configured.
Project administrators can use the oc policy command to add and remove namespace roles.
Add a specified role to a user with the add-role-to-user subcommand. For example, run the
following command to add the dev user to the basic-user cluster role in the wordpress project.
Even though basic-user is a cluster role, the add-role-to-user subcommand limits the
scope of the role to the wordpress namespace for the dev user.
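For reference, a sketch of the command that the preceding paragraph describes, together with the
matching removal subcommand:

# Grant the basic-user cluster role to the dev user, scoped to the
# wordpress project.
oc policy add-role-to-user basic-user dev -n wordpress

# Remove the role again when it is no longer needed.
oc policy remove-role-from-user basic-user dev -n wordpress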
User Types
Interaction with OpenShift Container Platform is associated with a user. An OpenShift Container
Platform user object represents a user who can be granted permissions in the system by adding
roles to that user or to a user's group via role bindings.
Regular users
Most interactive OpenShift Container Platform users are regular users, and are represented
with the User object. This type of user represents a person with access to the platform.
System users
Many system users are created automatically when the infrastructure is defined, mainly for the
infrastructure to securely interact with the API. System users include a cluster administrator
(with access to everything), a per-node user, users for routers and registries, and various
others. An anonymous system user is used by default for unauthenticated requests.
Service accounts
Service accounts are system users that are associated with projects. Workloads can use
service accounts to invoke Kubernetes APIs.
Some service account users are created automatically during project creation. Project
administrators can create more service accounts to grant extra privileges to workloads. By
default, service accounts have no roles. Grant roles to service accounts to enable workloads to
use specific APIs.
Every user must authenticate before they can access OpenShift Container Platform. API requests
with no authentication or invalid authentication are authenticated as requests by the anonymous
system user. After successful authentication, the policy determines what the user is authorized to
do.
Group Management
A group resource represents a set of users. Cluster administrators can use the oc adm groups
command to add groups or to add users to groups. For example, run the following command to
add the lead-developers group to the cluster:
Likewise, the following command adds the user1 user to the lead-developers group:
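A sketch of the two commands that the preceding paragraph describes:

# Create the lead-developers group.
oc adm groups new lead-developers

# Add the user1 user to the lead-developers group.
oc adm groups add-users lead-developers user1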
References
For more information about RBAC, refer to the Using RBAC to Define and
Apply Permissions chapter in the Red Hat OpenShift Container Platform 4.14
Authentication and Authorization documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/authentication_and_authorization/index#authorization-
overview_using-rbac
Kubernetes Namespaces
https://kubernetes.io/docs/concepts/overview/working-with-objects/
namespaces/
Guided Exercise
Outcomes
• Remove project creation privileges from users who are not OpenShift cluster
administrators.
• As a project administrator, assign read and write privileges to different groups of users.
This command ensures that the cluster API is reachable and creates some HTPasswd users
for the exercise.
Instructions
1. Log in to the OpenShift cluster and determine which cluster role bindings assign the self-
provisioner cluster role.
...output omitted...
1.2. List all cluster role bindings that reference the self-provisioner cluster role.
2. Remove the privilege to create projects from all users who are not cluster administrators by
deleting the self-provisioner cluster role from the system:authenticated:oauth
virtual group.
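One way to perform this removal, as a sketch; the substeps that follow might instead modify or
delete the cluster role binding directly:

# Remove the self-provisioner cluster role from the virtual group of all
# authenticated users.
oc adm policy remove-cluster-role-from-group self-provisioner \
  system:authenticated:oauth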
2.1. Confirm that the self-provisioners cluster role binding that you found
in the previous step assigns the self-provisioner cluster role to the
system:authenticated:oauth group.
Important
Do not confuse the self-provisioner cluster role with the self-
provisioners cluster role binding.
Note
You can safely ignore the warning about your changes being lost.
2.3. Verify that the role is removed from the group. The cluster role binding self-
provisioners should not exist.
2.4. Determine whether any other cluster role bindings reference the self-
provisioner cluster role.
...output omitted...
3. Create a project and add project administration privileges to the leader user.
...output omitted...
...output omitted...
3.3. Grant project administration privileges to the leader user on the auth-rbac
project.
4. Create the dev-group and qa-group groups and add their respective members.
4.2. Add the developer user to the group that you created in the previous step.
4.4. Add the qa-engineer user to the group that you created in the previous step.
4.5. Review all existing OpenShift groups to verify that they have the correct members.
Note
The lab environment already contains groups from the lab LDAP directory.
5. As the leader user, assign write privileges for dev-group and read privileges for qa-
group to the auth-rbac project.
...output omitted...
5.2. Add write privileges to the dev-group group on the auth-rbac project.
5.3. Add read privileges to the qa-group group on the auth-rbac project.
5.4. Review all role bindings on the auth-rbac project to verify that they assign roles to
the correct groups and users. The following output omits default role bindings that
OpenShift assigns to service accounts.
6. As the developer user, deploy an Apache HTTP Server to prove that the developer user
has write privileges in the project. Also try to grant write privileges to the qa-engineer
user to prove that the developer user has no project administration privileges.
...output omitted...
6.2. Deploy an Apache HTTP Server by using the standard image stream from OpenShift.
6.3. Try to grant write privileges to the qa-engineer user. The operation should fail.
7. Verify that the qa-engineer user can view objects in the auth-rbac project, but not
modify anything.
...output omitted...
7.2. Attempt to scale the httpd application. The operation should fail.
...output omitted...
8.2. Restore project creation privileges for all users by re-creating the self-
provisioners cluster role binding that the OpenShift installer created.
Note
You can safely ignore the warning that the group was not found.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Lab
Outcomes
• Create users and passwords for HTPasswd authentication.
The command ensures that the cluster API is reachable, and that the cluster uses the initial
lab authentication settings.
Instructions
1. Update the existing ~/DO280/labs/auth-review/tmp_users HTPasswd authentication
file to remove the analyst user. Ensure that the tester and leader users in the
file use the L@bR3v!ew password. Add two entries to the file for the new_admin and
new_developer users. Use the L@bR3v!ew password for each new user.
2. Log in to your OpenShift cluster as the admin user with the redhatocp password.
Configure your cluster to use the HTPasswd identity provider by using the defined user
names and passwords in the ~/DO280/labs/auth-review/tmp_users file. For grading,
use the auth-review name for the secret.
3. Make the new_admin user a cluster administrator. Log in as both the new_admin and
new_developer users to verify HTPasswd user configuration and cluster privileges.
4. As the new_admin user, prevent users from creating projects in the cluster.
5. Create a managers group, and add the leader user to the group. Grant project creation
privileges to the managers group. As the leader user, create the auth-review project.
6. Create a developers group and grant edit privileges on the auth-review project. Add the
new_developer user to the group.
7. Create a qa group and grant view privileges on the auth-review project. Add the tester
user to the group.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Solution
Outcomes
• Create users and passwords for HTPasswd authentication.
The command ensures that the cluster API is reachable, and that the cluster uses the initial
lab authentication settings.
Instructions
1. Update the existing ~/DO280/labs/auth-review/tmp_users HTPasswd authentication
file to remove the analyst user. Ensure that the tester and leader users in the
file use the L@bR3v!ew password. Add two entries to the file for the new_admin and
new_developer users. Use the L@bR3v!ew password for each new user.
1.2. Update the entries for the tester and leader users to use the L@bR3v!ew
password. Add entries for the new_admin and new_developer users with the
L@bR3v!ew password.
2. Log in to your OpenShift cluster as the admin user with the redhatocp password.
Configure your cluster to use the HTPasswd identity provider by using the defined user
names and passwords in the ~/DO280/labs/auth-review/tmp_users file. For grading,
use the auth-review name for the secret.
...output omitted...
apiVersion: config.openshift.io/v1
kind: OAuth
...output omitted...
spec:
identityProviders:
- ldap:
...output omitted...
type: LDAP
# Add the text after this comment.
- htpasswd:
fileData:
name: auth-review
mappingMethod: claim
name: htpasswd
type: HTPasswd
Note
For convenience, the ~/DO280/solutions/auth-review/oauth.yaml
file contains a minimal version of the OAuth configuration with the specified
customizations.
2.5. Apply the customized resource that you defined in the previous step.
Wait until the new oauth-openshift pods are ready and running, and the previous
pods have terminated.
Note
Pods in the openshift-authentication namespace redeploy when the oc
replace command succeeds.
You can examine the status of pods and deployments in the openshift-
authentication namespace to monitor the authentication status. You can also
examine the authentication cluster operator for further status information.
Provided that the secret was created correctly, you can log in by using the HTPasswd
identity provider.
3. Make the new_admin user a cluster administrator. Log in as both the new_admin and
new_developer users to verify HTPasswd user configuration and cluster privileges.
Note
You can safely ignore the warning that the new_admin user is not found.
3.2. Log in to the cluster as the new_admin user to verify that HTPasswd authentication is
configured correctly.
...output omitted...
3.3. Use the oc get nodes command to verify that the new_admin user has the
cluster-admin role. The names of the nodes from your cluster might be different.
3.4. Log in to the cluster as the new_developer user to verify that the HTPasswd
authentication is configured correctly.
...output omitted...
3.5. Use the oc get nodes command to verify that the new_developer user does not
have cluster administration privileges.
4. As the new_admin user, prevent users from creating projects in the cluster.
...output omitted...
Note
You can safely ignore the warning about your changes being lost.
5. Create a managers group, and add the leader user to the group. Grant project creation
privileges to the managers group. As the leader user, create the auth-review project.
...output omitted...
The user who creates a project is automatically assigned the admin role on the project.
...output omitted...
6. Create a developers group and grant edit privileges on the auth-review project. Add the
new_developer user to the group.
...output omitted...
6.4. Grant edit privileges to the developers group on the auth-review project.
7. Create a qa group and grant view privileges on the auth-review project. Add the tester
user to the group.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Summary
• A newly installed OpenShift cluster provides two authentication methods that grant
administrative access: the kubeconfig file and the kubeadmin virtual user.
• The HTPasswd identity provider authenticates users against credentials that are stored in a
secret. The secret name and other settings for the identity provider are stored inside the OAuth
custom resource.
• To manage user credentials by using the HTPasswd identity provider, you must extract data
from the secret, change that data using the htpasswd command, and then apply the data back
to the secret.
• Creating OpenShift users requires valid credentials, which an identity provider manages, plus
user and identity resources.
• Deleting OpenShift users requires deleting their credentials from the identity provider, and also
deleting their user and identity resources.
• OpenShift uses role-based access control (RBAC) to manage user actions. A role is a collection
of rules that govern interaction with OpenShift resources. Default roles exist for cluster
administrators, developers, and auditors.
• To control user interaction, assign a user to one or more roles. A role binding contains all of the
role's associations to users and groups.
• To grant a user cluster administrator privileges, assign the cluster-admin role to that user.
Chapter 4
Network Security
Goal Protect network traffic between applications inside
and outside the cluster.
Objectives
• Allow and protect network connections to applications inside an OpenShift cluster.
With OpenShift routes, you can expose your applications to external networks so that clients can
reach them at a unique, publicly accessible hostname. Routes rely on a router plug-in to redirect
the traffic from the public IP to pods.
The following diagram shows how a route exposes an application that runs as pods in your cluster:
Note
For performance reasons, routers send requests directly to pods based on service
configuration.
The dotted line in the diagram indicates this implementation. The router accesses
the pods through the services network.
Encrypting Routes
Routes can be either encrypted or unencrypted. Encrypted routes support several types of
transport layer security (TLS) termination to serve certificates to the client. Unencrypted routes
are the simplest to configure, because they require no key or certificates. By contrast, encrypted
routes encrypt traffic to and from the pods.
An encrypted route specifies the TLS termination of the route. The following termination types are
available:
Edge
With edge termination, TLS termination occurs at the router, before the traffic is routed to
the pods. The router serves the TLS certificates, so you must configure them into the route;
otherwise, OpenShift assigns its own certificate to the router for TLS termination. Because
TLS is terminated at the router, connections from the router to the endpoints over the internal
network are not encrypted.
Passthrough
With passthrough termination, encrypted traffic is sent straight to the destination pod
without TLS termination from the router. In this mode, the application is responsible for
serving certificates for the traffic. Passthrough is a common method for supporting mutual
authentication between the application and a client that accesses it.
Re-encryption
Re-encryption is a variation on edge termination, whereby the router terminates TLS with a
certificate, and then re-encrypts its connection to the endpoint, which might have a different
certificate. Therefore, the full path of the connection is encrypted, even over the internal
network. The router uses health checks to determine the authenticity of the host.
If the --key and --cert options are omitted, then the RHOCP ingress operator provides a
certificate from the internal Certificate Authority (CA). In this case, the route does not reference
a custom certificate or key.
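As a concrete sketch of an edge route that relies on the default ingress certificate, assuming a
service named api and a hostname in the default apps domain:

# Create an edge-terminated route; with no --cert or --key options, the
# ingress operator's default certificate is served to clients.
oc create route edge api-https --service api \
  --hostname api.apps.ocp4.example.com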
When using a route in edge mode, the traffic between the client and the router is encrypted, but
traffic between the router and the application is not encrypted:
Note
Network policies can help you to protect the internal traffic between your
applications or between projects.
To create a passthrough route, you need a certificate and a way for your application to access it.
The best way to provide the certificate is by using OpenShift TLS secrets. Secrets are exposed to
the container through a volume mount.
The following diagram shows how you can mount a secret resource in your container. The
application is then able to access your certificate.
With a re-encrypted route, the router re-encrypts the connection when accessing an internal cluster service. This
internal communication requires a certificate for the target service with an OpenShift FQDN, such
as the my-app.namespace.svc.cluster.local hostname.
The certificates for internal TLS connections require a public key infrastructure (PKI) to sign
the certificate. OpenShift provides the service-ca controller to generate and sign service
certificates for internal traffic. The service-ca controller creates a secret that it populates with
a signed certificate and key. A deployment can mount this secret as a volume to use the signed
certificate. Using the service-ca controller is explained later in this chapter.
References
For more information about how to manage routes, refer to the Configuring
Routes chapter in the Red Hat OpenShift Container Platform 4.14 Networking
documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/networking/index#configuring-routes
For more information about how to configure ingress cluster traffic, refer to the
Configuring Ingress Cluster Traffic chapter in the Red Hat OpenShift Container
Platform 4.14 Networking documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/networking/index#configuring-ingress-cluster-traffic
Guided Exercise
Outcomes
• Deploy an application and create an unencrypted route for it.
The command ensures that the cluster API is reachable, and creates the network-ingress
OpenShift project. The command also gives the developer user edit access on the project.
Instructions
As an application developer, you are ready to deploy your application in OpenShift. In this activity,
you deploy two versions of the application: one that is exposed over unencrypted traffic (HTTP),
and one that is exposed over encrypted traffic (HTTPS).
• *.apps.ocp4.example.com
• *.ocp4.example.com
...output omitted...
...output omitted...
2.1. Use the oc create command to deploy the application in the network-ingress
OpenShift project.
2.2. Wait a few minutes, so that the application can start, and then review the resources in
the project.
2.3. Run the oc expose command to create a route for accessing the application. Give
the route a hostname of todo-http.apps.ocp4.example.com.
2.4. Retrieve the name of the route and copy it to the clipboard.
2.5. On the workstation machine, open Firefox and access the application URL.
Confirm that you can see the application.
• http://todo-http.apps.ocp4.example.com
2.6. Open a new terminal tab and run the tcpdump command with the following options
to intercept the traffic on port 80:
Note
The full command is available at ~/DO280/labs/network-ingress/tcpdump-
command.txt.
2.7. On Firefox, refresh the page and notice the activity in the terminal. Press Ctrl+C to
stop the capture.
...output omitted...
<script type="text/javascript" src="assets/js/libs/angular/angular.min.js">
<script type="text/javascript" src="assets/js/libs/angular/angular-route.min.js">
<script type="text/javascript" src="assets/js/libs/angular/angular-
animate.min.js">
...output omitted...
3. Create an encrypted edge route. Edge certificates encrypt the traffic between the client
and the router, but leave the traffic between the router and the service unencrypted.
OpenShift generates its own certificate that it signs with its CA.
In later steps, you extract the CA to ensure that the route certificate is signed.
When the --key and --cert options are omitted, the RHOCP ingress operator
creates the required certificate with its own Certificate Authority (CA).
3.2. To test the route and read the certificate, open Firefox and access the application
URL.
• https://todo-https.apps.ocp4.example.com
Click the padlock, and then click the arrow next to Connection secure.
Locate the CN entry to see that the OpenShift ingress operator created the
certificate with its own CA.
3.3. From the terminal, use the curl command with the -I and -v options to retrieve the
connection headers.
The Server certificate section shows some information about the certificate.
The alternative name matches the name of the route. The output indicates that the
remote certificate is trusted because it matches the CA.
3.4. Although the traffic is encrypted at the edge with a certificate, you can still access
the plain text traffic at the service level, because the pod behind the service does not
offer an encrypted route.
Retrieve the IP address of the todo-http service.
3.5. Create a debug pod in the todo-http deployment. Use the Red Hat Universal Base
Image (UBI), which contains tools to interact with containers.
3.6. From the debug pod, use the curl command to access the service over HTTP.
Replace the IP address with the one that you obtained in a previous step.
The output indicates that the application is available over HTTP.
sh-4.4$ exit
Removing debug pod ...
3.8. Delete the edge route. In the following steps, you define the passthrough route.
Note
The following commands for generating a signed certificate are all available in the
~/DO280/labs/network-ingress/certs/openssl-commands.txt file.
4.3. Generate the certificate signing request (CSR) for the todo-
https.apps.ocp4.example.com hostname.
Warning
Type the request subject on one line. Alternatively, remove the -subj
option and its content. Without the -subj option, the openssl command
prompts you for the values; indicate a common name (CN) of todo-
https.apps.ocp4.example.com.
4.4. Finally, generate the signed certificate. Notice the use of the -CA and -CAkey
options for signing the certificate against the CA. Use the -passin option to reuse
the password of the CA. Use the -extfile option to define a Subject Alternative
Name (SAN).
4.5. Ensure that the newly created certificate and key are present in the current directory.
4.6. Return to the network-ingress directory. This step is important, because the next
step involves creating a route that uses the self-signed certificate.
5.1. Create a tls OpenShift secret named todo-certs. Use the --cert and --key
options to embed the TLS certificates. Use training.crt as the certificate, and
training.key as the key.
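A sketch of this command, assuming the certificate and key are in the certs subdirectory that was
used in the previous step:

# Create a TLS secret from the signed certificate and its key.
oc create secret tls todo-certs \
  --cert certs/training.crt --key certs/training.key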
The todo-certs secret with the SSL certificate is mounted in the container in
the /usr/local/etc/ssl/certs directory to enable TLS for the application.
Additionally, the todo-app-v2 deployment changes the service to include port
8443.
5.3. Run the oc create command to create a deployment that uses that image.
5.4. Wait a couple of minutes to ensure that the application pod is running. Use the oc
set volumes command to review the volumes that are mounted inside the pod.
6.1. Run the oc create route command to define the new route.
Give the route a hostname of todo-https.apps.ocp4.example.com.
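A sketch of the command, assuming the service that exposes port 8443 is named todo-https:

# Create a passthrough route; TLS terminates at the application pod.
oc create route passthrough todo-https --service todo-https \
  --port 8443 --hostname todo-https.apps.ocp4.example.com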
6.2. Use the curl command in verbose mode to test the route and to read the certificate.
Use the --cacert option to pass the CA certificate to the curl command.
The output indicates a match between the certificate chain and the application
certificate. This match indicates that the OpenShift router forwards only packets that
are encrypted by the application web server certificate.
7. Create a debug pod to further confirm proper encryption at the service level.
7.2. Create a debug pod in the todo-https deployment with the Red Hat UBI container
image.
7.3. From the debug pod, use the curl command to access the service over HTTP.
Replace the IP address with the one that you obtained in a previous step.
The output indicates that the application is not available over HTTP, and the web
server redirects you to the encrypted version.
7.4. Finally, access the application over HTTPS. Use the -k option, because the container
does not have access to the CA certificate.
sh-4.4$ exit
Removing debug pod ...
[student@workstation network-ingress]$ cd
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• Restrict network traffic between projects and pods.
In contrast to traditional firewalls, Kubernetes network policies control network traffic between
pods by using labels instead of IP addresses. To manage network communication between pods
in two namespaces, assign a label to the namespace that needs access to another namespace,
and create a network policy that selects these labels. You can also use a network policy to select
labels on individual pods to create ingress or egress rules. In network policies, use selectors
under spec to assign which destination pods are affected by the policy, and selectors under
spec.ingress to assign which source pods are allowed. The following command assigns the
network=network-1 label to the network-1 namespace:
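A sketch of that labeling command:

# Label the network-1 namespace so that network policies can select it.
oc label namespace network-1 network=network-1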
The following examples describe network policies that allow communication between pods in the
network-1 and network-2 namespaces:
• The following network policy applies to any pods with the deployment="product-
catalog" label in the network-1 namespace. The network-2 namespace has the
network=network-2 label. The policy allows TCP traffic over port 8080 from pods whose
label is role="qa" in namespaces with the network=network-2 label.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: network-1-policy
namespace: network-1
spec:
podSelector:
matchLabels:
deployment: product-catalog
ingress:
- from:
- namespaceSelector:
matchLabels:
network: network-2
podSelector:
matchLabels:
role: qa
ports:
- port: 8080
protocol: TCP
The top-level podSelector field is required and defines which pods use the network
policy. If the podSelector is empty, then all pods in the namespace are matched.
The ingress field defines a list of ingress traffic rules to apply to the matched pods from
the top-level podSelector field.
The from field defines a list of rules to match traffic from all sources. The selectors are not
limited to the project in which the network policy is defined.
The ports field is a list of destination ports that allow traffic to reach the selected pods.
• The following network policy allows traffic from any pods in namespaces with the
network=network-1 label into any pods and ports in the network-2 namespace. This policy
is less restrictive than the network-1 policy, because it does not restrict traffic from any pods
in the network-1 namespace.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: network-2-policy
namespace: network-2
spec:
podSelector: {}
ingress:
- from:
- namespaceSelector:
matchLabels:
network: network-1
Note
Network policies are Kubernetes resources. As such, you can manage them with oc
commands.
The fields in a network policy that take a list of objects can either combine selectors in a single
list item or split them across multiple items. If combined, the conditions are joined with a logical
AND. If listed as separate items, the conditions are joined with a logical OR. With these options,
you can create specific policy rules. The following examples highlight the differences that the
syntax can make:
• This example combines the selectors into one rule, and thereby allows access only from pods
with the app=mobile label in namespaces with the network=dev label. This sample shows a
logical AND statement.
...output omitted...
ingress:
- from:
- namespaceSelector:
matchLabels:
network: dev
podSelector:
matchLabels:
app: mobile
• By changing the podSelector field in the previous example to be an item in the from list, any
pods in namespaces with the network=dev label or any pods with the app=mobile label from
any namespace can reach the pods that match the top-level podSelector field. This sample
shows a logical OR statement.
...output omitted...
ingress:
- from:
- namespaceSelector:
matchLabels:
network: dev
- podSelector:
matchLabels:
app: mobile
The following network policy selects all pods in its namespace and allows no ingress traffic to
them, creating a default deny-all posture:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: default-deny
spec:
podSelector: {}
Important
If a pod does not match any network policies, then OpenShift does not restrict
traffic to that pod. When creating an environment to allow network traffic only
explicitly, you must include a deny-all policy.
Even with a deny-all policy in place, most applications must still accept some cluster traffic,
including traffic from the following sources:
• The monitoring pods that OpenShift uses to scrape metrics from applications
• The router pods that enable access from outside the cluster by using ingress or route resources
The following policies allow ingress from OpenShift monitoring and ingress pods:
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-from-openshift-ingress
spec:
podSelector: {}
ingress:
- from:
- namespaceSelector:
matchLabels:
policy-group.network.openshift.io/ingress: ""
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-from-openshift-monitoring
spec:
podSelector: {}
ingress:
- from:
- namespaceSelector:
matchLabels:
network.openshift.io/policy-group: monitoring
Important
Network policies do not block traffic from pods that use host networking to pods in
the same node.
For example, on a single-node cluster, a deny-all network policy does not prevent
ingress pods that use the host network strategy from accessing application pods.
Inside a node, traffic from pods that use host networking is treated differently from
traffic from other pods. Network policies control only internal traffic from pods that
do not use host networking.
When traffic leaves a node, no such different treatment exists, and network policies
control all traffic from other nodes.
For more information about this topic, refer to Network Policies [https://
kubernetes.io/docs/concepts/services-networking/network-policies/#what-you-
can-t-do-with-network-policies-at-least-not-yet]
References
For more information about network policy, refer to the Network Policy chapter in
the Red Hat OpenShift Container Platform 4.14 Networking documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/networking/index#network-policy
Guided Exercise
Outcomes
• Create network policies to control communication between pods.
This command ensures that the environment is ready and downloads the necessary resource
files for the exercise.
Instructions
1. Log in to the OpenShift cluster and create the network-policy project.
1.1. Log in to the cluster as the developer user with the developer password.
...output omitted...
...output omitted...
2. Create two identical deployments named hello and test. Create a route to the hello
deployment.
2.3. Use the oc expose command to create a route to the hello service.
3. Verify that the test pod can access the hello pod by using the oc rsh and curl
commands.
3.2. Access the hello pod IP address from the test pod by using the oc rsh and curl
commands.
3.3. Access the hello service IP address from the test pod by using the oc rsh and
curl commands.
3.4. Access the hello route hostname by using the curl command.
5. Access the hello and test pods in the network-policy project from the sample-app
pod in the different-namespace project.
5.1. In the second terminal, view the full name of the sample-app pod with the
display-project-info.sh script.
POD NAME
sample-app-d5f945-spx9q
===================================================================
5.2. In the first terminal, access the hello pod IP address from the sample-app pod by
using the oc rsh and curl commands.
5.3. Access the test pod IP address from the sample-app pod by using the oc rsh and
curl commands. Target the IP address that was previously retrieved for the test
pod.
6. In the network-policy project, create a deny-all network policy by using the resource
file at ~/DO280/labs/network-policy/deny-all.yaml.
6.3. Use a text editor to update the deny-all.yaml file with an empty podSelector
field to target all pods in the network-policy project.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: deny-all
spec:
podSelector: {}
Note
A solution is provided at ~/DO280/solutions/network-policy/deny-
all.yaml.
7. Verify that the deny-all network policy forbids network access to pods in the network-
policy project.
7.1. Verify that the test pod can no longer access the IP address of the hello pod. Wait
a few seconds, and then press Ctrl+C to exit the curl command that does not reply.
7.3. Verify that the sample-app pod can no longer access the IP address of the test
pod. Wait a few seconds, and then press Ctrl+C to exit the curl command that
does not reply.
8. Create a network policy to allow traffic to the hello pod in the network-policy project
from the sample-app pod in the different-namespace project via TCP on port 8080.
Use the resource file at ~/DO280/labs/network-policy/allow-specific.yaml.
8.1. Use a text editor to replace the CHANGE_ME sections in the allow-specific.yaml
file as follows:
...output omitted...
spec:
podSelector:
matchLabels:
deployment: hello
ingress:
- from:
- namespaceSelector:
matchLabels:
network: different-namespace
podSelector:
matchLabels:
deployment: sample-app
ports:
- port: 8080
protocol: TCP
Note
A solution is provided at ~/DO280/solutions/network-policy/allow-
specific.yaml.
8.2. Apply the network policy from the allow-specific.yaml file with the oc create
command.
...output omitted...
Important
The allow-specific network policy uses labels to match the different-
namespace namespace. By default, namespaces and projects do not get any labels
automatically.
...output omitted...
10. Verify that the sample-app pod can access the IP address of the hello pod, but cannot
access the IP address of the test pod.
10.2. Access the hello pod in the network-policy namespace with the oc rsh and
curl commands via the 8080 port.
10.3. Verify that the hello pod cannot be accessed on another port. Because the network
policy allows access only to port 8080 on the hello pod, requests to any other port
are ignored and eventually time out. Wait a few seconds, and then press Ctrl+C to
exit the curl command when no response occurs.
10.4. Verify that the test pod cannot be accessed from the sample-app pod. Wait a
few seconds, and then press Ctrl+C to exit the curl command when no response
occurs.
11. Verify whether the hello pod can still be accessed through the hello route.
11.1. Try to access the hello pod through its exposed route.
The lab environment is a single-node cluster. Because the ingress pods use host
networking and the application pods are in the same node, the network policy does
not block the traffic.
12. Create a network policy that allows traffic to the hello deployment via the exposed route.
Use the resource file at ~/DO280/labs/network-policy/allow-from-openshift-
ingress.yaml.
This step does not have an effect on the lab environment, because the lab environment
is a single-node cluster. On a cluster with multiple nodes, this step is required for correct
ingress operation.
12.1. Use a text editor to replace the CHANGE_ME values in the allow-from-
openshift-ingress.yaml file as follows:
...output omitted...
spec:
podSelector:
matchLabels:
deployment: 'hello'
ingress:
- from:
- namespaceSelector:
matchLabels:
policy-group.network.openshift.io/ingress: ""
Note
A solution is provided at ~/DO280/solutions/network-policy/allow-from-
openshift-ingress.yaml.
...output omitted...
12.5. Access the hello pod via the exposed route with the curl command.
13. Close the terminal window that contains the output of the display-project-info.sh
script, and navigate to the home directory.
[student@workstation network-policy]$ cd
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• Configure and use automatic service certificates.
Zero-trust Environments
Zero-trust environments assume that every interaction begins in an untrusted state. Users can
access only files or objects that are specifically allowed; communication must be encrypted; and
client applications must verify the authenticity of servers.
By default, OpenShift encrypts network traffic between nodes and the control plane, and prevents
external entities from reading internal traffic. This encryption provides stronger security than
default Kubernetes, which does not automatically encrypt internal traffic. Although the control
plane traffic is encrypted, applications in OpenShift do not necessarily verify the authenticity of
other applications or encrypt application traffic.
Zero-trust environments require that a trusted certificate authority (CA) signs the certificates that
are used to encrypt traffic. By referencing the CA certificate, an application can cryptographically
verify the authenticity of another application with a signed certificate.
Service Certificates
OpenShift provides the service-ca controller to generate and sign service certificates for
internal traffic. The service-ca controller creates a secret that it populates with a signed
certificate and key. A deployment can mount this secret as a volume to use the signed certificate.
Additionally, client applications need to trust the service-ca controller CA.
The secret that contains the certificate and key pair is named hello-secret.
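In the OpenShift documentation, the secret name is set with the serving-cert-secret-name
annotation on the service; a sketch, assuming a service named hello:

# Ask the service-ca controller to generate a signed certificate and key
# for the hello service and store them in the hello-secret secret.
oc annotate service hello \
  service.beta.openshift.io/serving-cert-secret-name=hello-secret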
After OpenShift generates the secret, you must mount the secret in the application deployment.
The location to place the certificate and key is application-dependent. The following YAML patch
is for an NGINX deployment:
spec:
template:
spec:
containers:
- name: hello
volumeMounts:
- name: hello-volume
mountPath: /etc/pki/nginx/
volumes:
- name: hello-volume
secret:
defaultMode: 420
secretName: hello-secret
items:
- key: tls.crt
path: server.crt
- key: tls.key
path: private/server.key
The secret has tls.crt as the signed certificate and tls.key as the key.
After mounting the secret to the application container, the application can use the signed
certificate for TLS traffic.
Configuration Maps
Apply the service.beta.openshift.io/inject-cabundle=true annotation to a
configuration map to inject the CA bundle into the service-ca.crt key of the data field.
The service-ca controller replaces all data in the selected configuration map with the CA
bundle. You must therefore use a dedicated configuration map to prevent overwriting existing
data.
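A sketch, assuming a dedicated configuration map named ca-bundle:

# Inject the service CA bundle into the ca-bundle configuration map.
oc annotate configmap ca-bundle \
  service.beta.openshift.io/inject-cabundle=true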
API service
Applying the annotation to an API service injects the CA bundle into the spec.caBundle
field.
CRD
Applying the annotation to a CRD injects the CA bundle into the
spec.conversion.webhook.clientConfig.caBundle field.
Key Rotation
The service CA certificate is valid for 26 months by default and is automatically rotated after 13
months. After rotation, a 13-month grace period follows during which the original CA certificate is still valid.
During this grace period, each pod that is configured to trust the original CA certificate must be
restarted in some way. A service restart automatically injects the new CA bundle.
You can also manually rotate the certificate for the service CA and for generated service
certificates. To rotate a generated service certificate, delete the existing secret, and the
service-ca controller automatically generates a new one.
To manually rotate the service CA certificate, delete the signing-key secret in the openshift-
service-ca namespace.
This process immediately invalidates the former service CA certificate. You must restart all pods
that use it, for TLS to function.
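As a sketch of both manual rotations; the my-service-secret name is only a placeholder:

# Rotate a generated service certificate by deleting its secret; the
# service-ca controller recreates the secret with a new certificate.
oc delete secret my-service-secret

# Rotate the service CA itself, which immediately invalidates the old CA.
oc delete secret signing-key -n openshift-service-ca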
You can use the cert-manager Operator to delegate the certificate signing process to a trusted
external service, and also to renew certificates.
You can also use Red Hat OpenShift Service Mesh for encrypted service-to-service
communication and for other advanced features. Service mesh is an advanced topic and is not
covered in the course.
References
For more information about service certificates, refer to the Securing Service Traffic
Using Service Serving Certificate Secrets section in the Configuring Certificates
chapter in the Red Hat OpenShift Container Platform 4.14 Security and Compliance
documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/security_and_compliance/index#understanding-service-
serving_service-serving-certificate
For more information about service mesh, refer to the About OpenShift Service
Mesh section in the Service Mesh 2.x chapter in the Red Hat OpenShift Container
Platform 4.14 Service Mesh documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/service_mesh/index#ossm-about
For more information about the cert-manager operator, refer to the cert-manager
Operator for Red Hat OpenShift chapter in the Red Hat OpenShift Container
Platform 4.14 Security and Compliance documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/security_and_compliance/index#cert-manager-operator-about
Guided Exercise
Outcomes
• Generate service certificates with the service-ca controller.
This command ensures that the OpenShift cluster is ready and creates the network-
svccerts project and server deployment for the exercise. The command also creates a
test pod named no-ca-bundle for use later in the exercise.
Instructions
In this exercise, you work with the server deployment, which has an NGINX container that serves
a "Hello World!" page with the HTTPS protocol. This deployment differs from earlier NGINX
deployments, because it allows only the HTTPS protocol. The server application expects the
existence of a certificate that you create in the exercise steps.
1. Log in to the OpenShift cluster as the admin user and switch to the network-svccerts
project.
...output omitted...
2. Generate a service certificate and secret that are named server-secret for the server
service, and then mount the secret in the server deployment.
2.2. Use the oc describe command to view the service and secret descriptions to verify
that OpenShift created the secret.
Data
====
tls.key: 1675 bytes
tls.crt: 2615 bytes
2.3. Use a text editor to create a patch file to mount the server-secret secret in
the server deployment. Edit the resource file at ~/DO280/labs/network-
svccerts/server-secret.yaml. Replace the CHANGE_ME sections as shown in
the following example:
spec:
template:
spec:
containers:
- name: server
volumeMounts:
- name: server-secret
mountPath: /etc/pki/nginx/
volumes:
- name: server-secret
secret:
defaultMode: 420
secretName: server-secret
items:
- key: tls.crt
path: server.crt
- key: tls.key
path: private/server.key
2.4. Apply the patch file to the server deployment with the oc patch command.
2.5. Use the openssl s_client command in the no-ca-bundle pod to verify that
OpenShift supplied the server deployment with a certificate. Verify that the no-
ca-bundle pod needs to configure the CA that issued the OpenShift service
certificate for certificate validation.
Note
The output shows the verify error:num=19:self signed certificate
in certificate chain error, because the no-ca-bundle pod is not
configured with the OpenShift cluster's CA bundle.
3. Generate the ca-bundle configuration map that contains the service CA bundle, and use
it to create the client pod.
3.1. Create an empty configuration map named ca-bundle by using the oc create
command.
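A sketch of this step, together with the annotation that later steps assume has been applied to
the configuration map:

# Create an empty configuration map in the exercise project.
oc create configmap ca-bundle -n network-svccerts

# Annotate it so that the service-ca controller injects the CA bundle.
oc annotate configmap ca-bundle -n network-svccerts \
  service.beta.openshift.io/inject-cabundle=true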
3.3. View the YAML output of the ca-bundle configuration map to verify that the CA
bundle is present.
3.4. Use a text editor to add the ca-bundle configuration map to the client.yaml
pod definition. Edit the resource file at ~/DO280/labs/network-svccerts/
client.yaml. Replace the CHANGE_ME sections of the file as shown in the
following example:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
labels:
app: client
name: client
namespace: network-svccerts
spec:
replicas: 1
selector:
matchLabels:
deployment: client
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
annotations:
openshift.io/generated-by: OpenShiftNewApp
labels:
deployment: client
spec:
containers:
- image: registry.ocp4.example.com:8443/redhattraining/hello-world-nginx
imagePullPolicy: IfNotPresent
name: client-deploy
ports:
- containerPort: 8080
protocol: TCP
volumeMounts:
- mountPath: /etc/pki/ca-trust/extracted/pem
name: trusted-ca
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
terminationGracePeriodSeconds: 30
volumes:
- configMap:
defaultMode: 420
name: ca-bundle
items:
- key: service-ca.crt
path: tls-ca-bundle.pem
name: trusted-ca
3.5. Apply the client.yaml file with the oc apply command to create the client
pod.
4. Show that the server service is now accessible over HTTPS with a certificate that is
signed by the OpenShift cluster.
4.1. Use the curl command within the client pod to test that the server service is
accessible on HTTPS.
4.2. Use the openssl s_client command within the client pod to verify that the
certificate is signed by the OpenShift cluster.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Lab
Network Security
Configure firewall rules to protect microservice communication, and also configure TLS
encryption between those microservices and for external access.
Outcomes
• Encrypt internal traffic between pods by using TLS service secrets that OpenShift
generates.
This command ensures that the environment is ready and copies the necessary files for this
exercise.
This command also deploys an API that is composed of a product and a stock microservice
to the network-review project.
The product microservice is the entry point to the API. The stock microservice provides only
additional information to the product response. If the product microservice cannot reach the
stock microservice, then the product microservice returns the -1 value.
The developer deployed the API without the security configuration. You must configure TLS
for end-to-end communications and restrict the ingress to pods for both microservices.
To complete the exercise, the following URLs must respond without errors:
• https://stock.network-review.svc.cluster.local/product/1
• https://product.apps.ocp4.example.com/products
Note
The lab start command deploys solution files in the ~/DO280/solutions/network-review/ directory.
Instructions
1. Log in to your OpenShift cluster as the admin user with the redhatocp password.
2. Create the stock-service-cert secret for the OpenShift service certificate to encrypt
communications between the product and the stock microservices.
3. Configure TLS on the stock microservice by using the stock-service-cert secret that
OpenShift generates.
Use the following settings in the deployment to configure TLS:
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Solution
Network Security
Configure firewall rules to protect microservice communication, and also configure TLS
encryption between those microservices and for external access.
Outcomes
• Encrypt internal traffic between pods by using TLS service secrets that OpenShift
generates.
This command ensures that the environment is ready and copies the necessary files for this
exercise.
This command also deploys an API that is composed of a product and a stock microservice
to the network-review project.
The product microservice is the entry point to the API. The stock microservice provides only
additional information to the product response. If the product microservice cannot reach the
stock microservice, then the product microservice returns the -1 value.
The developer deployed the API without the security configuration. You must configure TLS
for end-to-end communications and restrict the ingress to pods for both microservices.
To complete the exercise, the following URLs must respond without errors:
• https://stock.network-review.svc.cluster.local/product/1
• https://product.apps.ocp4.example.com/products
Note
The lab start command deploys solution files in the ~/DO280/solutions/network-review/ directory.
Instructions
1. Log in to your OpenShift cluster as the admin user with the redhatocp password.
1.1. Use the oc login command to log in to your OpenShift cluster as the admin user.
...output omitted...
2. Create the stock-service-cert secret for the OpenShift service certificate to encrypt
communications between the product and the stock microservices.
2.3. Edit the stock-service.yaml manifest to configure the stock service with the
service.beta.openshift.io/serving-cert-secret-name: stock-
service-cert annotation. This annotation creates the stock-service-cert
secret with the service certificate and the key.
apiVersion: v1
kind: Service
metadata:
name: stock
namespace: network-review
annotations:
service.beta.openshift.io/serving-cert-secret-name: stock-service-cert
spec:
...output omitted...
2.4. Apply the stock service changes by using the oc apply command.
2.5. Verify that the stock-service-cert secret contains a valid certificate for the
stock.network-review.svc hostname in the tls.crt secret key. Decode the
secret output with the base64 command by using the -d option. Then, use the
openssl x509 command to read the output from standard input, and use the -text
option to print the certificate in text form.
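A sketch of one way to chain these commands, assuming the network-review project is selected:
# The jsonpath expression escapes the period in the tls.crt key name.
oc get secret stock-service-cert -o jsonpath='{.data.tls\.crt}' | \
  base64 -d | openssl x509 -text -noout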
3. Configure TLS on the stock microservice by using the stock-service-cert secret that
OpenShift generates.
Use the following settings in the deployment to configure TLS:
apiVersion: apps/v1
kind: Deployment
metadata:
name: stock
namespace: network-review
spec:
...output omitted...
spec:
containers:
- name: stock
...output omitted...
env:
- name: TLS_ENABLED
value: "false"
volumeMounts:
- name: stock-service-cert
mountPath: /etc/pki/stock/
volumes:
- name: stock-service-cert
secret:
defaultMode: 420
secretName: stock-service-cert
3.2. Edit the stock deployment in the stock-deployment.yaml file to configure TLS
for the application and for the liveness and readiness probes on lines 26, 31, and 34.
apiVersion: apps/v1
kind: Deployment
metadata:
name: stock
namespace: network-review
spec:
...output omitted...
spec:
containers:
- name: stock
...output omitted...
ports:
- containerPort: 8085
readinessProbe:
httpGet:
port: 8085
path: /readyz
scheme: HTTPS
livenessProbe:
httpGet:
port: 8085
path: /livez
scheme: HTTPS
env:
- name: TLS_ENABLED
value: "true"
...output omitted...
3.3. Apply the stock deployment updates by using the oc apply command.
3.4. Edit the stock-service.yaml file to configure the stock service to listen on the
standard HTTPS 443 port.
apiVersion: v1
kind: Service
metadata:
name: stock
namespace: network-review
annotations:
service.beta.openshift.io/serving-cert-secret-name: stock-service-cert
spec:
selector:
app: stock
ports:
- port: 443
targetPort: 8085
name: https
3.5. Apply the stock service changes by using the oc apply command.
4. Configure TLS between the product and the stock microservices by using the internal
Certificate Authority (CA) from OpenShift.
The product microservice requires the following settings:
4.1. Edit the configuration map in the service-ca-configmap.yaml file to add the
service.beta.openshift.io/inject-cabundle: "true" annotation. This
annotation injects the OpenShift internal CA into the service-ca configuration map.
apiVersion: v1
kind: ConfigMap
metadata:
name: service-ca
namespace: network-review
annotations:
service.beta.openshift.io/inject-cabundle: "true"
data: {}
4.2. Create the service-ca configuration map by using the oc create command.
4.3. Verify that OpenShift injects the CA certificate by describing the service-ca
configuration map with the oc describe command.
Data
====
service-ca.crt:
----
-----BEGIN CERTIFICATE-----
apiVersion: apps/v1
kind: Deployment
metadata:
name: product
namespace: network-review
spec:
...output omitted...
spec:
containers:
- name: product
...output omitted...
env:
- name: CERT_CA
value: "/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem"
- name: TLS_ENABLED
value: "false"
- name: STOCK_URL
value: "https://stock.network-review.svc"
volumeMounts:
- name: trusted-ca
mountPath: /etc/pki/ca-trust/extracted/pem
volumes:
- name: trusted-ca
configMap:
defaultMode: 420
name: service-ca
items:
- key: service-ca.crt
path: tls-ca-bundle.pem
4.5. Apply the product deployment updates by using the oc apply command.
5.1. Create the passthrough-cert secret by using the product.pem certificate and the
product.key key from the lab directory.
apiVersion: apps/v1
kind: Deployment
metadata:
name: product
spec:
...output omitted...
spec:
containers:
- name: product
...output omitted...
volumeMounts:
- name: passthrough-cert
mountPath: /etc/pki/product/
- name: trusted-ca
mountPath: /etc/pki/ca-trust/extracted/pem
volumes:
- name: passthrough-cert
secret:
defaultMode: 420
secretName: passthrough-cert
- name: trusted-ca
configMap:
defaultMode: 420
name: service-ca
items:
- key: service-ca.crt
path: tls-ca-bundle.pem
5.3. Edit the product-deployment.yaml file to configure TLS for the application and for
the liveness and readiness probes on lines 26, 31, and 36.
apiVersion: apps/v1
kind: Deployment
metadata:
name: product
spec:
...output omitted...
spec:
containers:
- name: product
...output omitted...
ports:
- containerPort: 8080
readinessProbe:
httpGet:
port: 8080
path: /readyz
scheme: HTTPS
livenessProbe:
httpGet:
port: 8080
path: /livez
scheme: HTTPS
env:
- name: TLS_ENABLED
value: "true"
- name: STOCK_URL
value: "https://stock.network-review.svc"
...output omitted...
5.4. Apply the product deployment updates by using the oc apply command.
6. Expose the product microservice to external access by using the FQDN in the certificate that the corporate CA signed. Use the product.apps.ocp4.example.com hostname.
6.1. Create a passthrough route for the product service by using the
product.apps.ocp4.example.com hostname.
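A sketch of the route creation command, using the service and hostname from this step:
oc create route passthrough product --service=product \
  --hostname=product.apps.ocp4.example.com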
6.2. Verify that you can query the product microservice from outside the cluster by using
the curl command with the ca.pem CA certificate.
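A sketch of the verification, assuming the ca.pem file is in the current lab directory:
curl --cacert ca.pem https://product.apps.ocp4.example.com/products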
7. Configure network policies to accept only ingress connections to the stock pod on the 8085
port that come from a pod with the app=product label.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: ingress-stock-policy
spec:
podSelector:
matchLabels:
app: stock
ingress:
- from:
- podSelector:
matchLabels:
app: product
ports:
- protocol: TCP
port: 8085
8. Configure network policies to accept only ingress connections to the product pod on the
8080 port that come from the OpenShift router pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: product-ingress-policy
spec:
podSelector:
matchLabels:
app: product
ingress:
- from:
- namespaceSelector:
matchLabels:
policy-group.network.openshift.io/ingress: ""
ports:
- protocol: TCP
port: 8080
[student@workstation network-review]$ cd
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Summary
• With OpenShift routes, you can expose your applications to external networks securely.
• With network policies, you can configure isolation policies for individual pods.
• You can use network policies to create logical zones in the SDN that map to your organization
network zones.
• In contrast to traditional firewalls, Kubernetes network policies control network traffic between
pods by using labels instead of IP addresses.
• OpenShift provides the service-ca controller to generate and sign service certificates for
internal traffic.
• OpenShift can inject its CA into configuration maps with a custom annotation. Client
applications can use these configuration maps to validate connections to services that run in the
cluster.
Chapter 5
Expose non-HTTP/SNI Applications
Goal: Expose applications to external access without using an ingress controller.
Objectives
• Expose applications to external access by using load balancer services.
Many internet services implement a process that listens on a given port and IP address. For
example, a service that uses the 1.2.3.4 IP address runs an SSH server that listens on port 22.
Clients connect to port 22 on that IP address to use the SSH service.
Web servers implement the HTTP protocol and other related protocols such as HTTPS.
Kubernetes ingresses and OpenShift routes use the virtual hosting property of the HTTP protocol
to expose web services that are running on the cluster. Ingresses and routes run a single web
server that uses virtual hosting to route each incoming request to a Kubernetes service by using
the request hostname.
For example, ingresses can route requests for the https://a.example.com URL to a
Kubernetes service in the cluster, and can route requests for the https://b.example.com URL
to a different service in the cluster.
However, many protocols do not have equivalent features. Ingress and route resources can expose
only HTTP services. To expose non-HTTP services, you must use a different resource. Because
these resources cannot expose multiple services on the same IP address and port, they require
more setup effort, and might require more resources, such as IP addresses.
Important
Prefer ingresses and routes to expose services whenever possible.
Kubernetes Services
Kubernetes workloads are flexible resources that can create many pods. By creating multiple pods
for a workload, Kubernetes can provide increased reliability and performance. If a pod fails, then
other pods can continue providing a service. With multiple pods, which possibly run on different
systems, workloads can use more resources for increased performance.
However, if many pods provide a workload service, then users of the service can no longer access
the service by using the combination of a single IP address and a port. To provide transparent
access to workload services that run on multiple pods, Kubernetes uses resources of the Service
type. A service resource contains the following information:
Internal communication
Services of the ClusterIP type provide service access within the cluster.
Different providers can implement Kubernetes services, by using the type field of the service
resource.
Although these services are useful in specific scenarios, some services require extra configuration,
and they can pose security challenges. Load balancer services have fewer limitations and provide
load balancing.
For example, cloud providers typically provide their own load balancer services. These services use
features that are specific to the cloud provider.
If you run a Kubernetes cluster on a cloud provider, controllers in Kubernetes use the cloud
provider's APIs to configure the required cloud provider resources for a load balancing service. On
environments where managed load balancer services are not available, you must configure a load
balancer component according to the specifics of your network.
MetalLB is an operator that you can install with the Operator Lifecycle Manager. After installing
the operator, you must configure MetalLB through its custom resource definitions. In most
situations, you must provide MetalLB with an IP address range.
For example, the following resource definition exposes port 1234 on pods with the example value
for the name label.
apiVersion: v1
kind: Service
metadata:
name: example-lb
namespace: example
spec:
ports:
- port: 1234
protocol: TCP
targetPort: 1234
selector:
name: example
type: LoadBalancer
Exposed port
Pod selector
LoadBalancer service type
You can also use the kubectl expose command with the --type LoadBalancer argument to create load balancer services imperatively.
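For example, the following sketch exposes the deployment behind the preceding service imperatively; the deployment name example is an assumption based on the selector in the example manifest:
kubectl expose deployment example --type=LoadBalancer \
  --name=example-lb --port=1234 --target-port=1234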
After you create the service, the load balancer component updates the service resource with
information such as the public IP address where the service is available.
You can now connect to the service on port 1234 of the 192.168.50.20 address.
You can also obtain the address from the status field of the resource.
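For example, the following sketch reads the assigned address of the example-lb service from its status field:
oc get service example-lb -o jsonpath='{.status.loadBalancer.ingress[0].ip}'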
Each load balancer service allocates IP addresses for services by following different processes.
For example, when installing MetalLB, you must provide ranges of IPs that MetalLB assigns to
services.
After exposing a service by using a load balancer, always verify that the service is available from
your intended network locations. Use a client for the exposed protocol to ensure connectivity, and
test that load balancing works as expected. Some protocols might require further adjustments
to work correctly behind a load balancer. You can also use network debugging tools, such as the
ping and traceroute commands to examine connectivity.
References
For more information, refer to the Load Balancing with MetalLB chapter in the
Red Hat OpenShift Container Platform 4.14 Networking documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/networking/index#load-balancing-with-metallb
Kubernetes Services
https://kubernetes.io/docs/concepts/services-networking/service/
MetalLB on OpenShift
https://metallb.universe.tf/installation/clouds/#metallb-on-openshift-ocp
Guided Exercise
Outcomes
• Use load balancer services to expose the video streams that the application produces.
Instructions
1. Log in as the developer user, and list the YAML resource manifests for the video
streaming application in the ~/DO280/labs/non-http-lb directory.
...output omitted...
1.4. List the contents of the directory. The YAML resource manifests represent three
instances of the video streaming application.
[student@workstation non-http-lb]$ ls -l
total 12
-rw-rw-r--. 1 student student 1561 Jun 21 16:29 virtual-rtsp-1.yaml
-rw-rw-r--. 1 student student 1563 Jun 21 16:29 virtual-rtsp-2.yaml
-rw-rw-r--. 1 student student 1565 Jun 21 16:21 virtual-rtsp-3.yaml
1.5. Each deployment emulates the video stream from a security camera on port 8554.
2. Deploy the first instance of the application, and expose the video stream from the
downtown camera by using a load balancer service.
2.1. Create the first instance of the video stream deployment. This application produces
the video stream of the downtown camera.
2.2. Wait until the pod is running and the deployment is ready. Press Ctrl+C to exit the
watch command.
2.5. Verify that you can connect to the external IP address of the load balancer service on
port 8554.
2.6. Open the URL in the media player to confirm that the video stream of the downtown
camera is working correctly.
• rtsp://192.168.50.20:8554/stream
Close the media player window after confirming that the video stream works
correctly.
3. Deploy the remaining instances of the video stream application. Expose the video streams
from the roundabout and intersection cameras by using a load balancer service.
Understand that the classroom is configured to provide only two IP addresses.
3.1. Create the second instance of the video stream deployment. This application
produces the video stream of the roundabout camera.
3.2. Create the third instance of the video stream deployment. This application produces
the video stream of the intersection camera.
3.3. Wait until the pods are running and the deployments are ready. Press Ctrl+C to exit
the watch command.
3.6. Get the external IP address of the second load balancer service.
3.7. Open the URL in the media player to confirm that the video stream of the
roundabout camera is working correctly.
• rtsp://192.168.50.21:8554/stream
Close the media player window after confirming that the video stream works
correctly.
4. Delete the first service to reallocate the IP address to the third service, and view the video
stream of the intersection camera.
4.2. Verify that the third service has an assigned external IP address.
4.3. Open the URL in the media player to confirm that the video stream of the
intersection camera is working correctly.
• rtsp://192.168.50.20:8554/stream
Close the media player window after confirming that the video stream works
correctly.
[student@workstation non-http-lb]$ cd
[student@workstation ~]$
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• Expose applications to external access by using a secondary network.
However, in some cases, connecting some pods to a different network can provide benefits or help
to address requirements.
For example, using a dedicated network with dedicated resources can improve the performance of
specific traffic. Additionally, a dedicated network can have different security properties from the
default network and help to achieve security requirements.
In addition to these advantages, using extra interfaces can also simplify some tasks, such as
controlling outgoing traffic from pods.
The Multus CNI (container network interface) plug-in helps to attach pods to custom networks.
These custom networks can be either existing networks outside the cluster, or custom networks
that are internal to the cluster.
You can use operators, such as the Kubernetes NMState operator or the SR-IOV (Single Root I/O
Virtualization) network operator, to customize node network configuration. With these operators,
you define custom resources to describe the intended network configuration, and the operator
applies the configuration.
The SR-IOV network operator configures SR-IOV network devices for improved bandwidth and
latency on certain platforms and devices.
Pod Annotations
Network attachment resources are namespaced, and are available only to pods in their
namespace.
When the cluster has additional networks, you can add the k8s.v1.cni.cncf.io/networks
annotation to the pod's template to use one of the additional networks. The value of the
annotation is the name of the network attachment definition to use, or a list of maps with
additional configuration options. Besides network attachments, you can also add pods to networks
that the SR-IOV network operator configures.
apiVersion: apps/v1
kind: Deployment
metadata:
name: example
namespace: example
spec:
selector:
matchLabels:
app: example
name: example
template:
metadata:
annotations:
k8s.v1.cni.cncf.io/networks: example
labels:
app: example
name: example
spec:
...output omitted...
"mac": "52:54:00:01:33:0a",
"dns": {}
}]
The example pod is attached to the default pod network and to the example custom
network.
To access the custom network, Multus creates a network interface in the pod. Multus names these network interfaces with the net string followed by a number, for example net1.
Note
The period is the JSONPath field access operator. Normally, you use the period to
access parts of the resource, such as in the .metadata.annotations JSONPath
expression. To access fields that contain periods with JSONPath, you must escape
the periods with a backslash (\).
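For example, the following sketch reads the network status annotation of a hypothetical pod named example, escaping the periods in the annotation name:
# "example" is a placeholder pod name.
oc get pod example \
  -o jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}'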
Host device
Attaches a network interface to a single pod.
Bridge
Uses an existing bridge interface on the node, or configures a new bridge interface. The pods
that are attached to this network can communicate with each other through the bridge, and to
any other networks that are attached to the bridge.
IPVLAN
Creates an IPVLAN-based network that is attached to a network interface.
MACVLAN
Creates a MACVLAN-based network that is attached to a network interface.
Bridges are network interfaces that can forward packets between different network interfaces that
are attached to the bridge. Virtualization environments often use bridges to connect the network
interfaces of virtual machines to the network.
IPVLAN and MACVLAN are Linux network drivers that are designed for container environments.
Container environments often use these network drivers to connect pods to the network.
Although bridge interfaces, IPVLAN, and MACVLAN have similar purposes, they differ in characteristics such as how they use MAC addresses, their filtering capabilities, and other features. For example, you might need to use IPVLAN instead of MACVLAN on networks that limit the number of MAC addresses, because IPVLAN uses fewer MAC addresses.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
name: example
spec:
config: |-
{
"cniVersion": "0.3.1",
"name": "example",
"type": "host-device",
"device": "ens4",
"ipam": {
"type": "dhcp"
}
}
The same value for the name parameter that you provided previously for this network
attachment definition
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
...output omitted...
additionalNetworks:
- name: example
namespace: example
rawCNIConfig: |-
{
"cniVersion": "0.3.1",
"name": "example",
"type": "host-device",
"device": "ens4",
"ipam": {
"type": "dhcp"
}
}
type: Raw
The namespace
The same value for the name parameter that you provided previously for this additional
network
The IP Address Management (IPAM) CNI plug-in provides IP addresses for other CNI plug-ins.
In the previous examples, the ipam key contains a network configuration that uses DHCP. You
can provide more complex network configurations in the ipam key. For example, the following
configuration uses a static address.
"ipam": {
"type": "static",
"addresses": [
{"address": "192.168.X.X/24"}
]
}
Although all the pods in the cluster still use the cluster-wide default network to maintain
connectivity across the cluster, you can define more than one additional network for your cluster.
The added networks give you flexibility when you configure pods that deliver network functions.
The network isolation that an additional network provides is useful for enhanced performance or
for security, depending on your needs.
References
For more information, refer to the Multiple Networks chapter in the Red Hat
OpenShift Container Platform 4.14 Networking documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/networking/index#multiple-networks
For more information about the SR-IOV network operator, including supported
platforms and devices, refer to the About Single Root I/O Virtualization (SR-IOV)
Hardware Networks section in the Hardware Networks chapter in the Red Hat
OpenShift Container Platform 4.14 Networking documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/networking/index#about-sriov
For more information, refer to the About the Kubernetes NMState Operator section
in the About Networking chapter in the Red Hat OpenShift Container Platform 4.14
Networking documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/networking/index#kubernetes-nmstate
Guided Exercise
Outcomes
• Make a PostgreSQL database accessible outside the cluster on an isolated network by
using an existing node network interface.
Instructions
1. Deploy a sample database.
1.1. Log in to the OpenShift cluster as the developer user with the developer
password.
...output omitted...
This application contains only a deployment, a persistent volume claim, and a secret.
The application does not contain any services, so the database is not accessible
outside the pod network.
This application uses a database that requires exclusive access to the database data.
On the database deployment, only one pod must be running at a time. To prevent
multiple pods from running at a time, the deployment uses the recreate strategy.
This scenario is one where you assign a network interface exclusively to a pod. In such scenarios, the host device type is suitable, although a network attachment of this type can serve only a single pod at a time. Other scenarios require more complex network attachments.
2. Examine the cluster nodes and inspect the network interface that you use in this exercise.
2.1. Log in to the OpenShift cluster as the admin user with the redhatocp password.
...output omitted...
2.2. Use the oc get node command to list the cluster nodes.
The cluster has a single node with the control plane and worker roles.
2.3. Run the ip addr command in the node, by using the oc debug command to
execute commands in the node.
The ens4 interface is an additional network interface for exercises that require
an additional network. This interface is attached to a 192.168.51.0/24 network,
with the 192.168.51.10 IP address.
The system has other interfaces, including bridges and pod network interfaces.
3.1. Use the ip addr command to examine the network interfaces in the workstation
machine.
3.2. Use the route command to view the routing table in the workstation machine.
The workstation routing table does not have a route to the 192.168.51.0/24
network.
3.3. Use the ping command to check connectivity to the ens4 interface in the cluster.
The command does not produce any output after printing the first line. Wait a
few seconds, and then press Ctrl+C to interrupt the ping command. The ping
command prints that after transmitting some packets, no response is received.
The workstation machine cannot connect to the additional cluster network.
Note
The network diagram in the Orientation to the Classroom Environment [https://
rol.redhat.com/rol/app/courses/do280-4.14/pages/pr01s02] lecture shows the
VMs and the networks that are available to the workstation machine.
4. Examine the networking configuration of the utility machine. The utility machine has
access to the 192.168.51.0/24 network.
4.2. Use the ip addr command to examine the network interfaces in the utility
machine.
4.3. Use the ping command to check connectivity to the ens4 interface in the cluster.
Wait a few seconds, and then press Ctrl+C to interrupt the ping command. The
ping command shows that the utility machine can connect to the additional
cluster network.
5. Configure a network attachment definition for the ens4 interface, so that the custom
network can be attached to a pod.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
name: custom
spec:
config: |-
{
"cniVersion": "0.3.1",
"name": "custom",
"type": "host-device",
"device": "ens4",
"ipam": {
"type": "static",
"addresses": [
{"address": "192.168.51.10/24"}
]
}
}
5.2. Use the diff command to compare your network attachment definition with
the solution in the ~/DO280/solutions/non-http-multus/network-
attachment-definition.yaml file. If the files are identical, then the diff
command does not return any output.
5.3. Use the oc create command to create the network attachment definition.
6.1. Log in to the OpenShift cluster as the developer user with the developer
password.
...output omitted...
apiVersion: v1
...output omitted...
- apiVersion: apps/v1
kind: Deployment
metadata:
name: database
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
name: database
app: database
template:
metadata:
labels:
name: database
app: database
annotations:
k8s.v1.cni.cncf.io/networks: custom
spec:
...output omitted...
Note
The period is the JSONPath field access operator. Normally, you use the period to
access parts of the resource, such as in the .metadata.annotations JSONPath
expression. To access fields that contain periods with JSONPath, you must escape
the periods with a backslash (\).
7. Verify that you can access the database from the utility machine.
7.2. Run a command to execute a query on the database. Use the IP address on the
custom network to connect to the database. Use password as the password for the
user.
8. Verify that you cannot use the same process to access the database from the
workstation machine, because the workstation machine cannot access the custom
network.
8.1. Run a command to execute a query on the database. Use the IP address on the
custom network to connect to the database.
After the image is downloaded, the command pauses for over a minute, because you
cannot access the custom network from the workstation machine.
The deployment uses the custom network, and you can access the database only
through the custom network.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Lab
Outcomes
• Expose a non-HTTP application to external access by using a service of the LoadBalancer type.
This command ensures that the cluster API is reachable and configures the MetalLB
operator to provide a single IP address, 192.168.50.20, for the load balancer services.
Instructions
1. Deploy the virtual-rtsp application to a new non-http-review-rtsp project as the
developer user with the developer password, and verify that the virtual-rtsp pod is
running.
The application consists of the ~/DO280/labs/non-http-review/virtual-
rtsp.yaml file.
2. Expose the virtual-rtsp deployment by using the LoadBalancer service.
3. Access the virtual-rtsp application by using the URL in the media player. Run the totem
rtsp://EXTERNAL-IP:8554/stream command to play the stream in the media player.
4. Deploy the nginx deployment to a new non-http-review-nginx project as the
developer user with the developer password, and verify that the nginx pod is running.
The application consists of the ~/DO280/labs/non-http-review/nginx.yaml file.
Important
The exercise is using an HTTP application as a stand-in for testing connectivity to an
external network.
5. Configure a network attachment definition for the ens4 interface, so that the isolated
network can be attached to a pod.
The master01 node has two Ethernet interfaces. The ens3 interface is the main network interface of the cluster. The ens4 interface is an additional network interface for exercises that require an additional network. The ens4 interface is attached to a 192.168.51.0/24 network, with the 192.168.51.10 IP address.
You can modify the ~/DO280/labs/non-http-review/network-attachment-definition.yaml file to configure a network attachment definition by using the following parameters:
Parameter Value
name custom
type host-device
device ens4
ipam.type static
6. The nginx application does not contain any services, so the application is not accessible
outside the pod network.
Assign the ens4 network interface exclusively to the nginx pod, by using the
custom network attachment definition. Edit the nginx deployment to add the
k8s.v1.cni.cncf.io/networks annotation with the custom value as the developer
user with the developer password.
7. Verify that you can access the nginx application from the utility machine by using the
following URL:
http://isolated-network-IP-address:8080
8. Verify that you cannot access the nginx application from the workstation machine,
because the workstation machine cannot access the isolated network.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Solution
Outcomes
• Expose a non-HTTP application to external access by using a service of the LoadBalancer type.
This command ensures that the cluster API is reachable and configures the MetalLB
operator to provide a single IP address, 192.168.50.20, for the load balancer services.
Instructions
1. Deploy the virtual-rtsp application to a new non-http-review-rtsp project as the
developer user with the developer password, and verify that the virtual-rtsp pod is
running.
The application consists of the ~/DO280/labs/non-http-review/virtual-
rtsp.yaml file.
1.1. Log in to your OpenShift cluster as the developer user with the developer
password.
1.4. Use the oc create command to create the virtual-rtsp deployment by using the
virtual-rtsp.yaml file.
1.5. List the deployments and pods. Wait for the virtual-rtsp pod to be ready. Press
Ctrl+C to exit the watch command.
3. Access the virtual-rtsp application by using the URL in the media player. Run the totem
rtsp://EXTERNAL-IP:8554/stream command to play the stream in the media player.
3.1. Open the URL in the media player to confirm that the video stream is working correctly.
rtsp://192.168.50.20:8554/stream
Close the media player window after confirming that the video stream works correctly.
Important
The exercise is using an HTTP application as a stand-in for testing connectivity to an
external network.
4.2. Use the oc apply command to create the nginx deployment by using the
nginx.yaml file.
4.3. List the deployments and pods. Wait for the nginx pod to be ready. Press Ctrl+C to
exit the watch command.
5. Configure a network attachment definition for the ens4 interface, so that the isolated
network can be attached to a pod.
The master01 node has two Ethernet interfaces. The ens3 interface is the main network
interface of the cluster. The ens4 interface is an additional network interface for exercises
that require an additional network. The ens4 interface is attached to a 192.168.51.0/24
network, with the 192.168.51.10 IP address.
You can modify the ~/DO280/labs/non-http-review/network-attachment-
definition.yaml file to configure a network attachment definition by using the following
parameters:
Parameter Value
name custom
type host-device
device ens4
ipam.type static
5.1. Log in to your OpenShift cluster as the admin user with the redhatocp password.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
name: custom
spec:
config: |-
{
"cniVersion": "0.3.1",
"name": "custom",
"type": "host-device",
"device": "ens4",
"ipam": {
"type": "static",
"addresses": [
{"address": "192.168.51.10/24"}
]
}
}
5.3. Use the oc create command to create the network attachment definition.
6. The nginx application does not contain any services, so the application is not accessible
outside the pod network.
Assign the ens4 network interface exclusively to the nginx pod, by using the
custom network attachment definition. Edit the nginx deployment to add the
k8s.v1.cni.cncf.io/networks annotation with the custom value as the developer
user with the developer password.
6.1. Log in to the OpenShift cluster as the developer user with the developer password.
...output omitted...
...output omitted...
spec:
replicas: 1
selector:
matchLabels:
app: nginx
strategy:
type: Recreate
template:
metadata:
labels:
app: nginx
annotations:
k8s.v1.cni.cncf.io/networks: custom
spec:
containers:
...output omitted...
6.4. Wait for the nginx pod to be ready. Press Ctrl+C to exit the watch command.
Note
The period is the JSONPath field access operator. Normally, you use the period to
access parts of the resource, such as in the .metadata.annotations JSONPath
expression. To access fields that contain periods with JSONPath, you must escape
the periods with a backslash (\).
7. Verify that you can access the nginx application from the utility machine by using the
following URL:
http://isolated-network-IP-address:8080
7.2. Verify that the nginx application is accessible. Use the IP address on the isolated
network to access the nginx application.
8. Verify that you cannot access the nginx application from the workstation machine,
because the workstation machine cannot access the isolated network.
8.1. Verify that the nginx application is not accessible from the workstation machine.
[student@workstation non-http-review]$ cd
[student@workstation ~]$
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Summary
• Kubernetes ingresses and OpenShift routes use the virtual hosting property of the HTTP
protocol to expose web services that are running on the cluster.
• Different providers can implement Kubernetes services, by using the type field of the service
resource.
• When a load balancer component is configured for a cluster, you can create services of the
LoadBalancer type to expose non-HTTP services outside the cluster.
• The Multus CNI (container network interface) plug-in helps to attach pods to custom networks.
• You can configure the additional network by using a network attachment definition resource.
Chapter 6
Enable Developer Self-Service
Objectives
• Configure compute resource quotas and Kubernetes resource count quotas per project and
cluster-wide.
Limiting Workloads
Kubernetes clusters can run heterogeneous workloads across many compute nodes. By using
Kubernetes role-based access control (RBAC), cluster administrators can allow users to create
workloads on their own. Although RBAC can limit the kinds of resources that users can create,
administrators might want further measures to ensure correct operation of the cluster.
Clusters have limited resources, such as CPU, RAM, and storage. If workloads on a cluster exceed
the available resources, then workloads might not work correctly. A cluster that is configured
to autoscale might also incur unwanted economic costs if the cluster scales to accommodate
unexpected workloads.
To help with this issue, Kubernetes workloads can reserve resources and declare resource limits.
Workloads can specify the following properties:
Resource limits
Kubernetes can limit the resources that a workload consumes. Workloads can specify an
upper bound of the resources that they expect to use under normal operation. If a workload
malfunctions or has unexpected load, then resource limits prevent the workload from
consuming an excessive amount of resources and impacting other workloads.
Resource requests
Workloads can declare their minimum required resources. Kubernetes tracks requested
resources by workloads, and prevents deployments of new workloads if the cluster has
insufficient resources. Resource requests ensure that workloads get their needed resources.
These measures prevent workloads from affecting other workloads. However, cluster
administrators might need to prevent other risks.
For example, users might mistakenly create unwanted workloads. The resource requests of those
unwanted workloads can prevent legitimate workloads from executing.
By dividing workloads into namespaces, Kubernetes can offer enhanced protection features. The
namespace structure often mirrors the organization that runs the cluster. Kubernetes introduces
resource quotas to limit resource usage by the combined workloads in a namespace.
Resource Quotas
Kubernetes administrators can create resources of the ResourceQuota type in a namespace for
this purpose. When a resource quota exists in a namespace, Kubernetes prevents the creation of
workloads that exceed the quota.
Whereas quota features in other systems often act on users or groups of users, Kubernetes
resource quotas act on namespaces.
apiVersion: v1
kind: ResourceQuota
metadata:
name: memory
namespace: example
spec:
hard:
limits.memory: 4Gi
requests.memory: 2Gi
scopes: {}
scopeSelector: {}
The scopes and scopeSelector keys define which namespace resources the quota applies
to. This course does not cover those keys.
The following sections describe the compute and object count quotas that you can include in the
hard key. Other components can define other quotas and enforce them.
• limits.cpu
• limits.memory
• requests.cpu
• requests.memory
Limit quotas interact with resource limits, and request quotas interact with resource requests.
Limit quotas control the maximum compute resources that the workloads in a namespace can
consume. Consider a namespace where all workloads have a memory limit. No individual workload
can consume enough memory to cause a problem. However, because users can create any number
of workloads, the workloads of a namespace can consume enough memory to cause a problem for
workloads in other namespaces. If you set a namespace memory usage limit, then the workloads in
the namespace cannot consume more memory than this limit.
Request quotas control the maximum resources that workloads in a namespace can reserve. If
you do not set namespace request quotas, then a single workload can request any quantity of
resources, such as RAM or CPU. This request can cause further requests in other namespaces
to fail. By setting namespace request quotas, the total requested resources by workloads in a
namespace cannot exceed the quota.
Excessive quotas can cause resource underutilization and can limit workload performance
unnecessarily.
After setting any compute quota, all workloads must define the corresponding request or resource
limit. For example, if you create a limits.cpu quota, then the workloads that you create require
the resources.limits.cpu key.
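For example, a container in such a namespace might declare requests and limits as in the following sketch; the values are illustrative only:
    spec:
      containers:
      - name: example
        resources:
          requests:
            cpu: 250m       # illustrative values
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi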
Clusters store resource definitions in a backing store. Kubernetes backing stores are databases,
and like any other database, the more data that they store, the more resources are needed for
adequate performance. Namespaces with many resources can impact Kubernetes performance.
Additionally, any process that creates cluster resources might malfunction and create unwanted
resources.
Setting object count quotas can limit the damage from accidents, and maintain adequate cluster
performance.
Note
Red Hat validates the performance of OpenShift up to a specific number of objects
in a set of configurations. If you are planning a large cluster, then these results can
help you to size the cluster and to establish object count quotas.
Some Kubernetes resources might affect external systems. For example, creating a persistent
volume might create an entity in the storage provider. Many persistent volumes might cause
issues in the storage provider. Examine the systems that your cluster interacts with to learn about
possible resource constraints, and establish object count quotas to prevent issues.
Use the count/resource_type syntax to set a quota for resources of the core group. Use the
oc api-resources command with an empty api-group parameter to list resources of the core
group.
Kubernetes initially supported quotas for a limited set of resource types. These quotas do not
use the count/resource_type syntax. You might find a services quota instead of a count/
services quota. The Resource Quotas reference further describes these quotas.
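For example, the following sketch combines the count/ syntax with a legacy services quota in one resource quota:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
spec:
  hard:
    count/configmaps: "10"
    count/secrets: "10"
    services: "5"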
You can also use the oc command to create a resource quota. The oc command can create
resource quotas without requiring a complete resource definition. Execute the oc create
resourcequota --help command to display examples and help for creating resource quotas
without a complete resource definition.
For example, execute the following command to create a resource quota that limits the number of
pods in a namespace:
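The command is presumably similar to the following sketch, reconstructed from the equivalent definition that follows:
oc create resourcequota example --hard=count/pods=1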
The previous command is equivalent to creating a resource quota with the following definition:
apiVersion: v1
kind: ResourceQuota
metadata:
name: example
spec:
hard:
count/pods: "1"
After creating a resource quota, the status key in the resource describes the current values and
limits in the quota.
The oc get and oc describe commands show resource quota information in a custom format.
The oc get command displays the status of the quota in resource lists:
Resource quotas generate the kube_resourcequota metric. You can examine this metric for
planning and trend analysis.
To ensure that a resource quota is correct, you can use the following procedures:
• Create a quota with an artificially low value in a testing environment, and ensure that the
resource quota has an effect.
For example, if a namespace contains a deployment, then an incorrectly defined resource quota
shows 0 deployments:
Exceeding a quota often produces an error immediately. For example, if you create a deployment
that exceeds the deployment quota, then the deployment creation fails.
However, some quotas do not cause operations to fail immediately. For example, if you set a
resource quota for pods, then creating a deployment appears to succeed, but the deployment
never becomes available. When a resource quota is acting indirectly, namespace events might
provide further information.
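For example, the following sketch lists the events in the current namespace, sorted by creation time:
oc get events --sort-by=.metadata.creationTimestamp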
The web console also shows quota information. Navigate to Administration > ResourceQuotas to
view resource quotas and their status. The project pages on both the developer and administrator
perspectives also show the quotas that apply to a specific project.
Resource restrictions often follow organization structure. Although namespaces often reflect
organization structure, cluster administrators might apply restrictions to resources without being
limited to a single namespace.
For example, a group of developers manages many namespaces. Namespace quotas can limit
RAM usage per namespace. However, a cluster administrator cannot limit total RAM usage by all
workloads that the group of developers manages.
Cluster resource quotas follow a similar structure to namespace resource quotas. However, cluster
resource quotas use selectors to choose which namespaces the quota applies to.
apiVersion: quota.openshift.io/v1
kind: ClusterResourceQuota
metadata:
name: example
spec:
quota:
hard:
limits.cpu: 4
selector:
annotations: {}
labels:
matchLabels:
kubernetes.io/metadata.name: example
The quota key contains the quota definition. This key follows the structure of the
ResourceQuota specification. The hard key is nested inside the quota key, instead of
being directly nested inside the spec key as in resource quotas.
The selector key defines which namespaces the cluster resource quota applies to. Other
Kubernetes features, such as services and network policies, use the same selectors.
You can also use the oc command to create a cluster quota. The oc command can
create quotas without requiring a complete resource definition. Execute the oc create
clusterresourcequota --help command to display examples and help about creating
cluster resource quotas without a complete resource definition.
For example, execute the following command to create a resource quota that limits total CPU
requests. The quota limits the total CPU requests on namespaces that have the group label with
the dev value.
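The command is presumably similar to the following sketch, which matches the status output that follows:
oc create clusterresourcequota example \
  --project-label-selector=group=dev --hard=requests.cpu=10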
Cluster resource quotas collect total resource usage across namespaces and enforce the limits.
The following example shows the status of the previous cluster resource quota:
apiVersion: quota.openshift.io/v1
kind: ClusterResourceQuota
metadata:
name: example
spec:
quota:
hard:
requests.cpu: "10"
selector:
annotations: null
labels:
matchLabels:
group: dev
status:
namespaces:
- namespace: example-3
status:
hard:
requests.cpu: "10"
used:
requests.cpu: 500m
- namespace: example-2
status:
hard:
requests.cpu: "10"
used:
requests.cpu: 250m
...output omitted...
total:
hard:
requests.cpu: "10"
used:
requests.cpu: 2250m
The namespaces key lists the namespaces that the quota applies to. For each namespace,
the used key shows the current utilization.
Users might not have read access to cluster resource quotas. OpenShift creates resources
of the AppliedClusterResourceQuota type in namespaces that are affected by
cluster resource quotas. Project administrators can review quota usage by reviewing the
AppliedClusterResourceQuota resources. For example, use the oc describe command to
view the cluster resource quotas that apply to a specific namespace:
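For example, the following sketch describes the applied cluster resource quotas in the example-2 namespace from the preceding output:
oc describe appliedclusterresourcequota -n example-2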
Note
The --all-namespaces argument to oc commands such as the get and
describe commands does not work with AppliedClusterResourceQuota
resources. These resources are listed only when you select a namespace.
Navigate to Administration > ResourceQuotas to view quotas and their status. This page
displays cluster quotas along with namespace quotas. Although you can view resources of the
ClusterResourceQuota type and create resources of the ResourceQuota type in the
ResourceQuotas page, you cannot create objects of the ClusterResourceQuota type on this page.
The project pages on both the developer and administrator perspectives also show the cluster
quotas that apply to a specific project.
References
For more information, refer to the Quotas chapter in the Red Hat OpenShift
Container Platform 4.14 Building Applications documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/building_applications/index#quotas-setting-per-project
For more information about object counts, refer to the Planning Your Environment
According to Object Maximums chapter in the Red Hat OpenShift Container
Platform 4.14 Scalability and Performance documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/scalability_and_performance/index
Resource Quotas
https://kubernetes.io/docs/concepts/policy/resource-quotas/
Guided Exercise
Outcomes
• Verify that requesting resources in one namespace can prevent creation of workloads in
different namespaces.
This command ensures that the cluster API is reachable and deletes the namespaces that
you use in this exercise.
Instructions
1. Log in to your OpenShift cluster as the developer user with the developer password.
...output omitted...
3.2. Use the oc set resources command to request one CPU in the container
specification.
3.3. Use the oc get command to ensure that the deployment starts a pod correctly.
Execute the command until the deployment and the pod are ready.
Out of eight pods that the deployment creates, only some of them change to
Running status. The other pods stay in Pending status. Not all replicas of the
deployment are ready and available.
4.3. Use the oc get command to list events. Sort the events by their creation timestamp.
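A command such as the following produces the sorted listing:

oc get events --sort-by=.metadata.creationTimestamp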
5.1. Log in to the cluster as the admin user with the redhatocp password.
...output omitted...
5.2. Use the oc adm top command to display the resource usage of nodes.
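A command such as the following displays CPU and memory usage for the cluster nodes:

oc adm top nodes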
The node has a capacity of six CPUs, and has more than five allocatable CPUs.
However, over five CPUs are requested, so less than one CPU is available for new
workloads.
6. Create a test project as an administrator, and verify that you cannot create new workloads
that request a CPU.
6.3. Use the oc set resources command to request one CPU in the container
specification.
6.4. Use the oc get command to review the pods and deployments in the test
namespace.
The deployment created one pod before adding the CPU request. When you updated
the deployment to request a CPU, the deployment tried to replace the pod to add
the CPU request. The new pod is in the Pending state, because the cluster has less
than one CPU available to request.
The workload in the selfservice-quotas namespace prevents the creation of
workloads in other namespaces.
7.2. Use the oc scale command to scale the test deployment to one replica.
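For example, a command of this form performs the scaling; the deployment in this exercise is named test:

oc scale deployment/test --replicas=1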
9. Try to scale the deployment to eight replicas and to create a second deployment.
The test deployment creates only two pods. The second deployment does not create
any pods.
The used status remains at 1, because the test2 deployment cannot request more
resources from the quota.
9.5. Use the oc get command to list events. Sort the events by their creation timestamp.
The test deployment cannot create further pods, because the new pods would
exceed the quota. The test2 deployment cannot create pods, because the
deployment does not set a CPU request.
10. Create a test project to verify that you can create new workloads in other namespaces
that request CPU resources.
10.3. Use the oc set resources command to request one CPU in the container
specification.
10.4. Use the oc get command to review the pods and deployments in the test
namespace.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• Configure default and maximum compute resource requirements for pods per project.
Kubernetes users might have further resource management needs within a namespace.
• Users might accidentally create workloads that consume too much of the namespace quota.
These unwanted workloads might prevent other workloads from running.
• Users might forget to set workload limits and requests, or might find it time-consuming to
configure limits and requests. When a namespace has a quota, creating workloads fails if the
workload does not define values for the limits or requests in the quota.
Kubernetes introduces limit ranges to help with these issues. Limit ranges are namespaced objects
that define limits for workloads within the namespace.
Limit Ranges
The following YAML file shows an example limit range:
apiVersion: v1
kind: LimitRange
metadata:
name: mem-limit-range
namespace: default
spec:
limits:
- default:
memory: 512Mi
defaultRequest:
memory: 256Mi
type: Container
Default limit
Use the default key to specify default limits for workloads.
Default request
Use the defaultRequest key to specify default requests for workloads.
Maximum
Use the max key to specify the maximum value of both requests and limits.
Minimum
Use the min key to specify the minimum value of both requests and limits.
Limit-to-request ratio
The maxLimitRequestRatio key controls the relationship between limits and requests. If
you set a ratio of two, then the resource limit cannot be more than twice the request.
Limit ranges can apply to containers, pods, images, image streams, and persistent volume claims.
Use maximums to prevent accidentally high resource requests and limits. These situations can
exhaust quotas and cause other issues.
Consider allowing users who create workloads to edit maximum limit ranges. Although maximum
limit ranges act as a convenient safeguard, excessively low limits can prevent users from creating
legitimate workloads.
Minimum limit ranges are useful to ensure that users create workloads with enough requests and
limits. If users create such workloads often, then consider adding minimums.
Setting Defaults
Defaults are convenient in namespaces with quotas, and eliminate the need to declare limits
explicitly in each workload. When a quota is present, all workloads must specify the corresponding
limits and requests. When you set the default and defaultRequest keys, workloads use the
requests and limits from the limit range by default.
Defaults are especially convenient in scenarios where many workloads are created dynamically.
For example, continuous integration tools might run tests for each change to a source code
repository. Each test can create multiple workloads. Because many tests can run concurrently,
the resource usage of testing workloads can be significant. Setting quotas for testing workloads
is often needed to limit resource usage. If you set CPU and RAM quotas for requests and limits,
then the continuous integration tool must set the corresponding limits in every testing workload.
Setting defaults can save time with configuring limits. However, determining appropriate defaults
might be complex for namespaces with varied workloads.
apiVersion: v1
kind: ResourceQuota
metadata:
name: example
namespace: example
spec:
hard:
limits.cpu: "8"
limits.memory: 8Gi
requests.cpu: "4"
requests.memory: 4Gi
apiVersion: v1
kind: LimitRange
metadata:
name: example
namespace: example
spec:
limits:
- default:
cpu: 500m
memory: 512Mi
defaultRequest:
cpu: 250m
memory: 256Mi
max:
cpu: "1"
memory: 1Gi
min:
cpu: 125m
memory: 128Mi
type: Container
Limit ranges do not affect existing pods. If you delete the deployment and run the oc create
command again, then the deployment creates a pod with the applied limit range.
...output omitted...
    Limits:
      cpu:     500m
      memory:  512Mi
    Requests:
      cpu:     250m
      memory:  256Mi
...output omitted...
The values correspond to the default and defaultRequest keys in the limit range.
The deployment does not contain any limits in the specification. The Kubernetes API server
includes an admission controller that enforces limit ranges. The controller affects pod definitions,
but not deployments, stateful sets, or other workloads.
You can replace the CPU limit, or add other resource specifications, by using the oc set
resources command:
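For example, the following command sets an explicit CPU limit; the example deployment name is illustrative:

oc set resources deployment/example --limits=cpu=1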
If you request CPU values outside the range that the min and max keys define, then Kubernetes
does not create the pods, and it logs warnings.
Note
When you experiment with deployments and resource quotas, consider what
happens when you modify a deployment. Modifications create a replacement replica
set, and the existing replica set also continues to run until the rollout completes.
The pods of both replica sets count towards the resource quota.
If the new replica set satisfies the quota, but the combined replica sets exceed the
quota, then the rollout cannot complete.
When creating a limit range, you can specify any combination of the default, defaultRequest,
min, and max keys. However, if you do not specify the default or defaultRequest keys, then
Kubernetes modifies the limit range to add these keys. These keys are copied from the min or max
keys. For more predictable behavior, always specify the default and defaultRequest keys if
you specify the min or max keys.
Also, the values for CPU or memory keys must follow these rules:
• The max value must be higher than or equal to the default value.
• The default value must be higher than or equal to the defaultRequest value.
• The defaultRequest value must be higher than or equal to the min value.
Do not create conflicting limit ranges in a namespace. For example, if two default CPU values are
specified, then it would be unclear which one is applied.
References
For more information, refer to the Restrict Resource Consumption with Limit Ranges
section in the Working with Clusters chapter in the Red Hat OpenShift Container
Platform 4.14 Nodes documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/nodes/index#nodes-cluster-limit-ranges
Limit Ranges
https://kubernetes.io/docs/concepts/policy/limit-range/
Guided Exercise
Outcomes
• Verify that workloads have no limits by default.
• Create a workload and inspect the limits that the limit range adds to the containers.
This command ensures that the cluster API is reachable and deletes the namespace that you
use in this exercise.
Instructions
1. As the admin user, navigate and log in to the OpenShift web console.
1.2. Click Red Hat Identity Management and log in as the admin user with the
redhatocp password.
2.1. Navigate to Home > Projects, and then click Create Project.
2.2. Type selfservice-ranges in the Name field, and then click Create.
3.1. Navigate to Workloads > Deployments, and then click Create Deployment.
3.2. Ensure that Form view is selected, and then type example in the Name field.
You create this deployment several times during this exercise. To use the terminal
instead for the exercise, copy the deployment definition from the YAML editor.
4.1. Wait a few seconds until the Deployment Details section shows that the deployment
scaled to three pods.
4.2. Click the Pods tab, and then click the name of any of the pods in the example
deployment.
Your pod names might differ from the ones shown.
4.4. Verify that the Resource requests and Resource limits fields show a hyphen.
Containers do not have resource requests or limits by default.
5.1. Navigate to Administration > LimitRanges, and then click Create LimitRange.
5.2. The YAML editor displays a template that defines a limit range for containers. The
limit range sets a default memory request of 256 Mi and a default memory limit of
512 Mi.
6. Examine the containers of the original deployment to verify that the limit range did not add
resource requests or limits.
6.1. Navigate to Workloads > Pods, and then click the name of any of the pods in the
example deployment.
6.3. The Resource requests and Resource limits fields continue to show a hyphen.
7.1. Navigate to Workloads > Deployments. Click the vertical ellipsis (⋮) menu at the end
of the example row, and then click Delete Deployment. Click Delete to confirm.
8.1. Navigate to Workloads > Deployments, and then click Create Deployment.
8.2. Ensure that Form view is selected, and then type example in the Name field.
9.1. Wait a few seconds until the Deployment Details section shows that the deployment
scaled to three pods.
9.2. Click the Pods tab, and then click the name of any of the pods.
9.4. Note that the Resource requests and Resource limits fields now have values that
correspond to the limit range.
10.3. The YAML editor displays the resource definition of the deployment.
kind: Deployment
apiVersion: apps/v1
metadata:
name: example
namespace: selfservice-ranges
...output omitted...
spec:
replicas: 3
selector:
matchLabels:
app: example
template:
metadata:
creationTimestamp: null
labels:
app: example
spec:
containers:
- name: container
          image: image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest
ports:
- containerPort: 8080
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
...output omitted...
Although the containers have resource limits and requests, the resources key in
the deployment is empty. Limit ranges modify containers to add resource limits and
requests, but not deployments.
11.1. Navigate to Workloads > Pods, and then click the name of any of the pods.
The Memory usage graph displays the memory usage of the pod (about 50 MiB), the
request (256 MiB), and the limit (512 MiB).
The template deployment in the web console uses an httpd image that consumes
little memory. In this case, the limit range requests more memory than the container
requires to work. If you create many similar deployments, then the limit range can
cause the deployments to request more memory than they need. If the namespace
has resource quotas, then you might not be able to create workloads even if the
cluster has enough available resources.
Most real workloads have larger memory usage that varies with load. Evaluate the
resource usage of your workloads to decide whether limit ranges can help you to
manage cluster resources, and examine resource usage to find adequate values.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• Configure default quotas, limit ranges, role bindings, and other restrictions for new projects,
and configure which users are allowed to self-provision new projects.
Project Creation
Kubernetes provides namespaces to isolate workloads.
Namespace metadata has security implications in clusters. For example, policy controllers might
use namespace labels to limit capabilities in a namespace. If users can modify namespaces, then
malicious users can modify namespace metadata to override security measures.
Note
Listing resources and viewing individual resources are different operations. You
can grant users permissions to view specific namespaces, but listing namespaces
requires a separate permission.
OpenShift introduces projects to improve security and users' experience of working with
namespaces. The OpenShift API server adds the Project resource type. When you make a query
to list projects, the API server lists namespaces, filters the visible namespaces to your user, and
returns the visible namespaces in project format.
Additionally, OpenShift introduces the ProjectRequest resource type. When you create
a project request, the OpenShift API server creates a namespace from a template. By using
a template, cluster administrators can customize namespace creation. For example, cluster
administrators can ensure that new namespaces have specific permissions, resource quotas, or
limit ranges.
Role bindings
The template can add role bindings that grant permissions on the new namespace, for example to
a group of users. You can also add different permissions, such as more granular permissions
over specific resource types.
Resource quotas and limit ranges
Add quotas and limit ranges to the template to constrain resource usage in new namespaces.
Even with quotas in all namespaces, users can create projects to continue adding workloads
to a cluster. If this scenario is a concern, then consider adding cluster resource quotas to the
cluster.
Network policies
Add network policies to the template to enforce organizational network isolation
requirements.
This template has the same behavior as the default project creation in OpenShift. The template
adds a role binding that grants the admin cluster role over the new namespace to the user who
requests the project.
Project templates use the same template feature as the oc new-app command.
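You can generate a template with this default behavior as a starting point, for example:

oc adm create-bootstrap-project-template -o yaml > template.yaml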
apiVersion: template.openshift.io/v1
kind: Template
metadata:
creationTimestamp: null
name: project-request
objects:
- apiVersion: project.openshift.io/v1
kind: Project
metadata:
annotations:
openshift.io/description: ${PROJECT_DESCRIPTION}
openshift.io/display-name: ${PROJECT_DISPLAYNAME}
openshift.io/requester: ${PROJECT_REQUESTING_USER}
creationTimestamp: null
name: ${PROJECT_NAME}
spec: {}
status: {}
- apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
creationTimestamp: null
name: admin
namespace: ${PROJECT_NAME}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: ${PROJECT_ADMIN_USER}
parameters:
- name: PROJECT_NAME
- name: PROJECT_DISPLAYNAME
- name: PROJECT_DESCRIPTION
- name: PROJECT_ADMIN_USER
- name: PROJECT_REQUESTING_USER
When a user requests a project, OpenShift replaces the ${VARIABLE} syntax with the parameters
of the project request, and creates the objects in the objects key.
Modify the object list to add the required resources for new namespaces.
The YAML output of oc commands that return lists of objects is formatted similarly to the
template objects key.
...output omitted...
        cpu: 125m
        memory: 128Mi
    type: Container
- apiVersion: v1
kind: ResourceQuota
metadata:
creationTimestamp: "2024-01-31T17:48:04Z"
name: example
namespace: example
resourceVersion: "881648"
uid: 108f0771-dc11-4289-ae76-6514d58bbece
spec:
hard:
count/pods: "1"
status:
...output omitted...
kind: List
metadata:
resourceVersion: ""
Some common resources in project templates, such as quotas, do not have strict validation. For
example, if a quota definition in the template contains the count/pod key instead of the
count/pods key, then the quota does not work. You can create the project template, and new namespaces contain
the quota, but the quota does not have an effect. To define a project template and to reduce the
risk of errors, you can perform the following steps:
• Create a namespace.
• Create your chosen resources and test until you get the intended behavior.
• Edit the resource listing to ensure that the definitions create the correct resources. For example,
remove elements that do not apply to resource creation, such as the creationTimestamp or
status keys.
• Add the list of resources to the project template that the oc adm create-bootstrap-
project-template command generates.
Note
Extracting a resource definition from an existing resource might not always produce
correct results. Besides including elements that do not apply to resource creation,
existing definitions might contain attributes that generate unexpected behavior. For
example, a controller might add to resources some annotations that are unsuitable
for template definitions.
Even after testing the resources in a test namespace, always verify that the projects
that are created from your template have only the required behavior.
Use the oc create command to create the template resource in the openshift-config
namespace:
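For example, assuming that the template definition is saved in a template.yaml file:

oc create -f template.yaml -n openshift-config

Then, reference the template by name in the projectRequestTemplate field of the cluster project configuration: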
apiVersion: config.openshift.io/v1
kind: Project
metadata:
...output omitted...
name: cluster
...output omitted...
spec:
projectRequestTemplate:
name: project-request
Note
During the apiserver deployment rollout, API requests can produce unexpected
results.
Control the binding of the self-provisioner cluster role to limit which users can request new projects.
Important
Remember that users with namespace permissions can create namespaces that do
not use the project template.
Kind: ClusterRole
Name: self-provisioner
Subjects:
Kind Name Namespace
---- ---- ---------
Group system:authenticated:oauth
To make changes, disable automatic updates by setting the
rbac.authorization.kubernetes.io/autoupdate annotation to false, and then edit the
subjects in the binding.
Important
The oc adm policy remove-cluster-role-from-group command removes
the cluster role binding when you remove the last subject.
Use extra caution with, or avoid, this command when you manage protected role
bindings. The command removes the permission only until the API server restarts
and restores the binding. Permanently removing the permission after deleting the
binding is a lengthier process than changing the subjects.
You can also use the oc edit command to modify any value of a resource. The command
launches the vi editor to apply your modifications. For example, to change the subject of the role
binding from the system:authenticated:oauth group to the provisioners group, execute
the following command:
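The default binding is named self-provisioners:

oc edit clusterrolebinding self-provisioners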
References
For more information, refer to the Configuring Project Creation section in the
Projects chapter in the Red Hat OpenShift Container Platform 4.14 Building
Applications documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/building_applications/index#configuring-project-creation
Guided Exercise
Outcomes
• Limit project creation to a group of users.
• Create the provisioner1 and provisioner2 users with the redhat password.
Instructions
In this exercise, you configure the cluster so that only members of the provisioners group can
create projects. Members of the provisioners group have full permissions on new projects.
Users cannot create workloads that request more than 1 GiB of RAM in new projects.
1. Log in to your OpenShift cluster as the admin user with the redhatocp password.
2.2. Use the oc edit command to edit the self-provisioners cluster role binding.
The oc edit command launches the vi editor to apply your modifications. Change
the subject of the role binding from the system:authenticated:oauth group to
the provisioners group.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
...output omitted...
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: self-provisioner
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: provisioners
Note
The rbac.authorization.kubernetes.io/autoupdate annotation protects
this cluster role binding. If the API server restarts, then Kubernetes restores this
cluster role binding.
You are not required to make the change permanent in this exercise. In a real-world
context, you would make the change permanent by using the following command:
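One approach, shown here as a sketch, is to set the protecting annotation to false so that the API server does not restore the binding:

oc annotate clusterrolebinding/self-provisioners \
  --overwrite rbac.authorization.kubernetes.io/autoupdate=false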
3. Verify that users outside the provisioners group cannot create projects.
3.1. Log in to the cluster as the developer user with the developer password.
You don't have any projects. Contact your system administrator to request a
project.
After the role binding is changed, the oc login command reports that you must
contact your system administrator to request a project, because the developer user
cannot create projects.
4.1. Log in to the cluster as the provisioner1 user with the redhat password.
You don't have any projects. You can try to create a new project, by running
    oc new-project <projectname>
...output omitted...
4.3. Verify that you can create resources in the test project.
5. Verify that another member of the provisioners group cannot access the test project.
5.1. Log in to the cluster as the provisioner2 user with the redhat password.
You don't have any projects. You can try to create a new project, by running
oc new-project <projectname>
The oc login command reports that the provisioner2 user does not have any
projects.
5.2. Try to change to the test project with the oc project command.
6. Log in to the cluster as the admin user with the redhatocp password, to clean up.
7. Create a namespace to design a project template. Add a limit range that prevents users
from creating workloads that request more than 1 GiB of RAM.
apiVersion: v1
kind: LimitRange
metadata:
name: max-memory
namespace: template-test
spec:
limits:
- max:
memory: 1Gi
type: Container
7.3. Use the oc create command to create the limit range that the ~/DO280/labs/
selfservice-projtemplate/limitrange.yaml file defines.
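A command such as the following creates it:

oc create -f ~/DO280/labs/selfservice-projtemplate/limitrange.yaml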
apiVersion: apps/v1
kind: Deployment
metadata:
...output omitted...
name: test
spec:
...output omitted...
template:
...output omitted...
spec:
containers:
      - image: registry.ocp4.example.com:8443/redhattraining/hello-world-nginx:v1.0
name: hello-world-nginx
resources:
limits:
memory: 2Gi
The limit range maximum prevents the deployment from creating pods.
8.2. Use the oc command to list the limit range in YAML format. Redirect the output to
append to the template.yaml file.
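A command of the following form accomplishes this; the template-test namespace from the earlier steps is assumed:

oc get limitrange -n template-test -o yaml >> template.yaml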
• Apply the following changes to the subjects key in the admin role binding: change the
kind from User to Group, and change the name from ${PROJECT_ADMIN_USER} to
provisioners.
• Move the limit range to immediately after the role binding definition.
• Remove the following keys from the limit range and quota definitions:
– creationTimestamp
– resourceVersion
– uid
If you use the vi editor, then you can use the following procedure to move a block of
text:
• Press V to enter visual line mode. This mode selects entire lines for manipulation.
• Move to the end of the block. The editor highlights the selected lines.
• Press d to delete the lines and to store them in a register.
• Move to the target location, and then press p to paste the lines from the register.
You can also press dd to delete entire lines, and press . to repeat the operation.
The resulting file should match the following content:
apiVersion: template.openshift.io/v1
kind: Template
metadata:
creationTimestamp: null
name: project-request
objects:
- apiVersion: project.openshift.io/v1
kind: Project
metadata:
annotations:
openshift.io/description: ${PROJECT_DESCRIPTION}
openshift.io/display-name: ${PROJECT_DISPLAYNAME}
openshift.io/requester: ${PROJECT_REQUESTING_USER}
creationTimestamp: null
name: ${PROJECT_NAME}
spec: {}
status: {}
- apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
creationTimestamp: null
name: admin
namespace: ${PROJECT_NAME}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: provisioners
- apiVersion: v1
kind: LimitRange
metadata:
name: max-memory
namespace: ${PROJECT_NAME}
spec:
limits:
- default:
memory: 1Gi
defaultRequest:
memory: 1Gi
max:
memory: 1Gi
type: Container
parameters:
- name: PROJECT_NAME
- name: PROJECT_DISPLAYNAME
- name: PROJECT_DESCRIPTION
- name: PROJECT_ADMIN_USER
- name: PROJECT_REQUESTING_USER
Note
The limit range has default and defaultRequest limits, although the definition
does not contain these keys. When creating a limit range, always set the default
and defaultRequest limits for more predictable behavior.
9.2. Use the oc edit command to change the global cluster project configuration.
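The cluster-wide project configuration is a resource named cluster; a command such as the following opens it for editing:

oc edit projects.config.openshift.io cluster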
apiVersion: config.openshift.io/v1
kind: Project
metadata:
...output omitted...
name: cluster
...output omitted...
spec:
projectRequestTemplate:
name: project-request
9.3. Use the watch command to view the API server pods.
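For example, a command of this form watches the rollout; the openshift-apiserver namespace is assumed to be the one where the new pods roll out:

watch oc get pods -n openshift-apiserver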
Wait until new pods are rolled out. The rollout can take a few minutes to start. Press
Ctrl+C to exit the watch command.
10.1. Log in to the cluster as the provisioner1 user with the redhat password.
11. Verify that the provisioner2 user can access the test project and create resources.
Verify that the limit range has the intended effect.
11.1. Log in to the cluster as the provisioner2 user with the redhat password.
The oc login command reports that the provisioner2 user has the test
project. The command selects the project.
The provisioner2 user can create resources in a project that the provisioner1
user created.
11.3. Create a deployment that exceeds the limit range by using the ~/DO280/labs/
selfservice-projtemplate/deployment.yaml file.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Lab
Outcomes
• Configure project creation to use a custom project template.
Instructions
1. Log in to your OpenShift cluster as the admin user with the redhatocp password.
2. Design a project template with the following properties:
• The user who requests the project has the default admin role binding.
• The workloads in the project cannot request a total of more than 2 GiB of RAM, and they
cannot use more than 4 GiB of RAM.
You can use the oc create quota command to create the resource quota without
creating a YAML definition. A template for the limit range is available at ~/DO280/labs/
selfservice-review/limitrange.yaml.
You can create a template-test namespace to design your project template.
Note
The next steps assume that you design the template in a template-test
namespace. The lab scripts clean and grade the design namespace only if you
create it with the template-test name.
3. Verify that the quota and limit range have the intended effect.
For example, create a deployment that uses the registry.ocp4.example.com:8443/
redhattraining/hello-world-nginx:v1.0 image without resource specifications.
Verify that the limit range adds requests and limits to the pods. Scale the deployment to 10
replicas. Examine the deployment and the quota to verify that they have the intended effect.
If you design your template without creating a test namespace, then you must verify your
design by other means.
4. Create a project template definition with the same properties.
Note
The solution for this step assumes that you designed your template in a template-
test namespace. If you do not create a template-test namespace to design the
template, then you must create the project template by other means.
Note
The lab scripts clean up only a template-validate namespace.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Solution
Outcomes
• Configure project creation to use a custom project template.
Instructions
1. Log in to your OpenShift cluster as the admin user with the redhatocp password.
...output omitted...
• The user who requests the project has the default admin role binding.
• The workloads in the project cannot request a total of more than 2 GiB of RAM, and they
cannot use more than 4 GiB of RAM.
You can use the oc create quota command to create the resource quota without
creating a YAML definition. A template for the limit range is available at ~/DO280/labs/
selfservice-review/limitrange.yaml.
You can create a template-test namespace to design your project template.
Note
The next steps assume that you design the template in a template-test
namespace. The lab scripts clean and grade the design namespace only if you
create it with the template-test name.
2.2. Use the oc create quota command to create the memory quota in the template-
test namespace.
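A command such as the following creates the quota; the memory name and the values match the quota in the final template:

oc create quota memory \
  --hard=requests.memory=2Gi,limits.memory=4Gi \
  -n template-test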
apiVersion: v1
kind: LimitRange
metadata:
name: memory
namespace: template-test
spec:
limits:
- min:
memory: 128Mi
defaultRequest:
memory: 256Mi
default:
memory: 512Mi
max:
memory: 1Gi
type: Container
3. Verify that the quota and limit range have the intended effect.
3.1. Use the oc create deployment command to create a deployment without resource
specifications.
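For example, the following command creates such a deployment; the test name is illustrative, and the image is the one that the lab instructions specify:

oc create deployment test \
  --image=registry.ocp4.example.com:8443/redhattraining/hello-world-nginx:v1.0 \
  -n template-test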
3.2. Use the oc command to view the resources key of the container specification.
Optionally, use the jq command to indent the output.
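A command of the following form shows the resources that the limit range injected into a running pod:

oc get pods -n template-test -o json | jq '.items[0].spec.containers[0].resources'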
Although you create the deployment without specifying resources, the limit range
applies RAM requests and limits.
3.3. Use the oc scale command to scale the deployment to verify that the quota has an
effect.
The deployment uses the quota completely, and scales only to eight pods. Each pod
requests 256 MiB of RAM, and eight pods request 2 GiB of RAM. Each pod has a
512 MiB RAM limit, and eight pods have a 4 GiB RAM limit.
Note
The solution for this step assumes that you designed your template in a template-
test namespace. If you do not create a template-test namespace to design the
template, then you must create the project template by other means.
4.2. Use the oc command to list the limit ranges and quotas in YAML format. Redirect the
output to append to the template.yaml file.
• Move the limit range and quota definitions immediately after the role binding
definition.
• Remove the following keys from the limit range and quota definitions:
– creationTimestamp
– resourceVersion
– uid
– status
If you use the vi editor, then you can use the following procedure to move a block of
text:
• Press V to enter visual line mode. This mode selects entire lines for manipulation.
• Move to the end of the block. The editor highlights the selected lines.
• Press d to delete the lines and to store them in a register.
• Move to the target location, and then press p to paste the lines from the register.
You can also press dd to delete entire lines, and press . to repeat the operation.
The resulting file should match the following content:
apiVersion: template.openshift.io/v1
kind: Template
metadata:
creationTimestamp: null
name: project-request
objects:
- apiVersion: project.openshift.io/v1
kind: Project
metadata:
annotations:
openshift.io/description: ${PROJECT_DESCRIPTION}
openshift.io/display-name: ${PROJECT_DISPLAYNAME}
openshift.io/requester: ${PROJECT_REQUESTING_USER}
creationTimestamp: null
name: ${PROJECT_NAME}
spec: {}
status: {}
- apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
creationTimestamp: null
name: admin
namespace: ${PROJECT_NAME}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: ${PROJECT_ADMIN_USER}
- apiVersion: v1
kind: LimitRange
metadata:
name: memory
namespace: ${PROJECT_NAME}
spec:
limits:
- default:
memory: 512Mi
defaultRequest:
memory: 256Mi
max:
memory: 1Gi
min:
memory: 128Mi
type: Container
- apiVersion: v1
kind: ResourceQuota
metadata:
name: memory
namespace: ${PROJECT_NAME}
spec:
hard:
limits.memory: 4Gi
requests.memory: 2Gi
parameters:
- name: PROJECT_NAME
- name: PROJECT_DISPLAYNAME
- name: PROJECT_DESCRIPTION
- name: PROJECT_ADMIN_USER
- name: PROJECT_REQUESTING_USER
5.2. Use the oc edit command to change the cluster project configuration.
apiVersion: config.openshift.io/v1
kind: Project
metadata:
...output omitted...
name: cluster
spec:
projectRequestTemplate:
name: project-request
5.3. Use the watch command to view the API server pods.
Wait until new pods are rolled out. Press Ctrl+C to exit the watch command.
Note
The lab scripts clean up only a template-validate namespace.
6.4. Optionally, execute the commands that you used earlier once more: create a deployment,
scale the deployment, and verify the limits.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Summary
• Cluster administrators can create quotas to limit resource usage by namespace.
• Cluster resource quotas implement resource limits across groups of namespaces that
namespace selectors define.
• Limit ranges provide resource defaults, minimums, and maximums for workloads in a
namespace.
• Cluster administrators can configure project templates to add resources to all new projects.
These resources can implement permissions, quotas, network policies, and others.
• The self-provisioner role grants permissions to create projects. By default, this role is
bound to all authenticated users.
Chapter 7
Manage Kubernetes Operators
Objectives
• Describe the operator pattern and different approaches for installing and updating Kubernetes
operators.
Standard Kubernetes resources, such as deployments and services, are sufficient to deploy many
workloads. However, more complex workloads
might require significant work to deploy with only these resources. For example, a workload can
involve different component workloads, such as a database server, a back-end service, and a
front-end service.
A workload might have maintenance tasks that can be automated, such as backing up data or
updating the workload.
The operator pattern is a way to implement reusable software to manage such complex workloads.
An operator typically defines custom resources (CRs). The operator CRs contain the needed
information to deploy and manage the workload. For example, an operator that deploys
database servers defines a database resource where you can specify the database name, sizing
requirements, and other parameters.
The operator watches the cluster for instances of the CRs, and then creates the Kubernetes
resources to deploy the custom workload. For example, when you create a database resource, the
database operator creates a stateful set and a persistent volume that provide the database that
is described in the database resource. If the database resource describes a backup schedule and
target, then the operator creates a cron job that backs up the database to the target according to
the schedule.
By using operators, cluster administrators create CRs that describe a complex workload, and the
operator creates and manages the workload.
Deploying Operators
Many pieces of software implement the operator pattern in different ways.
Cluster operators
Cluster operators provide the platform services of OpenShift, such as the web console and
the OAuth server.
Add-on operators
OpenShift includes the Operator Lifecycle Manager (OLM). The OLM helps users to install
and update operators in a cluster. Operators that the OLM manages are also known as add-on
operators, in contrast with cluster operators that implement platform services.
Other operators
Software providers can create software that follows the operator pattern, and then distribute
the software as manifests, Helm charts, or any other software distribution mechanism.
Cluster Operators
The Cluster Version Operator (CVO) installs and updates cluster operators as part of the
OpenShift installation and update processes.
The CVO provides cluster operator status information as resources of the ClusterOperator
type. Inspect the cluster operator resources to examine cluster health.
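For example, the following command lists the cluster operators with their availability, progressing, and degraded status:

oc get clusteroperators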
The status of cluster operator resources includes conditions to help with identifying cluster issues.
The oc command shows the message that is associated with the latest condition. This message
can provide further information about cluster issues.
To view cluster operator resources in the web console, navigate to Administration > Cluster
Settings, and then click the ClusterOperators tab.
You can use the web console to interact with the OLM. The OLM also follows the operator
pattern, and so the OLM provides CRs to manage operators with the Kubernetes API.
The OLM uses operator catalogs to find available operators to install. Operator catalogs are
container images that provide information about available operators, such as descriptions and
available versions.
OpenShift clusters include the following default catalog sources:
Red Hat
Red Hat packages, ships, and supports operators in this catalog.
Certified
Independent software vendors support operators in this catalog.
Community
Operators without official support.
Marketplace
Commercial operators that you can buy from Red Hat Marketplace.
You can also create your own catalogs, or mirror catalogs for offline clusters.
Note
The lab environment includes a single catalog with the operators you use in the
course. The lab environment hosts the contents of this catalog, so that the course
can be completed without internet access.
The OLM creates a resource of the PackageManifest type for each available operator. The web
console also displays available operators and provides a wizard to install operators. You can also
install operators by using the Subscription CR and other CRs.
Note
Operators that are installed with the OLM have a different lifecycle from cluster
operators. The CVO installs and updates cluster operators in lockstep with the
cluster. Administrators use the OLM to install, update, and remove operators
independently from cluster updates.
Implementing Operators
An operator is composed of a set of custom resource definitions and a Kubernetes workload. The
operator workload uses the Kubernetes API to watch instances of the CRs and to create matching
workloads.
Note
A cluster contains two workload sets for each operator: the operator workload itself,
and the workloads that the operator creates and manages.
You can implement operators to automate any manual Kubernetes task that fits the operator
pattern. You can use most software development platforms to create operators. The following
SDKs provide components and frameworks to help with developing operators:
References
For more information, refer to the Operators guide in the Red Hat OpenShift
Container Platform 4.14 documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/operators/index
Operator SDK
https://sdk.operatorframework.io/
Quiz
Term Definition
Operator pattern
Cluster operator
Add-on operator
Operator Lifecycle Manager
Solution
Term Definition
Objectives
• Install and update operators by using the web console and the Operator Lifecycle Manager.
Important
Before installing an operator, review the operator information and consult the
operator documentation. You might need to configure the operator further for
successful deployment.
Update channel
You can choose the most suitable operator update channel for your requirements. For more
information, refer to Operator Update Channels.
Installation mode
The default All namespaces on the cluster (default) installation mode should be suitable for
most operators. This mode configures the operator to monitor all namespaces for custom
resources.
For example, an operator that deploys database servers defines a custom resource that
describes a database server. When using the All namespaces on the cluster (default)
installation mode, users can create those custom resources in their namespaces. Then, the
operator deploys database servers in the same namespaces, along with other user workloads.
Cluster administrators can combine this mode with self-service features and other
namespace-based features, such as role-based access control and network policies, to control
user usage of operators.
Installed namespace
The OLM installs the operator workload to the selected namespace in this option. Some
operators install by default to the openshift-operators namespace. Other operators
suggest creating a namespace.
Although users might require access to the workloads that the operator manages, typically
only cluster administrators require access to the operator workload.
Update approval
The OLM updates operators automatically when new versions are available. Choose manual
updates to prevent automatic updates.
For an operator that includes monitoring in its definition, the wizard displays a further option to
enable monitoring. Enabling monitoring for non-Red Hat operators is not supported.
The installation mode and installed namespace options are related. Review the documentation of
the operator to learn the supported options.
After you configure the installation, click Install. The web console creates subscription and
operator group resources according to the selected options in the wizard. After the installation
starts, the web console displays progress information.
The Installed Operators page lists the installed cluster service version (CSV) resources that
correspond to installed operators.
Every version of an operator has a CSV. The OLM uses information from the CSV to install the
operator. The OLM updates the status key of the CSV with installation information.
CSVs are namespaced, so the Installed Operator page has a similar namespace filter to other web
console pages. Operators that were installed with the "all namespaces" mode have a CSV in all
namespaces.
Note
The operator installation mode determines which namespaces the operator
monitors for custom resources. This mode is a distinct option from the installed
namespace option, which determines the operator workload namespace.
The Installed Operators page shows information such as the operator status and available
updates. Click an operator to navigate to the Operator details page.
The Operator details page contains the following tabs, where you can view further details and
perform other actions.
Details
Displays information about the CSV.
YAML
Displays the CSV in YAML format.
Subscription
In this tab, you can change installation options, such as the update channel and update
approval. This tab also links to the install plans of the operator. When you configure an
operator for manual updates, you approve install plans for updates in this tab.
Events
Lists events that are related to the operator.
The Operator details page also has tabs for custom resources. For each custom resource that the
operator defines, a web console tab lists all resources of that type. Additionally, the All instances
tab aggregates all resources of types that the operator defines.
Using Operators
Custom resources are the most common way to interact with operators. You can create custom
resources by using the custom resource tabs on the Installed Operators page. Select the tab to
correspond to the custom resource type to create, and then click the create button.
Custom resources use the same creation page as other Kubernetes resources. You can choose
either the YAML view or the form view to configure the new resource.
In the YAML view, you use the YAML editor to compose the custom resource. The editor provides
a starting template that you can customize. The YAML view also displays documentation about the
custom resource schema. The oc explain command provides the same documentation.
The form view presents a set of fields in a resource. Instead of composing a full YAML definition,
you can edit the fields individually. When complete, OpenShift creates a resource from the values
in the form.
Fields might provide help text and further configuration help. For example, fields with a limited set
of values might provide a drop-down list with the possible values. The form view might provide
more guidance, but might not contain fields to customize all possible options of a custom resource.
Troubleshooting Operators
The OLM might fail to install or update operators, or operators might not work correctly.
To identify operator installation issues, examine the status and conditions of the CSV, subscription,
and install plan resources.
Note
Installation issues can be operator-specific, so consult the documentation of
malfunctioning operators to determine support options.
To troubleshoot further issues that cause operators to work incorrectly, first identify the operator
workload. The Operator Deployments field in the Operator details page shows operator
workloads. Operators might create further workloads, including workloads that follow the
definitions that you provide in custom resources.
Identify and troubleshoot the operator workload as with any other Kubernetes workload. The
following resources are common starting points when troubleshooting:
References
For more information, refer to the Installing from OperatorHub Using the Web
Console section in the Administrator Tasks chapter in the Red Hat OpenShift
Container Platform 4.14 Operators documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/operators/index#olm-installing-from-operatorhub-using-web-
console_olm-adding-operators-to-a-cluster
For more information about monitoring configuration, refer to the Maintenance and
Support for Monitoring section in the Configuring the Monitoring Stack chapter in
the Red Hat OpenShift Container Platform 4.14 Monitoring documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/monitoring/index
Guided Exercise
Outcomes
• Install and uninstall an operator with the web console.
• Examine the resources that the web console creates for the installation, and the operator
workloads.
Instructions
1. As the admin user, locate and navigate to the OpenShift web console.
...output omitted...
1.4. Click Red Hat Identity Management and log in as the admin user with the
redhatocp password.
2.1. Click Operators > OperatorHub. In the Filter by keyword field, type integrity to
locate the File Integrity operator, and then click File Integrity Operator.
2.2. The web console displays information about the File Integrity operator. Click Install to
proceed to the Install Operator page.
2.3. The Install Operator page contains installation options. You can use the default
options.
The lab environment cluster is a disconnected cluster to ensure that exercises are
reproducible. The Operator Lifecycle Manager is configured to use a mirror registry
with only the required operators for the course. In this registry, the File Integrity
operator has a single available update channel.
By default, the File Integrity operator installs to all namespaces and creates the
openshift-file-integrity namespace. The operator workload resides in this
namespace.
For more information about the File Integrity operator, refer to the File Integrity
Operator chapter in the Red Hat OpenShift Container Platform 4.14 Security
and Compliance documentation at https://docs.redhat.com/en/documentation/
openshift_container_platform/4.14/html-single/security_and_compliance/index#file-
integrity-operator-release-notes
Note
The web console might display the View Operator button briefly before the OLM
finishes the installation. The web console can also display errors briefly.
Wait until the web console displays View Operator for more than a few seconds.
Scroll down to the Conditions section to review the evolution of the installation process.
The last condition is for the Succeeded phase, because the installation completed
correctly.
The YAML tab displays the cluster service version resource in YAML format.
Click the Subscription tab to view information about the operator subscription resource. In
this tab, you can change the update channel and the update approval configuration. The
tab also links to the install plan. The install plan further describes the operator installation
process. When the OLM finds an update for an operator that is configured for manual
updates, then the OLM creates an install plan for the update. You approve the update in the
install plan details page.
4.1. Click the File Integrity tab, and click Create FileIntegrity.
4.2. Use the YAML view and modify the gracePeriod value to 60. Then, click Create to create a
file integrity resource.
4.3. Click the FileIntegrityNodeStatus tab. After a few minutes, the list shows a new
example-fileintegrity-master01 resource.
Note
The first file integrity resource that you create might not work correctly.
5. Examine and differentiate the File Integrity operator workloads from the operator-
managed workloads.
5.2. Click Workloads > DaemonSets to list daemon sets in the openshift-file-
integrity namespace.
If you create a file integrity resource, then the operator creates an aide-example-
fileintegrity daemon set to verify file integrity.
6.3. Select Uninstall Operator from the Actions list, and then click Uninstall.
7.4. Select Delete Project from the Actions list. Then, type openshift-file-
integrity and click Delete.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• Install and update operators by using the Operator Lifecycle Manager APIs.
Installing Operators
To install an operator, you must perform the following steps:
• Review the operator and its documentation for installation options and requirements.
– Decide the installation mode. For most operators, you should make them available to all
namespaces.
– Decide whether the Operator Lifecycle Manager (OLM) applies updates automatically, or
requires an administrator to approve updates.
Operator Resources
The OLM uses the following resource types:
Catalog source
Each catalog source resource references an operator repository. Periodically, the OLM
examines the catalog sources in the cluster and retrieves information about the operators in
each source.
Package manifest
The OLM creates a package manifest for each available operator. The package manifest
contains the required information to install an operator, such as the available channels.
Operator group
Operator groups define how the OLM presents operators across namespaces.
Subscription
Cluster administrators create subscriptions to install operators.
Operator
The OLM creates operator resources to store information about installed operators.
Install plan
The OLM creates install plan resources as part of the installation and update process. When
requiring approvals, administrators must approve install plans.
When installing an operator, an administrator must create only the subscription and the operator
group. The OLM generates all other resources automatically.
The OLM creates a package manifest for each available operator that a catalog source references.
List the package manifests to know which operators are available for installation.
To gather the required information to install an operator, view the details of a specific package
manifest. Use the oc describe command on a package manifest to view details about an
operator.
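The following commands are a minimal sketch of this discovery step; the file-integrity-operator package name is only an example, and the output is omitted:
[user@host ~]$ oc get packagemanifests -n openshift-marketplace
...output omitted...
[user@host ~]$ oc describe packagemanifest file-integrity-operator \
  -n openshift-marketplace
...output omitted...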
The output shows the catalog source and namespace for the operator, which are required to identify the operator when creating the subscription.
Examine the available channels and CSVs to decide which upgrade path to use.
The description and links provide useful information and documentation for installation and
uninstallation procedures.
The install modes provide information about supported namespace operation modes.
Installing Operators
After you examine the package manifest, review the operator documentation. Operators might
require specific installation procedures.
If you decide to deploy the operator workload to a new namespace, then create the namespace.
Many operators recommend using the existing openshift-operators namespace, or require a specific namespace.
Determine whether you need to create an operator group. Operators use the operator group in
their namespace. Operators monitor custom resources in the namespaces that the operator group
targets.
If the global-operators operator group is not suitable, then create another operator group.
The following YAML definition describes the structure of an operator group:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: name
namespace: namespace
spec:
targetNamespaces:
- namespace
Operators follow the operator group in the namespace where they are deployed.
The targetNamespaces field lists the namespaces that the operator monitors for custom resources. You can also use the spec.selector field to select namespaces by using labels.
After creating the necessary namespaces or operator groups, you create a subscription. The
following YAML file is an example of a subscription:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: lvms-operator
namespace: openshift-storage
spec:
channel: stable-4.14
name: lvms-operator
source: do280-catalog-cs
installPlanApproval: Automatic
sourceNamespace: openshift-marketplace
The update channel, as discovered with the oc describe packagemanifest command.
The source catalog, as discovered with the oc describe packagemanifest command.
Install Plans
The OLM creates an install plan resource to represent the required process to install or update
an operator. The OLM updates the operator resource to reference the install plan in the
status.components.refs field. You can view the reference by using the oc describe
command on the operator resource.
If the install plan mode is set to Manual in the subscription, then you must manually approve the
install plan. To approve an install plan, change the spec.approved field to true. For example,
you can use the oc patch command to approve an install plan:
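A command of the following form works; install-abcde is a placeholder for the install plan name in your cluster:
[user@host ~]$ oc patch installplan install-abcde --type merge \
  --patch '{"spec":{"approved":true}}'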
With an Automatic install plan mode, the OLM applies updates as soon as they are available.
Using Operators
Typically, operators create custom resource definitions. You create instances of those custom
resources to use the operator. Review the operator documentation to learn how to use an
operator.
Additionally, you can learn about the available custom resource definitions by examining
the operator. The CSV contains a list of the custom resource definitions in the
spec.customresourcedefinitions field. For example, use the following command to list the
custom resource definitions:
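One way to extract that list is with a JSONPath query; the CSV name is a placeholder that you replace with the CSV in your cluster:
[user@host ~]$ oc get csv <csv-name> \
  -o jsonpath='{.spec.customresourcedefinitions.owned[*].name}'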
You can also use the oc explain command to view the description of individual custom resource
definitions.
Troubleshooting Operators
Some operators require additional steps to install or update. Review the documentation to validate
whether you performed all necessary steps, and to learn about support options.
You can examine the status of the operator, install plan, and CSV resources. When installing or
updating operators, the OLM updates those resources with progress information.
Even if the OLM installs an operator correctly, the operator might not function correctly.
References
For more information about operators, refer to the Operators Overview chapter in
the Red Hat OpenShift Container Platform 4.14 Operators documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/operators/index#operators-overview
For more information about installing operators, refer to the Installing from
OperatorHub Using the CLI section in the Administrator Tasks chapter in the Red Hat
OpenShift Container Platform 4.14 Operators documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/operators/index#olm-installing-operator-from-operatorhub-using-
cli_olm-adding-operators-to-a-cluster
For more information about operator groups, refer to the Operator Groups section
in the Understanding Operators chapter in the Red Hat OpenShift Container
Platform 4.14 Operators documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/operators/index#olm-operatorgroups-about_olm-understanding-olm
Guided Exercise
Outcomes
• Install operators from the CLI with manual updates.
This command ensures that the cluster is ready, and removes the openshift-file-
integrity namespace and File Integrity operator if they exist.
Instructions
In this exercise, you install the File Integrity operator with manual updates. The documentation of
the File Integrity operator contains specific installation instructions.
For more information, refer to the Installing the File Integrity Operator Using the CLI section
in the File Integrity Operator chapter in the Red Hat OpenShift Container Platform 4.14
Security and Compliance documentation at https://docs.redhat.com/en/documentation/
openshift_container_platform/4.14/html-single/security_and_compliance/index#installing-file-
integrity-operator-using-cli_file-integrity-operator-installation
1. Log in to the OpenShift cluster as the admin user with the redhatocp password.
...output omitted...
2. Find the details of the File Integrity operator within the OpenShift package manifests.
2.1. View the available operators within the OpenShift Marketplace by using the oc get
command.
2.2. Examine the File Integrity operator package manifest by using the oc describe
command.
3. Install the File Integrity operator. By following the operator installation instructions, you
must install the operator in the openshift-file-integrity namespace. Also, you must
make the operator available only in that namespace. The File Integrity operator requires you
to create a namespace with specific labels.
apiVersion: v1
kind: Namespace
metadata:
labels:
openshift.io/cluster-monitoring: "true"
pod-security.kubernetes.io/enforce: privileged
name: openshift-file-integrity
3.2. Create an operator group in the operator namespace. The operator group targets the
same namespace. You can use the template in the ~/DO280/labs/operators-
cli/operator-group.yaml path. Edit the file and configure the namespaces.
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: file-integrity-operator
namespace: openshift-file-integrity
spec:
targetNamespaces:
- openshift-file-integrity
3.3. Create the subscription in the operator namespace. You can use the template in the
~/DO280/labs/operators-cli/subscription.yaml path. Edit the file with
the data that you obtained in a previous step. Set the approval policy to Manual.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: file-integrity-operator
namespace: openshift-file-integrity
spec:
channel: "stable"
installPlanApproval: Manual
name: file-integrity-operator
source: do280-catalog-cs
sourceNamespace: openshift-marketplace
Verify that the operator has a condition of the InstallPlanPending type. The
operator can have other conditions, and they do not indicate a problem. The operator
references the install plan. You use the install plan name in a later step. If the install
plan is not generated, then wait a few moments and run the oc describe command
again.
4.2. View the install plan specification with the oc get command. Replace the name with
the install plan name that you obtained in a previous step.
The install plan is set to manual approval, and the approved field is set to false.
4.3. Approve the install plan with the oc patch command. Replace the name with the
install plan name that you obtained in a previous step.
4.4. Verify that the operator installs successfully, by using the oc describe command.
Check the latest transaction for the current status. The installation might not
complete immediately. If the installation is not complete, then wait a few minutes and
view the status again.
...output omitted...
5. Test the operator to ensure that it is functional. The operator watches FileIntegrity
resources, runs file integrity checks on nodes, and creates FileIntegrityNodeStatus resources with the results of the checks.
5.2. Verify that the operator functions, by viewing the worker-fileintegrity object
with the oc describe command.
5.3. Use the oc edit command to set the gracePeriod to 60 in the FileIntegrity custom resource to trigger a failure.
Node Selector:
node-role.kubernetes.io/worker:
Tolerations:
Effect: NoSchedule
Key: node-role.kubernetes.io/master
Operator: Exists
Effect: NoSchedule
Key: node-role.kubernetes.io/infra
Operator: Exists
Events: <none>
Note
The first file integrity resource that you create might not work correctly.
Note
It might take several minutes for the aide-worker-fileintegrity-master01-failed config map to appear. Use the --watch flag and wait a few minutes until the failed config map appears before moving on to the next step. Press Ctrl+C to exit.
Data
====
integritylog:
----
Start timestamp: 2024-01-26 18:31:16 +0000 (AIDE 0.16)
AIDE found differences between database and filesystem!!
Summary:
Total number of entries: 32359
Added entries: 1
Removed entries: 0
Changed entries: 0
---------------------------------------------------
Added entries:
---------------------------------------------------
f++++++++++++++++: /hostroot/etc/cni/multus/certs/multus-
client-2024-01-26-15-14-01.pem
f++++++++++++++++: /hostroot/etc/foobar
---------------------------------------------------
The attributes of the (uncompressed) database(s):
---------------------------------------------------
/hostroot/etc/kubernetes/aide.db.gz
MD5 : UswXQiVa/VpjlXF1rCP0vA==
SHA1 : s6t06MCRrDgc4xOWnX6vk5rflGU=
RMD160 : jvDdvAOC7/tI0TjDe7Kzmy5nUk8=
TIGER : TjW192YTQBmG4oGza7siI6CBRnztgrp6
SHA256 : E8rWurdI9HgGP6402qWY+lDAaLoGiyNs
PEka/siI1F0=
SHA512 : JPDhgoEnNiTaDLqawkGtHplRW8f6zm3g
jDB3E6X6XM4+13yhjwh/pokFAp5BhRSc
0C4XXibXsS4OYxYiE5hBaw==
BinaryData
====
Events: <none>
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Lab
Outcomes
• Install the Compliance operator on the command line.
This command ensures that the cluster API is reachable and that the operator that is used in
this exercise is not present.
Instructions
In this exercise, you install the Compliance operator. For more information, refer to
the Compliance Operator chapter in the Red Hat OpenShift Container Platform 4.14
Security and Compliance documentation at https://docs.redhat.com/en/documentation/
openshift_container_platform/4.14/html-single/security_and_compliance/index#co-overview.
1. Log in to your OpenShift cluster as the admin user with the redhatocp password.
2. Examine the package manifest for the Compliance operator to discover the operator name,
catalog name, suggested namespace, and channel.
3. Create the recommended openshift-compliance namespace.
4. Create an operator group with the compliance-operator name in the openshift-
compliance namespace. The target namespace of the operator group is the openshift-
compliance namespace. You can use the ~/DO280/labs/operators-review/
operator-group.yaml file as a template.
5. Create a compliance-operator subscription in the openshift-compliance
namespace. The subscription has the following parameters:
Field Value
channel stable
spec.name compliance-operator
source do280-catalog-cs
sourceNamespace openshift-marketplace
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Solution
Outcomes
• Install the Compliance operator on the command line.
This command ensures that the cluster API is reachable and that the operator that is used in
this exercise is not present.
Instructions
In this exercise, you install the Compliance operator. For more information, refer to
the Compliance Operator chapter in the Red Hat OpenShift Container Platform 4.14
Security and Compliance documentation at https://docs.redhat.com/en/documentation/
openshift_container_platform/4.14/html-single/security_and_compliance/index#co-overview.
1. Log in to your OpenShift cluster as the admin user with the redhatocp password.
...output omitted...
2. Examine the package manifest for the Compliance operator to discover the operator name,
catalog name, suggested namespace, and channel.
Field Value
catalog do280-catalog-cs
catalog-namespace openshift-marketplace
suggested-namespace openshift-compliance
defaultChannel stable
packageName compliance-operator
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: compliance-operator
namespace: openshift-compliance
spec:
targetNamespaces:
- openshift-compliance
Field Value
channel stable
spec.name compliance-operator
source do280-catalog-cs
sourceNamespace openshift-marketplace
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: compliance-operator
namespace: openshift-compliance
spec:
channel: stable
installPlanApproval: Automatic
name: compliance-operator
source: do280-catalog-cs
sourceNamespace: openshift-marketplace
The Operator Lifecycle Manager creates a cluster service version in the openshift-
compliance namespace. Wait until the cluster service version resource (CSV) is in the
Succeeded phase.
Although the CSV defines a single compliance-operator deployment, the operator has
two additional deployments. Wait until the compliance-operator, ocp4-openshift-
compliance-pp, and rhcos4-openshift-compliance-pp deployments are ready.
The available CSV version in the lab might change. In the commands in the following steps, replace the version with the one that is available in your lab.
6.3. Inspect the CSV to view the operator deployment. Replace the version that you
obtained in a previous step. The .spec.install.spec.deployments JSONPath
expression describes the location of the operator deployments in the CSV resource.
Optionally, use the jq command to indent the output.
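A command of the following form retrieves the deployments; the CSV version is a placeholder that you replace with the version in your lab:
[student@workstation ~]$ oc get csv compliance-operator.<version> \
  -n openshift-compliance \
  -o jsonpath='{.spec.install.spec.deployments}' | jq .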
6.4. Use the oc command to list the workloads in the operator namespace.
...output omitted...
...output omitted...
7.1. Examine the alm-examples annotation in the CSV. Replace the version that you
obtained in a previous step.
The annotation contains an example scan setting binding that you can use. The
example is in JSON format. When creating a scan setting binding in the web console,
the YAML editor loads the same example.
You can also use the oc explain command to describe the scan setting binding
resource.
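The following commands are one way to inspect the annotation and the resource schema; the CSV version is a placeholder that you replace with the version in your lab:
[student@workstation ~]$ oc get csv compliance-operator.<version> \
  -n openshift-compliance \
  -o jsonpath="{.metadata.annotations['alm-examples']}" | jq .
...output omitted...
[student@workstation ~]$ oc explain scansettingbinding
...output omitted...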
7.2. Create the scan setting binding resource by using the example file in the ~/DO280/
labs/operators-review/scan-setting-binding.yaml path.
7.3. Use the oc command to list compliance suite and pod resources. Execute the
command repeatedly until the compliance suite resource is in the DONE phase.
NAME ...
pod/compliance-operator-... ...
pod/ocp4-openshift-compliance-pp-... ...
pod/rhcos4-openshift-compliance-pp-... ...
To execute the scan, the compliance operator creates extra pods. The pods disappear
when the scan completes.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Summary
• Operators extend the capabilities of a Kubernetes cluster.
• Cluster operators provide the platform services of OpenShift, such as the web console.
• The Operator Lifecycle Manager manages add-on operators, which are sourced from catalogs
such as the OperatorHub.
• Most operators create and manage complex workloads based on declarative custom resources.
• Users can view, install, update, and troubleshoot add-on operators by using the web console.
• Users can use the package manifest, subscription, operator group, and install plan resources to
manage add-on operators from the command line or from the API.
Chapter 8
Application Security
Goal Run applications that require elevated or special
privileges from the host operating system or
Kubernetes.
Objectives
• Create service accounts and apply permissions, and manage security context constraints.
Cluster administrators can run the following command to list the SCCs that OpenShift defines:
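A minimal form of this command follows; its output lists the SCC names shown below:
[user@host ~]$ oc get scc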
• anyuid
• hostaccess
• hostmount-anyuid
• hostnetwork
• hostnetwork-v2
• lvms-topolvm-node
• lvms-vgmanager
• machine-api-termination-handler
• node-exporter
• nonroot
• nonroot-v2
• privileged
• restricted
• restricted-v2
Most pods that OpenShift creates use the restricted-v2 SCC, which provides limited access
to resources that are external to OpenShift. Use the oc describe command to view the security
context constraint that a pod uses.
Container images that are downloaded from public container registries, such as Docker Hub, might
fail to run when using the restricted-v2 SCC. For example, a container image that requires
running as a specific user ID can fail because the restricted-v2 SCC runs the container by
using a random user ID. A container image that listens on port 80 or on port 443 can fail for a
related reason. The random user ID that the restricted-v2 SCC uses cannot start a service
that listens on a privileged network port (port numbers that are less than 1024). Use the scc-
subject-review subcommand to list all the security context constraints that can overcome the
limitations that hinder the container:
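A sketch of this check follows, assuming a deployment named deployment-name; the deployment definition is piped to the review subcommand:
[user@host ~]$ oc get deployment deployment-name -o yaml | \
  oc adm policy scc-subject-review -f -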
The anyuid SCC defines the run as user strategy to be RunAsAny, which means that the pod can run as any available user ID in the container. With this strategy, containers that require a specific user can run as that user ID.
To run a container with a different SCC, you must create a service account and bind it to the pod. Use the oc create serviceaccount command to create the service account, and use the -n option if the service account must be created in a namespace other than the current one:
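For example, assuming a service account named service-account-name:
[user@host ~]$ oc create serviceaccount service-account-name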
To associate the service account with an SCC, use the oc adm policy command. Identify a
service account by using the -z option, and use the -n option if the service account exists in a
different namespace from the current one:
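For example, the following sketch grants the anyuid SCC to the same hypothetical service account:
[user@host ~]$ oc adm policy add-scc-to-user anyuid -z service-account-name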
Important
Only cluster administrators can assign an SCC to a service account or remove an
SCC from a service account. Allowing pods to run with a less restrictive SCC can
make your cluster less secure. Use with caution.
Change an existing deployment to use the service account by using the oc set
serviceaccount command:
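For example, assuming a deployment named deployment-name:
[user@host ~]$ oc set serviceaccount deployment/deployment-name \
  service-account-name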
If the command succeeds, then the pods that are associated with the deployment redeploy.
Privileged Containers
Some containers might need to access the runtime environment of the host. For example, the
S2I builder class of privileged containers requires access beyond the limits of its own container.
These containers can pose security risks, because they can use any resources on an OpenShift
node. Use SCCs to enable access for privileged containers by creating service accounts with
privileged access.
References
For more information, refer to the Managing Security Context Constraints chapter
in the Red Hat OpenShift Container Platform 4.14 Authentication and Authorization
documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/authentication_and_authorization/index#managing-pod-security-
policies
Guided Exercise
Outcomes
• Create service accounts and assign security context constraints (SCCs) to them.
This command ensures that the cluster API is reachable and creates some HTPasswd users
for the exercise.
Instructions
1. Log in to the OpenShift cluster and create the appsec-scc project.
1.1. Log in to the cluster as the developer user with the developer password.
2.2. Determine whether the application is successfully deployed. It should show an error, because this image needs root privileges to run.
Note
It might take some time for the pod to reach the Error state. You might also see the CrashLoopBackOff status when you validate the health of the pod.
2.3. Review the application logs to confirm that insufficient privileges caused the failure.
Chef::Exceptions::InsufficientPermissions
-----------------------------------------
directory[/etc/gitlab] (gitlab::default line 26) had an error:
Chef::Exceptions::InsufficientPermissions: Cannot create directory[/etc/gitlab]
at /etc/gitlab due to insufficient permissions
...output omitted...
The application tries to write to the /etc directory. To allow this, you can run the application as the root user by granting the anyuid SCC to a service account.
The output confirms that the anyuid SCC allows the gitlab deployment to create
and update pods.
4. Modify the gitlab application to use the newly created service account. Verify that the
new deployment succeeds.
4.3. Verify that the gitlab redeployment succeeds. You might need to run the oc get
pods command multiple times until you see a running application pod.
5.1. Expose the gitlab application. Because the gitlab service listens on ports 22, 80,
and 443, you must use the --port option.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• Run an application that requires access to the Kubernetes API of the application's cluster.
If the pod definition does not specify a service account, then the pod uses the default service account. OpenShift grants no rights to the default service account, which is the expected behavior for business workloads. Granting additional permissions to the default service account is not recommended, because those permissions then apply to all pods in the project that use it, which might not be intended.
Monitoring Applications
Applications in this category need read access to watch cluster resources or to verify cluster
health. For example, a service such as Red Hat Advanced Cluster Security (ACS) needs read
access to scan your cluster containers for vulnerabilities.
Controllers
Controllers are applications that constantly watch and try to reach the intended state of a
resource.
For example, GitOps tools, such as ArgoCD, have controllers that watch cluster resources that
are stored in a repository, and update the cluster to react to changes in that repository.
Operators
Operators automate creating, configuring, and managing instances of Kubernetes-native
applications. Therefore, operators need permissions for configuration and maintenance tasks.
For example, a database operator might create a deployment when it detects a CR that
defines a new database.
For example, you can create a cluster role for an application to read secrets.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: secret-reader
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "watch", "list"]
The apiGroups field lists the API groups, where an empty string represents the core API group.
The verbs field lists the actions that the role allows the application to perform on the resources.
You can also use the default cluster roles that OpenShift defines, which have wider permissions.
For example, you can use the edit cluster role to get read access on secrets, as in the previous
secret-reader cluster role.
The edit cluster role is less restrictive, and allows the application to create or update most
objects.
To bind a role or cluster role to a service account in a namespace, you can use the oc adm
policy command with the add-role-to-user subcommand.
This command assigns a cluster role to a service account that exists in the current project:
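A sketch of this command follows, reusing the secret-reader cluster role and the app-sa service account from the examples in this section:
[user@host ~]$ oc adm policy add-role-to-user secret-reader -z app-sa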
To create a cluster role binding, you can use the oc adm policy command with the add-
cluster-role-to-user subcommand.
The following command assigns a cluster role to a service account with a cluster scope:
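A sketch with the same role and service account, this time at cluster scope:
[user@host ~]$ oc adm policy add-cluster-role-to-user secret-reader -z app-sa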
Applications must use the service account token internally when accessing a Kubernetes API. In OpenShift versions earlier than 4.11, OpenShift generated a secret with a token when creating a
service account. Starting from OpenShift 4.11, tokens are no longer generated automatically. You
must use the TokenRequest API to generate the service account token. You must mount the token
as a pod volume for the application to access it.
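The following pod manifest is a minimal sketch of one way to mount such a token by using a projected volume; the pod name, mount path, and expiration are illustrative assumptions, and app-sa is the service account from the earlier example.
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  serviceAccountName: app-sa
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi:8.6
    command: ["sleep", "infinity"]
    volumeMounts:
    # The application reads the token from this path.
    - name: api-token
      mountPath: /var/run/secrets/tokens
      readOnly: true
  volumes:
  # The kubelet requests a bound token through the TokenRequest API.
  - name: api-token
    projected:
      sources:
      - serviceAccountToken:
          path: api-token
          expirationSeconds: 3600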
You can use the following syntax to refer to service accounts from other projects:
system:serviceaccount:project:service-account
For example, if you have an application pod in the project-1 project that requires access to
project-2 secrets, then you must take these actions:
• Create a role binding on the project-2 project that references the app-sa service account
and the secret-reader role or cluster role.
In this way, you restrict an application's access to a Kubernetes API to specified namespaces.
References
For more information, refer to the Using RBAC to Define and Apply Permissions
chapter in the Red Hat OpenShift Container Platform 4.14 Authentication and
Authorization documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/authentication_and_authorization/index#authorization-
overview_using-rbac
For more information, refer to the Understanding and Creating Service Accounts
chapter in the Red Hat OpenShift Container Platform 4.14 Authentication and
Authorization documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/authentication_and_authorization/index#service-accounts-
overview_understanding-service-accounts
For more information, refer to the Using Service Accounts in Applications chapter in
the Red Hat OpenShift Container Platform 4.14 Authentication and Authorization
documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/authentication_and_authorization/index#service-accounts-
overview_using-service-accounts
Guided Exercise
Outcomes
You should be able to grant Kubernetes API access to an application by using a service
account that has a role with the required privileges.
The lab command copies the following files to the lab directory:
• The manifests to install the config-app API, which has an endpoint to show its
internal configuration. The deployment manifest mounts the API configuration from a
configuration map.
In this exercise, you grant permissions on the appsec-api project to the Reloader
application, for read access to the configuration map API and edit access to the deployment
API.
Warning
Using a controller to update a Kubernetes resource by reacting to changes
is an alternative to using GitOps. However, do not use both a controller and
GitOps for such changes because it might cause conflicts.
Instructions
1. Change to the lab directory.
2.1. Open a terminal window and log in as the admin user with the redhatocp password.
3. Create the configmap-reloader service account to hold the permissions for the
Reloader application. Then, assign the configmap-reloader service account to the
configmap-reloader deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: configmap-reloader
name: configmap-reloader
namespace: configmap-reloader
spec:
selector:
matchLabels:
app: configmap-reloader
release: "reloader"
template:
metadata:
labels:
app: configmap-reloader
spec:
serviceAccountName: configmap-reloader-sa
containers:
...output omitted...
3.3. Use the oc command to create the configmap-reloader deployment from the
reloader-deployment.yaml file.
4.1. Log in to the cluster as the developer user with the developer password.
5.1. Assign the edit cluster role to the configmap-reloader-sa service account in
the appsec-api project. To assign the cluster role, create a local role binding by
using the oc policy add-role-to-user command with the following options:
• The system:serviceaccount:configmap-reloader:configmap-
reloader-sa username to reference the configmap-reloader-sa service
account in the configmap-reloader project.
• The --rolebinding-name option to use the reloader-edit name for the role
binding.
• The -n appsec-api, which is optional because you are already in the appsec-
api project.
Note
The edit cluster role with the local role binding allows the configmap-
reloader-sa service account to modify most objects in the appsec-api project.
In a production scenario, it is best to grant access only to the APIs that your
application requires.
6. Install the config-app API by using the manifest files in the config-app directory.
6.1. Use the oc apply command with the -f option to create all the manifests in the
config-app directory.
6.2. Read the config.yaml content from the config-app configuration map by
running the oc get command.
6.3. Run the curl command to verify that the exposed route, https://config-app-
appsec-api.apps.ocp4.example.com/config, shows the config-app
configuration map content.
7. Update the config-app configuration map description key and query /config
endpoint to verify that the Reloader controller upgrades the config-app deployment.
7.1. Update the description data in the configuration map in the config-app/
configmap.yaml file to the API that exposes its configuration value.
apiVersion: v1
kind: ConfigMap
metadata:
name: config-app
namespace: appsec-api
data:
config.yaml: |
application:
name: "config-app"
description: "API that exposes its configuration"
7.3. Use the watch command to query the API /config endpoint by using the curl
command to verify that the API configuration changes. Press Ctrl+C to exit.
{
"application": {
"description": "API that exposes its configuration",
"name": "config-app"
}
}
[student@workstation appsec-api]$ cd
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Objectives
• Automate regular cluster and application management tasks with Kubernetes cron jobs.
Maintenance Tasks
Cluster administrators can use scheduled tasks to automate maintenance tasks in the cluster.
Other users can create scheduled tasks for regular application maintenance.
Maintenance tasks vary in the privileges that they require. Cluster maintenance tasks require
privileged pods, whereas most applications might not require elevated privileges.
Job
Kubernetes jobs specify a task that is executed once.
Cron Job
Kubernetes cron jobs have a schedule to execute a task regularly.
When a cron job is due for execution, Kubernetes creates a job resource. Kubernetes creates these
jobs from a template in the cron job definition. Apart from this relationship, Kubernetes jobs and cron jobs are regular workload resource types, like deployments or daemon sets.
Kubernetes Jobs
The job resource includes a pod template that describes the task to execute. You can use the oc
create job --dry-run=client command to get the YAML representation of the Kubernetes
job resource:
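A plausible form of that command follows; it produces a manifest like the one shown after the next paragraph:
[user@host ~]$ oc create job test --dry-run=client -o yaml \
  --image=registry.access.redhat.com/ubi8/ubi:8.6 -- curl https://example.com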
A job contains a pod template, and this pod template must specify at least one container. You can
add metadata such as labels or annotations to the job definition and pod template.
apiVersion: batch/v1
kind: Job
metadata:
creationTimestamp: null
name: test
spec:
template:
metadata:
creationTimestamp: null
spec:
containers:
- command:
- curl
- https://example.com
image: registry.access.redhat.com/ubi8/ubi:8.6
name: test
resources: {}
restartPolicy: Never
status: {}
Job specification
Pod template
Pod specification
Pod containers
Command
Container image
In Kubernetes, cron job resources are similar to job resources. The jobTemplate key follows the
same structure as a job. The schedule key describes when the task runs.
apiVersion: batch/v1
kind: CronJob
metadata:
creationTimestamp: null
name: test
spec:
jobTemplate:
metadata:
creationTimestamp: null
name: test
spec:
template:
metadata:
creationTimestamp: null
spec:
containers:
- command:
- curl
- https://example.com
image: registry.access.redhat.com/ubi8/ubi:8.6
name: test
resources: {}
restartPolicy: OnFailure
schedule: 0 0 * * *
status: {}
Job template
Job specification
Pod template
Pod specification
Command
Container image
Note
Refer to the crontab(5) manual page for more information about the cron job
schedule specification.
For example, consider creating periodic backups for an application. This application requires the
following steps to create the backup:
The following cron job definition shows a possible implementation of these steps:
apiVersion: batch/v1
kind: CronJob
metadata:
name: wordpress-backup
spec:
schedule: 0 2 * * 7
jobTemplate:
spec:
template:
spec:
dnsPolicy: ClusterFirst
restartPolicy: Never
containers:
- name: wp-cli
image: registry.io/wp-maintenance/wp-cli:2.7
resources: {}
command:
- bash
- -xc
args:
- >
wp maintenance-mode activate ;
wp db export | gzip > database.sql.gz ;
wp maintenance-mode deactivate ;
rclone copy database.sql.gz s3://bucket/backups/ ;
rm -v database.sql.gz ;
Note
The > symbol uses the YAML folded style, which converts all newlines to spaces
when parsing. Each command is separated with a semicolon (;), because the string
in the args key is passed as a single argument to the bash -xc command.
This combination of the command and args keys has the same effect as executing
the commands in a single line inside the container:
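Reconstructed from the args value above, the equivalent invocation inside the container looks approximately like this:
bash -xc 'wp maintenance-mode activate ; wp db export | gzip > database.sql.gz ; wp maintenance-mode deactivate ; rclone copy database.sql.gz s3://bucket/backups/ ; rm -v database.sql.gz ;'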
For more information about the YAML folded style, refer to https://yaml.org/
spec/1.2.2/#folded-style
For example, when images are updated, clusters might accumulate unused images, which can consume significant disk space. Executing the crictl rmi --prune command on all nodes of the cluster frees this space.
The following configuration map contains a shell script that cleans images in all cluster nodes by
executing a debug pod and running the crictl command with the chroot command to access
the root file system of the node:
apiVersion: v1
kind: ConfigMap
metadata:
  name: maintenance
  labels:
    app: crictl
data:
maintenance.sh: |
#!/bin/bash
NODES=$(oc get nodes -o=name)
for NODE in ${NODES}
do
echo ${NODE}
oc debug ${NODE} -- \
chroot /host \
/bin/bash -xc 'crictl images ; crictl rmi --prune'
echo $?
done
This task can be scheduled regularly by using a cron job. The quay.io/openshift/origin-
cli:4.14 container provides the oc command that runs the debug pod. The pod mounts the
configuration map and executes the maintenance script.
apiVersion: batch/v1
kind: CronJob
metadata:
name: image-pruner
spec:
schedule: 0 * * * *
jobTemplate:
spec:
template:
spec:
dnsPolicy: ClusterFirst
restartPolicy: Never
containers:
- name: image-pruner
image: quay.io/openshift/origin-cli:4.14
resources: {}
command:
- /opt/maintenance.sh
volumeMounts:
- name: scripts
mountPath: /opt
volumes:
- name: scripts
configMap:
name: maintenance
defaultMode: 0555
Cluster maintenance tasks might require elevated privileges. Administrators can assign service
accounts to any workload, including Kubernetes jobs and cron jobs.
You can create a service account with the required privileges, and specify the service account
with the serviceAccountName key in the pod definition. You can also use the oc set
serviceaccount command to change the service account of an existing workload.
References
Kubernetes Job
https://kubernetes.io/docs/concepts/workloads/controllers/job/
Guided Exercise
Outcomes
• Manually delete unused images from the nodes.
Instructions
1. Log in to the OpenShift cluster and switch to the appsec-prune project.
...output omitted...
...output omitted...
2.1. List the deployments and pods in the prune-apps namespace. Each deployment
has a pod that uses a different image.
2.2. List the container images in the node. The node has three httpd images and three
nginx images.
2.3. Remove the unused images in the node. Only the httpd container images are
deleted, because no other container uses them.
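One way to run this step, assuming the single lab node is named master01:
[student@workstation ~]$ oc debug node/master01 -- chroot /host crictl rmi --prune
...output omitted...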
You can ignore the error that a container is using the image.
2.4. Delete the deployments in the prune-apps namespace to remove the pods that use
the nginx images.
Note
The cron job removes the unused container images in a later step.
apiVersion: v1
kind: ConfigMap
metadata:
name: maintenance
labels:
ge: appsec-prune
app: crictl
data:
maintenance.sh: |
#!/bin/bash -eu
NODES=$(oc get nodes -o=name)
for NODE in ${NODES}
do
echo ${NODE}
oc debug ${NODE} -- \
chroot /host \
/bin/bash -euxc 'crictl images ; crictl rmi --prune'
done
Note
The ~/DO280/solutions/appsec-prune/configmap-prune.yaml file
contains the correct configuration and can be used for comparison.
apiVersion: batch/v1
kind: CronJob
metadata:
name: image-pruner
labels:
ge: appsec-prune
app: crictl
spec:
schedule: '*/4 * * * *'
jobTemplate:
spec:
template:
spec:
dnsPolicy: ClusterFirst
restartPolicy: Never
containers:
- name: crictl
image: registry.ocp4.example.com:8443/openshift/origin-cli:4.14
resources: {}
command:
- /opt/maintenance.sh
volumeMounts:
- name: scripts
mountPath: /opt
volumes:
- name: scripts
configMap:
name: maintenance
defaultMode: 0555
The registry.ocp4.example.com:8443/openshift/origin-cli:4.14
container image is a copy of the official quay.io/openshift/origin-
cli:4.14 image that contains the oc command.
Note
The ~/DO280/solutions/appsec-prune/cronjob-prune.yaml file contains
the correct configuration and can be used for comparison.
Note
A warning indicates that the pod would violate several policies. The pod fails when
the cron job is executed, because it lacks permissions to execute the maintenance
task. A fix for this issue is implemented in a later step.
3.5. Wait until the cron job is scheduled, and get the name of the associated job. The job
completion status is 0/1, and the pod has an error status. Press Ctrl+C to exit the
watch command.
3.7. Delete the failed cron job. This action deletes the failed job and pod resources.
Note
Recommended alternatives for image pruning are covered in the DO380 -
Red Hat OpenShift Administration III: Scaling Deployments in
the Enterprise course.
4. Set the appropriate permissions to run the image pruner cron job.
4.3. Add the cluster-admin role to the image-pruner service account of the
namespace.
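One possible form of this command, run from the appsec-prune project so that the -z option resolves the service account:
[student@workstation ~]$ oc adm policy add-cluster-role-to-user cluster-admin \
  -z image-pruner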
apiVersion: batch/v1
kind: CronJob
metadata:
name: image-pruner
labels:
ge: appsec-prune
app: crictl
spec:
schedule: '*/4 * * * *'
jobTemplate:
spec:
template:
spec:
serviceAccountName: image-pruner
dnsPolicy: ClusterFirst
restartPolicy: Never
containers:
- name: crictl
image: registry.ocp4.example.com:8443/openshift/origin-cli:4.14
resources: {}
command:
- /opt/maintenance.sh
volumeMounts:
- name: scripts
mountPath: /opt
volumes:
- name: scripts
configMap:
name: maintenance
defaultMode: 0555
4.6. Wait until the new job and the pod are created. Press Ctrl+C to exit the watch
command when the job and the pod are marked as completed.
4.7. Get the logs of the pod that executed the maintenance task.
You can ignore the error that a container is using the image.
5. Clean up resources.
[student@workstation appsec-prune]$ cd
[student@workstation ~]$
5.3. Remove the cron job resource and the configuration map.
Finish
On the workstation machine, use the lab command to complete this exercise. This step is
important to ensure that resources from previous exercises do not impact upcoming exercises.
Lab
Application Security
Deploy an application that requires additional operating system privileges to run.
Deploy an application that requires access to the Kubernetes APIs to perform cluster
maintenance tasks.
Outcomes
• Deploy a cluster maintenance application that must be executed regularly.
• A legacy payroll application that must run with a fixed UID of 0 to open TCP port 80.
• A project cleaner that deletes projects with the appsec-review-cleaner label and that are older than 10 seconds. This short expiration time is deliberate for the lab purposes.
You must deploy the project cleaner application to delete obsolete projects every minute.
The lab start command copies the required files for the exercise to the lab directory:
• A pod manifest that contains a project cleaner application. You can use this pod to test the
project cleaner application and copy the pod specification into the cron job to complete
the exercise.
• A manifest with the project-cleaner cluster role that grants the application access to
find and delete namespaces.
• A cron job template file that you can edit to create cron jobs.
• A script that generates projects to verify that the project cleaner application works.
Instructions
1. Log in to your OpenShift cluster as the developer user with the developer password and
create the appsec-review project.
2. Change to the ~/DO280/labs/appsec-review directory and deploy the payroll
application in the payroll-app.yaml file. Verify that the application cannot run.
3. As the admin user, look for an SCC that allows the workload in the payroll-app.yaml
deployment to run.
4. Create the payroll-sa service account and assign to it the SCC that the application
requires. Then, assign the payroll-sa service account to the payroll-api deployment.
5. Verify that the payroll API is accessible by running the curl command from the payroll-
api deployment. Use the http://localhost/payments/status URL to verify that the
API is working.
6. Create the project-cleaner-sa service account and assign it to the project-
cleaner.yaml pod manifest to configure the application permissions.
7. Create the project-cleaner role in the cluster-role.yaml file and assign it to the
project-cleaner-sa service account.
8. Edit the cron-job.yaml file to create the appsec-review-cleaner cron job by using
the project-cleaner.yaml pod manifest as the job template. Create the cron job and
configure it to run every minute. You can use the solution file in the ~/DO280/solutions/
appsec-review/cron-job.yaml path.
9. Optionally, verify that the project cleaner executed correctly. Use the generate-
projects.sh script from the lab directory to generate projects for deletion. Wait for the
next job execution and print the logs from that job's pod.
Note
The logs might not be in the last pod, but in the previous one.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Solution
Application Security
Deploy an application that requires additional operating system privileges to run.
Deploy an application that requires access to the Kubernetes APIs to perform cluster
maintenance tasks.
Outcomes
• Deploy a cluster maintenance application that must be executed regularly.
• A legacy payroll application that must run with a fixed UID of 0 to open TCP port 80.
• A project cleaner that deletes projects with the appsec-review-cleaner label and that are older than 10 seconds. This short expiration time is deliberate for the lab purposes.
You must deploy the project cleaner application to delete obsolete projects every minute.
The lab start command copies the required files for the exercise to the lab directory:
• A pod manifest that contains a project cleaner application. You can use this pod to test the
project cleaner application and copy the pod specification into the cron job to complete
the exercise.
• A manifest with the project-cleaner cluster role that grants the application access to
find and delete namespaces.
• A cron job template file that you can edit to create cron jobs.
• A script that generates projects to verify that the project cleaner application works.
Instructions
1. Log in to your OpenShift cluster as the developer user with the developer password and
create the appsec-review project.
...output omitted...
2.3. Verify that the application fails to run by reading the deployment logs.
3. As the admin user, look for an SCC that allows the workload in the payroll-app.yaml
deployment to run.
3.2. Run the oc adm policy scc-subject-review command to get an SCC that
allows the application to run.
4. Create the payroll-sa service account and assign to it the SCC that the application
requires. Then, assign the payroll-sa service account to the payroll-api deployment.
4.1. Run the oc create command to create the payroll-sa service account.
4.3. Use the oc set serviceaccount command to add the payroll-sa service
account to the payroll-api deployment.
5. Verify that the payroll API is accessible by running the curl command from the payroll-
api deployment. Use the http://localhost/payments/status URL to verify that the
API is working.
5.1. Use the oc exec command with the payroll-api deployment to run the curl
command. Provide the -sS option to hide progress output and show errors.
6.2. Edit the project-cleaner.yaml pod manifest file to use the project-cleaner-
sa service account.
apiVersion: v1
kind: Pod
metadata:
name: project-cleaner
namespace: appsec-review
spec:
restartPolicy: Never
serviceAccountName: project-cleaner-sa
containers:
- name: project-cleaner
...output omitted...
7. Create the project-cleaner role in the cluster-role.yaml file and assign it to the
project-cleaner-sa service account.
8. Edit the cron-job.yaml file to create the appsec-review-cleaner cron job by using
the project-cleaner.yaml pod manifest as the job template. Create the cron job and
configure it to run every minute. You can use the solution file in the ~/DO280/solutions/
appsec-review/cron-job.yaml path.
8.1. Edit the cron-job.yaml file to replace the CHANGE_ME string with the "*/1 * * *
*" schedule to execute the job every minute.
apiVersion: batch/v1
kind: CronJob
metadata:
name: appsec-review-cleaner
namespace: appsec-review
spec:
schedule: "*/1 * * * *"
concurrencyPolicy: Forbid
jobTemplate:
...output omitted...
8.2. Replace the CHANGE_ME label in the jobTemplate definition with the spec definition
from the project-cleaner.yaml pod manifest. Although the long image name
might show across two lines, you must add it as one line.
apiVersion: batch/v1
kind: CronJob
metadata:
name: appsec-review-cleaner
namespace: appsec-review
spec:
schedule: "*/1 * * * *"
concurrencyPolicy: Forbid
jobTemplate:
spec:
template:
spec:
restartPolicy: Never
serviceAccountName: project-cleaner-sa
containers:
- name: project-cleaner
image: registry.ocp4.example.com:8443/redhattraining/do280-project-
cleaner:v1.1
imagePullPolicy: Always
env:
- name: "PROJECT_TAG"
value: "appsec-review-cleaner"
- name: "EXPIRATION_SECONDS"
value: "10"
9. Optionally, verify that the project cleaner executed correctly. Use the generate-
projects.sh script from the lab directory to generate projects for deletion. Wait for the
next job execution and print the logs from that job's pod.
Note
The logs might not be in the last pod, but in the previous one.
9.1. Run the generate-projects.sh script to create test projects that the project
cleaner will delete the next time that it runs.
9.2. List the pods in the appsec-review project until you see a pod with the Completed
status that is later than the last label that the script applied.
9.3. Print the logs from the last completed job, to verify that it deleted the obsolete
projects.
9.4. Change to the home directory to prepare for the next exercise.
[student@workstation appsec-review]$ cd
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Summary
• Security context constraints (SCCs) limit the access from a running pod in OpenShift to the
host environment.
• An application can assign an SCC to the application service account to use it.
• With the Kubernetes APIs, a user or an application can query and modify the cluster state.
• To give an application access to the Kubernetes APIs, you can create roles or cluster roles that
describe the application requirements, and assign those roles to the application service account.
• You can automate cluster and application management tasks by creating Kubernetes cron jobs
that run periodic management jobs.
Chapter 9
OpenShift Updates
Goal Update an OpenShift cluster and minimize
disruption to deployed applications.
Objectives
• Describe the cluster update process.
This over-the-air (OTA) software distribution system manages the controller manifests, cluster roles, and any other resources that are required to update a cluster to a particular version. With this feature, a cluster can run the latest 4.14.x version seamlessly. With OTA, a cluster can use new features as they become available,
including the latest bug fixes and security patches. OTA substantially decreases downtime due to
upgrades.
Red Hat hosts and manages this service at https://console.redhat.com/openshift, and hosts
cluster images at https://quay.io. You use a single interface to manage the lifecycle of all your
OpenShift clusters. With OTA, you can update faster by skipping intermediate versions. For
example, you can update from 4.14.1 to 4.14.3, and thus bypass 4.14.2.
Important
Starting with OpenShift 4.10, the OTA system requires a persistent connection to
the internet. For more information about how to update disconnected clusters,
consult the Updating a Restricted Network Cluster chapter in the references section.
The service defines upgrade paths that correspond to cluster eligibility for certain updates.
Upgrade paths belong to update channels. Consider a channel as a representation of the upgrade
path. The channel controls the frequency and stability of updates. The OTA policy engine
represents channels as a series of pointers to particular versions within the upgrade path.
A channel name consists of the following parts: the tier (release candidate, fast, stable, and
extended update support), the major version (4), and the minor version (.14). Example channel
names include: candidate-4.14, fast-4.14, stable-4.14, and eus-4.14. Each channel
delivers patches for a given cluster version.
Important
Red Hat does not support the updates that are listed only in the candidate channel.
Note
Customers can help to improve OpenShift by joining the Red Hat connected
customers program. If you join this program, then your cluster is registered to the
fast channel.
If Red Hat observes operational issues from a fast channel update, then that update is skipped in
the stable channel. The stable channel delay provides time to observe any unforeseen problems in
OpenShift clusters that testing did not reveal.
EUS releases have no difference between stable-4.x and eus-4.x channels (where x denotes
the even-numbered minor release) until OpenShift Container Platform moves to the EUS phase.
You can switch to the EUS channel as soon as it becomes available.
Channel Support
candidate-4.x Supported if the update is also listed in the fast or stable channels.
fast-4.x Supported
stable-4.x Supported
eus-4.x Supported
Note
The x in the channel name denotes the minor version.
Upgrade Paths
You can apply each of the upgrade channels to a Red Hat OpenShift Container Platform version
4.14 cluster in different environments. The following paragraphs describe an example scenario
where the 4.14.3 version has a defect.
Stable channel
When using the stable-4.14 channel, you can upgrade your cluster from 4.14.0 to 4.14.1
or to 4.14.2. If an issue is discovered in the 4.14.3 release, then you cannot upgrade to that
version. When a patch becomes available in the 4.14.4 release, you can update your cluster to
that version.
This channel is suited to production environments, because the Red Hat SRE teams and
support services test the releases in that channel.
Fast channel
The fast-4.14 channel can deliver 4.14.1 and 4.14.2 updates but not 4.14.3. Red Hat also
supports this channel, and you can apply it to development, QA, or production environments.
Candidate channel
You can use the candidate-4.14 channel to install the latest features of OpenShift. With
this channel, you can upgrade to all z-stream releases, such as 4.14.1, 4.14.2, and 4.14.3.
You use this channel to access the latest features of the product as they get released. This
channel is suited to development and pre-production environments.
EUS channel
When switching to the eus-4.14 channel, the stable-4.14 channel does not receive z-
stream updates until the next EUS version becomes available.
Note
Starting with OpenShift Container Platform 4.8, Red Hat denotes all even-
numbered minor releases as Extended Update Support (EUS) releases.
The following graphic describes the update graphs for the stable and candidate channels:
Red Hat provides support for the General Availability (GA) updates that are released in the stable
and fast channels. Red Hat does not support updates that are listed only in the candidate channel.
To ensure the stability of the cluster and the proper level of support, switch only from a stable
channel to a fast channel. Although it is possible to switch from a stable channel or a fast channel
to a candidate channel, it is not recommended. The candidate channel is best suited to testing
feature acceptance and to assist in qualifying the next version of OpenShift Container Platform.
Note
The release of updates for patch and security fixes ranges from several hours to
a day. This delay provides time to assess any operational impacts to OpenShift
clusters.
Web console
Navigate to the Administration > Cluster Settings page, and on the Details tab, click the
pencil icon next to the channel.
Command line
Execute the following command to switch to another update channel by using the oc client.
You can also switch to another update channel, such as stable-4.14, to update to the next
minor version of OpenShift Container Platform.
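For example, a command similar to the following switches the cluster to the stable-4.14 channel. With older oc clients, you can patch the ClusterVersion resource instead.
[student@workstation ~]$ oc adm upgrade channel stable-4.14
[student@workstation ~]$ oc patch clusterversion version --type merge -p '{"spec":{"channel":"stable-4.14"}}'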
Note
The prerequisite to pause the machine health check resources is not required on
single-node installations.
Run the following command to list all the available machine health check resources.
Add the cluster.x-k8s.io/paused annotation to the machine health check resource to pause
it before updating the cluster.
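For example, assuming a machine health check named <mhc-name>:
[student@workstation ~]$ oc get machinehealthcheck -n openshift-machine-api
[student@workstation ~]$ oc -n openshift-machine-api annotate machinehealthcheck <mhc-name> cluster.x-k8s.io/paused=""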
Over-the-air Updates
OTA follows a client-server approach. Red Hat hosts the cluster images and the update
infrastructure. OTA generates all possible update paths for your cluster. OTA also gathers
information about the cluster and your entitlement to determine the available upgrade paths. The
web console sends a notification when a new update is available.
The following diagram describes the updates architecture: Red Hat hosts both the cluster images
and a "watcher", which automatically detects new images that are pushed to Quay. The Cluster
Version Operator (CVO) receives its update status from that watcher. The CVO starts by updating
the cluster components via their operators, and then updates any extra components that the
Operator Lifecycle Manager (OLM) manages.
With telemetry, Red Hat can determine the update path. The cluster uses a Prometheus-based
Telemeter component to report on the state of each cluster operator. The data is anonymized and
sent back to Red Hat servers that advise cluster administrators about potential new releases.
Note
Red Hat values customer privacy. For a complete list of the data that Telemeter
gathers, consult the Data Collection and Telemeter Sample Metrics documents in
the references section.
In the future, Red Hat intends to extend the list of updated operators that are included in the
upgrade path to include independent software vendor (ISV) operators.
Important
Rolling back your cluster to an earlier version is not supported. If your update is
failing to complete, contact Red Hat support.
The update process also updates the underlying operating system when updates are available.
The updates use the rpm-ostree technology for managing transactional upgrades. Updates are
delivered via container images and are part of the OpenShift update process. When the update
deploys, the nodes pull the new image, extract it, write the packages to the disk, and then modify
the bootloader to boot into the new version. The machine reboots and implements a rolling update
to ensure that the cluster capacity is minimally impacted.
• Be sure to update all operators that are installed through the OLM to the 4.14 version before
updating the OpenShift cluster.
• Retrieve the cluster version and review the current update channel information. If you are
running the cluster in production, then ensure that the channel reads stable.
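For example:
[student@workstation ~]$ oc get clusterversion
[student@workstation ~]$ oc get clusterversion version -o jsonpath='{.spec.channel}{"\n"}'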
• View the available updates and note the version number of the update to apply.
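For example:
[student@workstation ~]$ oc adm upgrade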
Recommended updates:
VERSION IMAGE
4.14.10 quay.io/openshift-release-dev/ocp-release@sha256:...
...output omitted...
– Run the following command to install the latest available update for your cluster.
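For example:
[student@workstation ~]$ oc adm upgrade --to-latest=true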
– Run the following command to install a specific version. VERSION corresponds to one of the
available versions that the oc adm upgrade command returns.
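For example:
[student@workstation ~]$ oc adm upgrade --to=VERSION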
• The previous command initializes the update process. Run the following command to review the
status of the Cluster Version Operator (CVO) and the installed cluster operators.
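For example:
[student@workstation ~]$ oc get clusterversion
[student@workstation ~]$ oc get clusteroperators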
• Use the following command to review the cluster version history and monitor the status of the
update. It might take some time for all the objects to finish updating.
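For example, the version history appears in the output of the following command:
[student@workstation ~]$ oc describe clusterversion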
The history contains a list of the most recent versions that were applied to the cluster. This list is
updated when the CVO applies an update. The list is ordered by date, where the newest update
is first in the list.
If the rollout completed successfully, then updates in the history have a Completed state.
Otherwise, the update has a Partial state if it failed or did not complete.
Important
When an update is failing to complete, the Cluster Version Operator (CVO) reports
the status of any blocking components and attempts to reconcile the update.
Rolling back your cluster to a previous version is not supported. If your update is
failing to complete, contact Red Hat support.
• After the process completes, you can confirm that the cluster is updated to the new version.
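For example:
[student@workstation ~]$ oc get clusterversion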
References
For more information about update channels, update prerequisites, and updating
clusters in disconnected environments, refer to the Updating a Restricted Network
Cluster and Updating a Cluster Between Minor Versions chapters in the Red Hat
OpenShift Container Platform 4.14 Updating Clusters documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/updating_clusters/index#updating-restricted-network-cluster
For more information about updating operators that are installed through the
Operator Lifecycle Manager, refer to the Upgrading Installed Operators section in
the Administrator Tasks chapter in the Red Hat OpenShift Container Platform 4.14
Working with Operators documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/operators/index#olm-upgrading-operators
For more information about the OpenShift Container Platform upgrade paths, visit
the following page in the customer portal:
https://access.redhat.com/solutions/4583231
For more information about the OpenShift Container Platform update graph, visit
the following page in the customer portal:
https://access.redhat.com/labs/ocpupgradegraph/update_path
For more information about OpenShift Extended Update Support (EUS), visit the
following page in the customer portal:
https://access.redhat.com/support/policy/updates/openshift-eus
For more information about the OpenShift Container Platform lifecycle policy, visit
the following page in the customer portal:
https://access.redhat.com/support/policy/updates/openshift
Quiz
2. Which component manages the updates of operators that are not cluster operators?
a. Operator Lifecycle Manager (OLM)
b. Telemetry client (Telemeter)
c. Cluster Version Operator (CVO)
3. Which two commands can retrieve the currently running cluster version? (Choose two.)
a. oc get updatechannels
b. oc adm upgrade
c. oc get clusterchannel
d. oc get clusterversion
e. oc get clusterupgrades
Solution
2. Which component manages the updates of operators that are not cluster operators?
a. Operator Lifecycle Manager (OLM)
b. Telemetry client (Telemeter)
c. Cluster Version Operator (CVO)
3. Which two commands can retrieve the currently running cluster version? (Choose two.)
a. oc get updatechannels
b. oc adm upgrade
c. oc get clusterchannel
d. oc get clusterversion
e. oc get clusterupgrades
Objectives
• Identify applications that use deprecated Kubernetes APIs.
OpenShift Versions
Kubernetes is an open source container orchestration engine for automating the deployment,
scaling, and management of containerized applications. The OpenShift Container Platform
foundation is based on Kubernetes and therefore shares the underlying technology. The following
table lists the OpenShift version and the Kubernetes version that it is based on:
OpenShift version Kubernetes version
4.12 1.25
4.13 1.26
4.14 1.27
When a stable version of a feature is released, the beta versions are marked as deprecated and are
removed after three Kubernetes releases. If a request uses a deprecated API version, then the API
server returns a deprecation warning that identifies the deprecated API and the release in which it
is planned for removal.
If a request uses an API version that Kubernetes removed, then the API server returns an error,
because that API version is not supported in the cluster.
Note
For more information about the API versions that are deprecated and removed in
Kubernetes, consult Kubernetes Deprecated API Migration Guide in the references
section.
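You can list the APIRequestCount resources to see how often each API version is requested and the release in which a deprecated version is removed. For example:
[student@workstation ~]$ oc get apirequestcounts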
NAME                                 REMOVEDINRELEASE   REQUESTSINCURRENTHOUR   REQUESTSINLAST24H
podsecuritypolicies.v1beta1.policy   1.25               28                      77
...output omitted...
A blank REMOVEDINRELEASE column indicates that the API version is not scheduled for removal in
an upcoming release. However, an API with a blank value in that column might still be deprecated,
and might be removed in a later release.
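You can filter this list to show only the API versions that are scheduled for removal. The following is a minimal sketch, assuming a FILTER variable that holds a JSONPath expression over the removedInRelease status field:
[student@workstation ~]$ FILTER='{range .items[?(@.status.removedInRelease!="")]}{.status.removedInRelease}{"\t"}{.metadata.name}{"\n"}{end}'
[student@workstation ~]$ oc get apirequestcounts -o jsonpath="$FILTER"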
Note
You can use a JSONPath filter to retrieve the results. The FILTER variable is written
on a single line.
If the command does not retrieve any information, then it indicates that none of the
installed APIs are deprecated.
You can also use a JSONPath filter to list the actions that were performed against a specific resource, and the users and clients that performed them.
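The following is a sketch that assumes the .status.currentHour.byNode[].byUser[] structure of the APIRequestCount resource:
[student@workstation ~]$ oc get apirequestcounts cronjobs.v1.batch \
-o jsonpath='{range .status.currentHour.byNode[*].byUser[*]}{.byVerb[*].verb}{"\t"}{.username}{"\t"}{.userAgent}{"\n"}{end}'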
apirequestcount.apiserver.openshift.io/cronjobs.v1.batch
Verbs        Username                                     UserAgent
get update   system:serviceaccount:kube-system:cronj...   kube-controller-manager/v1...
watch        system:kube-controller-manager               kube-controller-manager/v1...
...output omitted...
Some features that were available in previous OpenShift releases are deprecated or removed.
A deprecated feature is not recommended for new deployments, because a future release will
remove it. The following table contains a short list of the deprecated and removed features in
OpenShift.
Note
For more information about the deprecated and removed API versions in
Kubernetes, consult the OpenShift Container Platform 4.14 release notes in the
references section.
APIRemovedInNextReleaseInUse
This alert is triggered for APIs that OpenShift Container Platform will remove in the next
release.
APIRemovedInNextEUSReleaseInUse
This alert is triggered for APIs that will be removed in the next Extended Update Support (EUS)
release of OpenShift Container Platform.
The alert describes the situation with context to identify the affected workload.
You can extract the alerts in JSON format from the Prometheus stateful set, and then filter the
result to retrieve the deprecated API alerts.
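The following is a sketch, assuming the prometheus-k8s-0 pod in the openshift-monitoring namespace and the jq command on the workstation machine:
[student@workstation ~]$ oc exec -it prometheus-k8s-0 -c prometheus -n openshift-monitoring \
-- curl -fsS http://localhost:9090/api/v1/alerts \
| jq '[.data.alerts[] | select(.labels.alertname | startswith("APIRemoved"))]'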
Note
If the output of the jq command is an empty JSON array [], then the alerts were
not reported.
Administrators must evaluate their cluster for workloads that use removed APIs, and migrate the
affected components to the appropriate new API version. After migration, the administrator can
provide an acknowledgment.
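The acknowledgment is a patch to the admin-acks config map in the openshift-config namespace. The following sketch assumes the key name that the 4.13 to 4.14 update documentation lists:
[student@workstation ~]$ oc -n openshift-config patch cm admin-acks --type merge \
-p '{"data":{"ack-4.13-kube-1.27-api-removals-in-4.14":"true"}}'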
References
For more information about the removed features in OpenShift, refer to the
Deprecated and Removed Features section in the Red Hat OpenShift Container
Platform 4.14 release notes at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/release_notes/index#ocp-4-14-deprecated-removed-features
For more information about what version of the Kubernetes API is included with
each OpenShift 4.x release, visit the following page in the customer portal:
https://access.redhat.com/solutions/4870701
For more information about the Kubernetes API deprecations and removals, visit the
following page in the customer portal:
https://access.redhat.com/articles/6955985
For more information about the deprecated APIs in OpenShift Container Platform
4.14, visit the following page in the customer portal:
https://access.redhat.com/articles/6955381
For more information about how to get fired alerts on OpenShift by using the
command-line, visit the following page in the customer portal:
https://access.redhat.com/solutions/4250221
Quiz
1. Red Hat OpenShift Container Platform 4.14 is based on which version of Kubernetes?
a. Kubernetes 1.24
b. Kubernetes 1.27
c. Kubernetes 1.26
d. OpenShift Container Platform is not based on Kubernetes.
2. What is the feature maturity status for Kubernetes resources with the v1beta1 API
version?
a. Experimental
b. Pre-release
c. Stable
3. Which command can the cluster administrator use to identify deprecated API
resources?
a. oc get apirequestcounts
b. oc get deprecatedapis -n openshift-config
c. oc get apis --deprecated
d. oc get configmap deprecated-apis -n openshift-config
4. Which two alerts identify the use of deprecated API versions in the OpenShift cluster?
(Choose two.)
a. APIRemovedInNextReleaseInUse
b. APIRequestCounts
c. APIRemovedInNextEUSReleaseInUse
d. DeprecatedAPIRequestCountsInUse
Solution
1. Red Hat OpenShift Container Platform 4.14 is based on which version of Kubernetes?
a. Kubernetes 1.24
b. Kubernetes 1.27
c. Kubernetes 1.26
d. OpenShift Container Platform is not based on Kubernetes.
2. What is the feature maturity status for Kubernetes resources with the v1beta1 API
version?
a. Experimental
b. Pre-release
c. Stable
3. Which command can the cluster administrator use to identify deprecated API
resources?
a. oc get apirequestcounts
b. oc get deprecatedapis -n openshift-config
c. oc get apis --deprecated
d. oc get configmap deprecated-apis -n openshift-config
4. Which two alerts identify the use of deprecated API versions in the OpenShift cluster?
(Choose two.)
a. APIRemovedInNextReleaseInUse
b. APIRequestCounts
c. APIRemovedInNextEUSReleaseInUse
d. DeprecatedAPIRequestCountsInUse
Objectives
• Update OLM-managed operators by using the web console and CLI.
Operator Updates
For operators that are installed in an OpenShift cluster, operator providers can release new
versions. These new versions can contain bug fixes and new features. The Operator Lifecycle
Manager (OLM) can update these operators.
Cluster administrators should define operator update policies to ensure that bug fixes and new
functions are adopted, with the cluster continuing to operate correctly.
• For each installed operator, you can decide whether the OLM automatically applies updates, or
whether the updates require administrator approval.
• Operator providers can create multiple channels for an operator. The provider can follow
different policies to push updates to each channel, so that each channel contains different
versions of the operator. When installing an operator, you choose the channel to follow for
updates.
• You can create custom catalogs, and decide which versions of operators to include in the
catalog. For example, in a multicluster environment, you configure operators to update
automatically, but add only tested versions to the catalog.
Providers can publish operators by other means than the OLM and operator catalogs. For
example, a provider can publish operators as Helm charts or YAML resource files. The OLM does
not manage operators that are installed by other means.
For example, a provider can create stable and preview channels for an operator. The provider
publishes each new version of the operator to the preview channel. You can use the preview
channel to test new features and to validate that the new versions fix bugs. If the provider receives
feedback for preview versions of the operator and finds no serious issues with the latest version,
then the provider publishes the version to the stable channel. You can use the stable channel for
environments with higher reliability requirements, and trade off slower adoption of new features
for improved stability.
Additionally, operators might have new features that introduce significant changes or
incompatibilities with earlier versions. Operator providers might adopt a versioning scheme for the
operator that separates major updates from minor updates, depending on the adoption cost of
the new version. In this scenario, providers can create channels for different major versions of the
operator.
For example, a provider creates an operator that installs an application. The provider creates
version-1 and version-2 channels, to correspond to different major versions of the
application. Users of the operator can stay on the version-1 channel in the production
environment, and test and design an update process to adopt the version-2 channel in a staging
environment.
When you install an operator, determine the most suitable channel for your requirements. Clusters
with varying reliability requirements might use different channels.
You can edit an operator subscription to switch channels. Switching channels does not cause any
operator update, unless switching channels makes a later version available and the operator is
configured for automatic updates. Switching channels might cause unwanted results; always refer
to the operator documentation to learn about possible issues.
If the publishing policies of an operator suit your requirements, then you can configure automatic
approvals. Click Operators > Installed Operators on the web console, or examine cluster service
versions with the oc command, to review the version of installed operators.
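For example, assuming operators that are installed in the openshift-operators namespace:
[student@workstation ~]$ oc get csv -n openshift-operators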
If you install an operator and configure manual approvals, then you must approve updates before
the OLM updates the operator.
The Installed Operators page in the web console displays available upgrades.
The subscription resources and the install plan resources contain information about upgrades. You
can use the oc command to examine those resources to find available upgrades.
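For example, assuming a subscription named <subscription-name> in the <namespace> namespace:
[student@workstation ~]$ oc get subscription <subscription-name> -n <namespace> \
-o jsonpath='{.status.currentCSV}{"\n"}'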
The currentCSV key shows the latest available version in the channel.
The OLM also creates an install plan resource when the operator channel contains a later version
of an operator.
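For example, assuming an install plan named <install-plan-name>:
[student@workstation ~]$ oc get installplan -n <namespace>
[student@workstation ~]$ oc get installplan <install-plan-name> -n <namespace> -o yaml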
...output omitted...
phase: RequiresApproval
...output omitted...
To install the update, edit the specification of the install plan to change the approved key value to
true.
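For example:
[student@workstation ~]$ oc patch installplan <install-plan-name> -n <namespace> \
--type merge -p '{"spec":{"approved":true}}'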
You can also use the web console to approve an update. On the Installed Operators page, click
Upgrade available, and then click Preview InstallPlan to view the install plan. Review the install plan, and
then click Approve to update the operator.
When updating a cluster, you might need to update operators if the installed version of an
operator is not compatible with the updated OpenShift version. Before you update a cluster,
review and install any operator updates that are needed for compatibility. If no compatible
updates are available, then you must uninstall the incompatible operators before you update the cluster.
Uninstalling Operators
You can uninstall operators by using the web console or the oc command.
In the console, click Operators > Installed operators and locate the operator. Click the vertical
ellipsis (⋮) menu, and then click Uninstall Operator.
After confirming the operation by clicking Uninstall, the OLM uninstalls the operator.
Alternatively, delete the subscription and cluster service versions by using the oc command.
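For example:
[student@workstation ~]$ oc delete subscription <subscription-name> -n <namespace>
[student@workstation ~]$ oc delete clusterserviceversion <csv-name> -n <namespace>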
Important
Uninstalling an operator can leave operator resources on the cluster. Always review
the operator documentation to learn about cleanup processes that you must follow
to completely remove an operator.
References
Refer to the Upgrading Installed Operators section in the Administrator
Tasks chapter in the Red Hat OpenShift Container Platform 4.14 Operators
documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/updating_clusters/index#updating-eus-to-eus-upgrade_eus-to-eus-
upgrade
For more information about creating custom catalogs with controlled operator
versions, refer to the Managing Custom Catalogs section in the Administrator
Tasks chapter in the Red Hat OpenShift Container Platform 4.14 Operators
documentation at
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/
html-single/operators/index#olm-deleting-operators-from-a-cluster
Quiz
1. Which component manages the updates of operators that are not cluster operators?
a. Operator Lifecycle Manager (OLM)
b. Cluster Version Operator (CVO)
c. Telemetry client (Telemeter)
2. In which two ways can you configure operator updates? (Choose two.)
a. With automatic updates, the OLM updates an operator as soon as the configured channel
has a later version of the operator.
b. With automatic updates, the OLM switches the update channel automatically to the
channel with the latest version of the operator, and updates to this version.
c. With manual updates, the OLM does not monitor channels, and you apply updates
manually.
d. With manual updates, the OLM updates an operator when the configured channel has a
later version of the operator, and an administrator approves the update.
3. In which two ways can you approve updates of an operator? (Choose two.)
a. Update the subscription resource with the intended version.
b. Use the web console to review and approve the install plan resource.
c. Modify the install plan resource by using the Kubernetes API to approve the update.
d. Update the CVO resource specification with the intended version.
Solution
1. Which component manages the updates of operators that are not cluster operators?
a. Operator Lifecycle Manager (OLM)
b. Cluster Version Operator (CVO)
c. Telemetry client (Telemeter)
2. In which two ways can you configure operator updates? (Choose two.)
a. With automatic updates, the OLM updates an operator as soon as the configured channel
has a later version of the operator.
b. With automatic updates, the OLM switches the update channel automatically to the
channel with the latest version of the operator, and updates to this version.
c. With manual updates, the OLM does not monitor channels, and you apply updates
manually.
d. With manual updates, the OLM updates an operator when the configured channel has a
later version of the operator, and an administrator approves the update.
3. In which two ways can you approve updates of an operator? (Choose two.)
a. Update the subscription resource with the intended version.
b. Use the web console to review and approve the install plan resource.
c. Modify the install plan resource by using the Kubernetes API to approve the update.
d. Update the CVO resource specification with the intended version.
Quiz
OpenShift Updates
Choose the correct answers to the following questions:
2. Which component manages the updates of operators that are not cluster operators?
a. Telemetry client (Telemeter)
b. Operator Lifecycle Manager (OLM)
c. Cluster Version Operator (CVO)
4. In which three ways can you discover usage of deprecated APIs? (Choose three.)
a. You can disable deprecated APIs, so that usage of deprecated APIs fails.
b. APIRequestCount objects count API requests. Review the request count for deprecated
APIs.
c. OpenShift monitoring includes alerts that notify administrators when the cluster receives
a request that uses a deprecated API.
d. OpenShift annotates workloads that use deprecated APIs.
e. If a request uses a deprecated API version, then the API server returns a deprecation
warning.
f. Cluster updates are not possible if deprecated APIs are in use.
Solution
OpenShift Updates
Choose the correct answers to the following questions:
2. Which component manages the updates of operators that are not cluster operators?
a. Telemetry client (Telemeter)
b. Operator Lifecycle Manager (OLM)
c. Cluster Version Operator (CVO)
4. In which three ways can you discover usage of deprecated APIs? (Choose three.)
a. You can disable deprecated APIs, so that usage of deprecated APIs fails.
b. APIRequestCount objects count API requests. Review the request count for deprecated
APIs.
c. OpenShift monitoring includes alerts that notify administrators when the cluster receives
a request that uses a deprecated API.
d. OpenShift annotates workloads that use deprecated APIs.
e. If a request uses a deprecated API version, then the API server returns a deprecation
warning.
f. Cluster updates are not possible if deprecated APIs are in use.
Summary
• A major benefit of OpenShift 4 architectural changes is that you can update your clusters Over-
the-Air (OTA).
• Red Hat provides a software distribution system that ensures the best path for updating your
OpenShift 4 cluster and the underlying operating system.
– The stable channel delivers updates that passed additional testing and validation in
operational clusters.
– The candidate channel delivers updates for testing feature acceptance in the next version of
OpenShift Container Platform.
– The eus channel (which is available only for Extended Update Support releases) extends the
maintenance phase.
• Red Hat does not support reverting your cluster to an earlier version.
• When a stable version of an API is released, the beta versions are marked as deprecated and are
removed after three Kubernetes releases.
• Requests to a deprecated API display warnings and trigger alerts. You can track deprecated API
usage by using APIRequestCount objects.
• The Operator Lifecycle Manager (OLM) can update operators that are installed in an OpenShift
cluster.
• For each installed operator, you can decide whether the OLM automatically applies updates, or
whether the updates require administrator approval.
• Operator providers can create multiple channels for an operator with different release policies.
Chapter 10
Comprehensive Review
Goal Review tasks from Red Hat OpenShift
Administration II: Configuring a Production Cluster.
• Secure Applications
Comprehensive Review
Objectives
After completing this section, you should have reviewed and refreshed the knowledge and skills
that you learned in Red Hat OpenShift Administration II: Configuring a Production Cluster.
• Deploy and update applications from resource manifests that are stored as YAML files.
• Deploy and update applications from resource manifests that are augmented by Kustomize.
• Deploy an application and its dependencies from resource manifests that are stored in an
OpenShift template.
• Deploy and update applications from resource manifests that are packaged as Helm charts.
• Configure compute resource quotas and Kubernetes resource count quotas per project and
cluster-wide.
• Configure default and maximum compute resource requirements for pods per project.
• Configure default quotas, limit ranges, role bindings, and other restrictions for new projects, and
the allowed users to self-provision new projects.
• Explain the operator pattern and different approaches for installing and updating Kubernetes
operators.
• Install and update operators by using the web console and the Operator Lifecycle Manager.
• Install and update operators by using the Operator Lifecycle Manager APIs.
• Create service accounts and apply permissions, and manage security context constraints.
• Run an application that requires access to the Kubernetes API of the application's cluster.
• Automate regular cluster and application management tasks by using Kubernetes cron jobs.
Lab
Outcomes
• Create a project template that sets quotas, ranges, and network policies.
The lab command copies the exercise files to the ~/DO280 directory and creates the
following users:
• do280-support
• do280-platform
• do280-presenter
• do280-attendee
The goal, as the cluster administrator, is to configure a dedicated cluster to host workshops
on different topics.
Each workshop requires a project, so that workshops are isolated from each other.
You must set up the cluster so that when the presenter creates a workshop project, the
project gets a base configuration.
The presenter must be mostly self-sufficient to administer a workshop with little help from
the workshop support team.
The workshop support team must deploy applications that administer workshops and that
enhance the workshop experience. You set up a project and the applications for this purpose
in a second lab.
Specifications
Use the following values to access the OpenShift cluster:
Item Value
• Create the groups with the specified users in the following table:
Group User
platform do280-platform
presenters do280-presenter
workshop-support do280-support
The lab start command creates the users with the redhat password.
• The presenters group consists of the people who deliver the workshops.
• The workshop-support group maintains the needed applications to support the workshops
and the workshop presenters.
• Ensure that only users from the following groups can create projects:
Group
platform
presenters
workshop-support
• An attendee must not be able to create projects. Because this exercise requires steps that
restart the Kubernetes API server, this configuration must persist across API server restarts.
• The platform group must be able to administer the cluster without restrictions.
• The workshop-support group must perform the following tasks for the workshop project:
– Create a workshop-specific attendees group.
– Assign the edit role to the attendees group.
– Add users to the attendees group.
• All the resources that the cluster creates with a new workshop project must use workshop as
the name for grading purposes.
• Each workshop must enforce constraints to prevent an attendee's workload from consuming all
the allocated resources for the workshop:
– A workload can use up to 750m of CPU.
– A workload can use up to 750Mi of memory.
You can use the templates that are provided in the quota.yaml, limitrange.yaml, and
networkpolicy.yaml files.
• As the do280-presenter user, you must create a workshop with the do280 name.
• As the do280-support user, you must create the do280-attendees group with the do280-
attendee user, and assign the edit role to the do280-attendees group.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Solution
Outcomes
• Create a project template that sets quotas, ranges, and network policies.
The lab command copies the exercise files to the ~/DO280 directory and creates the
following users:
• do280-support
• do280-platform
• do280-presenter
• do280-attendee
The goal, as the cluster administrator, is to configure a dedicated cluster to host workshops
on different topics.
Each workshop requires a project, so that workshops are isolated from each other.
You must set up the cluster so that when the presenter creates a workshop project, the
project gets a base configuration.
The presenter must be mostly self-sufficient to administer a workshop with little help from
the workshop support team.
The workshop support team must deploy applications that administer workshops and that
enhance the workshop experience. You set up a project and the applications for this purpose
in a second lab.
1.2. Open a terminal window and log in as the admin user with the redhatocp password.
2. Create the following groups and add a user as specified in the following table.
Group User
workshop-support do280-support
presenters do280-presenter
platform do280-platform
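For example, a sketch that uses the oc adm groups new command to create each group with its member:
[student@workstation ~]$ oc adm groups new workshop-support do280-support
[student@workstation ~]$ oc adm groups new presenters do280-presenter
[student@workstation ~]$ oc adm groups new platform do280-platform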
2.7. Use the oc get groups command to verify that the group configuration is correct.
3. Grant to the workshop-support group the admin and the custom manage-groups
cluster roles. You must create the manage-groups custom cluster role from the
groups-role.yaml file.
3.2. Run the oc create command to create the manage-groups cluster role in the
groups-role.yaml file.
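For example, assuming that the current directory contains the groups-role.yaml file:
[student@workstation ~]$ oc create -f groups-role.yaml
[student@workstation ~]$ oc adm policy add-cluster-role-to-group admin workshop-support
[student@workstation ~]$ oc adm policy add-cluster-role-to-group manage-groups workshop-support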
4. Create a cluster role binding to assign the cluster-admin cluster role to the platform
group.
Note
When you execute the oc adm policy add-cluster-role-to-group
cluster-admin platform command to add the cluster role to the new group,
a naming collision occurs with an existing object of that name. Consequently, the
system creates the object with -x appended to the name, where x is a number that
starts at 0 and increments.
To view the new role binding, use the oc get clusterrolebinding | grep
^cluster-admin command to list all cluster role bindings that begin with
cluster-admin. Then, run oc describe on the listed item with the highest -x
value to view the details for your new binding.
5. Update the self-provisioners cluster role binding so that only users from the platform,
presenters, and workshop-support groups can create projects. Also, make this change permanent
by setting the rbac.authorization.kubernetes.io/autoupdate annotation to the false value.
5.1. Use the oc edit command to edit the self-provisioners cluster role binding.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "false"
creationTimestamp: "2023-01-24T23:31:00Z"
name: self-provisioners
resourceVersion: "250330"
uid: a6053896-f68f-41ff-9bb3-5da579a701bc
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: self-provisioner
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: platform
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: workshop-support
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: presenters
5.2. As the do280-attendee user, verify that you cannot create a project.
Log in as the do280-attendee user with the redhat password.
You don't have any projects. Contact your system administrator to request a
project.
6. As the admin user, create a template-test namespace to design the project template.
7. Create a resource quota named workshop in the template-test project with the following values.
Quota Value
limits.cpu 2
limits.memory 1Gi
requests.cpu 1500m
requests.memory 750Mi
7.1. Edit the quota.yaml file and replace the CHANGE_ME label to match the following
definition.
apiVersion: v1
kind: ResourceQuota
metadata:
name: workshop
namespace: template-test
spec:
hard:
limits.cpu: 2
limits.memory: 1Gi
requests.cpu: 1500m
requests.memory: 750Mi
7.2. Use the oc create command to create the quota in the template-test project.
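For example, assuming that the current directory contains the quota.yaml file:
[student@workstation ~]$ oc create -f quota.yaml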
8. Create a limit range named workshop in the template-test project with the following values.
Limit Value
max.cpu 750m
max.memory 750Mi
default.cpu 500m
default.memory 500Mi
defaultRequest.cpu 100m
defaultRequest.memory 250Mi
8.1. Edit the limitrange.yaml file and replace the CHANGE_ME label to match the
following definition.
apiVersion: v1
kind: LimitRange
metadata:
name: workshop
namespace: template-test
spec:
limits:
- max:
cpu: 750m
memory: 750Mi
default:
cpu: 500m
memory: 500Mi
defaultRequest:
cpu: 100m
memory: 250Mi
type: Container
8.2. Use the oc create command to create the limit range in the template-test
project.
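For example:
[student@workstation ~]$ oc create -f limitrange.yaml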
9. Create a network policy to accept traffic from within the workshop project or from outside
the cluster. To identify the workshop project traffic, label the template-test namespace
with the workshop=template-test label.
9.1. Use the oc create deployment command to create a deployment without resource
specifications.
9.3. Use the oc debug command to run the curl command from a pod in the default
project.
Use the curl command from the default namespace to query the NGINX server
that runs in the test workload.
9.4. Use the oc label command to add the label to the template-test namespace.
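For example:
[student@workstation ~]$ oc label namespace template-test workshop=template-test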
9.5. Edit the network policy from the networkpolicy.yaml file. Replace the CHANGE_ME
labels according to the following specification.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: workshop
namespace: template-test
spec:
podSelector: {}
ingress:
- from:
- namespaceSelector:
matchLabels:
workshop: template-test
- namespaceSelector:
matchLabels:
policy-group.network.openshift.io/ingress: ""
9.6. Run the oc create command to create the policy in the template-test project.
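For example:
[student@workstation ~]$ oc create -f networkpolicy.yaml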
9.7. Verify that you cannot connect to the workshop pod from the default project.
9.8. Verify that you can connect to the workshop pod from the workshop project.
10. Create the workshop project template by using the previously created template resources.
10.2. Use the oc get command to create a YAML list with the following resources:
• resourcequota/workshop
• limitrange/workshop
• networkpolicy/workshop
• Cut the contents of the items stanza and paste them immediately before the
parameters stanza. Keep the original indentation, because every YAML item of the
list must appear at the beginning of the line.
• Remove the following keys from the limit range and quota definitions:
– creationTimestamp
– resourceVersion
– uid
– status
– generation
Then, move the resource list to the objects key after line 31. The project-
template.yaml file has the following expected content.
apiVersion: template.openshift.io/v1
kind: Template
metadata:
name: project-request
objects:
- apiVersion: project.openshift.io/v1
kind: Project
metadata:
annotations:
openshift.io/description: ${PROJECT_DESCRIPTION}
openshift.io/display-name: ${PROJECT_DISPLAYNAME}
openshift.io/requester: ${PROJECT_REQUESTING_USER}
name: ${PROJECT_NAME}
labels:
workshop: ${PROJECT_NAME}
spec: {}
- apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: workshop
namespace: ${PROJECT_NAME}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: ${PROJECT_ADMIN_USER}
- apiVersion: v1
kind: ResourceQuota
metadata:
annotations:
name: workshop
namespace: ${PROJECT_NAME}
spec:
hard:
limits.cpu: "2"
limits.memory: 1Gi
requests.cpu: 1500m
requests.memory: 750Mi
- apiVersion: v1
kind: LimitRange
metadata:
annotations:
name: workshop
namespace: ${PROJECT_NAME}
spec:
limits:
- default:
cpu: 500m
memory: 500Mi
defaultRequest:
cpu: 100m
memory: 250Mi
max:
cpu: 750m
memory: 750Mi
type: Container
- apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
annotations:
name: workshop
namespace: ${PROJECT_NAME}
spec:
ingress:
- from:
- namespaceSelector:
matchLabels:
workshop: ${PROJECT_NAME}
- namespaceSelector:
matchLabels:
policy-group.network.openshift.io/ingress: ""
podSelector: {}
policyTypes:
- Ingress
parameters:
- name: PROJECT_NAME
- name: PROJECT_DISPLAYNAME
- name: PROJECT_DESCRIPTION
- name: PROJECT_ADMIN_USER
- name: PROJECT_REQUESTING_USER
10.4. Create the project template in the project-template.yaml file by using the oc
create command in the openshift-config namespace.
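For example:
[student@workstation ~]$ oc create -f project-template.yaml -n openshift-config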
10.5. Use the oc edit command to change the cluster project configuration.
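For example:
[student@workstation ~]$ oc edit projects.config.openshift.io cluster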
apiVersion: config.openshift.io/v1
kind: Project
metadata:
...output omitted...
name: cluster
...output omitted...
spec:
projectRequestTemplate:
name: project-request
10.6. Use the watch command to view the API server pods.
Wait until new pods are created. Press Ctrl+C to exit the watch command.
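For example, assuming that the OpenShift API server pods redeploy in the openshift-apiserver namespace:
[student@workstation ~]$ watch oc get pods -n openshift-apiserver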
11.3. Verify that the oc new-project command creates the following resources from the
template:
• Quota
• Limit range
• Network policy
11.4. Verify that the do280 project definition has the workshop=do280 label.
12. As the do280-support user, create the do280-attendees group. Then, assign the edit
role to the do280-attendees group, and add the do280-attendee user to the group.
12.3. Assign the edit role to the do280-attendees group in the do280 project.
Add the edit role to the do280-attendees group in the do280 project.
12.4. As the do280-attendee user, verify that you cannot access the do280 project.
Log in as the do280-attendee user with the redhat password.
12.5. As the do280-support user, add the do280-attendee user to the do280-
attendees group.
Log in as the do280-support user with the redhat password.
Use the oc adm groups command to add the do280-attendee user to the do280-attendees group.
12.6. As the do280-attendee user, verify that you can create workloads in the do280
project.
Log in as the do280-attendee user with the redhat password.
13. Change to the home directory to prepare for the next exercise.
[student@workstation appsec-review]$ cd
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Lab
Secure Applications
Configure a project that requires custom settings.
Outcomes
• Create a project quota.
• Use role-based access control to grant permissions to service accounts and groups.
The lab command copies the exercise files into the ~/DO280/labs/compreview-apps
directory and creates the workshop-support group with the do280-support user. The
lab command also restores the project template configuration from the previous exercise.
You must set up an application that automatically deletes completed workshops, and set up
a social media API that attendees from all workshops use.
Specifications
• Create the workshop-support namespace with the category: support label.
• Workloads from the workshop-support namespace must enforce the following constraints:
• Any quota or limit range must have the workshop-support name for grading purposes.
• As the do280-support user, deploy the project-cleaner application from the project-
cleaner/example-pod.yaml file to the workshop-support namespace by using a
project-cleaner cron job that runs every minute.
The project cleaner deletes projects with the workshop label that exist for more than 10
seconds. This short expiration time is deliberate for this lab.
• You must create a project-cleaner-sa service account to use in the project cleaner
application.
• The role that the project cleaner needs is defined in the project-cleaner/cluster-
role.yaml file.
• You must configure the beeper-api application to use TLS end-to-end by using the following specification:
– Use the beeper-api.pem certificate and the beeper-api.key in the certs directory.
– Configure the /etc/pki/beeper-api/ path as the mount point for the certificate and key.
– Set the TLS_ENABLED environment variable to the true value.
• The database pods, which are pods in the workshop-support namespace with the
app=beeper-db label, must accept only TCP traffic from the beeper-api pods in the
workshop-support namespace on the 5432 port. You can use the category=support label
to identify the pods that belong to the workshop-support namespace.
• Configure the cluster network so that the workshop-support namespace accepts only
external ingress traffic to pods that listen on the 8080 port, and blocks traffic from other
projects.
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Solution
Secure Applications
Configure a project that requires custom settings.
Outcomes
• Create a project quota.
• Use role-based access control to grant permissions to service accounts and groups.
The lab command copies the exercise files into the ~/DO280/labs/compreview-apps
directory and creates the workshop-support group with the do280-support user. The
lab command also restores the project template configuration from the previous exercise.
You must set up an application that automatically deletes completed workshops, and set up
a social media API that attendees from all workshops use.
2. Create and prepare the workshop-support namespace with the following actions:
2.2. Use the oc label command to add the category=support label to the
workshop-support namespace.
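For example:
[student@workstation compreview-apps]$ oc label namespace workshop-support category=support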
2.4. Create a cluster role binding to assign the admin cluster role to the workshop-
support group.
3. Create the resource quota for the workshop-support namespace with the following
specification.
Quota Value
limits.cpu 4
limits.memory 4Gi
requests.cpu 3500m
requests.memory 3Gi
4. Create a limit range named workshop-support in the workshop-support namespace with the
following values.
Limit Value
default.cpu 300m
default.memory 400Mi
defaultRequest.cpu 100m
defaultRequest.memory 250Mi
4.1. Edit the limitrange.yaml file and replace the CHANGE_ME label to match the
following definition.
apiVersion: v1
kind: LimitRange
metadata:
name: workshop-support
namespace: workshop-support
spec:
limits:
- default:
cpu: 300m
memory: 400Mi
defaultRequest:
cpu: 100m
memory: 250Mi
type: Container
4.2. Use the oc apply command to create the limit range in the workshop-support
project.
[student@workstation compreview-apps]$ cd \
~/DO280/labs/compreview-apps/project-cleaner
6. As the do280-support user, create the project-cleaner cron job by editing the
cron-job.yaml file and by using the example-pod.yaml pod manifest as the job
template. Configure the cron job to run every minute.
• Replace the CHANGE_ME label with the "*/1 * * * *" schedule to execute the job
every minute.
• Replace the CHANGE_ME label in the jobTemplate definition with the spec
definition from the example-pod.yaml pod manifest.
• Replace the CHANGE_ME label in the serviceAccountName key with the project-
cleaner-sa service account.
Although the long image name might show across two lines, you must add it as one
line.
apiVersion: batch/v1
kind: CronJob
metadata:
name: project-cleaner
namespace: workshop-support
spec:
schedule: "*/1 * * * *"
concurrencyPolicy: Forbid
jobTemplate:
spec:
template:
spec:
restartPolicy: Never
serviceAccountName: project-cleaner-sa
containers:
- name: project-cleaner
image: registry.ocp4.example.com:8443/redhattraining/do280-project-
cleaner:v1.1
imagePullPolicy: Always
env:
- name: "PROJECT_TAG"
value: "workshop"
- name: "EXPIRATION_SECONDS"
value: "10"
resources:
limits:
cpu: 100m
memory: 200Mi
6.4. Verify that the project cleaner application is deployed correctly, by creating a
clean-test project.
Wait for a successful job run. Then, get the pod name from the last job run.
Note
You might see deleted projects from other exercises in the course.
6.5. Verify that the cron job deletes the clean-test project, by using the oc get
project command.
7.2. Use the oc apply command to create the database in the workshop-support
namespace.
7.3. Verify that the database pod is running by using the oc get pod command to get the
pods with the app=beeper-db label.
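For example:
[student@workstation compreview-apps]$ oc get pod -l app=beeper-db -n workshop-support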
8.1. Create the beeper-api-cert secret by using the beeper-api.pem certificate and
the beeper-api.key key from the lab directory.
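For example, assuming that the certificate and key are in the certs subdirectory of the lab directory:
[student@workstation compreview-apps]$ oc create secret tls beeper-api-cert \
--cert certs/beeper-api.pem --key certs/beeper-api.key -n workshop-support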
8.2. Edit the beeper-api deployment in the deployment.yaml file to mount the
beeper-api-cert secret on the /etc/pki/beeper-api/ path.
apiVersion: apps/v1
kind: Deployment
metadata:
name: beeper-api
namespace: workshop-support
spec:
...output omitted...
spec:
containers:
- name: beeper-api
...output omitted...
env:
- name: TLS_ENABLED
value: "false"
volumeMounts:
- name: beeper-api-cert
mountPath: /etc/pki/beeper-api/
volumes:
- name: beeper-api-cert
secret:
defaultMode: 420
secretName: beeper-api-cert
8.3. Edit the beeper-api deployment in the deployment.yaml file on lines 32, 37, 42,
and 47 to configure TLS for the application and the startup, readiness, and liveness
probes.
apiVersion: apps/v1
kind: Deployment
metadata:
name: beeper-api
namespace: workshop-support
spec:
...output omitted...
spec:
containers:
- name: beeper-api
...output omitted...
ports:
- containerPort: 8080
readinessProbe:
httpGet:
port: 8080
path: /readyz
scheme: HTTPS
livenessProbe:
httpGet:
port: 8080
path: /livez
scheme: HTTPS
startupProbe:
httpGet:
path: /readyz
port: 8080
scheme: HTTPS
failureThreshold: 30
periodSeconds: 3
env:
- name: TLS_ENABLED
value: "true"
...output omitted...
8.5. Edit the service.yaml file to configure the beeper-api service to listen on
the standard HTTPS 443 port and to forward connections to pods with the app:
beeper-api label on port 8080.
apiVersion: v1
kind: Service
metadata:
name: beeper-api
namespace: workshop-support
spec:
selector:
app: beeper-api
ports:
- port: 443
targetPort: 8080
name: https
9. Expose the beeper API outside the cluster by using the FQDN from the certificate that the
corporate CA signed.
9.1. Create a passthrough route for the beeper-api service by using the beeper-
api.apps.ocp4.example.com hostname.
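For example:
[student@workstation compreview-apps]$ oc create route passthrough beeper-api \
--service beeper-api --hostname beeper-api.apps.ocp4.example.com -n workshop-support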
10. Optionally, open a web browser and verify that you can access the API by navigating to the
https://beeper-api.apps.ocp4.example.com/swagger-ui.html URL. When you
see the warning about the security risk, click Advanced… and then click Accept the Risk and
Continue.
11. Configure network policies to allow only TCP ingress traffic on port 5432 to database pods
from the beeper-api pods.
11.1. Verify that you can access the beeper-db service from the workshop-support
namespace by testing TCP connectivity to the database service. Use the oc debug
command to create a pod with the nc command with the -z option to test TCP access.
11.2. Create an entry in the database by using the following curl command.
11.3. Edit the db-networkpolicy.yaml file so that only pods with the app: beeper-api label can connect to database pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-policy
  namespace: workshop-support
spec:
  podSelector:
    matchLabels:
      app: beeper-db
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          category: support
      podSelector:
        matchLabels:
          app: beeper-api
    ports:
    - protocol: TCP
      port: 5432
11.5. Verify that you cannot connect to the database, by running the previous nc command.
11.6. Verify that the API pods have access to the database pods, by running the curl
command to query the API by using the external route.
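The curl command from step 11.2 is not reproduced in this excerpt. As a generic check that the route still reaches the API, you can request a known endpoint, for example the Swagger UI page from step 10; the -k option skips certificate validation if the corporate CA is not in the local trust store. The lab's actual command exercises an endpoint that uses the database:
[student@workstation appsec-review]$ curl -sk \
    https://beeper-api.apps.ocp4.example.com/swagger-ui.html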
12. Configure network policies in the workshop-support namespace to accept only ingress
connections from the OpenShift router pods to port 8080.
12.1. Verify that you can access the API service from the workshop-support namespace by testing TCP connectivity. Use the oc debug command to create a pod, and run the nc command with the -z option to test TCP access.
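A sketch of one way to test, assuming again that the debug container image includes the nc command; the connection goes to the beeper-api service port (443), which forwards to port 8080 on the pods:
[student@workstation appsec-review]$ oc debug deployment/beeper-db -n workshop-support \
    -- nc -z -v beeper-api 443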
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: beeper-api-ingresspolicy
  namespace: workshop-support
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          policy-group.network.openshift.io/ingress: ""
    ports:
    - protocol: TCP
      port: 8080
12.4. Verify that you cannot access the API service from the workshop-support namespace. Use the oc debug command to create a pod, and run the nc command with the -z option to test TCP access.
12.5. Verify that the API pods are accessible from outside the cluster by running the curl
command to query the API external route.
[student@workstation appsec-review]$ cd
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Lab
Outcomes
• Deploy an application from a chart.
Use the lab command to prepare your system for this exercise.
This command ensures that the cluster API is reachable and prepares the environment for
the exercise.
Specifications
Deploy an application that uses a database by using a Helm chart and Kustomization files. Access
the application by using a route.
• Use the developer user with the developer password for this exercise.
• Deploy a MySQL database by using the mysql-persistent Helm chart in the http://helm.ocp4.example.com/charts repository. Use the latest version in the repository, and the default resource names that the chart generates.
The /home/student/DO280/solutions/compreview-package/roster/overlays/production/ directory contains the solution kustomization.yaml file and the patch-roster-prod.yaml file.
• Verify that the application creates a route, and that the application is available through the route
by using the TLS/SSL protocol (HTTPS).
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.
Solution
Outcomes
• Deploy an application from a chart.
Use the lab command to prepare your system for this exercise.
This command ensures that the cluster API is reachable and prepares the environment for
the exercise.
1.1. Use the helm repo list command to list the repositories that are configured for the
student user.
If the do280-repo repository is present, then continue to the next step. Otherwise,
add the repository.
1.2. Use the helm search command to list all the charts in the repository.
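The chart repository URL comes from the lab specifications; standard Helm commands for these two steps look like the following:
[student@workstation compreview-package]$ helm repo list
[student@workstation compreview-package]$ helm repo add do280-repo http://helm.ocp4.example.com/charts
[student@workstation compreview-package]$ helm search repo do280-repo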
2.1. Log in to the cluster as the developer user with the developer password.
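The cluster API URL is not shown in this excerpt; in the classroom it is typically https://api.ocp4.example.com:6443, which gives a command of this shape:
[student@workstation compreview-package]$ oc login -u developer -p developer \
    https://api.ocp4.example.com:6443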
3.1. Use the helm install command to create a release of the do280-repo/mysql-persistent chart.
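A sketch of the command, assuming that you already switched to the project for this exercise; roster-db is a hypothetical release name:
[student@workstation compreview-package]$ helm install roster-db do280-repo/mysql-persistent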
3.2. Use the watch command to verify that the pods are running. Wait for the mysql-1-deploy pod to show a Completed status.
4. Examine the provided Kustomize configuration and the deployed chart, and verify that
the production overlay generates a deployment, service, route, configuration map, and
a secret. Verify that the patch-roster-prod.yaml patch file applies the liveness and
readiness probes to the roster deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
...output omitted...
spec:
  replicas: 1
  selector:
    matchLabels:
      app: roster
  template:
  ...output omitted...
    spec:
      containers:
      - image: registry.ocp4.example.com:8443/redhattraining/do280-roster:v1
        name: do280-roster
        envFrom:
        - configMapRef:
            name: roster
        - secretRef:
            name: roster
...output omitted...
The deployment does not set any database connection configuration directly. Instead, it extracts environment variables from a roster configuration map and a roster secret.
4.4. Use the oc kustomize command to verify that the production overlay generates
a deployment, service, route, configuration map, and a secret, and configures the
liveness and readiness probes to the roster deployment.
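A command of this shape, run from a directory that contains the roster/overlays/production/ path from the specifications, produces output such as the listing that follows:
[student@workstation compreview-package]$ oc kustomize roster/overlays/production/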
...output omitted...
kind: ConfigMap
...output omitted...
---
apiVersion: v1
kind: Secret
...output omitted...
---
apiVersion: v1
kind: Service
...output omitted...
---
apiVersion: apps/v1
kind: Deployment
metadata:
...output omitted...
spec:
  ...output omitted...
  template:
    ...output omitted...
    spec:
      containers:
      ...output omitted...
        livenessProbe:
          initialDelaySeconds: 20
          periodSeconds: 30
          tcpSocket:
            port: 9090
          timeoutSeconds: 3
        name: roster
        ports:
        - containerPort: 9090
          protocol: TCP
        readinessProbe:
          initialDelaySeconds: 3
          periodSeconds: 10
          tcpSocket:
            port: 9090
          timeoutSeconds: 3
---
apiVersion: route.openshift.io/v1
kind: Route
...output omitted...
5.2. Use the watch command to verify that the pods are running. Wait for the roster pod
to show the Running status.
5.3. Use the oc get route command to obtain the application URL.
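For example, run from the project where the overlay was applied; the HOST/PORT column shows the application URL:
[student@workstation compreview-package]$ oc get route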
[student@workstation compreview-package]$ cd
Evaluation
As the student user on the workstation machine, use the lab command to grade your work.
Correct any reported failures and rerun the command until successful.
Finish
As the student user on the workstation machine, use the lab command to complete this
exercise. This step is important to ensure that resources from previous exercises do not impact
upcoming exercises.