Cloud-Native Development With OpenShift and Kubernetes
The contents of this course and all its modules and related materials, including handouts to audience members, are
Copyright © 2022 Red Hat, Inc.
No part of this publication may be stored in a retrieval system, transmitted or reproduced in any way, including, but
not limited to, photocopy, photograph, magnetic, electronic or other record, without the prior written permission of
Red Hat, Inc.
This instructional program, including all material provided herein, is supplied without any guarantees from Red Hat,
Inc. Red Hat, Inc. assumes no liability for damages or legal action arising from the use or misuse of contents or details
contained herein.
If you believe Red Hat training materials are being used, copied, or otherwise improperly distributed, please send
email to [email protected] or phone toll-free (USA) +1 (866) 626-2994 or +1 (919) 754-3700.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, JBoss, OpenShift, Fedora, Hibernate, Ansible, CloudForms,
RHCA, RHCE, RHCSA, Ceph, and Gluster are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries
in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS® is a registered trademark of Hewlett Packard Enterprise Development LP or its subsidiaries in the United
States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is a trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open
source or commercial project.
The OpenStack word mark and the Square O Design, together or apart, are trademarks or registered trademarks
of OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's
permission. Red Hat, Inc. is not affiliated with, endorsed by, or sponsored by the OpenStack Foundation or the
OpenStack community.
DO100B-K1.22-en-2-7067502 vii
Document Conventions
This section describes various conventions and practices used throughout all
Red Hat Training courses.
Admonitions
Red Hat Training courses use the following admonitions:
References
These describe where to find external documentation relevant to a
subject.
Note
These are tips, shortcuts, or alternative approaches to the task at hand.
Ignoring a note should have no negative consequences, but you might
miss out on something that makes your life easier.
Important
These provide details of information that is easily missed: configuration
changes that only apply to the current session, or services that need
restarting before an update will apply. Ignoring these admonitions will
not cause data loss, but may cause irritation and frustration.
Warning
These should not be ignored. Ignoring these admonitions will most likely
cause data loss.
Inclusive Language
Red Hat Training is currently reviewing its use of language in various areas
to help remove any potentially offensive terms. This is an ongoing process
and requires alignment with the products and services covered in Red Hat
Training courses. Red Hat appreciates your patience during this process.
Introduction
• Microsoft Windows 10
Memory: 8 GB minimum, 16 GB or more recommended
This course follows the DO100a course, which instructed you to install the following programs:
• Minikube (Optional)
Important
If you did not finish the prerequisite course, ensure you have installed and correctly
configured the required programs.
Visit the DO100a course for more information about the prerequisite workstation
configuration that is necessary for this course.
• If you use Bash as the default shell, then your prompt might match the [user@host ~]$
prompt used in the course examples, although different Bash configurations can produce
different results.
• If you use another shell, such as zsh, then your prompt format will differ from the prompt used
in the course examples.
• When performing the exercises, interpret the [user@host ~]$ prompt used in the course as a
representation of your system prompt.
Ubuntu
• When performing the exercises, interpret the [user@host ~]$ prompt used in the course as a
representation of your Ubuntu prompt.
macOS
• When performing the exercises, interpret the [user@host ~]$ prompt used in the course as a
representation of your macOS prompt.
Microsoft Windows
• Windows does not support Bash natively. Instead, you must use PowerShell.
• When performing the exercises, interpret the [user@host ~]$ Bash prompt as a
representation of your Windows PowerShell prompt.
• For some commands, Bash syntax and PowerShell syntax are similar, such as cd or ls. You can
also use the slash character (/) in file system paths.
• For other commands, the course provides help to transform Bash commands into equivalent
PowerShell commands.
• The Windows firewall might ask for additional permissions in certain exercises.
Alternatively, on all systems you can type long commands on a single line.
Chapter 1
Deploying Managed
Applications
Goal: Introduce the Deployment resource and link it to container management.
Chapter 1 | Deploying Managed Applications
Objectives
After completing this section, you should be able to use Kubernetes container management
capabilities to deploy containerized applications in a declarative way.
Managing Containers
One of the most significant features of Kubernetes is that it enables developers to use a
declarative approach to automatic container life cycle management. In a declarative approach,
developers declare what the state of the application should be, and Kubernetes updates the
containers to reach that state. For example, developers can declare:
• The number of instances (replicas) of the application that Kubernetes must run simultaneously.
• The strategy for updating the replicas when a new version of the application is available.
With this information, Kubernetes deploys the application, keeps the number of replicas, and
terminates or redeploys application containers when the state of the application does not match
the declared configuration. Kubernetes continuously revisits this information and updates the
state of the application accordingly.
Automatic deployment
Kubernetes deploys the configured application without manual intervention.
Automatic scaling
Kubernetes creates as many replicas of the application as requested. If the number of
requested replicas increases or decreases, then Kubernetes automatically creates new
containers (scale-up) or terminates excess containers (scale-down) to match the requested
number.
Automatic restart
If a replica terminates unexpectedly or becomes unresponsive, then Kubernetes deletes the
associated container and automatically spins up a new one to match the expected replica
count.
Automatic rollout
When a new version of the application is detected, or a new configuration applies, Kubernetes
automatically updates the existing replicas. Kubernetes monitors this rollout process to make
sure the application retains the declared number of active replicas.
Creating a Deployment
A Deployment resource contains all the information Kubernetes needs to manage the life cycle of
the application's containers.
The simplest way to create a Deployment resource is by using the kubectl create
deployment command.
Use the --output yaml parameter to get detailed information about the resource in the YAML
format. Alternatively, you can use the short -o yaml version.
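As a sketch of both commands (the deployment name my-deploy is illustrative, not one used in the course; the image is the one shown later in this chapter):

```bash
# Create a Deployment that runs the example image with default settings.
kubectl create deployment my-deploy \
  --image quay.io/redhattraining/versioned-hello:v1.1

# Inspect the generated Deployment resource in YAML format.
kubectl get deployment my-deploy -o yaml
```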
Note
Review the kubectl get options and adapt the output to your needs.
For example, use the --show-managed-fields=false option to skip the
metadata.managedFields section of the Deployment. Use different values for
the -o option to format or filter the output. Find more details in the links in
the References section.
The Kubernetes declarative deployment approach enables you to apply GitOps principles.
GitOps focuses on a versioned repository, such as Git, that stores your deployment
configuration.
Following GitOps principles, you can store the Deployment manifest in YAML or JSON format in
your application repository. Then, after the appropriate changes, you can create the Deployment
manually or programmatically by using the kubectl apply -f deployment-file command.
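For example, assuming the manifest is stored as deployment.yaml in your repository (the file name is illustrative):

```bash
# Create or update the Deployment from the versioned manifest.
kubectl apply -f deployment.yaml
```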
You can also edit Deployment resource manifests directly from the command line. The kubectl
edit deployment deployment-name command retrieves the Deployment resource and
opens it in a local text editor (the exact editor depends on your system and local configuration).
When the editor closes, the kubectl edit command applies any changes to the manifest.
Note
The output of the kubectl get deployment deployment-name -o yaml command
contains run time information about the deployment.
For example, the output contains the current deployment status, creation timestamps,
and similar information. Deployment YAML files with run time information might not
be reusable across namespaces and projects.
Deployment YAML files that you want to check in to your version control system,
such as Git, should not contain any run time information. Kubernetes generates this
information as needed.
apiVersion: apps/v1
kind: Deployment
metadata:
  ...output omitted...
  labels:
    app: versioned-hello
  name: versioned-hello
  ...output omitted...
spec:
  ...output omitted...
  replicas: 3
  ...output omitted...
  selector:
    matchLabels:
      app: versioned-hello
  strategy:
    type: RollingUpdate
  ...output omitted...
  template:
    metadata:
      labels:
        app: versioned-hello
      ...output omitted...
    spec:
      containers:
      - image: quay.io/redhattraining/versioned-hello:v1.1
        name: versioned-hello
...output omitted...
status:
  ...output omitted...
  replicas: 3
  ...output omitted...
Includes a list of pod definitions for each new container created by the deployment as well as
other fields to control container management.
Current status of the deployment. This section is automatically generated and updated by
Kubernetes.
Replicas
The replicas section under the spec section (also denoted as the spec.replicas section)
declares the number of expected replicas that Kubernetes should keep running. Kubernetes will
continuously review the number of replicas that are running and responsive, and scale accordingly.
Deployment Strategy
When the application changes due to an image change or a configuration change, Kubernetes
replaces the old running containers with updated ones. However, redeploying all replicas at
once can cause problems for the application, such as service downtime. Kubernetes supports
the following deployment strategies:
RollingUpdate
Kubernetes terminates and deploys pods progressively. This strategy defines a maximum
number of pods that can be unavailable at any time, that is, the difference between the
desired replicas and the available pods. It also defines a maximum number of pods that can
be deployed over the number of desired replicas. Both values default to 25% of the
desired replicas.
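A sketch of how these limits appear in a Deployment manifest; the explicit values shown are the defaults, so they serve only to illustrate the fields:

```yaml
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most one of the four replicas may be unavailable
      maxSurge: 25%         # at most one extra replica may exist during the rollout
```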
Recreate
This strategy means that no issues are expected to impact the application, so Kubernetes
terminates all replicas and recreates them on a best effort basis.
Note
Different distributions of Kubernetes include other deployment strategies. Refer to
the documentation of the distribution for details.
Template
When Kubernetes deploys new pods, it needs the exact manifest to create the pod. The
spec.template.spec section holds exactly the same structure as a Pod manifest. Kubernetes
uses this section to create new pods as needed.
Labels
Labels are key-value pairs assigned in resource manifests. Both developers and Kubernetes
use labels to identify sets of grouped resources, such as all resources belonging to the same
application or environment. Depending on the position inside the Deployment, labels have a
different meaning:
metadata.labels
Labels applied directly to the manifest, in this case the Deployment resource. You can find
objects matching these labels with the kubectl get kind --selector="key=value" command.
For example, kubectl get deployment --selector="app=myapp" returns all
deployments with the label app=myapp in the metadata.labels section.
spec.selector.matchLabels
Determines which pods are under the control of the Deployment resource. Even if some pods
in the cluster were not deployed via this Deployment, if they match the labels in this section,
then they count as replicas and follow the rules defined in this Deployment manifest.
spec.template.metadata.labels
Like the rest of the template, it defines how Kubernetes creates new pods using this
Deployment. Kubernetes will label all the pods created by this Deployment resource with
these values.
References
Deployments
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
DeploymentSpec v1
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/
#deploymentspec-v1-apps
Guided Exercise
Outcomes
You should be able to:
Make sure your kubectl context refers to a namespace where you have enough
permissions, usually username-dev or username-stage. Use the kubectl config
set-context --current --namespace=namespace command to switch to the
appropriate namespace.
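For example, to target the username-dev namespace (replace username with your own user name):

```bash
# Switch the current kubectl context to the development namespace.
kubectl config set-context --current --namespace=username-dev
```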
Instructions
In this exercise, you will deploy an existing application by using the container image quay.io/
redhattraining/do100-versioned-hello:v1.0-external.
1.1. Use the kubectl create deployment command to create the deployment.
Name the deployment do100-versioned-hello.
Note
This course uses the backslash character (\) to break long commands. On Linux and
macOS, you can use the line breaks.
On Windows, use the backtick character (`) to break long commands. Alternatively,
do not break long commands.
Refer to Orientation to the Classroom Environment for more information about long
commands.
1.2. Validate that the deployment created the expected application pod. Use the
kubectl get pods -w command in a new terminal. Keep the command running
for observing further updates.
1.3. Use the kubectl describe deployment command to get relevant information
about the deployment:
Optionally, use the kubectl get deployment command to get the full manifest
for the deployment:
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      ...output omitted...
      labels:
        app: do100-versioned-hello
    spec:
      containers:
      - image: quay.io/redhattraining/do100-versioned-hello:v1.0-external
        ...output omitted...
        name: do100-versioned-hello
...output omitted...
2.1. Edit the number of replicas in the Deployment resource using the kubectl scale
deployment command:
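A sketch of the scale command, assuming the deployment created earlier in this exercise and the two replicas that the following steps expect:

```bash
# Declare that two replicas of the application should run.
kubectl scale deployment do100-versioned-hello --replicas=2
```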
2.2. Validate that Kubernetes deployed a new replica pod. Go back to the terminal where
the kubectl get pods -w command is running. Observe how the output displays
a new pod named do100-versioned-hello-76c4494b5d-qtfs9. The pod
updates from the Pending status to ContainerCreating and finally to Running.
3. Verify high availability features in Kubernetes. Kubernetes must ensure that two replicas are
available at all times. Terminate one pod and observe how Kubernetes creates a new pod to
ensure the desired number of replicas.
3.1. Terminate one of the pods by using the kubectl delete pod command. This
action emulates a failing application or unexpected pod unavailability.
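For example, using one of the pod names reported by kubectl get pods (the random suffix differs on your system):

```bash
# Delete one replica pod to simulate an unexpected failure.
kubectl delete pod do100-versioned-hello-76c4494b5d-qtfs9
```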
3.2. In the terminal running the kubectl get pods -w command, observe that the
deleted pod changed to the Terminating status.
Immediately after the pod becomes unavailable, Kubernetes creates a new replica:
Note
The Terminating and Pending statuses might appear several times in the
output. These statuses aggregate more fine-grained pod statuses, so the output
reflects intermediate transitions during the deployment.
4. Deploy a new version of the application and observe the default deployment
rollingUpdate strategy.
4.1. Edit the deployment and update the container image version from v1.0-external
to v1.1-external. Use the kubectl edit deployment command to edit the
Deployment manifest:
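The edit command might look like the following; in the editor that opens, change the image tag from v1.0-external to v1.1-external, then save and close:

```bash
# Open the Deployment manifest in your local text editor.
kubectl edit deployment do100-versioned-hello
```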
4.2. Analyze the status timeline for the pods. Observe how Kubernetes orchestrates
the termination and deployment of the pods. The maximum unavailability is zero
pods (25% of 2 pods, rounded down), so there must always be at least two available
replicas. The maximum surge is one pod (25% of 2 pods, rounded up), so Kubernetes
creates new replicas one by one.
Kubernetes creates a single new v1.1 replica, as maxSurge limits the over-
deployment to one.
Again, three pods are available, so Kubernetes terminates the last v1.0 replica.
Finish
Delete the deployment to clean your cluster. Kubernetes automatically deletes the associated
pods.
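For example:

```bash
# Delete the Deployment; Kubernetes removes the associated pods automatically.
kubectl delete deployment do100-versioned-hello
```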
Observe in the other terminal that Kubernetes automatically terminates all pods associated with
the deployment:
Quiz
2. Which two of the following statements about the provided YAML manifest are
correct? (Choose two.)
apiVersion: apps/v1
kind: Pod
metadata:
  app: do100b
  name: do100b
spec:
  replicas: 1
  selector:
    matchLabels:
      app: do100b
  template:
    metadata:
      labels:
        app: do100b
    spec:
      containers:
      - name: do100b
3. Which of the following statements about the deployment strategy of the Deployment
resource is correct? (Choose one.)
a. The deployment strategy configuration enables developers to implement routing
strategies, such as blue-green deployment or A/B deployment.
b. The Deployment resource does not enable configuring a deployment strategy. Each
Pod resource contains a deployment strategy configuration.
c. The RollingUpdate deployment strategy terminates all application pods, and
recreates the new application pods.
d. The Deployment resource enables you to configure the RollingUpdate or Recreate
deployment strategies.
4. Based on the following information, which deployment strategy is most suitable for
updating the build system? (Choose one.)
You are a DevOps engineer responsible for updating the internal build system
application to a new version.
The build system can experience downtime.
However, it is important for the build system to be updated as quickly as
possible.
The build system has more than 100 replicas.
The update is thoroughly tested.
a. The Recreate strategy, because that is the fastest update strategy and the application
can experience downtime. Additionally, the new application is well tested.
b. The RollingUpdate strategy, because that is the fastest update strategy without
causing downtime.
c. The RollingUpdate strategy, because that is the only strategy for large applications.
Each Pod resource contains a deployment strategy configuration.
d. The Canary strategy, because we need to verify the new version before deploying it in a
production environment.
Solution
2. Which two of the following statements about the provided YAML manifest are
correct? (Choose two.)
apiVersion: apps/v1
kind: Pod
metadata:
  app: do100b
  name: do100b
spec:
  replicas: 1
  selector:
    matchLabels:
      app: do100b
  template:
    metadata:
      labels:
        app: do100b
    spec:
      containers:
      - name: do100b
3. Which of the following statements about the deployment strategy of the Deployment
resource is correct? (Choose one.)
a. The deployment strategy configuration enables developers to implement routing
strategies, such as blue-green deployment or A/B deployment.
b. The Deployment resource does not enable configuring a deployment strategy. Each
Pod resource contains a deployment strategy configuration.
c. The RollingUpdate deployment strategy terminates all application pods, and
recreates the new application pods.
d. The Deployment resource enables you to configure the RollingUpdate or Recreate
deployment strategies.
4. Based on the following information, which deployment strategy is most suitable for
updating the build system? (Choose one.)
You are a DevOps engineer responsible for updating the internal build system
application to a new version.
The build system can experience downtime.
However, it is important for the build system to be updated as quickly as
possible.
The build system has more than 100 replicas.
The update is thoroughly tested.
a. The Recreate strategy, because that is the fastest update strategy and the application
can experience downtime. Additionally, the new application is well tested.
b. The RollingUpdate strategy, because that is the fastest update strategy without
causing downtime.
c. The RollingUpdate strategy, because that is the only strategy for large applications.
Each Pod resource contains a deployment strategy configuration.
d. The Canary strategy, because we need to verify the new version before deploying it in a
production environment.
Summary
In this chapter, you learned:
• Deployment resources declare the images, replicas, and other desired deployment information.
Kubernetes updates the state of the application to match the desired state.
• Changes in the application state (such as a container stopping unexpectedly) or in the desired
state (such as the desired number of replicas) trigger new container deployments.
• Deployment resources also define the strategy for updating pods, either RollingUpdate or
Recreate.
Chapter 2
Configuring Networking in
Kubernetes
Goal: Introduce communication between Kubernetes applications and the rest of the world.
Chapter 2 | Configuring Networking in Kubernetes
Objectives
After completing this section, you should be able to enable network communications between
applications deployed in Kubernetes, and keep those communications available even across
automatic redeployments.
Kubernetes Networking
When pods are created, they are assigned an IP address. You use this IP to access the pod from
anywhere within the Kubernetes cluster. Containers inside a pod share the same network space,
which means that, within the pod, containers can communicate with each other by using the
localhost address.
A Kubernetes cluster might be split across different nodes. A node is a physical machine where
resources run. A cluster is a logical view of a set of nodes. These nodes are different machines, but
they work together as a logical unit. This makes it easier to work with different machines at the
same time because you can simply deploy resources to the cluster and not to individual nodes.
At the same time, applications usually have several replicas and traffic is split across the replicas.
This ensures that no single replica is overworked. This is called load-balancing.
In both use cases, the problem is the same: you need a way to reach the pods regardless of the
machine where they are located. To solve this, Kubernetes introduces the concept of Service.
A service is an abstraction that defines the access to a set of pods. By using a service, you don't
access pods directly through their private IP addresses. Instead, a service targets several pods
based on certain criteria (for example, a label) and forwards any requests to one of the pods
matching that criteria.
In other words, a service allows you to group pods with a logical relationship and it allows you to
reach them in a reliable way. At the same time, it implements a load-balancing mechanism among
the pods that it targets.
For example, if you want three replicas of your application, then three pods are created.
If you create a service that targets these pods, then the service receives any incoming
request and routes it to one of them.
By default, a service is given a cluster-internal IP address, which is only valid within the cluster.
This type of service is called ClusterIP. This means that pods deployed in the cluster can make
requests to the service by using the ClusterIP.
The following diagram illustrates the communication between pods and services. For example, Pod
1 uses the ClusterIP of Service 2 to make requests to the service.
If you want to expose the service externally, then you can use other types of services such as
NodePort or LoadBalancer. However, the most common way to expose a service outside of
your cluster is by using another Kubernetes resource called Ingress. Ingress is covered in
upcoming sections of this course.
The easiest way to create a service is by using the kubectl expose command.
The previous command creates a service named service-name, which targets deployment
deployment-name. It listens on port 8081 and it points to port 3000 inside the pod.
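A hedged sketch of such an expose command; service-name and deployment-name are placeholders matching the description above:

```bash
# Expose the deployment through a service listening on port 8081,
# forwarding traffic to port 3000 inside the pod.
kubectl expose deployment deployment-name \
  --name service-name \
  --port 8081 \
  --target-port 3000
```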
Use the command kubectl get service to list the services available. The output will provide
you with information such as the ClusterIP (IP only valid within the Kubernetes cluster) and the
port used to access the service. A sample output might look like this:
• Applying a manifest
An approach in line with the DevOps principles is creating services through a manifest. The
following sample creates a service named nginx-service and targets any pod with the label
app: nginx. The service listens for requests in port 8081 and forwards them to port 3000 inside
the pod. Because the manifest does not include the type field, it creates a service with type
ClusterIP.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 3000
Port mapping: the service port (8081) maps to the targetPort (3000) inside the pod.
• Environment variables
By default, when a service is created, Kubernetes injects some environment variables in pods
within the same namespace. These variables follow the pattern:
SERVICE-NAME_VARIABLE-NAME
For example, a service named nginx-provider generates the following variables (non-exhaustive
list), which you can inject into your application:
• NGINX_PROVIDER_SERVICE_HOST, which contains the IP address of the service.
• NGINX_PROVIDER_SERVICE_PORT, which contains the port where the service listens. For
example, 6379.
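You can inspect the variables that a service injects by listing the environment of a running pod. A sketch, assuming a deployment named myapp (the name is illustrative):

```bash
# Print the service-related environment variables visible to the application.
kubectl exec deployment/myapp -- printenv | grep NGINX_PROVIDER
```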
However, your application fetches the environment variables only on start-up. This means
that if the value of a variable changes after your application has started (for example, if a
service gets a different IP address), then your application is not notified and references an
invalid value (the previous IP address of the service). The same happens if the service is
created after your application boots up.
• DNS
Given the limitations of the Kubernetes built-in environment variables, the preferred way of
accessing services from your application is using DNS.
Every service in the cluster is assigned a DNS name that matches the service's lowercased
name. This allows applications to access services by using a consistent reference. The default
FQDN follows the pattern:
service.namespace.svc.cluster.local
However, it is possible to avoid this long form. The DNS server also resolves the following hosts:
• service.namespace.svc
• service.namespace
• service (in this case, Kubernetes expects the service to be in the same namespace)
For example, if you have a service named nginx-service that exposes an HTTP endpoint in the
default HTTP port (80), then you can use http://nginx-service if your application is in
the same namespace as the service. If the service was in a namespace named nginx-apps,
then you use http://nginx-service.nginx-apps.
References
Services
https://kubernetes.io/docs/concepts/services-networking/service/
Discovering services
https://kubernetes.io/docs/concepts/services-networking/service/#discovering-
services
Guided Exercise
Outcomes
You should be able to:
Instructions
To illustrate how communication is handled in Kubernetes, you use two applications.
The name-generator app produces random names that can be consumed in the /random-
name endpoint.
The email-generator app produces random emails that can be consumed in the /random-
email endpoint.
Make sure your kubectl context uses the namespace username-dev. This allows you to execute
kubectl commands directly into that namespace.
Note
This course uses the backslash character (\) to break long commands. On Linux and
macOS, you can use the line breaks.
On Windows, use the backtick character (`) to break long commands. Alternatively,
do not break long commands.
Refer to Orientation to the Classroom Environment for more information about long
commands.
1.2. Use the kubectl apply command to create a Deployment from the manifest
located in the kubernetes directory. It creates three replicas of the name-
generator app by using the quay.io/redhattraining/do100-name-
generator:v1.0-external image.
1.3. List the deployments to verify it has been created successfully. Use the command
kubectl get deployment.
2. Create a Service for the deployment of the name-generator app by using the kubectl
expose command.
2.1. Using the deployment name, expose the service at port number 80. The following
command creates a service that forwards requests on port 80 for the DNS name
name-generator.namespace.local-domain to containers created by the
name-generator deployment on port 8080.
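The expose command for this step might look like the following, matching the ports the instructions describe:

```bash
# Create a service on port 80 that forwards to port 8080 in the pods.
kubectl expose deployment name-generator \
  --port 80 \
  --target-port 8080
```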
2.2. List the services to verify that the name-generator service has been created
successfully. Use the command kubectl get service.
3. Review the code of the email-generator to see how the request to the name-
generator is made. Deploy the app in the username-dev namespace.
3.2. In the app directory, open the server.js file. The server.js file is a Node.js
application, which exposes the endpoint /random-email on port 8081.
3.3. In the same folder, open the generate-email.js file. The generateEmail
method generates a random email by making an HTTP request to the name-
generator service.
3.7. List the deployments in the namespace to verify it has been created successfully. Use
the command kubectl get deployment.
4. Create a service for the deployment of the email-generator app by using a manifest.
4.1. Apply the Service manifest in the username-dev namespace. It is located in the
kubernetes folder. Use the kubectl apply command.
This command exposes the service in the 80 port and targets port 8081, which is
where the email-generator app serves.
4.2. List the services to verify that the email-generator service has been created
successfully. Use the command kubectl get service.
5. Verify that everything works properly by making an HTTP request to the email-
generator app from the username-stage namespace. The result should contain a name
plus some numbers at the end.
5.1. To make a request to the email-generator app from another namespace, you use
the Kubernetes DNS resolution pattern service-name.namespace. In this case,
the host is email-generator.username-dev.
5.2. Create a temporary pod that enables you to make a request to the email-
generator application. Run the following command, which provides you with a
terminal to execute curl.
Note that:
• After Kubernetes creates the pod, you create an interactive remote shell session
into the pod.
• When you exit out of the interactive session, Kubernetes terminates the pod.
The command might take some time to execute. If you see the message If you
don't see a command prompt, try pressing enter., then press Enter on
your keyboard and the terminal opens.
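The exact command is elided in this extract; a typical way to create such a throwaway pod with an interactive shell (the container image shown is an assumption) is:

```bash
[user@host ~]$ kubectl run test-curl -it --rm --restart=Never \
  --image registry.access.redhat.com/ubi8/ubi -- /bin/bash
```

The --rm flag tells Kubernetes to delete the pod when you exit the interactive session.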
5.3. In the terminal, make an HTTP request to the email-generator service by using
curl. Because the service runs on the default HTTP port (80), you do not need to
specify the port. You can also omit the local DNS domain.
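The curl invocation is not shown in this extract; per the DNS resolution pattern described in step 5.1, it is presumably:

```bash
curl http://email-generator.username-dev
```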
{"email":"Susan487@host"}
5.4. Type exit to exit the terminal. The pod used to make the request is automatically
deleted.
Finish
Remove all resources used in this exercise.
You can delete all resources in the namespace with the following command:
Alternatively, you can delete the resources individually. Delete both the email-generator and
name-generator services:
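Both cleanup commands are elided in this extract; they presumably take this form (resource names come from the exercise; the first command also removes the deployments):

```bash
[user@host ~]$ kubectl delete all --all
[user@host ~]$ kubectl delete service email-generator name-generator
[user@host ~]$ kubectl delete deployment email-generator name-generator
```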
Quiz
2. Which two of the following statements about the Kubernetes Service resource are
correct? (Choose two.)
a. Kubernetes uses only the LoadBalancer service type.
b. The ClusterIP service type routes requests to a set of pods in a round-robin way to
load balance the requests.
c. A service cannot target less than three replicas. This is known as the N-1 rule.
d. You can use three Kubernetes service types: ClusterIP, LoadBalancer, and
NodePort. ClusterIP is the default service type which exposes traffic within the
Kubernetes cluster.
3. Based on the provided Service resource manifest, which of the following statements
are correct? (Choose two.)
apiVersion: v1
kind: Service
metadata:
name: camel-api
spec:
selector:
app: camel-api
ports:
- protocol: TCP
port: 443
targetPort: 8443
4. Your application uses the following FQDN to discover a Service resource: frontend-
v1.build-stage.svc.cluster.local. Which of the following statements are
correct? (Choose two.)
a. The service name is frontend.
b. The service is in the build-stage.svc namespace.
c. The service name is frontend-v1.
d. The service is in the build-stage namespace.
Solution
2. Which two of the following statements about the Kubernetes Service resource are
correct? (Choose two.)
a. Kubernetes uses only the LoadBalancer service type.
b. The ClusterIP service type routes requests to a set of pods in a round-robin way to
load balance the requests. (correct)
c. A service cannot target less than three replicas. This is known as the N-1 rule.
d. You can use three Kubernetes service types: ClusterIP, LoadBalancer, and
NodePort. ClusterIP is the default service type which exposes traffic within the
Kubernetes cluster. (correct)
3. Based on the provided Service resource manifest, which of the following statements
are correct? (Choose two.)
apiVersion: v1
kind: Service
metadata:
name: camel-api
spec:
selector:
app: camel-api
ports:
- protocol: TCP
port: 443
targetPort: 8443
4. Your application uses the following FQDN to discover a Service resource: frontend-
v1.build-stage.svc.cluster.local. Which of the following statements are
correct? (Choose two.)
a. The service name is frontend.
b. The service is in the build-stage.svc namespace.
c. The service name is frontend-v1. (correct)
d. The service is in the build-stage namespace. (correct)
Objectives
After completing this section, you should be able to expose service-backed applications to clients
outside the Kubernetes cluster.
Kubernetes Ingress
Kubernetes assigns IP addresses to pods and services. Pod and service IP addresses are not
usually accessible outside of the cluster. Unless prevented by network policies, the Kubernetes
cluster typically allows internal communication between pods and services. This internal
communication allows application pods to interact with services that are not externally accessible,
such as database services.
For a web application that should be accessible to external users, you must create a Kubernetes
ingress resource. An ingress maps a domain name, or potentially a URL, to an existing service. On
its own, the ingress resource does not provide access to the specified host or path. The ingress
resource interacts with a Kubernetes ingress controller to provide external access to a service over
HTTP or HTTPS.
As a developer, you typically cannot choose the ingress controller used by your environment,
and you cannot configure it either.
Operations teams will install and configure an ingress controller appropriate to their environment.
This includes configuring the ingress controller based on the networking characteristics of your
environment. Most cloud providers and Kubernetes distributions implement their own ingress
controllers, tailored for their products and network environments.
Local and self-managed Kubernetes distributions tend to use ingress controllers offered by open
source projects, network vendors, or sample ingress controllers provided by Kubernetes. Find a list
of ingress controllers provided by upstream Kubernetes in the References section.
Production deployments must have DNS records pointing to the Kubernetes cluster. Some
Kubernetes distributions use wildcard DNS records to link a family of host names to the same
Kubernetes cluster. A wildcard DNS record is a DNS record that, given a parent wildcard domain,
maps all its subdomains to a single IP. For example, a wildcard DNS record might map the
wildcard domain *.example.com to the IP 10.0.0.8. DNS requests for subdomains such as
hello.example.com or myapp.example.com will obtain 10.0.0.8 as a response.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: hello
spec:
rules:
- host: hello.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: hello
port:
number: 8080
This value is used in combination with pathType to determine if the URL request matches
any of the accepted paths. A path value of / with the pathType value of Prefix is the
equivalent of a wildcard that matches any path.
This value is used in combination with path to determine if the URL matches any of the
accepted paths. A pathType of Prefix offers more flexibility, allowing matches
where the path and the requested URL can contain either a trailing / or not. A pathType of
Exact requires the requested URL to exactly match the path value.
After using either the kubectl create command or the kubectl apply command to create
an ingress resource, use a web browser to access your application URL. Browse to the host name
and the path defined in the ingress resource and verify that the request is forwarded to the
application and the browser gets the response:
Optionally you can use the curl command to perform simple tests.
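For example, assuming the hello.example.com host from the earlier manifest:

```bash
[user@host ~]$ curl http://hello.example.com
```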
If the browser does not obtain the expected response, then verify that:
• The host name and paths are the ones used in the ingress resource.
• Your system can translate the host name to the IP address for the ingress controller (via your
hosts file or a DNS entry).
• The ingress resource is available and its information is correct.
• If applicable, verify that the ingress controller is installed and running in your cluster.
References
Ingress
https://kubernetes.io/docs/concepts/services-networking/ingress/
Ingress Controllers
https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
Guided Exercise
Outcomes
You should be able to:
• Verify that the service IP address and the associated pod IP addresses for an application
are not accessible outside of the cluster.
Before starting, make sure that you have:
• An ingress controller enabled in your cluster and the associated domain name mapping.
• Your Kubernetes context referring to your cluster and using the username-dev namespace.
Instructions
1. Deploy a sample hello application. The hello app displays a greeting and its local IP
address. When running under Kubernetes, this is the IP address assigned to its pod.
Create a new deployment named hello that uses the container image located at
quay.io/redhattraining/do100-hello-ip:v1.0-external in the username-
dev namespace. Configure the deployment to use three pods.
1.1. Create the hello deployment with three replicas. Use the container image located at
quay.io/redhattraining/do100-hello-ip:v1.0-external. This container
image simply displays the IP address of its associated pod.
Note
This course uses the backslash character (\) to break long commands. On Linux and
macOS, you can use the line breaks.
On Windows, use the backtick character (`) to break long commands. Alternatively,
do not break long commands.
Refer to Orientation to the Classroom Environment for more information about long
commands.
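The create command is elided in this extract; from the description in step 1.1 it is presumably:

```bash
[user@host ~]$ kubectl create deployment hello \
  --image quay.io/redhattraining/do100-hello-ip:v1.0-external --replicas 3
```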
Run the kubectl get pods -w command to verify that three pods are running.
Press Ctrl+C to exit the kubectl command after all three hello pods display the
Running status.
1.2. Create a service for the hello deployment that redirects to pod port 8080.
Run the kubectl expose command to create a service that redirects to the hello
deployment. Configure the service to listen on port 8080 and redirect to port 8080
within the pod.
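The expose command is not shown in this extract; matching the description (service port 8080 redirecting to pod port 8080) it presumably looks like:

```bash
[user@host ~]$ kubectl expose deployment hello --port 8080 --target-port 8080
```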
Note that the IP address associated with the service is private to the Kubernetes cluster;
you cannot access that IP address directly.
2. Create an ingress resource that directs external traffic to the hello service.
2.1. Use a text editor to create a file in your current directory named ingress-
hello.yml.
Create the ingress-hello.yml file with the following content. Ensure correct
indentation (using spaces rather than tabs) and then save the file.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: hello
labels:
app: hello
spec:
rules:
- host: INGRESS-HOST
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: hello
port:
number: 8080
2.2. Use the kubectl create command to create the ingress resource.
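Using the file name given in step 2.1, the command is presumably:

```bash
[user@host ~]$ kubectl create -f ingress-hello.yml
```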
2.3. Display information about the hello ingress. If the command does not display an IP
address, then wait up to a minute and try running the command again.
The value in the HOST column matches the host line specified in your ingress-
hello.yml file. Your IP address is likely different from the one displayed here.
3. Verify that the ingress resource successfully provides access to the hello service and the
pods associated with the service.
The hello ingress queries the hello service to identify the IP addresses of the pod
endpoints. The hello ingress then uses round robin load balancing to spread the
requests among the available pods, and each pod responds to the curl command
with the pod IP address.
Optionally, open a web browser and navigate to the wildcard domain name. The web
browser displays a message similar to the following.
Refresh your browser window to repeat the request and see different responses.
Note
Because load balancers frequently create an association between a web client and
a server (one of the hello pods in this case), reloading the web page is unlikely to
display a different IP address. This association, sometimes referred to as a sticky
session, does not apply to the curl command.
Finish
Delete the created resources that have the app=hello label.
Verify that no resources with the app=hello label exist in the current namespace.
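The cleanup commands are elided in this extract; assuming the app=hello label applies to the deployment, service, and ingress created above, they presumably look like:

```bash
[user@host ~]$ kubectl delete deployment,service,ingress -l app=hello
[user@host ~]$ kubectl get deployment,service,ingress -l app=hello
```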
Quiz
2. Which of the following commands creates an Ingress resource? Assume the names
of resources are correct. (Choose one.)
a. kubectl expose pod mypod
b. kubectl create -f ingress.yaml
c. kubectl expose deployment mydeployment
d. kubectl edit service myservice
3. Which of the following URLs will be matched by the provided Ingress resource
manifest configuration? (Choose one.)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: frontend-example
spec:
rules:
- host: frontend.com
http:
paths:
- path: /
pathType: Exact
backend:
service:
name: frontend
port:
number: 3000
a. frontend.com
b. frontend.com/example
c. frontend-example.com
d. None, because the resource is misconfigured
4. Which two of the following URLs will be matched by the provided Ingress resource
manifest configuration? (Choose two.)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: frontend-example
spec:
rules:
- host: frontend.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: frontend
port:
number: 3000
a. frontend.com
b. frontend.com/example
c. frontend-example.com
d. None, because the resource is misconfigured
Solution
2. Which of the following commands creates an Ingress resource? Assume the names
of resources are correct. (Choose one.)
a. kubectl expose pod mypod
b. kubectl create -f ingress.yaml (correct)
c. kubectl expose deployment mydeployment
d. kubectl edit service myservice
3. Which of the following URLs will be matched by the provided Ingress resource
manifest configuration? (Choose one.)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: frontend-example
spec:
rules:
- host: frontend.com
http:
paths:
- path: /
pathType: Exact
backend:
service:
name: frontend
port:
number: 3000
a. frontend.com (correct)
b. frontend.com/example
c. frontend-example.com
d. None, because the resource is misconfigured
4. Which two of the following URLs will be matched by the provided Ingress resource
manifest configuration? (Choose two.)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: frontend-example
spec:
rules:
- host: frontend.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: frontend
port:
number: 3000
a. frontend.com (correct)
b. frontend.com/example (correct)
c. frontend-example.com
d. None, because the resource is misconfigured
Summary
In this chapter, you learned:
• Kubernetes offers several virtual networks to enable communication between cluster nodes and
between pods.
• The service resource abstracts internal pod communication so that applications do not depend
on dynamic IP addresses and ports.
• Services discover other services by using environment variables injected by Kubernetes or,
preferably, by using Kubernetes internal DNS service.
• Kubernetes allows exposing services to external networks by using ingress resources and
controllers.
• Ingress implementations vary across Kubernetes distributions, but all of them link a
subdomain name to a service.
Chapter 3
Chapter 3 | Customize Deployments for Application Requirements
Objectives
After completing this section, you should be able to prevent applications from overusing
system resources.
Resource requests
Used for scheduling and indicating that a pod cannot run with less than the specified amount
of compute resources. The scheduler tries to find a node with sufficient compute resources to
satisfy the requests.
Resource limits
Used to prevent a pod from using up all compute resources from a node. The node that runs a
pod configures the Linux kernel cgroups feature to enforce the pod's resource limits.
You should define resource requests and resource limits for each container in a deployment. If not,
then the deployment definition will include a resources: {} line for each container.
Modify the resources: {} line to specify the desired requests and/or limits. For example:
...output omitted...
spec:
containers:
- image: quay.io/redhattraining/hello-world-nginx:v1.0
name: hello-world-nginx
resources:
requests:
cpu: "10m"
memory: 20Mi
limits:
cpu: "80m"
memory: 100Mi
status: {}
Note
If you use the kubectl edit command to modify a deployment, then ensure you
use the correct indentation. Indentation mistakes can result in the editor refusing
to save changes. To avoid indentation issues, you can use the kubectl set
resources command to specify resource requests and limits.
The following command sets the same requests and limits as the preceding example:
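The command itself is elided in this extract; matching the YAML above (the deployment name hello-world-nginx is an assumption taken from the container name) it would be:

```bash
[user@host ~]$ kubectl set resources deployment hello-world-nginx \
  --requests cpu=10m,memory=20Mi --limits cpu=80m,memory=100Mi
```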
If a resource quota applies to a resource request, then the pod should define a resource request. If
a resource quota applies to a resource limit, then the pod should also define a resource limit. Even
without quotas, you should define resource requests and limits.
Note
The summary columns for Requests and Limits display the sum totals of defined
requests and limits. In the preceding output, only one of the 20 pods running on the
node defined a memory limit, and that limit was 512Mi.
The kubectl describe node command displays requests and limits, and the kubectl top
command shows actual usage. For example, if a pod requests 10m of CPU, then the scheduler will
ensure that it places the pod on a node with available capacity. Although the pod requested 10m
of CPU, it might use more or less than this value, unless it is also constrained by a CPU limit. The
kubectl top nodes command shows actual usage for one or more nodes in the cluster, and the
kubectl top pods command shows actual usage for each pod in a namespace.
Applying Quotas
Kubernetes can enforce quotas that track and limit the use of two kinds of resources:
Object counts
The number of Kubernetes resources, such as pods, services, and routes.
Compute resources
The number of physical or virtual hardware resources, such as CPU, memory, and storage
capacity.
Imposing a quota on the number of Kubernetes resources avoids exhausting other limited software
resources, such as IP addresses for services.
Similarly, imposing a quota on the amount of compute resources avoids exhausting the capacity
of a single node in a Kubernetes cluster. It also prevents one application from starving other
applications of resources.
Note
Although a single quota resource can define all of the quotas for a namespace,
a namespace can also contain multiple quotas. For example, one quota resource
might limit compute resources, such as total CPU allowed or total memory
allowed. Another quota resource might limit object counts, such as the number
of pods allowed or the number of services allowed. The effect of multiple quotas
is cumulative, but it is expected that two different ResourceQuota resources
for the same namespace do not limit the use of the same type of Kubernetes or
compute resource. For example, two different quotas in a namespace should not
both attempt to limit the maximum number of pods allowed.
The following table describes some resources that a quota can restrict by their count or number:
The following table describes some compute resources that can be restricted by a quota:
Quota attributes can track either resource requests or resource limits for all pods in the
namespace. By default, quota attributes track resource requests. To track resource limits instead,
prefix the compute resource name with limits., for example, limits.cpu.
The following listing shows a ResourceQuota resource defined using YAML syntax. This example
specifies quotas for both the number of resources and the use of compute resources:
apiVersion: v1
kind: ResourceQuota
metadata:
name: dev-quota
spec:
hard:
services: "10"
cpu: "1300m"
memory: "1.5Gi"
Resource units are the same for pod resource requests and resource limits. For example, Gi means
GiB, and m means millicores. One millicore is the equivalent of 1/1000 of a single CPU core.
Resource quotas can be created in the same way as any other Kubernetes resource; that is, by
passing a YAML or JSON resource definition file to the kubectl create command:
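The command is not shown in this extract; using the dev-quota example above (the file name is an assumption), it would be:

```bash
[user@host ~]$ kubectl create -f dev-quota.yml
```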
Another way to create a resource quota is by using the kubectl create quota command, for
example:
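The elided example presumably mirrors the YAML definition above:

```bash
[user@host ~]$ kubectl create quota dev-quota \
  --hard services=10,cpu=1300m,memory=1.5Gi
```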
Use the kubectl get resourcequota command to list available quotas, and use the kubectl
describe resourcequota command to view usage statistics related to any hard limits defined
in the quota, for example:
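The elided example commands presumably look like this (quota name from the earlier listing):

```bash
[user@host ~]$ kubectl get resourcequota
[user@host ~]$ kubectl describe resourcequota dev-quota
```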
Without arguments, the kubectl describe quota command displays the cumulative limits set
for all ResourceQuota resources in the namespace:
Name: count-quota
Namespace: schedule-demo
Resource Used Hard
-------- ---- ----
pods 1 3
replicationcontrollers 1 5
services 1 2
An active quota can be deleted by name using the kubectl delete command:
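The delete command is elided in this extract; for the dev-quota example it would be:

```bash
[user@host ~]$ kubectl delete resourcequota dev-quota
```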
When a quota is first created in a namespace, the namespace restricts the ability to create any
new resources that might violate a quota constraint until it has calculated updated usage statistics.
After a quota is created and usage statistics are up-to-date, the namespace accepts the creation
of new resources. When creating a new resource, the quota usage immediately increments. When
deleting a resource, the quota use decrements during the next full recalculation of quota statistics
for the namespace.
Quotas are applied to new resources, but they do not affect existing resources. For example, if you
create a quota to limit a namespace to 15 pods, but 20 pods are already running, then the quota
will not remove the additional 5 pods that exceed the quota.
Important
ResourceQuota constraints are applied for the namespace as a whole, but many
Kubernetes processes, such as builds and deployments, create pods inside the
namespace and might fail because starting them would exceed the namespace
quota.
If a modification to a namespace exceeds the quota for a resource count, then Kubernetes denies
the action and returns an appropriate error message to the user. However, if the modification
exceeds the quota for a compute resource, then the operation does not fail immediately;
Kubernetes retries the operation several times, giving the administrator an opportunity to increase
the quota or to perform another corrective action, such as bringing a new node online.
Important
If a quota that restricts usage of compute resources for a namespace is set,
then Kubernetes refuses to create pods that do not specify resource requests or
resource limits for that compute resource. To use most templates and builders with
a namespace restricted by quotas, the namespace must also contain a limit range
resource that specifies default values for container resource requests.
To understand the difference between a limit range and a resource quota, consider that a limit
range defines valid ranges and default values for a single pod, and a resource quota defines
only top values for the sum of all pods in a namespace. A cluster administrator concerned about
resource usage in a Kubernetes cluster usually defines both limits and quotas for a namespace.
A limit range resource can also define default, minimum, and maximum values for the storage
capacity requested by an image, image stream, or persistent volume claim. If a resource that is
added to a namespace does not provide a compute resource request, then it takes the default
value provided by the limit ranges for the namespace. If a new resource provides compute
resource requests or limits that are smaller than the minimum specified by the namespace limit
ranges, then the resource is not created. Similarly, if a new resource provides compute resource
requests or limits that are higher than the maximum specified by the namespace limit ranges, then
the resource is not created.
The following listing shows a limit range defined using YAML syntax:
apiVersion: "v1"
kind: "LimitRange"
metadata:
name: "dev-limits"
spec:
limits:
- type: "Pod"
max:
cpu: "500m"
memory: "750Mi"
min:
cpu: "10m"
memory: "5Mi"
- type: "Container"
max:
cpu: "500m"
memory: "750Mi"
min:
cpu: "10m"
memory: "5Mi"
default:
cpu: "100m"
memory: "100Mi"
defaultRequest:
cpu: "20m"
memory: "20Mi"
- type: "PersistentVolumeClaim"
min:
storage: "1Gi"
max:
storage: "50Gi"
The maximum amount of CPU and memory that all containers within a pod can consume. A
new pod that exceeds the maximum limits is not created. An existing pod that exceeds the
maximum limits is restarted.
The minimum amount of CPU and memory consumed across all containers within a pod.
A pod that does not satisfy the minimum requirements is not created. Because many pods
only have one container, you might set the minimum pod values to the same values as the
minimum container values.
The maximum amount of CPU and memory that an individual container within a pod can
consume. A new container that exceeds the maximum limits does not create the associated
pod. An existing container that exceeds the maximum limits restarts the entire pod.
The minimum amount of CPU and memory that an individual container within a pod can
consume. A container that does not satisfy the minimum requirements prevents the
associated pod from being created.
The default maximum amount of CPU and memory that an individual container can consume.
This is used when a CPU resource limit or a memory limit is not defined for the container.
The default CPU and memory an individual container requests. This default is used when
a CPU resource request or a memory request is not defined for the container. If CPU and
memory quotas are enabled for a namespace, then configuring the defaultRequest
section allows pods to start, even if the containers do not specify resource requests.
The minimum and maximum sizes allowed for a persistent volume claim.
Users can create a limit range resource in the same way as any other Kubernetes resource; that is,
by passing a YAML or JSON resource definition file to the kubectl create command:
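The command is not shown in this extract; using the dev-limits example above (the file name is an assumption), it would be:

```bash
[user@host ~]$ kubectl create -f dev-limits.yml
```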
Use the kubectl describe limitrange command to view the limit constraints enforced in a
namespace:
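The elided example presumably uses the limit range name from the earlier listing:

```bash
[user@host ~]$ kubectl describe limitrange dev-limits
```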
An active limit range can be deleted by name with the kubectl delete command:
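For the dev-limits example, the elided delete command would be:

```bash
[user@host ~]$ kubectl delete limitrange dev-limits
```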
After a limit range is created in a namespace, all requests to create new resources are evaluated
against each limit range resource in the namespace. If the new resource violates the minimum
or maximum constraint enumerated by any limit range, then the resource is rejected. If the new
resource does not set an explicit value, and the constraint supports a default value, then the
default value is applied to the new resource as its usage value.
All resource update requests are also evaluated against each limit range resource in the
namespace. If the updated resource violates any constraint, then the update is rejected.
Important
Avoid setting LimitRange constraints that are too high, or ResourceQuota
constraints that are too low. A violation of LimitRange constraints prevents pod
creation, resulting in error messages. A violation of ResourceQuota constraints
prevents a pod from being scheduled to any node. The pod might be created but
remain in the pending state.
References
Resource Quotas
https://kubernetes.io/docs/concepts/policy/resource-quotas/
Limit Ranges
https://kubernetes.io/docs/concepts/policy/limit-range/
Guided Exercise
Outcomes
You should be able to use the Kubernetes command-line interface to:
• Configure an application to specify resource requests for CPU and memory usage.
Make sure your kubectl context refers to a namespace where you have enough
permissions, usually username-dev or username-stage. Use the kubectl config
set-context --current --namespace=namespace command to switch to the
appropriate namespace.
Instructions
1. Deploy a test application for this exercise that explicitly requests container resources for
CPU and memory.
1.1. Create a deployment resource file and save it to a file named hello-limit.yaml.
Name the application hello-limit and use the container image located at
quay.io/redhattraining/hello-world-nginx:v1.0.
Note
This course uses the backslash character (\) to break long commands. On Linux and
macOS, you can use the line breaks.
On Windows, use the backtick character (`) to break long commands. Alternatively,
do not break long commands.
Refer to Orientation to the Classroom Environment for more information about long
commands.
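The command that generates the file is elided here. A common way to produce such a deployment manifest without creating the resource (the --dry-run approach is an assumption) is:

```bash
[user@host ~]$ kubectl create deployment hello-limit \
  --image quay.io/redhattraining/hello-world-nginx:v1.0 \
  --dry-run=client -o yaml > hello-limit.yaml
```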
1.2. Edit the file hello-limit.yaml to replace the resources: {} line with the
highlighted lines below. Ensure that you have proper indentation before saving the
file.
...output omitted...
spec:
containers:
- image: quay.io/redhattraining/hello-world-nginx:v1.0
name: hello-world-nginx
resources:
requests:
cpu: "8"
memory: 20Mi
status: {}
1.4. Although a new deployment was created for the application, the application pod
should have a status of Pending.
1.5. The pod cannot be scheduled because none of the compute nodes have sufficient
CPU resources. You can verify this by viewing the warning events.
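The command to view those events is elided in this extract; one way to list them is:

```bash
[user@host ~]$ kubectl get events --field-selector type=Warning
```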
2.1. Edit the hello-limit.yaml file to request 1.2 CPUs for the container. Change the
cpu: "8" line to match the highlighted line below.
...output omitted...
resources:
requests:
cpu: "1200m"
memory: 20Mi
2.3. Verify that your application deploys successfully. You might need to run kubectl
get pods multiple times until you see a running pod. The previous pod with a
pending status will terminate and eventually disappear.
Note
If your application pod does not get scheduled, modify the hello-limit.yaml
file to reduce the CPU request to 1000m. Apply the changes again and verify that the
pod status is Running.
Finish
Delete the created resources to clean your cluster.
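The kubectl cleanup command is elided here; deleting the deployment created in this exercise presumably looks like:

```bash
[user@host ~]$ kubectl delete deployment hello-limit
```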
[user@host ~]$ rm hello-limit.yaml
Quiz
1. The CPU quota of your Kubernetes cluster namespace allows pods to use up to five
CPU cores. The following warning shows the reason why Kubernetes cannot run a
pod. Based on the warning, which of the following solutions would solve this situation?
(Choose one.)
a. Add Kubernetes nodes with more than five CPU cores to the cluster.
b. Decrease the resources.limits.cpu property of the container to be less than five.
c. Decrease the resources.requests.cpu property of the container to be less than
five.
d. Decrease the resources.limits.memory property of the container to be less than
five.
2. Which three of the following resources can you restrict with quotas? (Choose three.)
a. cpu
b. memory
c. network.requests
d. ingress.speed
e. pods
a. requests.cpu, because the application will require many CPU cores occasionally.
b. requests.memory, because the application needs a minimum amount of memory to
run.
c. requests.pods, because there must be two pods running.
d. limits.cpu, because the application might overuse the cluster CPUs occasionally.
e. limits.memory, because memory consumption might increase.
f. limits.pods, because there must be no more than two pods running.
4. Which two of the following statements about limit ranges are true? (Choose two.)
a. Limit range presents no difference when compared with resource quotas.
b. Limit ranges define default, minimum, and maximum values for resource requests.
c. A resource request or limit for a pod is the sum of the pod containers.
d. You can define LimitRange resources at the cluster level. Defining LimitRange
resources for a single namespace is not supported.
Solution
1. The CPU quota of your Kubernetes cluster namespace allows pods to use up to five
CPU cores. The following warning shows the reason why Kubernetes cannot run a
pod. Based on the warning, which of the following solutions would solve this situation?
(Choose one.)
a. Add Kubernetes nodes with more than five CPU cores to the cluster.
b. Decrease the resources.limits.cpu property of the container to be less than five.
c. Decrease the resources.requests.cpu property of the container to be less than
five.
d. Decrease the resources.limits.memory property of the container to be less than
five.
2. Which three of the following resources can you restrict with quotas? (Choose three.)
a. cpu
b. memory
c. network.requests
d. ingress.speed
e. pods
a. requests.cpu, because the application will require many CPU cores occasionally.
b. requests.memory, because the application needs a minimum amount of memory to
run.
c. requests.pods, because there must be two pods running.
d. limits.cpu, because the application might overuse the cluster CPUs occasionally.
e. limits.memory, because memory consumption might increase.
f. limits.pods, because there must be no more than two pods running.
4. Which two of the following statements about limit ranges are true? (Choose two.)
a. Limit range presents no difference when compared with resource quotas.
b. Limit ranges define default, minimum, and maximum values for resource requests.
c. A resource request or limit for a pod is the sum of the pod containers.
d. You can define LimitRange resources at the cluster level. Defining LimitRange
resources for a single namespace is not supported.
Objectives
After completing this section, you should be able to describe how Kubernetes evaluates application health status by using probes, and how it restarts failed applications automatically.
• Configuration errors
• Application errors
Developers can use probes to monitor their applications. Probes make developers aware of events
such as application status, resource usage, and errors.
Monitoring such events is useful for fixing problems, and it can also help with resource planning and management.
A probe is a periodic check that monitors the health of an application. Developers can configure
probes by using either the kubectl command-line client or a YAML deployment template.
Startup Probe
A startup probe verifies whether the application within a container is started. Startup probes run before any other probe and, until they finish successfully, disable all other probes. If a container fails its startup probe, then the container is killed and handled according to the pod's restartPolicy.
This type of probe is only executed at startup, unlike readiness probes, which are run
periodically.
Readiness Probe
Readiness probes determine whether or not a container is ready to serve requests. If the
readiness probe returns a failed state, then Kubernetes removes the IP address for the
container from the endpoints of all Services.
Developers use readiness probes to instruct Kubernetes that a running container should not
receive any traffic. This is useful when waiting for an application to perform time-consuming
initial tasks, such as establishing network connections, loading files, and warming caches.
Liveness Probe
Liveness probes determine whether or not an application running in a container is in a
healthy state. If the liveness probe detects an unhealthy state, then Kubernetes kills the
container and tries to redeploy it.
HTTP Checks
An HTTP check is ideal for applications that return HTTP status codes, such as REST APIs.
An HTTP probe uses GET requests to check the health of an application. The check succeeds if the HTTP response code is in the range 200-399.
The following example demonstrates how to implement a readiness probe with the HTTP check
method:
...contents omitted...
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 15
  timeoutSeconds: 1
...contents omitted...
How long to wait after the container starts before checking its health.
When using container execution checks, Kubernetes executes a command inside the container. Exiting the check with a status of 0 is considered a success; all other status codes are considered a failure.
...contents omitted...
livenessProbe:
  exec:
    command:
    - cat
    - /tmp/health
  initialDelaySeconds: 15
  timeoutSeconds: 1
...contents omitted...
When using TCP socket checks, Kubernetes attempts to open a socket to the container. The container is considered healthy if the check can establish a successful connection.
The following example demonstrates how to implement a liveness probe by using the TCP socket
check method:
...contents omitted...
livenessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 15
  timeoutSeconds: 1
...contents omitted...
Creating Probes
To configure probes on a deployment, edit the deployment's resource definition. To do this, you
can use the kubectl edit or kubectl patch commands.
The following example includes adding a probe into a deployment resource definition by using the
kubectl edit command.
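The example itself is not reproduced in this copy; the invocation is sketched below (the deployment name myapp is illustrative). The command opens the resource definition in your default editor, where you add the probe fields under the container entry:

```shell
# Open the deployment definition for interactive editing;
# saving and closing the editor applies the changes.
kubectl edit deployment/myapp
```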
References
The official Kubernetes documentation on probes
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-
readiness-startup-probes/
Guided Exercise
The application you deploy in this exercise exposes two HTTP GET endpoints:
• The /healthz endpoint responds with a 200 HTTP status code when the application pod
can receive requests.
The endpoint indicates that the application pod is healthy and reachable. It does not
indicate that the application is ready to serve requests.
• The /ready endpoint responds with a 200 HTTP status code if the overall application
works.
In this exercise, the /ready endpoint responds with the 503 HTTP status code for the first 30 seconds after deployment to simulate a slow application startup. After that, it responds with the 200 HTTP status code.
You will configure the /healthz endpoint for the liveness probe, and the /ready endpoint
for the readiness probe.
You will simulate network failures in your Kubernetes cluster and observe behavior in the
following scenarios:
• The application is available but cannot reach the database. Consequently, it cannot serve
requests.
Outcomes
You should be able to:
• Configure readiness and liveness probes for an application from the command line.
Make sure your kubectl context refers to a namespace where you have enough
permissions, usually username-dev or username-stage. Use the kubectl config
set-context --current --namespace=namespace command to switch to the
appropriate namespace.
Instructions
1. Deploy the do100-probes sample application to the Kubernetes cluster and expose the
application.
Note
This course uses the backslash character (\) to break long commands. On Linux and
macOS, you can use the line breaks.
On Windows, use the backtick character (`) to break long commands. Alternatively,
do not break long commands.
Refer to Orientation to the Classroom Environment for more information about long
commands.
1.3. Use a text editor to create a file in your current directory called probes-
ingress.yml.
Create the probes-ingress.yml file with the following content. Ensure correct
indentation (using spaces rather than tabs) and then save the file.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: do100-probes
  labels:
    app: do100-probes
spec:
  rules:
  - host: INGRESS-HOST
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: do100-probes
            port:
              number: 8080
1.4. Use the kubectl create command to create the ingress resource.
2.1. Display information about the do100-probes ingress. If the command does not
display an IP address, then wait up to a minute and try running the command again.
The value in the HOST column matches the host line specified in your probes-
ingress.yml file. Your IP address is likely different from the one displayed here.
The /ready endpoint simulates a slow startup of the application: for the first 30 seconds after the application starts, it returns an HTTP status code of 503. After that period, it returns an HTTP status code of 200 and the following response:
HTTP/1.1 200 OK
...output omitted...
Ready for service requests...
3.1. Use the kubectl edit command to edit the deployment definition and add
readiness and liveness probes.
• For the liveness probe, use the /healthz endpoint on the port 8080.
• For the readiness probe, use the /ready endpoint on the port 8080.
This command opens your default system editor. Make changes to the definition so
that it displays as follows.
...output omitted...
spec:
...output omitted...
  template:
  ...output omitted...
    spec:
      containers:
      - image: quay.io/redhattraining/do100-probes:external
        ...output omitted...
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 15
          timeoutSeconds: 2
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 15
          timeoutSeconds: 2
Warning
The YAML resource is space sensitive. Use spaces to preserve the spacing.
3.3. Wait for the application pod to redeploy and change into the READY state:
The READY status shows 0/1 if the AGE value is less than approximately 30 seconds.
After that, the READY status is 1/1. Note the pod name for the following steps.
3.4. Use the kubectl logs command to see the results of the liveness and readiness
probes. Use the pod name from the previous step.
Observe that the readiness probe fails for about 30 seconds after redeployment,
and then succeeds. Recall that the application simulates a slow initialization of the
application by forcibly setting a 30-second delay before it responds with a status of
ready.
Do not terminate this command. You will continue to monitor the output of this
command in the next step.
4.1. In a different terminal window or tab, run the following commands to simulate a
liveness probe failure:
4.2. Return to the terminal where you are monitoring the application deployment:
Kubernetes restarts the pod when the liveness probe fails repeatedly (three
consecutive failures by default). This means Kubernetes restarts the application on an
available node not affected by the network failure.
You see this log output only when you immediately check the application logs after
you issue the kill request. If you check the logs after Kubernetes restarts the pod,
then the logs are cleared and you only see the output shown in the next step.
4.3. Verify that Kubernetes restarts the unhealthy pod. Keep checking the output of the
kubectl get pods command. Observe the RESTARTS column and verify that the
count is greater than zero. Note the name of the new pod.
4.4. Review the application logs. The liveness probe succeeds and the application reports
a healthy state.
Finish
Delete the deployment, ingress, and service resources to clean your cluster. Kubernetes
automatically deletes the associated pods.
Quiz
1. Which three of the following methods are valid Kubernetes probes? (Choose three.)
a. TCP socket check
b. Startup check
c. Container Execution check
d. End-to-end check
e. HTTP check
2. Given the following startup probe configuration, which of the following statements is
correct? (Choose one.)
startupProbe:
  httpGet:
    path: /health
    port: 8080
  failureThreshold: 10
  periodSeconds: 5
a. The startup probe retries the HTTP check at most five times.
b. After a failure, the probe waits 10 seconds until the next verification.
c. The startup probe disables the liveness and readiness probes until it finishes successfully.
d. The probe check must succeed 10 times for Kubernetes to consider the probe
successful.
3. Given the following liveness probe configuration, which of the following statements is
correct? (Choose one.)
livenessProbe:
  exec:
    command:
    - check-status
  failureThreshold: 1
  initialDelaySeconds: 5
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
a. The liveness probe waits five seconds after the container has started, then runs the
check-status command inside the container.
b. The liveness probe runs the check-status command immediately after the container
starts.
c. The liveness probe runs the check-status command every five seconds.
d. The liveness probe fails if the check-status command exits with a non-zero code five consecutive times.
4. Given the following readiness probe configuration, which of the following statements
is correct?
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  failureThreshold: 5
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
a. If the readiness probe fails once, then Kubernetes terminates the container.
b. If the readiness probe fails once, then Kubernetes stops sending traffic to the container.
c. If the readiness probe fails five consecutive times, then Kubernetes terminates the container.
d. If the readiness probe fails five consecutive times, then Kubernetes stops sending traffic to the container.
Solution
1. Which three of the following methods are valid Kubernetes probes? (Choose three.)
a. TCP socket check
b. Startup check
c. Container Execution check
d. End-to-end check
e. HTTP check
2. Given the following startup probe configuration, which of the following statements is
correct? (Choose one.)
startupProbe:
  httpGet:
    path: /health
    port: 8080
  failureThreshold: 10
  periodSeconds: 5
a. The startup probe retries the HTTP check at most five times.
b. After a failure, the probe waits 10 seconds until the next verification.
c. The startup probe disables the liveness and readiness probes until it finishes successfully.
d. The probe check must succeed 10 times for Kubernetes to consider the probe
successful.
3. Given the following liveness probe configuration, which of the following statements is
correct? (Choose one.)
livenessProbe:
  exec:
    command:
    - check-status
  failureThreshold: 1
  initialDelaySeconds: 5
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
a. The liveness probe waits five seconds after the container has started, then runs the
check-status command inside the container.
b. The liveness probe runs the check-status command immediately after the container
starts.
c. The liveness probe runs the check-status command every five seconds.
d. The liveness probe fails if the check-status command exits with a non-zero code five consecutive times.
4. Given the following readiness probe configuration, which of the following statements
is correct?
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  failureThreshold: 5
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
a. If the readiness probe fails once, then Kubernetes terminates the container.
b. If the readiness probe fails once, then Kubernetes stops sending traffic to the container.
c. If the readiness probe fails five consecutive times, then Kubernetes terminates the container.
d. If the readiness probe fails five consecutive times, then Kubernetes stops sending traffic to the container.
Objectives
After completing this section, you should be able to create Kubernetes resources holding application configuration and secrets, and make that configuration available to running applications.
The recommended approach for containerized applications is to decouple the static application
binaries from the dynamic configuration data and to externalize the configuration. This separation
ensures the portability of applications across many environments.
For example, you want to promote an application that is deployed to a Kubernetes cluster from a
development environment to a production environment, with intermediate stages such as testing
and user acceptance. You must use the same application container image in all stages and have
the configuration details specific to each environment outside the container image.
Secret resources are used to store sensitive information, such as passwords, keys, and tokens.
As a developer, it is important to create secrets to avoid compromising credentials and other
sensitive information in your application. There are different secret types that enforce usernames
and keys in the secret object. Some of them are service-account-token, basic-auth, ssh-
auth, tls, and opaque. The default type is opaque, which allows unstructured and non-validated
key:value pairs that can contain arbitrary values.
Configuration map resources are similar to secret resources, but they store nonsensitive data.
A configuration map resource can be used to store fine-grained information, such as individual
properties, or coarse-grained information, such as entire configuration files and JSON data.
You can create configuration map and secret resources using the kubectl command. You can
then reference them in your pod specification and Kubernetes automatically injects the resource
data into the container as environment variables, or as files mounted through volumes inside the
application container.
You can also configure the deployment to reference configuration map and secret resources.
Kubernetes then automatically redeploys the application and makes the data available to the
container.
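As a sketch of these two injection mechanisms, a pod template might reference the resources as follows (the container name, image, and mount path are illustrative; myconf and mysecret are the example resource names used later in this section):

```yaml
spec:
  containers:
  - name: app
    image: repo.my/image-name
    # Inject every key of the configuration map as an environment variable.
    envFrom:
    - configMapRef:
        name: myconf
    # Mount every key of the secret as a file under /etc/secure.
    volumeMounts:
    - name: secure-volume
      mountPath: /etc/secure
  volumes:
  - name: secure-volume
    secret:
      secretName: mysecret
```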
Data is stored inside a secret resource by using base64 encoding. When data from a secret
is injected into a container, the data is decoded and either mounted as a file, or injected as
environment variables inside the container.
Note
Encoding any text in base64 does not add any layer of security, except against casual snooping.
• For security reasons, mounted volumes for these resources are backed by temporary file
storage facilities (tmpfs) and never stored on a node.
To create a new configuration map that stores the contents of a file or a directory containing a set
of files:
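The commands themselves are not shown in this copy; they follow the standard kubectl syntax (resource and file names here are illustrative):

```shell
# From a single file: the key is the file name, the value is its contents.
kubectl create configmap myconf --from-file=app.properties

# From a directory: one key per file with a valid key name.
kubectl create configmap myconf --from-file=config-dir/
```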
When you create a configuration map from a file, the key name will be the name of the file by
default and the value will be the contents of the file.
When you create a configuration map resource based on a directory, each file with a valid key name in the directory is stored in the configuration map. Subdirectories, symbolic links, device files, and pipes are ignored.
Run the kubectl create configmap --help command for more information.
Note
You can also abbreviate the configmap resource type argument as cm in the
kubectl command-line interface. For example:
To create a new secret that stores the contents of a file or a directory containing a set of files:
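The commands are not shown in this copy; a sketch with illustrative names:

```shell
# From a single file: the key is the file name, the value is its contents.
kubectl create secret generic mysecret --from-file=password.txt

# From a directory: one key per file with a valid key name.
kubectl create secret generic mysecret --from-file=secrets-dir/
```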
When you create a secret from either a file or a directory, the key names are set the same way as
configuration maps.
For more details, including storing TLS certificates and keys in secrets, run the kubectl create secret --help command.
apiVersion: v1
data:
  key1: value1
  key2: value2
kind: ConfigMap
metadata:
  name: myconf
The name of the first key. By default, an environment variable or a file with the same name as the key is injected into the container, depending on whether the configuration map resource is injected as an environment variable or as a file.
The value stored for the first key of the configuration map.
The value stored for the second key of the configuration map.
apiVersion: v1
data:
  username: cm9vdAo=
  password: c2VjcmV0Cg==
kind: Secret
metadata:
  name: mysecret
type: Opaque
The name of the first key. This provides the default name for either an environment variable
or a file in a pod, just like the key names from a configuration map.
To edit a configuration map, use the kubectl edit command. This command opens an inline
editor, with the configuration map resource definition in YAML format:
Use the kubectl patch command to edit a configuration map resource. This approach is non-
interactive and is useful when you need to script the changes to a resource:
To delete a secret:
To edit a secret, first encode your data in base64 format, for example:
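For example, the mysecret resource shown earlier stores the value secret followed by a newline; you can reproduce that encoding with the base64 utility:

```shell
# Encode a value; echo appends a trailing newline by default:
echo "secret" | base64
# c2VjcmV0Cg==

# Encode without a trailing newline:
echo -n "secret" | base64
# c2VjcmV0

# Decode to verify the stored value:
echo "c2VjcmV0Cg==" | base64 --decode
# secret
```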
Use the encoded value to update the secret resource using the kubectl edit command:
You can also edit a secret resource using the kubectl patch command:
To inject all values stored in a configuration map into environment variables for pods created from
a deployment use the kubectl set env command:
To mount all keys from a configuration map as files from a volume inside pods created from a
deployment, use the kubectl set volume command:
To inject data inside a secret into pods created from a deployment, use the kubectl set env
command:
To mount data from a secret resource as a volume inside pods created from a deployment, use the
kubectl set volume command:
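As a sketch of the environment-variable injection described above (the deployment and resource names are illustrative):

```shell
# Inject all keys of a configuration map as environment variables:
kubectl set env deployment/mydeploy --from=configmap/myconf

# Inject all keys of a secret as environment variables:
kubectl set env deployment/mydeploy --from=secret/mysecret
```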
If your application only has a few simple configuration variables that can be read from environment
variables or passed on the command line, then use environment variables to inject data from
configuration maps and secrets. Environment variables are the preferred approach over mounting
volumes inside the container.
However, if your application has a large number of configuration variables, or if you are migrating
a legacy application that makes extensive use of configuration files, then use the volume mount
approach instead of creating an environment variable for each of the configuration variables. For
example, if your application expects one or more configuration files from a specific location on
your file system, then you should create secrets or configuration maps from the configuration files
and mount them inside the container ephemeral file system at the location that the application
expects.
To inject the secret into the application, configure a volume that refers to the secret created in the previous command. The volume must be mounted at the directory inside the container where the application expects the secret's files.
spec:
  template:
    spec:
      containers:
      - name: container
        image: repo.my/image-name
        volumeMounts:
        - mountPath: "/opt/app-root/secure"
          name: secure-volume
          readOnly: true
      volumes:
      - name: secure-volume
        secret:
          secretName: secret-name
To add the volume mount on a running application, you can use the kubectl patch command.
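A hedged sketch of such a patch invocation, assuming the volume definition above is saved in a file named volume-patch.yml and the deployment is named mydeploy (both names are illustrative):

```shell
# Apply a strategic merge patch stored in a local file:
kubectl patch deployment mydeploy --patch "$(cat volume-patch.yml)"
```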
To bind the application to the configuration map, update the deployment configuration from that
application to use the configuration map:
References
ConfigMaps
https://kubernetes.io/docs/concepts/configuration/configmap/
Secrets
https://kubernetes.io/docs/concepts/configuration/secret/
Guided Exercise
Outcomes
You should be able to:
• Inject configuration data into the container using configuration maps and secrets.
• Change the data in the configuration map and verify that the application picks up the
changed values.
Make sure your kubectl context uses the namespace username-dev. This allows you to
execute kubectl commands directly into that namespace.
Instructions
1. Review the application source code and deploy the application.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-config
  labels:
    app: app-config
spec:
  rules:
  - host: INGRESS-HOST
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-config
            port:
              number: 8080
2.3. Create the ingress resource to be able to invoke the service just exposed:
The undefined value for the environment variable and the ENOENT: no such
file or directory error are shown because neither the environment variable nor
the file exists in the container.
3.1. Create a configuration map resource to hold configuration variables that store plain
text data.
Create a new configuration map resource called appconfmap. Store a key called
APP_MSG with the value Test Message in this configuration map:
Note
This course uses the backslash character (\) to break long commands. On Linux and
macOS, you can use the line breaks.
On Windows, use the backtick character (`) to break long commands. Alternatively,
do not break long commands.
Refer to Orientation to the Classroom Environment for more information about long
commands.
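The exact command is not reproduced in this copy; based on the names given above, it might look like this (sketch):

```shell
kubectl create configmap appconfmap \
  --from-literal=APP_MSG="Test Message"
```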
3.2. Verify that the configuration map contains the configuration data:
username=user1
password=pass1
salt=xyz123
3.4. Create a new secret to store the contents of the myapp.sec file.
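The command is omitted here; a sketch follows (the secret name appconfsec is an assumption, not taken from the text):

```shell
kubectl create secret generic appconfsec --from-file=myapp.sec
```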
3.5. Verify the contents of the secret. Note that the contents are stored in base64-
encoded format:
4. Inject the configuration map and the secret into the application container.
4.1. Use the kubectl set env command to add the configuration map to the
deployment configuration:
4.2. Use the kubectl patch command to add the secret to the deployment
configuration:
Patch the app-config deployment using the following patch code. You can find this
content in the DO100x-apps/app-config/kubernetes/secret.yml file.
5. Verify that the application is redeployed and uses the data from the configuration map and
the secret.
5.1. Verify that the configuration map and secret were injected into the container. Retest
the application using the route URL:
Kubernetes injects the configuration map as an environment variable and mounts the
secret as a file into the container. The application reads the environment variable and
file and then displays its data.
Finish
Delete the created resources to clean your cluster. Kubernetes automatically deletes the
associated pods.
Quiz
1. How can you manage sensitive configuration properties in Kubernetes? (Choose one.)
a. By obfuscating sensitive information in the application code.
b. By using the Secret resource in Kubernetes.
c. By using the ConfigMap resource in Kubernetes.
d. By creating a Kubernetes volume and storing sensitive information as files in this volume.
3. Given the following command, which of the following statements is correct? (Choose one.)
4. Assume you have an access token that is used to consume the GitHub API. The
token is stored in a Kubernetes resource called api_token. Your team has recently
regenerated the token in GitHub. Consequently, you must update the value of the
api_token resource. Which two of the following commands are valid options?
(Choose two.)
a. kubectl edit configmap/api_token
b. kubectl edit secret/api_token
c. kubectl set env deployment/my_app --from secret/api_token
d. kubectl patch configmap/api_token
e. kubectl patch secret/api_token
Solution
1. How can you manage sensitive configuration properties in Kubernetes? (Choose one.)
a. By obfuscating sensitive information in the application code.
b. By using the Secret resource in Kubernetes.
c. By using the ConfigMap resource in Kubernetes.
d. By creating a Kubernetes volume and storing sensitive information as files in this volume.
3. Given the following command, which of the following statements is correct? (Choose one.)
4. Assume you have an access token that is used to consume the GitHub API. The
token is stored in a Kubernetes resource called api_token. Your team has recently
regenerated the token in GitHub. Consequently, you must update the value of the
api_token resource. Which two of the following commands are valid options?
(Choose two.)
a. kubectl edit configmap/api_token
b. kubectl edit secret/api_token
c. kubectl set env deployment/my_app --from secret/api_token
d. kubectl patch configmap/api_token
e. kubectl patch secret/api_token
Summary
In this chapter, you learned:
• Deployments in Kubernetes can include resource limits to ensure the application gets enough
resources or is restricted from using too many.
• Application probes can be configured to facilitate monitoring application health and readiness.
Chapter 4
Implementing Cloud
Deployment Strategies
Goal Compare different Cloud Deployment Strategies
Objectives
After completing this section, you should be able to review what deployment strategies can be
used in the Cloud, what they are used for and their advantages.
Kubernetes provides several deployment strategies. These strategies are organized into two
primary categories:
Strategies defined within the deployment impact all routes that use the application. Strategies
that use router features affect individual routes.
The following are strategies that are defined in the application deployment:
Rolling
The rolling strategy is the default strategy.
If a significant issue occurs, the deployment controller aborts the rolling deployment.
Rolling deployments are a type of canary deployment. By using the readiness probe,
Kubernetes tests a new version before replacing all of the old instances. If the readiness probe
never succeeds, then Kubernetes removes the canary instance and rolls back the deployment.
• Your application supports running an older version and a newer version at the same time.
Recreate
With this strategy, Kubernetes first stops all pods running the application and then creates
pods with the new version. This strategy creates down time because there is a time period
with no running instances of your application.
• Your application does not support running multiple different versions simultaneously.
• Your application uses a persistent volume with ReadWriteOnce (RWO) access mode,
which does not allow writes from multiple pods.
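Following the same manifest conventions used elsewhere in this chapter, selecting this strategy takes a minimal stanza in the Deployment spec (a fragment, not a complete manifest):

```yaml
spec:
  strategy:
    type: Recreate
```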
You can configure the strategy in the Deployment object, for example by using the YAML
manifest file:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hello
  name: hello
spec:
  replicas: 4
  selector:
    matchLabels:
      app: hello
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 50%
      maxUnavailable: 10%
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - image: quay.io/redhattraining/versioned-hello:v1.1
        name: versioned-hello
The maxSurge parameter sets the maximum number of pods that can be scheduled above
the desired number of pods. This deployment configures 4 pods. Consequently, 2 new pods
can be created at a time.
The maxUnavailable parameter sets the maximum number of pods that can be unavailable
during the update. Kubernetes calculates the absolute number from the configured
percentage by rounding down. Consequently, maxUnavailable is set to 0 with the current
deployment parameters.
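The rounding rules can be sketched with shell arithmetic. The numbers below match the example deployment above; note that maxSurge rounds up while maxUnavailable rounds down:

```shell
# Absolute pod counts Kubernetes derives from the example percentages:
# replicas=4, maxSurge=50%, maxUnavailable=10%.
replicas=4

# maxSurge rounds the percentage up: ceil(4 * 50 / 100) = 2 extra pods allowed.
surge=$(( (replicas * 50 + 99) / 100 ))

# maxUnavailable rounds down: floor(4 * 10 / 100) = 0 pods may be unavailable.
unavailable=$(( replicas * 10 / 100 ))

echo "maxSurge=$surge maxUnavailable=$unavailable"
# prints: maxSurge=2 maxUnavailable=0
```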
Use the kubectl describe command to view the details of a deployment strategy:
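For example, assuming a deployment named hello as in the manifest above, the following sketch (run against your own cluster) filters the describe output down to the strategy details:

```shell
# Show the strategy type and rolling update parameters of the deployment.
kubectl describe deployment hello | grep -i strategy
```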
Blue-green Deployment
With blue-green deployments, two identical environments run concurrently. Each environment
is labeled either blue or green and runs a different version of the application.
For example, the Kubernetes router is used to direct traffic from the current version labeled
green to the newer version labeled blue. During the next update, the current version is labeled
blue and the new version is labeled green.
At any given point, the exposed route points to one of the services and can be swapped to
point to a different service. This allows you to test the new version of your application service
before routing traffic to it. When your new application version is ready, simply swap the router
to point to the updated service.
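One plain-Kubernetes way to implement the swap is to repoint a Service selector. This is a hedged sketch, assuming hypothetical Deployments whose pods are labeled version: blue and version: green behind a Service named hello:

```shell
# Route all traffic to the green version by updating the Service selector.
kubectl patch service hello \
  --patch '{"spec": {"selector": {"app": "hello", "version": "green"}}}'
```

Because only the selector changes, the swap is near-instant, and swapping back is the same command with version: blue.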
A/B Deployment
The A/B deployment strategy allows you to deploy a new version of the application for a
limited set of users. You can configure Kubernetes to route a percentage of requests between
two different deployed versions of an application.
By controlling the portion of requests sent to each version, you can gradually increase the
traffic sent to the new version. Once the new version receives all traffic, the old version is
removed.
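Plain Kubernetes has no request-percentage primitive in the Deployment resource, so A/B splits are usually handled by router features. A crude approximation, sketched here with hypothetical deployment names, is to scale two Deployments that share one Service selector so that the replica ratio sets the traffic split:

```shell
# Roughly 10% of requests reach the new version: 9 stable replicas vs 1 canary.
kubectl scale deployment hello-stable --replicas=9
kubectl scale deployment hello-canary --replicas=1
```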
References
Kubernetes documentation on deployment strategies
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
What is blue green deployment?
https://www.redhat.com/en/topics/devops/what-is-blue-green-deployment
Guided Exercise
Outcomes
You should be able to:
Make sure your kubectl context refers to a namespace where you have enough
permissions, usually username-dev or username-stage. Use the kubectl config
set-context --current --namespace=namespace command to switch to the
appropriate namespace.
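For example, with the placeholder username jdoe, the command is:

```shell
# Switch the current kubectl context to the jdoe-dev namespace
# ("jdoe" is a placeholder; substitute your own username).
kubectl config set-context --current --namespace=jdoe-dev
```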
Instructions
1. Deploy a Node.js application container to your Kubernetes cluster.
1.1. Use the kubectl create command to create a new application with the following
parameters:
Note
This course uses the backslash character (\) to break long commands. On Linux and
macOS, you can use the line breaks.
On Windows, use the backtick character (`) to break long commands. Alternatively,
do not break long commands.
Refer to Orientation to the Classroom Environment for more information about long
commands.
1.2. Wait until the pod is deployed. The pod should be in the READY state.
Note that the exact names of your pods will likely differ from the previous example.
2. Edit the deployment to change the application version and add a readiness probe.
2.1. Verify that the deployment strategy for the application is RollingUpdate:
2.3. Update the version of the image to v2-external. Additionally, configure a readiness
probe so that you can watch the new deployment as it happens.
Your deployment resource should look like the following:
...output omitted...
spec:
  ...output omitted...
  template:
    ...output omitted...
    spec:
      containers:
      - image: quay.io/redhattraining/do100-multi-version:v2-external
        ...output omitted...
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 2
          timeoutSeconds: 2
When you are done, save your changes and close the editor.
3. Verify that the new version of the application is deployed via the rolling deployment
strategy.
As the new application pods start and become ready, pods running the older version
are terminated. Note that the application takes about thirty seconds to enter the
ready state.
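Two common ways to watch the rollout, sketched with an assumed deployment name of do100-multi-version (taken from the image used in this exercise; substitute your actual deployment name):

```shell
# Watch pods being replaced as the rolling update progresses.
kubectl get pods --watch

# Or block until the rollout completes (or fails).
kubectl rollout status deployment/do100-multi-version
```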
Press Ctrl+C to stop the command.
Finish
Delete the deployment to clean your cluster. Kubernetes automatically deletes the associated
pods.
Quiz
1. Assume you maintain an application that does not support running multiple different
versions at the same time. Which of the following deployment strategies is the most
suitable for updating the application? (Choose one.)
a. Rolling
b. Blue-green deployment
c. Recreate
d. A/B deployment
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 40%
      maxUnavailable: 20%
3. Based on the following scenario, which two of the following statements are correct?
(Choose two.)
a. During the update, the server responds with a 503 Service Unavailable error
code, because Kubernetes has to stop old pods before creating the new ones.
b. During the update, you might see v1 pods until Kubernetes creates all the new v2 pods.
c. Pods that use both the v1 and v2 versions stay running after the deployment update
succeeds.
d. Kubernetes determines when new v2 pods are ready before terminating v1 pods.
4. Your team just finished the development of a new feature. Instead of delivering the
new feature to all users, you decide to test the feature first with a limited set of users.
Which of the following deployment strategies should you use? (Choose one.)
a. Recreate
b. Rolling
c. Blue-green deployment
d. A/B deployment
Solution
1. Assume you maintain an application that does not support running multiple different
versions at the same time. Which of the following deployment strategies is the most
suitable for updating the application? (Choose one.)
a. Rolling
b. Blue-green deployment
c. Recreate
d. A/B deployment
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 40%
      maxUnavailable: 20%
3. Based on the following scenario, which two of the following statements are correct?
(Choose two.)
a. During the update, the server responds with a 503 Service Unavailable error
code, because Kubernetes has to stop old pods before creating the new ones.
b. During the update, you might see v1 pods until Kubernetes creates all the new v2 pods.
c. Pods that use both the v1 and v2 versions stay running after the deployment update
succeeds.
d. Kubernetes determines when new v2 pods are ready before terminating v1 pods.
4. Your team just finished the development of a new feature. Instead of delivering the
new feature to all users, you decide to test the feature first with a limited set of users.
Which of the following deployment strategies should you use? (Choose one.)
a. Recreate
b. Rolling
c. Blue-green deployment
d. A/B deployment
Quiz
1. Which two of the following Kubernetes features simplify the lifecycle management of
containerized applications? (Choose two.)
a. Restarting of unresponsive containers
b. Container registry integration
c. Application rollout strategies
d. Decreased resource consumption
3. Your task is to create an application that uses the provided parameters. Which of the
following commands is correct? (Choose one.)
4. Consider the provided resource manifest. Which two of the following statements
about the manifest are correct? (Choose two.)
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sso
  name: sso
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sso
  template:
    metadata:
      labels:
        app: sso
    spec:
      containers:
      - image: redhat/sso:latest
        name: redhat-sso
5. Consider the previously provided resource manifest. Which two of the following
statements about the manifest are correct? (Choose two.)
a. The spec.replicas property configures the number of replicas.
b. The spec.template.replicas property configures the number of replicas.
c. The metadata.labels property configures the labels applied to pods.
d. The spec.template.metadata.labels configures the label applied to pods.
6. You tried to create a Deployment resource by using the kubectl utility. However,
kubectl returns the provided error message. Which of the following statements is the
likely cause of the error message? (Choose one.)
8. Your task is to connect the sso and api application pods. Based on the provided
requirements, which of the following solutions is the most suitable? (Choose one.)
10. Which two of the following statements about the Kubernetes Service resource are
correct? (Choose two.)
a. ClusterIP type services simplify discovering IP addresses of new pods by providing a
stable IP address that routes requests to a set of specified pods.
b. Pods can reach services by using DNS names.
c. Multiple services in one namespace cannot use the same port.
d. The port and targetPort service configurations must use the same port number.
11. Which of the following statements about the Kubernetes Ingress resource is
correct? (Choose one.)
a. The Ingress and Service resources are identical.
b. The Ingress resource replaces the deprecated Service resource.
c. The Ingress resource routes external traffic into the Kubernetes cluster.
d. The Ingress resource cannot use the Service resource to route requests.
12. Which two of the following statements about resource limits and resource requests in
Kubernetes are correct? (Choose two.)
a. You cannot limit resources that your applications consume in a Kubernetes cluster.
b. Resource requests configure the global minimum resources for any application in a
Kubernetes cluster.
c. Resource requests configure the minimum resources for an application container.
d. Resource limits prevent an application container from consuming more resources than
configured.
13. You are in charge of operating an application that is running in a Kubernetes cluster.
You discover that the application becomes unresponsive after around 3000 served
clients, probably due to a memory leak. Which of the following statements is a
suitable temporary solution for the issue until the core development team fixes the
issue? (Choose one.)
a. You must manually monitor the application and restart it when it becomes unresponsive.
b. You can configure a startup probe and restart the application if it fails. This is useful
because the issue happens before the application starts.
c. You can configure a readiness probe and stop routing traffic to the pod if the pod
becomes unresponsive. This is useful because you can examine the issue when it
happens.
d. You can configure a liveness probe and restart the application when it becomes
unresponsive. This is useful because it minimizes the downtime of the application
without the need for manual intervention.
14. Which two of the following statements about the ConfigMap and Secret resources
are correct? (Choose two.)
a. The ConfigMap resource stores data by using the base64 encoding.
b. The Secret resource stores data in an encrypted format.
c. The Secret resource stores data by using the base64 encoding.
d. The ConfigMap resource is suitable for storing non-sensitive data.
15. Which two of the following statements are correct ways of exposing the ConfigMap
and Secret resources to your application? (Choose two.)
a. You can inject the values as environment variables.
b. You can expose the values by using the etcd database.
c. You can expose the values by using the kube-api service.
d. You can mount all keys as files.
16. Which two of the following statements about externalizing application configuration
are correct? (Choose two.)
a. Externalizing values like passwords is not always beneficial because it makes such values
harder to find.
b. Externalizing application configuration means removing the values from application
source code and reading the values at runtime, for example from environment variables.
c. Developers can use the ConfigMap and Secret Kubernetes resources to externalize
application configuration.
d. Applications that externalize values like database credentials are difficult to deploy in
varying environments, such as dev, stage, and prod.
17. Your team just finished the development of a new feature. You decide to test the
feature by using a production environment. However, you do not want to expose
the feature to users. Which of the following deployment strategies should you use?
(Choose one.)
a. Recreate
b. Rolling
c. Blue-green deployment
d. A/B deployment
18. Consider the following Deployment resource configuration. Which of the following
statements is correct? (Choose one.)
spec:
  replicas: 10
  strategy: {}
a. You cannot update the deployment because the resource does not specify an update
strategy.
b. The deployment configuration is invalid because the manifest does not specify an
update strategy.
c. The update strategy defaults to the Recreate strategy.
d. The update strategy defaults to the RollingUpdate strategy.
19. Which two of the following statements about deployment strategies are correct?
(Choose two.)
a. Developers configure all deployment strategies, such as the Recreate and A/B
Deployment strategies, in the Deployment resource manifest.
b. Developers configure some deployment strategies, such as the Recreate and
RollingUpdate strategies, in the Deployment resource manifest.
c. Developers must configure advanced deployment strategies by using the Kubernetes
ingress router.
d. All applications can always use all deployment strategies.
20. Which of the following commands shows you the deployment strategy of a
Deployment resource in a Kubernetes cluster? (Choose one.)
a. kubectl get deployment example
b. kubectl describe deployment
c. kubectl describe deployment example
d. kubectl logs deployment example
Solution
1. Which two of the following Kubernetes features simplify the lifecycle management of
containerized applications? (Choose two.)
a. Restarting of unresponsive containers
b. Container registry integration
c. Application rollout strategies
d. Decreased resource consumption
3. Your task is to create an application that uses the provided parameters. Which of the
following commands is correct? (Choose one.)
4. Consider the provided resource manifest. Which two of the following statements
about the manifest are correct? (Choose two.)
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sso
  name: sso
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sso
  template:
    metadata:
      labels:
        app: sso
    spec:
      containers:
      - image: redhat/sso:latest
        name: redhat-sso
5. Consider the previously provided resource manifest. Which two of the following
statements about the manifest are correct? (Choose two.)
a. The spec.replicas property configures the number of replicas.
b. The spec.template.replicas property configures the number of replicas.
c. The metadata.labels property configures the labels applied to pods.
d. The spec.template.metadata.labels configures the label applied to pods.
6. You tried to create a Deployment resource by using the kubectl utility. However,
kubectl returns the provided error message. Which of the following statements is the
likely cause of the error message? (Choose one.)
8. Your task is to connect the sso and api application pods. Based on the provided
requirements, which of the following solutions is the most suitable? (Choose one.)
10. Which two of the following statements about the Kubernetes Service resource are
correct? (Choose two.)
a. ClusterIP type services simplify discovering IP addresses of new pods by providing a
stable IP address that routes requests to a set of specified pods.
b. Pods can reach services by using DNS names.
c. Multiple services in one namespace cannot use the same port.
d. The port and targetPort service configurations must use the same port number.
11. Which of the following statements about the Kubernetes Ingress resource is
correct? (Choose one.)
a. The Ingress and Service resources are identical.
b. The Ingress resource replaces the deprecated Service resource.
c. The Ingress resource routes external traffic into the Kubernetes cluster.
d. The Ingress resource cannot use the Service resource to route requests.
12. Which two of the following statements about resource limits and resource requests in
Kubernetes are correct? (Choose two.)
a. You cannot limit resources that your applications consume in a Kubernetes cluster.
b. Resource requests configure the global minimum resources for any application in a
Kubernetes cluster.
c. Resource requests configure the minimum resources for an application container.
d. Resource limits prevent an application container from consuming more resources than
configured.
13. You are in charge of operating an application that is running in a Kubernetes cluster.
You discover that the application becomes unresponsive after around 3000 served
clients, probably due to a memory leak. Which of the following statements is a
suitable temporary solution for the issue until the core development team fixes the
issue? (Choose one.)
a. You must manually monitor the application and restart it when it becomes unresponsive.
b. You can configure a startup probe and restart the application if it fails. This is useful
because the issue happens before the application starts.
c. You can configure a readiness probe and stop routing traffic to the pod if the pod
becomes unresponsive. This is useful because you can examine the issue when it
happens.
d. You can configure a liveness probe and restart the application when it becomes
unresponsive. This is useful because it minimizes the downtime of the application
without the need for manual intervention.
14. Which two of the following statements about the ConfigMap and Secret resources
are correct? (Choose two.)
a. The ConfigMap resource stores data by using the base64 encoding.
b. The Secret resource stores data in an encrypted format.
c. The Secret resource stores data by using the base64 encoding.
d. The ConfigMap resource is suitable for storing non-sensitive data.
15. Which two of the following statements are correct ways of exposing the ConfigMap
and Secret resources to your application? (Choose two.)
a. You can inject the values as environment variables.
b. You can expose the values by using the etcd database.
c. You can expose the values by using the kube-api service.
d. You can mount all keys as files.
16. Which two of the following statements about externalizing application configuration
are correct? (Choose two.)
a. Externalizing values like passwords is not always beneficial because it makes such values
harder to find.
b. Externalizing application configuration means removing the values from application
source code and reading the values at runtime, for example from environment variables.
c. Developers can use the ConfigMap and Secret Kubernetes resources to externalize
application configuration.
d. Applications that externalize values like database credentials are difficult to deploy in
varying environments, such as dev, stage, and prod.
17. Your team just finished the development of a new feature. You decide to test the
feature by using a production environment. However, you do not want to expose
the feature to users. Which of the following deployment strategies should you use?
(Choose one.)
a. Recreate
b. Rolling
c. Blue-green deployment
d. A/B deployment
18. Consider the following Deployment resource configuration. Which of the following
statements is correct? (Choose one.)
spec:
  replicas: 10
  strategy: {}
a. You cannot update the deployment because the resource does not specify an update
strategy.
b. The deployment configuration is invalid because the manifest does not specify an
update strategy.
c. The update strategy defaults to the Recreate strategy.
d. The update strategy defaults to the RollingUpdate strategy.
19. Which two of the following statements about deployment strategies are correct?
(Choose two.)
a. Developers configure all deployment strategies, such as the Recreate and A/B
Deployment strategies, in the Deployment resource manifest.
b. Developers configure some deployment strategies, such as the Recreate and
RollingUpdate strategies, in the Deployment resource manifest.
c. Developers must configure advanced deployment strategies by using the Kubernetes
ingress router.
d. All applications can always use all deployment strategies.
20. Which of the following commands shows you the deployment strategy of a
Deployment resource in a Kubernetes cluster? (Choose one.)
a. kubectl get deployment example
b. kubectl describe deployment
c. kubectl describe deployment example
d. kubectl logs deployment example
Summary
In this chapter, you learned:
• Kubernetes provides several deployment strategies, which dictate how new versions of an application are rolled out.
Appendix A
Appendix A | Installing and Configuring Kubernetes
Guided Exercise
Outcomes
You should be able to:
• Register for using a remote Kubernetes instance by using Developer Sandbox for
Red Hat OpenShift.
Instructions
Deploying a fully developed, multi-node Kubernetes cluster typically requires significant time and
compute resources. With minikube, you can quickly deploy a local Kubernetes cluster, allowing
you to focus on learning Kubernetes operations and application development.
minikube is an open source utility that allows you to quickly deploy a local Kubernetes cluster on
your personal computer. By using virtualization technologies, minikube creates a virtual machine
(VM) that contains a single-node Kubernetes cluster. VMs are virtual computers and each VM is
allocated its own system resources and operating system.
The latest minikube releases also allow you to create your cluster by using containers instead
of virtual machines. Nevertheless, this solution is still not mature, and it is not supported for this
course.
• An Internet connection
• At least 2 GB of free memory
• 2 CPUs or more
• At least 20 GB of free disk space
• A locally installed hypervisor (using a container runtime is not supported in this course)
Before installing minikube, a hypervisor technology must be installed or enabled on your local
system. A hypervisor is software that creates and manages virtual machines (VMs) on a shared
physical hardware system. The hypervisor pools and isolates hardware resources for VMs, allowing
many VMs to run on a shared physical hardware system, such as a server.
Note
Prefix the following commands with sudo if you are running as a user without
administrative privileges.
Use your system package manager to install the complete set of virtualization
libraries:
• If the repositories for your package manager do not include an appropriate version
for minikube, then go to https://github.com/kubernetes/minikube/releases and
download the latest release matching your operating system.
Note
To set the default driver, run the command minikube config set driver
DRIVER.
2. Open the downloaded dmg file and follow the onscreen instructions to complete
the installation.
Note
Network connectivity might be temporarily lost while VirtualBox installs virtual
network adapters. A system reboot can also be required after a successful
installation.
Alternatively, if the brew command is available in your system, then you can install
VirtualBox using the brew install command.
Your output can differ, but must show the available version and the commit it is based
on.
Note
To set the default driver, run the command minikube config set driver
DRIVER.
Warning
System driver conflicts might occur if more than one hypervisor is installed or
enabled. Do not install or enable more than one hypervisor on your system.
1. Download the latest version of VirtualBox for Windows Hosts from https://virtualbox.org/wiki/Downloads
Note
Network connectivity might be temporarily lost while VirtualBox installs virtual
network adapters. A system reboot can also be required after a successful
installation.
• Via PowerShell
• Via Settings
– In the search box on the taskbar, type Programs and Features, and select it
from the search results.
– Select Turn Windows features on or off from the list of options under Control
Panel Home.
2. Determine the name of the network adapter, such as Wi-Fi or Ethernet, to use by
running Get-NetAdapter.
3. Create an external virtual switch named minikube that uses the selected
network adapter and allows the management operating system to share the
adapter:
Note
If you executed the minikube-installer.exe installer from a terminal window,
close the terminal and open a new one before you start using minikube.
Note
To set the default driver, run the command minikube config set driver
DRIVER.
In case of errors, make sure you are using the appropriate driver during the installation, or
refer to minikube Get Started documentation [https://minikube.sigs.k8s.io/docs/start/] for
troubleshooting.
5. Adding extensions
minikube comes with the bare minimum set of features. To add more features, minikube
provides an add-on based extension system. Developers can add more features by
installing the needed add-ons.
Use the minikube addons list command for a comprehensive list of the add-ons
available and the installation status.
• Installing the Ingress add-on. This course requires the Ingress add-on. With your
cluster up and ready, use the following command to enable it:
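A sketch of the enable step and a follow-up check; the ingress-nginx namespace applies to recent minikube releases:

```shell
# Enable the NGINX Ingress controller add-on in the running minikube cluster.
minikube addons enable ingress

# Verify that the ingress controller pods are running.
kubectl get pods --namespace ingress-nginx
```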
Versions and docker images can vary in your deployment, but make sure the final validation
is successful.
• Installing the Dashboard add-on. The dashboard add-on is not required for this course,
but it provides a graphical interface if you are not comfortable with CLI commands.
Once the dashboard is enabled you can reach it by using the minikube dashboard
command. This command will open the dashboard web application in your default browser.
Press Ctrl+C in the terminal to finish the connection to the dashboard.
6. Using a Developer Sandbox for Red Hat OpenShift as a Remote Kubernetes cluster
Developer Sandbox for Red Hat OpenShift is a free Kubernetes-as-a-Service
platform offered by Red Hat Developers, based on Red Hat OpenShift.
Developer Sandbox allows users access to a pre-created Kubernetes cluster. Access is
restricted to two namespaces (or projects if using OpenShift nomenclature). Developer
Sandbox deletes pods after eight consecutive hours of running, and limits resources to
7 GB of RAM and 15 GB of persistent storage.
You need a free Red Hat account to use Developer Sandbox. Log in to your Red Hat
account, or if you do not have one, then click Create one now. Fill in the form
choosing a Personal account type, and then click CREATE MY ACCOUNT. You
might need to accept Red Hat terms and conditions to use the Developer Program
services.
When the account is ready you will be redirected back to the Developer Sandbox
page. Click Launch your Developer Sandbox for Red Hat OpenShift to log in to
Developer Sandbox.
If you just created your account, then you might need to wait a few seconds for
account approval. You might need to verify your account via 2-factor authentication.
Once the account is approved and verified, click Start using your sandbox. You might
need to accept Red Hat terms and conditions to use the Developer Sandbox.
In the OpenShift log in form, click DevSandbox to select the authentication method.
Routing traffic from your local machine to your minikube Kubernetes cluster requires
two steps.
First, you must find the local IP address assigned to your Ingress add-on. The minikube
ip command is the easiest way to find the ingress IP:
<IP-ADDRESS> hello.example.com
Note
If for any reason you need to delete and recreate your Minikube cluster, then review
the IP address assigned to the cluster and update the hosts file accordingly.
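On Linux and macOS, the two steps can be combined in one command. This is a sketch assuming the hello.example.com hostname from this appendix; on Windows, edit C:\Windows\System32\drivers\etc\hosts instead:

```shell
# Append a hosts entry that maps the example hostname to the minikube IP.
# Modifying /etc/hosts requires administrative privileges.
echo "$(minikube ip) hello.example.com" | sudo tee -a /etc/hosts
```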
To access services in the cluster, you use the declared hostname and potentially
any path associated with the ingress. So, if using the hello.example.com
hostname and assuming the application is mapped to the path /myapp, then your
application is available at the URL http://hello.example.com/myapp.
To get the wildcard domain from the Console URL, remove the https://
console-openshift-console. prefix. For example, the wildcard domain for
the Console URL https://console-openshift-console.apps.sandbox.x8i5.p1.openshiftapps.com
is apps.sandbox.x8i5.p1.openshiftapps.com.
To get the wildcard domain from a hostname, remove the first part of the hostname,
that is, everything up to and including the first period. For example, the wildcard
domain for the hostname
example-username-dev.apps.sandbox.x8i5.p1.openshiftapps.com is
apps.sandbox.x8i5.p1.openshiftapps.com.
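In a shell, this prefix stripping can be sketched with parameter expansion; the hostname below is the example from the text.

```shell
# Strip everything up to and including the first period to get the wildcard domain.
ROUTE_HOST="example-username-dev.apps.sandbox.x8i5.p1.openshiftapps.com"
WILDCARD_DOMAIN="${ROUTE_HOST#*.}"   # removes the shortest prefix matching "*."
echo "${WILDCARD_DOMAIN}"
```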
Once you know the wildcard domain for your Developer Sandbox cluster, use it to
generate a sub-domain for your services. Remember that sub-domains must be
unique within the shared Developer Sandbox cluster. One method for creating
a unique sub-domain is to compose it in the format <DEPLOYMENT-NAME>-
<NAMESPACE-NAME>.<WILDCARD-DOMAIN>.
So, if you use the apps.sandbox.x8i5.p1.openshiftapps.com wildcard
domain, and assuming a deployment named hello in a namespace named
username-dev, you can compose your application hostname as hello-
username-dev.apps.sandbox.x8i5.p1.openshiftapps.com.
Assuming the application is mapped to the path /myapp, your
application will be available at the URL http://hello-username-
dev.apps.sandbox.x8i5.p1.openshiftapps.com/myapp.
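As a sketch, the hostname and URL composition described above looks like the following; all values are the examples used in this text, so substitute your own deployment, namespace, and wildcard domain.

```shell
# Compose <DEPLOYMENT-NAME>-<NAMESPACE-NAME>.<WILDCARD-DOMAIN>
DEPLOYMENT_NAME="hello"
NAMESPACE_NAME="username-dev"
WILDCARD_DOMAIN="apps.sandbox.x8i5.p1.openshiftapps.com"
APP_HOST="${DEPLOYMENT_NAME}-${NAMESPACE_NAME}.${WILDCARD_DOMAIN}"
# With the application mapped to the /myapp path:
echo "http://${APP_HOST}/myapp"
```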
Finish
Appendix B
Appendix B | Connecting to your Kubernetes Cluster
Guided Exercise
Outcomes
You should be able to:
• Install kubectl
• Connect to the OpenShift Developer Sandbox (if you are using the OpenShift Developer
Sandbox)
Instructions
The installation procedure for kubectl depends on your operating system.
• Copy the binary to a directory in your PATH and make sure it has executable
permissions.
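A minimal sketch of that step for Linux or macOS follows. The download URL is illustrative (dl.k8s.io hosts official Kubernetes releases; pick the release matching your platform and the course's Kubernetes version), and this sketch uses a stand-in file so the permission and PATH steps can be shown on their own.

```shell
# Download step (illustrative; choose the release for your platform), e.g.:
#   curl -LO "https://dl.k8s.io/release/v1.22.0/bin/linux/amd64/kubectl"
touch kubectl                       # stand-in for the downloaded binary in this sketch
chmod +x kubectl                    # give it executable permissions
mkdir -p "${HOME}/bin"              # any directory that is on your PATH works
mv kubectl "${HOME}/bin/kubectl"
test -x "${HOME}/bin/kubectl" && echo "kubectl installed"
```

After installing, verify the client with kubectl version --client.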
Transaction Summary
================================================================================
Install 1 Package
...output omitted...
• Give the binary file executable permissions, and move it to a directory in your
PATH.
Note
If you previously installed Minikube with Homebrew, kubectl should already be
installed on your computer. You can skip the installation step and directly verify that
kubectl is installed correctly.
• Create a new folder, such as C:\kube, to use as the destination directory for the
kubectl binary download.
– In the search box on the taskbar, type env, and select Edit the system
environment variables from the search results.
– Under the System variables section, select the row containing Path and
click Edit. This will open the Edit environment variable screen.
– Click New and type the full path of the folder containing kubectl.exe (for
example, C:\kube).
• Click Code and then click Download ZIP. A ZIP file with the repository content is
downloaded.
Note
If you want to regain full control over your cluster, you can switch the kubectl
context back to the default Minikube context, minikube. Use the command kubectl
config use-context minikube.
If you run the OpenShift Developer Sandbox script, it configures kubectl to run
commands against the OpenShift Developer Sandbox cluster. The script asks you to
provide information such as the cluster URL, username, or token.
In your command-line terminal, move to the DO100x-apps directory and run the
script located at ./setup/operating-system/setup.sh. Replace operating-system
with linux if you are using Linux, or macos if you are using macOS. Make sure the
script has executable permissions.
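For example, on Linux the substitution and invocation described above would look like the following sketch. Only the path composition runs here; the actual script requires the DO100x-apps repository and your cluster details.

```shell
# Pick the script for your operating system ("linux" or "macos").
OPERATING_SYSTEM="linux"
SETUP_SCRIPT="./setup/${OPERATING_SYSTEM}/setup.sh"
echo "${SETUP_SCRIPT}"
# Then, from the DO100x-apps directory:
#   chmod +x "${SETUP_SCRIPT}"     # make sure it has executable permissions
#   "${SETUP_SCRIPT}"              # run it
```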
• Windows
In your PowerShell terminal, move to the DO100x-apps directory and execute the
following command. This command allows you to run unsigned PowerShell scripts in
your current terminal session.
• Open a web browser and navigate to the OpenShift Developer Sandbox website.
Log in with your username and password.
• Click on your username in the upper right pane of the screen. A dropdown menu
opens.
• In the dropdown menu, click Copy login command. A new tab opens; if necessary,
log in again with your account by clicking DevSandbox.
• The token that you must provide to the script is displayed in your web browser.
• Keep these values. The script will ask you for them.
Run the appropriate script. The following instructions depend on your operating
system.
In your command-line terminal, move to the DO100x-apps directory and run the
script located at ./setup/operating-system/setup-sandbox.sh. Replace
operating-system with linux if you are using Linux, or macos if you are using
macOS. Make sure the script has executable permissions.
• Windows
In your PowerShell terminal, move to the DO100x-apps directory and execute the
following command. This command allows you to run unsigned PowerShell scripts in
your current terminal session.
Finish