DO280

DO280 course summary

quay.io/redhattraining/hello-world-nginx

quay.io/redhattraining/do180-httpd-app

quay.io/redhattraining/mysql-app

oc delete pod `oc get pod | grep -vi running | awk '{print $1}'`

####################
####################
#### CHAP 7 ########
####################
####################
oc image info registry.access.redhat.com/ubi9/nginx-120:1-86

skopeo login registry.ocp4.example.com:8443 -u developer

skopeo inspect docker://registry.ocp4.example.com:8443/ubi8/httpd-24:1-215

####################
####################

[student@workstation ~]$ oc login -u admin -p redhatocp

[student@workstation ~]$ oc debug node/master01

sh-4.4# chroot /host

sh-4.4# crictl ps --name httpd-24 -o yaml

sh-4.4# crictl images --digests 8ee59251acc93

####################
####################
oc set image deployment/httpd2 httpd-24=registry.ocp4.example.com:8443/ubi8/httpd-24:1-215

skopeo copy docker://registry.ocp4.example.com:8443/ubi8/httpd-24:1-209 \
docker://registry.ocp4.example.com:8443/ubi8/httpd-24:latest

oc rollout pause deployment/myapp

[user@host ~]$ oc set image deployment/myapp \
nginx-120=registry.access.redhat.com/ubi9/nginx-120:1-86
[user@host ~]$ oc set env deployment/myapp NGINX_LOG_TO_VOLUME=1
[user@host ~]$ oc set probe deployment/myapp --readiness --get-url http://:8080

oc rollout resume deployment/myapp

oc rollout undo deployment/myapp2

oc rollout status deployment/myapp2

oc rollout history deployment/myapp2

####################
####################
[user@host ~]$ oc annotate deployment/myapp2 \
kubernetes.io/change-cause="Image updated to 1-86"
deployment.apps/myapp2 annotated
[user@host ~]$ oc rollout history deployment/myapp2
deployment.apps/myapp2
REVISION CHANGE-CAUSE
1 <none>
3 <none>
4 <none>
5 Image updated to 1-86

####################
####################

oc rollout history deployment/myapp2 --revision 1

oc get istag -n openshift | grep php

oc describe is php -n openshift

oc create is keycloak

oc create istag keycloak:20.0 --from-image quay.io/keycloak/keycloak:20.0.2

oc create istag keycloak:19.0 --from-image quay.io/keycloak/keycloak:19.0

oc tag quay.io/keycloak/keycloak:20.0.3 keycloak:20.0

OpenShift can periodically verify whether a new image version is available. When it
detects a new version, it automatically updates the image stream tag.
To activate that periodic refresh, add the --scheduled option to the oc tag
command.

[user@host ~]$ oc tag quay.io/keycloak/keycloak:20.0.3 keycloak:20.0 --scheduled

To activate image pull-through, add the --reference-policy local option to the oc tag command.

[user@host ~]$ oc tag quay.io/keycloak/keycloak:20.0.3 keycloak:20.0 --reference-policy local

oc set image-lookup keycloak

oc set image-lookup

To disable the local lookup policy, add the --enabled=false option to the oc set
image-lookup command:

[user@host ~]$ oc set image-lookup keycloak --enabled=false

Use the oc set triggers command to configure an image trigger for the container
inside the Deployment object. Use the --from-image option to specify the image
stream tag to watch.

[user@host ~]$ oc set triggers deployment/mykeycloak --from-image keycloak:20 --containers keycloak

You can disable the image trigger by adding the --manual option to the oc set
triggers command:
[user@host ~]$ oc set triggers deployment/mykeycloak --manual --from-image keycloak:20 --containers keycloak

You re-enable the trigger by using the --auto option:


[user@host ~]$ oc set triggers deployment/mykeycloak --auto --from-image keycloak:20 --containers keycloak

You can remove the triggers from all the containers in the Deployment object by
adding the --remove-all option to the command:
[user@host ~]$ oc set triggers deployment/mykeycloak --remove-all

################################################################
################################################################
#################### DO280 ####################################
################################################################
################################################################
oc patch deployment hello -p '{"spec":{"template":{"spec":{"containers":[{"name":"hello-rhel7","resources":{"requests":{"cpu":"100m"}}}]}}}}'

[user@host ~]$ oc patch deployment hello --patch-file ~/volume-mount.yaml
deployment.apps/hello patched

##################################
##################################
############## Chap 2 ############
##################################
##################################

oc new-app --template=cache-service -p APPLICATION_USER=my-user

The oc process command uses parameter values to transform a template into a set of
related Kubernetes resource manifests.
oc process my-cache-service \
-p APPLICATION_USER=user1 -o yaml > my-cache-service-manifest.yaml

oc process -f my-cache-service.yaml \
-p APPLICATION_USER=user1 -o yaml > my-cache-service-manifest.yaml

oc process my-cache-service \
--param-file=my-cache-service-params.env | oc apply -f -

Use the oc process --parameters command to view the parameters of the mysql-persistent template.
[student@workstation ~]$ oc process --parameters mysql-persistent -n openshift

Create a text file named roster-parameters.env with the following content:

MYSQL_USER=user1
MYSQL_PASSWORD=mypasswd
IMAGE=registry.ocp4.example.com:8443/redhattraining/do280-roster:v2

Using a parameter file also helps version control software track changes.

Use the oc process command and the oc diff command to view the changes in the new manifests when compared to the live application.
[student@workstation ~]$ oc process roster-template --param-file=roster-parameters.env | oc diff -f -

helm show chart chart-reference

helm show values chart-reference

helm install release-name chart-reference --dry-run --values values.yaml


Use the helm list command to inspect releases on a cluster.
[user@host ~]$ helm list

helm history release_name

helm rollback release_name revision

helm repo add openshift-helm-charts https://charts.openshift.io/

helm search repo

NB: Example:

helm repo list

helm repo add do280-repo http://helm.ocp4.example.com/charts

helm search repo --versions

helm show values do280-repo/etherpad --version 0.0.6

helm install example-app do280-repo/etherpad -f values.yaml --version 0.0.6

Use the helm list command to verify the installed version of the etherpad chart.
helm list

helm search repo --versions

helm upgrade example-app do280-repo/etherpad -f values.yaml --version 0.0.7

Modify values.yaml:
---
image:
  repository: registry.ocp4.example.com:8443/etherpad
  name: etherpad
  tag: 1.8.18
route:
  host: etherpad.apps.ocp4.example.com
replicaCount: 3
---

helm upgrade production do280-repo/etherpad -f values.yaml

values.yaml
-----
image:
  repository: registry.ocp4.example.com:8443/etherpad
  name: etherpad
  tag: 1.8.18
route:
  host: etherpad.apps.ocp4.example.com
resources:
  limits:
    memory: 256Mi
  requests:
    memory: 128Mi
-----

##################################
##################################
############## Chap 3 ###########
##################################
##################################

To use the kubeconfig file to authenticate oc commands, copy the file to your workstation and set the KUBECONFIG environment variable to its absolute or relative path.
Then, you can run any oc command that requires cluster administrator privileges without logging in to OpenShift.
[user@host ~]$ export KUBECONFIG=/home/user/auth/kubeconfig
[user@host ~]$ oc get nodes

As an alternative, you can use the --kubeconfig option of the oc command.


[user@host ~]$ oc --kubeconfig /home/user/auth/kubeconfig get nodes

oc delete secret kubeadmin -n kube-system

Create the htpasswd file.


htpasswd -c -B -b /tmp/htpasswd student redhat123

Add or update credentials.


[user@host ~]$ htpasswd -b /tmp/htpasswd student redhat1234

Delete credentials.
[user@host ~]$ htpasswd -D /tmp/htpasswd student

[user@host ~]$ oc create secret generic htpasswd-secret --from-file htpasswd=/tmp/htpasswd -n openshift-config

[user@host ~]$ oc extract secret/htpasswd-secret -n openshift-config --to /tmp/ --confirm

[user@host ~]$ oc set data secret/htpasswd-secret --from-file htpasswd=/tmp/htpasswd -n openshift-config

[user@host ~]$ watch oc get pods -n openshift-authentication

-----
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_htpasswd_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpasswd-secret
-----

oc get oauth cluster -o yaml > oauth.yaml

oc replace -f oauth.yaml

To delete the user from htpasswd, run the following command:


[user@host ~]$ htpasswd -D /tmp/htpasswd manager

Update the secret to remove all remnants of the user's password.


[user@host ~]$ oc set data secret/htpasswd-secret --from-file htpasswd=/tmp/htpasswd -n openshift-config

Remove the user resource with the following command:


[user@host ~]$ oc delete user manager

[user@host ~]$ oc get identities | grep manager


my_htpasswd_provider:manager my_htpasswd_provider manager manager ...

[user@host ~]$ oc delete identity my_htpasswd_provider:manager

[user@host ~]$ oc adm policy add-cluster-role-to-user cluster-admin student

[student@workstation ~]$ oc get users

[student@workstation ~]$ oc get identity

Using --to - sends the secret to STDOUT rather than saving it to a file.
[student@workstation ~]$ oc extract secret/localusers -n openshift-config --to -

[student@workstation ~]$ oc edit oauth


----
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - ldap:
      ...output omitted...
    type: LDAP
  # Delete all lines below
  - htpasswd:
      fileData:
        name: localusers
    mappingMethod: claim
    name: myusers
    type: HTPasswd
----

Rule : Allowed actions for objects or groups of objects.

Role : Sets of rules. Users and groups can be associated with multiple roles.

Binding : Assignment of users or groups to a role.

Cluster RBAC : Roles and bindings that apply across all projects.

Local RBAC : Roles and bindings that are scoped to a given project. Local role
bindings can reference both cluster and local roles.
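
For illustration, a minimal local role and binding might look like the following sketch (the pod-reader name, sample namespace, and developer user are assumptions, not from the course):

----
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader          # assumed name for this sketch
  namespace: sample
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: sample
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader           # grants the rules above
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: developer            # assumed user for this sketch
----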

[user@host ~]$ oc adm policy add-cluster-role-to-user cluster-role username

[user@host ~]$ oc adm policy add-cluster-role-to-user cluster-admin username

[user@host ~]$ oc adm policy remove-cluster-role-from-user cluster-role username

[user@host ~]$ oc adm policy remove-cluster-role-from-user cluster-admin username

Default roles :

admin : Users with this role can manage all project resources, including granting
access to other users to access the project.

basic-user : Users with this role have read access to the project.

cluster-admin : Users with this role have superuser access to the cluster
resources. These users can perform any action on the cluster, and have full control
of all projects.

cluster-status : Users with this role can access cluster status information.

cluster-reader : Users with this role can access or view most of the objects but
cannot modify them.

edit : Users with this role can create, change, and delete common application
resources on the project, such as services and deployments. These users cannot act
on management resources such as limit ranges and quotas, and cannot manage access
permissions to the project.

self-provisioner : Users with this role can create their own projects.

view : Users with this role can view project resources, but cannot modify project
resources.
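
To inspect the exact rules that any of these roles grants, describe the cluster role; for example:
[user@host ~]$ oc describe clusterrole edit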

[user@host ~]$ oc policy add-role-to-user role-name username -n project

For example, run the following command to add the dev user to the basic-user
cluster role in the wordpress project.
[user@host ~]$ oc policy add-role-to-user basic-user dev -n wordpress

Regular users:
Most interactive OpenShift Container Platform users are regular users, and are
represented with the User object. This type of user represents a person with access
to the platform.

System users:
System user names start with a system: prefix, such as system:admin,
system:openshift-registry, and system:node:node1.example.com.

Service accounts
Service accounts are system users that are associated with projects.
By default, service accounts have no roles.
System account user names start with a system:serviceaccount:namespace: prefix,
such as system:serviceaccount:default:deployer and
system:serviceaccount:accounting:builder.
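
For example, to list the service accounts in a project (the default project is used here only for illustration):
[user@host ~]$ oc get serviceaccounts -n default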

Group Management:

[user@host ~]$ oc adm groups new lead-developers

Likewise, the following command adds the user1 user to the lead-developers group:
[user@host ~]$ oc adm groups add-users lead-developers user1

[student@workstation ~]$ oc get clusterrolebinding -o wide | grep -E 'ROLE|self-provisioner'

[student@workstation ~]$ oc describe clusterrolebindings self-provisioners

[student@workstation ~]$ oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth

A user without admin privileges cannot create new projects.

[student@workstation ~]$ oc policy add-role-to-user admin leader

[student@workstation ~]$ oc adm groups new dev-group

[student@workstation ~]$ oc adm groups add-users dev-group developer

[student@workstation ~]$ oc get groups

[student@workstation ~]$ oc policy add-role-to-group edit dev-group

[student@workstation ~]$ oc adm policy add-cluster-role-to-group --rolebinding-name self-provisioners self-provisioner system:authenticated:oauth
##################################
##################################
############## Chap 4 ###########
##################################
##################################

Encrypting Routes:

- Edge: TLS is terminated at the router; traffic between the router and the pods is not encrypted.
- Passthrough: the router forwards encrypted traffic to the pods without terminating TLS.
- Re-encryption: the router terminates TLS, then re-encrypts the traffic to the pods (see the sketch after this list).
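
These notes show no re-encrypt example; a minimal sketch, assuming a frontend service and certificate file names modeled on the edge example below:

[user@host ~]$ oc create route reencrypt frontend-re --service frontend \
--cert tls.crt --key tls.key --dest-ca-cert destca.crt \
--hostname frontend.apps.acme.com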

[user@host ~]$ oc create route edge --service api-frontend --hostname api.apps.acme.com --key api.key --cert api.crt

[student@workstation ~]$ oc expose svc todo-http --hostname todo-http.apps.ocp4.example.com

[student@workstation ~]$ oc create route edge todo-https --service todo-http --hostname todo-https.apps.ocp4.example.com

[student@workstation network-ingress]$ oc get svc todo-http -o jsonpath="{.spec.clusterIP}{'\n'}"

[student@workstation network-ingress]$ oc debug -t deployment/todo-http --image registry.ocp4.example.com:8443/ubi8/ubi:8.4

-----

openssl genrsa -out training.key 4096

openssl req -new -key training.key -out training.csr \
-subj "/C=US/ST=North Carolina/L=Raleigh/O=Red Hat/CN=todo-https.apps.ocp4.example.com"

openssl x509 -req -in training.csr -passin file:passphrase.txt \
-CA training-CA.pem -CAkey training-CA.key -CAcreateserial \
-out training.crt -days 1825 -sha256 -extfile training.ext

-----

oc create secret tls todo-certs --cert certs/training.crt --key certs/training.key

oc set volumes deployment/todo-https

oc create route passthrough todo-https --service todo-https --port 8443 --hostname todo-https.apps.ocp4.example.com

curl -vv -I --cacert certs/training-CA.pem https://todo-https.apps.ocp4.example.com


oc label namespace network-1 network=network-1

The following network policy allows traffic from any pods in namespaces with the
network=network-1 label into any pods and ports in the network-2 namespace.
This policy is less restrictive than the network-1 policy, because it does not
restrict traffic from any pods in the network-1 namespace.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: network-2-policy
  namespace: network-2
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network: network-1

--------

This example combines the selectors into one rule, and thereby allows access only
from pods with the app=mobile label in namespaces with the network=dev label.
This sample shows a logical AND statement.
...output omitted...
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network: dev
      podSelector:
        matchLabels:
          app: mobile
----
By changing the podSelector field in the previous example to be an item in the from
list, any pods in namespaces with the network=dev label or any pods with the
app=mobile label from any namespace can reach the pods that match the top-level
podSelector field.
This sample shows a logical OR statement.
...output omitted...
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network: dev
    - podSelector:
        matchLabels:
          app: mobile
---------

Deny-all Network Policies: An empty pod selector means that this policy applies to all pods in this project. The following policy blocks all ingress traffic.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny
spec:
  podSelector: {}
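
To apply the policy, assuming it is saved as deny-all.yaml (file and project names are placeholders):
[user@host ~]$ oc apply -f deny-all.yaml -n network-test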

oc annotate service hello service.beta.openshift.io/serving-cert-secret-name=hello-secret

oc annotate configmap ca-bundle service.beta.openshift.io/inject-cabundle=true

oc delete secret certificate-secret

oc delete secret/signing-key -n openshift-service-ca

##################################
##################################
############## Chap 5 ###########
##################################
##################################

Load Balancer Services

----
apiVersion: v1
kind: Service
metadata:
  name: example-lb
  namespace: example
spec:
  ports:
  - port: 1234
    protocol: TCP
    targetPort: 1234
  selector:
    name: example
  type: LoadBalancer
----
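
After creating the service, the assigned external address appears in the EXTERNAL-IP column (assuming the cluster provides a load-balancer implementation):
[user@host ~]$ oc get service example-lb -n example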

##################################
##################################
############## Chap 6 ###########
##################################
##################################

oc create resourcequota example --hard=count/pods=1

oc get quota example -o yaml

oc get quota
oc get resourcequota

oc get event

The selector key defines which namespaces the cluster resource quota applies to.
oc create clusterresourcequota example --project-label-selector=group=dev --hard=requests.cpu=10

oc create deployment test --image registry.ocp4.example.com:8443/redhattraining/hello-world-nginx

oc set resources deployment test --requests=cpu=1

oc scale deployment test --replicas=8

oc create quota one-cpu --hard=requests.cpu=1

oc get quota one-cpu -o yaml

Limit Ranges
The following YAML file shows an example limit range:
-----
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
  namespace: default
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container
-----
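
To inspect the resulting limits in the namespace, a standard check (not part of the original notes):
[user@host ~]$ oc describe limitrange mem-limit-range -n default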
Limit ranges can specify the following limit types:

Default limit
Use the default key to specify default limits for workloads.

Default request
Use the defaultRequest key to specify default requests for workloads.

Maximum
Use the max key to specify the maximum value of both requests and limits.

Minimum
Use the min key to specify the minimum value of both requests and limits.

Limit-to-request ratio
The maxLimitRequestRatio key controls the relationship between limits and requests.
If you set a ratio of two, then the resource limit cannot be more than twice the
request.
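
A sketch of a limit range that uses the max, min, and maxLimitRequestRatio keys (the values are assumptions for illustration):
-----
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
spec:
  limits:
  - max:
      cpu: "2"         # requests and limits cannot exceed 2 cores
    min:
      cpu: 100m        # requests and limits cannot go below 100 millicores
    maxLimitRequestRatio:
      cpu: "4"         # the limit cannot be more than 4x the request
    type: Container
-----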
Creating Limit Ranges
Consider a namespace with the following quota:
----
apiVersion: v1
kind: ResourceQuota
metadata:
name: example
namespace: example
spec:
hard:
limits.cpu: "8"
limits.memory: 8Gi
requests.cpu: "4"
requests.memory: 4Gi
----

oc set resources deployment example --limits=cpu=new-cpu-limit

Templates

oc adm create-bootstrap-project-template -o yaml > file

oc get limitrange,resourcequota -o yaml

oc create -f template -n openshift-config

oc describe clusterrolebinding.rbac self-provisioners

To disable self-provisioning, execute the following commands:

oc annotate clusterrolebinding/self-provisioners --overwrite rbac.authorization.kubernetes.io/autoupdate=false

oc patch clusterrolebinding.rbac self-provisioners -p '{"subjects": null}'

oc edit clusterrolebinding self-provisioners

Change the subject of the role binding from the system:authenticated:oauth group to
the provisioners group.

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  ...output omitted...
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: self-provisioner
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: provisioners
---
oc adm create-bootstrap-project-template -o yaml > template.yaml

oc get limitrange -n template-test -o yaml >> template.yaml

Edit the template.yaml file to perform the following operations:

- Apply the following changes to the subjects key in the admin role binding:
  - Change the kind key to Group.
  - Change the name key to provisioners.
- Move the limit range to immediately after the role binding definition.
- Replace the namespace: template-test text with the namespace: ${PROJECT_NAME} text.
- Remove any left-over content after the parameters block.
- Remove the following keys from the limit range and quota definitions:
  - creationTimestamp
  - resourceVersion
  - uid

oc create -f template.yaml -n openshift-config

oc edit projects.config.openshift.io cluster

Edit the resource to match the following content:


----
apiVersion: config.openshift.io/v1
kind: Project
metadata:
  ...output omitted...
  name: cluster
  ...output omitted...
spec:
  projectRequestTemplate:
    name: project-request
----

##################################
##################################
############## Chap 7 ###########
##################################
##################################

oc get clusteroperator

oc get catalogsource -n openshift-marketplace

oc get packagemanifests

oc describe packagemanifest lvms-operator -n openshift-marketplace

oc describe operator file-integrity-operator

oc patch installplan install-pmh78 --type merge -p '{"spec":{"approved":true}}' -n openshift-file-integrity

oc get csv metallb-operator.v4.14.0-202401151553 -o jsonpath="{.spec.customresourcedefinitions.owned[*].name}{'\n'}"

oc create namespace openshift-compliance

Create an operator-group.yaml file with the following content:


----
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: compliance-operator
  namespace: openshift-compliance
spec:
  targetNamespaces:
  - openshift-compliance
----

oc create -f operator-group.yaml

Create a subscription.yaml file with the following content:

----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: compliance-operator
  namespace: openshift-compliance
spec:
  channel: stable
  installPlanApproval: Automatic
  name: compliance-operator
  source: do280-catalog-cs
  sourceNamespace: openshift-marketplace
----

oc create -f subscription.yaml

oc project openshift-compliance

oc get csv

oc get csv compliance-operator.v1.4.0 -o jsonpath={.spec.install.spec.deployments} | jq

oc get all

oc get csv compliance-operator.v1.4.0 -o jsonpath={.metadata.annotations.alm-examples} | jq

oc create -f ~/DO280/labs/operators-review/scan-setting-binding.yaml

oc get compliancesuite,pod

##################################
##################################
############## Chap 8 ###########
##################################
##################################

Security Context Constraints (SCCs)

Red Hat OpenShift provides security context constraints (SCCs), a security mechanism that limits the access that a running pod in OpenShift has to the host environment.

[user@host ~]$ oc get scc


OpenShift provides the following default SCCs:

anyuid
hostaccess
hostmount-anyuid
hostnetwork
hostnetwork-v2
lvms-topolvm-node
lvms-vgmanager
machine-api-termination-handler
node-exporter
nonroot
nonroot-v2
privileged
restricted
restricted-v2

[user@host ~]$ oc describe pod console-5df4fcbb47-67c52 -n openshift-console | grep scc
openshift.io/scc: restricted-v2

[user@host ~]$ oc get deployment deployment-name -o yaml | oc adm policy scc-subject-review -f -

[user@host ~]$ oc create serviceaccount service-account-name

[user@host ~]$ oc adm policy add-scc-to-user SCC -z service-account

[user@host ~]$ oc set serviceaccount deployment/deployment-name service-account-name

[student@workstation ~]$ oc login -u developer -p developer https://api.ocp4.example.com:6443

[student@workstation ~]$ oc new-project appsec-scc

[student@workstation ~]$ oc new-app --name gitlab --image registry.ocp4.example.com:8443/redhattraining/gitlab-ce:8.4.3-ce.0

[student@workstation ~]$ oc get pods

[student@workstation ~]$ oc logs pod/gitlab-d89cd88f8-jwqbp

[student@workstation ~]$ oc login -u admin -p redhatocp https://api.ocp4.example.com:6443

[student@workstation]$ oc get deploy

[student@workstation]$ oc get deploy/gitlab -o yaml | oc adm policy scc-subject-review -f -

[student@workstation ~]$ oc create sa gitlab-sa

[student@workstation ~]$ oc adm policy add-scc-to-user anyuid -z gitlab-sa

[student@workstation ~]$ oc login -u developer -p developer

[student@workstation ~]$ oc set serviceaccount deployment/gitlab gitlab-sa

[student@workstation ~]$ oc get pods

[student@workstation ~]$ oc expose service/gitlab --port 80 --hostname gitlab.apps.ocp4.example.com

[student@workstation ~]$ oc get routes

[student@workstation ~]$ curl -sL http://gitlab.apps.ocp4.example.com/ | grep '<title>'

Binding Roles to Service Accounts

[user@host ~]$ oc adm policy add-role-to-user cluster-role -z service-account


You can optionally use -z to avoid specifying the system:serviceaccount:project
prefix when you assign the role to a service account that exists in the current
project.

[user@host ~]$ oc adm policy add-cluster-role-to-user cluster-role service-account

[student@workstation appsec-api]$ oc login -u admin -p redhatocp https://api.ocp4.example.com:6443

[student@workstation appsec-api]$ oc project configmap-reloader

[student@workstation appsec-api]$ oc create sa configmap-reloader-sa

Add the configmap-reloader-sa service account to the deployment in the reloader-deployment.yaml file.
----
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: configmap-reloader
  name: configmap-reloader
  namespace: configmap-reloader
spec:
  selector:
    matchLabels:
      app: configmap-reloader
      release: "reloader"
  template:
    metadata:
      labels:
        app: configmap-reloader
        release: "reloader"   # template labels must match the selector
    spec:
      serviceAccountName: configmap-reloader-sa
      containers:
      ...output omitted...
----

[student@workstation appsec-api]$ oc apply -f reloader-deployment.yaml

[student@workstation appsec-api]$ oc login -u developer -p developer https://api.ocp4.example.com:6443

[student@workstation appsec-api]$ oc new-project appsec-api

Assign the edit cluster role to the configmap-reloader-sa service account in the appsec-api project.
To assign the cluster role, create a local role binding by using the oc policy add-role-to-user command with the following options:

- The edit default cluster role.
- The system:serviceaccount:configmap-reloader:configmap-reloader-sa username, to reference the configmap-reloader-sa service account in the configmap-reloader project.
- The --rolebinding-name option, to use the reloader-edit name for the role binding.
- The -n appsec-api option, which is optional because you are already in the appsec-api project.

[student@workstation appsec-api]$ oc policy add-role-to-user edit \
system:serviceaccount:configmap-reloader:configmap-reloader-sa \
--rolebinding-name=reloader-edit -n appsec-api

[student@workstation appsec-api]$ oc apply -f ./config-app

[student@workstation appsec-api]$ oc get configmap config-app --output="jsonpath={.data.config\.yaml}"

[student@workstation appsec-api]$ curl -s https://config-app-appsec-api.apps.ocp4.example.com/config | jq

[student@workstation appsec-api]$ oc apply -f config-app/configmap.yaml

[student@workstation appsec-api]$ watch "curl -s https://config-app-appsec-api.apps.ocp4.example.com/config | jq"

Cron Job

[user@host ~]$ oc create job --dry-run=client -o yaml test --image=registry.access.redhat.com/ubi8/ubi:8.6 -- curl https://example.com

[user@host ~]$ oc create cronjob --dry-run=client -o yaml test --image=registry.access.redhat.com/ubi8/ubi:8.6 --schedule='0 0 * * *' -- curl https://example.com

----
apiVersion: batch/v1
kind: CronJob
metadata:
  name: wordpress-backup
spec:
  schedule: 0 2 * * 7
  jobTemplate:
    spec:
      template:
        spec:
          dnsPolicy: ClusterFirst
          restartPolicy: Never
          containers:
          - name: wp-cli
            image: registry.io/wp-maintenance/wp-cli:2.7
            resources: {}
            command:
            - bash
            - -xc
            args:
            - >
              wp maintenance-mode activate ;
              wp db export | gzip > database.sql.gz ;
              wp maintenance-mode deactivate ;
              rclone copy database.sql.gz s3://bucket/backups/ ;
              rm -v database.sql.gz ;
----

This combination of the command and args keys has the same effect as executing the
commands in a single line inside the container:
[user@host ~]$ bash -xc 'wp maintenance-mode activate ; wp db export | gzip > database.sql.gz ; wp maintenance-mode deactivate ; rclone copy database.sql.gz s3://bucket/backups/ ; rm -v database.sql.gz ;'

----
apiVersion: v1
kind: ConfigMap
metadata:
  name: maintenance
  labels:
    app: crictl
data:
  maintenance.sh: |
    #!/bin/bash
    NODES=$(oc get nodes -o=name)
    for NODE in ${NODES}
    do
      echo ${NODE}
      oc debug ${NODE} -- \
        chroot /host \
        /bin/bash -xc 'crictl images ; crictl rmi --prune'
      echo $?
    done
----

----
apiVersion: batch/v1
kind: CronJob
metadata:
  name: image-pruner
spec:
  schedule: 0 * * * *
  jobTemplate:
    spec:
      template:
        spec:
          dnsPolicy: ClusterFirst
          restartPolicy: Never
          containers:
          - name: image-pruner
            image: quay.io/openshift/origin-cli:4.14
            resources: {}
            command:
            - /opt/maintenance.sh
            volumeMounts:
            - name: scripts
              mountPath: /opt
          volumes:
          - name: scripts
            configMap:
              name: maintenance
              defaultMode: 0555
----
