Kubernetes Notes Part-1 (Concepts)
Ekant Mate (AWS APN Ambassador)

Summary
The provided web content is a comprehensive guide to Kubernetes
concepts, architecture, and management, aimed at helping users revise
for the Certified Kubernetes Administrator (CKA) exam and serving as a
quick reference for practical application.
Abstract
The web content serves as an installment in a series aimed at thoroughly
explaining Kubernetes fundamental concepts. It is designed to assist users
in preparing for the CKA exam and to function as a convenient reference
for quick revision. The guide covers the essential concepts of Kubernetes,
including its cluster architecture, API primitives, and networking
primitives. It delves into the roles of various components within a
Kubernetes cluster, such as nodes, pods, services, and controllers. The
content also discusses the management of application lifecycles, cluster
maintenance, security measures, storage strategies, and networking
essentials. Practical aspects such as designing and installing Kubernetes
clusters, using kubeadm for installation, and performing end-to-end
testing are also covered. The guide emphasizes the use of tools like
kubectl for interacting with Kubernetes clusters and managing resources.
It also explores the transition from Docker to containerd and the use of
CRI-compatible tools like crictl and nerdctl . The importance of etcd as
a distributed key-value store is highlighted, along with the roles of the
Kubernetes control plane components, including the API server,
controller manager, scheduler, and kubelet. The concept of replication for
high availability is explained through ReplicaSets and Deployments.
Various types of services, such as ClusterIP, NodePort, LoadBalancer, and
Ingress, are described for enabling communication within and outside the
cluster. The guide concludes with a discussion on namespaces for
resource organization and isolation, and the differences between
imperative and declarative management approaches in Kubernetes.

Opinions
The guide positions Kubernetes as a powerful tool for managing
containerized applications, emphasizing its ability to automate
deployment, scaling, and management.

The author suggests that understanding Kubernetes' architecture and
components is crucial for effective cluster management and for
passing the CKA exam.
There is an opinion that the transition from Docker to containerd,
facilitated by tools like crictl and nerdctl , is a significant evolution
in the container runtime space.

The guide conveys that using kubectl is an essential skill for
Kubernetes administrators, providing them with the means to control
and interact with their clusters.
The importance of etcd is highlighted, suggesting that it is a critical
component for maintaining the state of a Kubernetes cluster.

The author expresses that the use of namespaces is beneficial for
organizing resources and managing access controls in a multi-tenant
or multi-environment setup.
The guide advocates for the declarative management approach using
kubectl apply as a best practice for managing Kubernetes objects,
citing its advantages in tracking changes and maintaining
configuration consistency.

Kubernetes Notes Part-1 (Concepts of Kubernetes)

Revise your Kubernetes course from here!!!



This blog post will serve as one installment within a series of nine. Its
primary purpose is to comprehensively explain the fundamental concepts of
Kubernetes, offering valuable assistance for those preparing to undertake the
Kubernetes exam, specifically the Certified Kubernetes Administrator (CKA)
exam.

Moreover, this blog post will function as a convenient reference for quick
revision; instead of turning pages in notes, we can refer to these blog posts.
In times when components slip from memory, this resource will swiftly guide us
through the intricacies, ensuring a rapid resolution of any uncertainties.

Please find below the course content.

This restructured breakdown aims to encapsulate the essence of the CKA
course content in a comprehensive and understandable manner.

Essential Concepts of Kubernetes

Efficient Scheduling

Logging and Observability


Management of Application Life Cycle

Cluster maintenance

Robust Security Measures

Storage Strategies

Networking Essentials

Designing and Installing Kubernetes Clusters

Kubernetes Installation via Kubeadm

End-to-End Testing on Kubernetes Cluster

Mastering Troubleshooting Techniques

Exploring Advanced Topics

Let's get started with the basics!!

What is Kubernetes?

Imagine you have a lot of containers (like boxes) that hold different parts of
your applications. These containers need to work together smoothly, like a
well-orchestrated team, to run your apps.

Kubernetes is like the manager of this container team. It takes care of putting
containers on the right computers (nodes), making sure they have what they
need to work (resources), and replacing any that might get sick (failures). It
also helps the containers talk to each other and to the outside world.

Just like a manager helps a team work efficiently, Kubernetes helps your
containers run your apps smoothly and reliably, whether you have just a few
or a whole bunch of them.

Essential Concepts of Kubernetes


Cluster Architecture
API Primitives

Services and Other Network Primitives

Cluster Architecture :

a. Kubernetes Architecture

Diagram from Kubernetes Documentation

A Kubernetes cluster consists of a set of worker machines, called nodes, that
run containerized applications. Every cluster has at least one worker node.

The worker node(s) host the Pods that are the components of the application
workload. The control plane manages the worker nodes and the Pods in the
cluster. In production environments, the control plane usually runs across
multiple computers and a cluster usually runs multiple nodes, providing
fault-tolerance and high availability.

Kubernetes Architecture

This architecture hosts your applications in the form of containers in an
automated fashion, so that you can easily deploy as many instances of an
application as required and easily enable communication between the different
services within your application. There are many components involved that work
together to make this possible.

Let me give you an example here:

Imagine a gigantic port where cargo ships come to load and unload their
containers. This port is like a small city with many tasks happening at once.

Port Control Center (Control Plane):

This is like the main office overseeing everything.

They keep track of which ships are coming and going, making sure
everything goes smoothly.

They use plans (configurations) to organize the work and handle changes.
2. Ship Docks (Worker Nodes):

Think of these like special parking spots for ships.

Each dock has its own area and equipment (nodes).

Ships (containers) park at docks to load or unload their cargo.

3. Cargo Boxes (Pods):

These are like the containers you see on ships.

Each box holds different things and is labeled with where it needs to go.

Boxes can be grouped together or sent one by one.

4. Labels and Instructions (Services):

Labels on the boxes help sort and deliver them correctly.

Instructions (services) make sure each box (pod) goes to the right ship
(container) based on its label.

They guarantee that cargo ends up at the right place.

5. Loading Plans (Controllers):

Imagine the office gets requests to load or unload cargo.

Loading plans (controllers) manage how boxes move around.

If there’s a problem with a box or a ship, loading plans adjust to fix it.

6. Contents List (Configurations):

Before shipping, each box has a list of what’s inside.

Configurations are like these lists, telling the system how to set up
containers with resources and settings.

7. Storage Warehouse (Storage):

Warehouses store things before or after shipping.


Containers need space for data. Persistent Volumes are like these
warehouses.

8. Talk and Directions (Networking):

Ships and workers talk to each other to coordinate.

Containers in different boxes (pods) communicate over the network to
work together.

So, Kubernetes is like the smart management of this port. It arranges cargo
containers on ships, ensures they’re labeled correctly, and handles any
problems to make sure everything runs smoothly, just like how it manages
your application’s containers.

Master Node (Control Plane):

Kube-API Server: Exposes the Kubernetes API, handling user requests
and managing the control plane (orchestrating all operations within the
cluster).

etcd: Distributed key-value store that stores all cluster data and
configuration.

Kube-Scheduler: Watches for newly created pods and assigns them to nodes
based on resource requirements and constraints.

Controller Manager: Manages controller processes to maintain the desired
state and handle events. For example, the Node Controller is responsible for
onboarding new nodes to the cluster and for handling nodes that are destroyed
or become unavailable, while the Replication Controller ensures the desired
number of containers is running at all times within a replication group.

Cloud Controller Manager (optional): Interacts with the underlying cloud
provider's API for managing resources.

Kubernetes Node Components:

2. Worker Nodes:
Kubelet: Agent running on each node, responsible for managing containers
and their lifecycle (listens for instructions from the kube-apiserver).

Kube Proxy: Maintains network rules to allow communication between
pods and services across nodes within the cluster.

Container Runtime: Software responsible for running containers, like
Docker, containerd, or CRI-O.

3. Pods:

Smallest Deployable Unit: A group of one or more containers sharing
network and storage.

Pod Spec: Describes the containers, volumes, and metadata associated
with the pod.
4. Replication Controller / ReplicaSet / Deployment:

Replication Controller (Deprecated): Ensures a specified number of pod
replicas are running.

ReplicaSet: An improved version of the Replication Controller.

Deployment: Manages rolling updates and rollbacks, ensuring desired state.

5. Service:

ClusterIP: Exposes a set of pods to other services within the cluster.

NodePort: Exposes pods on a static port on each node.

LoadBalancer: Provides an externally accessible IP address, distributing
traffic to pods.

Ingress: Manages external access to services, usually providing HTTP
routing.

6. Namespace:

Virtual Cluster: Logical isolation within a cluster, useful for separating
different teams or environments.

7. ConfigMap and Secret:

ConfigMap: Stores non-sensitive configuration data in key-value pairs.

Secret: Stores sensitive information, like passwords or API keys, in
base64-encoded form (with optional encryption at rest); a minimal example of
both objects is sketched below.
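As a minimal sketch (the names, values, and file name below are illustrative, not taken from the notes), a ConfigMap and a Secret can be declared together and later consumed by pods through environment variables or volume mounts:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_COLOR: blue
  APP_MODE: prod
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQ=     # base64-encoded value of "password"

kubectl create -f app-config.yml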

Kubernetes previously supported various container runtime engines,
including Docker. When Kubernetes introduced the Container Runtime Interface
(CRI), any vendor could provide a container runtime engine as long as it
adhered to the OCI (Open Container Initiative) standards. OCI consists of the
image spec and the runtime spec: the image spec defines how an image should be
built, and the runtime spec defines how a container runtime should behave.
Because Docker does not support CRI, Kubernetes introduced dockershim to allow
Docker to keep working as a runtime engine in Kubernetes.

Docker itself uses containerd as its backend container runtime engine,
managed by the containerd daemon.

Containerd:

containerd is available as a daemon for Linux and Windows. It manages the
complete container lifecycle of its host system, from image transfer and
storage to container execution and supervision to low-level storage to
network attachments and beyond.

containerd is CRI-compatible and can work directly with Kubernetes. Due to
the complexity of maintaining dockershim, Kubernetes removed it completely in
release 1.24, and with it support for Docker as a runtime.
Now you can install containerd without installing Docker. We can use the ctr
CLI for basic operations, but it is not user-friendly and is made solely for
debugging.

$ctr
$ctr images pull docker.io/library/redis:alpine
$ctr run docker.io/library/redis:alpine redis

So the better alternative is nerdctl.

nerdctl provides a docker like cli for containerd

nerdctl supports docker compose

nerdctl supports the newest features in containerd: encrypted container
images, lazy pulling, P2P image distribution, image signing and verification,
and Kubernetes namespaces.

Below is an example of using nerdctl instead of docker. nerdctl supports the
same commands as docker.

$docker                                 ---> $nerdctl
$docker run --name redis redis:alpine   ---> $nerdctl run --name redis redis:alpine

Now lets talk about crictl.

Crictl:

This is used to interact with CRI-compatible container runtime engines from
the Kubernetes perspective. It works alongside the kubelet. It is mostly used
for debugging purposes and not to create containers; if the kubelet finds a
container that was created by crictl, it deletes that container.

Now, instead of using docker, we can use crictl for debugging as shown below.

$crictl
$crictl pull nginx
$crictl images
$crictl ps -a
$crictl exec -i -t 765sdmnbdsjfhgjhgu657657bkjbi87565ehggvjmhk ls
$crictl logs 765sdmnbdsjfhgjhg
$crictl pods ## Crictl is aware of pod hence pods can be listed ##

crictl provides a CLI for CRI-compatible container runtimes.

Installed separately.

Used to inspect and debug container runtimes, not (ideally) to create
containers. Works across different runtimes.

The differences between the docker CLI and crictl can be found here in more detail.

The cri-tools debugging guide can be found here.

So in short we will be using crictl moving forward.

b. ETCD For Beginners

What is ETCD?

ETCD is a distributed reliable key-value store that is Simple, Secure & Fast

What is a Key-Value Store?

A key-value store holds data as key-value pairs, unlike tabular or relational
databases that store data in rows and columns.

Install the ETCD from here.


When you start the etcd service it starts listening on port 2379.

etcd ships with a default client called etcdctl.

$./etcdctl set/put key1 value1

$./etcdctl get key1 #To get the value of key##

To check the version:

./etcdctl --version   # for API version 2
./etcdctl version     # for API version 3

etcdctl version: 3.3.11 API version: 2

With newer versions of etcd, the default API version is set to v3.

Etcd stores the information regarding the clusters such as nodes, pods,
configs, secrets, roles, bindings and others.

Every info you see when you run kubectl get command is from the etcd
server.

Etcd Setup:

# Setup manual
wget -q --https-only \
"https://github.com/coreos/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-am

# Setup by Kubeadm
kubectl get pods -n kube-system

ETCD — Commands (Optional)

ETCDCTL is the CLI tool used to interact with ETCD.


ETCDCTL can interact with the ETCD Server using 2 API versions — Version 2
and Version 3. By default it is set to use Version 2. Each version has a
different set of commands.

For example ETCDCTL version 2 supports the following commands:

etcdctl backup

etcdctl cluster-health

etcdctl mk

etcdctl mkdir

etcdctl set

Whereas the commands are different in version 3

etcdctl snapshot save

etcdctl endpoint health

etcdctl get

etcdctl put

To set the right version of the API, set the ETCDCTL_API environment variable:

export ETCDCTL_API=3

When API version is not set, it is assumed to be set to version 2. And version
3 commands listed above don’t work. When API version is set to version 3,
version 2 commands listed above don’t work.

Apart from that, you must also specify the path to the certificate files so
that ETCDCTL can authenticate to the ETCD API server. The certificate files
are available on the etcd master at the following paths. We discuss
certificates in more detail in the security section of this course, so don't
worry if this looks complex:

--cacert /etc/kubernetes/pki/etcd/ca.crt
--cert /etc/kubernetes/pki/etcd/server.crt
--key /etc/kubernetes/pki/etcd/server.key

So for the commands we must specify the ETCDCTL API version and path to
certificate files. Below is the final form:

kubectl exec etcd-master -n kube-system -- sh -c "ETCDCTL_API=3 etcdctl get /

c. Kube-API Server

It is the primary management component in kubernetes.

When we run a kubectl command, the kubectl utility reaches out to the
kube-apiserver. The kube-apiserver first authenticates the request.

It then retrieves the data from the etcd cluster and responds back with the
requested information.

Kube-API server flow: Authenticate User -> Validate Request -> Retrieve Data
-> Update ETCD -> Scheduler -> Kubelet.
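As a quick illustration (not part of the original notes), you can see this flow in action by talking to the kube-apiserver directly. kubectl proxy opens an authenticated local tunnel to the API server, and every kubectl command ultimately becomes an API call like the curl below:

kubectl proxy --port=8001 &
curl http://localhost:8001/api/v1/namespaces/default/pods   # same data as 'kubectl get pods'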

Install kube-api

#wget https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/l
#kube-apiserver.service
View api-server — kubeadm

kubectl get pods -n kube-system

View api-server options — kubeadm

cat /etc/kubernetes/manifests/kube-apiserver.yaml
OR
cat /etc/systemd/system/kube-apiserver.service
OR
ps -aux | grep kube-apiserver

d. Kube Controller Manager

The kube-controller-manager manages various controllers in Kubernetes. It
continuously monitors the state of the various components in the system and
brings the whole system to its desired working state.

Watch Status

Remediate Situation

Node monitor period: 5 sec.

Node monitor grace period: 40 sec.

Pod eviction timeout: 5 minutes.

Node controller: It is responsible for monitoring the status of nodes and
taking the necessary actions to keep the applications running. It does that
through the kube-apiserver.

Replication controller: It is responsible for monitoring the status of
replica sets and ensuring that the desired number of pods is available at all
times within the set; if a pod dies, it creates another one.

Like these, there are many other controllers, all of which are part of the
kube-controller-manager. When you install the controller manager, the
controllers below also get installed by default.

Deployment controller

Pv binding controller

Service account controller

PV protection controller

Job controller

Namespace controller

Endpoint controller

Installing kube-controller-manager :

#wget https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/l
# kube-controller-manager.service

ExecStart=/usr/local/bin/kube-controller-manager \\
--address=0.0.0.0 \\
--cluster-cidr=10.200.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
--leader-elect=true \\
--root-ca-file=/var/lib/kubernetes/ca.pem \\
--service-account-private-key-file=/var/lib/kubernetes/service-account-key.pe
--service-cluster-ip-range=10.32.0.0/24 \\
--use-service-account-credentials=true \\
--v=2
--node-monitor-period=5s
--node-monitor-grace-period=40s
--pod-eviction-timeout=5m0s

--controllers stringSlice     Default: [*]

A list of controllers to enable. '*' enables all on-by-default controllers, 'foo' enables the controller
named 'foo', '-foo' disables the controller named 'foo'.
All controllers: attachdetach, bootstrapsigner, clusterrole-aggregation, cronjob, csrapproving,
csrcleaner, csrsigning, daemonset, deployment, disruption, endpoint, garbagecollector,
horizontalpodautoscaling, job, namespace, nodeipam, nodelifecycle, persistentvolume-binder,
persistentvolume-expander, podgc, pv-protection, pvc-protection, replicaset, replicationcontroller,
resourcequota, root-ca-cert-publisher, route, service, serviceaccount, serviceaccount-token,
tokencleaner, ttl, ttl-after-finished
Disabled-by-default controllers: bootstrapsigner, tokencleaner

View kube-controller-manager — kubeadm

kubectl get pods -n kube-system

NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-hwrq9         1/1     Running   0          16m
kube-system   coredns-78fcdf6894-rzhjr         1/1     Running   0          16m
kube-system   etcd-master                      1/1     Running   0          15m
kube-system   kube-apiserver-master            1/1     Running   0          15m
kube-system   kube-controller-manager-master   1/1     Running   0          15m
kube-system   kube-proxy-lzt6f                 1/1     Running   0          16m
kube-system   kube-proxy-zm5qd                 1/1     Running   0          16m
kube-system   kube-scheduler-master            1/1     Running   0          15m
kube-system   weave-net-29z42                  2/2     Running   1          16m
kube-system   weave-net-snmdl                  2/2     Running   1          16m

View kube-controller-manager options — kubeadm

cat /etc/kubernetes/manifests/kube-controller-manager.yaml

spec:
containers:
- command:
- kube-controller-manager
- --address=127.0.0.1
- --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
- --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
- --controllers=*,bootstrapsigner,tokencleaner
- --kubeconfig=/etc/kubernetes/controller-manager.conf
- --leader-elect=true
- --root-ca-file=/etc/kubernetes/pki/ca.crt
- --service-account-private-key-file=/etc/kubernetes/pki/sa.key
- --use-service-account-credentials=true

View controller-manager options

cat /etc/systemd/system/kube-controller-manager.service

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--address=0.0.0.0 \\
--cluster-cidr=10.200.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
--leader-elect=true \\
--root-ca-file=/var/lib/kubernetes/ca.pem \\
--service-account-private-key-file=/var/lib/kubernetes/service-account-key.pe
--service-cluster-ip-range=10.32.0.0/24 \\
--use-service-account-credentials=true \\
--v=2
Restart=on-failure
RestartSec=5

OR

ps -aux | grep kube-controller-manager

e. Kube Scheduler

The Kubernetes scheduler is responsible for scheduling pods on nodes. The
scheduler is only responsible for deciding which pod goes on which node; it
doesn't actually place the pod on the node, that's the job of the kubelet. The
kubelet creates the pod and the scheduler only decides which pod goes where.
In Kubernetes, the scheduler decides which nodes the pods are placed on
depending on certain criteria. You may have pods with different resource
requirements. You can have nodes in the cluster dedicated to certain
applications. The scheduler looks at each pod and tries to find the best node
for it.

The scheduler ranks the nodes to identify the best fit for the pod. It uses a
priority function to assign a score to the nodes on a scale of zero to 10. You
can write your own customized scheduler. There are many more topics to
look at, such as resource requirements, limits, taints and tolerations, node
selectors, affinity rules etc.
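As a small illustrative sketch (the node label, node name, and pod name below are assumptions, not from the notes), a nodeSelector is one of the simplest ways to influence where the scheduler places a pod:

kubectl label node node01 disktype=ssd

apiVersion: v1
kind: Pod
metadata:
  name: fast-storage-pod
spec:
  nodeSelector:
    disktype: ssd          # the scheduler only considers nodes carrying this label
  containers:
  - name: nginx
    image: nginx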

Installing kube-scheduler

wget https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/li
kube-scheduler.service
ExecStart=/usr/local/bin/kube-scheduler \\
--config=/etc/kubernetes/config/kube-scheduler.yaml \\
--v=2

View kube-scheduler options — kubeadm

cat /etc/kubernetes/manifests/kube-scheduler.yaml
spec:
containers:
- command:
- kube-scheduler
- --address=127.0.0.1
- --kubeconfig=/etc/kubernetes/scheduler.conf
- --leader-elect=true

View kube-scheduler options

ps -aux | grep kube-scheduler


f. Kubelet

The kubelet in the Kubernetes worker node registers the node with a
Kubernetes cluster. When it receives instructions to load a container or a pod
on the node, it requests the container runtime engine, which may be Docker,
to pull the required image and run an instance. The kubelet then continues to
monitor the state of the pod and containers in it and reports to the kube API
server on a timely basis.
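For instance (a sketch; node01 is a placeholder name, and these commands assume kubectl access to the cluster), you can confirm that the kubelet has registered a node and is reporting its status with:

kubectl get nodes -o wide
kubectl describe node node01    # the Conditions section shows kubelet-reported health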

Kubeadm does not deploy kubelets; the kubelet must always be installed manually on each node.

Installing kubelet

wget https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/li
kubelet.service
ExecStart=/usr/local/bin/kubelet \\
--config=/var/lib/kubelet/kubelet-config.yaml \\
--container-runtime=remote \\
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
--image-pull-progress-deadline=2m \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--network-plugin=cni \\
--register-node=true \\
--v=2

View kubelet options

ps -aux | grep kubelet

g. Kube Proxy
Within a Kubernetes cluster, every pod can reach every other pod. This is
accomplished by deploying a pod networking solution to the cluster. A pod
network is an internal virtual network that spans across all the nodes in the
cluster and to which all the pods connect. Through this network, they're able
to communicate with each other. There are many solutions available for
deploying such a network.

Kube-proxy is a process that runs on each node in the Kubernetes cluster. Its
job is to look for new services, and every time a new service is created, it
creates the appropriate rules on each node to forward traffic to those services
to the backend pods.

One way it does this is using iptables rules. It creates an iptables rule on each
node in the cluster to forward traffic heading to the IP of the service, to the IP
of the actual pod.
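As a quick way to see this in practice (a sketch, assuming kube-proxy is running in its default iptables mode), you can list the NAT rules it programs on a node; each service's ClusterIP shows up as a rule that redirects traffic to pod IPs:

sudo iptables -t nat -L KUBE-SERVICES -n | head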

Installing kube-proxy

wget https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/li
kube-proxy.service
ExecStart=/usr/local/bin/kube-proxy \\
--config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5

View kube-proxy — kubeadm

kubectl get pods -n kube-system


kubectl get daemonset -n kube-system
The kubeadm tool deploys kube-proxy as pods on each node. In fact, it is
deployed as a DaemonSet, so a single pod is always deployed on each node in
the cluster.

h. Create and configure pods

Kubernetes does not deploy containers directly on the worker nodes. The
containers are encapsulated into a Kubernetes object known as pods. A pod is
a single instance of an application. A pod is the smallest object that you can
create in Kubernetes.

We're considering a basic scenario: a Kubernetes cluster with just one node.
Within that node, there’s a single instance of your application, neatly
contained within a Docker container, which is in turn encapsulated within a
pod. Now, let’s consider a situation where the number of users accessing your
application starts to grow. As a result, the demands on your application
increase, and you must adapt by expanding its capacity.

So, what’s the plan for scaling up? Well, the solution involves introducing
more instances of your web application to effectively distribute the load. But
where do these additional instances find their place? Do you insert them into
the existing container within the same pod? The answer is a resounding no.
Instead, the strategy is to create entirely new pods, each housing a distinct
instance of your application. It’s like setting up fresh rooms in a hotel to
accommodate an influx of guests — you don’t overcrowd one room; you
provide new rooms for comfort and efficiency.

To put it concisely, when confronted with the need to manage more users, the
approach isn’t about packing extra instances into a single pod. Instead, it
revolves around generating new pods, each hosting a unique instance of your
application. By following this method, your application adeptly handles the
heightened demand while maintaining a systematic and organized structure.

Pod creation in Node

As evident, we now observe two instances of our web application in action.
These instances are running within two distinct pods, both situated on the
same Kubernetes system or node. Now, let’s consider a scenario where the
number of users grows even further, and your existing node can no longer
handle the increased load adequately. What’s the solution? Well, in this case,
you have the flexibility to deploy new pods on a fresh node within the cluster.
By doing so, you’re essentially extending the physical capacity of the entire
cluster.

It’s worth noting that pods and containers share a tight-knit connection.
Typically, there’s a one-to-one correspondence between a pod and a
container, with each pod housing a single container that runs your
application. When you need to scale up, you create new pods; when scaling
down, you remove existing ones. The practice doesn’t involve adding extra
containers to an existing pod to manage the application’s growth.

Additionally, there are helper containers that serve a supportive role for your
web application. These helpers might handle tasks like processing user data
or managing uploaded files. If you wish these helper containers to work in
tandem with your main application container, you can place both within the
same pod. This arrangement ensures that when a new application container
is initiated, the helper container follows suit, and if one ceases operation, the
other does as well. Notably, due to sharing the same network and storage
space, these two containers can communicate directly by referencing each
other as local hosts. This setup streamlines their collaboration and resource
sharing.
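A minimal sketch of such a pod (the image names and the helper's role are illustrative assumptions, not from the notes): the main application container and its helper run side by side, sharing the pod's network and a common volume:

apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  volumes:
  - name: shared-logs              # scratch space shared by both containers
    emptyDir: {}
  containers:
  - name: web                      # main application container
    image: nginx
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  - name: log-helper               # helper container processing the app's logs
    image: busybox
    command: ["sh", "-c", "touch /var/log/nginx/access.log && tail -f /var/log/nginx/access.log"]
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx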
i. Kubectl:

kubectl is a command-line tool used for interacting with Kubernetes clusters.


Kubernetes is an open-source container orchestration platform that
automates the deployment, scaling, and management of containerized
applications. It allows you to manage containers and their configurations
across a cluster of machines.

kubectl acts as a control interface to communicate with the Kubernetes API
server, which manages the entire Kubernetes cluster. It allows users to
perform various operations on the cluster, such as deploying applications,
checking the status of resources, scaling applications, updating
configurations, and more.

Install kubectl:

To install kubectl, follow the official documentation from Kubernetes.

With kubectl, you can:

Create and manage Kubernetes resources like pods, services,
deployments, and replicasets.

Inspect the current state of your cluster and its components.

Manage rolling updates and rollbacks of application deployments.

View logs and debug running pods.

Scale applications horizontally by adding or removing replicas.

Execute commands inside containers running in pods.

To use kubectl , you typically install it locally on your machine and configure
it to connect to your Kubernetes cluster. It's a powerful tool that helps
developers and administrators manage their applications in a Kubernetes
environment effectively.
kubectl run nginx --image=nginx

kubectl get pods
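To illustrate the capabilities listed above (a sketch; the pod and deployment names are placeholders):

kubectl describe pod mypod                            # inspect the current state of a resource
kubectl logs mypod                                    # view logs of a running pod
kubectl exec -it mypod -- sh                          # execute commands inside a container
kubectl scale deployment myapp-deploy --replicas=5    # scale horizontally
kubectl rollout undo deployment myapp-deploy          # roll back a deployment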

Pods with yaml:

pods-definition.yml

apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    app: myapp
spec:

apiVersion: v1

This indicates the API version being used for the resource definition. In this
case, it's the core version of Kubernetes API.

kind: Pod

The kind field specifies the type of Kubernetes resource being defined, which
in this case is a Pod.

metadata:
  name: mypod
  labels:
    app: myapp

The metadata section contains information about the Pod. The name field sets
the name of the Pod to "mypod". The labels field assigns a label to the Pod,
in this case, labeling it as part of the "myapp" application.

spec:

The spec section defines the specification of the Pod, detailing how it should
be configured. This section is where we define the container
configuration.

Overall, this YAML file is defining a Kubernetes Pod with the following
characteristics:

Name: "mypod"

Labels: It has a label named "app" with the value "myapp".

Labels in Kubernetes are key-value pairs attached to resources such as Pods,
Services, Deployments, and more. They are used to organize and categorize
resources, making it easier to manage, select, and filter them. Labels provide
metadata that can be used for various purposes, including grouping related
resources, applying policies, and enabling efficient querying.

Here’s a breakdown of how labels work in Kubernetes:

Key-Value Pairs: Labels consist of key-value pairs. For example, app:
myapp is a label where app is the key and myapp is the value.

Flexibility: You can use labels to define any attributes you want.
Common labels include identifying the application name, environment
(e.g., development, production), version, and more.

Resource Association: Labels are associated with Kubernetes resources
in their metadata. For example, a Pod may have labels that describe its
purpose, and a Service may have labels to specify which Pods it should
route traffic to.

Selectors: Labels enable selectors, which are used to match resources.
For example, you can use a selector to target all Pods with a specific
label, or all resources that belong to a particular application.

Grouping and Organization: Labels allow you to group resources
logically. This is useful for managing and categorizing resources,
especially in larger and complex environments.

Filtering and Querying: Labels enable advanced resource selection. You
can use kubectl commands or create higher-level constructs like
Deployments and Services that use labels to determine which Pods to
manage or route traffic to.

For example, consider a scenario where you have multiple versions of an
application running in different environments. You could label Pods like this:

app=myapp

env=development or env=production

version=v1 , version=v2 , and so on

Then, you can use selectors to filter Pods based on these labels. For instance,
to select all production Pods of your app:

kubectl get pods -l app=myapp,env=production


In summary, labels are a powerful mechanism in Kubernetes for organizing
and managing resources, allowing you to categorize and interact with them in
a more granular and efficient manner.

To initiate the creation of a container within the pod, let’s craft a YAML file.
Within this file, the ‘spec’ section serves as a dictionary. Here, you’ll
introduce a property under ‘spec’ called ‘containers’. This property is
designed as a list — or an array — due to the potential for multiple containers
to coexist within a single pod. However, for this specific case, we intend to
include only a solitary element in this list, as our plan revolves around having
just a single container within the pod.

Notably, the dash preceding the ‘name’ signifies the start of an entry within
the list. In this instance, we’re focusing on the initial item. This item is
fashioned as a dictionary, thus requiring the inclusion of properties.
Specifically, ‘name’ and ‘image’ are added. ‘Name’ designates a distinctive
identifier, while ‘image’ points to ‘nginx’. This reference corresponds to the
Docker image located within the Docker repository.

apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    app: myapp
spec:
  containers:
  - name: nginx-controller
    image: nginx

Once the file is created run the following command to create the pod.

kubectl create -f pods-definition.yml


To see the pod created.

#kubectl get pods

#kubectl describe pod myapp-pod ##To get the details related to pod##

j. Replicaset

Why do we need a Replication Controller?

Let’s consider one scenario where we had a single pod running our
application. What if for some reason our application crashes and the pod
fails?

Users will no longer be able to access our application. To prevent users from
losing access to our application, we would like to have more than one
instance or pod running at the same time. That way, if one fails we still have
our application running on the other one. The Replication Controller helps us
run multiple instances of a single pod in the Kubernetes cluster, thus
providing high availability.

Also, even if you have a single pod, the Replication Controller can help by
automatically bringing up a new pod when the existing one fails. Thus, the
Replication Controller ensures that the specified number of pods are running
at all times even if it’s just one or 100.

Another reason we need Replication Controller is to create multiple pods to
share the load across them. The Replication Controller spans across multiple
nodes in the cluster.

It’s important to note that there are two similar terms. Replication Controller
and Replica Set. Both have the same purpose, but they’re not the same.
Replication Controller is the older technology that is being replaced by
Replica Set.

Replica Set is the new recommended way to set up replication. Replica Set
can also manage pods that were not created as part of the Replica Set
creation. For example, if there are pods created before the creation of the
Replica Set that match the labels specified in the selector, the Replica Set
will also take those pods into consideration when creating the replicas. The
selector is one of the major differences between Replication Controller and
Replica Set. The selector is not a required field in case of a Replication
Controller but it is still available.

replicationcontroller.yml

apiVersion: v1
kind: ReplicationController
metadata:
  name: myapp-rc
  labels:
    app: myapp
    type: fe
spec:
  template:
    metadata:
      name: myapp-rc
      labels:
        app: myapp
        type: fe
    spec:
      containers:
      - name: nginx-controller
        image: nginx
  replicas: 3

kubectl create -f replicationcontroller.yml


kubectl get replicationcontroller
replicaset.yml

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-rc
  labels:
    app: myapp
    type: fe
spec:
  template:
    metadata:
      name: myapp-rc
      labels:
        app: myapp
        type: fe
    spec:
      containers:
      - name: nginx-controller
        image: nginx
  replicas: 3
  selector:
    matchLabels:
      type: fe

kubectl explain replicaset


kubectl create -f replicaset.yml
or
kubectl create replicaset myapp-rc --image=nginx --replicas=3 -o yaml --dry-r
kubectl create -f replicaset.yml
kubectl get replicaset

The role of the Replica Set is to monitor the pods and if any of them were to
fail, deploy new ones. The Replica Set is in fact a process that monitors the
pods. There could be hundreds of other pods in the cluster running different
applications. This is where labeling our pods during creation comes in handy.
We could now provide these labels as a filter for Replica Set.
Under the selector section, we use the matchLabels filter and provide the
same label that we used while creating the pods. This way, the Replica Set
knows which pods to monitor. The same concept of labels and selectors is
used in many other places throughout Kubernetes.

Let's now look at how we scale the Replica Set. Say we started with three
replicas and in the future we decide to scale to six. How do we update our
Replica Set to scale to six replicas? Well, there are multiple ways to do it.
The first is to update the number of replicas in the definition file to six.
Then run the kubectl replace command, specifying the same file using the -f
parameter, and that will update the Replica Set to have six replicas.

kubectl replace -f replicaset.yml


OR
kubectl edit rs <replicaset-name>

The second way to do it is to run the kubectl scale command: use the
--replicas parameter to provide the new number of replicas and specify the
same file as input.

kubectl scale --replicas=6 -f replicaset.yml


kubectl edit rs <rs-name>
#To update the Image#
kubectl edit rs <rs-name>
# terminate the pods under this replicaset to get the new pods running with the new image
kubectl delete pod

However, remember that using the file name as input will not result in the
number of replicas being updated automatically in the file. In other words,
the number of replicas in the Replica Set’s definition file will still be three,
even though you scaled your Replica Set to have six replicas using the
kubectl scale command and the file as input.

k. Deployments

Kubernetes Deployments ensure your applications are resilient, always
available, and easily upgradable by managing multiple copies of your
components and smartly handling updates.

Say for example, you have a web server that needs to be deployed in a
production environment. You need not one, but many such instances of the
web server running for obvious reasons.

Secondly, whenever newer versions of the application become available on the
Docker Registry, you would like to upgrade your Docker instances
seamlessly. However, when you upgrade your instances, you do not want to
upgrade all of them at once. This may impact users accessing our applications
so you might want to upgrade them one after the other. And that kind of
upgrade is known as rolling updates.

Suppose one of the upgrades you performed resulted in an unexpected error
and you're asked to undo the recent change. You would like to be able to roll
back the changes that were recently carried out.
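For reference (a sketch; the deployment and container names match the example below, while the target image tag is illustrative), rolling updates and rollbacks are typically driven with these commands:

kubectl set image deployment/myapp-deploy nginx-controller=nginx:1.25   # rolling update to a new image
kubectl rollout status deployment/myapp-deploy                          # watch the update progress
kubectl rollout history deployment/myapp-deploy                         # list recorded revisions
kubectl rollout undo deployment/myapp-deploy                            # roll back to the previous revision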

deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  labels:
    app: myapp
    type: fe
spec:
  template:
    metadata:
      name: myapp-rc
      labels:
        app: myapp
        type: fe
    spec:
      containers:
      - name: nginx-controller
        image: nginx
  replicas: 3
  selector:
    matchLabels:
      type: fe

kubectl explain deployment


kubectl create -f deployment.yml
or
kubectl create deployment myapp-deploy --image=nginx --replicas=3 -o yaml --d
kubectl create -f deployment.yml
kubectl get deployment
kubectl get replicaset
kubectl get pods
kubectl get all

l. Services

In Kubernetes, think of a Service as a virtual front door to your application.
When you have multiple pods running your app, a Service helps make them
accessible to other parts of your cluster or even the internet. It gives your app
a consistent address, like a phone number, that others can use to find and
talk to your app. This way, even if pods come and go, the Service ensures your
app is always reachable.

Kubernetes Services enable communication between various components
within and outside of the application. Kubernetes Services help us connect
applications together with other applications or users.

For example, our application has groups of Pods running various sections,
such as a group serving frontend load to users, another group running backend
processes, and a third group connecting to an external data source. It is
Services that enable connectivity between these groups of Pods.

Services enable the frontend application to be made available to end users,
help communication between backend and frontend Pods, and help in
establishing connectivity to an external data source. Thus Services enable
loose coupling between microservices in our application.

There are other kinds of services available, which we will now discuss. The
first one is NodePort, where the service makes an internal port accessible on
a port on the node. The second is ClusterIP, and in this case, the service
creates a virtual IP inside the cluster to enable communication between
different services, such as a set of frontend servers to a set of backend
servers. The third type is a LoadBalancer, where it provisions a load
balancer for our application in supported cloud providers.

How do we, as an external user, access the webpage?

First of all, let's look at the existing setup. The Kubernetes node has an IP
address, and that is 192.168.1.2. My laptop is on the same network as well,
so it has an IP address, 192.168.1.10. The internal Pod network is in the
range 10.244.0.0, and the Pod has an IP 10.244.0.2. Clearly I cannot ping or
access the Pod at address 10.244.0.2 as it's in a separate network. So what
are the options to see the webpage? First, if we were to SSH into the
Kubernetes node at 192.168.1.2, from the node we would be able to access the
Pod's webpage by doing a curl.

Or, if the node has a GUI, we could fire up a browser and see the webpage at
the address http://10.244.0.2. But this is from inside the Kubernetes node,
and that's not what I really want. I want to be able to access the web server
from my own laptop without having to SSH into the node, simply by accessing
the IP of the Kubernetes node. So we need something in the middle to help us
map requests from our laptop, through the node, to the Pod running the web
container. This is where the Kubernetes Service comes into play. The
Kubernetes Service is an object, just like the Pods, ReplicaSets, or
Deployments that we worked with before. One of its use cases is to listen to
a port on the node and forward requests on that port to a port on the Pod
running the web application. This type of service is known as a NodePort
service.
Let's take a closer look at the service. If you look at it, there are three
ports involved. The port on the Pod where the actual web server is running is
80, and it is referred to as the targetPort because that is where the service
forwards the requests to. The second port is the port on the service itself;
it is simply referred to as the port. Remember, these terms are from the
viewpoint of the service.

The service is, in fact, like a virtual server inside the node. Inside the cluster it
has its own IP address, and that IP address is called the ClusterIP of the
service. And finally, we have the port on the node itself which we use to
access the web server externally, and that is as the node port. As you can see,
it is set to 30,008. That is because node ports can only be in a valid range
which by default is from 30,000 to 32,767.

service-nodeport.yml
apiVersion: v1
kind: Service
metadata:
  name: myapp-deploy
spec:
  type: NodePort
  ports:
  - targetPort: 80
    port: 80
    nodePort: 30008
  selector:
    app: myapp
    type: front-end

kubectl create -f service-nodeport.yml


kubectl get services
curl http://192.168.1.2:30008

So far we talked about a service mapped to a single Pod. But that’s not the
case all the time. What do you do when you have multiple Pods? In a
production environment, you have multiple instances of your web application
running for high availability and load balancing purposes. In this case, we
have multiple similar Pods running our web application. They all have the
same labels with a key app and set to a value of my app.

The same label is used as a selector during the creation of the service. So,
when the service is created, it looks for a matching Pod with the label and
finds three of them. The service then automatically selects all the three Pods
as endpoints to forward the external request coming from the user. You don’t
have to do any additional configuration to make this happen.

And if you’re wondering what algorithm it uses to balance the load across the
three different Pods, it uses a random algorithm. Thus, the service acts as a
built-in load balancer to distribute load across different Pods. And finally,
let's look at what happens when the Pods are distributed across multiple
nodes.

When we create a service, without us having to do any additional
configuration, Kubernetes automatically creates a service that spans across all
the nodes in the cluster and maps the target port to the same node port on all
the nodes in the cluster. This way you can access your application using the
IP of any node in the cluster and using the same port number which in this
case is 30,008.

To summarize, in any case, whether it be a single Pod on a single node,
multiple Pods on a single node, or multiple Pods on multiple nodes, the
service is created exactly the same without you having to do any additional
steps during the service creation. When Pods are removed or added, the
service is automatically updated, making it highly flexible and adaptive. Once
created, you won’t typically have to make any additional configuration
changes.
m. Services ClusterIP

A Kubernetes service can help us group the pods together and provide a
single interface to access the pods in a group. For example, a service created
for the backend pods will help group all the backend pods together and
provide a single interface for other pods to access this service. The requests
are forwarded to one of the pods under the service randomly.

Similarly, create additional services for Redis and allow the backend pods to
access the Redis systems through the service. This enables us to easily and
effectively deploy a microservices based application on Kubernetes cluster.
Each layer can now scale or move as required without impacting
communication between the various services. Each service gets an IP and
name assigned to it inside the cluster, and that is the name that should be
used by other pods to access the service. This type of service is known as
cluster IP.

service-clusterip.yml

apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: ClusterIP
  ports:
  - targetPort: 80
    port: 80
  selector:
    app: myapp
    type: front-end

kubectl create -f service-clusterip.yml


kubectl get services

n. Services LoadBalancer

In Kubernetes, a Service acts as a way to expose your application to the
network. It provides a stable, network-specific IP address that you can use to
access your application, even if the underlying pods that make up your
application change or move.

In the Kubernetes world, a Service of type LoadBalancer is configured to
automatically create and manage a cloud-specific load balancer. This external
load balancer helps distribute incoming network traffic across the pods that
are part of your Service, ensuring that your application remains responsive
and available, and doesn't put too much pressure on any single pod.

So, to sum it up, a Kubernetes Service with a LoadBalancer ensures that your
application is accessible from a stable IP address while also spreading out
incoming traffic across multiple pods, maintaining a balanced workload and
enhancing the availability and performance of your app.

service-loadbalancer.yml

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: LoadBalancer
  ports:
  - targetPort: 80
    port: 80
    nodePort: 30008

kubectl create -f service-loadbalancer.yml


kubectl get services

o. Namespaces

In Kubernetes, namespaces are like separate rooms within a big house. They
help keep things organized by letting you group and manage your
application’s resources separately from others. Each namespace has its own
space for pods, services, and other components, making it easier to avoid
naming conflicts, set access controls, and manage multiple projects without
interference.
The default namespace is created automatically by Kubernetes when the
cluster is first set up. Kubernetes creates a set of pods and services for its
internal purpose, such as those required by the networking solution, the DNS
service, etc. To isolate these from the user and to prevent you from
accidentally deleting or modifying these services, Kubernetes creates them
under another name space created at cluster startup named kube-system. A
third name space created by Kubernetes automatically is called kube-public.
This is where resources that should be made available to all users are created.

If your environment is small or you're learning and playing around with a
small cluster, you shouldn't really have to worry about namespaces; you could
continue to work in the default namespace. However, as and when you grow and
use a Kubernetes cluster for enterprise or production purposes, you may want
to consider the use of namespaces. You can create your own namespaces as well.

For example, if you wanted to use the same cluster for both dev and
production environment, but at the same time, isolate the resources between
them, you can create a different name space for each of them. That way, while
working in the dev environment, you don’t accidentally modify resources in
production. Each of these name spaces can have its own set of policies that
define who can do what. You can also assign quota of resources to each of
these name spaces. That way, each name space is guaranteed a certain
amount and does not use more than its allowed limit.
To access resources from another namespace, use the
servicename.namespace.svc.cluster.local format.

You’re able to do this because when the service is created, a DNS entry is
added automatically in this format. Looking closely at the DNS name of the
service, the last part, cluster.local, is the default domain name of the
Kubernetes cluster. SVC is the sub domain for service followed by the name
space, and then the name of the service itself.
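For example (a sketch; the service and namespace names are assumptions), a pod in the default namespace could reach a service named db-service in the dev namespace like this:

curl http://db-service.dev.svc.cluster.local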

To list the pods in the default namespace, use the command below.

kubectl get pods

To list the pods in another namespace, use the command below.

kubectl get pods --namespace=kube-system

To create a pod in the default namespace or in another namespace, use the
commands below.

# In default namespace #
kubectl create -f pod.yml
# In dev name space #
kubectl create -f pod.yml --namespace=dev

You can also use the namespace field in the metadata section of the pod
creation file to create a pod in the dev namespace.
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: dev
  labels:
    app: myapp
spec:
  containers:
  - name: nginx
    image: nginx

To create a namespace,

apiVersion: v1
kind: Namespace
metadata:
  name: dev

kubectl create -f namespace.yml


or
kubectl create namespace dev
kubectl get pods --namespace=dev

To set the default namespace as dev,

kubectl config set-context $(kubectl config current-context) --namespace=dev


kubectl get pods

To get pods in all namespaces


kubectl get pods --all-namespaces

ResourceQuota:

To limit resources in a namespace, create a resource quota. To create one,
start with a definition file for the resource quota, specify the namespace for
which you want to create the quota, and then under spec, provide your limits,
such as 10 pods, 10 CPU units, 10 gigabytes of memory, etcetera.

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: dev
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 5Gi
    limits.cpu: "10"
    limits.memory: 10Gi

kubectl create -f compute-quota.yml

p. Imperative vs declarative

In the infrastructure-as-code world, an example of an imperative approach to
provisioning infrastructure would be a set of instructions written step by
step, such as provisioning a VM named "web server", installing the NGINX
software on it, editing the configuration file to use port 8080, setting the
path to the web files, downloading the source code of the application from
Git, and finally starting the NGINX server. So here we're saying what is
required and also how to get things done. In the declarative approach, we
declare our requirements. For instance, all we say is that we need a VM by the
name "web server," with the NGINX software on it, with the port set to 8080,
the path to the web files defined, and where the source code of the
application is stored. Everything that needs to be done to get this
infrastructure in place is done by the system or the software; you don't have
to provide step-by-step instructions. Orchestration tools like Ansible,
Puppet, Chef, or Terraform fall into this category.

In the imperative approach, what happens if the first time only half of the
steps were executed? What happens if you provide the same set of
instructions again to complete the remaining step? To handle such situations,
there will be many additional steps involved, such as checks to see if
something already exists and taking an action based on the results of that
check.

In the Kubernetes world, the imperative way of managing infrastructure is
using commands like the kubectl run command to create a pod, the kubectl
create deployment command to create a deployment, and the kubectl expose
command to create a service to expose a deployment. The kubectl edit command
may be used to edit an existing object. For scaling a deployment or replica
set, use the kubectl scale command, and for updating the image on a
deployment, we use the kubectl set image command (examples are shown below).
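As a quick reference (a sketch; the names and image tags are placeholders), the imperative commands mentioned above look like this:

kubectl run nginx --image=nginx                                   # create a pod
kubectl create deployment webapp --image=nginx --replicas=3       # create a deployment
kubectl expose deployment webapp --port=80 --type=NodePort        # create a service for it
kubectl scale deployment webapp --replicas=6                      # scale
kubectl set image deployment/webapp nginx=nginx:1.25              # update the image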

We have also used object configuration files to manage objects, such as
creating an object using the kubectl create -f command, with the -f option to
specify the object configuration file, editing an object using the kubectl
replace command, and deleting an object using the kubectl delete command. All
of these are imperative approaches to managing objects in Kubernetes: we're
saying exactly how to bring the infrastructure to our needs by creating,
updating, or deleting objects. The declarative approach would be to create a
set of files that defines the expected state of the applications and services
on a Kubernetes cluster, and with a single kubectl apply command, Kubernetes
should be able to read the configuration files and decide by itself what needs
to be done to bring the infrastructure to the expected state. So in the
declarative approach, you will run the kubectl apply command for creating,
updating, or deleting an object.

The apply command will look at the existing configuration and figure out what changes need to be made to the system. So let's look at these in a bit more detail. Within the imperative approach, there are two ways. The first is using imperative commands, such as the run, create, or expose commands to create new objects, and the edit, scale, and set commands to update existing objects. These commands help in quickly creating or modifying objects, as we don't have to deal with YAML files, and they are helpful during the certification exam.

However, they are limited in functionality and will require forming long and
complex commands for advanced use cases, such as creating a multi-
container pod or deployment.
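A common workaround is to let an imperative command generate a YAML skeleton with the --dry-run=client flag and then edit the file for the advanced parts; the pod and file names below are only examples:

kubectl run mypod --image=nginx --dry-run=client -o yaml > mypod.yaml
# edit mypod.yaml to add the second container, then create the object
kubectl create -f mypod.yaml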

Secondly, these commands are run once and forgotten; they are only available in the session history of the user who ran them. It is hard for another person to figure out how these objects were created, hard to keep track of changes, and therefore difficult to work with these commands in large or complex environments. That is where managing objects with object configuration files can help.

Creating object definition files, also called configuration files or manifest files, lets us write down exactly what we need the object to look like in YAML format and use the kubectl create command to create the object. We then always have the YAML file with us, and it can be saved in a code repository like Git. We can put a change review and approval process around these files so that a change is reviewed and approved before it is applied to a production environment. In the future, if a change is to be made, for instance editing the image name to another version, there are different ways to go about it.

One way is to use the kubectl edit command and specify the object name. When this command is run, it opens a YAML definition file similar to the one you used to create the object, but with some additional fields, such as the status fields, which are used to store the status of the pod. This is not the file you used to create the object; it is a similar pod definition file held in Kubernetes memory. You can make changes to this file, save and quit, and those changes will be applied to the live object. However, note that there is a difference between the live object and the definition file that you have locally.

The change you made using the kubectl edit command is not really recorded
anywhere. After the change is applied, you’re only left with your local
definition file, which in fact has the old image name in it. In the future, say
you or a teammate decide to make a change to this object, unaware that a
change was made using the kubectl edit command, when the new change is
applied, the previous change to the image is lost. So you can use the kubectl
edit command if you are making a change and you’re sure that you’re not
going to rely on the object configuration file in the future.

But a better approach to that is to first edit the local version of the object
configuration file, with the required changes, that is, by updating the image
name here, and then running the kubectl replace command to update the
object. This way, going forward, the changes made are recorded and can be
tracked as part of the change review process.
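As a sketch of that workflow, with placeholder file and image names:

# 1. Update the image tag in the local definition file
#    (e.g. change image: nginx:1.17 to image: nginx:1.18 in nginx.yaml)
# 2. Push the change to the cluster
kubectl replace -f nginx.yaml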

The declarative approach is where you use the same object configuration files that we've been working on, but instead of the create or replace commands, we use the kubectl apply command to manage objects. The kubectl apply command is intelligent enough to create an object if it doesn't already exist. If there are multiple object configuration files, as there usually are, then you may specify a directory as the path instead of a single file; that way, all the objects are created at once. When changes are to be made, we simply update the object configuration file and run the kubectl apply command again. This time, it knows that the object exists, so it only updates the object with the new changes. It never throws an error saying the object already exists or the updates cannot be applied; it will always figure out the right approach to updating the object. Going forward, for any changes to the application, whether updating images or fields of existing configuration files, or adding new configuration files for new objects, all we do is update our local directory with the changes and the kubectl apply command takes care of the rest.
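For instance, assuming all manifests are kept in a local directory named configs/ (an illustrative path), a single command applies them all:

kubectl apply -f configs/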

# Imperative

kubectl run --image=nginx nginx
kubectl create deployment --image=nginx nginx
kubectl expose deployment nginx --port 80
kubectl scale deployment nginx --replicas=5
kubectl set image deployment nginx nginx=nginx:1.18
kubectl create -f nginx.yaml
kubectl replace -f nginx.yaml
kubectl delete -f nginx.yaml

# Create Objects
kubectl run --image=nginx nginx
kubectl create deployment --image=nginx nginx
kubectl expose deployment nginx --port 80

# Update Objects
kubectl edit deployment nginx
kubectl scale deployment nginx --replicas=5
kubectl set image deployment nginx nginx=nginx:1.18

# Declarative
kubectl apply -f nginx.yaml

q. Kubectl apply
The apply command in Kubernetes takes your local configuration file, the
existing object definition in the Kubernetes cluster, and the previously
applied configuration. It then decides what changes need to be made. When
you use the apply command, if the object doesn't exist yet, it's created along
with a configuration that holds its status.

This live configuration in the cluster stores information about the object,
regardless of how you create it. When you use kubectl apply , something
extra happens. The YAML config you wrote gets transformed into JSON
format and becomes the last applied configuration. For future updates, all
three configurations (local, live, and last applied) are compared to determine
what changes are needed.

This last applied configuration is stored as an annotation, kubectl.kubernetes.io/last-applied-configuration, on the live object in the Kubernetes cluster. Remember, this process is specific to the apply command; the kubectl create and kubectl replace commands don't work this way. It's crucial to avoid mixing imperative and declarative approaches when managing Kubernetes objects. After using apply, changes are decided by comparing the local definition, the live configuration, and the last applied configuration.
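If you want to inspect what was recorded, kubectl can print the last applied configuration for an object; the deployment name here is only an example:

kubectl apply view-last-applied deployment/nginx

The same information is visible as the annotation on the live object in the output of kubectl get deployment nginx -o yaml.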

Next up: Efficient Scheduling…

Please follow me for more such innovative blogs.

