Kubernetes

What is Kubernetes?

• Kubernetes is an orchestration engine and open-source platform for managing containerized applications.
• Its responsibilities include container deployment, scaling and descaling of containers, and container load balancing.
• Kubernetes is not a replacement for Docker, but it can be considered a replacement for Docker Swarm. Kubernetes is significantly more complex than Swarm and requires more work to deploy.
• Born at Google and written in Go (Golang); open-sourced in 2014 and later donated to the CNCF (Cloud Native Computing Foundation).
• Kubernetes v1.0 was released on July 21, 2015.
• Current stable release (at the time of writing): v1.18.0.
Kubernetes Features

The features of Kubernetes are as follows:

• Automated Scheduling: Kubernetes provides an advanced scheduler to launch containers on cluster nodes based on their resource requirements and other constraints, while not sacrificing availability.
• Self-Healing Capabilities: Kubernetes replaces and reschedules containers when nodes die. It also kills containers that don't respond to user-defined health checks and doesn't advertise them to clients until they are ready to serve.
• Automated Rollouts & Rollbacks: Kubernetes rolls out changes to the application or its configuration while monitoring application health to ensure it doesn't kill all your instances at the same time. If something goes wrong, Kubernetes lets you roll back the change.
• Horizontal Scaling & Load Balancing: Kubernetes can scale the application up and down as required with a simple command, using a UI, or automatically based on CPU usage (see the example commands below).
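As an illustration of the scaling feature, the commands below scale a deployment manually and then automatically based on CPU usage; the deployment name web is hypothetical, while the kubectl subcommands and flags are standard:

kubectl scale deployment web --replicas=5                            # manual horizontal scaling
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80   # automatic scaling based on CPU usage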
Kubernetes Features

5. Service Discovery & Load Balancing
With Kubernetes, there is no need to worry about networking and communication, because Kubernetes automatically assigns IP addresses to containers and a single DNS name for a set of containers that can load-balance traffic inside the cluster. Containers get their own IP, so you can put a set of containers behind a single DNS name for load balancing.

6. Storage Orchestration
With Kubernetes, you can mount the storage system of your choice. You can opt for local storage, choose a public cloud provider such as GCP or AWS, or use a shared network storage system such as NFS, iSCSI, etc.

Container Orchestration
• Container orchestration automates the deployment, management, scaling, and networking of containers across the cluster. It is focused on managing the life cycle of containers.
• Enterprises that need to deploy and manage hundreds or thousands of Linux® containers and hosts can benefit from container orchestration.
• Container orchestration is used to automate the following tasks at scale:
 Configuring and scheduling of containers
 Provisioning and deployment of containers
 Redundancy and availability of containers
 Scaling up or removing containers to spread application load evenly across the host infrastructure
 Movement of containers from one host to another if there is a shortage of resources in a host, or if a host dies
 Allocation of resources between containers
 Exposure of services running in a container to the outside world
 Load balancing and service discovery between containers

Swarm vs Kubernetes
Both Kubernetes and Docker Swarm are important tools used to deploy containers inside a cluster, but there are subtle differences between the two:

Feature | Kubernetes | Docker Swarm
Installation & Cluster Configuration | Installation is complicated, but once set up the cluster is very strong | Installation is very simple, but the cluster is not very strong
GUI | GUI is the Kubernetes Dashboard | There is no GUI
Scalability | Highly scalable & scales fast | Highly scalable & scales 5x faster than Kubernetes
Auto-Scaling | Kubernetes can do auto-scaling | Docker Swarm cannot do auto-scaling
Rolling Updates & Rollbacks | Can deploy rolling updates & does automatic rollbacks | Can deploy rolling updates, but no automatic rollbacks
Data Volumes | Can share storage volumes only with other containers in the same Pod | Can share storage volumes with any other container
Logging & Monitoring | In-built tools for logging & monitoring | 3rd-party tools like ELK should be used for logging & monitoring
Kubernetes
• Kubernetes, also known as K8s, is an open-source container management tool.
• It provides a container runtime, container orchestration, container-centric infrastructure orchestration, self-healing mechanisms, service discovery, load balancing and container (de)scaling.
• Initially developed by Google for managing containerized applications in a clustered environment, it was later donated to the CNCF.
• Written in Golang.
• It is a platform designed to completely manage the life cycle of containerized applications and services using methods that provide predictability, scalability, and high availability.

Kubernetes
Certified Kubernetes Distributions
• Cloud managed: EKS by AWS, AKS by Microsoft and GKE by Google
• Self managed: OpenShift by Red Hat and Docker Enterprise
• Local dev/test: MicroK8s by Canonical, Minikube
• Vanilla Kubernetes: the core Kubernetes project (bare metal), kubeadm
• Special builds: K3s by Rancher, a lightweight K8s distribution for edge devices

Online emulator: https://labs.play-with-k8s.com/
https://www.cncf.io/certification/software-conformance/

Kubernetes Cluster
A Kubernetes cluster is a set of physical or virtual machines and other infrastructure resources that are needed to run your containerized applications. Each machine in a Kubernetes cluster is called a node.
There are two types of node in each Kubernetes cluster:
Master node(s): hosts the Kubernetes control plane components and manages the cluster
Worker node(s): runs your containerized applications

[Diagram: one master node and two worker nodes]

Kubernetes Architecture
[Diagram: the master node running the API Server, Scheduler, Controller Manager and etcd, accessed via kubectl and the Web UI; Worker 01 and Worker 02 each running kubelet, kube-proxy and Docker, hosting pods with containers]

Kubernetes Architecture
https://blog.alexellis.io/kubernetes-in-10-minutes/
Kubernetes Architecture
Kubernetes Master
• The master is responsible for managing the complete cluster.
• You can access the master node via the CLI, GUI, or API.
• The master watches over the nodes in the cluster and is responsible for the actual orchestration of containers on the worker nodes.
• For achieving fault tolerance, there can be more than one master node in the cluster.
• It is the access point from which administrators and other users interact with the cluster to manage the scheduling and deployment of containers.
• It has four components: etcd, Scheduler, API Server and Controller Manager.

Kubernetes Architecture
Kubernetes Master
ETCD
• etcd is a distributed, reliable key-value store used by Kubernetes to store all the data used to manage the cluster.
• When you have multiple nodes and multiple masters in your cluster, etcd stores all that information on all the nodes in the cluster in a distributed manner.
• etcd is responsible for implementing locks within the cluster to ensure there are no conflicts between the masters.

Scheduler
• The scheduler is responsible for distributing work or containers across the nodes of the cluster.

Kubernetes Architecture
Kubernetes Master
API Server
• Masters communicate with the rest of the cluster through the kube-apiserver, the main access point to the control plane.
• It validates and executes the user's REST commands.
• kube-apiserver also makes sure that the configurations in etcd match the configurations of containers deployed in the cluster.

Controller Manager
• The controllers are the brain behind orchestration.
• They are responsible for noticing and responding when nodes, containers or endpoints go down. The controllers make decisions to bring up new containers in such cases.
• The kube-controller-manager runs control loops that manage the state of the cluster by continually reconciling the observed state with the desired state.
Kubernetes Architecture
Kubernetes Master
kubectl
• kubectl is the command line utility with which we interact with the K8s cluster.
• It uses the APIs provided by the API server to interact with the cluster.
• Also known as the kube command line tool, kubectl or kube control.
• Used to deploy and manage applications on a Kubernetes cluster.

• kubectl run nginx - used to deploy an application on the cluster.
• kubectl cluster-info - used to view information about the cluster.
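On a kubeadm-style cluster, the master components described above can be seen running as pods in the kube-system namespace; this is just a quick check (a related command, kubectl get all -n kube-system, appears later in the deck):

kubectl get pods -n kube-system -o wide   # typically shows etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kube-proxy and coredns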
Kubernetes Architecture
Kubernetes Worker
Kubelet
• Worker nodes have the kubelet agent, which is responsible for interacting with the master to provide health information about the worker node and to carry out the actions requested by the master on the worker nodes.

Kube-proxy
• The kube-proxy is responsible for ensuring that network traffic is routed properly to internal and external services as required, based on the rules defined by network policies in the kube-controller-manager and other custom controllers.
Kubernetes
What is K3s?
• K3s is a fully compliant Kubernetes distribution with the following enhancements:
 Packaged as a single binary
 <100 MB memory footprint
 Supports ARM and x86 architectures
 Lightweight storage backend based on sqlite3 as the default storage mechanism, replacing the heavier etcd server
 Docker is replaced in favour of the containerd runtime
 Inbuilt Ingress controller (Traefik)

Kubernetes
K3s Architecture
[Diagram]

Kubernetes
K3s Setup using VirtualBox
• Use 3 VMs (1 master and 2 workers). All VMs should have the bridged network adapter enabled.
• Create a host-only network adapter (DHCP disabled) and connect all VMs to it. This is to have static IPs for all VMs in the cluster. Make sure static IPs are configured in each VM in the same subnet range as the host-only network.
On the master
• bash -c "curl -sfL https://get.k3s.io | sh -"
• TOKEN=$(cat /var/lib/rancher/k3s/server/node-token)
• IP=<IP of the master node where the API server is running>
On the worker nodes
• bash -c "curl -sfL https://get.k3s.io | K3S_URL=\"https://$IP:6443\" K3S_TOKEN=\"$TOKEN\" sh -"

https://medium.com/better-programming/local-k3s-cluster-made-easy-with-multipass-108bf6ce577c
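Assuming the install commands above succeed, the cluster can be verified from the master (K3s bundles kubectl):

kubectl get nodes -o wide   # the master and both workers should report a Ready status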
Kubernetes

Kubernetes Pod
• The basic scheduling unit in Kubernetes. Pods are often ephemeral.
• Kubernetes doesn't run containers directly; instead it wraps one or more containers into a higher-level structure called a pod.
• It is also the smallest deployable unit that can be created, scheduled, and managed on a Kubernetes cluster. Each pod is assigned a unique IP address within the cluster.
• Pods can hold multiple containers as well, but you should limit yourself where possible. Because pods are scaled up and down as a unit, all containers in a pod must scale together, regardless of their individual needs. This can lead to wasted resources.

[Diagram: a pod (10.244.0.2) wrapping containers, e.g. nginx, mysql, wordpress]

Kubernetes Pods
• Any containers in the same pod share the same storage volumes and network resources and communicate using localhost.
• K8s uses YAML to describe the desired state of the containers in a pod. This is also called a Pod Spec. These objects are passed to the kubelet through the API server.
• Pods are used as the unit of replication in Kubernetes. If your application becomes too popular and a single pod instance can't carry the load, Kubernetes can be configured to deploy new replicas of your pod to the cluster as necessary.

[Diagram: a pod (10.244.0.2) running two containers]
Using the example from the figure above, you could run curl 10.1.0.1:3000 to communicate with one container and curl 10.1.0.1:5000 to communicate with the other container from other pods. However, if you wanted to talk between containers, for example calling the top container from the bottom one, you could use http://localhost:3000.
Kubernetes
Scaling Pods
• All containers within the pod get scaled together.
• You cannot scale individual containers within a pod. The pod is the unit of scale in K8s.
• The recommended way is to have only one container per pod. Multi-container pods are very rare.
• In K8s, an initContainer is sometimes used as a second container inside a pod (see the sketch below).

initContainers are exactly like regular containers, except that they always run to completion. Each init container must complete successfully before the next one starts.
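A minimal sketch of a pod that uses an init container; the pod name, container names and the busybox command are illustrative only:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-init            # hypothetical name
spec:
  initContainers:
  - name: init-step              # must run to completion before the app container starts
    image: busybox
    command: ["sh", "-c", "echo preparing; sleep 5"]
  containers:
  - name: app
    image: nginx
    ports:
    - containerPort: 80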
Kubernetes
Imperative vs Declarative commands
• The Kubernetes API defines a lot of objects/resources, such as namespaces, pods, deployments, services, secrets, config maps etc.
• There are two basic ways to deploy objects in Kubernetes: imperatively and declaratively.

Imperatively
• Involves using any of the verb-based commands like kubectl run, kubectl create, kubectl expose, kubectl delete, kubectl scale and kubectl edit
• Suitable for testing and interactive experimentation

Declaratively
• Objects are written in YAML files and deployed using kubectl create or kubectl apply
• Best suited for production environments
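For example, the same nginx pod can be created either way (both commands appear elsewhere in this deck):

kubectl run nginx --image nginx          # imperative
kubectl apply -f pod-definition.yml      # declarative, using the manifest shown later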
Kubernetes
Manifest / Spec file
• K8s object configuration files, written in YAML or JSON.
• They describe the desired state of your application in terms of Kubernetes API objects. A file can include one or more API object descriptions (manifests); multiple resource definitions are separated by "---".

# manifest file template
apiVersion - version of the Kubernetes API used to create the object
kind - kind of object being created
metadata - data that helps uniquely identify the object, including a name and optional namespace
spec - desired state of the object

apiVersion: v1
kind: Pod
metadata:
  name: …
spec:
  containers:
  - name: …
---
apiVersion: v1
kind: Pod
metadata:
  name: …
spec:
  containers:
  - name: …

Kubernetes
Manifest files - Man Pages
List all K8s API supported objects and versions:
kubectl api-resources
kubectl api-versions

Man pages for objects:
kubectl explain <object>.<option>
kubectl explain pod
kubectl explain pod.apiVersion
kubectl explain pod.spec
Kubernetes
Once the cluster is set up…
kubectl version
kubectl get nodes -o wide

Kubernetes
Once the cluster is set up…
kubectl cluster-info
kubectl cluster-info dump --output-directory=/path/to/cluster-state   # Dump current cluster state to /path/to/cluster-state

Kubernetes
Creating Pods
--dry-run doesn't run the command but shows what changes the command would make to the cluster.
kubectl run <pod-name> --image <image-name>
kubectl run nginx --image nginx --dry-run=client

-o yaml shows the command output in YAML; a shortcut to create a declarative YAML from imperative commands.
kubectl run nginx --image nginx --dry-run=client -o yaml

https://kubernetes.io/docs/reference/kubectl/cheatsheet/
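For reference, the output of the dry-run command above looks roughly like the following; it is trimmed here and the exact fields can vary slightly between kubectl versions:

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx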
Kubernetes
Creating Pods: Imperative way
kubectl run test --image nginx --port 80   # also exposes port 80 of the container
kubectl get pods -o wide
kubectl describe pod test   # display extended information about the pod

[Diagram: pod "test" with one container, IP 10.244.2.2]

Kubernetes
Creating Pods
curl <ip-of-pod>

[Diagram: pod with IP 10.244.2.2]

Kubernetes
Creating Pods: Declarative way
• kubectl create -f pod-definition.yml
• kubectl apply -f pod-definition.yml   # if the manifest file is changed/updated after deployment and the pod needs to be re-deployed
• kubectl delete pod <pod-name>

# pod-definition.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: webapp
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80

Kubernetes
Pod Networking
[Diagram]

Kubernetes
Replication Controller
• A single pod may not be sufficient to handle the user traffic. Also, if this only pod goes down because of a failure, K8s will not bring it up again automatically.
• In order to prevent this, we would like to have more than one instance or pod running at the same time inside the cluster.
• Kubernetes supports different controllers (ReplicationController & ReplicaSet) to handle multiple instances of a pod, e.g. 3 replicas of an nginx webserver.
• A Replication Controller ensures high availability by replacing unhealthy/dead pods with new ones so that the required number of replicas is always running inside the cluster.
• So, does that mean you can't use a replication controller if you plan to have a single pod? No! Even if you have a single pod, the replication controller can help by automatically bringing up a new pod when the existing one fails.
• Another reason we need a replication controller is to create multiple pods to share the load across them.
Kubernetes
Behind the scenes…
When a command is given through kubectl, for example

$ kubectl run nginx --image=nginx --replicas=3

the following happens (the accompanying diagrams show the master components API Server, etcd, Scheduler and Controller Manager, plus the kubelets on slave nodes 01 and 02):

1. The API Server updates the deployment details in etcd.
2. The Controller Manager identifies its workload through the API Server and creates a ReplicaSet.
3. The ReplicaSet creates the required number of pods and updates etcd. Note the status of the pods: they are still in the PENDING state.
4. The Scheduler identifies its workload through the API Server and decides the nodes onto which the pods are to be scheduled. At this stage, the pods are assigned to a node.
5. The kubelet identifies its workload through the API Server and understands that it needs to deploy some pods on its node.
6. The kubelet instructs the Docker daemon to create the pods. At the same time it updates the status to 'Pods CREATING' in etcd through the API Server.
7. Once the pods are created and running, the kubelet updates the pod status to RUNNING in etcd through the API Server.
Kubernetes
Labels and Selectors
Labels
• Labels are key/value pairs that are attached to objects, such as pods.
• Labels allow you to logically group certain objects by giving them various names.
• You can label pods, services, deployments and even nodes.

kubectl get pods -l environment=production
kubectl get pods -l environment=production,tier=frontend

https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/

Kubernetes
Labels and Selectors
• If labels are not mentioned while deploying K8s objects using imperative commands, the label is auto-set as app: <object-name>

kubectl run nginx --image nginx
kubectl get pods --show-labels

Adding labels:
kubectl label pod nginx environment=dev

https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/

Kubernetes
Labels and Selectors
Selectors
• Selectors allow you to filter objects based on labels.
• The API currently supports two types of selectors: equality-based and set-based.
• A label selector can be made of multiple requirements which are comma-separated.

Equality-based selectors
• Equality- or inequality-based requirements allow filtering by label keys and values.
• Three kinds of operators are admitted: =, ==, !=
• Used by Replication Controllers and Services
Kubernetes
Labels and Selectors
Selectors
• Selectors allow you to filter objects based on labels.
• The API currently supports two types of selectors: equality-based and set-based.
• A label selector can be made of multiple requirements which are comma-separated.

Set-based selectors
• Set-based label requirements allow filtering keys according to a set of values.
• Three kinds of operators are supported: in, notin and exists (only the key identifier).
• Used by Deployments, ReplicaSets and DaemonSets

kubectl get pods -l 'environment in (production, qa)'
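As a sketch, the two selector styles look like this inside manifests; the label keys and values are illustrative:

# equality-based, e.g. in a Service
selector:
  app: webapp

# set-based, e.g. in a ReplicaSet or Deployment
selector:
  matchExpressions:
  - key: environment
    operator: In
    values:
    - production
    - qa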
Kubernetes
ReplicaSet
• ReplicaSets are a higher-level API that gives the ability to easily run multiple instances of a given pod.
• A ReplicaSet ensures that the exact number of pods (replicas) is always running in the cluster by replacing any failed pods with new ones.
• The replica count is controlled by the replicas field in the resource definition file.
• A ReplicaSet uses set-based selectors whereas a ReplicationController uses equality-based selectors.

Kubernetes
Pod vs ReplicaSet

# pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: webapp
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80

# replicaset.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
  labels:
    app: webapp
    type: front-end
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      name: nginx-pod
      labels:
        app: webapp
    spec:
      containers:
      - name: nginx-container
        image: nginx
        ports:
        - containerPort: 80

Kubernetes
ReplicaSet Manifest file

Kubernetes
ReplicaSet
kubectl create -f replica-set.yml
kubectl get rs -o wide
kubectl get pods -o wide
(replica-set.yml contains the same ReplicaSet manifest shown above)

Kubernetes
ReplicaSet
• kubectl edit replicaset <replicaset-name> - edit a live replicaset, e.g. its image or replicas
• kubectl delete replicaset <replicaset-name> - delete a replicaset
• kubectl delete -f replica-set.yml
• kubectl get all - get pods, replicasets, deployments and services all in one shot
• kubectl replace -f replicaset-definition.yml - replaces the pods with the updated definition file
• kubectl scale --replicas=6 -f replicaset-definition.yml - scale using the definition file
• kubectl scale --replicas=6 replicaset <replicaset-name> - scale using the name of the replicaset
Kubernetes
Deployments

Kubernetes
Deployments
• A Deployment provides declarative updates for Pods and ReplicaSets.
• You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate.
• It is similar to a ReplicaSet but with advanced functions.
• A Deployment is the recommended way to deploy a pod or ReplicaSet.
• By default, Kubernetes performs deployments with the rolling update strategy.
• Below are some of the key features of deployments:
 Easily deploy a ReplicaSet
 Rolling updates of pods
 Rollback to previous deployment versions
 Scale the deployment
 Pause and resume the deployment

Kubernetes
Deployment Strategy
• Whenever we create a new deployment, K8s triggers a rollout.
• A rollout is the process of gradually deploying or upgrading your application containers.
• For every rollout/upgrade, a version history is created, which helps in rolling back to a working version in case of an update failure.
• In Kubernetes there are a few different ways to release updates to an application:
• Recreate: terminate the old version and release the new one. The application experiences downtime.
• RollingUpdate: release the new version in a rolling-update fashion, one after the other. It's the default strategy in K8s. No application downtime is required.
• Blue/green: release a new version alongside the old version, then switch traffic.

spec:
  replicas: 10
  strategy:
    type: Recreate

spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2
Kubernetes
Rolling Update Strategy
• By default, a deployment ensures that only 25% of your pods are unavailable during an update and does not update more than 25% of the pods at a given time.
• It does not kill old pods until/unless enough new pods come up.
• It does not create new pods until a sufficient number of old pods are killed.
• There are two settings you can tweak to control the process: maxUnavailable and maxSurge. Both have a default value of 25%. A combined example is sketched below.
• The maxUnavailable setting specifies the maximum number of pods that can be unavailable during the rollout process. You can set it to an actual number (integer) or a percentage of desired pods.
  Let's say maxUnavailable is set to 40%. When the update starts, the old ReplicaSet is scaled down to 60%. As soon as new pods are started and ready, the old ReplicaSet is scaled down again and the new ReplicaSet is scaled up. This happens in such a way that the total number of available pods (old and new, since we are scaling up and down) is always at least 60%.
• The maxSurge setting specifies the maximum number of pods that can be created over the desired number of pods.
  If we use the same percentage as before (40%), the new ReplicaSet is scaled up right away when the rollout starts. The new ReplicaSet will be scaled up in such a way that it does not exceed 140% of the desired pods. As old pods get killed, the new ReplicaSet scales up again, making sure it never goes over 140% of the desired pods.
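Putting the two settings together, a Deployment strategy block could look like this; the 40% values simply mirror the example above:

spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 40%
      maxSurge: 40%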
Kubernetes
Deployments
• kubectl create deployment nginx --image nginx --dry-run -o yaml
• kubectl create -f deployment.yml --record   (--record is optional; it just records the events in the deployment history)
• kubectl get deployments

# deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 10
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx

Kubernetes
Deployments
• kubectl describe deployment <deployment-name>

Kubernetes
Deployments
• kubectl get pods -o wide
• kubectl edit deployment <deployment-name> - perform a live edit of the deployment
• kubectl scale deployment <deployment-name> --replicas=2
• kubectl apply -f deployment.yml - redeploy a modified YAML file, e.g. replicas changed to 5, image to nginx:1.18

Kubernetes
Deployments
• kubectl rollout status deployment <deployment-name>
• kubectl rollout history deployment <deployment-name>

Kubernetes
Deployments
• kubectl rollout undo deployment <deployment-name>
• kubectl rollout undo deployment <deployment-name> --to-revision=1
• kubectl rollout pause deployment <deployment-name>
• kubectl rollout resume deployment <deployment-name>
• kubectl delete -f <deployment-yaml-file> - deletes the deployment and related dependencies
• kubectl delete all --all - deletes pods, replicasets, deployments and services in the current namespace
Kubernetes
Namespaces
Namespaces are Kubernetes objects which partition a single Kubernetes cluster into multiple virtual clusters.
• Kubernetes clusters can manage large numbers of unrelated workloads concurrently, and organizations often choose to deploy projects created by separate teams to shared clusters.
• With multiple deployments in a single cluster, there is a high chance of deleting deployments that belong to different projects.
• So namespaces allow you to group objects together so you can filter and control them as a unit/group.
• Namespaces provide a scope for names. Names of resources need to be unique within a namespace, but not across namespaces.
• Each Kubernetes namespace provides the scope for the Kubernetes names it contains, which means that an object name only has to be unique within its own namespace.

Kubernetes
Namespaces
By default, a Kubernetes cluster is created with the following three namespaces:
• default: the default namespace for users. By default, all resources created in a Kubernetes cluster are created in the default namespace.
• kube-system: the namespace for objects created by the Kubernetes system/control plane. Any changes to objects in this namespace could cause irreparable damage to the cluster itself.
• kube-public: the namespace for resources that are publicly readable by all users. This namespace is generally reserved for cluster usage like ConfigMaps and Secrets.

Kubernetes
Namespaces
kubectl get namespaces
kubectl get all -n kube-system   (lists available objects under a specific namespace)
kubectl get all --all-namespaces   (lists available objects under all available namespaces)
Kubernetes
Namespaces
Create a namespace
kubectl create ns dev          # Namespace for the Developer team
kubectl create ns qa           # Namespace for the QA team
kubectl create ns production   # Namespace for the Production team

Deploy objects in a namespace
kubectl run nginx --image=nginx -n dev
kubectl get pod/nginx -n dev
kubectl apply --namespace=qa -f pod.yaml

Delete a namespace
kubectl delete ns production

[Diagram: front-end and back-end workloads grouped by namespace]
Kubernetes

Kubernetes Services
• Services logically connect pods across the cluster to enable networking between them.
• The lifetime of an individual pod cannot be relied upon; everything from its IP address to its very existence is prone to change.
• Kubernetes doesn't treat its pods as unique, long-running instances; if a pod encounters an issue and dies, it's Kubernetes' job to replace it so that the application doesn't experience any downtime.
• Services make sure that even after a pod (back-end) dies because of a failure, the newly created pods will be reached by the pods that depend on them (front-end) via services. In this case, front-end applications always find the back-end applications via a simple service (using the service name or IP address) irrespective of their location in the cluster.
• Services point to pods directly using labels. Services do not point to deployments or ReplicaSets. So, all pods with the same label get attached to the same service.
• 3 types: ClusterIP, NodePort and LoadBalancer

Kubernetes
Services
• Pods' lifecycles are erratic; they come and go by Kubernetes' will.
• Not healthy? Killed.
• Not in the right place? Cloned, and killed.
• So how can you send a request to your application if you can't know for sure where it lives?
• The answer lies in services. Services are tied to the pods using pod labels and provide a stable endpoint for the users to reach the application.
• When requesting your application, you don't care about its location or about which pod answers the request.
Kubernetes
Services: ClusterIP
• ClusterIP is the default Kubernetes service type.
• It gives you a service inside your cluster that other apps inside your cluster can access.
• It restricts access to the application to within the cluster itself; there is no external access.
• Useful when a front-end app wants to communicate with a back-end.
• Each ClusterIP service gets a unique IP address inside the cluster.
• Similar to --link in Docker.

Services point to pods directly using labels!

Kubernetes
When services are not available
• Imagine 2 pods on 2 separate nodes, node-1 and node-2, with their local IP addresses.
• pod-nginx can ping and connect to pod-python using its internal IP 1.1.1.3.

Kubernetes
When services are not available
• Now let's imagine pod-python dies and a new one is created.
• Now pod-nginx cannot reach pod-python on 1.1.1.3 because its IP has changed to 1.1.1.5.
• How do we remove this dependency?

Kubernetes
Enter services…
• Services logically connect pods together.
• Unlike pods, a service is not scheduled on a specific node. It spans the whole cluster.
• pod-nginx can always safely connect to pod-python using the service IP 1.1.10.1 or the DNS name of the service (service-python).
• Even if the python pod gets deleted and recreated, the nginx pod can still reach the python pod using the service, but not directly via the IP of the python pod.

Kubernetes
Enter services…
• Multiple ClusterIP services
[Diagram]
Kubernetes
ClusterIP
[Diagram]

Kubernetes
Services: ClusterIP
[Diagram: an Alpine pod (10.244.0.24) reaching an Nginx pod (10.244.0.22) through the ClusterIP service ingress-nginx (10.20.0.18), port 80 -> targetPort 80]

# clusterservice.yml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: nginx-backend

# pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: backend-pod
  labels:
    app: nginx-backend
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80

Kubernetes
Services: ClusterIP
kubectl create -f clusterservice.yml
kubectl create -f pod.yml
root@alpine:# curl ingress-nginx
Check the endpoints: kubectl describe svc/<svc-name>
(clusterip-service.yml and pod.yml are the same manifests shown above)
Kubernetes
Services: NodePort
• NodePort opens a specific port on all the nodes in the cluster and forwards any traffic that is received on this port to the internal service.
• Useful when front-end pods are to be exposed outside the cluster for users to access them.
• NodePort is built on top of the ClusterIP service, exposing the ClusterIP service outside of the cluster.
• The NodePort must be within the port range 30000-32767.
• If you don't specify this port, a random port will be assigned. It is recommended to let K8s auto-assign it.

Kubernetes
NodePort
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080

Kubernetes
NodePort
[Diagram]

Kubernetes
Multi Instances on the same node
[Diagram]

Kubernetes
Multi Instances across the cluster
[Diagram]

Kubernetes
NodePort
• The application can be reached from any of the available nodes in the cluster using <node-ip>:<node-port>

Kubernetes
NodePort
[Diagram: an external client reaching NodePort 30001 on node 192.168.0.2; traffic is forwarded to the NodePort service (10.105.32.217) on port 80 and then to the pod (10.244.1.66) on targetPort 80]

# nodeport-service.yml
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30001
  selector:
    app: nginx-frontend

# pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-frontend
  labels:
    app: nginx-frontend
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80

kubectl create -f nodeport-service.yml
kubectl create -f pod.yml
Kubernetes
Demo: NodePort
kubectl create -f nodeport-service.yml
kubectl create -f pod.yml
kubectl get services

Kubernetes
Demo: NodePort
kubectl describe service <service-name>

Kubernetes
Demo: NodePort
kubectl get nodes -o wide

http://192.168.0.107:30001

Kubernetes
NodePort Limitations
• With a NodePort service, users can access the application using the URL http://<node-ip>:<node-port>
• In a production environment, we do not want the users to have to type in the IP address every time to access the application.
• So we configure a DNS server to point to the IPs of the nodes. Users can now access the application using the URL http://xyz.com:30001
• Now, we don't want the user to have to remember the port number either.
• However, a NodePort service can only allocate high-numbered ports, greater than 30,000.
• So we deploy a proxy server between the DNS server and the cluster that proxies requests on port 80 to port 30001 on the nodes.
• We then point the DNS to the proxy server's IP.

spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 5000
    nodePort: 30001
    protocol: TCP

Note: NodePort 30001 is being used only for the demo. You can configure this port number in the service manifest file or let K8s auto-assign it for you.
Kubernetes
Services: LoadBalancer
• A LoadBalancer service is the standard way to expose a Kubernetes service to the internet.
• On GKE (Google Kubernetes Engine), this will spin up a Network Load Balancer that gives you a single IP address that forwards all external traffic to your service.
• All traffic on the port you specify will be forwarded to the service.
• There is no filtering, no routing, etc. This means you can send almost any kind of traffic to it, like HTTP, TCP, UDP or WebSockets.
• A few limitations with LoadBalancer:
 Every service exposed gets its own IP address
 It gets very expensive to have an external IP for each service (application)

Kubernetes
Services: LoadBalancer
• On Google Cloud, AWS, or Azure, a service type of LoadBalancer in the service manifest file will immediately provision an elastic/cloud load balancer that assigns an external (public) IP to your application.
• But for on-prem or bare-metal K8s clusters, this functionality is not available.
• Using the LoadBalancer service type on bare metal will not assign any external IP and the service resource will remain in the Pending state forever.

spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP

https://collabnix.com/3-node-kubernetes-cluster-on-bare-metal-system-in-5-minutes/

Kubernetes
Services: LoadBalancer
apiVersion: v1
kind: Service
metadata:
  name: lb-service
  labels:
    app: hello
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP

Kubernetes
GCP LoadBalancer
[Diagram: www.flask-app.com is resolved by a DNS server to the GCP Load Balancer external IP 10.12.16.22, which forwards traffic to the application running on Google Kubernetes Engine (GKE)]

A few limitations with LoadBalancer:
• Every service exposed gets its own public IP address
• It gets very expensive to have a public IP for each service

Kubernetes
GCP LoadBalancer Cons
[Diagram: www.connected-city.com, www.connected-factory.com and www.connected-tools.com each resolve to their own GCP Load Balancer external IP (3 services = 3 public IPs); each load balancer forwards to its own service (connected-city, connected-factory, connected-tools) and pods inside the Kubernetes cluster]
Kubernetes
LoadBalancer
• The application can be reached using the external IP assigned by the LoadBalancer.
• The LoadBalancer forwards the traffic to the available nodes in the cluster on the nodePort assigned to the service.

Kubernetes
GCP LoadBalancer Cons
• Every service exposed gets its own IP address.
• It gets very expensive to have an external IP for each service (application).

Kubernetes
Cloud LoadBalancer: Cons
• Every service exposed gets its own IP address.
• It gets very expensive to have an external IP for each service (application).
• We see two LoadBalancers, each having its own IP. If we send a request to LoadBalancer 22.33.44.55 it gets redirected to our internal service-nginx. If we send the request to 77.66.55.44 it gets redirected to our internal service-python.
• This works great! But IP addresses are rare and LoadBalancer pricing depends on the cloud provider. Now imagine we don't have just two but many more internal services for which we would like to create LoadBalancers; costs would scale up.
• Might there be another solution which allows us to use only one LoadBalancer (with one IP) and still route traffic to multiple internal services?

Kubernetes
LoadBalancer vs Ingress (Application Load Balancer)
LoadBalancer:
• Public IPs aren't cheap
• An ALB can only handle a limited number of IPs
Ingress:
• Ingress acts as an internal LoadBalancer
• Routes traffic based on the URL path
• All applications need only one external IP
• SSL termination

Kubernetes
Services: MetalLB Load Balancer
• MetalLB is a load-balancer implementation for bare-metal Kubernetes clusters.
• It allows you to create Kubernetes services of type "LoadBalancer" in bare-metal/on-prem clusters that don't run on cloud providers like AWS, GCP, Azure and DigitalOcean.

https://metallb.universe.tf/
Kubernetes
Ingress Resources (rules)
• With cloud LoadBalancers, we need to pay for each service that is exposed using LoadBalancer as the service type. As services grow in number, the complexity of managing SSL, scaling, auth etc. also increases.
• Ingress allows us to manage all of the above within the Kubernetes cluster with a definition file that lives alongside the rest of your application deployment files.
• An Ingress controller can perform load balancing, auth, SSL and URL/path-based routing configurations by living inside the cluster as a Deployment or a DaemonSet.
• Ingress helps users access the application using a single externally accessible URL that you can configure to route to different services within your cluster based on the URL path, while at the same time terminating SSL/TLS.

[Diagram: www.smartfactory.com routed to service 1, www.smartcity.com routed to service 2]

https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress

Kubernetes
Why SSL Termination at the LoadBalancer?
• SSL termination/offloading represents the end or termination point of an SSL connection.
• SSL termination at the LoadBalancer decrypts and verifies data on the load balancer instead of the application server. Unencrypted traffic is sent between the load balancer and the backend servers.
• It is desired because decryption is resource- and CPU-intensive.
• Putting the decryption burden on the load balancer enables the server to spend processing power on application tasks, which helps improve performance.
• It also simplifies the management of SSL certificates.

Kubernetes
Ingress Controller
• Ingress resources cannot do anything on their own. We need to have an Ingress controller in order for the Ingress resources to work.
• The Ingress controller implements the rules defined by Ingress resources.
• Ingress controllers don't come with the standard Kubernetes binary; they have to be deployed separately.
• Kubernetes currently supports and maintains the GCE and nginx ingress controllers.
• Other popular controllers include Traefik, HAProxy ingress, Istio, Ambassador etc.
• Ingress controllers are to be exposed outside the cluster using NodePort or with a cloud-native LoadBalancer.
• Ingress is most useful if you want to expose multiple services under the same IP address.
• An Ingress controller can perform load balancing, auth, SSL and URL/path-based routing configurations by living inside the cluster as a Deployment or a DaemonSet.
https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/

Kubernetes
Ingress Controller
[Diagram: requests for www.connected-city.com, www.connected-factory.com and www.connected-tools.com enter the cluster through a NodePort (30001) or cloud LB and reach the Ingress controller (NGINX, GCE LB, Traefik, Contour, Istio, others), which applies the Ingress rules and routes to the connected-city service, the connected-factory service, or a default service serving custom 404 pages, and from there to the respective pods]
Kubernetes
Nginx Ingress Controller
• ingress-nginx is an Ingress controller for Kubernetes that uses NGINX as a reverse proxy and load balancer.
• Officially maintained by the Kubernetes community.
• Routes requests to services based on the request host or path, centralizing a number of services into a single entry point.
  Ex: www.mysite.com or www.mysite.com/stats

[Diagram: www.smartfactory.com routed to service 1, www.smartcity.com routed to service 2]

Deploy the Nginx Ingress Controller:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-0.32.0/deploy/static/provider/baremetal/deploy.yaml

Kubernetes
Ingress Rules

# ingress-rules.yml - path-based routing
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-rules
spec:
  rules:
  - host:
    http:
      paths:
      - path: /nginx
        backend:
          serviceName: nginx-service
          servicePort: 80
      - path: /flask
        backend:
          serviceName: flask-service
          servicePort: 80

# ingress-rules.yml - host-based routing
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-rules
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: nginx-app.com
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
  - host: flask-app.com
    http:
      paths:
      - backend:
          serviceName: flask-service
          servicePort: 80

The Ingress controller executes these ingress rules by comparing them with the requested URL in the HTTP header.
Kubernetes
Demo: Ingress
• 3-VM K8s cluster + 1 VM for a reverse proxy
• Deploy the Ingress controller
• Deploy pods
• Deploy services
• Deploy Ingress rules
• Configure the external reverse proxy
• Update DNS names
• Access the applications using the URLs
  • connected-city.com
  • connected-factory.com

Kubernetes
Demo: Ingress Architecture
HAProxy server (10.11.3.5) configuration: the backends are the K8s cluster node IPs with the ingress controller NodePort
server 192.168.0.101:30001
server 192.168.0.102:30001
server 192.168.0.103:30001

[Diagram: the DNS server resolves connected-city.com to the HAProxy server, which forwards requests to the Kubernetes cluster nodes]

Kubernetes
Demo: Ingress
1. Deploy the Nginx Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-0.32.0/deploy/static/provider/baremetal/deploy.yaml

2. Deploy the pods and services: kubectl apply -f <object>.yml

# Application-1: ClusterIP service + Deployment
apiVersion: v1
kind: Service
metadata:
  name: connectedcity-service
spec:
  ports:
  - port: 80
    targetPort: 5000
  selector:
    app: connectedcity
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: connectedcity-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: connectedcity
  template:
    metadata:
      labels:
        app: connectedcity
    spec:
      containers:
      - name: …

# Application-2: ClusterIP service + Deployment with the same structure, using "connectedfactory" in place of "connectedcity"

Kubernetes
Demo: Ingress
3. Deploy the ingress rules manifest file
• Host-based routing rules
• Connects to the respective service depending on the host header

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-rules
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: connected-city.com
    http:
      paths:
      - backend:
          serviceName: connectedcity-service
          servicePort: 80
  - host: connected-factory.com
    http:
      paths:
      - backend:
          serviceName: connectedfactory-service
          servicePort: 80

kubectl apply -f <object>.yml

Kubernetes
Demo: Ingress
4. Deploy the HAProxy load balancer (10.11.3.5)
• Provision a VM
• Install HAProxy using the package manager:
 apt install haproxy -y
• Restart the HAProxy service after modifying the configuration:
 systemctl stop haproxy
 add the configuration to /etc/haproxy/haproxy.cfg
 systemctl start haproxy && systemctl enable haproxy

Kubernetes
Demo: Ingress
5. Update dummy DNS entries
Both DNS names should point to the IP of the HAProxy server.

Windows: C:\Windows\System32\drivers\etc\hosts
192.168.0.105 connected-city.com
192.168.0.105 connected-factory.com
ipconfig /flushdns

Linux: /etc/hosts
192.168.0.105 flask-app.com

Kubernetes
Demo: Ingress
6. Access the applications through their URLs:
connected-city.com
connected-factory.com

Kubernetes
Ingress using a Network LB
[Diagram]
Kubernetes
Dashboard
• Default login access to the dashboard is by using a token or a kubeconfig file. This can be bypassed for internal testing, but that is not recommended in production.
• Uses NodePort to expose the dashboard outside the Kubernetes cluster.
• Change the service to ClusterIP and use it in conjunction with ingress resources to make it accessible through a DNS name (similar to the previous demos).

https://github.com/kunchalavikram1427/kubernetes/blob/master/dashboard/insecure-dashboard-nodeport.yaml
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
https://devblogs.microsoft.com/premier-developer/bypassing-authentication-for-the-local-kubernetes-cluster-dashboard/
Kubernetes
Volumes
• By default, container data is stored inside the container's own file system.
• Containers are ephemeral in nature. When they are destroyed, the data inside them gets deleted.
• Also, when running multiple containers in a pod it is often necessary to share files between those containers.
• In order to persist data beyond the lifecycle of a pod, Kubernetes provides volumes.
• A volume can be thought of as a directory which is accessible to the containers in a pod.
• The medium backing a volume and its contents are determined by the volume type.

Types of Kubernetes volumes
There are different types of volumes you can use in a Kubernetes pod:
 Node-local volumes (emptyDir and hostPath)
 Cloud volumes (e.g., awsElasticBlockStore, gcePersistentDisk, and azureDiskVolume)
 File-sharing volumes, such as Network File System (NFS)
 Distributed file systems (e.g., CephFS and GlusterFS)
 Special volume types such as PersistentVolumeClaim, secret and configMap

Kubernetes
emptyDir
• An emptyDir volume is first created when a pod is assigned to a node.
• It is initially empty and has the same lifetime as the pod.
• emptyDir volumes are stored on whatever medium is backing the node; that might be disk, SSD, network storage or RAM.
• Containers in the pod can all read and write the same files in the emptyDir volume.
• This volume can be mounted at the same or different paths in each container.
• When a pod is removed from a node for any reason, the data in the emptyDir is deleted forever.
• Mainly used to store cache or temporary data to be processed.

[Diagram: a pod (10.244.0.22) with container 1 and container 2 sharing an emptyDir volume]
Kubernetes
emptyDir
kubectl apply -f emptyDir-demo.yml
kubectl exec -it pod/emptydir-pod -c container-2 -- cat /cache/date.txt
kubectl logs pod/emptydir-pod -c container-2

# emptyDir-demo.yml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-pod
  labels:
    app: busybox
    purpose: emptydir-demo
spec:
  volumes:
  - name: cache-volume
    emptyDir: {}
  containers:
  - name: container-1
    image: busybox
    command: ["/bin/sh","-c"]
    args: ["date >> /cache/date.txt; sleep 1000"]
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  - name: container-2
    image: busybox
    command: ["/bin/sh","-c"]
    args: ["cat /cache/date.txt; sleep 1000"]
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
Kubernetes
hostPath
• This type of volume mounts a file or directory from the host node's filesystem into your pod.
• The hostPath directory refers to a directory created on the node where the pod is running.
• Use it with caution, because when pods are scheduled on multiple nodes, each node gets its own hostPath storage volume. These may not be in sync with each other, and different pods might be using different data.
• Let's say the pod with the hostPath configuration is deployed on worker node 2. Then the host refers to worker node 2, so any hostPath location mentioned in the manifest file refers to worker node 2 only.
• When the node becomes unstable, the pods might fail to access the hostPath data.
https://kubernetes.io/docs/concepts/storage/volumes/#hostpath

[Diagram: a pod (10.244.0.22) with two containers mounting the /data directory of the node it runs on]

Kubernetes
hostPath
kubectl apply -f hostPath-demo.yml
kubectl logs pod/hostpath-pod -c container-1
kubectl exec -it pod/hostpath-pod -c container-1 -- ls /cache

# hostPath-demo.yml
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-pod
spec:
  volumes:
  - name: hostpath-volume
    hostPath:
      path: /data
      type: DirectoryOrCreate
  containers:
  - name: container-1
    image: busybox
    command: ["/bin/sh","-c"]
    args: ["ls /cache ; sleep 1000"]
    volumeMounts:
    - mountPath: /cache
      name: hostpath-volume
sPersistent Volume and Persistent Volume Claim
• Managing storage is a distinct problem inside a cluster. You cannot rely on emptyDir or
hostPath for persistent data.
• Also providing a cloud volume like EBS, AzureDisk often tends to be complex because of
complex configuration options to be followed for each service provider
• To overcome this, PersistentVolume subsystem provides an API for users and administrators
that abstracts details of how storage is provided from how it is consumed. To do this, K8s
offers two API resources: PersistentVolume and PersistentVolumeClaim.
Kubernete
sPersistent Volume and Persistent Volume Claim
Persistent volume (PV)
• A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by
an administrator or dynamically provisioned using Storage Classes(pre-defined
provisioners and parameters to create a Persistent Volume)
• Admin creates a pool of PVs for the users to choose from
• It is a cluster-wide resource used to store/persist data beyond the lifetime of a pod
• PV is not backed by locally-attached storage on a worker node but by networked storage system
such as Cloud providers storage or NFS or a distributed filesystem like Ceph or GlusterFS
• Persistent Volumes provide a file system that can be mounted to the cluster, without being
associated with any particular node
Kubernete
sPersistent Volume and Persistent Volume Claim
Persistent Volume Claim (PVC)
• In order to use a PV, user need to first claim it using a PVC
• PVC requests a PV with the desired specification (size, speed, etc.) from Kubernetes and then binds it
to a
resource(pod, deployment…) as a volume mount
• User doesn’t need to know the underlying provisioning. The claims must be created in the same
namespace
where the pod is created.

https://www.learnitguide.net/2020/03/kubernetes-persistent-volumes-and-claims.html
Kubernetes
Persistent Volumes and Persistent Volume Claims
[Diagram]
https://www.learnitguide.net/2020/03/kubernetes-persistent-volumes-and-claims.html
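A minimal sketch of a PV, a PVC and a pod that mounts the claim, assuming a simple hostPath-backed volume; the names, size and paths are illustrative:

# pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/demo-pv
---
# pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# pod using the claim
apiVersion: v1
kind: Pod
metadata:
  name: app-with-pvc
spec:
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: demo-pvc
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: data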
Kubernetes
Using PVCs in Deployments
[Diagram: Kubernetes cluster]

Kubernetes
Using PVCs in GKE
https://medium.com/google-cloud/introduction-to-docker-and-kubernetes-on-gcp-with-hands-on-configuration-part-3-kubernetes-with-eb41f5fc18ae

Kubernetes
Logs
kubectl logs my-pod                                # dump pod logs (stdout)
kubectl logs -l name=myLabel                       # dump pod logs with label name=myLabel (stdout)
kubectl logs my-pod -c my-container                # dump pod container logs (stdout, multi-container case)
kubectl logs -l name=myLabel -c my-container       # dump pod container logs with label name=myLabel (stdout)
kubectl logs -f my-pod                             # stream pod logs (stdout)
kubectl logs -f my-pod -c my-container             # stream pod container logs (stdout, multi-container case)
kubectl logs -f -l name=myLabel --all-containers   # stream all pod logs with label name=myLabel (stdout)
kubectl logs my-pod -f --tail=1                    # stream last line of pod logs
kubectl logs deploy/<deployment-name>              # dump deployment logs

Kubernetes
Interaction with pods
kubectl run -i --tty busybox --image=busybox -- sh             # Run pod as an interactive shell
kubectl run nginx -it --image=nginx -- bash                    # Run pod nginx interactively
kubectl run nginx --image=nginx --dry-run -o yaml > pod.yaml   # Write the nginx pod spec into a file called pod.yaml
kubectl attach my-pod -i                                       # Attach to a running container
kubectl exec my-pod -- ls /                                    # Run a command in an existing pod (1-container case)
kubectl exec my-pod -c my-container -- ls /                    # Run a command in an existing pod (multi-container case)
Schedulin
Kubernete
sScheduling
• Kubernetes users normally don’t need to choose a node to which their Pods should be scheduled
• Instead, the selection of an appropriate node is handled automatically by the Kubernetes scheduler
• Automatic node selection prevents users from selecting unhealthy nodes or nodes with a shortage of
resources
• However, sometimes manual scheduling is needed to ensure that certain pods are only scheduled on
nodes with specialized hardware like SSD storage, to co-locate services that communicate
frequently (e.g., within an availability zone), or to dedicate a set of nodes to a particular set of users
• Kubernetes offers several ways to manually schedule pods. In all cases, the recommended
approach is to use label selectors to make the selection
• Manual scheduling options include:
1. nodeName
2. nodeSelector
3. Node affinity
4. Taints and Tolerations
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
Kubernetes
Scheduling
nodeName
• nodeName is a field of PodSpec
• nodeName is the simplest form of node selection constraint, but due to its limitations it is
typically not used
• When the scheduler finds no nodeName property, it adds one automatically by assigning the pod to an
available node
• To manually assign a pod to a node, set the nodeName property to the desired node name
• Pods can also be scheduled on the Master node by this method
• Some of the limitations of using nodeName to select nodes are:
 If the named node does not exist, the pod will not be run, and in some cases may be
automatically deleted
 If the named node does not have the resources to accommodate the pod, the pod will fail
and its reason will indicate why, for example OutOfmemory or OutOfcpu
Kubernetes
Scheduling
nodeName

nodeName.yml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
  nodeName: k8s-master

kubectl apply -f nodeName.yml
Kubernetes
Scheduling
nodeSelector
• nodeSelector is a field of PodSpec
• It is the simplest recommended form of node selection constraint
• It uses labels (key-value pairs) to select matching nodes onto which pods can be scheduled
• The disadvantage of nodeSelector is that it is a hard requirement: if no matching node is
available, the pod remains in Pending state!

Check default node labels

kubectl describe node <node-name>
Kubernetes
Scheduling
nodeSelector

Add labels to nodes

kubectl label nodes <node-name> <label-key>=<label-value>
kubectl label nodes k8s-slave01 environment=dev

Delete a label:

kubectl label node <node-name> <label-key>-
Kubernetes
Scheduling
nodeSelector

nodeSelector.yml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    environment: dev

kubectl apply -f nodeSelector.yml
kubectl get pods -o wide --show-labels
kubectl describe pod <pod-name>
Kubernetes
Scheduling
Node Affinity
• Node affinity is specified as the field nodeAffinity (under affinity) in PodSpec
• Node affinity is conceptually similar to nodeSelector – it allows you to constrain which nodes a pod
can be scheduled on, based on node labels. But it has a few key enhancements:
• nodeAffinity is more expressive. The language offers more matching rules besides the exact
matches combined with a logical AND that nodeSelector provides
• Rules can be soft preferences rather than hard requirements, so if the scheduler can’t find a
node with matching labels, the pod can still be scheduled on other nodes
There are currently two types of node affinity rules:
1. requiredDuringSchedulingIgnoredDuringExecution: Hard requirement like nodeSelector. No
matching node label, no pod scheduling!
2. preferredDuringSchedulingIgnoredDuringExecution: Soft requirement. No matching node label, pod gets
scheduled on other nodes!
The IgnoredDuringExecution part indicates that if labels on a node change at runtime such that the
affinity rules on a pod are no longer met, the pod will still continue to run on the node.
Kubernetes
Scheduling
Node Affinity

nodeAffinity.yml

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: environment
            operator: In
            values:
            - prod
  containers:
  - name: nginx-container
    image: nginx

kubectl apply -f nodeAffinity.yml

• The pod gets scheduled on a node that has the label environment=prod
• If none of the nodes has this label, the pod remains in Pending state
• To avoid this, use preferredDuringSchedulingIgnoredDuringExecution (see the sketch below)
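A minimal sketch of the soft variant, reusing the same label key; the weight of 1 is an arbitrary choice (valid weights range from 1 to 100):

spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1                  # relative preference, 1-100
        preference:
          matchExpressions:
          - key: environment
            operator: In
            values:
            - prod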
Kubernetes
Scheduling
Taints and Tolerations
• Node affinity is a property of Pods that attracts them to a set of nodes (either as a
preference or a hard requirement)
• Taints are the opposite – they allow a node to repel a set of pods
• Taints are applied to nodes (the lock)
• Tolerations are applied to pods (the key)
• In short, a pod must tolerate a node’s taint in order to run on it. It’s like the pod holding the
correct key to unlock the node and enter it
• Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate
nodes
(Diagram: Pod and K8s Node)
Kubernetes
Scheduling
Taints and Tolerations
• By default, the Master node is tainted, so you cannot deploy any pods on the Master
• To check the taints applied on any node, use kubectl describe node <node-name>

Apply a taint to a node

kubectl taint nodes <node-name> key=value:<taint-effect>
• A taint’s key and value can be any arbitrary strings
• The taint effect must be one of the supported taint effects:
1. NoSchedule: no pod will be able to schedule onto the node unless it has a
matching toleration
2. PreferNoSchedule: soft version of NoSchedule. The system will try to avoid
placing a pod that does not tolerate the taint on the node, but it is not required
3. NoExecute: the node controller will immediately evict all Pods without a matching
toleration from the node, and new pods will not be scheduled onto the node
Kubernetes
Scheduling
Taints and Tolerations
Apply a taint to a node

kubectl taint nodes k8s-slave01 env=stag:NoSchedule

• In the above case, node k8s-slave01 is tainted with key=value env=stag and the taint
effect NoSchedule. Only pods that tolerate this taint will be scheduled onto this node

Check taints on nodes

kubectl describe node k8s-slave01 | grep -i taint
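A quick note on removing a taint: appending a dash to the same taint specification removes it.

kubectl taint nodes k8s-slave01 env=stag:NoSchedule-   # the trailing "-" removes the taint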
Kubernetes
Scheduling
Taints and Tolerations

Apply tolerations to pods

taint_toleration.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx-container
        image: nginx
      tolerations:
      - key: "env"
        operator: "Equal"       # toleration completed here to match the taint env=stag:NoSchedule applied earlier
        value: "stag"
        effect: "NoSchedule"

kubectl apply -f taint_toleration.yml
kubectl get pods -o wide

• Here pods are scheduled onto both slave nodes.
• Only slave01 is tainted, and matching tolerations are added to the pods, so pods are scheduled onto
slave01 as well.
• If we remove the tolerations from the pods and redeploy them, they will get scheduled onto slave02
only, as slave01 is tainted and a matching toleration is no longer available on the pods!
References

Docker Installation on Ubuntu
• https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-18-04
K3s Installation
• https://k33g.gitlab.io/articles/2020-02-21-K3S-01-CLUSTER.html
• https://medium.com/better-programming/local-k3s-cluster-made-easy-with-multipass-108bf6ce577c
Kubernetes 101
• https://medium.com/google-cloud/kubernetes-101-pods-nodes-containers-and-clusters-c1509e409e16
• https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_run/
Kubeadm
• https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
K3s
• https://rancher.com/docs/
Kubectl commands
• https://kubernetes.io/docs/reference/kubectl/cheatsheet/
• https://kubernetes.io/docs/reference/kubectl/overview/
Deployments
• https://www.bmc.com/blogs/kubernetes-deployment/
• https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
Services
• https://kubernetes.io/docs/concepts/services-networking/service/#headless-services
• https://www.edureka.co/community/19351/clusterip-nodeport-loadbalancer-different-from-each-other
• https://theithollow.com/2019/02/05/kubernetes-service-publishing/
• https://www.ovh.com/blog/getting-external-traffic-into-kubernetes-clusterip-nodeport-loadbalancer-and-ingress/
• https://medium.com/@JockDaRock/metalloadbalancer-kubernetes-on-prem-baremetal-loadbalancing-101455c3ed48
Ingress
• https://medium.com/@cashisclay/kubernetes-ingress-82aa960f658e
• https://www.youtube.com/watch?v=QUfn0EDMmtY&list=PLVSHGLlFuAh89j0mcWZnVhfYgvMmGI0lF&index=18&t=0s
K8s Dashboard
• https://github.com/kubernetes/dashboard
• https://github.com/indeedeng/k8dash