Final Kubernetes Merged Notes
Kubernetes (k8s)
Kubernetes is a container cluster manager used for managing, monitoring and scaling
containerized applications in a clustered environment.
advantages (of the monolithic approach):-
1. everyone in the team knows all the modules/functional areas of the system, since
everyone works end-to-end on the project
2. easy to build, package and deploy the application
3. scalability can be achieved easily
dis-advantages:-
1. since an enterprise application comprises a lot of functionalities or modules
and is built into a single sourcecode project, the developers often find it very
complex to work with the entire system.
2. since the whole application is built into one single project
2.1 building the project takes a lot of time
2.2 IDEs are overloaded with a lot of sourcecode files
2.3 application servers take more time to start up
-==================================================================================
=====================================================
03-02-2023
Kubernetes
Kubernetes is a container cluster manager that takes care of running, monitoring
and managing containers on a cluster network of machines
Characteristics (of a monolithic application):
1. The enterprise application, with all its modules and functionalities, is built
into one single sourcecode project
2. The team of developers works across all the modules of the project
3. A single sourcecode repository is used for versioning and collaborating on the
development
4. only one single deployable artifact is produced out of the build process
5. The application is deployed on an enterprise application server
advantages:
1. anyone in the project can work on any of the modules of the system
2. easy to build, package and deploy the application
3. scalability is easy to achieve in monolithic applications
dis-advantages:-
1. since the application is very huge and is built into one single sourcecode
project, developers often find it very complex to understand
and work on the entire system
2. because of the huge sourcecode, the integrated development environments (IDEs)
get overloaded and quickly become un-responsive
3. building and packaging the application becomes heavy and takes more time, due
to which development will be impacted
4. application servers take more time in deploying and starting the application
because of its large size, due to which
the productivity of the developer is degraded
===================================================================================
==============================================================
04-02-2023
Characteristics (of microservices):-
1. Each Microservice application is built into its own sourcecode project,
independent of the other modules/services of the system
2. Each Microservice application/project has its own sourcecode repository
3. Each Microservice has its own database schema into which it performs
persistence operations
4. Each Microservice application is built into an independently deployable
artifact that is deployed on its own server runtime
5. Each Microservice is built by a team independent from the other teams
===================================================================================
========================================================
05-02-2023
Kubernetes (k8s)
Kubernetes is a container cluster manager; it takes care of distributing
containerized applications across a cluster of computers, and of
scheduling, monitoring and scaling up the containerized applications
Kubernetes Architecture
------------------------
Kubernetes is a container cluster manager that takes care of scheduling, monitoring
and managing the containerized applications on a
network cluster of machines
Kubernetes has 4 major components (a quick way to see them on a running cluster is shown after the list below)
1. Master Node
2. Worker Node
3. Kubectl
4. etcd
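On a running cluster these components can be seen with kubectl; a minimal check, assuming kubectl is already configured to talk to the cluster:
kubectl get nodes                 # lists the master/worker nodes of the cluster
kubectl get pods -n kube-system   # shows control-plane pods such as the api server, scheduler and etcd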
===================================================================================
===================================================================
06-02-2023
Kubernetes (k8s)
Kubernetes is a container cluster manager that takes care of scheduling,
monitoring and managing containers over a cluster of machines.
1.2 Scheduler
upon receiving and validating the request, the api manager hands the request over
to the Scheduler for further processing.
The Scheduler is responsible for scheduling a pod for execution on the worker
nodes of the cluster. The Scheduler talks
to the kubelet process that is running on each worker node of the cluster and checks
whether the workernode has enough capacity available
for running the pod or not; if not, it goes to the next worker node of the
cluster until it finds one
upon identifying a workernode with enough capacity, the scheduler hands the request
over to the kubelet process of that workernode,
asking it to bring up the pod on that node by allocating the requested resources
2. DaemonSet
DaemonSet ensures a pod is running on all the nodes of the cluster. In case a new
workernode has been added to the cluster,
the daemonset ensures the pod is brought up onto the new worker node as well.
3. Deployment
The Deployment controller helps us in upgrading or patching the old pods with newer
versions by supporting different deployment strategies
4. Service
A service is a controller that discovers the running pods on the nodes of the
cluster based on labels and registers
with them, so that it can distribute requests to the pod application by
loadbalancing.
5. Job
The Job controller helps us in running a job or script on a node of the cluster to
perform a one-time operation
===================================================================================
=================================================
07-02-2023
Kubernetes (k8s)
Kubernetes is a container cluster manager that takes care of scheduling,
monitoring and managing containers on a cluster of computers.
1.2 Scheduler
upon receiving the request, the api manager forwards the request to the scheduler.
The scheduler does the job of
communicating with the kubelet process of the workernodes across the cluster to
identify a workernode suitable
for running the pod, and it hands the job of bringing up the pod on that
workernode over to the kubelet process
2. kubelet
kubelet is a process that runs on each workernode of the cluster. The kubelet
process acts as an agent,
letting the controlplane or master interact with the workernodes of the cluster.
The Master Node schedules a pod for execution on the cluster by handing the job
over to the kubelet process only.
Additionally the kubelet process performs various activities like
1. kubelet gathers the information about the running pods and their status and
reports them to the control plane or master whenever requested
2. The job of running the pods and bringing them up on a workernode is taken care
of by the kubelet process only
3. through the help of the kubelet process, the control plane can determine which
workernode is suitable for running or scheduling a pod for execution
3. kubeproxy
kubeproxy enables the traffic from the external network to reach a pod in the cluster
#3. Kubectl
Kubectl is a cli tool provided by kubernetes through which we can communicate
with the controlplane or master.
it helps us in administering, monitoring and managing the kubernetes cluster
#4 etcd
etcd is a key/value database in which the information of all the kubernetes objects
is stored
===================================================================================
============================================
08-02-2023
The aws cloud platform provides a service called EKS, which stands for "Elastic
Kubernetes Service";
it is a managed service provided by the aws platform to host kubernetes on aws.
The AWS cloud platform itself takes care of provisioning the master/control plane
and workernodes, and
sets up the cluster with a CNI network and installs the necessary components like
the containerization engine, kubelet, kubeproxy etc
There are a lot of advantages of using an EKS cluster over an on-premise cluster
1. we dont need to manually set up and configure the kubernetes cluster; rather,
with the click of a button
AWS itself takes care of provisioning kubernetes for us
2. monitoring the kubernetes cluster and keeping track of its health, and in case
any of the workernodes
crash, replacing them with healthy nodes, is taken care of by
aws itself
3. high availability of the cluster is guaranteed since the workernodes
are distributed across
the availability zones of the vpc; in addition we can take advantage of making
the application accessible to customers with low network latency
4. the AWS cloud platform itself takes care of scaling out/in the cluster
capacity/size based on the
load of the cluster so that we never run out of cluster capacity
5. the HA of the master/control plane is taken care of by the aws cloud platform
itself
3. install vscode
download the vscode binary ".deb" from the vscode downloads, by default it will be
downloaded into ~/Downloads directory
cd ~/Downloads
sudo apt install -f ./code_....deb
4. docker
4.1
sudo apt install -y ca-certificates curl gnupg lsb-release
4.6 grant sriman access to the docker by adding him to the docker group
sudo usermod -aG docker $USER
exit the terminal and re-enter or restart
6. create aws credentials using the awscli tool to access the cloud account resources
using the cli
aws configure
prompts for
access key id:
secret access key:
region: ap-south-1
output format: none
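to confirm the credentials are working, we can ask the cli who it is authenticated as (a quick sanity check):
aws configure list               # shows the configured access key, region and output format
aws sts get-caller-identity      # returns the account id and IAM user arn if the keys are valid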
===================================================================================
==================================================
10-02-2023
2.2
for each docker image we want to publish we need to create one repository in ecr
2.3 to publish or pull images from these repositories we need to login into the ecr
registry, and in case of a private repository
we need to have AmazonEC2ContainerRegistryFullAccess to login or pull or push images
2.4 goto the IAM user we have set up earlier and attach a policy at either the user
level or group level with policy: AmazonEC2ContainerRegistryFullAccess
2.5 goto elastic container registry and click on Get Started or create repository
1. private repository and enter a name for the repository
2. go into the repository and click on View Push Commands for login, pull and push
instructions
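the push commands shown in the console typically look like the following (a sketch; the account id and repository name below are placeholders, not values from these notes):
aws ecr get-login-password --region ap-south-1 | docker login --username AWS --password-stdin <account-id>.dkr.ecr.ap-south-1.amazonaws.com
docker tag sailor:1.0 <account-id>.dkr.ecr.ap-south-1.amazonaws.com/sailor:1.0
docker push <account-id>.dkr.ecr.ap-south-1.amazonaws.com/sailor:1.0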
===================================================================================
==========================================================
12-02-2023
There are 3 topologies from which we can choose one when setting up the eks
cluster
1. both master/workernodes on public subnets = not recommended, since the whole
cluster is exposed to the world and poses a security risk
2. master on public subnet and workernodes on private subnets = usually used in
organizations,
allowing a team of people to manage the cluster directly (not recommended for
production usage)
3. both master/workernodes on private subnets only = highly recommended for
production usage
#2. create 2 public subnets and 2 private subnets, the public subnets for the control plane
and the private subnets for the workernodes
we need to create 2 of each (public/private) across the azs of the
region, to ensure high availability
2.1 hondaekspubsn1, 10.0.1.0/24
2.2 hondaekspubsn2, 10.0.2.0/24
2.3 hondaeksprvsn3, 10.0.3.0/24
2.4 hondaeksprvsn4, 10.0.4.0/24
#4. route the public network traffic through the internet gateway by creating a
routetable
routetable name: hondaigrt
subnet association: hondaekspubsn1, hondaekspubsn2
route: 0.0.0.0/0 -> hondaeksig
setting up workernodes
1. to set up the workernodes we need to create an IAM Role that will be attached to
the workernodes during provisioning.
goto IAM, choose Roles and create a new role (trusted entity type: EC2)
role name: EKSNodeGroupRole
policies:
1. AmazonEKSWorkerNodePolicy
2. AmazonEC2ContainerRegistryReadOnly
3. AmazonEKS_CNI_Policy
create role
2. goto the eks cluster we have created above and click on add NodeGroup
we can think of a NodeGroup as the equivalent of an ASG. The NodeGroup takes care of
provisioning the workernodes and attaching them to the EKS MasterNode.
here we can specify
1. shape of the workernode (t2.micro)
2. min, max and initial number of workernodes
3. scale-out threshold
4. subnets
The NodeGroup, based on the above configuration, takes care of provisioning and
managing the workers automatically
#3. now add the kubernetes repository to the ubuntu sources.list.d
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
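after adding the repository (and assuming the signing key referenced above was already placed under /etc/apt/keyrings in an earlier step), kubectl can be installed through apt:
sudo apt update
sudo apt install -y kubectl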
1. run the below command in powershell or windows command-prompt, which downloads
minikube.exe and places it under the c:\minikube directory
New-Item -Path 'c:\' -Name 'minikube' -ItemType Directory -Force
Invoke-WebRequest -OutFile 'c:\minikube\minikube.exe' -Uri 'https://github.com/kubernetes/minikube/releases/latest/download/minikube-windows-amd64.exe' -UseBasicParsing
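once minikube.exe is downloaded, add c:\minikube to the PATH and start a local cluster (a minimal sketch, assuming Docker Desktop is available as the driver):
minikube start --driver=docker    # provisions a single-node local kubernetes cluster
minikube status                   # verifies that the cluster components are running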
C:\Users\Sriman>kubectl get po -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS      AGE
kube-system   coredns-787d4945fb-zm2ch           1/1     Running   1 (25m ago)   28m
kube-system   etcd-minikube                      1/1     Running   1 (25m ago)   28m
kube-system   kube-apiserver-minikube            1/1     Running   1 (25m ago)   28m
kube-system   kube-controller-manager-minikube   1/1     Running   1 (25m ago)   29m
kube-system   kube-proxy-znldh                   1/1     Running   1 (25m ago)   28m
kube-system   kube-scheduler-minikube            1/1     Running   1 (25m ago)   28m
kube-system   storage-provisioner                1/1     Running   1 (25m ago)   28m
Kubernetes Dashboard
Kubernetes provides a management console or cluster dashboard
using which we can access all the kubernetes
objects through the web console, like
- namespaces
- pods
- services
- deployments
etc
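on minikube the dashboard can be enabled and opened with a single command (assuming the minikube setup from above):
minikube dashboard    # enables the dashboard addon and opens it in the default browser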
...................................................................................
.....................................................................
13-02-2023
Kubernetes Namespace
--------------------
Namespaces are used for creating naming compartments or logical groupings of
objects/resources on the kubernetes cluster. A kubernetes cluster would be shared
across multiple teams to run various different projects on the cluster, so to avoid
objects/resources of one project/team being accessed by others we use kubernetes
Namespaces
By default, as part of the kubernetes install, there are 4 namespaces created
1. default = every object that is created in the kubernetes cluster is placed
by default under the "default" namespace only. The default namespace is initially
empty. all the users/groups of the kubernetes cluster have access to the default
namespace, and it should be sufficient for most of the usecases
2. kube-system = all the kubernetes system objects like the api manager, scheduler and
controller manager etc are placed under the "kube-system" namespace only
3. kube-public = by default the kube-public ns is empty; if we place any objects within
kube-public, those are accessible publicly to everyone without authentication
4. kube-node-lease = holds the lease objects used by the workernodes to send their heartbeats to the control plane
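a few basic namespace operations with kubectl (a minimal sketch; the namespace name is illustrative):
kubectl get namespaces                  # lists all the namespaces on the cluster
kubectl create namespace airtel         # creates a new namespace called airtel
kubectl get pods -n kube-system         # lists the pods under the kube-system namespace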
-----------------------------------------------------------------------------------
-------------------------------------------------------------------
14-02-2023
2. users = all the users of the different clusters, using which we need to connect to
the cluster
3. contexts = a context is a name given to the combination of cluster, user and
namespace, which is specified to kubectl to be used in
connecting to and managing the cluster
$HOME/.kube/config
------------------
apiVersion: v1
kind: Config
preferences: {}
clusters:
- name: testcluster
  cluster:
    server: http://host:port
    certificate-authority-data: key
- name: stagecluster
  cluster:
    server: http://host:port
    certificate-authority-data: key
users:
- name: testclusteruser
  user:
    client-certificate-data:
    client-key-data:
- name: stageclusteruser
  user:
    client-certificate-data:
    client-key-data:
contexts:
- name: testcontext
  context:
    cluster: testcluster
    user: testclusteruser
    namespace: default
- name: stagecontext
  context:
    cluster: stagecluster
    user: stageclusteruser
    namespace: ns1
current-context: testcontext
How to switch from one cluster to another cluster?
we need to change the current-context attribute in kube config file
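instead of editing the file by hand, kubectl itself can switch the current-context:
kubectl config get-contexts                 # lists all the contexts defined in the kubeconfig
kubectl config use-context stagecontext     # makes stagecontext the current-context
kubectl config current-context              # confirms which context is active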
-----------------------------------------------------------------------------------
-------------------------------------------------------------------
15-02-2023
.kube/config
apiVersion: v1
kind: Config
preferences: {}
clusters:
- name: cluster1
  cluster:
    server: http://host:port
    certificate-authority-data:
- name: cluster2
  cluster:
    server: http://host:port
    certificate-authority-data:
users:
- name: clusteruser1
  user:
    client-certificate-data:
    client-key-data:
- name: clusteruser2
  user:
    client-certificate-data:
    client-key-data:
contexts:
- name: devcontext
  context:
    cluster: cluster1
    user: clusteruser1
    namespace: ns1
- name: testcontext
  context:
    cluster: cluster2
    user: clusteruser2
    namespace: ns2
current-context: devcontext
Kubernetes represents all these resources' information in terms of objects
and persists them in the etcd of the kubernetes cluster. all these objects are created
under the kube-system namespace. The state of the kubernetes cluster is
represented by the type/number of objects on the cluster
The resource spec or manifest is a YAML file in which we describe the information
about the object we want to create in terms of key/value pairs and
pass it as an input to the control plane. The structure and contents of the spec file
are defined by kubernetes itself for each type of resource.
But all of these spec files carry a few common attributes irrespective of the type of
object:
1. apiVersion = which version of the kubernetes object api we are using for creating it
2. kind = type of object
3. metadata = used for defining labels for the object
4. namespace = under which namespace the object should be created
5. spec = the desired state of the object
1. imperative commands:
kubectl provides handful commands to which we can pass arguments to create
various different types of kubernetes objects on the cluster. this avoids writing a
specfile manually when creating objects.
advantages:-
1. we can quickly create an object on the cluster without writing any manifest file
and test it
dis-advantage:-
1. we dont have any spec file in hand; based on the inputs/arguments we passed,
kubectl creates the yml on the fly and passes it to the control plane. if we want to
modify it or reuse it to recreate the object at a later point of time, we dont have the
spec with us (see the example below)
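for example, a pod and a service can be created imperatively, and the generated spec can even be captured for later reuse (a sketch; the names are illustrative):
kubectl run sailorpod --image=techsriman/sailor:1.0 --port=8080          # creates a pod without writing a spec file
kubectl expose pod sailorpod --port=8080 --type=ClusterIP                # creates a service for the pod
kubectl run sailorpod --image=techsriman/sailor:1.0 --port=8080 --dry-run=client -o yaml > sailor-pod.yml   # captures the generated spec instead of applying it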
-----------------------------------------------------------------------------------
--------------------------------------------------------------------
16-02-2023
The resource specs are yml files, and the structure of these yml files is specific
to the type of the object we want to create and is standardized by kubernetes.
we need to follow the structure (key/value pairs) defined by kubernetes
when writing the spec file in order to get it validated by the cluster
There are a few common attributes across all types of object specs, which are
defined as below.
1. apiVersion = defines the specfile version being used in defining the object
2. kind = type of object
3. metadata = used for attaching labels to the object for identification and
retrieval
4. namespace = under which namespace the object should be created
5. spec = specification of the object
How many ways are there of creating these objects on the cluster?
There are 3 ways
1. imperative commands
2. imperative object configuration
3. declarative object configuration
1. imperative commands
kubectl provides a handful of commands taking arguments as input for creating
various different kubernetes objects like pods, services, deployments etc. we dont
need to write the resource specfile when creating these objects
advantages:-
1. quickly create an object on the cluster
dis-advantage:-
1. since we dont have the resource specfile we cannot reuse the object across
environments
Instead, keep all the resources under one directory in the project, like manifests
/ configs
airtelcare2
|-manifests
|-pod.yml
|-deployment.yml
|-service.yml
pass the directory as an input to kubectl, asking it to apply these manifests on
the cluster (as shown below)
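kubectl can apply every manifest in a directory in one shot (assuming the layout above):
kubectl apply -f manifests/                  # creates/updates all the objects described under the manifests directory
kubectl get pods,deployments,services        # verifies the objects were created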
1. Pod
A Pod is the smallest unit or entity in the kubernetes world, in which one or more
containers are kept together and executed. In general we may have multiple
containerized applications that share common dependencies like
1. network
2. resources / file mounts / volumes
3. lifecycle (start/stop)
apache2-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: apache2pod
spec:
  containers:          # defines the containers we want to run inside this pod
  - name: apache2
    image: httpd:latest
    ports:
    - containerPort: 80
      name: http
      protocol: TCP
-----------------------------------------------------------------------------------
-------------------------------------------------------------------
17-02-2023
What is a Pod?
A Pod is the smallest entity or unit within the kubernetes cluster, in which we run
one or more containers together. There can be a few containers that are
dependent on each other in terms of sharing
1. resources
2. file system / volumes
3. lifecycle
rather than having such containers managed independently, we can place them inside
one pod and manage them together easily.
ghp_<github-personal-access-token-redacted>
repositoryName/image:tag
techsriman/sailor:1.0
-----------------------------------------------------------------------------------
---------------------------------------------------------------------
18-02-2023
If the container image is not available locally, then kubernetes pulls the image from
the container registry onto that workernode, creates the pod and runs it. the moment
the pod has been started, kubernetes reports the pod as being in the running state
but the underlying application running inside the pod may not be started yet, or while
running the application might go into an unresponsive state due to resource
availability, stuck threads etc. even then kubernetes reports the pod
status as running and available. kubernetes doesnt know the real/underlying
status of the application that is running inside the pod, due to which it would not be
able to identify such faulty applications
In general kubernetes takes care of replacing a pod that is not working or has
crashed, but in this scenario, since kubernetes doesnt know the status of the
underlying application, it would assume the pod is working and leave it
unresponsive
1. readinessProbe
upon bringing up the pod, kubernetes will perform the readiness check by
periodically hitting or accessing the readiness application endpoint we configured
in the spec file; until the application has reported its availability, kubernetes will
not route requests to that pod instance of the application. once the application
reports its availability, kubernetes marks the pod as ready for scheduling requests
and stops performing the readinessProbe
2. livenessProbe
while the pod is running on the cluster, there is a chance that, due to resource
issues or application failures, the application running inside the pod may become
unresponsive. kubernetes is not aware of such unresponsive pods, so it
routes the incoming requests for the pod application to these unresponsive pods as
well, which eventually leads to failures.
if kubernetes can somehow identify such unresponsive running pods on the
cluster, it can terminate them and recreate another pod on the cluster, which can
be done through the livenessProbe
the application developer has to expose an http endpoint that can be used by
kubernetes to periodically verify whether the application is running/responsive or not.
while writing the podspec file, the developer has to configure the
livenessProbe information so that kubernetes can perform this check for us
The livenessProbe is started only after the readinessProbe has been reported
as successful, and the checks continue until the pod is reported
as failed or is manually terminated
now, while writing the podspec file for running the application as a pod on the kubernetes
cluster, we need to configure both the readinessProbe and livenessProbe configuration
as below.
urotaxi-pod.yml
---------------
apiVersion: v1
kind: Pod
metadata:
  name: urotaxipod
  labels:
    version: "1.0"
spec:
  containers:
  - name: urotaxi
    image: techsriman/urotaxi:1.0
    ports:
    - name: tomcatport
      containerPort: 8080
      protocol: TCP
    readinessProbe:
      httpGet:
        path: /actuator/health/readiness
        port: 8080
      initialDelaySeconds: 5
      timeoutSeconds: 10
      failureThreshold: 3
    livenessProbe:
      httpGet:
        path: /actuator/health/liveness
        port: 8080
      initialDelaySeconds: 5
      timeoutSeconds: 10
      failureThreshold: 3
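after applying this spec, the probe results can be observed on the pod (a minimal sketch):
kubectl apply -f urotaxi-pod.yml
kubectl get pod urotaxipod -w            # READY stays 0/1 until the readinessProbe passes
kubectl describe pod urotaxipod          # the Events section shows readiness/liveness probe failures, if any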
-----------------------------------------------------------------------------------
------------------------------------------------------------------
19-02-2023
Resource declarations
whenever a pod application is running on the kubernetes cluster, it is going to
consume cpu/memory during execution. The amount of cpu/memory it consumes depends
on various factors like
1. hardware capacity of the machine on which we are running the application
2. how much user traffic is coming to the application
3. the amount of data the application is processing
whenever we schedule a pod for execution, the kubernetes scheduler takes care of
identifying an appropriate workernode on the cluster which has sufficient cpu/memory
resources available for running the pod, and hands the pod execution over to the
kubelet process of that node.
So for the kubernetes scheduler to determine the right workernode to be used in
running the pod, we need to define or declare the resource specification as part of
the pod spec file. we define the minimal requirements for running a pod, based on
which the workernode will be chosen; in case the pod application requests
more than the resources declared, the kubelet process will try to accommodate the
resources if available on the worker node.
In case the workernode doesnt have sufficient resource capacity, the pod will be
terminated and rescheduled to execute on a workernode which has appropriate
capacity. so we need to define the resource specification in the pod spec for
execution
roadster-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: roadsterpod
spec:
  containers:
  - name: roadster
    image: techsriman/roadster:1.0
    ports:
    - name: tomcatport
      containerPort: 8080
      protocol: TCP
    readinessProbe:
      httpGet:
        path: /roadster/actuator/health/readiness
        port: 8080
      initialDelaySeconds: 10
      timeoutSeconds: 10
      failureThreshold: 3
    livenessProbe:
      httpGet:
        path: /roadster/actuator/health/liveness
        port: 8080
      initialDelaySeconds: 10
      timeoutSeconds: 10
      failureThreshold: 3
    resources:
      requests:
        cpu: "500m"
        memory: "512Mi"
      limits:
        cpu: "1000m"
        memory: "1024Mi"
-----------------------------------------------------------------------------------
-----------------------------------------------------------------
20-02-2023
These limits are specified for the individual containers of a pod, not at the pod
level, so the sum of the individual container resource specifications is
considered as the final resource limit when scheduling and running a pod on the
cluster
In case we have not specified the "limits" metrics, then the max
cpu/memory to be allocated is considered as
1. no limit, and allocate however much has been requested
2. while creating a namespace, the administrator can set a max default limit on
the resources to be allocated to the pods, which would be applied for all the
containers running inside the namespace, given that limits are not specified at the pod
level (see the sketch after this section)
In case we have not specified the "requests" resource spec for the cpu/memory of a
pod, then by default it is considered the same as the "limits" spec
It is always advised to specify the resource declarations within the pod spec file
to better manage the pod on the cluster
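the namespace-level default mentioned in point 2 above is configured with a LimitRange object; a minimal sketch (the namespace and the values are illustrative, not from these notes):
limitrange.yml
apiVersion: v1
kind: LimitRange
metadata:
  name: defaultlimits
  namespace: airtel
spec:
  limits:
  - type: Container
    default:              # default "limits" applied when a container does not declare its own
      cpu: "500m"
      memory: "512Mi"
    defaultRequest:       # default "requests" applied when a container does not declare its own
      cpu: "250m"
      memory: "256Mi"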
-----------------------------------------------------------------------------------
------------------------------------------------
What are the different states in which a pod can exist in the kubernetes cluster?
A pod in the kubernetes cluster can exist in 5 different states, which is also
referred to as the "pod lifecycle"
1. pending = when we send a request for creating a pod, the api manager,
upon accepting the request, sends the request to
the scheduler for creating the pod; at this moment the pod state is reported as
"pending"
2. running = at least one of the containers inside the pod has been started
and the readinessProbe on the pod has passed,
then the pod is reported as running
3. succeeded = all the containers within the pod have exited with an
exitcode of zero, then the pod is reported as succeeded
4. failed = when at least one of the containers inside the pod has
exited with a non-zero exitcode, then the pod is reported
as failed
5. crashloopbackoff = when a pod is repeatedly failing execution after
successive restarts, then to avoid further scheduling of the same pod for
execution, kubernetes marks the state of the pod as "crashloopbackoff", indicating
the pod should not be scheduled for further execution, since it is repeatedly
failing
-----------------------------------------------------------------------------------
------------------------------------------------
Working with Labels and Annotations in Kubernetes
Labels:
Labels are arbitrary key/value pairs we can attach to a kubernetes object;
these are used for identifying and accessing the objects over the cluster. For a
kubernetes object we can assign any number of key/value pair labels, but each key should
appear only once and should be unique
2. we can query or search for objects based on the labels using the -l switch
kubectl get pods -l key=value
Annotations
Annotations are used for attaching arbitrary information, which is non-identifying
data, to kubernetes objects. These are only used as documentation helpers which
we can read or use through the kubernetes metadata api.
roadster-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: roadsterpod
  labels:
    app: roadster
    version: "1.0"
    env: stage
  annotations:
    license: GPL License
    warranty: product comes under limited warranty
spec:
  containers:
  - name: roadster
    image: techsriman/roadster:1.0
    ports:
    - name: tomcatport
      containerPort: 8080
      protocol: TCP
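with these labels in place, the pod can be located and labelled from the command line (a minimal sketch):
kubectl get pods -l app=roadster,env=stage      # selects pods matching all the given labels
kubectl label pod roadsterpod release=beta      # attaches an additional label to the running pod
kubectl describe pod roadsterpod                # shows the labels and annotations attached to the pod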
-----------------------------------------------------------------------------------
------------------------------------------------------------
22-02-2023
configmaps- config-secret
ConfigMaps
What are ConfigMaps and what is the purpose of ConfigMaps?
Every software application uses configuration information pertaining to the external
resources/systems it is using to perform operations, like
1. a database management system
2. an enterprise cache (rediscache, memcache)
3. a remote filesystem
4. an external api provided by a vendor
etc
How can the software application maintain or use this configuration within
the application?
There are many ways of maintaining the configuration information
#1
The application can directly hardcode the configuration values within its
programs in order to communicate and perform operations on the external
resources/systems. writing the configuration values directly within the sourcecode
of the application is not recommended, because we run into a lot of problems as
described below.
dis-advantages:
1. whenever there is a change in the configuration values, the developer has to
modify the sourcecode, build/package and redeploy the application, which takes a huge
amount of time and rework for a configuration change
2. from one env to another env, the configuration values of the external resources
would be different, so while moving the application across the envs we need to
modify the sourcecode, which again runs into the same problem we discussed above
#2
To avoid the above problem in maintaining the configuration information, we need to
externalize the configuration values into external files like properties, yaml
or xml files. The developer has to write these configuration values within these
configuration files and read those values in the programs to perform operations on
the external resources.
advantages:
1. since the configuration values are placed in non-program files, a change in
configuration only needs a modification of the configuration files, which does not
require rebuilding/repackaging or redeploying the applications. So the configuration
changes can be easily reflected
2. for different environments we have different configuration values, so we can
create multiple configuration files pertaining to each env and we can run the
application against those configurations
dis-advantages:-
1. in case multiple replicas of the application have been deployed across the
nodes of the cluster for high availability and scalability, then maintaining these
configurations and changing the configuration values across all the instances will
be difficult
#3 centralize the configuration and distribute it across all the instances of the
application
How to run such applications, which accept the configuration values from an external
source like env variables, as part of the kubernetes cluster?
-----------------------------------------------------------------------------------
-------------------------------------------------------
23-02-2023
ConfigMaps
An application has to be designed to read the configuration values as input
through environment variables, so that while running the application the devops
engineer can pass these values as input by configuring them as environment
variables.
How to launch or run an application which accepts the configuration in terms of env
variables on kubernetes infrastructure?
The env variables should be seeded into the container while launching the container,
so that these env variable values will be available as input to the application.
In the case of kubernetes we are not creating/launching the containers; rather the
controlplane takes care of creating the container, so we need to specify the
env variables/values with which the container application has to be launched by writing them
in the spec file
If we write these env variables with values in the pod specfile, there are a few
problems
1. every time there is a change in a value, we need to modify the pod specfile,
which is unnecessary maintenance
2. the same env variables with values may have to be reused across different
applications running on the cluster, so if we write the env variables with values
locally in the pod spec file these will get duplicated across the applications, so a
change in these values incurs huge effort and time
Instead of writing these configuration values in the pod spec file, place them inside
a ConfigMap object in the kubernetes cluster. The configuration values we place
inside the ConfigMap can be accessed as inputs by the pod application in 3 ways
1. environment variables = we can pass these configMap values as environment
variables into the pod application by referring to them in the pod specfile
2. command-line arguments = we can pass these configMap values as
command-line arguments while launching the application
3. Through the ConfigMap api = within the containerized application, the developer can write
code to read the values from the ConfigMap object stored on the kubernetes
cluster (not recommended)
coronaguidelines-configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: guidelinesconfigmap
  labels:
    app: corona
data:
  oxygenLevels: "85"
  quarantine: "20"
  liters: "5"
  temparatureLevels: "99 - 100"
corona-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: coronapod
  labels:
    app: corona
spec:
  containers:
  - name: corona
    image: techsriman/corona:1.0
    ports:
    - name: tomcatport
      containerPort: 8080
      protocol: TCP
    env:
    - name: guidelines.oxygenLevels
      valueFrom:
        configMapKeyRef:
          name: guidelinesconfigmap
          key: oxygenLevels
    - name: guidelines.liters
      valueFrom:
        configMapKeyRef:
          name: guidelinesconfigmap
          key: liters
    - name: guidelines.temparatureLevels
      valueFrom:
        configMapKeyRef:
          name: guidelinesconfigmap
          key: temparatureLevels
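the same ConfigMap can also be created imperatively and inspected (a minimal sketch showing two of the keys):
kubectl create configmap guidelinesconfigmap --from-literal=oxygenLevels=85 --from-literal=liters=5
kubectl get configmap guidelinesconfigmap -o yaml      # shows the stored key/value data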
===================================================================================
=====================================================================
27-02-2023
Now, while writing the podspec file, we need to mount the properties file as a
volume mount into the container, so that the developers, while building the application,
will have the logic for reading the file from the mount location
corona-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: coronapod
spec:
  containers:
  - name: corona
    image: techsriman/corona:1.0
    ports:
    - name: tomcatport
      containerPort: 8081
      protocol: TCP
    volumeMounts:
    - name: coronavolume
      mountPath: /config
      readOnly: true
  volumes:
  - name: coronavolume
    configMap:
      name: coronaconfigmap
      items:
      - key: corona.properties
        path: "corona.properties"
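the coronaconfigmap referenced above can be created from a local properties file (a sketch; assumes a corona.properties file exists in the current directory):
kubectl create configmap coronaconfigmap --from-file=corona.properties
kubectl describe configmap coronaconfigmap      # the file contents appear under the corona.properties key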
===================================================================================
=========================================================================
02-03-2023
Config Secrets
---------------
Kubernetes secrets let us store and manage sensitive information like passwords, ssh
keys, encryption keys etc that are required as input by a software application.
We can store these secrets/credentials directly as part of the podspec or in a configMap
as well, but storing these secrets in the pod spec or configMap makes them insecure:
everyone can read the information/secrets we store in the podspec or configMap and can
grab access to the end systems. Instead, it is recommended to store such
sensitive information in a ConfigSecret
Note:-
By default, when we store credential information in a ConfigSecret, it will not
encrypt the data while storing; rather it encodes the data into Base64 encoding and
stores it. That means we can read the values back in plain-text format, and
hence they are not secured by default.
So kubernetes ConfigSecrets can be stored in HashiCorp Vault by integrating kubernetes
with vendor vaults
While storing sensitive data within a ConfigSecret we can attach type
information to help us identify what type of secret we are storing in the ConfigSecret.
It is not mandatory to attach type information while storing a ConfigSecret, but it
is recommended so that we can easily understand what type it is while accessing it.
By default, if we dont specify the type while storing, it treats the type as
"Opaque"
Kubernetes has provided built-in secret types; we can use these secret types while
defining our own secrets
1. Opaque = arbitrary data
2. kubernetes.io/service-account-token = the service account token, which is a system
secret or kubernetes secret
3. kubernetes.io/dockercfg = serialized format of the docker config file
4. kubernetes.io/dockerconfigjson = serialized format of the docker config json file
5. kubernetes.io/basic-auth = username/password
6. kubernetes.io/ssh-auth = ssh keys
7. kubernetes.io/tls = ssl keys or public/private encryption keys
How to create a ConfigSecret object for storing the database username and
password?
airtel2-configsecret.yml
apiVersion: v1
kind: Secret
metadata:
  name: airtel2dbconfigsecret
type: kubernetes.io/basic-auth
stringData:
  username: root
  password: root
airtel2-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: airtel2pod
spec:
  containers:
  - name: airtel2
    image: techsriman/airtel2:1.0
    ports:
    - name: tomcatport
      containerPort: 8081
      protocol: TCP
    env:
    - name: "spring.datasource.username"
      valueFrom:
        secretKeyRef:
          name: airtel2dbconfigsecret
          key: username
    - name: "spring.datasource.password"
      valueFrom:
        secretKeyRef:
          name: airtel2dbconfigsecret
          key: password
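once created, the stored values can be inspected; as noted above, kubernetes only base64-encodes them (a minimal sketch):
kubectl apply -f airtel2-configsecret.yml
kubectl get secret airtel2dbconfigsecret -o yaml                                       # shows the base64-encoded data
kubectl get secret airtel2dbconfigsecret -o jsonpath='{.data.password}' | base64 -d   # decodes the value back to plain text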
===================================================================================
==================================================================
04-03-2023
1. ReplicaSet Controller
A Pod is the smallest entity within the kubernetes cluster, where multiple containers
are kept together and executed within one pod. these containers may want to share
common resources like a filesystem, or have a common lifecycle, so they are packaged and run
inside one pod.
We can create a pod in the kubernetes cluster through a manifest or pod specfile. There
are a few characteristics of a pod
1. one pod manifest creates one pod on the cluster in the running state
2. a pod will not survive a crash = if a pod has crashed due to any reason, it
will not be replaced with another pod
In addition:
3. if we want to run 10 pods out of the same pod specfile (with the same containers
inside it), we need to create 10 pod manifest files with different pod names and
create the pods manually, which is a difficult job
4. upon creating the 10 pods, we need to monitor them and ensure they are always
running; in case any one of the replicas of the pod has crashed, we need to
take care of replacing it with another pod
So managing multiple replicas of a pod and replacing them in case of a crash by
monitoring them is a difficult job. To overcome this problem kubernetes has
provided the ReplicaSet Controller
ReplicaSet Controller:
A ReplicaSet Controller can be imagined as a reconciliation loop, where the
ReplicaSet controller loops through all the workernodes of the cluster to identify
whether the desired number of replicas of a pod are running on the cluster or not.
if the desired number of replicas is not met, then the ReplicaSet Controller talks
to the Scheduler to bring up pods on the cluster to meet the desired state.
If the desired number of replicas is already met, it goes into a monitoring state to
see if any pods have crashed over the course of time, so that it can replace them with
another pod
From the above we can understand that we always write the podspec inside the replicaset
spec, with replicas, to bring up the desired number of pods on the cluster
sailor-replicaset.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: sailorreplicaset
  labels:
    app: sailor
spec:
  replicas: 2
  selector:            # specifies which pods are counted towards the desired number of replicas
    matchLabels:
      app: sailor
      version: "1.0"
  template:
    metadata:
      labels:
        app: sailor
        version: "1.0"
    spec:
      containers:
      - name: sailor
        image: techsriman/sailor:1.0
        ports:
        - name: tomcatport
          containerPort: 8080
          protocol: TCP
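applying the spec and watching the controller maintain the replica count (a minimal sketch):
kubectl apply -f sailor-replicaset.yml
kubectl get rs sailorreplicaset                     # shows the DESIRED/CURRENT/READY counts
kubectl delete pod -l app=sailor                    # deleted pods are immediately replaced to keep 2 replicas
kubectl scale rs sailorreplicaset --replicas=4      # changes the desired replica count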
===================================================================================
===================================================================
05-03-2023
Deployment
----------
A Deployment is another way of deploying an application onto the kubernetes cluster
and releasing it to the customers. Using the Deployment controller, any changes to the
pod template can be rolled out in a controlled way
The Deployment controller applies the strategy for rolling out or releasing the pod replicas
on the cluster, from which we can understand that without a ReplicaSet there is no
Deployment controller to manage the releases. So within the Deployment spec
we always embed the ReplicaSet spec
speed-deployment.yml
--------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: speeddeployment
spec:
  replicas: 2
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: speed
  template:
    metadata:
      labels:
        app: speed
    spec:
      containers:
      - name: speedcontainer
        image: techsriman/speed:1.0
        ports:
        - name: tomcatport
          containerPort: 8080
          protocol: TCP
once we have written the above deployment spec, we can apply the
spec to create the deployment on kubernetes
kubectl create -f speed-deployment.yml
2. once the new version of the image is published and ready, we need to edit the
deployment we have already created for that application on the cluster
There are 2 ways we can modify the deployment
1. using the kubectl set command
2. updating the specfile on the cluster
1.
kubectl set image deployment/deploymentName container=newImage:newVersion
for eg..
kubectl set image deployment/speeddeployment speedcontainer=techsriman/speeddep:2.0
2. run the below command, which opens the existing deployment spec on the cluster
kubectl edit deployment deploymentName
strategies:
1. Recreate
The recreate strategy means terminate all the running instances and then recreate them
with the newer version
spec:
  replicas: 2
  strategy:
    type: Recreate
advantages:-
1. application state is entirely renewed
2. no need for additional infrastructure to be created or planned to rollout
the newer version of the application
3. cost of releasing the newer version is less
dis-advantages:-
1. downtime, and the amount of time the application will be un-available
depends on the time it takes to boot the instances
===================================================================================
====================================================================
06-03-2023
Deployment Controller
The Deployment controller is another way of deploying the application onto the
kubernetes cluster. Through the Deployment controller, any changes in the pod template
can be rolled out in a controlled way
1. Recreate
2. Ramped (rolling update)
3. blue/green
4. canary
5. a/b testing
#1. Recreate
In Recreate, the existing pods on the cluster are terminated and the new version
of the pods is rolled onto the cluster.
advantages:-
1. the new version can be rolled out in one shot
2. no need for additional infrastructure to be planned for making a release
dis-advantages:-
1. downtime, and the downtime depends on how long the new version of the
application takes to boot up
#2. Ramped (RollingUpdate)
speed-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: speeddeployment
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2           # how many extra pods can be added at a time during the rollout
      maxUnavailable: 0     # how many pods are allowed to be unavailable during this rolling update
advantages:-
1. the new version is slowly released across the instances
2. no downtime of the application
dis-advantages:-
1. rollout/rollback can take more time
2. no control over the traffic
3. supporting multiple api versions is very hard
#3. blue/green
advantages:-
1. instant rollout/rollback
2. zero downtime for the application
3. avoids versioning issues, because it changes the entire cluster state in
one go
dis-advantages:-
1. requires more infrastructure for every release
2. handling stateful applications will be hard
===================================================================================
===================================================================
07-03-2023
1. Pause
kubectl rollout pause deployment/deploymentName
upon applying the changes we can resume the deployment using the below command
kubectl rollout resume deployment/deploymentName
other useful rollout/deployment operations (see the examples below):
1. history
2. scale
3. to-revision
4. rollout undo
5. pause
6. resume
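the corresponding kubectl commands look like this (a sketch using the speeddeployment from earlier):
kubectl rollout status deployment/speeddeployment                   # watches the progress of the current rollout
kubectl rollout history deployment/speeddeployment                  # lists the recorded revisions of the deployment
kubectl rollout undo deployment/speeddeployment                     # rolls back to the previous revision
kubectl rollout undo deployment/speeddeployment --to-revision=2     # rolls back to a specific revision
kubectl scale deployment/speeddeployment --replicas=5               # changes the number of replicas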
3. blue/green = release a new version of the pod application alongside the old
version and, upon completing the testing, switch the traffic to the new version and
obsolete the older application
4. canary = the canary strategy is also similar to blue/green; the only
difference is we release the new version of the application to a subset of users,
then based on feedback we rollout the full version
5. a/b testing = release a new version to a subset of users in a precise /
controlled way. For eg, if the user is sending the request with a specific request
header or cookie, then send the user to the new version of the application, otherwise let
the user access the older version
-----------------------------------------------------------------------------------
------------------------------------------------
Service
A Service adds networking to the pods that are running on the kubernetes
cluster. By default, when we create a pod on a node of the cluster, the pod has a few
characteristics:
1. The pod is accessible within the node of the cluster and cannot be accessed
by any other pods that are running on other nodes of the cluster
2. The pod is assigned an ephemeral ip address, which is renewed upon
a pod restart
===================================================================================
=======================================================================
08-03-2023
Service
A Service adds a network to the pods of the kubernetes cluster. By default a
pod is accessible within the node on which it has been created, and has a few more
characteristics
1. A pod will not be recovered upon a crash
2. an ephemeral ip address is assigned to the pod, so upon crash and recovery a
new ip address will be assigned
but we want the pod to be
1. accessed by other pods on the cluster without worrying about the ip address
being changed
2. exposed to the external world
3. load balanced, so that the traffic is distributed across the pod replicas
we can achieve all these things through the help of a Service
#1. ClusterIP
The ClusterIP Service is assigned an ip address within the cluster range,
using which the other pods of the cluster can access the pod.
#2. NodePort
With a NodePort service, as the name itself indicates, we open a port on each workernode
through which we receive traffic and forward it to the targetPort on which the pod
application is running. We can directly access a pod running on a workernode using
the fixed ip and port number using a NodePort Service.
It is not meant for loadbalancing and distributing the traffic to the pods that are
running across the nodes of the cluster.
Usually it is not recommended to use a NodePort Service, since it exposes the port
of a workernode over a fixed ip address, which creates a security breach. The purpose
of NodePort is to directly access the application that is running on a workernode
during development/testing phases only
===================================================================================
==============================================================
09-03-2023
NodePort
NodePort is used for exposing a pod application that is running on a workernode of
the cluster directly to the external world over a fixed port. The name itself tells
us it opens a port on the workernode through which it makes the underlying pod
application running on the node accessible to the world.
===================================================================================
===========================================================
10-03-2023
NodePort
A NodePort Service is used for exposing a pod application that is running on a
workernode directly to the external world. NodePort opens a port on the workernode
(within the range: 30000 - 32767) and forwards the request to the targetPort on
which the Pod application is running.
When we create a NodePort Service, the NodePort is created across all the
nodes of the cluster, each node opening the port by itself. The traffic received on the
workernode's NodePort is forwarded to the NodePort Service port, and thereby
forwarded to the targetPort on which the application is running in a pod
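a minimal NodePort service spec might look like the following (a sketch; the app label and port values are illustrative):
speed-nodeport-service.yml
apiVersion: v1
kind: Service
metadata:
  name: speednodeportservice
spec:
  type: NodePort
  selector:
    app: speed
  ports:
  - port: 8080          # the service port inside the cluster
    targetPort: 8080    # the containerPort of the pod application
    nodePort: 30080     # the fixed port opened on every workernode (30000 - 32767)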
How can we persist the data that is generated by the pod application running
inside the container of a Pod, so that in the event of a crash the data is
retained?
That is where kubernetes has introduced the Persistent Volume and Persistent Volume
Claim
There are a few attributes we need to define while creating a persistent volume:
1. storageClassName = indicates the type of storage to be created on the cluster
2. accessModes:
   ReadOnlyMany = the pods can only read the data from this volume
   ReadWriteOnce = only one node can mount the volume for read/write at a time
   ReadWriteMany = multiple pods across nodes are allowed to read/write at the same time
3. capacity = storage size to be assigned for that persistent volume
pvc.yml
-------
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  storageClassName: pvClass
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
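the claim above binds to a PersistentVolume with the same storageClassName; a minimal hostPath PV sketch (illustrative, not from the notes) could be:
pv.yml
------
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  storageClassName: pvClass
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /u01/data      # directory on the workernode backing this volume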
speed-pod.yml
-------------
apiVersion: v1
kind: Pod
metadata:
  name: speedpod
spec:
  containers:
  - name: speedcontainer
    image: techsriman/speeddep:2.0
    ports:
    - name: tomcatport
      containerPort: 8080
      protocol: TCP
    volumeMounts:
    - name: speedvolume
      mountPath: /u01/app
  volumes:
  - name: speedvolume
    persistentVolumeClaim:
      claimName: pvc1
===================================================================================
=====================================================================
11-03-2023
mysql-pv.yml
---------------
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysqlpv
spec:
  storageClassName: mysqlStorageClass
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: /u01/data      # location on the workernode where the volume should be created
mysql-pvc.yml
-------------
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysqlpvc
spec:
  storageClassName: mysqlStorageClass
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
mysql-pod.yml
-------------
apiVersion: v1
kind: Pod
metadata:
  name: mysqlpod
spec:
  containers:
  - name: mysqlcontainer
    image: mysql:8.0
    ports:
    - name: mysqlport
      containerPort: 3306
      protocol: TCP
    volumeMounts:
    - name: mysqlvolume
      mountPath: /u01/mysql
  volumes:
  - name: mysqlvolume
    persistentVolumeClaim:
      claimName: mysqlpvc
===================================================================================
======================================================================
13-03-2023
Ingress
Ingress is another type of resource in kubernetes that is used for exposing the
pod applications to the external world.
The Ingress controller is another controller of kubernetes that receives requests from
the external world and routes them to the Service component.
by default, with a kubernetes or minikube install, the Ingress controller will not be
available; we need to enable the ingress explicitly
minikube addons enable ingress
There are different ingress controller providers in the market;
one such provider is Nginx
These are internally httpd servers which receive the request over a domainName
and proxy the request to the backend
now we need to write an Ingress resource that receives requests over a domain
or host: covido.org
and then forwards the request to the ClusterIP Service
(clusterip)
covido-service.yml
-------------------
apiVersion: v1
kind: Service
metadata:
  name: covidoclusteripservice
spec:
  type: ClusterIP
  selector:
    app: covido
    version: v1
  ports:
  - port: 8080
    targetPort: 8080
covido-ingress.yml
----------------------
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: covidoingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: covido.org
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: covidoclusteripservice
            port:
              number: 8080
http://covido.org/index
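for the host rule to resolve locally, covido.org has to point at the cluster's ingress ip; on a linux host running minikube a common approach (a sketch) is:
minikube ip                                                   # prints the cluster ip, e.g. 192.168.49.2
echo "$(minikube ip) covido.org" | sudo tee -a /etc/hosts     # maps the host name used in the ingress rule
curl http://covido.org/index                                   # the request now reaches the pod through the ingress and the service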
===================================================================================
====================================================================
15-03-2023
Job
A Job is used for performing an operation on the kubernetes cluster. A Job creates
one or more Pods and keeps executing them until the specified number of executions
are successful
numbers-job.yml
apiVersion: batch/v1
kind: Job
metadata:
  name: numbersjob
spec:
  template:
    metadata:
      name: numberspod
    spec:
      containers:
      - name: numberscontainer
        image: ubuntu:20.04
        command:
        - "/bin/bash"
        - "-c"
        - "for i in 1 2 3 4 5 6 8 9 0 ; do echo $i ; done"
      restartPolicy: Never
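running the job and reading its output (a minimal sketch):
kubectl apply -f numbers-job.yml
kubectl get jobs                      # COMPLETIONS shows 1/1 once the pod exits successfully
kubectl logs job/numbersjob           # prints the numbers echoed by the container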
DaemonSet
nginx-daemonset.yml
--------------------
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginxdaemonset
spec:
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
===================================================================================
======================================================================
19-03-2023