Kubernetes
Kubernetes Introduction
Kubernetes is an open-source orchestration system for containers, originally
developed by Google, that eliminates the manual processes involved in
deploying containerized applications.
Kubernetes is used to manage the state of containers, i.e. to:
Start containers on specific nodes.
Restart containers when they get killed.
Move containers from one node to another.
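As a concrete illustration of these tasks, the sketch below uses the official
Kubernetes Python client to start a container on a specific node and to have
it restarted automatically if it gets killed. This is only a minimal sketch:
it assumes a cluster reachable through ~/.kube/config, and the node name
worker-1 and the nginx image are placeholders.

    # Minimal sketch with the official Kubernetes Python client (pip install kubernetes).
    from kubernetes import client, config

    config.load_kube_config()          # read cluster credentials from ~/.kube/config
    v1 = client.CoreV1Api()            # client for core resources (Pods, Nodes, ...)

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="demo-pod"),
        spec=client.V1PodSpec(
            node_name="worker-1",      # placeholder: start the container on this specific node
            restart_policy="Always",   # restart the container whenever it gets killed
            containers=[client.V1Container(name="web", image="nginx:1.25")],
        ),
    )
    v1.create_namespaced_pod(namespace="default", body=pod)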
Problems when multiple services run inside containers
When multiple services run inside containers, we can see the following
problems:
Increased complexity of the infrastructure
Scaling is very difficult
Services have to be set up manually
Manual fixes if any node crashes
Increased human cost of running the service
Increased bills from the cloud service provider
Features of Kubernetes
1. Automated Scheduling: Kubernetes provides an advanced scheduler to
launch containers on cluster nodes based on their resource requirements
and other constraints.
2. Healing Capabilities: Kubernetes replaces and reschedules containers
when nodes die, and it does not advertise containers to clients until they
are ready to serve.
3. Auto Upgrade and Rollback: Kubernetes progressively rolls out changes to
the application or its configuration while monitoring application health, so
that it never kills all instances at the same time. If something goes wrong,
Kubernetes lets us roll back the change.
4. Horizontal Scaling: Kubernetes can scale the application up and down as
required with a simple command, using a UI, or automatically based on CPU
usage (see the scaling sketch after this list).
5. Storage Orchestration: With Kubernetes, you can mount the storage
system of your choice. You can either opt for local storage, or choose a
public cloud provider.
6. Secret & Configuration Management: Kubernetes can help you deploy
and update secrets and application configuration without rebuilding your
image and without exposing secrets in your stack configuration (see the
Secret/ConfigMap sketch after this list).
7. Run Kubernetes Anywhere:
On-Premise (own data center)
Public Cloud (Google, AWS, Azure, DigitalOcean) and
Hybrid Cloud
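Horizontal scaling sketch referenced in feature 4, again using the official
Python client. It assumes a Deployment named my-app already exists in the
default namespace; the name and replica count are placeholders.

    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    # Scale the (assumed) Deployment "my-app" to 5 replicas, the programmatic
    # equivalent of: kubectl scale deployment my-app --replicas=5
    apps.patch_namespaced_deployment_scale(
        name="my-app",
        namespace="default",
        body={"spec": {"replicas": 5}},
    )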
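Secret and configuration management sketch referenced in feature 6. The
names and values are placeholders; Secret data must be base64-encoded,
which is why the password is encoded before it is sent.

    import base64
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Store a credential as a Secret (kept out of the image and the Pod spec).
    secret = client.V1Secret(
        metadata=client.V1ObjectMeta(name="db-credentials"),
        data={"password": base64.b64encode(b"s3cr3t").decode()},
    )
    v1.create_namespaced_secret(namespace="default", body=secret)

    # Store non-sensitive application configuration as a ConfigMap.
    config_map = client.V1ConfigMap(
        metadata=client.V1ObjectMeta(name="app-config"),
        data={"LOG_LEVEL": "info"},
    )
    v1.create_namespaced_config_map(namespace="default", body=config_map)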
Kubernetes Architecture:
Kubernetes follows a master-slave (worker) node architecture.
Master Node: The machine on which the Kubernetes cluster-management
components are installed and run is called the master node; it is
responsible for managing the Kubernetes cluster and is the entry point for
all administrative tasks. We can also build a Kubernetes multi-master
architecture by replicating the master node.
This node, which hosts the cluster-management components, is also called
the control plane. It contains the following Kubernetes components.
Control plane components of K8s
API Server
Scheduler
etcd
Controller Manager
Cloud Controller Manager
1. API Server: The API server is the entry point for all the REST commands
used to control the cluster; it is the point of interaction with Kubernetes
(see the client sketch after this component list).
2. etcd: A distributed key-value store that stores the cluster state. It is
used as the backing store for K8s and provides high availability of the data
related to the cluster state.
3. Scheduler: Distributes the work (Pods) across the worker nodes and stores
the resource usage information for each worker node, so that new Pods are
placed on nodes with enough free resources.
4. Controller Manager: Runs multiple controllers in a single process and
carries out automated tasks in the K8s cluster to keep the actual state in
line with the desired state.
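Every kubectl command and every client call goes through the API server,
which in turn reads and writes the cluster state kept in etcd. The minimal
sketch below (referenced from the API Server item) simply lists the nodes
and Pods the control plane knows about; it assumes a cluster reachable
through ~/.kube/config.

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # List the nodes the scheduler can place Pods on.
    for node in v1.list_node().items:
        print("node:", node.metadata.name)

    # List all Pods the control plane currently knows about.
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        print("pod:", pod.metadata.namespace, pod.metadata.name, pod.status.phase)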
Worker Node: A physical server, or a VM, where the containers managed by
the cluster run. Worker nodes contain all the services necessary to
manage the networking between the containers,
communicate with the master node, and
assign resources to the scheduled containers.
1. Kubelet: The K8s agent that runs on each worker node. The kubelet gets
the configuration of a Pod from the API server and ensures that the
described containers are up and running.
2. Pods: A Pod is a group of one or more containers with shared
storage/network and a specification for how to run the containers.
Containers in a Pod share storage and the same IP address and can reach
each other via localhost; other Pods are reached through their own Pod IPs.
A single Pod always runs on a single machine, while a single machine can
run multiple Pods (see the Pod sketch after this list).
3. Kube-Proxy: Kube-proxy runs on each node to deal with individual host
sub-netting and ensure that the services are available to external parties.
Kube-proxy acts as a network proxy and a load balancer for a Service on a
single worker node (see the Service sketch after this list).
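Pod sketch referenced above: two containers in one Pod share the Pod's IP,
so the sidecar can reach the web container over localhost. The image names
and labels are placeholders.

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="web-with-sidecar", labels={"app": "web"}),
        spec=client.V1PodSpec(containers=[
            client.V1Container(name="web", image="nginx:1.25",
                               ports=[client.V1ContainerPort(container_port=80)]),
            # The sidecar shares the Pod network, so it can call http://localhost:80.
            client.V1Container(name="sidecar", image="busybox:1.36",
                               command=["sh", "-c", "sleep 3600"]),
        ]),
    )
    v1.create_namespaced_pod(namespace="default", body=pod)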
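Service sketch referenced in the Kube-Proxy item: kube-proxy on every node
turns this Service into a virtual IP that load-balances traffic across all
Pods matching the selector app=web (such as the Pod above). The names are
placeholders.

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    service = client.V1Service(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1ServiceSpec(
            selector={"app": "web"},                                # Pods to load-balance across
            ports=[client.V1ServicePort(port=80, target_port=80)],
        ),
    )
    v1.create_namespaced_service(namespace="default", body=service)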
Kubernetes Installation: K8s can be installed in the following ways:
1. Single-node deployment (Minikube K8s cluster), suitable for development.
2. Kubernetes HA deployment (1 master | 2 workers) using kubeadm, suitable
for a production-like setup.
3. Using a cloud provider's managed Kubernetes service.