K8s Kubesimplify
API server → Main brain of the Kubernetes control plane; it handles all the requests coming from kubectl for managing the cluster.
→ It authenticates and authorizes (using RBAC) and then saves the data to etcd.
Scheduler → Component used to find the best fit node for scheduling the pod.
Receives the scheduling request from API server.
Controller manager → Runs the built-in controllers; it includes the replicaset controller, deployment controller, job controller, daemonset controller, etc.
Cloud controller manager → The cloud controller manager lets you link your cluster
into your cloud provider's API, it is very cloud specific.
etcd → Kubernetes uses this key:value database to keep the data needed to manage the cluster's nodes and objects; all of the data is stored in a distributed manner.
kube-proxy → A network proxy and load balancer that runs on each node in a Kubernetes cluster. It maintains network rules on the nodes and is responsible for enabling communication to Pods through Services.
When you run a container on a machine, several namespaces get created. Linux namespaces provide a mechanism to isolate certain aspects of the system's state for individual processes or groups of processes.
→ Cgroups: Control groups are used to limit RAM, CPU, storage I/O, etc. for a group of processes.
1. Run a container and find the PID of its main process (1987 in this example), then list its Linux namespaces.
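The exact commands were screenshots in the original notes; a minimal sketch of the flow (the container name is an assumption):
docker run -d --name web nginx
docker inspect -f '{{.State.Pid}}' web   #Prints the PID of the container's main process, e.g. 1987
lsns -p 1987                             #Lists the Linux namespaces that the process belongs to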
4026532429 ipc 2 1987 root nginx: master process nginx -g daemon off;
4026532430 pid 2 1987 root nginx: master process nginx -g daemon off;
4026532431 net 2 1987 root nginx: master process nginx -g daemon off;
4026532489 cgroup 2 1987 root nginx: master process nginx -g daemon off;
→ These are the namespaces created for the container; each namespace is used for a different purpose.
/sys/fs/cgroup/system.slice/docker-3044a508bc828b2f473a68628cc9c61f196eab481a0e0017f88f847d054c7158.scope
# The docker-3044a508bc828b2f473a68628cc9c61f196eab481a0e0017f88f847d054c7158.scope name comes from the container id of the docker container, get it using docker ps
Kubectl connects to the kubernetes cluster using the config given in the kubeconfig
file
cat ~/.kube/config
kubectl create deploy demo --image=nginx --dry-run=client -oyaml
Clusters created by tooling ourselves are called self-managed clusters, while those created on cloud platforms, such as EKS, AKS, etc., are called managed Kubernetes clusters.
1. Kind → https://kind.sigs.k8s.io/
3. Minikube → https://minikube.sigs.k8s.io/docs/
Whenever we try to interact with the cluster using kubectl commands like kubectl
get pods, etc it will send a request to the API server.
NAMESPACE            NAME                                      READY   STATUS    RESTARTS      AGE
default              nginx                                     1/1     Running   0             18m
kube-system          calico-kube-controllers-fdf5f5495-dgc76   1/1     Running   2 (52m ago)   31d
kube-system          canal-9hc7x                               2/2     Running   2 (52m ago)   31d
kube-system          canal-b5cnm                               2/2     Running   2 (52m ago)   31d
kube-system          coredns-7695687499-2vdd4                  1/1     Running   1 (52m ago)   31d
kube-system          coredns-7695687499-ltw2v                  1/1     Running   1 (52m ago)   31d
kube-system          etcd-controlplane                         1/1     Running   3 (52m ago)   31d
kube-system          kube-apiserver-controlplane               1/1     Running   2 (52m ago)   31d
kube-system          kube-controller-manager-controlplane      1/1     Running   2 (52m ago)   31d
kube-system          kube-proxy-f7jnk                          1/1     Running   2 (52m ago)   31d
kube-system          kube-proxy-fbkjh                          1/1     Running   1 (52m ago)   31d
kube-system          kube-scheduler-controlplane               1/1     Running   2 (52m ago)   31d
local-path-storage   local-path-provisioner-5c94487ccb-bm6vp   1/1     Running   2 (52m ago)   31d
To get the current context, we can run the below command:
kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin
#Create a key
#Encode the csr in base64
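The commands for these steps were screenshots in the original notes; a minimal sketch with openssl (the subject fields are assumptions based on the decoded CSR):
openssl genrsa -out subhan.key 2048
openssl req -new -key subhan.key -out subhan.csr -subj "/CN=subhan/O=group1"
cat subhan.csr | base64 | tr -d '\n'   #Produces the single-line base64 string below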
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1p6Q0NBV
ThDQVFBd0lqRVBNQTBHQTFVRUF3d0dVM1ZpYUdGdU1ROHdEUVlEVlFRS0RB
Wm5jbTkxY0RFdwpnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dn
RUtBb0lCQVFETWhwTFdzdDZMSkRwQ0ZLdU1qWXpKCmdXTnJteUdQU2lPT0J
ENjdQNEZmMWZ1OWlJUTRKbDMra1E5WHhZMlhZQjR1d2ZjNzBOclB2QVFVaGJ1
YXpFaTQKcnVETENGdENVblNWSklORGExMnl2WGdGa3R1SGpublBXZ1plUkxsM
kI1V2ZLZStURkdRK1VNT1lRTjFPckFCQwpVbm0yWVMxZ21DT1VPQmdMRWtoW
UNvTUlEM3hoeHFOZUFIZDV3UVNGNWtUeElqY2NCbHJJcEZ6K0UrNDZuOGNj
CklrUzlFL0pNOGcxZ2NGc3c0ZlFHdHlUc3J5WjU3WFZJcFBBY2tNc3hBYzBGb0
9YN1d3SmgwVjV1SG9rR2dzTG8KV0FQaXFOVllBNVVacTl4Q3hGTm4xeGZWcW
VYMkJlMkc2Q2NnOTJMbm00VW0xYStTUTF0SklENnJEUTVYYi80bgpBZ01CQU
FHZ0FEQU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUF0ZVZnY3p0cW1YcUs4TV
FTZDlkTmRUaXZQKzRKCjRWZloxQUo2ZWxpZ0pRTG13bGE1b0pOcGFnUFI0MEJ
pcFkvZlgxdWdtWDQ5SkxOL1lRUEJQaWdVNURudXRVWXAKcFR1ODhLeVRCTXd
UUWpxVUY1NjA1SGxlb2F1cWhxaGU5OUZMSXBhSEdCdGplQ3VyOTBEcXl3TkUz
Qm5ZVVVqMApUOFFFSFNFYmVHZVJTRmVWTmF3ODlpZmFJb2UrVFltTWtWbE
NpM0VOS1ZROVoyQStMeHE0OWUvSGJOdFZNQytZCkFHejJSQUpiTituSllqQWZ
zREdSUHcxTFNGUVZrN3NlS0lxUFRyUXdLMEJkUy81YUVjYlNSUytERUx0N2k0
M3oKQXY3djRBZkpjNE55OG5IdjVzMzIydWk4Wkk1R24xN1puWWFoRWgxM1JD
Vnh3bjZtK21RUnZhckZ6UT09Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtL
S0tLQo=
vim csr.yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
name: subhan
spec:
request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1
p6Q0NBVThDQVFBd0lqRVBNQTBHQTFVRUF3d0dVM1ZpYUdGdU1ROHdEUVlEV
lFRS0RBWm5jbTkxY0RFdw
signerName: kubernetes.io/kube-apiserver-client
usages:
- client auth
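The apply/approve commands themselves were screenshots; they would look like this:
kubectl apply -f csr.yaml
kubectl certificate approve subhan
The issued certificate can then be pulled from the CSR object (e.g. kubectl get csr subhan -o jsonpath='{.status.certificate}' | base64 -d > subhan.crt) and referenced from the kubeconfig.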
certificatesigningrequest.certificates.k8s.io/subhan created
certificatesigningrequest.certificates.k8s.io/subhan approved
vim role.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: default
name: pod-reader
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: read-pods
namespace: default
subjects:
- kind: User
name: subhan
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: pod-reader
apiGroup: rbac.authorization.k8s.io
#Bind a role to the user subhan, and make sure that the role allows the user to get, watch, list pods only
kubectl apply -f role.yaml
#Apply the yaml
role.rbac.authorization.k8s.io/pod-reader created
rolebinding.rbac.authorization.k8s.io/read-pods created
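A quick way to verify the binding (assumed check, not shown in the original notes):
kubectl auth can-i list pods --as subhan          #yes
kubectl auth can-i create deployments --as subhan #no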
→ As a new user has been created, the kubeconfig is updated as well to include the user:
- name: subhan
user:
client-certificate: /root/subhan.crt
client-key: /root/subhan.key
We can point kubectl at another kubeconfig file by setting the KUBECONFIG variable in the current shell:
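For example (the path is an assumption):
export KUBECONFIG=/root/dev-cluster.kubeconfig
kubectl get nodes   #Now uses the kubeconfig pointed to by the variable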
The API server is an API which listens to all the requests coming to Kubernetes; we can also interact with Kubernetes directly using the APIs:
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: test
name: test
spec:
containers:
- image: nginx
name: test
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
kubectl create deploy test --image=nginx --dry-run=client -oyaml
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: test
name: test
spec:
replicas: 1
selector:
matchLabels:
app: test
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: test
spec:
containers:
- image: nginx
name: nginx
resources: {}
status: {}
→ kind → to specify the type of object (e.g., Pod, Deployment).
→ apiVersion → tells Kubernetes which API group (e.g., apps) and version (e.g., v1) the object belongs to.
When sent to the Kubernetes API server, these objects are treated as resources (like pods, deployments, etc.).
The group/version/kind combination is used internally by Kubernetes (especially in the API server and client-go); it refers to the group (e.g., apps), version (e.g., v1) and kind (e.g., Deployment) of an object.
We group these Kubernetes objects into api groups like core group, apps group, etc,
and this concept is known as GVK / GVR:
GVK (Group version kind)
GVR (Group version resource)
Pod belongs to the core group, and core-group objects have an empty group name, hence we only write apiVersion: v1 when we want to create a Pod, whereas for a Deployment we write apps/v1 since it belongs to the apps API group.
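To see where the group, version and resource show up in practice, these are the REST paths the API server serves (illustrative sketch):
#Core-group ("") resources live under /api/<version>
/api/v1/namespaces/default/pods
#Named groups live under /apis/<group>/<version>
/apis/apps/v1/namespaces/default/deployments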
To work with the kubernetes api directly without kubeconfig and kubectl, we need a
token for the API server
#Create a clusterbinding role, bind the cluster-admin role to the service account
#Create and import the token for the subhan serviceaccount user
#List deployments
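The commands for those steps were screenshots in the original; a minimal sketch (the service account name and namespace are assumptions):
kubectl create serviceaccount subhan
kubectl create clusterrolebinding subhan-admin --clusterrole=cluster-admin --serviceaccount=default:subhan
TOKEN=$(kubectl create token subhan)
APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')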
curl -X GET $APISERVER/apis/apps/v1/namespaces/default/deployments -H "Authorization: Bearer $TOKEN" -k
kubectl proxy
curl localhost:8001/apis
#We can call the k8s API without any authorization using a URL like this, as long as the proxy is running
We can create kubernetes resources either through the imperative way (Kubectl
tool) or the Declarative way (Yaml files).
When we want to write multi-line strings in a YAML file, we can use either the literal block scalar (|) or the folded block scalar (>).
key: |
Hello guys
Hello
key: >
Hello guys
Hello
#Output for > will be
Hello guys Hello
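For comparison, the literal block scalar | keeps the line breaks:
#Output for | will be
Hello guys
Hello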
YAML separator → We can have separate YAML documents in a single YAML file, using the --- separator.
In production env, we usually do not create pods, we deploy our application using
higher level objects like deployment, replicaset, job, stateful set, daemon set.
Each pod has its own IP; it comes from the podCIDR range defined for the node, and we can check it using the below command:
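A sketch of the command (the node name is an assumption):
kubectl get node node01 -o yaml | grep podCIDR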
podCIDR: 192.168.1.0/24
spec.os.name → This is the specification that we define in the pod yaml, it is used to
tell if it should run on windows or linux.
Pod template → We define pod templates for deployment, stateful sets, etc.
kubectl create ns subhan-dev-env
What happens when we run kubectl run nginx --image=nginx; this is the high-level flow that is followed:
1. kubectl:
Sends a REST request to the Kubernetes API Server to create a Pod with the specified image.
2. API Server:
Writes the desired Pod spec to etcd (the cluster's key-value store).
3. Scheduler:
Selects a suitable Node to run the Pod based on resource availability and scheduling rules.
4. Kubelet:
Tells the Container Runtime (like containerd or Docker) to pull the nginx image and start the container.
5. Container Runtime:
Pulls the image (e.g., from Docker Hub) and runs the container.
6. Kubelet:
Monitors the running container and reports the Pod status back to the API server.
Pods
#To get more info in a single command, we can use different flags
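For example (the pod name is assumed):
kubectl get pods -o wide          #Adds node, IP and other columns
kubectl get pod nginx -o yaml     #Full object definition, as shown below
kubectl describe pod nginx        #Human-readable details plus events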
annotations:
cni.projectcalico.org/containerID: 37ae95072c8d897cc498006a80eb220ea8
027ddfe0467bb950db04f3aae67090
cni.projectcalico.org/podIP: 192.168.1.4/32
cni.projectcalico.org/podIPs: 192.168.1.4/32
creationTimestamp: "2025-04-25T14:38:31Z"
labels:
run: nginx
name: nginx
namespace: default
resourceVersion: "3302"
uid: b1152353-2a26-4fc4-aa39-8529ecc3b681
spec:
containers:
- image: nginx
imagePullPolicy: Always
name: nginx
ports:
- containerPort: 80
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-n52nq
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: node01
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
apiVersion: v1
kind: Pod
metadata:
name: example-pod
labels:
purpose: example-purposes
spec:
containers:
- name: example-container
image: ubuntu
command: ["/bin/echo", "Hello"]
args: ["Welcome to example"]
apiVersion: batch/v1
kind: Job
metadata:
name: example-job
spec:
template:
spec:
containers:
- name: nginx
image: nginx
command: ["sleep", "5"]
restartPolicy: OnFailure
backoffLimit: 4
Init containers → Specialized containers within a Pod that run before the main
application containers. They are used to perform setup tasks like configuration,
data preparation, or initialization, ensuring the main containers have the necessary
prerequisites before starting.
→ Always run to completion, we can have multiple init containers.
apiVersion: v1
kind: Pod
metadata:
name: init-container-example
spec:
volumes:
- name: shared-data
emptyDir: {}
initContainers:
- name: init-container
image: busybox
command: ["sh", "-c", "wget -O /usr/share/data/index.html https://google.co
m"]
volumeMounts:
- name: shared-data
mountPath: /usr/share/data
containers:
- name: nginx
image: nginx
volumeMounts:
- name: shared-data
mountPath: /usr/share/nginx/html
Explanation of volumes:
volumes:
- name: shared-data
emptyDir: {}
# emptyDir means a temporary directory that is shared among containers in the pod. It's empty when the pod starts and is deleted when the pod is removed.
initContainers:
- name: init-container
image: busybox
command: ["sh", "-c", "wget -O /usr/share/data/index.html https://google.co
m"]
volumeMounts:
- name: shared-data
mountPath: /usr/share/data
containers:
- name: nginx
image: nginx
volumeMounts:
- name: shared-data
mountPath: /usr/share/nginx/html
#Mounts the same volume but to /usr/share/nginx/html, so the index.html file present in the shared-data volume will be at /usr/share/nginx/html/index.html, which is the default content location for nginx
Multi-init containers:
apiVersion: v1
kind: Pod
metadata:
name: init-demo-2
spec:
initContainers:
- name: check-db-service
image: busybox
command: ['sh', '-c', 'until nslookup db.default.svc.cluster.local; do echo waiting for db service; sleep 2; done;']
- name: check-myservice
image: busybox
command: ['sh', '-c', 'until nslookup myservice.default.svc.cluster.local; do echo waiting for myservice; sleep 2; done;']
containers:
- name: main-container
image: busybox
command: ['sleep', '3600']
→ We are creating two init containers that are checking for the presence of two
services (db.default.svc.cluster.local and myservice.default.svc.cluster.local).
→ Unless the given services are present, the main container will not start.
---
apiVersion: v1
kind: Service
metadata:
name: db
spec:
selector:
app: demo1
ports:
- protocol: TCP
port: 3306
targetPort: 3306
---
apiVersion: v1
kind: Service
metadata:
name: myservice
spec:
selector:
app: demo2
ports:
- protocol: TCP
port: 80
targetPort: 80
→ Now when you do ‘kubectl get pods’ the main container will run.
Multi-container pod:
apiVersion: v1
kind: Pod
metadata:
name: multi-cont
spec:
volumes:
- name: shared-data
emptyDir: {}
containers:
- name: meminfo-container
image: alpine
command: ['sh', '-c', 'sleep 5; while true; do cat /proc/meminfo > /usr/share/data/index.html; sleep 10; done;']
volumeMounts:
- name: shared-data
mountPath: /usr/share/data
- name: nginx-container
image: nginx
volumeMounts:
- name: shared-data
mountPath: /usr/share/nginx/html
→ In this example it will become a nginx service, which will get the memory
information of the container, every 10 seconds (Command explanation below)
→ Init containers run to completion before the main application container starts. Unlike sidecar containers, init containers are not continuously running alongside the main containers.
→ Init containers do not support lifecycle, liveness probe, readiness probe, startup
probe, whereas sidecar support all these things.
→ Init containers are always retried until they succeed, or the Pod is deleted.
→ Even if the Pod’s restartPolicy is Never, init containers will be retried on failure.
→ You cannot configure an Init Container to keep running forever like a sidecar.
→ It's designed to complete its task and exit.
https://github.com/thockin/kubectl-sidecar/blob/main/example.yaml
curl http://localhost:8080/this-pod-status.json
curl http://localhost:8080/this-node-status.json
In Docker, a container can get a new IP upon container restart unless configured
otherwise.
In Kubernetes, the pod retains the same IP across container restarts, as
containers within the pod share the same network namespace.
The pause container is the "parent" container that holds the network namespace
for the pod and ensures the pod's IP is maintained throughout its lifecycle.
https://github.com/kubernetes/kubernetes/blob/master/build/pause/linux/pau
se.c
→ This is the original source of the Kubernetes pause container; it basically waits for POSIX signals such as SIGKILL, SIGTERM and SIGCHLD.
User namespaces → Feature in linux, that enhances the pod security by isolating
user and group IDs (UIDs and GIDs) within a pod's namespace from those on the
host.
→ So that in case of any container breakouts (vulns exploited), it will not affect the
host.
Pod disruption budget (PDB) → It is a resource which controls how many pods can
be down simultaneously during voluntary disruptions like node draining or
upgrades.
→ Used for HA.
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: nginx-pdb
spec:
minAvailable: 3
selector:
matchLabels:
app: nginx
→ We can see in real time how updates are affected by the PDB by immediately running kubectl get pods in a watch loop.
→ The PDB affects node drain as well; if we had not set the PDB, our pods would simply get evicted.
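The drain command that produced the output below (the node name is assumed; the original was a screenshot):
kubectl drain controlplane --ignore-daemonsets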
system/kube-proxy-rz6tx
evicting pod kube-system/coredns-7695687499-tdntw
evicting pod default/nginx-deployment-8d94c585f-cjbfm
evicting pod default/nginx-deployment-8d94c585f-d94js
evicting pod kube-system/coredns-7695687499-k49v6
error when evicting pods/"nginx-deployment-8d94c585f-cjbfm" -n "default" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
error when evicting pods/"nginx-deployment-8d94c585f-d94js" -n "default" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
Requests and Limits → In Kubernetes we can define the request / limit of a resource,
using the resources tag.
→ Request is the min resources required, and limit is the max resource an object can
take.
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
→ As a cluster operator, or as a namespace-level administrator, you might also be
concerned about making sure that a single object cannot monopolize all available
resources within a namespace, limitrange helps us achieve that.
apiVersion: v1
kind: LimitRange
metadata:
name: example-limitrange
namespace: default
spec:
limits:
- type: Pod
max:
cpu: "2"
memory: "1Gi"
min:
cpu: "200m"
memory: "100Mi"
- type: Container
max:
cpu: "1"
memory: "500Mi"
min:
cpu: "100m"
memory: "50Mi"
default:
cpu: "300m"
memory: "200Mi"
defaultRequest:
cpu: "200m"
memory: "100Mi"
→ Limit range defines the maximum minimum of an object, and we can also define
default and default request for each kind of object.
apiVersion: v1
kind: ResourceQuota
metadata:
name: object-counts
spec:
hard:
configmaps: "10"
persistentvolumeclaims: "4"
replicationcontrollers: "20"
secrets: "10"
services: "10"
→ When several users or teams share a cluster with a fixed number of nodes, there
is a concern that one team could use more than its fair share of resources, resource
quotas help us with this problem.
Pod Quality of Service classes → Kubernetes classifies the Pods that you run and
allocates each Pod into a specific quality of service (QoS) class.
→ Kubernetes uses that classification to influence how different pods are handled.
→ Kubernetes does this classification based on the resource requests of the containers in that Pod, along with how those requests relate to resource limits; this is known as the Quality of Service (QoS) class.
→ During Pod eviction, QoS matters the most: pods in the BestEffort QoS class are the most likely to be evicted, pods in the Burstable class are less likely to get evicted, and pods in the Guaranteed QoS class are the least likely to get evicted.
Downward API → A way for a container to get information about itself. There are two ways to expose Pod and container fields to a running container: environment variables, and files populated by a special volume type; together, these are called the Downward API.
apiVersion: v1
kind: Pod
metadata:
name: nginx-envars-fieldref
spec:
containers:
- name: nginx-container
image: nginx:latest
env:
- name: MY_NODE_NAME #We can get the node name using spec.nodeName
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name #We can get the pod name using metadata.name
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace #Get the namespace metadata
- name: MY_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP #Get the pod's IP address
- name: MY_POD_SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName #Get the service account name
restartPolicy: Never
→ To confirm that the container indeed has this information, we can print all the environment variables present inside it using:
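A sketch of that check:
kubectl exec nginx-envars-fieldref -- printenv | grep MY_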
Scheduling
Kubernetes Scheduler | Kubernetes
A scheduler watches for newly created Pods that have no Node assigned. For every
Pod that the scheduler discovers, the scheduler becomes responsible for finding the
best Node for that Pod to run on. Picks up a node to run the containers based on
certain conditions.
metadata:
labels:
environment: dev
purpose: test
selector:
matchLabels:
app: nginx
In K8s, equality-based selectors filter resources based on exact key-value pairs in labels, while set-based selectors allow more complex matching using operators like In, NotIn, and Exists on a single label.
selector:
matchLabels:
components: jenkins
matchExpressions:
- {key: tier, operator: In, values: [cache]}
- {key: environment, operator: NotIn, values: [dev]}
#Here matchLabels is an equality-based selector, matchExpressions is a set-based selector.
We can check the resource shorthands by taking a look into the api-resources
kubectl api-resources
Namespaces
Whatever we deploy is deployed in a namespace, it is a type of virtual boundary, it
provide a mechanism for isolating groups of resources within a single cluster. Names
of resources need to be unique within a namespace, but not across namespaces.
Whenever scheduler schedules, it does not know the namespaces, but we can make
the scheduler aware of the namespaces using limitranges, resource quotas, etc.
kubectl get ns
→ You can launch resources in a namespace with the -n flag (or --namespace flag):
controlplane:~$ kubectl exec -it nginx -n ns_name -- curl localhost
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
We can also change the namespace of the current context from the default namespace to another one, and then validate the change:
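A sketch of the commands (the namespace name is an assumption; the originals were screenshots):
kubectl config set-context --current --namespace=subhan-dev-env
#Validate the change
kubectl config view --minify | grep namespace: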
apiVersion: v1
kind: ResourceQuota
metadata:
name: ns-quota
namespace: core-ns
spec:
hard:
requests.cpu: "500m"
requests.memory: "200Gi"
limits.cpu: "1"
limits.memory: "400Gi"
pods: "10"
#Only 10 pods can be created within this namespace core-ns
→ To define the namespace of any resource directly in the yaml file, is through the
metadata.
apiVersion: v1
kind: Pod
metadata:
name: oversized-nginx
namespace: core-ns
spec:
containers:
- name: nginx
image: nginx
resources:
requests:
cpu: "600m"
memory: "500Mi"
limits:
cpu: "2"
memory: "500Mi"
→ If we try to apply the above oversized pod manifest, we will get the error:
requests.cpu=600m, used: limits.cpu=0,requests.cpu=0, limited: limits.cpu=1,requests.cpu=500m
#We instead need to use the specific object name, since "get all" is a shorthand that retrieves a subset of common resource types (like Pods, Services, Deployments, etc.), but it does not include custom or less-common types like Lease.
controlplane:~$ kubectl get lease -n kube-node-lease
NAME HOLDER AGE
controlplane controlplane 11d
node01 node01 11d
Labels: <none>
Annotations: <none>
API Version: coordination.k8s.io/v1
Kind: Lease
Metadata:
Creation Timestamp: 2025-04-28T12:36:39Z
Owner References:
API Version: v1
Kind: Node
Name: node01
UID: 5498aefa-37ee-4b5d-8ee2-5a6d75b2baa5
Resource Version: 6956
UID: dfc74481-a9eb-438f-b3bf-face2de84571
Spec:
Holder Identity: node01
Lease Duration Seconds: 40
Renew Time: 2025-05-09T17:44:01.719703Z #This here defines if a node is healthy
Events: <none>
→ Default: Kubernetes includes this namespace so that you can start using your new
cluster without first creating a namespace, the context is typically set to this
namespace.
→ Kube-node-lease: Explained above.
→ Kube-public: This namespace is readable by all clients (including those not
authenticated). This namespace is mostly reserved for cluster usage, in case that
some resources should be visible and readable publicly throughout the whole
cluster.
→ Kube-system: The namespace for objects created by the Kubernetes system.
Namespaces | Kubernetes
<service-name>.<namespace-name>.svc.cluster.local
Which means that if a container only uses <service-name>, it will resolve to the
service which is local to a namespace.
Node-selection in kube-scheduler
1. Filtering → The filtering step finds the set of Nodes where it's feasible to
schedule the Pod by removing the node that does not meet the criteria to run the
pod due to any of the filter reasons:
2. Scoring → In the scoring step, the scheduler ranks the remaining nodes to
choose the most suitable Pod placement. The scheduler assigns a score to each
Node that survived filtering, basing this score on the active scoring rules, kube-
scheduler then assigns the Pod to the Node with the highest ranking.
This filter happen through the plugins in kube-scheduler, we also have the
Scheduler Framework plugins in Kubernetes that are modular components that
extend and customize the default behavior of the scheduler.
→ Within the plugins we have some built - in ones: NodeResourcesFit,
InterPodAffinity, PodTopologySpread, Node affinity, etc.
Refer to the below repo for more:
https://github.com/kubernetes/community/blob/master/contributors/devel/sig-
scheduling/scheduler_framework_plugins.md
→ kubernetes/pkg/scheduler/framework/plugins at master · kubernetes/kubernetes
→ Scheduler checks if the scheduling gates are removed or not, if removed then pod
scheduling happens, if not then scheduling is stopped (gated).
→ This is done to reduce the load on the scheduler.
Scheduling gate → The schedulingGates field contains a list of strings, and each
string literal is perceived as a criterion that Pod should be satisfied before
considered schedulable.
controlplane:~$ cat podschedulereadiness.yaml
apiVersion: v1
kind: Pod
metadata:
name: demo-pod
spec:
schedulingGates:
- name: test
containers:
- name: busybox
image: busybox
# Due to scheduling gates being present, the pod will not get scheduled
Pod topology spread → You can use topology spread constraints to control how
Pods are spread across your cluster among failure-domains such as regions, zones,
nodes, and other user-defined topology domains. This can help to achieve high
availability.
→ Use case is that we have 3 nodes, and we want all the newly scheduled pods to
get spread across the entire cluster to achieve HA.
apiVersion: apps/v1
kind: Deployment
metadata:
name: topological-deploy
spec:
replicas: 4
selector:
matchLabels:
apps: demo-app
template:
metadata:
name: demo-pod
labels:
apps: demo-app
spec:
containers:
- name: app-container
image: nginx
topologySpreadConstraints:
- maxSkew: 1 #Degree to which the pods can be evenly distributed
whenUnsatisfiable: DoNotSchedule
topologyKey: "kubernetes.io/hostname"
labelSelector:
matchLabels:
apps: demo-app
---
→ Since we have set the skew to 1, even when we scale the number of replicas, it
works perfectly fine.
8.1.5 node01 <none> <none>
topological-deploy-7965ffdffc-kjqxp 1/1 Running 0 6s 192.168.1.6 node01 <none> <none>
topological-deploy-7965ffdffc-mf4gs 1/1 Running 0 6s 192.168.0.6 controlplane <none> <none>
topological-deploy-7965ffdffc-xbkc9 1/1 Running 0 7m45s 192.168.1.4 node01 <none> <none>
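Before the next scale-up, the control plane node was cordoned (the exact command was a screenshot; assumed):
kubectl cordon controlplane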
#Now control plane has become unschedulable, no further workload will be schedulable here
→ Thus if we keep scaling up the deployment, the new pods will go into the Pending state, since the maxSkew in the topology spread constraint is set to 1 and the control plane has been disabled for new workloads.
8.0.5 controlplane <none> <none>
topological-deploy-7965ffdffc-jlqsg 1/1 Running 0 13m 192.168.1.5 node01 <none> <none>
topological-deploy-7965ffdffc-kjqxp 1/1 Running 0 5m44s 192.168.1.6 node01 <none> <none>
topological-deploy-7965ffdffc-mf4gs 1/1 Running 0 5m44s 192.168.0.6 controlplane <none> <none>
topological-deploy-7965ffdffc-q27wn 1/1 Running 0 5s 192.168.1.7 node01 <none> <none>
topological-deploy-7965ffdffc-xbkc9 1/1 Running 0 13m 192.168.1.4 node01 <none> <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 4m26s default-scheduler 0/2 nodes are available: 1 node(s) didn't match pod topology spread constraints, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
Priority classes → A K8s object that defines how a pod is prioritized during preemption; the higher the priority value, the greater the priority.
→ We have two system priority classes; we can check them using:
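kubectl get priorityclass
#Lists the built-in system-cluster-critical and system-node-critical classes alongside any custom ones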
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
name: demo-priority
value: 1000000
globalDefault: false
description: "This priority class should be used higher priority."
---
apiVersion: v1
kind: Pod
metadata:
name: high-priority-pod
spec:
priorityClassName: demo-priority
containers:
- name: busybox
image: busybox
command: ["sleep", "3600"]
resources:
requests:
cpu: "300m"
memory: "300Mi"
Node selector -> nodeSelector is the simplest recommended form of node selection
constraint. You can add the nodeSelector field to your Pod specification and specify
the node labels you want the target node to have. Kubernetes only schedules the
Pod onto nodes that have each of the labels you specify.
Taints → Taints are opposite of node affinity; they allow a node to repel a set of
pods.
→ Taints can have the following effects: NoSchedule, PreferNoSchedule, NoExecute
Tolerations → Tolerations are applied to pods. Tolerations allow the scheduler to
schedule pods with matching taints
→ Taints and tolerations work together to ensure that pods are not scheduled onto
inappropriate nodes. One or more taints are applied to a node; this marks that the
node should not accept any pods that do not tolerate the taints.
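A small sketch of a taint and a matching toleration (the node name, key and value are assumptions):
kubectl taint nodes node01 dedicated=backend:NoSchedule
#Pods that should still land on node01 need a matching toleration in their spec:
tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "backend"
  effect: "NoSchedule"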
Replica Sets → Works the same as replication controller with advanced features,
maintains a set of pods to be running in replication at all times, rarely used directly,
usually managed with deployments as it does not support rollback & rolling updates.
RS demo:
→ Even if we create a new pod with the same labels, it will be counted as part of the ReplicaSet.
→ Suppose 3 pods are running from the ReplicaSet and we try to create another pod with the same label: it will terminate automatically. However, if we first create that pod with the same label and then apply the ReplicaSet YAML, only 2 pods will get created by the ReplicaSet.
https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/#:~:text=While you can,the previous sections.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: demo-rs
labels:
apps: rs-nginx
spec:
replicas: 3
selector:
matchLabels:
apps: rs-nginx
template:
metadata:
name: rs-container
labels:
apps: rs-nginx
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 80
Propagation policies → Policies that determine how child objects are handled once the parent object is deleted.
1. Foreground propagation policy → Child objects are deleted first, then the parent object is deleted.
2. Background propagation policy → The parent object is deleted first, then the child objects are deleted.
3. Orphan → Child objects are not deleted; they become orphaned objects.
We can experiment with these policies by interacting with the API server.
kubectl proxy --port=8080 #Useful to connect to the API server directly without needing to handle authorization
curl -X DELETE 'http://localhost:8080/apis/apps/v1/namespaces/default/replicasets/nginx-rs' \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
-H "Content-Type: application/json"
#The above request deletes the nginx-rs replicaset with the foreground propagation policy.
To get the sample yaml file as a template we can use the following command:
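The command itself was a screenshot; it would have been something like:
kubectl create deploy demo-deploy --image=nginx --port=80 --dry-run=client -oyaml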
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: demo-deploy
name: demo-deploy
spec:
replicas: 1
selector:
matchLabels:
app: demo-deploy
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: demo-deploy
spec:
containers:
- image: nginx
name: nginx
ports:
- containerPort: 80
resources: {}
status: {}
→ When we create a deployment, lower-level objects such as replica set & pods also
get created.
→ To increase the replicas we can use kubectl scale command
New Pods are created until the desired replica count is met.
Pods are created based on the Pod template defined in the Deployment spec.
Pods are deleted in reverse order of creation, i.e., the newest pods are deleted first, based on their metadata.creationTimestamp.
Deployment strategy → It specifies the strategy used to replace old Pods by new
ones. .spec.strategy.type can be "Recreate" or "RollingUpdate". "RollingUpdate" is
the default value.
Rolling update strat → Rolling deployments are the default k8s offering designed
to reduce downtime to the cluster. A rolling deployment replaces pods running
the old version of the application with the new version without downtime, You
can specify maxUnavailable and maxSurge to control the rolling update process.
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Recreate update strat → Recreating deployment terminates all the pods and
replaces them with the new version. This can be useful in situations where an old
and new version of the application cannot run at the same time.
Rollout history → Captures and shows the history of the changes made.
Rollout status → We can check the status of an ongoing rollout as well.
We can also roll back to an old rollout (the revision number can be taken from the rollout history):
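A sketch of those commands (the deployment name is assumed):
kubectl rollout history deployment demo-deploy
kubectl rollout status deployment demo-deploy
kubectl rollout undo deployment demo-deploy --to-revision=1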
Probes → Health checks the kubelet runs against containers (liveness, readiness and startup probes); an example liveness probe:
apiVersion: apps/v1
kind: Deployment
metadata:
name: demo-deploy
labels:
app: nginx-test
spec:
replicas: 1
selector:
matchLabels:
app: nginx-test
template:
metadata:
name: nginx
labels:
app: nginx-test
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
livenessProbe:
httpGet:
path: /
port: 80
Deployment strategies:
1. Blue - Green
2. Canary
3. AB test
https://blog.christianposta.com/deploy/blue-green-deployments-a-b-testing-and-canary-releases/
ConfigMaps → There are four ways to consume a ConfigMap in a Pod:
1. Inside container commands and args
2. Environment variables
3. As files in a volume mounted into the container
4. Write code to run inside the container that connects programmatically with the k8s API to read the configmap
Other than reading configmaps from a volume mount (one that does not use a subPath), you will need some sort of restart when you change a configmap; the kubelet behaviour for detecting such changes is configured via configMapAndSecretChangeDetectionStrategy.
https://kubernetes.io/docs/concepts/configuration/configmap/
We can also make the configmap immutable by setting the value to true in the config
map / secret
apiVersion: v1
kind: ConfigMap
metadata:
...
data:
...
immutable: true
→ We can then delete & recreate the configmaps, but we cannot update it.
apiVersion: v1
kind: ConfigMap
metadata:
name: test-cfm
data:
username: "subhan"
databaseName: "Test-db"
apiVersion: v1
kind: Pod
metadata:
name: mysql-db
labels:
purpose: db
spec:
containers:
- name: mysql-db
image: mysql:5.7
env:
- name: MYSQL_USER
valueFrom:
configMapKeyRef:
name: test-cfm
key: username
- name: MYSQL_DATABASE
valueFrom:
configMapKeyRef:
name: test-cfm
key: databaseName
- name: MYSQL_PASSWORD
value: demo@123
- name: MYSQL_ROOT_PASSWORD
value: demo@12345
ports:
- name: db-port
containerPort: 3386
volumeMounts:
- name: mysql-storage
  mountPath: /var/lib/mysql #Assumed mount path; the MySQL data directory
volumes:
- name: mysql-storage
emptyDir: {}
apiVersion: v1
kind: ConfigMap
metadata:
name: cfm-app-dev
data:
settings.properties: |
#Dev configuration
debug=true
database_url=http://dev-db.example.com
featureX_enabled=false
---
apiVersion: v1
kind: ConfigMap
metadata:
name: cfm-app-prod
data:
settings.properties: |
#Prod configuration
debug=false
database_url=http://prod-db.example.com
featureX_enabled=true
apiVersion: apps/v1
kind: Deployment
metadata:
name: web-app-demo
spec:
replicas: 1
selector:
matchLabels:
app: my-web-app
template:
metadata:
labels:
app: my-web-app
spec:
containers:
- name: web-app-container
image: nginx
ports:
- containerPort: 80
env:
- name: ENVIRONMENT
value: "development"
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
name: cfm-app-dev
Secrets → A Secret is an object that contains a small amount of sensitive data such
as a password, a token, or a key.
→ Using a Secret means that you don't need to include confidential data in your
application code.
→ Although their purpose is to store secrets, the secrets are not encrypted from anyone who has access to the cluster; to overcome this we can use Sealed Secrets (Sealed Secrets: Securely Storing Kubernetes Secrets in Git - Civo.com) or the Secrets Store CSI driver (https://github.com/kubernetes-sigs/secrets-store-csi-driver).
→ Another use case of secrets is in case of pulling images from a private registry.
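The Opaque secret shown below could have been created like this (assumed command; the original was a screenshot):
kubectl create secret generic my-opaque-secret --from-literal=password=admin@123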
apiVersion: v1
data:
password: YWRtaW5AMTIz
kind: Secret
metadata:
creationTimestamp: "2025-05-17T20:43:05Z"
name: my-opaque-secret
namespace: default
resourceVersion: "8993"
uid: 7aa0fa4e-ca39-4480-9a38-7e49584904ac
type: Opaque
→ In the above example the secret is only base64-encoded, which we can easily decode:
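echo 'YWRtaW5AMTIz' | base64 -d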
> admin@123
kubectl create secret docker-registry pullsec --docker-username ksab070 --docker-password $SECRET --docker-email [email protected]
apiVersion: v1
kind: Pod
metadata:
name: bootcamp-demo-pod
spec:
containers:
- name: bootcamp-demo
image: ksab070/bootcamp-demo:v1
imagePullSecrets:
- name: pullsec
However, stateful applications still are not very easy to run on K8s due to the below
reasons, which we have to setup on our own.
2. DC / DR
3. Backups
4. Monitoring
For this we can use operators such as below, which has the features readily
available.
CloudNativePG - PostgreSQL Operator for Kubernetes |
https://github.com/cloudnative-pg/cloudnative-pg
In StatefulSet the service that gets created is of type Headless (ClusterIP: None)
apiVersion: v1
kind: Service
metadata:
name: postgres
labels:
app: postgresql
spec:
clusterIP: None
ports:
- port: 5432
name: postgres-port
selector:
app: postgresql #Name of the postgre containers to be used as a selector
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres-stateful-set
spec:
serviceName: "postgres" #This comes from the above service definition
selector:
matchLabels:
app: postgresql
replicas: 3
template:
spec:
containers:
- name: postgresql
image: postgres:13
ports:
- containerPort: 5432
name: postgres-port
env:
- name: POSTGRES_PASSWORD
value: Subhan@123
volumeMounts:
- name: postgres-storage
mountPath: /var/lib/postgresql/data
metadata:
name: postgres-pod
labels:
app: postgresql
volumeClaimTemplates: #This creates a persistent volume claim, it is independent of the stateful set
- metadata:
name: postgres-storage
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi
DNS → When you create a Service, it creates a corresponding DNS entry. This entry
is of the form
<service-name>.<namespace-name>.svc.cluster.local
Which means that if a container only uses <service-name>, it will resolve to the
service which is local to a namespace.
→ DNS for Services and Pods | Kubernetes
This DNS functionality comes from CoreDNS, whose service IP is either 10.32.0.10 or 10.96.0.10 (depending on the cluster's default service CIDR).
When using a StatefulSet with a headless service, the DNS entries for the pods are
in the form:
<pod-name>.<headless-service-name>.<namespace>.svc.cluster.local
1. postgres-stateful-set-0.postgres.default.svc.cluster.local
2. postgres-stateful-set-1.postgres.default.svc.cluster.local
3. postgres-stateful-set-2.postgres.default.svc.cluster.local
→ We can verify this by going inside the container and doing nslookup and match
the output with the kubectl command output
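A sketch of that check, exec'ing into the first replica (the image must have nslookup available):
kubectl exec -it postgres-stateful-set-0 -- nslookup postgres-stateful-set-0.postgres.default.svc.cluster.local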
Name: postgres-stateful-set-0.postgres.default.svc.cluster.local
Address: 192.168.1.9
NOTE: We do not need to add coredns ip in the /etc/resolv.conf file of the container.
since kubernetes adds these DNS settings at the time of container creation.
Services are kubernetes objects used for exposing a network application that is
running as one or more Pods in your cluster.
→ They can be of the following types
1. Nodeport
2. Load balancer
3. ClusterIP
4. Headless
5. External Names
External DNS → Not an actual Service type per se, but similar in concept.
Services
kubectl get svc
#OR
→ A Service's IP address is always static, but pod IPs are ephemeral.
→ Services work by updating the iptables rules on the nodes (checking iptables is a good way to troubleshoot prod issues).
→ Pause container holds the network namespaces of the containers in k8s (We can
check this using the commands given below)
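For example, on the node where the pod is scheduled:
lsns   #Lists namespaces and the processes holding them, including /pause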
NS TYPE NPROCS PID USER COMMAND
4026531834 time 148 1 root /sbin/init
4026531837 user 148 1 root /sbin/init
4026532359 net 3 4784 65535 /pause
4026532568 uts 3 4784 65535 /pause
4026532569 ipc 3 4784 65535 /pause
4026532571 mnt 2 4842 root nginx: master process nginx -g daemon off;
4026532572 pid 2 4842 root nginx: master process nginx -g daemon off;
4026532573 cgroup 2 4842 root nginx: master process nginx -g daemon off;
#Note the if9 for eth0; now to prove that it is connected to the container, run the below command to get the network interfaces inside the container, where we can see that the eth0 pair is the same if9 (on the node) and eth0@if9 (within the container)
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 6a:d6:37:36:9b:69 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.1.4/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::68d6:37ff:fe36:9b69/64 scope link
valid_lft forever preferred_lft forever
In case of intra-node pod communication (multiple pods within the same node), two different veth (virtual ethernet) pairs talk to each other.
→ For any communication from Pod A to Pod B, traffic goes to the pod's eth0 interface > its veth pair acts as a tunnel into the root namespace, where the bridge resolves the destination address using the ARP (Address Resolution Protocol) table.
→ For inter-node pod communication we instead use the node routing table and default gateway:
controlplane:~$ route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 172.30.1.1 0.0.0.0 UG 1002 0 0 enp1s0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.30.1.0 0.0.0.0 255.255.255.0 U 1002 0 0 enp1s0
192.168.0.2 0.0.0.0 255.255.255.255 UH 0 0 0 cali57beba5f60a
192.168.0.3 0.0.0.0 255.255.255.255 UH 0 0 0 cali328b5d9f5c2
192.168.1.0 192.168.1.0 255.255.255.0 UG 0 0 0 flannel.1
targetPort: 80
selector:
run: nginx
status:
loadBalancer: {}
ClusterIP
It is the default Kubernetes Service type: it exposes the Service on a cluster-internal IP that is only reachable from within the cluster.
We should not run stateful application using deployments, as the pods in deployment
do not have a unique network identity, Deployments are designed for stateless
applications, where pods are interchangeable and any instance can handle any
request, it assign pods random hashes, making them interchangeable.
→ Instead, we should use another Kubernetes object known as a stateful set.
NodePort
→ K8s service that exposes the Service on each Node's IP at a static port (the
NodePort). To make the node port available, Kubernetes sets up a cluster IP address,
the same as if you had requested a Service of type: ClusterIP.
→ Type of service which should be used for testing purposes.
→ The Kubernetes control plane allocates a port from the range specified by the --service-node-port-range flag (default: 30000-32767).
→ Each node proxies that port (the same port number on every Node) into your
Service, it enables external access without a load balancer
We can create the service either by the imperative way or through yaml files
(declarative way)
→ YAML
apiVersion: v1
kind: Service
metadata:
labels:
app: nginx
name: nodeport-demo
spec:
type: NodePort
ports:
- port: 80 #This is the port we want to expose on the nodes
protocol: TCP
targetPort: 80 #This is the port on our container
selector:
app: nginx
Now that our nodeport service is running, we can do the below command on any of
the node to get response from the nginx
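A sketch (the node IP and the allocated nodePort are assumptions; the original was a screenshot):
kubectl get svc nodeport-demo   #Note the allocated port, e.g. 80:31080/TCP
curl http://172.30.1.2:31080     #Any node IP plus the NodePort returns the nginx welcome page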
LoadBalancer
→ A K8s service type that exposes the Service externally using an external load
balancer. Kubernetes does not directly offer a load balancing component; you must
provide one, or you can integrate your Kubernetes cluster with a cloud provider.
→ On cloud providers which support external load balancers, setting the type field to
LoadBalancer provisions a load balancer for your Service.
→ K8s communicate with the cloud provider via Cloud Controller Manager
component of ControlPlane - Cloud Controller Manager | Kubernetes and the
following things happen (Example: AWS).
In Prod systems LoadBalancer service do not get created very often, due to being
expensive, we usually use Ingress.
ExternalName
→ It is a K8s service type that maps a service to a DNS name.
→ The use case of this type of service is when we want to talk to any service that is
outside the kubernetes cluster, for e.g. an external database.
→ The flow will go like this: 1. Create an app 2. Create External Name service 3. App
will now be able to access the external service (DB, etc).
→ Another use case to use this, is to communicate with services across
namespaces.
git clone https://github.com/saiyam1814/Kubernetes-hindi-bootcamp.git
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-database
namespace: database-ns
spec:
replicas: 1
selector:
matchLabels:
app: my-database
template:
metadata:
labels:
app: my-database
spec:
containers:
- name: database
image: postgres:latest
env:
- name: POSTGRES_PASSWORD
value: "example"
ports:
- containerPort: 5432
apiVersion: v1
kind: Service
metadata:
name: my-database-service
namespace: database-ns
spec:
selector:
app: my-database
ports:
- protocol: TCP
port: 5432
targetPort: 5432
apiVersion: v1
kind: Service
metadata:
name: external-db-service
namespace: application-ns
spec:
type: ExternalName
externalName: my-database-service.database-ns.svc.cluster.local
ports:
- port: 5432
→ Build the app and push it to the ttl.sh registry (ephemeral registry)
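A sketch of the build and push (the image name is taken from the pod manifest below; the 1h tag is the ttl.sh expiry):
docker build -t ttl.sh/saiyamdemo:1h .
docker push ttl.sh/saiyamdemo:1h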
apiVersion: v1
kind: Pod
metadata:
name: my-application
namespace: application-ns
spec:
containers:
- name: app
image: ttl.sh/saiyamdemo:1h
env:
- name: DATABASE_HOST
value: "external-db-service" #Here we are giving the name of the external
name service
- name: DATABASE_PORT
value: "5432"
- name: DATABASE_USER
value: "postgres"
- name: DATABASE_PASSWORD
value: "example"
- name: DATABASE_NAME
value: "postgres"
Ingress
Kubernetes object that manages external access to services within a cluster, it is
native to kubernetes
→ It provides routing rules to direct traffic to the appropriate services based on the
ingress configuration.
→ Single point of external access
→ To use ingress, we need ingress controller that implement ingress in their own
way. For e.g. Nginx ingress controller, Traefik, etc.
2. Create ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: bootcamp
spec:
ingressClassName: nginx #We are specifying that we will be using the nginx ingress controller; ingressClassName allows you to specify which Ingress controller should handle a given Ingress resource
rules:
- host: "kubernetes.hindi.bootcamp" #This should be a valid host
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nginx-service #Must have this service running
port:
number: 80
- path: /public
pathType: Prefix
backend:
service:
name: nginx-service #Must have this service running
port:
number: 80
→ Now we can access the service using the hostname, and it also provides path-based routing, but we need an ingress controller for external access as it works as a reverse proxy; we can install it using the below:
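For example, the ingress-nginx manifests can be applied directly (the version in the URL is an assumption; check the ingress-nginx docs for the current one):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/cloud/deploy.yaml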