Unit 5 DAK

Kubernetes patterns provide reusable solutions for building cloud-native applications and services, helping developers avoid trial and error in system design. The document outlines various types of patterns, including foundational, behavioral, structural, configuration, and advanced patterns, which guide developers in utilizing Kubernetes effectively. Additionally, it explains the roles of Kubernetes components such as nodes, pods, services, and replication controllers in managing containerized applications.

Uploaded by Kalaiyarasi.A

KUBERNETES

A pattern describes a repeatable solution to a problem. Kubernetes patterns are design patterns for container-based applications and services.

Kubernetes can help developers write cloud-native apps, and it provides a library of application programming interfaces (APIs) and tools for building applications. However, Kubernetes doesn’t provide developers and architects with guidelines for how to use these pieces to build a complete system that meets business needs and goals.

Patterns are a way to reuse architectures. Instead of creating the architecture entirely yourself, you can use existing Kubernetes patterns, which also ensure that things will work the way they’re supposed to. When you are trying to deliver important business services on top of Kubernetes, learning through trial and error is too time-consuming and can result in problems like downtime and disruption.

Think of a pattern like a blueprint; it shows you the way to solve a whole class of similar problems. A pattern is more than just step-by-step instructions for fixing one specific problem. Using a pattern may result in somewhat different outcomes; patterns aren’t meant to produce identical solutions. Your system may look different from someone else’s built with the same pattern, but both systems will share common characteristics.

By using Kubernetes patterns, developers can create cloud-native apps with Kubernetes as a runtime platform.

Types of Kubernetes patterns

Patterns are the tools of a Kubernetes developer, and they show you how to build your system.

 Foundational patterns cover the core concepts of Kubernetes. These patterns are the underlying principles and practices for building container-based cloud-native applications.

 Behavioral patterns sit on top of foundational patterns and add granularity to concepts for managing various types of container and platform interactions.

 Structural patterns are related to organizing containers within a Kubernetes pod.

 Configuration patterns cover the various ways application configuration can be handled in Kubernetes. These patterns include the specific steps for connecting applications to their configuration.

 Advanced patterns include advanced concepts, such as how the platform itself can be extended or how to build container images directly within the cluster.

KUBERNETES DOCKER CONTAINERS

Kubernetes + Docker: How They Work Together

🔹 What is Docker?

 Docker is a containerization platform that packages your application and its dependencies into a portable image.

 You use Docker to build and run container images.

🔹 What is Kubernetes?

 Kubernetes is a container orchestrator: it doesn’t build containers, but it runs and manages them at scale.

 It uses container runtimes (such as Docker, containerd, or CRI-O) to launch containers.

🔧 Kubernetes Running Docker Containers (Basic Flow)

1. Write a Dockerfile

FROM node:18
COPY . /app
WORKDIR /app
RUN npm install
CMD ["npm", "start"]

2. Build the Docker Image

docker build -t myapp:latest .

3. Push to a Registry

docker tag myapp:latest myrepo/myapp:latest
docker push myrepo/myapp:latest

4. Deploy to Kubernetes

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myrepo/myapp:latest
        ports:
        - containerPort: 3000

5. Expose via a Service

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
  type: LoadBalancer

Key Kubernetes terms:

Node: A machine (VM or physical) that runs pods. Each cluster has one or more nodes.

Cluster: A group of nodes managed by the Kubernetes control plane.

Pod: The smallest deployable unit in K8s. A pod runs one or more containers.

Service: A stable network endpoint that routes traffic to pods (via label selectors).

Replication Controller: Ensures the specified number of pod replicas are running. Mostly replaced by Deployments, but still relevant in legacy setups.

Label: Key-value pairs attached to objects (like pods) for organization and selection.

Selector: A query to match labels on objects (e.g., which pods a service should route to).

Name: A unique identifier for Kubernetes objects within a namespace.

Namespace: A virtual cluster inside a cluster, used to separate resources (like for multi-tenant systems or dev vs prod environments).

NODES IN KUBERNETES

Nodes are a fundamental part of the Kubernetes architecture.

🧱 What is a Node in Kubernetes?

A Node is a physical or virtual machine in your Kubernetes cluster. It's where your containers (via Pods) actually run.

Each node contains:

 A container runtime (e.g., containerd; historically Docker)

 Kubelet (agent that talks to the control plane)

 Kube-proxy (handles networking for Pods on the node)

 Any necessary system services (like logging or monitoring agents)


📦 Node Types

Master Node (Control Plane): Manages the cluster: API server, scheduler, controller-manager, etcd.

Worker Node: Runs your applications inside Pods. Most of the actual "work" happens here.

In managed services like GKE, EKS, or AKS, you often only worry about worker nodes; the control plane is managed for you.

What’s Inside a Node?

Here’s a simplified layout:

[ Node ]

├─ kubelet - Talks to the Kubernetes control plane

├─ container runtime - Runs containers (Docker, containerd)

├─ kube-proxy - Manages networking rules for services/pods

└─ pods - Run app containers

🔄 How a Node Joins a Cluster

1. Install Kubernetes components (e.g., via kubeadm or cloud provisioning).

2. The node registers with the Kubernetes API Server.

3. It becomes schedulable, meaning the control plane can assign Pods to it.

🔍 Checking Node Info

Run this command to list your nodes:

kubectl get nodes


Get details about a specific node:

kubectl describe node <node-name>

📊 Node Status

Each node reports its status to the control plane, including:

 Ready / NotReady status

 Resource usage (CPU, memory)

 Labels and taints

 Kubelet version

 Allocatable capacity

🧠 Bonus: Node Labels and Taints

 Labels: Used to group or filter nodes. Example:

 kubectl label node node-1 disktype=ssd

 Taints: Prevent pods from being scheduled unless they tolerate the
taint. Great for reserving nodes for specific workloads.
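As a hedged sketch of how a taint and a toleration pair up (the node name, key/value, and image below are illustrative, not from the document): a node could be tainted with kubectl taint nodes node-1 dedicated=gpu:NoSchedule, and only Pods carrying a matching toleration would then be scheduled onto it.

```
# Illustrative Pod that tolerates the example taint above.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-job
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
  containers:
  - name: worker
    image: myrepo/gpu-worker:latest   # hypothetical image
```

Without the tolerations block, the scheduler would skip the tainted node for this Pod.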

CLUSTER IN KUBERNETES

🌐 What is a Kubernetes Cluster?

A Kubernetes cluster is a set of machines (nodes) that run your containerized applications. It includes:

 A Control Plane (master components)

 Worker Nodes (where apps actually run)

Kubernetes manages the deployment, scaling, networking, and lifecycle of your containers across this cluster.

🧠 Core Concept
Think of a cluster like a brain + body system:

 🧠 Control Plane = the brain (decides what to do)

 💪 Worker Nodes = the muscles (do the work)

Your containers (wrapped in Pods) live on the worker nodes, but are
controlled by the control plane.

🧩 Cluster Components Overview

🧠 Control Plane Components

kube-apiserver: The front door to your cluster (handles all kubectl requests)

etcd: Key-value store for all cluster data (like a database for configs)

kube-scheduler: Decides where pods should run

controller-manager: Handles background tasks like scaling and failure recovery

cloud-controller-manager (optional): Manages cloud-specific resources like load balancers

⚙️ Worker Node Components

kubelet: Talks to the API server and runs the containers

kube-proxy: Manages networking (IP rules, service routing)

Container runtime: Pulls and runs containers (e.g., containerd, Docker)

🎯 What a Cluster Does

✅ Schedules pods (runs your containers)


✅ Handles service discovery & load balancing
✅ Reschedules failed containers
✅ Manages resource usage (CPU, memory)
✅ Supports auto-scaling
✅ Secures and isolates workloads

🚀 Real-World Cluster Example

 You deploy an app using a YAML file.

 The API Server accepts the request.

 The Scheduler picks a node to run your Pod.

 The Kubelet on that node starts the Pod using the container runtime.

 The Service connects users to your Pod, no matter where it lives in the
cluster.

🧪 Check Your Cluster

kubectl cluster-info

Check connected nodes:

kubectl get nodes

Managed Kubernetes Clusters

Instead of setting everything up manually, many teams use cloud-managed clusters, like:

 GKE (Google Kubernetes Engine)

 EKS (Amazon Elastic Kubernetes Service)

 AKS (Azure Kubernetes Service)

 OpenShift (Red Hat’s enterprise Kubernetes)

These services handle control plane ops (patching, scaling, availability) for
you.


KUBERNETES SERVICES

Services are one of the most important pieces of Kubernetes networking, handling communication between pods.

🌐 What is a Kubernetes Service?

A Service in Kubernetes is an abstraction layer that provides a stable IP and DNS name to access a group of Pods.

Even when Pods come and go (due to scaling or restarts), the Service
endpoint stays the same, making it much easier to connect to your apps.

🔗 Why Use a Service?

Pods have dynamic IPs — if you try to access a Pod directly, its IP might
change. A Service solves this by:

 Providing a stable entry point.

 Load balancing traffic across all matching Pods (using labels/selectors).

 Enabling communication inside or outside the cluster.

🧱 Key Types of Services

ClusterIP (default): Internal-only service; accessible from inside the cluster.

NodePort: Exposes the service on each node's IP + port; accessible externally via Node IP + port.

LoadBalancer: Exposes the service via a cloud provider load balancer; accessible externally (public IP).

ExternalName: Maps the service to an external DNS name; points outside K8s (like a DNS alias).

📦 Example YAML: ClusterIP Service

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80         # exposed port
    targetPort: 8080 # container port
  type: ClusterIP

This service will route traffic to any Pod with app: myapp in its labels,
forwarding from port 80 to 8080 inside the Pods.
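For comparison, here is a NodePort variant of the same Service, as a sketch (the nodePort value is illustrative; by default it must fall in the 30000-32767 range):

```
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080   # illustrative; default allowed range is 30000-32767
  type: NodePort
```

With this type, the app becomes reachable at <any-node-IP>:30080 from outside the cluster, without a cloud load balancer.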

📦 Example: LoadBalancer Service (for cloud)

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
  type: LoadBalancer

In GKE, EKS, AKS, etc., this will provision an actual cloud load balancer with a
public IP.

📡 Service Discovery

 Services get a DNS entry in the form:

<service-name>.<namespace>.svc.cluster.local

 Pods can connect using just the service name if in the same namespace:

curl http://myapp-service
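The naming scheme above can be sketched in shell (the service and namespace names are placeholders, not from a real cluster):

```shell
# Compose the in-cluster FQDN for a Service from its name and namespace.
service="myapp-service"    # placeholder Service name
namespace="dev"            # placeholder namespace
fqdn="${service}.${namespace}.svc.cluster.local"
echo "$fqdn"               # prints: myapp-service.dev.svc.cluster.local
```

Within the same namespace, the short name (myapp-service) also resolves, because the Pod's DNS search path appends the namespace suffixes automatically.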

🧠 Bonus: Headless Services

If you want direct Pod-to-Pod communication without a load balancer, you can use a headless service:

spec:
  clusterIP: None

Useful for StatefulSets, databases (like Cassandra, Redis, etc).
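As an illustrative sketch (the name and port assume a Redis example, which the document only mentions in passing), a complete headless Service manifest could look like:

```
apiVersion: v1
kind: Service
metadata:
  name: redis-headless
spec:
  clusterIP: None      # headless: DNS returns the individual Pod IPs
  selector:
    app: redis
  ports:
  - port: 6379
    targetPort: 6379
```

A DNS lookup of redis-headless then returns the Pod IPs directly instead of a single virtual IP, which is what StatefulSets rely on for stable per-Pod addresses.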

🔍 Check Service Info

kubectl get svc

kubectl describe svc myapp-service


KUBERNETES PODS

Pods are the smallest deployable unit in Kubernetes.

🧱 What is a Pod in Kubernetes?

A Pod is the basic building block of Kubernetes — it's the smallest unit
you can create or manage.

A Pod:

 Wraps one or more containers (usually one)

 Shares network, storage, and lifecycle

 Runs your actual app (like a Node.js server, Python app, Redis instance,
etc.)

🔥 Think of a Pod as a single "instance" of your app.

📦 What's Inside a Pod?

[ Pod ]

├─ Container(s) 🐳

├─ Shared IP Address

├─ Shared Volumes (if any)

└─ Pod-level configs (like env vars, secrets)

Typical Use Cases:

 Run a single containerized app (most common)

 Run multiple tightly-coupled containers (e.g., main app + sidecar for logging or proxy)

📘 Example: Basic Pod YAML

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: myrepo/myapp:latest
    ports:
    - containerPort: 8080

You’d run this using:

kubectl apply -f pod.yaml

🔧 Managing Pods

kubectl get pods: List all Pods

kubectl describe pod <name>: Detailed info

kubectl logs <name>: Get logs from the container

kubectl exec -it <name> -- /bin/sh: Access the container terminal

🔄 Pod Lifecycle

Pods go through phases:

 Pending: Scheduled, waiting to be created

 Running: Containers are up

 Succeeded / Failed: Containers exited

 CrashLoopBackOff: Container keeps crashing


Pods are ephemeral — if a node dies, so do the Pods on it (unless managed
by a controller like a Deployment).

Important Notes

 Pods aren’t meant to be created directly in production.

o Use Deployments, StatefulSets, or Jobs to manage Pods.

 Labels on Pods are crucial for connecting them to Services.

🚀 Real World Example

Say you want to run a Node.js app with Redis in the same Pod (not typical,
but for tightly-coupled use cases):

spec:
  containers:
  - name: node-app
    image: node:18
  - name: redis
    image: redis

These containers:

 Share the same localhost network

 Communicate via localhost:<port>

 Start and stop together

REPLICATION CONTROLLERS

ReplicationControllers are one of the earlier tools in Kubernetes for managing Pods.

🔁 What is a ReplicationController?

A ReplicationController (RC) is a Kubernetes object that ensures a specified number of identical Pods are always running. If a Pod dies, it spins up a new one. If too many are running, it removes extras.

⚠️ Note: ReplicationControllers are largely replaced by Deployments in modern Kubernetes. But understanding RCs is still useful, especially for legacy clusters or learning the basics of Pod management.

🧱 What Does It Do?

 Maintains Pod count

 Automatically restarts Pods if they crash

 Ensures high availability

📦 ReplicationController YAML Example

apiVersion: v1
kind: ReplicationController
metadata:
  name: myapp-rc
spec:
  replicas: 3
  selector:
    app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myrepo/myapp:latest
        ports:
        - containerPort: 8080

Breakdown:

 replicas: 3: Maintain 3 running Pods at all times

 selector: Match existing Pods with app: myapp

 template: Defines the Pod to create

⚙️How It Works

1. You create a ReplicationController.

2. It checks how many matching Pods are running.

3. It creates or deletes Pods to match the desired replica count.

📋 Commands to Use

kubectl get rc: List ReplicationControllers

kubectl describe rc <name>: Show details

kubectl delete rc <name>: Remove it and its managed Pods

kubectl scale rc <name> --replicas=5: Change the desired count

🧠 When to Use RC vs Deployment

Supports updates: ReplicationController ❌ Manual | Deployment ✅ Rolling updates

Easy rollback: ReplicationController ❌ No | Deployment ✅ Yes

Recommended: ReplicationController ❌ Legacy | Deployment ✅ Yes

✅ Use Deployments for most real-world scenarios.

📈 Visual Flow

ReplicationController

Watches matching Pods

Ensures 'replica' count is met

Creates/deletes Pods as needed

CORE KUBERNETES CONCEPTS

The sections below break down several related concepts: selectors, names, namespaces, volumes, and the service proxy.

🧲 Selector

A selector is how Kubernetes matches labels on objects like Pods, so that it knows which resources to target.

📌 Example:

If a Service or Deployment uses:

selector:
  app: myapp

It will only match resources that have:

labels:
  app: myapp
🎯 Types of selectors:

 Equality-based: app=myapp

 Set-based:

matchExpressions:
- key: tier
  operator: In
  values:
  - backend
  - frontend
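As a hedged sketch (the names and image are illustrative), a Deployment can combine matchLabels and matchExpressions in its selector; note that the Pod template's labels must satisfy the whole selector:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
    matchExpressions:
    - key: tier
      operator: In
      values:
      - backend
      - frontend
  template:
    metadata:
      labels:
        app: myapp      # satisfies matchLabels
        tier: backend   # satisfies the In expression
    spec:
      containers:
      - name: myapp
        image: myrepo/myapp:latest
```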

🆔 Name

Every object in Kubernetes has a name (it must be unique within a namespace). It's how you refer to individual resources.

Examples:

 Pod name: myapp-pod

 Service name: backend-service

You access them via:

kubectl get pod myapp-pod

📛 Namespace

A namespace is like a virtual cluster inside your Kubernetes cluster. It helps with organizing, isolating, and managing resources.

Default namespaces:

default: The default space for resources

kube-system: System components like DNS and kube-proxy

kube-public: Publicly readable (usually empty)

kube-node-lease: Node heartbeat tracking

Use Case:

Separate environments:

 dev

 test

 prod

kubectl create namespace dev

kubectl get pods --namespace=dev

💾 Volume

A volume in Kubernetes is a directory accessible to containers in a Pod, used to store data across restarts or share files between containers.

Types include:

 emptyDir (temporary storage)

 hostPath (local disk on the node)

 persistentVolumeClaim (for external storage)

 Cloud volumes (AWS EBS, GCP PD)

Example:

volumes:
- name: app-storage
  emptyDir: {}

containers:
- name: app
  volumeMounts:
  - mountPath: /data
    name: app-storage
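Placing that fragment in a complete Pod manifest gives a runnable sketch (the busybox image and the command are illustrative additions, not from the document):

```
apiVersion: v1
kind: Pod
metadata:
  name: app-with-storage
spec:
  volumes:
  - name: app-storage
    emptyDir: {}        # scratch space that lives as long as the Pod
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "echo hello > /data/out.txt && sleep 3600"]
    volumeMounts:
    - mountPath: /data
      name: app-storage
```

Because it is an emptyDir, the data survives container restarts within the Pod but is deleted when the Pod itself is removed.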

🔁 Service Proxy (kube-proxy)

kube-proxy is a network component that:

 Runs on each node

 Manages network rules

 Forwards traffic to the correct Pod behind a Service

It ensures that when you access a Service IP, the request is load-balanced and sent to one of the healthy Pods.

How it works:

1. You send a request to the Service IP.

2. kube-proxy intercepts and sends it to a selected Pod.

3. That Pod serves the request.

If you're using iptables or IPVS mode, kube-proxy sets up rules to forward traffic efficiently.

🚀 Bringing It All Together

Here's a mini-concept example combining them all:

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  namespace: dev
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080

This:

 Is named myapp-service

 Lives in the dev namespace

 Uses a selector to find Pods with app: myapp

 Routes requests from port 80 to the actual container's port 8080

 kube-proxy handles the traffic routing behind the scenes

To list services in your Kubernetes cluster, you can use the kubectl command. Here’s how to list and explore Services:

📋 List Services in the Current Namespace

Run the following command to see all the Services in the default namespace or the currently selected namespace:

kubectl get svc

This will display a list of all Services with details like:

 Name

 ClusterIP

 Ports

 Age (how long they've been running)

Example output:

NAME TYPE CLUSTER-IP PORT(S) AGE

myapp-service ClusterIP 10.96.0.1 80/TCP 10d

List Services in a Specific Namespace


If you want to list the Services in a specific namespace, you
can specify it like this:

kubectl get svc --namespace=your-namespace

Example:

kubectl get svc --namespace=dev

📄 Get Detailed Information About a Specific Service

If you want to see more details about a particular Service, such as its spec, labels, and selector, use:

kubectl describe svc <service-name>

Example:

kubectl describe svc myapp-service

This will give you a detailed breakdown, including:

 Selector: Which Pods the Service targets

 Port information: The exposed port and target port

 Endpoints: The Pods that are part of the Service

🧑‍💻 List Services in All Namespaces

To list Services across all namespaces in your cluster:

kubectl get svc --all-namespaces

This will show Services from every namespace and their associated details.

To list the nodes in your Kubernetes cluster, you can use the
kubectl get nodes command. Here’s how you do it:

📋 List All Nodes


Run this command to see a list of all nodes in your Kubernetes
cluster:

kubectl get nodes

Example output:

NAME STATUS ROLES AGE VERSION

node1 Ready <none> 10d v1.22.3

node2 Ready <none> 10d v1.22.3

This will display information about each node, such as:

 Name: The node’s name (usually the machine or VM name)

 Status: Whether the node is Ready, NotReady, or in another state

 Roles: The roles assigned to the node (e.g., master, worker)

 Age: How long the node has been in the cluster

 Version: The Kubernetes version running on the node

📄 Get Detailed Information About a Specific Node

If you want to get more detailed information about a specific node, use:

kubectl describe node <node-name>

Example:

kubectl describe node node1

This will give you detailed information like:

 Capacity: CPU, memory, and storage resources

 Allocatable resources: How much CPU, memory, etc., is available for Pods

 Conditions: The health and status of the node (e.g., Ready, OutOfDisk)

 Pod information: Pods running on this node


🧑‍💻 List Nodes in Specific Context or Namespace (if you have
multiple clusters)

If you're managing multiple clusters or using a specific context, you can set the desired context before running kubectl get nodes:

kubectl config use-context <your-context>

kubectl get nodes

To set up a Kubernetes cluster with WordPress and explore scaling and testing, we'll walk through the process step by step.

🧑‍💻 Steps for Setting Up a Kubernetes Cluster with WordPress

We’ll use K3s (lightweight Kubernetes) or Minikube for local development, but these steps will work for any Kubernetes cluster provider (e.g., GKE, EKS, AKS).

🔧 1. Set Up Kubernetes Cluster

You can create a cluster on Minikube, K3s, or a cloud provider like Google Kubernetes Engine (GKE). If you already have a Kubernetes cluster set up, you can skip this step.

For Minikube (local Kubernetes):

minikube start

For K3s (lightweight, easy Kubernetes):

curl -sfL https://get.k3s.io | sh -

🐳 2. Deploy WordPress to Kubernetes

WordPress typically needs a database (MySQL) and a web container to run. Kubernetes makes this easier using a Deployment for WordPress and a PersistentVolume for storage.
Here’s a WordPress Deployment with a MySQL database:

Create a WordPress Deployment YAML (wordpress-deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress:latest
        ports:
        - containerPort: 80
        env:
        - name: WORDPRESS_DB_HOST
          value: mysql-service:3306
        - name: WORDPRESS_DB_NAME
          value: wordpress
        - name: WORDPRESS_DB_USER
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: username
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: password
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress-service
spec:
  selector:
    app: wordpress
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer

Create a MySQL Deployment YAML (mysql-deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: password
        - name: MYSQL_DATABASE
          value: wordpress
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  selector:
    app: mysql
  ports:
  - port: 3306
    targetPort: 3306

Create a Secret for the MySQL Password (mysql-secret.yaml):

apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
data:
  username: <base64-encoded-username>
  password: <base64-encoded-password>

Note: Use echo -n 'password' | base64 to base64-encode your username and password.
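The encoding step can be sketched in shell ('wpuser' and 'password' are placeholder credentials, not values from the document):

```shell
# Base64-encode placeholder credentials for the Secret's data fields.
# -n suppresses the trailing newline, which would otherwise be encoded too.
echo -n 'wpuser' | base64      # prints: d3B1c2Vy
echo -n 'password' | base64    # prints: cGFzc3dvcmQ=

# Decoding reverses it:
echo -n 'cGFzc3dvcmQ=' | base64 --decode   # prints: password
```

Note that base64 is an encoding, not encryption; anyone with read access to the Secret can decode it.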

Deploy WordPress and MySQL

Once you have the YAML files created, apply them to your Kubernetes cluster (apply the Secret first, since both Deployments reference it):

kubectl apply -f mysql-secret.yaml
kubectl apply -f mysql-deployment.yaml
kubectl apply -f wordpress-deployment.yaml

📦 3. Expose WordPress via a Service

To expose WordPress outside the cluster, we already set the LoadBalancer type in the WordPress service. If you're using Minikube, you can expose services using:
minikube service wordpress-service

For a cloud-based cluster (GKE, EKS, AKS), the platform will automatically provision a load balancer and provide a public IP.

⚖️4. Scaling WordPress

Now, let’s scale the WordPress Deployment. You can adjust the
number of replicas to scale your WordPress instance.

kubectl scale deployment wordpress-deployment --replicas=5

Check the scaling:

kubectl get pods

🔬 5. Test the WordPress Setup

To test if WordPress is working after scaling:

1. Go to the external IP provided by the load balancer (or the Minikube service URL if you're working locally).

2. You should see the WordPress installation page.

Test Kubernetes Resources:

 Pods: Ensure all WordPress and MySQL Pods are running smoothly.

 kubectl get pods

 Services: Verify that the services are correctly exposed.

 kubectl get svc

🚀 6. Monitoring and Autoscaling

For a production environment, you can enable the Horizontal Pod Autoscaler (HPA) to automatically scale WordPress Pods based on resource usage (e.g., CPU, memory).

First, enable the metrics server:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.5.0/components.yaml

Create a HorizontalPodAutoscaler (hpa.yaml):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: wordpress-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: wordpress-deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50

Apply the HPA:

kubectl apply -f hpa.yaml

🧪 7. Testing and Load Balancing

To simulate a load test, you can use tools like Apache JMeter or k6 to send traffic to the WordPress service.

 Install k6, then run your test script:

k6 run load-test.js

This will help you check how the application performs when scaled.

🧑‍💻 Conclusion

By following these steps, you’ve:

 Set up a Kubernetes cluster with WordPress and MySQL.

 Scaled the WordPress Deployment.

 Created a Horizontal Pod Autoscaler to automatically scale based on load.
