Kubernetes cluster creation using Kubeadm:
Step 1: Installing Kubeadm.
Prerequisites:
1. A compatible Linux host. The Kubernetes project provides generic instructions for Linux
distributions based on Debian and Red Hat, as well as distributions without a package manager.
2. 2 GB or more RAM per machine (any less will leave little room for your apps).
3. 2 CPUs or more.
4. Full network connectivity between all machines in the cluster (public or private network is fine).
5. Unique hostname, MAC address, and product_uuid for every node.
6. Certain ports are open on your machines.
7. Swap disabled. You MUST disable swap for the kubelet to work properly.
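The checks above can be sketched as a small script to run on each candidate node (an illustrative sketch using standard Linux utilities, not part of the official kubeadm docs):

```shell
#!/usr/bin/env bash
# Quick prerequisite check; run on every node before installing kubeadm.
echo "CPUs: $(nproc)"                                                   # need 2 or more
awk '/MemTotal/ {printf "RAM: %.1f GB\n", $2/1048576}' /proc/meminfo    # need 2 GB or more
echo "Hostname: $(hostname)"                                            # must be unique per node
grep SwapTotal /proc/meminfo                                            # SwapTotal: 0 kB once swap is disabled
```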
Start with a fresh Ubuntu 20.04 installation.
Install net-tools (for the ifconfig command), telnetd (for port checks), and firewalld:
sudo apt install net-tools
sudo apt-get install telnetd
sudo apt install firewalld
Disable swap:
sudo swapoff -a
To make this permanent, open /etc/fstab and add # before the /swapfile entry, or run:
sudo sed -i '/ swap / s/^/#/' /etc/fstab
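You can verify that swap really is off afterwards:

```shell
# Verify swap is disabled: SwapTotal should read 0 kB, and swapon should print nothing.
grep -E 'SwapTotal|SwapFree' /proc/meminfo
swapon --show
```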
Verify the MAC address and product_uuid are unique for every node.
You can get the MAC address of the network interfaces using the command ip link or ifconfig -a.
The product_uuid can be checked by using the command sudo cat /sys/class/dmi/id/product_uuid
Letting iptables see bridged traffic
Make sure that the br_netfilter module is loaded. This can be done by running lsmod | grep br_netfilter. To load it explicitly, call sudo modprobe br_netfilter.
As a requirement for your Linux Node's iptables to correctly see bridged
traffic, you should ensure net.bridge.bridge-nf-call-iptables is set to 1 in
your sysctl config, e.g.
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
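After applying the sysctl config, it is worth confirming the module and settings took effect (a sketch with fallbacks so it reports rather than fails on hosts where the module is absent):

```shell
# Confirm the module is loaded and the sysctl took effect (expect the value 1).
lsmod | grep br_netfilter || echo "br_netfilter: not loaded"
sysctl net.bridge.bridge-nf-call-iptables 2>/dev/null || echo "bridge sysctl not present yet"
```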
For more details please see the Network Plugin Requirements page.
Check required ports
These required ports need to be open in order for Kubernetes components to
communicate with each other. You can use telnet to check if a port is open. For
example:
telnet 127.0.0.1 6443
The pod network plugin you use (see below) may also require certain ports to be
open. Since this differs with each pod network plugin, please see the
documentation for the plugins about what port(s) they need.
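If telnet is not installed, bash's built-in /dev/tcp works too. A sketch checking the documented control-plane defaults (6443 API server, 2379-2380 etcd, 10250 kubelet); on a machine where nothing is listening yet, every port will report closed:

```shell
# Check control-plane ports using bash's /dev/tcp pseudo-device (no telnet needed).
for port in 6443 2379 2380 10250; do
  if timeout 1 bash -c "exec 3<>/dev/tcp/127.0.0.1/$port" 2>/dev/null; then
    echo "port $port: open"
  else
    echo "port $port: closed"
  fi
done
```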
Install the container runtime: Docker is the most commonly used and requires little
additional configuration on the Kubernetes side.
Docker Installation Ubuntu:
Uninstall old versions
Older versions of Docker were called docker, docker.io, or docker-engine. If these are installed,
uninstall them:
$ sudo apt-get remove docker docker-engine docker.io containerd runc
Set up the repository
1. Update the apt package index and install packages to allow apt to use a repository over HTTPS:
$ sudo apt-get update
$ sudo apt-get install ca-certificates curl gnupg lsb-release
2. Add Docker's official GPG key:
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
3. Use the following command to set up the stable repository. To add the nightly or test repository, add the word nightly or test (or both) after the word stable in the command below:
$ echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Install Docker Engine
Update the apt package index and install the latest version of Docker Engine and containerd:
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io
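A quick way to verify the Docker Engine install (guarded so it only reports where Docker is absent; hello-world is Docker's standard test image):

```shell
# Verify the Docker Engine install.
docker --version 2>/dev/null || echo "docker: not on PATH"
sudo docker run --rm hello-world 2>/dev/null || echo "hello-world test skipped (needs a running daemon)"
```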
Installing kubeadm, kubelet and kubectl
You will install these packages on all of your machines:
kubeadm: the command to bootstrap the cluster.
kubelet: the component that runs on all of the machines in your cluster and
does things like starting pods and containers.
kubectl: the command-line utility to talk to your cluster.
1. Update the apt package index and install packages needed to use the Kubernetes apt repository:
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
2. Download the Google Cloud public signing key:
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
3. Add the Kubernetes apt repository:
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
4. Update the apt package index, install kubelet, kubeadm and kubectl, and pin their version:
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
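To confirm all three binaries are installed and held at their current version (guarded so each line reports rather than aborts if a tool is missing):

```shell
# Confirm kubeadm, kubectl and kubelet are installed and pinned.
kubeadm version -o short 2>/dev/null || echo "kubeadm: not on PATH"
kubectl version --client 2>/dev/null || echo "kubectl: not on PATH"
kubelet --version 2>/dev/null || echo "kubelet: not on PATH"
apt-mark showhold 2>/dev/null || true   # should list kubelet kubeadm kubectl
```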
Run the below command (note the pod network CIDR 192.168.0.0/16, which the Calico setup later in this guide assumes):
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=<yourhost_ip_address>
Sometimes the above command fails because Docker and the kubelet are using different cgroup drivers. In that case, follow the steps below:
Open /etc/docker/daemon.json (sudo vi /etc/docker/daemon.json) and add:
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
Then
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl restart kubelet
then re-run sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=<yourhost_ip_address>
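If the re-run complains that files or ports from the earlier attempt already exist, the partial state usually needs clearing first (hedged sketch; note that kubeadm reset wipes this node's cluster state):

```shell
# Clear partial state left by a failed 'kubeadm init' before re-running it.
sudo kubeadm reset -f 2>/dev/null || echo "kubeadm reset skipped (kubeadm not installed here)"
```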
Follow the instructions from the output from the above command:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a Pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join <control-plane-host>:<control-plane-port> --token <token> --
discovery-token-ca-cert-hash sha256:<hash>
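If the join command scrolled away or the bootstrap token expired (tokens last 24 hours by default), it can be regenerated on the control-plane node (guarded so it only reports where kubeadm is absent):

```shell
# Regenerate a complete 'kubeadm join' command, including a fresh token and CA hash.
sudo kubeadm token create --print-join-command 2>/dev/null || echo "run this on the control-plane node"
```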
Since you are using the CIDR range 192.168.0.0/16, you can use Calico as the pod network add-on (its manifests default to that pool). For installation instructions, follow one of the guides below depending on the datastore and whether you run 50 nodes or more:
https://projectcalico.docs.tigera.io/getting-started/kubernetes/quickstart
https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/
onpremises#install-calico-with-kubernetes-api-datastore-50-nodes-or-less
https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/
onpremises#install-calico-with-kubernetes-api-datastore-more-than-50-nodes
https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/
onpremises#install-calico-with-etcd-datastore
Install Calico
1. Install the Tigera Calico operator and custom resource definitions:
kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml
2. Install Calico by creating the necessary custom resource. For more information on configuration options available in this manifest, see the installation reference:
kubectl create -f https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml
Note: Before creating this manifest, read its contents and make sure its settings are correct for your environment. For example, you may need to change the default IP pool CIDR to match your pod network CIDR.
(The manifest-based install instead applies a single file with kubectl apply -f calico.yaml; use either the operator approach above or the manifest, not both.)
3. Confirm that all of the pods are running with the following command:
watch kubectl get pods -n calico-system
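Once the Calico pods settle, a final sanity check on the control-plane node (guarded so it only reports where kubectl is not configured): every node should eventually show Ready.

```shell
# Final check: nodes become Ready once the pod network is up.
kubectl get nodes -o wide 2>/dev/null || echo "kubectl: not configured on this machine"
kubectl get pods -A 2>/dev/null || echo "kubectl: not configured on this machine"
```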