kubectl commands
https://stackoverflow.com/questions/35757620/how-to-gracefully-remove-a-node-from-kubernetes
https://success.mirantis.com/article/how-to-pause-or-drain-a-node-on-kubernetes
https://kubernetes.io/docs/reference/kubectl/cheatsheet/
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#options
journalctl -u etcd.service -l
Important Paths:
==============
/etc/kubernetes/
/etc/kubernetes/pki
/etc/kubernetes/manifests
/etc/cni/net.d -> CNI Config
/opt/cni/bin -> CNI Binaries
/var/lib/kubelet
/var/lib/docker
/etc/kubernetes/kubelet.conf
/var/lib/kubelet/config.yaml
export KUBECONFIG=/root/test_k8s.conf
SSL Certs:
========
# kubeadm certs check-expiration | grep apiserver
# kubeadm certs renew apiserver
Cluster CIDR (cluster_cidr) - The CIDR pool used to assign IP addresses to pods in
the cluster. By default, each node in the cluster is assigned a /24 network from
this pool for pod IP assignments. The default value for this option is
10.42.0.0/16.
Service Cluster IP Range (service_cluster_ip_range) - The range of virtual IP
addresses that will be assigned to services created on Kubernetes. By default, the
service cluster IP range is 10.43.0.0/16. If you change this value, it must
also be set to the same value on the Kubernetes API server (kube-api).
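These two ranges are normally set together in the cluster config. A minimal sketch in RKE's cluster.yml format (field names per the option names quoted above; values are the stated defaults):

```yaml
# cluster.yml (RKE) - network-related service options, sketch only
services:
  kube-api:
    # must match the range configured on the controller manager
    service_cluster_ip_range: 10.43.0.0/16
  kube-controller:
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
```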
kubectl auth can-i --list --namespace=foo -> list the actions allowed in a namespace
kubectl auth can-i --list --as dev-user --namespace=foo -> list the actions allowed
to dev-user in a namespace
kubectl auth can-i create deployments -> tells whether you have access to perform
this action; the output is yes/no
kubectl auth can-i create deployments --as dev-user
kubectl auth can-i create deployments --as dev-user --namespace test
kubectl cluster-info
kubectl run nginx --image nginx --dry-run=client -o yaml > nginx.yml -> to
generate a YAML file for pod creation; this does not create the pod
To run a pod on a specific node, set "nodeName: <node>" under spec in the pod
manifest; you can use "kind: Binding" if you want to bind an already Pending pod to a node
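A sketch of both approaches (the names nginx and node01 are placeholders):

```yaml
# Pod pinned to a node at creation time (bypasses the scheduler)
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeName: node01
  containers:
  - name: nginx
    image: nginx
---
# Binding object for a pod that is already Pending
apiVersion: v1
kind: Binding
metadata:
  name: nginx
target:
  apiVersion: v1
  kind: Node
  name: node01
```

Note that a Binding is normally POSTed to the pod's binding subresource on the API server (e.g. /api/v1/namespaces/default/pods/nginx/binding) rather than applied like a regular manifest.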
kubectl taint node node01 app=blue:NoSchedule -> only pods with a matching
toleration for "app=blue" can be scheduled on node01; a taint keeps other pods
off the node, it does not guarantee the tolerating pods land there
kubectl taint node controlplane node-role.kubernetes.io/master:NoSchedule- -> to
untaint the control-plane node (note the trailing "-")
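A pod that can be scheduled onto the tainted node01 needs a matching toleration; a sketch (pod name is a placeholder):

```yaml
# Pod tolerating the app=blue:NoSchedule taint set above
apiVersion: v1
kind: Pod
metadata:
  name: blue-pod
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:
  - key: "app"
    operator: "Equal"
    value: "blue"
    effect: "NoSchedule"
```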
kubectl cordon <nodename> -> disable scheduling; existing pods remain running
on the same node
kubectl drain <nodename> --ignore-daemonsets -> maintenance mode; evicts the
existing pods running on the node and has them rescheduled on other nodes
in the cluster if they are part of a ReplicaSet
kubectl uncordon <nodename> -> re-enable scheduling on the node for new pods. To
re-distribute the existing pods in the cluster, restart the deployment
so that some of the pods are recreated on this node
-- perform the maintenance activity (between drain and uncordon)
docker ps
docker ps -a
echo -n 'password' | base64 -> base64-encode the plain-text password (an encoding, not a hash) for use in a Secret
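base64 is reversible, so it is an encoding rather than a hash; a quick round-trip shows this:

```shell
# encode a plain-text password, as used in a Secret's data field
encoded=$(echo -n 'password' | base64)
echo "$encoded"                         # cGFzc3dvcmQ=

# decoding returns the original text, so this is NOT a hash
echo -n "$encoded" | base64 --decode    # password
```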
# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
kubectl cluster-info
kubeadm version
kubeadm upgrade plan
yum install -y kubeadm-1.21.1-0 --disableexcludes=kubernetes
kubeadm version
kubeadm upgrade plan
kubeadm upgrade apply v1.21.1
Upgrade kubeadm and cluster on other master nodes if you have any
kubectl drain master-node-1 --ignore-daemonsets
yum install -y kubelet-1.21.1-0 kubectl-1.21.1-0 --disableexcludes=kubernetes
systemctl daemon-reload && systemctl restart kubelet
kubectl uncordon master-node-1
kubectl get nodes master-node-1
===============================
cat /etc/kubernetes/manifests/etcd.yaml | grep -i advertise-client-urls
--advertise-client-urls=https://10.37.253.3:2379
# vi /etc/kubernetes/manifests/etcd.yaml
- hostPath:
    path: /var/lib/etcd-from-backup
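The /var/lib/etcd-from-backup directory that the edited manifest points at is normally produced by an etcdctl restore first; a sketch (the snapshot path is a placeholder):

```shell
# restore a previously taken snapshot into the directory
# the etcd static-pod manifest will be pointed at
ETCDCTL_API=3 etcdctl snapshot restore /opt/snapshot.db \
  --data-dir /var/lib/etcd-from-backup
```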
==================================
openssl genrsa -out jane.key 2048
openssl req -new -key jane.key -subj "/CN=jane" -out jane.csr
cat jane.csr | base64
create a manifest file with kind: CertificateSigningRequest (put the base64-encoded CSR in spec.request)
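A sketch of the manifest for the jane CSR generated above (in certificates.k8s.io/v1 the signerName field is required; the request value is the single-line base64 output from the previous step):

```yaml
# CertificateSigningRequest for user jane
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: jane
spec:
  request: <base64-encoded jane.csr, single line (cat jane.csr | base64 -w 0)>
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
```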
kubectl get csr
kubectl certificate approve <csrname>
kubectl certificate approve jane -o yaml
echo "sfs$^JGH*(" | base64 --decode -> decode a base64 string (e.g. the issued certificate from the CSR status)
=====================================
Without a kubeconfig file:
you can also mention the namespace in the context section to switch to a particular
namespace in a cluster
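A sketch of the relevant kubeconfig fragment (cluster, user, and namespace names are placeholders):

```yaml
# kubeconfig contexts section - the namespace field makes kubectl
# default to that namespace whenever this context is active
contexts:
- name: dev-context
  context:
    cluster: my-cluster
    user: dev-user
    namespace: dev
```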
====================================================
To list API version:
================
On k8s master node:
# kubectl proxy
Starting to serve on 127.0.0.1:8001
# curl http://localhost:8001 -k -> this will list high level API options
"paths": [
"/api",
"/api/v1",
"/apis",
"/apis/",
"/apis/admissionregistration.k8s.io",
"/apis/admissionregistration.k8s.io/v1",
"/apis/admissionregistration.k8s.io/v1beta1",
"/apis/apiextensions.k8s.io",
# curl http://localhost:8001/apis -k
=============
Service Account:
kubectl create serviceaccount srv-test
kubectl get serviceaccount
kubectl describe serviceaccount srv-test | grep -i "Tokens:"
kubectl get secrets | grep -i "Token Name"
kubectl describe secret <secretname>
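Note: from Kubernetes v1.24, a token Secret is no longer auto-created for a new ServiceAccount, so the Tokens: field above may be empty on newer clusters; a short-lived token can be requested instead:

```shell
# request a time-limited token for the service account (v1.24+)
kubectl create token srv-test
```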
=============
Docker Registry:
=============
Elastic Container Registry (ECR)
Azure Container Registry (ACR)
VMware Harbor
ingress:
======
# kubectl get all -A | grep -i controller
kube-system   deployment.apps/calico-kube-controllers   1/1   1   1   83d
===============================
Nginx Ingress controller setup:
(aliases used below: k = kubectl, kg = kubectl get, kd = kubectl describe)
# kg roles -n ingress-space
NAME CREATED AT
ingress-role 2021-09-13T13:25:18Z
# kd roles ingress-role -n ingress-space
Name: ingress-role
Labels: app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
configmaps [] [] [get create]
configmaps [] [ingress-controller-leader-nginx] [get update]
endpoints [] [] [get]
namespaces [] [] [get]
pods [] [] [get]
secrets [] [] [get]
# kg rolebinding -n ingress-space
NAME ROLE AGE
ingress-role-binding Role/ingress-role 100s
# kd rolebinding ingress-role-binding -n ingress-space
Name: ingress-role-binding
Labels: app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
Annotations: <none>
Role:
Kind: Role
Name: ingress-role
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount ingress-serviceaccount
#k apply -f ingress-controller.yaml
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-controller 1/1 1 1 4m42s
nginx-ingress-controller   quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0   name=nginx-ingress
#k apply -f ingress-svc.yml
NAME              TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE     SELECTOR
service/ingress   NodePort   10.99.252.233   <none>        80:30080/TCP   2m36s   name=nginx-ingress
======================================
JSON Path:
=========
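A few common jsonpath queries (field paths follow the standard Pod and Node schemas):

```shell
# names of all pods
kubectl get pods -o jsonpath='{.items[*].metadata.name}'

# kubelet version of every node
kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.kubeletVersion}'

# node name and InternalIP, one per line
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'
```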
=====================================
# kg svc -A | grep -i dns
# cat /var/lib/kubelet/config.yaml | grep -i clusterDNS