One-Click kubeadm Deployment of a Kubernetes 1.25.4 High-Availability Cluster -- Updated (2023-09-15)

Original article: One-click kubeadm deployment of a Kubernetes 1.25.4 high-availability cluster


Configuration list

Hostname                  IP Address        Components
node1                     192.168.111.130   etcd, apiserver, controller-manager, scheduler
node2                     192.168.111.131   etcd, apiserver, controller-manager, scheduler
node3                     192.168.111.133   etcd, apiserver, controller-manager, scheduler
apiserver.cluster.local   192.168.111.130   VIP

Local resources are limited, so this test uses only three master hosts; worker nodes are even easier to add than masters.

Resolving apiserver.cluster.local with local DNS (/etc/hosts) replaces a complex and not always reliable load balancer: if a master fails, simply repoint apiserver.cluster.local at another master's IP to switch over, or automate the switch with nginx.
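
For example, a minimal sketch of the hosts-based failover (hypothetical commands, not part of the published scripts):

# /etc/hosts entry created during setup: apiserver.cluster.local points at node1
echo "192.168.111.130 apiserver.cluster.local" >> /etc/hosts

# If node1 fails, repoint the name at a healthy master (e.g. node2) on every node:
sed -i 's/.* apiserver.cluster.local/192.168.111.131 apiserver.cluster.local/' /etc/hosts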

Operating system and software versions

Two operating systems were tested:

  • CentOS 7

  • openEuler 22.03

Software versions

  • kernel:4.19.12 (kernel-ml, CentOS 7)

  • kubelet:1.25.4

  • kubeadm:1.25.4

  • kubectl:1.25.4

  • cri-tools:1.26.0

  • socat:1.7.3.2

  • containerd:1.6.10

  • nerdctl:1.5.0

  • etcd:3.5.6

  • cni-plugins:1.1.1

  • crictl:1.25.0

Scripts used -- all compatible with both CentOS 7 and openEuler

  • 01-rhel_init.sh: initializes the server and checks that the basic prerequisites for deploying Kubernetes are met.

  • 02-containerd-install.sh: installs the containerd container runtime.

  • 03-kubeadm-mater1-init.sh: installs kubeadm and related services, initializes the master1 node, and creates the token that other nodes use to register.

  • 04-kubeadm-mater-install.sh: installs kubeadm and related services on the other nodes and registers them with master1.

  • copy-certs.sh: distributes the CA and other certificates from master1 to the other master nodes.

Deployment support

Online installation is supported.

Offline installation is supported.

Only x86_64 packages have been built so far.

Only CentOS 7 and openEuler 22.03 have been tested so far.

Future plans

  • Support arm64

  • Test domestic (Chinese) operating systems

  • Provide an Ansible version to make deployment more elegant

Deployment workflow

  • node1 initializes the cluster's first master node

  • Copy node1's certificates to node2 and node3 (a sketch of this step follows this list)

  • Join the cluster
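
A minimal sketch of what the certificate distribution step (copy-certs.sh) might look like; the exact file list and hostnames are assumptions, but kubeadm only needs the shared CA material, the service-account keys, and the front-proxy CA, since per-node certificates are regenerated during join:

# Run on node1: push the shared control-plane certificates to the other masters
for host in node2 node3; do
  ssh root@$host "mkdir -p /etc/kubernetes/pki/etcd"
  scp /etc/kubernetes/pki/{ca.crt,ca.key,sa.key,sa.pub,front-proxy-ca.crt,front-proxy-ca.key} \
      root@$host:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/etcd/{ca.crt,ca.key} root@$host:/etc/kubernetes/pki/etcd/
done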

Preparation

  • 1. Upload the software packages

  • 2. Configure passwordless SSH from node1 to the other nodes yourself; without it you will be prompted for passwords (see the sketch after this list)

  • 3. Configure /etc/hosts on each host

  • 4. Change the IP address in kubeadm-config.yaml to the current host's IP address

  • 5. The scripts take care of the rest
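
A minimal sketch of preparation steps 2-4, run on node1 (the kubeadm-config.yaml field shown is illustrative and should be adjusted to match the file shipped with the scripts):

# 2. Passwordless SSH from node1 to the other nodes
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
for host in node2 node3; do ssh-copy-id root@$host; done

# 3. Host name resolution, including the apiserver VIP name
cat >> /etc/hosts <<EOF
192.168.111.130 node1
192.168.111.131 node2
192.168.111.133 node3
192.168.111.130 apiserver.cluster.local
EOF

# 4. Point kubeadm-config.yaml at this host's IP address
sed -i 's/advertiseAddress: .*/advertiseAddress: 192.168.111.130/' kubeadm-config.yaml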

Start the deployment

The static resource directory on the CentOS 7 host node1 is as follows:

[root@node1 ~]# tree .
.
├── 01-rhel_init.sh
├── 02-containerd-install.sh
├── 03-kubeadm-mater1-init.sh
├── 04-kubeadm-mater-install.sh
├── bin
│   ├── etcdctl
│   ├── nerdctl
│   └── runc
├── conf
│   ├── containerd.service
│   ├── docker.service
│   ├── k8s.conf
│   └── sysctl.conf
├── copy-certs.sh
├── images_v1.25.4.tar
├── k8s_init.log
├── kernel
│   └── kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
├── kubeadm-config.yaml
├── kube-flannel.yml
├── packages
│   ├── cni-plugins-linux-amd64-v1.1.1.tgz
│   ├── containerd-1.6.10-linux-amd64.tar.gz
│   ├── cri-containerd-1.6.10-linux-amd64.tar.gz
│   ├── crictl-v1.25.0-linux-amd64.tar.gz
│   ├── docker-20.10.21.tgz
│   ├── etcd-v3.5.6-linux-amd64.tar.gz
│   └── nerdctl-1.5.0-linux-amd64.tar.gz
├── py_join.py
├── rely
│   ├── centos7
│   │   ├── bash-completion-2.1-8.el7.noarch.rpm
│   │   ├── cpp-4.8.5-44.el7.x86_64.rpm
│   │   ├── device-mapper-1.02.170-6.el7_9.5.x86_64.rpm
│   │   ├── device-mapper-event-1.02.170-6.el7_9.5.x86_64.rpm
│   │   ├── device-mapper-event-libs-1.02.170-6.el7_9.5.x86_64.rpm
│   │   ├── device-mapper-libs-1.02.170-6.el7_9.5.x86_64.rpm
│   │   ├── device-mapper-persistent-data-0.8.5-3.el7_9.2.x86_64.rpm
│   │   ├── dstat-0.7.2-12.el7.noarch.rpm
│   │   ├── epel-release-7-11.noarch.rpm
│   │   ├── gcc-4.8.5-44.el7.x86_64.rpm
│   │   ├── gdisk-0.8.10-3.el7.x86_64.rpm
│   │   ├── glibc-2.17-326.el7_9.x86_64.rpm
│   │   ├── glibc-common-2.17-326.el7_9.x86_64.rpm
│   │   ├── glibc-devel-2.17-326.el7_9.x86_64.rpm
│   │   ├── glibc-headers-2.17-326.el7_9.x86_64.rpm
│   │   ├── gpm-libs-1.20.7-6.el7.x86_64.rpm
│   │   ├── iotop-0.6-4.el7.noarch.rpm
│   │   ├── libgcc-4.8.5-44.el7.x86_64.rpm
│   │   ├── libgomp-4.8.5-44.el7.x86_64.rpm
│   │   ├── libmpc-1.0.1-3.el7.x86_64.rpm
│   │   ├── lm_sensors-libs-3.4.0-8.20160601gitf9185e5.el7.x86_64.rpm
│   │   ├── lrzsz-0.12.20-36.el7.x86_64.rpm
│   │   ├── lsof-4.87-6.el7.x86_64.rpm
│   │   ├── lvm2-2.02.187-6.el7_9.5.x86_64.rpm
│   │   ├── lvm2-libs-2.02.187-6.el7_9.5.x86_64.rpm
│   │   ├── mpfr-3.1.1-4.el7.x86_64.rpm
│   │   ├── net-tools-2.0-0.25.20131004git.el7.x86_64.rpm
│   │   ├── ntpdate-4.2.6p5-29.el7.centos.2.x86_64.rpm
│   │   ├── psmisc-22.20-17.el7.x86_64.rpm
│   │   ├── python-chardet-2.2.1-3.el7.noarch.rpm
│   │   ├── python-kitchen-1.1.1-5.el7.noarch.rpm
│   │   ├── screen-4.1.0-0.27.20120314git3c2946.el7_9.x86_64.rpm
│   │   ├── sysstat-10.1.5-20.el7_9.x86_64.rpm
│   │   ├── telnet-0.17-66.el7.x86_64.rpm
│   │   ├── tree-1.6.0-10.el7.x86_64.rpm
│   │   ├── unzip-6.0-24.el7_9.x86_64.rpm
│   │   ├── vim-common-7.4.629-8.el7_9.x86_64.rpm
│   │   ├── vim-enhanced-7.4.629-8.el7_9.x86_64.rpm
│   │   ├── vim-filesystem-7.4.629-8.el7_9.x86_64.rpm
│   │   ├── yum-utils-1.1.31-54.el7_8.noarch.rpm
│   │   └── zip-3.0-11.el7.x86_64.rpm
│   └── openeuler
│       ├── binutils-2.37-19.oe2203sp2.x86_64.rpm
│       ├── bpftool-5.10.0-153.25.0.101.oe2203sp2.x86_64.rpm
│       ├── curl-7.79.1-23.oe2203sp2.x86_64.rpm
│       ├── dnf-4.14.0-15.oe2203sp2.noarch.rpm
│       ├── dnf-data-4.14.0-15.oe2203sp2.noarch.rpm
│       ├── file-5.41-3.oe2203sp2.x86_64.rpm
│       ├── file-libs-5.41-3.oe2203sp2.x86_64.rpm
│       ├── gawk-5.1.1-5.oe2203sp2.x86_64.rpm
│       ├── gnutls-3.7.2-9.oe2203sp2.x86_64.rpm
│       ├── gnutls-utils-3.7.2-9.oe2203sp2.x86_64.rpm
│       ├── grub2-common-2.06-33.oe2203sp2.noarch.rpm
│       ├── grub2-pc-2.06-33.oe2203sp2.x86_64.rpm
│       ├── grub2-pc-modules-2.06-33.oe2203sp2.noarch.rpm
│       ├── grub2-tools-2.06-33.oe2203sp2.x86_64.rpm
│       ├── grub2-tools-efi-2.06-33.oe2203sp2.x86_64.rpm
│       ├── grub2-tools-extra-2.06-33.oe2203sp2.x86_64.rpm
│       ├── grub2-tools-minimal-2.06-33.oe2203sp2.x86_64.rpm
│       ├── kernel-5.10.0-153.25.0.101.oe2203sp2.x86_64.rpm
│       ├── kernel-devel-5.10.0-153.25.0.101.oe2203sp2.x86_64.rpm
│       ├── kernel-headers-5.10.0-153.25.0.101.oe2203sp2.x86_64.rpm
│       ├── kernel-tools-5.10.0-153.25.0.101.oe2203sp2.x86_64.rpm
│       ├── krb5-devel-1.19.2-9.oe2203sp2.x86_64.rpm
│       ├── krb5-libs-1.19.2-9.oe2203sp2.x86_64.rpm
│       ├── libcurl-7.79.1-23.oe2203sp2.x86_64.rpm
│       ├── libnghttp2-1.46.0-4.oe2203sp2.x86_64.rpm
│       ├── libsmbclient-4.17.5-7.oe2203sp2.x86_64.rpm
│       ├── libtiff-4.3.0-31.oe2203sp2.x86_64.rpm
│       ├── libtiff-devel-4.3.0-31.oe2203sp2.x86_64.rpm
│       ├── libwbclient-4.17.5-7.oe2203sp2.x86_64.rpm
│       ├── ncurses-6.3-12.oe2203sp2.x86_64.rpm
│       ├── ncurses-base-6.3-12.oe2203sp2.noarch.rpm
│       ├── ncurses-libs-6.3-12.oe2203sp2.x86_64.rpm
│       ├── ntp-4.2.8p15-11.oe2203sp2.x86_64.rpm
│       ├── ntp-help-4.2.8p15-11.oe2203sp2.noarch.rpm
│       ├── ntpstat-0.6-4.oe2203sp2.noarch.rpm
│       ├── openssh-8.8p1-21.oe2203sp2.x86_64.rpm
│       ├── openssh-clients-8.8p1-21.oe2203sp2.x86_64.rpm
│       ├── openssh-server-8.8p1-21.oe2203sp2.x86_64.rpm
│       ├── openssl-1.1.1m-22.oe2203sp2.x86_64.rpm
│       ├── openssl-devel-1.1.1m-22.oe2203sp2.x86_64.rpm
│       ├── openssl-libs-1.1.1m-22.oe2203sp2.x86_64.rpm
│       ├── pcre2-10.39-9.oe2203sp2.x86_64.rpm
│       ├── pcre2-devel-10.39-9.oe2203sp2.x86_64.rpm
│       ├── perl-5.34.0-9.oe2203sp2.x86_64.rpm
│       ├── perl-devel-5.34.0-9.oe2203sp2.x86_64.rpm
│       ├── perl-libs-5.34.0-9.oe2203sp2.x86_64.rpm
│       ├── procps-ng-4.0.2-10.oe2203sp2.x86_64.rpm
│       ├── python3-3.9.9-25.oe2203sp2.x86_64.rpm
│       ├── python3-dnf-4.14.0-15.oe2203sp2.noarch.rpm
│       ├── python3-perf-5.10.0-153.25.0.101.oe2203sp2.x86_64.rpm
│       ├── samba-client-libs-4.17.5-7.oe2203sp2.x86_64.rpm
│       ├── samba-common-4.17.5-7.oe2203sp2.x86_64.rpm
│       ├── sqlite-3.37.2-6.oe2203sp2.x86_64.rpm
│       └── yum-4.14.0-15.oe2203sp2.noarch.rpm
└── repo
    ├── centos7
    │   ├── config.toml
    │   ├── conntrack-tools-1.4.4-7.el7.x86_64.rpm
    │   ├── cri-tools-1.26.0-0.x86_64.rpm
    │   ├── kubeadm-1.25.4-0.x86_64.rpm
    │   ├── kubectl-1.25.4-0.x86_64.rpm
    │   ├── kubelet-1.25.4-0.x86_64.rpm
    │   ├── kubernetes-cni-1.2.0-0.x86_64.rpm
    │   ├── libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm
    │   ├── libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm
    │   ├── libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm
    │   └── socat-1.7.3.2-2.el7.x86_64.rpm
    └── openeuler
        ├── conntrack-tools-1.4.6-6.oe2203sp2.x86_64.rpm
        ├── containernetworking-plugins-1.1.1-2.oe2203sp2.x86_64.rpm
        ├── cri-tools-1.26.0-0.x86_64.rpm
        ├── ebtables-2.0.11-10.oe2203sp2.x86_64.rpm
        ├── kubeadm-1.25.4-0.x86_64.rpm
        ├── kubectl-1.25.4-0.x86_64.rpm
        ├── kubelet-1.25.4-0.x86_64.rpm
        ├── libnetfilter_cthelper-1.0.0-16.oe2203sp2.x86_64.rpm
        ├── libnetfilter_cttimeout-1.0.0-15.oe2203sp2.x86_64.rpm
        ├── libnetfilter_queue-1.0.5-2.oe2203sp2.x86_64.rpm
        └── socat-1.7.3.2-8.oe2203sp2.x86_64.rpm

10 directories, 142 files

In addition to the core Kubernetes dependencies and offline images, it also includes packages for common day-to-day operations tools.

Run the initialization script (on all nodes)

[root@node1 ~]# chmod +x 01-rhel_init.sh 
[root@node1 ~]# sh 01-rhel_init.sh all
Executing-user check:                                   [ok]
Operating-system check:                                 [ok]
Internet access check:                                  [ok]
CPU configuration check:                                [ok]
Memory configuration check:                             [ok]
Disable firewall:                                       [ok]
Disable swap:                                           [ok]
History command format:                                 [ok]
node1 2023-09-14 10:15:06: ignore if the installation fails!!!
# ntpdate not installed, installing....
ntpdate installation:                                   [failed]
Time synchronization check:                             [failed]
Add kernel parameters:                                  [ok]
modprobe: FATAL: Module ip_vs_fo not found.
Enable ipvs modules:                                    [ok]
node1 2023-09-14 10:20:12: current kernel (3.10.0) is lower than 4.19, starting update...
------------------------------------------------------------------------------
【node1 2023-09-14 10:20:49】 kernel update complete, reboot the server once you have confirmed everything is correct!!!
------------------------------------------------------------------------------
[root@node1 ~]# reboot
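
For reference, a condensed sketch of the kind of work 01-rhel_init.sh does, inferred from the output above and the bundled conf/ and kernel/ directories; the real script performs more checks:

# Disable the firewall and swap
systemctl disable --now firewalld
swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab

# Kernel parameters required by Kubernetes (shipped as conf/k8s.conf)
cp conf/k8s.conf /etc/sysctl.d/k8s.conf   # bridge-nf-call-iptables, ip_forward, ...
sysctl --system

# Load the ipvs modules used by kube-proxy
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do modprobe $mod; done

# On CentOS 7, install the bundled 4.19 kernel, make it the default, then reboot
rpm -ivh kernel/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
grub2-set-default 0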

Install the container runtime (on all nodes)

[root@node1 ~]# chmod +x 02-containerd-install.sh 
[root@node1 ~]# sh 02-containerd-install.sh 
Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /etc/systemd/system/containerd.service.
Containerd containerd-1.6.10 has been installed and configured as a systemd service!
Test the installation with: nerdctl run -d -p 8080:80 --name nginx nginx:alpine
[root@node1 ~]# nerdctl ps -a
CONTAINER ID    IMAGE    COMMAND    CREATED    STATUS    PORTS    NAMES
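
Roughly what 02-containerd-install.sh does with the bundled archives (a sketch inferred from the packages/ and conf/ directories; the exact steps may differ):

# Unpack containerd, runc, nerdctl and the CNI plugins from the offline packages
tar -C /usr/local -xzf packages/containerd-1.6.10-linux-amd64.tar.gz
install -m 755 bin/runc /usr/local/sbin/runc
install -m 755 bin/nerdctl /usr/local/bin/nerdctl
mkdir -p /opt/cni/bin && tar -C /opt/cni/bin -xzf packages/cni-plugins-linux-amd64-v1.1.1.tgz

# Generate a default config and switch to the systemd cgroup driver expected by kubelet
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

# Install the bundled unit file and start the service
cp conf/containerd.service /etc/systemd/system/containerd.service
systemctl daemon-reload && systemctl enable --now containerd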

Initialize Kubernetes

[root@node1 ~]# chmod +x 03-kubeadm-mater1-init.sh 
[root@node1 ~]# sh 03-kubeadm-mater1-init.sh all
hosts entries written:                                  [ok]
ipvs check:                                             [ok]
Kernel check:                                           [ok]
containerd check:                                       [ok]
OS check:                                               [ok]
Current OS is centos, version 7
【node1 2023-09-14 10:22:44】 kubeadm is not installed, starting offline installation
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
kubeadm installation:                                   [ok]
【node1 2023-09-14 10:23:18】 importing offline images
【node1 2023-09-14 10:24:01】 kubeadm starting initialization of the master node
K8s initialization:                                     [ok]
【node1 2023-09-14 10:24:12】 join commands for master and worker nodes are as follows
Control plane information:
kubeadm join apiserver.cluster.local:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:5138ec691afbbb3c52e1d6aae6f31374d756f82a73de0a3000d9c02af483e633 \
        --control-plane 

Worker node information:
kubeadm join apiserver.cluster.local:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:5138ec691afbbb3c52e1d6aae6f31374d756f82a73de0a3000d9c02af483e633 \

To join an additional master node, run the join command that carries the --control-plane flag. Before doing so, distribute the certificates from node1 to the new master with copy-certs.sh; otherwise the control-plane join will fail because the shared CA material is missing.
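
If the printed token expires (the default lifetime is 24 hours) or is lost, a fresh join command can be generated on node1 with standard kubeadm commands:

# Print a new worker join command (creates a new bootstrap token)
kubeadm token create --print-join-command

# For an additional master, append --control-plane to that command after the
# certificates have been distributed with copy-certs.sh.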

Run on node2

[root@node2 ~]# kubeadm join apiserver.cluster.local:6443 --token abcdef.0123456789abcdef \
>         --discovery-token-ca-cert-hash sha256:5138ec691afbbb3c52e1d6aae6f31374d756f82a73de0a3000d9c02af483e633 \
>         --control-plane
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [apiserver.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node2] and IPs [10.96.0.1 192.168.111.131 192.168.111.130 192.168.111.133 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node2] and IPs [192.168.111.131 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node2] and IPs [192.168.111.131 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node node2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node node2 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

Run on node3

Since node3 runs openEuler, the procedure is the same as before:

run 01-openeuler-init.sh to initialize the system,

run 02-containerd-install.sh to install the container runtime,

and run 04-kubeadm-mater-install.sh to install kubeadm and the other components and import the offline images.

[root@node3 ~]# chmod +x 02-containerd-install.sh 
[root@node3 ~]# sh 02-containerd-install.sh 
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service.
Containerd containerd-1.6.10 has been installed and configured as a systemd service!
Test the installation with: nerdctl run -d -p 8080:80 --name nginx nginx:alpine
[root@node3 ~]# chmod +x 04-kubeadm-mater-install.sh 
[root@node3 ~]# sh 04-kubeadm-mater-install.sh all 
hosts entries written:                                  [ok]
ipvs check:                                             [ok]
Kernel check:                                           [ok]
containerd check:                                       [ok]
OS check:                                               [ok]
Current OS is openEuler, version 22.03
【node3 2023-09-14 10:51:57】 kubeadm is not installed, starting offline installation
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
kubeadm installation:                                   [ok]
【node3 2023-09-14 10:52:22】 importing offline images
K8s initialization:                                     [ok]
[root@node3 ~]# kubeadm join apiserver.cluster.local:6443 --token abcdef.0123456789abcdef \
>         --discovery-token-ca-cert-hash sha256:5138ec691afbbb3c52e1d6aae6f31374d756f82a73de0a3000d9c02af483e633 \
>         --control-plane
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [apiserver.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node3] and IPs [10.96.0.1 192.168.111.133 192.168.111.130 192.168.111.131 127.0.0.1]
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node3] and IPs [192.168.111.133 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node3] and IPs [192.168.111.133 127.0.0.1 ::1]
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node node3 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node node3 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

Verify the deployment result

[root@node1 ~]# kubectl get nodes -o wide
NAME    STATUS     ROLES           AGE    VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                    KERNEL-VERSION                         CONTAINER-RUNTIME
node1   NotReady   control-plane   44m    v1.25.4   192.168.111.130   <none>        CentOS Linux 7 (Core)       4.19.12-1.el7.elrepo.x86_64            containerd://1.6.10
node2   NotReady   control-plane   2m5s   v1.25.4   192.168.111.131   <none>        CentOS Linux 7 (Core)       4.19.12-1.el7.elrepo.x86_64            containerd://1.6.10
node3   NotReady   control-plane   37s    v1.25.4   192.168.111.133   <none>        openEuler 22.03 (LTS-SP2)   5.10.0-153.25.0.101.oe2203sp2.x86_64   containerd://1.6.10
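
All three nodes report NotReady at this point simply because no CNI network plugin has been installed yet. The stacked etcd cluster can already be checked with the bundled etcdctl (a sketch; the certificate paths are the kubeadm defaults):

# List the etcd members from node1
ETCDCTL_API=3 ./bin/etcdctl \
  --endpoints=https://192.168.111.130:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member list -w table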

Install the flannel network plugin

[root@node1 ~]# wget -c https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@node1 ~]# kubectl apply -f kube-flannel.yml 
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@node1 ~]# kubectl get nodes -o wide
NAME    STATUS   ROLES           AGE     VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                    KERNEL-VERSION                         CONTAINER-RUNTIME
node1   Ready    control-plane   50m     v1.25.4   192.168.111.130   <none>        CentOS Linux 7 (Core)       4.19.12-1.el7.elrepo.x86_64            containerd://1.6.10
node2   Ready    control-plane   7m46s   v1.25.4   192.168.111.131   <none>        CentOS Linux 7 (Core)       4.19.12-1.el7.elrepo.x86_64            containerd://1.6.10
node3   Ready    control-plane   6m18s   v1.25.4   192.168.111.133   <none>        openEuler 22.03 (LTS-SP2)   5.10.0-153.25.0.101.oe2203sp2.x86_64   containerd://1.6.10
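
Once the flannel DaemonSet pods are running on every node, the nodes switch to Ready. This can be confirmed with standard kubectl commands:

# One kube-flannel pod per node
kubectl get pods -n kube-flannel -o wide

# coredns moves from Pending to Running once the pod network is up
kubectl get pods -n kube-system -l k8s-app=kube-dns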

That concludes the deployment of the high-availability cluster. The scripts and offline packages will be tidied up and attached at the end of this article, and the scripts will be uploaded to GitHub so that everyone can help improve them.

There is still plenty to improve: splitting the work across three shell scripts is not particularly elegant, so the follow-up plan is to chain these steps together with Ansible for a smoother, more elegant deployment.
