k8s Cluster Installation

This article describes in detail how to build, from scratch, a Kubernetes cluster consisting of one master node and three worker nodes on CentOS 7, covering package installation, network configuration, the etcd cluster, the Flannel network, and the configuration of each Kubernetes component.


Introduction

Kubernetes is an open-source system for managing containerized applications across multiple hosts on a cloud platform. Its goal is to make deploying containerized applications simple and efficient, and it provides mechanisms for application deployment, scheduling, updating, and maintenance.

Environment

OS: CentOS 7
Machines: 192.168.118.131 k8s-master
192.168.118.132 k8s-node1
192.168.118.133 k8s-node2
192.168.118.134 k8s-node3
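
The hostnames above appear in the shell prompts throughout this guide. If you want the machines to resolve each other by name, a minimal /etc/hosts mapping on every host could look like the sketch below; this file is an assumption on my part, since the original steps only ever use the raw IP addresses.

# /etc/hosts (optional, assumed; the original configuration uses IPs everywhere)
192.168.118.131 k8s-master
192.168.118.132 k8s-node1
192.168.118.133 k8s-node2
192.168.118.134 k8s-node3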

Installation

Base software installation

  • k8s-master:

yum install -y etcd kubernetes-master ntp flannel

  • Remaining nodes:

yum install -y etcd kubernetes-node ntp flannel docker

  • Configure network forwarding (apply with: sysctl --system)
[root@k8s-node3 ~]# cat /etc/sysctl.conf | grep '^[^#]'
net.ipv4.ip_forward=1

[root@k8s-node3 ~]# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
  • Load the bridge netfilter module (required for the bridge-nf-call sysctls above)

yum install -y bridge-utils.x86_64
modprobe br_netfilter
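
Note that modprobe only loads br_netfilter for the current boot, and the sysctl files above are not applied until sysctl runs. A minimal sketch to make both stick; the modules-load.d file name is an assumed convention, not part of the original steps:

# Load br_netfilter automatically on every boot (assumed file name)
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
# Apply /etc/sysctl.conf and everything under /etc/sysctl.d/, including k8s.conf
sysctl --system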

Configure the etcd cluster

  • 192.168.118.131:
[root@k8s-master /]# cat /etc/etcd/etcd.conf  | grep '^[^#]'
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_NAME="etcd1"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.118.131:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.118.131:2379"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.118.131:2380,etcd2=http://192.168.118.132:2380,etcd3=http://192.168.118.133:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
  • 192.168.118.132:
[root@k8s-node1 /]# cat /etc/etcd/etcd.conf  | grep '^[^#]'
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_NAME="etcd2"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.118.132:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.118.132:2379"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.118.131:2380,etcd2=http://192.168.118.132:2380,etcd3=http://192.168.118.133:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
  • 192.168.118.133:
[root@k8s-node2 /]# cat /etc/etcd/etcd.conf  | grep '^[^#]'
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_NAME="etcd3"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.118.133:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.118.133:2379"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.118.131:2380,etcd2=http://192.168.118.132:2380,etcd3=http://192.168.118.133:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

  • Open the firewall ports on the three machines above
firewall-cmd --add-port=2379/tcp --permanent
firewall-cmd --add-port=2380/tcp --permanent
firewall-cmd --reload
  • Start the service and enable it at boot

systemctl restart etcd
systemctl enable etcd
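
After etcd has been restarted on all three machines, the cluster can be checked from any of them. A quick sanity check, assuming the v2 etcdctl that ships with the CentOS 7 etcd package:

# List the members and overall health of the three-node etcd cluster
etcdctl --endpoints http://192.168.118.131:2379 member list
etcdctl --endpoints http://192.168.118.131:2379 cluster-health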

Configure the flanneld network

  • Configuration on all hosts (flanneld also needs a network key in etcd; see the sketch after this list)
[root@k8s-node1 /]# cat /etc/sysconfig/flanneld | grep '^[^#]'
FLANNEL_ETCD_ENDPOINTS="http://192.168.118.131:2379,http://192.168.118.132:2379,http://192.168.118.133:2379"
FLANNEL_ETCD_PREFIX="/atomic.io/network"

  • Configure the firewall on all hosts (flanneld communicates over UDP port 8285)
firewall-cmd --add-port=8285/udp --permanent
firewall-cmd --reload
  • Start the service and enable it at boot

systemctl restart flanneld
systemctl enable flanneld
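
Flanneld will not come up until the key it reads under FLANNEL_ETCD_PREFIX exists, so the network definition has to be written into etcd once (from any machine) before the restart above. A minimal sketch: the 172.16.0.0/16 pod network is an assumed value, chosen so it does not overlap the host network or the 10.254.0.0/16 service range, and the udp backend matches the 8285/udp firewall rule:

# Write the flannel network config under the prefix configured above (etcdctl v2 syntax; the CIDR is an assumption)
etcdctl --endpoints http://192.168.118.131:2379 \
  set /atomic.io/network/config '{"Network":"172.16.0.0/16","SubnetLen":24,"Backend":{"Type":"udp"}}'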

Configure the k8s-master host

  1. Configure the apiserver
[root@k8s-master /]# cat /etc/kubernetes/apiserver | grep '^[^#]'
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.118.131:2379,http://192.168.118.132:2379,http://192.168.118.133:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_API_ARGS=""
  2. Configure config
[root@k8s-master /]# cat /etc/kubernetes/config | grep '^[^#]'
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.118.131:8080"

  3. Configure the controller-manager
[root@k8s-master /]# cat /etc/kubernetes/controller-manager | grep '^[^#]'
KUBE_CONTROLLER_MANAGER_ARGS="--service-account-private-key-file=/var/run/kubernetes/apiserver.key --root-ca-file=/var/run/kubernetes/apiserver.crt"
  4. Configure the scheduler
[root@k8s-master /]# cat /etc/kubernetes/scheduler | grep '^[^#]'
KUBE_SCHEDULER_ARGS="--address=0.0.0.0"
  5. Open the firewall
firewall-cmd --add-port=8080/tcp --permanent
firewall-cmd --reload
  6. Start the services and enable them at boot

systemctl restart kube-apiserver
systemctl restart kube-controller-manager
systemctl restart kube-scheduler
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
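
With the three master services running, the insecure API endpoint on port 8080 should answer requests. A quick sanity check, assuming kubectl is available on the master (it is normally pulled in alongside the kubernetes-master package):

# Query the health of the scheduler, controller-manager and etcd through the API server
kubectl -s http://192.168.118.131:8080 get componentstatuses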

Configure the k8s-node hosts

  1. Configure config
[root@k8s-node1 /]# cat /etc/kubernetes/config | grep '^[^#]'
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.118.131:8080"
  2. Configure the kubelet
    Note that KUBELET_ADDRESS and KUBELET_HOSTNAME are specific to each host.
[root@k8s-node1 /]# cat /etc/kubernetes/kubelet | grep '^[^#]'
KUBELET_ADDRESS="--address=192.168.118.132"
KUBELET_HOSTNAME="--hostname-override=192.168.118.132"
KUBELET_API_SERVER="--api-servers=http://192.168.118.131:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS="--cluster-dns=10.254.0.2 --cluster-domain=cluster.local"
  3. Configure the proxy
[root@k8s-node1 /]# cat /etc/kubernetes/proxy | grep '^[^#]'
KUBE_PROXY_ARGS="--bind-address=0.0.0.0"
  4. Firewall configuration
firewall-cmd --permanent --zone=trusted --add-interface=docker0
firewall-cmd --permanent --zone=trusted --add-interface=flannel0
firewall-cmd --reload
  5. Add a static route
[root@k8s-node3 ~]# cat /etc/sysconfig/static-routes
any -net 10.254.0.0 netmask 255.255.0.0 dev docker0
  6. Start the services and enable them at boot

systemctl restart kube-proxy
systemctl restart kubelet
systemctl restart docker
systemctl enable kube-proxy
systemctl enable kubelet
systemctl enable docker
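
Once the node services are up, each kubelet should register itself with the API server. A quick check from the master, using the same insecure endpoint as above:

# The three nodes should show up as Ready after a short delay
kubectl -s http://192.168.118.131:8080 get nodes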
