Flannel installation:
1. Installing via YAML:
Deploy flannel on top of an already-installed k8s cluster.
Fetch the flannel YAML file and apply the official manifest.
If the wget below fails, alternative addon install instructions are listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
1) Image pull issues:
The image referenced by the YAML is quay.io/coreos/flannel:v0.14.0.
If it cannot be pulled, edit the YAML to use a mirror instead, e.g. quay.mirrors.ustc.edu.cn/coreos/flannel:v0.14.0.
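A quick way to swap the registry is sed. The demo below runs against a one-line stand-in file; run the same substitution against the real kube-flannel.yml. The USTC mirror address is one example; any registry mirroring the same tag works.

```shell
# Demo on a stand-in file; point sed at the real kube-flannel.yml in practice.
echo 'image: quay.io/coreos/flannel:v0.14.0' > /tmp/kube-flannel-demo.yml
# Replace the registry prefix while keeping the image name and tag.
sed -i 's#quay.io/coreos/flannel#quay.mirrors.ustc.edu.cn/coreos/flannel#g' /tmp/kube-flannel-demo.yml
cat /tmp/kube-flannel-demo.yml
```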
2) Pod network segment specified in the YAML: adjust the CIDR as needed; it must match the cluster's pod CIDR (e.g. the --pod-network-cidr value passed to kubeadm init):
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
Run this on the Master node only:
kubectl apply -f kube-flannel.yml
After installing flannel:
[root@vm1 k8sinstall]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-7ff77c879f-5w99f 1/1 Running 314 5d20h
coredns-7ff77c879f-f5vtn 1/1 Running 560 7d
etcd-vm1 1/1 Running 2 47d
kube-apiserver-vm1 1/1 Running 2 47d
kube-controller-manager-vm1 1/1 Running 56 47d
kube-flannel-ds-nmlmh 1/1 Running 0 46d
kube-flannel-ds-tqwf4 1/1 Running 0 46d
kube-flannel-ds-whkfq 1/1 Running 0 2d17h
Deleting flannel (use whichever form matches how it was applied):
kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
kubectl delete -f kube-flannel.yml
2. Installing from the binary tar.gz package
References:
https://www.cnblogs.com/linuxk/p/9272819.html
https://cloud.tencent.com/developer/article/1608835
Download the package from https://github.com/flannel-io/flannel/releases:
# wget https://github.com/flannel-io/flannel/releases/download/v0.14.0/flannel-v0.14.0-linux-amd64.tar.gz
Extract it:
# tar -xf flannel-v0.14.0-linux-amd64.tar.gz
[root@vm1 flannel]# ll
-rwxr-xr-x 1 1000 1000 49333192 May 27 22:40 flanneld
-rw-r--r-- 1 root root 13083392 Oct 9 17:44 flannel-v0.14.0-linux-amd64.tar.gz
-rwxr-xr-x 1 1000 1000 2139 May 29 2019 mk-docker-opts.sh
-rw-rw-r-- 1 1000 1000 4654 Apr 15 22:39 README.md
This step is required on every node; both master and node machines need these two files: flanneld and mk-docker-opts.sh.
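Distribution to the other nodes can be scripted with a small scp loop. vm2 and vm3 below are placeholder hostnames; the leading "echo" makes this a dry run that only prints the commands, so remove it to actually copy.

```shell
# Dry-run distribution loop; vm2/vm3 are example hostnames for the other nodes.
for node in vm2 vm3; do
  echo scp flanneld mk-docker-opts.sh root@${node}:/opt/kubernetes/bin/
done
```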
etcd installation:
Project: https://github.com/etcd-io/etcd
Download: https://github.com/coreos/etcd/releases/download/v3.2.15/etcd-v3.2.15-linux-amd64.tar.gz
After downloading and extracting:
cd etcd-v3.2.15-linux-amd64
[root@vm1 etcd-v3.2.15-linux-amd64]# ll
drwxr-xr-x 11 1000 1000 4096 Jan 23 2018 Documentation
-rwxr-xr-x 1 1000 1000 17833792 Jan 23 2018 etcd
-rwxr-xr-x 1 1000 1000 15246720 Jan 23 2018 etcdctl
-rw-r--r-- 1 1000 1000 33849 Jan 23 2018 README-etcdctl.md
-rw-r--r-- 1 1000 1000 5801 Jan 23 2018 README.md
-rw-r--r-- 1 1000 1000 7855 Jan 23 2018 READMEv2-etcdctl.md
cp etcdctl /usr/local/bin/
(1) Generate a certificate for flannel
# vim flanneld-csr.json
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=/opt/kubernetes/ssl/ca-config.json -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
ssl]# ll flannel*
-rw-r--r-- 1 root root 997 May 31 11:13 flanneld.csr
-rw-r--r-- 1 root root 221 May 31 11:13 flanneld-csr.json
-rw------- 1 root root 1675 May 31 11:13 flanneld-key.pem
-rw-r--r-- 1 root root 1391 May 31 11:13 flanneld.pem
(2) Distribute the certificates
ssl]# cp flanneld*.pem /opt/kubernetes/ssl/
Copy flanneld*.pem to the /opt/kubernetes/ssl directory on every node.
(3) Install the flannel binaries (from the tar.gz extracted earlier)
cp flanneld mk-docker-opts.sh /opt/kubernetes/bin/
Copy flanneld and mk-docker-opts.sh to /opt/kubernetes/bin/ on every node.
(4) Configure flannel
# vim /opt/kubernetes/cfg/flannel
FLANNEL_ETCD="-etcd-endpoints=https://192.168.56.110:2379,https://192.168.56.120:2379,https://192.168.56.130:2379"
FLANNEL_ETCD_KEY="-etcd-prefix=/kubernetes/network"
FLANNEL_ETCD_CAFILE="--etcd-cafile=/opt/kubernetes/ssl/ca.pem"
FLANNEL_ETCD_CERTFILE="--etcd-certfile=/opt/kubernetes/ssl/flanneld.pem"
FLANNEL_ETCD_KEYFILE="--etcd-keyfile=/opt/kubernetes/ssl/flanneld-key.pem"
Copy this flannel config file to every node.
(5) Create the flannel systemd service
# vim /usr/lib/systemd/system/flannel.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
Before=docker.service
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/flannel
ExecStartPre=/opt/kubernetes/bin/remove-docker0.sh
ExecStart=/opt/kubernetes/bin/flanneld ${FLANNEL_ETCD} ${FLANNEL_ETCD_KEY} ${FLANNEL_ETCD_CAFILE} ${FLANNEL_ETCD_CERTFILE} ${FLANNEL_ETCD_KEYFILE}
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -d /run/flannel/docker
Type=notify
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
Copy this unit file to every node.
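The unit above runs /opt/kubernetes/bin/remove-docker0.sh as ExecStartPre, but that script is not shown in these notes. A minimal sketch of what such a helper typically does is below (delete any pre-existing docker0 bridge so dockerd recreates it later with the flannel-assigned address); treat this as an assumption, not the original script.

```shell
#!/bin/bash
# Hypothetical remove-docker0.sh: if a leftover docker0 bridge exists,
# bring it down and delete it so dockerd can recreate it with the
# flannel-provided --bip on its next start.
if ip link show docker0 >/dev/null 2>&1; then
  ip link set dev docker0 down
  ip link delete docker0
fi
```

Remember the unit requires this file to exist and be executable (the notes run chmod +x /opt/kubernetes/bin/* before starting flannel).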
2.3 Integrating flannel with CNI
1) Download the CNI plugins
https://github.com/containernetworking/plugins/releases
wget https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz
# mkdir /opt/kubernetes/bin/cni
# tar zxf cni-plugins-amd64-v0.7.1.tgz -C /opt/kubernetes/bin/cni
Copy cni/* to /opt/kubernetes/bin/cni on every node.
2) Create the etcd key
This step stores the pod network configuration in etcd; flanneld then reads it from there and allocates a per-node subnet out of it. The key path /kubernetes/network/config is the -etcd-prefix from the flannel config file plus /config.
# /opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem --cert-file /opt/kubernetes/ssl/flanneld.pem --key-file /opt/kubernetes/ssl/flanneld-key.pem \
--no-sync -C https://192.168.56.110:2379,https://192.168.56.120:2379,https://192.168.56.130:2379 \
mk /kubernetes/network/config '{ "Network": "10.2.0.0/16", "Backend": { "Type": "vxlan", "VNI": 1 }}' >/dev/null 2>&1
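The >/dev/null 2>&1 redirect on the mk command hides any error output, so it is worth sanity-checking the JSON payload locally before writing it. A quick check (python3 -m json.tool rejects malformed JSON with a nonzero exit):

```shell
# Validate the network config JSON before pushing it into etcd;
# the redirect on the mk command above would silently swallow a typo.
CONFIG='{ "Network": "10.2.0.0/16", "Backend": { "Type": "vxlan", "VNI": 1 }}'
echo "$CONFIG" | python3 -m json.tool
```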
3) Start flannel
# systemctl daemon-reload
# systemctl enable flannel
# chmod +x /opt/kubernetes/bin/*
# systemctl start flannel
After startup, each node gains a flannel.1 interface; each node sits on a different subnet of the pod network.
2.4 Configure docker to use flannel
(required on all nodes)
# vim /usr/lib/systemd/system/docker.service
[Unit]
# In [Unit], edit After and add Requires so docker starts after the flannel
# network is up (systemd only honors # comments on their own line, not inline)
After=network-online.target firewalld.service flannel.service
Wants=network-online.target
Requires=flannel.service
[Service]
# Add EnvironmentFile to load the file mk-docker-opts.sh wrote, which sets
# docker0's address to the subnet flannel allocated on this node
Type=notify
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_OPTS
# systemctl daemon-reload
# systemctl restart docker
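What EnvironmentFile=-/run/flannel/docker provides: mk-docker-opts.sh reads the subnet.env file that flanneld writes and emits a DOCKER_OPTS line for dockerd. A simulation with sample values follows; the real subnet and MTU are whatever flanneld allocated on that node, and the exact option list mk-docker-opts.sh emits may differ slightly.

```shell
# Simulate what mk-docker-opts.sh -d /run/flannel/docker produces.
# Sample values only; flanneld writes the real subnet.env at startup.
cat > /tmp/flannel-subnet.env <<'EOF'
FLANNEL_NETWORK=10.2.0.0/16
FLANNEL_SUBNET=10.2.46.1/24
FLANNEL_MTU=1450
EOF
. /tmp/flannel-subnet.env
# dockerd picks these up via $DOCKER_OPTS in its unit file.
echo "DOCKER_OPTS=\"--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}\"" > /tmp/flannel-docker
cat /tmp/flannel-docker
```

After restarting docker, docker0 should carry an address inside the flannel subnet, which can be confirmed with ip addr show docker0.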