When building modern cloud-native applications, Kubernetes has become indispensable, and efficient storage management is critical to application stability and performance. Ceph, a distributed storage system, offers RBD (RADOS Block Device), which gives Kubernetes a reliable persistent storage backend. This article walks you through deploying the Ceph RBD provisioner with a Helm chart, simplifying storage management in your Kubernetes cluster and improving operational efficiency.
01
Ceph RBD Operations
1. Create the kubernetes storage pool
$ ceph osd pool create kubernetes 128 128
pool 'kubernetes' created
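The explicit 128/128 here is a reasonable size for a small cluster. On Octopus and later, the pg_autoscaler module is enabled by default, so as an alternative sketch you can omit the PG counts and let Ceph size the pool itself:
$ ceph osd pool create kubernetes
$ ceph osd pool set kubernetes pg_autoscale_mode on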
2. Initialize the pool
$ rbd pool init kubernetes
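As a quick optional check that the pool exists and carries the rbd application tag set by the init, you can list the pool details:
$ ceph osd pool ls detail | grep kubernetes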
3. Create a new user
$ sudo ceph auth get-or-create client.kubernetes mon 'profile rbd' osd 'profile rbd pool=kubernetes' mgr 'profile rbd pool=kubernetes' -o /etc/ceph/ceph.client.kubernetes.keyring
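The generated key is what the chart's secret.userKey field expects later on. You can print it back at any time with:
$ ceph auth get client.kubernetes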
4. Gather Ceph cluster information
$ ceph mon dump
dumped monmap epoch 2
epoch 2
fsid a43fa047-755e-4208-af2d-f6090154f902
last_changed 2024-08-12T20:34:52.706720+0800
created 2024-08-08T14:48:39.332770+0800
min_mon_release 15 (octopus)
0: [v2:172.139.20.20:3300/0,v1:172.139.20.20:6789/0] mon.storage-ceph01
1: [v2:172.139.20.94:3300/0,v1:172.139.20.94:6789/0] mon.storage-ceph03
2: [v2:172.139.20.208:3300/0,v1:172.139.20.208:6789/0] mon.storage-ceph02
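The fsid and the monitor v1 addresses (port 6789) in this dump are exactly what the values file below uses for cephconf, csiConfig, and the StorageClass clusterID. If you only need the fsid:
$ ceph fsid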
02
Deploying the Ceph RBD Provisioner with Helm
1. Download the chart
$ curl -L -O https://github.com/ceph/ceph-csi/archive/refs/tags/v3.9.0.tar.gz
$ sudo tar xvf v3.9.0.tar.gz -C /etc/kubernetes/addons/
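The tarball unpacks into ceph-csi-3.9.0, and the chart installed below sits under its charts/ directory; a quick listing confirms the path before moving on (you should see ceph-csi-rbd next to ceph-csi-cephfs):
$ ls /etc/kubernetes/addons/ceph-csi-3.9.0/charts/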
2. Prepare the ceph-csi-rbd provisioner values file
$ cat <<'EOF' | sudo tee /etc/kubernetes/addons/ceph-csi-rbd-values.yaml > /dev/null
nodeplugin:
  # Name used for the nodeplugin resources
  fullnameOverride: ceph-csi-rbd-nodeplugin
  # Image addresses
  registrar:
    image:
      repository: 172.139.20.170:5000/library/csi-node-driver-registrar
  plugin:
    image:
      repository: 172.139.20.170:5000/library/cephcsi
      tag: v3.9.0
  # Tolerate all taints
  tolerations:
    - operator: Exists
provisioner:
  # Name used for the provisioner resources
  fullnameOverride: ceph-csi-rbd-provisioner
  # Image addresses
  provisioner:
    image:
      repository: 172.139.20.170:5000/library/csi-provisioner
  attacher:
    image:
      repository: 172.139.20.170:5000/library/csi-attacher
  resizer:
    image:
      repository: 172.139.20.170:5000/library/csi-resizer
  snapshotter:
    image:
      repository: 172.139.20.170:5000/library/csi-snapshotter
  # Pod anti-affinity
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - csi-rbdplugin-provisioner
          topologyKey: "kubernetes.io/hostname"
# kubelet data directory
kubeletDir: /var/lib/kubelet
# Driver name (i.e. the provisioner)
driverName: rbd.csi.ceph.com
# ceph.conf contents
cephconf: |
  [global]
  fsid = a43fa047-755e-4208-af2d-f6090154f902
  cluster_network = 172.139.20.0/24
  mon_initial_members = storage-ceph01, storage-ceph02, storage-ceph03
  mon_host = 172.139.20.20,172.139.20.208,172.139.20.94
  auth_cluster_required = cephx
  auth_service_required = cephx
  auth_client_required = cephx
# CSI configuration
csiConfig:
  - clusterID: a43fa047-755e-4208-af2d-f6090154f902
    monitors:
      - "172.139.20.20:6789"
      - "172.139.20.94:6789"
      - "172.139.20.208:6789"
# StorageClass configuration
storageClass:
  create: true
  name: ceph-rbd-storage
  clusterID: a43fa047-755e-4208-af2d-f6090154f902
  pool: kubernetes
  fstype: xfs # or ext4
  # With Retain, deleting a PV does not delete the backing RBD image; it must be removed manually
  reclaimPolicy: Retain # or Delete
  allowVolumeExpansion: true # allow expanding PV capacity
# The Ceph key information is only needed when storageClass.create is true
secret:
  create: true
  name: csi-rbd-secret
  userID: kubernetes
  userKey: AQArDbpmYEqxJhAAUP26aPfoHHr+saBtkjdTIw==
EOF
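Before installing, an optional pre-flight is to render the chart locally with these values and eyeball the generated StorageClass and Secret manifests:
$ helm -n storage-system template csi-rbd -f /etc/kubernetes/addons/ceph-csi-rbd-values.yaml /etc/kubernetes/addons/ceph-csi-3.9.0/charts/ceph-csi-rbd | less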
Tip (learned the hard way): if you originally deployed via manifests and have already recreated PVCs, do not delete the secret used by the old StorageClass; existing PVs still reference that secret. Otherwise, recreated pods will hang at CreateContainer indefinitely.
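To see which secret an existing PV actually references before deleting anything, you can inspect its CSI spec (substitute your own PV name; the field below is part of the standard CSI PV schema):
$ kubectl get pv <pv-name> -o jsonpath='{.spec.csi.nodeStageSecretRef}{"\n"}'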
3. Deploy the ceph-csi-rbd provisioner
$ helm -n storage-system install csi-rbd -f /etc/kubernetes/addons/ceph-csi-rbd-values.yaml /etc/kubernetes/addons/ceph-csi-3.9.0/charts/ceph-csi-rbd
NAME: csi-rbd
LAST DEPLOYED: Wed Jan 22 12:38:16 2025
NAMESPACE: storage-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Examples on how to configure a storage class and start using the driver are here:
https://github.com/ceph/ceph-csi/tree/devel/examples/rbd
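The release state can be re-checked at any time with Helm itself:
$ helm -n storage-system list
$ helm -n storage-system status csi-rbd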
03
Verification
1. Check that the pods are running
$ kubectl -n storage-system get pod
NAME                                       READY   STATUS    RESTARTS   AGE
ceph-csi-rbd-nodeplugin-2sq59              3/3     Running   0          137m
ceph-csi-rbd-nodeplugin-88jsp              3/3     Running   0          137m
ceph-csi-rbd-nodeplugin-b9m4x              3/3     Running   0          137m
ceph-csi-rbd-nodeplugin-bctsn              3/3     Running   0          137m
ceph-csi-rbd-nodeplugin-ch5lb              3/3     Running   0          137m
ceph-csi-rbd-nodeplugin-d88vh              3/3     Running   0          137m
ceph-csi-rbd-nodeplugin-nl9hq              3/3     Running   0          137m
ceph-csi-rbd-provisioner-5bd8bc984-5ws8t   7/7     Running   0          137m
ceph-csi-rbd-provisioner-5bd8bc984-977xn   7/7     Running   0          137m
ceph-csi-rbd-provisioner-5bd8bc984-d7ddv   7/7     Running   0          137m
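Beyond the pods, the CSIDriver object should be registered under the driverName from the values file, and the StorageClass should exist:
$ kubectl get csidriver rbd.csi.ceph.com
$ kubectl get sc ceph-rbd-storage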
2. Create a PVC and mount it in a Deployment
$ cat << EOF | kubectl apply -f -
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tools
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 3Gi
  storageClassName: ceph-rbd-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tools
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tools
  template:
    metadata:
      labels:
        app: tools
    spec:
      containers:
        - image: core.jiaxzeng.com/library/tools:v1.3
          name: tools
          volumeMounts:
            - name: data
              mountPath: /app
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: tools
EOF
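Before looking at the pod, it is worth confirming that the PVC was bound, which proves the provisioner dynamically created an RBD-backed PV:
$ kubectl get pvc tools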
3. Verify the pod
$ kubectl get pod -l app=tools
NAME                    READY   STATUS    RESTARTS   AGE
tools-6dc6f4bdc-qlgxt   1/1     Running   0          7s
$ kubectl exec -it deploy/tools -- df -h /app
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd2       3.0G   36M  3.0G   2% /app
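As a final cross-check on the Ceph side, the dynamically provisioned image should now appear in the kubernetes pool (ceph-csi generates image names with a csi-vol- prefix):
$ rbd ls kubernetes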
04
Conclusion
With the steps above, you can deploy the Ceph RBD provisioner with Helm and manage persistent storage in your Kubernetes cluster effectively. This improves the flexibility and reliability of the system and greatly simplifies day-to-day operations. I hope this article serves as a useful reference and helps you manage storage resources more efficiently. As the technology evolves, keep an eye on new tools and techniques, and keep refining your infrastructure to stay ahead.