1. Basic Environment Setup
Hostname   | IP address     | Role
ceph-node1 | 192.168.200.10 | admin node, osd, mon
ceph-node2 | 192.168.200.20 | osd, mon
ceph-node3 | 192.168.200.30 | osd, mon
client     | 192.168.200.40 | client
(1) Create four virtual machines at 192.168.200.10 (and .20, .30, .40). On each of the three server nodes (10, 20, 30), add one 20 GB SATA disk (sdb) and change the hostname; ceph-node1 is used as the example here
[root@localhost ~]# hostnamectl set-hostname ceph-node1
[root@localhost ~]# bash
[root@ceph-node1 ~]#
(2) ceph-node1 serves as the admin, osd, and mon node; ceph-node2 and ceph-node3 serve as osd and mon nodes; the fourth VM (192.168.200.40) is the client. Format the data disk on each server node, create the corresponding directory, and mount it. A sketch for making these mounts persistent follows the commands
[root@ceph-node1 ~]# mkfs.xfs /dev/sdb
[root@ceph-node1 ~]# mkdir /var/local/osd{0,1,2}
[root@ceph-node1 ~]# mount /dev/sdb /var/local/osd0/
[root@ceph-node1 ~]# chmod 777 -R /var/local/osd0
[root@ceph-node2 ~]# mkfs.xfs /dev/sdb
[root@ceph-node2 ~]# mkdir /var/local/osd{0,1,2}
[root@ceph-node2 ~]# mount /dev/sdb /var/local/osd1
[root@ceph-node2 ~]# chmod 777 -R /var/local/osd1
[root@ceph-node3 ~]# mkfs.xfs /dev/sdb
[root@ceph-node3 ~]# mkdir /var/local/osd{0,1,2}
[root@ceph-node3 ~]# mount /dev/sdb /var/local/osd2/
[root@ceph-node3 ~]# chmod 777 -R /var/local/osd2
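The mounts above do not survive a reboot. As an optional sketch, assuming /dev/sdb keeps its device name, an /etc/fstab entry like the following (with osd1/osd2 substituted on the other nodes) keeps the OSD directory mounted permanently:
[root@ceph-node1 ~]# echo '/dev/sdb /var/local/osd0 xfs defaults 0 0' >> /etc/fstab
[root@ceph-node1 ~]# mount -a    # re-reads fstab and should return without errors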
(3) Edit the /etc/hosts file on all four VMs to add the hostname-to-IP mappings; ceph-node1 is shown as the example, and an optional sketch for pushing the file to the other nodes follows the listing
[root@ceph-node1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.200.10 ceph-node1
192.168.200.20 ceph-node2
192.168.200.30 ceph-node3
192.168.200.40 client
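Rather than editing the file on every node by hand, the same file can also be pushed out from ceph-node1; this is only a sketch, and at this stage you will still be prompted for each node's root password:
[root@ceph-node1 ~]# for h in ceph-node2 ceph-node3 client; do scp /etc/hosts root@$h:/etc/hosts; done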
(4) Generate a root SSH key pair on ceph-node1 and copy it to all of the nodes
[root@ceph-node1 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): (press Enter)
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): (press Enter)
Enter same passphrase again: (press Enter)
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:I0eGEaG1NbMZ5Kb3jknKzs79DIHV3bjzNm0wZ6qt0Eo root@ceph-node1
The key's randomart image is:
+---[RSA 2048]----+
| =+* |
| o * B . o |
| . o O . o . |
| B . |
| + S o o o|
| + + . o B |
| o E . = o|
| + + B o + o |
| oB +.= o.. |
+----[SHA256]-----+
[root@ceph-node1 ~]# ssh-copy-id ceph-node1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'ceph-node1 (192.168.200.10)' can't be established.
ECDSA key fingerprint is SHA256:F14+O/8ehU0uc/E/UPkSIAdgljPv58RoOwm7fBqak4I.
ECDSA key fingerprint is MD5:17:ac:ba:e6:9d:8f:18:d0:71:75:0c:67:a6:36:03:27.
Are you sure you want to continue connecting (yes/no)? yes (type yes)
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@ceph-node1's password: (enter the root password of the ceph-node1 VM)
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'ceph-node1'"
and check to make sure that only the key(s) you wanted were added.
[root@ceph-node1 ~]# ssh-copy-id ceph-node2
[root@ceph-node1 ~]# ssh-copy-id ceph-node3
[root@ceph-node1 ~]# ssh-copy-id client
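As an optional sanity check, passwordless login to every node can be verified in one loop; each hostname should be printed without a password prompt:
[root@ceph-node1 ~]# for h in ceph-node1 ceph-node2 ceph-node3 client; do ssh $h hostname; done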
(5) Configure the Aliyun yum repository (the network repository must be configured on all four VMs); ceph-node1 is shown as the example
[root@ceph-node1 ~]# mv /etc/yum.repos.d/* /media/
[root@ceph-node1 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2523  100  2523    0     0  20018      0 --:--:-- --:--:-- --:--:-- 20184
(6) Install the NTP service and use a public NTP server on the Internet to keep the clocks of the three server nodes consistent (the hosts must be able to reach the Internet)
[root@ceph-node1 ~]# yum -y install ntp
[root@ceph-node1 ~]# ntpdate ntp1.aliyun.com
[root@ceph-node2 ~]# yum -y install ntp
[root@ceph-node2 ~]# ntpdate ntp1.aliyun.com
[root@ceph-node3 ~]# yum -y install ntp
[root@ceph-node3 ~]# ntpdate ntp1.aliyun.com
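ntpdate only performs a one-time synchronization. Optionally, the ntpd service can be enabled on each server node so the clocks stay aligned afterwards; shown here for ceph-node1:
[root@ceph-node1 ~]# systemctl enable ntpd
[root@ceph-node1 ~]# systemctl start ntpd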
(7) Add the Ceph yum repository file (the Ceph repository must be configured on all four VMs); ceph-node1 is shown as the example
[root@ceph-node1 ~]# yum -y install wget
[root@ceph-node1 ~]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
[root@ceph-node1 ~]# vi /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
priority=1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
gpgcheck=0
priority=1
[root@ceph-node1 ~]# yum clean all
[root@ceph-node1 ~]# yum makecache
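Optionally, confirm that the epel and ceph repositories are now active before installing any packages:
[root@ceph-node1 ~]# yum repolist | grep -Ei 'ceph|epel'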
2. Ceph Installation
(1) Install the ceph-deploy tool on ceph-node1, which is used to install and configure Ceph
[root@ceph-node1 ~]# yum -y install ceph-deploy
(2) Enter the configuration directory and create a new Ceph cluster with ceph-deploy, then edit the configuration file and add osd_pool_default_size = 2 so that placement groups can reach the active+clean state with two replicas (i.e. even when only two OSDs are available)
The new subcommand of ceph-deploy bootstraps a new cluster with the default cluster name ceph
[root@ceph-node1 ~]# mkdir /etc/ceph && cd /etc/ceph
[root@ceph-node1 ceph]# ceph-deploy new ceph-node1
[root@ceph-node1 ceph]# vi ceph.conf
[global]
fsid = bafd6b88-c929-439c-9824-60e2a12d2e45
mon_initial_members = ceph-node1
mon_host = 192.168.200.10
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 2
(3) On ceph-node1, use ceph-deploy to install the Ceph binary packages on all of the nodes
[root@ceph-node1 ceph]# ceph-deploy install ceph-node1 ceph-node2 ceph-node3 client
[root@ceph-node1 ceph]# ceph -v
ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)
(4) On ceph-node1, create the Ceph monitor and check the mon status
[root@ceph-node1 ceph]# ceph-deploy mon create ceph-node1
[root@ceph-node1 ceph]# ceph-deploy gatherkeys ceph-node1
[root@ceph-node1 ceph]# ceph mon stat
e1: 1 mons at {ceph-node1=192.168.200.10:6789/0}, election epoch 3, quorum 0 ceph-node1
3. OSD and MDS Deployment
(1) Stop and disable the firewall on the three server nodes (10, 20, 30); the commands are shown for ceph-node1
[root@ceph-node1 ceph]# systemctl stop firewalld
[root@ceph-node1 ceph]# systemctl disable firewalld
(2) Prepare the OSDs from ceph-node1
[root@ceph-node1 ceph]# ceph-deploy osd prepare ceph-node1:/var/local/osd0 ceph-node2:/var/local/osd1 ceph-node3:/var/local/osd2
(3) From ceph-node1, use ceph-deploy to activate the OSD nodes
[root@ceph-node1 ceph]# ceph-deploy osd activate ceph-node1:/var/local/osd0 ceph-node2:/var/local/osd1 ceph-node3:/var/local/osd2
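At this point a quick optional check confirms that all three OSDs have come up:
[root@ceph-node1 ceph]# ceph osd stat    # should report 3 osds: 3 up, 3 in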
(4) From ceph-node1, use ceph-deploy to list the OSD status of the three nodes
[root@ceph-node1 ceph]# ceph-deploy osd list ceph-node1 ceph-node2 ceph-node3
The listing for ceph-node1 serves as the example
(5) On ceph-node1, use ceph osd tree to view the OSD tree
[root@ceph-node1 ceph]# ceph osd tree
ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.05846 root default
-2 0.01949     host ceph-node1
 0 0.01949         osd.0            up  1.00000          1.00000
-3 0.01949     host ceph-node2
 1 0.01949         osd.1            up  1.00000          1.00000
-4 0.01949     host ceph-node3
 2 0.01949         osd.2            up  1.00000          1.00000
(6) From ceph-node1, use ceph-deploy to copy the configuration file and the admin keyring to all nodes, then make the keyring readable (shown on ceph-node1; a sketch for the remaining nodes follows)
[root@ceph-node1 ceph]# ceph-deploy admin ceph-node1 ceph-node2 ceph-node3
[root@ceph-node1 ceph]# chmod +r /etc/ceph/ceph.client.admin.keyring
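The chmod above only adjusts the keyring on ceph-node1. If admin commands will also be run from the other nodes, the same permission change can be applied remotely; a minimal sketch:
[root@ceph-node1 ceph]# for h in ceph-node2 ceph-node3; do ssh $h chmod +r /etc/ceph/ceph.client.admin.keyring; done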
(7) Use ceph health or ceph -s to check the OSD and cluster status
[root@ceph-node1 ceph]# ceph health
HEALTH_OK
[root@ceph-node1 ceph]# ceph -s
    cluster 02d597ca-fa57-494b-8b8c-b6209821bbb2
     health HEALTH_OK
     monmap e1: 1 mons at {ceph-node1=192.168.200.10:6789/0}
            election epoch 3, quorum 0 ceph-node1
     osdmap e14: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v25: 64 pgs, 1 pools, 0 bytes data, 0 objects
            15681 MB used, 45728 MB / 61410 MB avail
                  64 active+clean
(8) Deploy the MDS service: from ceph-node1, use ceph-deploy to create two MDS daemons
[root@ceph-node1 ceph]# ceph-deploy mds create ceph-node2 ceph-node3
(9) Check the MDS status and the cluster status
[root@ceph-node1 ceph]# ceph mds stat
e3:, 2 up:standby
[root@ceph-node1 ceph]# ceph -s
    cluster 02d597ca-fa57-494b-8b8c-b6209821bbb2
     health HEALTH_OK
     monmap e1: 1 mons at {ceph-node1=192.168.200.10:6789/0}
            election epoch 3, quorum 0 ceph-node1
     osdmap e14: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v25: 64 pgs, 1 pools, 0 bytes data, 0 objects
            15681 MB used, 45728 MB / 61410 MB avail
                  64 active+clean
4. Creating and Mounting the Ceph Filesystem
(1) On ceph-node1, create the storage pools required by the Ceph filesystem; first check which filesystems already exist
[root@ceph-node1 ceph]# ceph fs ls
No filesystems enabled
(2) Create the two storage pools, each with 128 placement groups
[root@ceph-node1 ceph]# ceph osd pool create cephfs_data 128
pool 'cephfs_data' created
[root@ceph-node1 ceph]# ceph osd pool create cephfs_metadata 128
pool 'cephfs_metadata' created
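Optionally, list the pools to confirm they were created:
[root@ceph-node1 ceph]# ceph osd lspools    # should now include cephfs_data and cephfs_metadata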
(3) With the pools in place, create the filesystem with the fs new command (in this walkthrough the filesystem is simply named 128; a more descriptive name such as cephfs would work the same way)
[root@ceph-node1 ceph]# ceph fs new 128 cephfs_metadata cephfs_data
new fs with metadata pool 2 and data pool 1
(4) Check the CephFS and MDS status after the filesystem has been created
[root@ceph-node1 ceph]# ceph fs ls
name: 128, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[root@ceph-node1 ceph]# ceph mds stat
e6: 1/1/1 up {0=ceph-node3=up:active}, 1 up:standby
(5) To mount the Ceph filesystem on the client, copy the admin key from ceph-node1 into the client's Ceph configuration directory; the admin.secret file has to be created by hand and must contain only the key string
[root@ceph-node1 ceph]# cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
key = AQCqBPhmadqKLRAACK9AWkuEqkkQYd1iBHYfNw==
[root@client ~]# vi /etc/ceph/admin.secret
AQCqBPhmadqKLRAACK9AWkuEqkkQYd1iBHYfNw==
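Instead of pasting the key by hand, it can also be written out in one step from ceph-node1 over the SSH trust that was set up earlier; a minimal sketch:
[root@ceph-node1 ceph]# ceph auth get-key client.admin | ssh client "cat > /etc/ceph/admin.secret"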
(6) Log in to the client node, create a mount point, and mount the Ceph filesystem
[root@client ~]# mkdir /opt/ceph
[root@client ~]# cd /opt/ceph/
[root@client ceph]# mount -t ceph 192.168.200.10:6789:/ /opt/ceph/ -o name=admin,secretfile=/etc/ceph/admin.secret
(7) On the client, use df -h to verify the mount
[root@client ceph]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 475M 0 475M 0% /dev
tmpfs 487M 0 487M 0% /dev/shm
tmpfs 487M 7.7M 479M 2% /run
tmpfs 487M 0 487M 0% /sys/fs/cgroup
/dev/mapper/centos-root 17G 1.8G 16G 11% /
/dev/sda1 1014M 138M 877M 14% /boot
tmpfs 98M 0 98M 0% /run/user/0
192.168.200.10:6789:/ 60G 16G 45G 26% /opt/ceph
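To have the client remount the filesystem automatically after a reboot, an /etc/fstab entry along these lines can optionally be added on the client (the _netdev option delays the mount until the network is up):
192.168.200.10:6789:/ /opt/ceph ceph name=admin,secretfile=/etc/ceph/admin.secret,_netdev 0 0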