Environment: CentOS 7.x
192.168.10.11 ceph1
192.168.10.12 ceph2
Part 1: Base environment setup
1. Configure China-local mirrors
Run on every node:
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
Configure the Ceph repository:
vim /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
priority=1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS
enabled=0
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
priority=1
Refresh the repo cache and update the system:
yum makecache
yum update
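Instead of editing with vim, the two Aliyun repo stanzas above can be written in one step with a heredoc. A sketch: it writes to the current directory by default so it can be tried without root; on a real node set REPO_DIR=/etc/yum.repos.d (the optional ceph-source stanza is left out here for brevity).

```shell
# Write the Ceph Jewel repo file in one step (heredoc instead of vim).
# REPO_DIR defaults to "." for a dry run; use /etc/yum.repos.d on the nodes.
REPO_DIR="${REPO_DIR:-.}"
cat > "$REPO_DIR/ceph.repo" <<'EOF'
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
priority=1

[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
priority=1
EOF
grep -c '^baseurl=' "$REPO_DIR/ceph.repo"   # prints 2
```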
2. Install Ceph (on every node)
yum install -y ceph
3. Disable SELinux
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0
(If SELinux is already disabled, setenforce just prints "setenforce: SELinux is disabled"; that is harmless.)
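The sed edit only takes effect after a reboot; setenforce 0 covers the running system. The rewrite can be rehearsed on a sample copy of the file without root:

```shell
# Demonstrate the SELINUX= rewrite on a sample copy of /etc/selinux/config.
# The ^ anchor keeps the pattern from touching SELINUXTYPE or comment text.
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > selinux-config.sample
sed -i 's/^SELINUX=.*/SELINUX=disabled/' selinux-config.sample
grep '^SELINUX=' selinux-config.sample   # SELINUX=disabled
```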
4. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
5. Install NTP so all nodes keep consistent time
yum install -y ntp ntpdate ntp-doc
Get the pool servers for your region from:
http://www.pool.ntp.org/zone/cn
Add them to the config file and comment out the default server lines:
/etc/ntp.conf
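For mainland China, the zone page above lists servers of the form below; these lines go into /etc/ntp.conf in place of the commented-out defaults (the hostnames are the standard cn pool names, iburst speeds up initial sync):

```
server 0.cn.pool.ntp.org iburst
server 1.cn.pool.ntp.org iburst
server 2.cn.pool.ntp.org iburst
server 3.cn.pool.ntp.org iburst
```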

Then sync once manually from a server, write the time to the hardware clock, and start ntpd:
ntpdate 0.cn.pool.ntp.org
hwclock -w
systemctl enable ntpd.service
systemctl start ntpd.service
Part 2: Ceph cluster installation
On the admin (deploy) node:
vim /etc/hosts
192.168.10.11 ceph1
192.168.10.12 ceph2
Generate an SSH key pair and copy it to each node:
ssh-keygen
ssh-copy-id ceph1
ssh-copy-id ceph2
Verify that passwordless login works, e.g. ssh ceph2 hostname.
1. Install the deployment tool ceph-deploy
yum install ceph-deploy -y
ceph-deploy --version
Create a working directory for the generated config files, and run the later ceph-deploy commands from inside it:
mkdir -p /data/ceph-deploy
cd /data/ceph-deploy
2. Create a new cluster with ceph1 and ceph2 as the initial monitor nodes
ceph-deploy new ceph1 ceph2
Files generated in the working directory:
ceph.conf ceph-deploy-ceph.log ceph.mon.keyring
ceph.conf as generated:
[global]
fsid = d6356d8a-35db-4e0a-8195-ccdf5d71bd43
mon_initial_members = ceph1, ceph2
mon_host = 192.168.10.11,192.168.10.12
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
Modify it to:
[global]
fsid = d6356d8a-35db-4e0a-8195-ccdf5d71bd43
mon_initial_members = ceph1, ceph2
mon_host = 192.168.10.11,192.168.10.12
auth_cluster_required = none
auth_service_required = none
auth_client_required = none
osd pool default size = 2
public network = 192.168.10.0/24
Parameter notes:
cephx changed to none: disables cephx authentication entirely (acceptable for a test cluster; keep cephx in production)
osd pool default size: number of replicas per object, set to 2 here to match the two OSD nodes
public network: the client-facing network of the cluster (OSD replication also uses it unless a separate cluster network is configured); set it to match your environment
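Since the auth edits are easy to mistype, the modified [global] section can be sanity-checked with grep. A sketch using a sample copy (the fsid is the example value from this walkthrough; on the admin node, grep the real ceph.conf in the ceph-deploy working directory instead):

```shell
# Write a sample of the edited [global] section, then verify that all
# three auth_* keys were switched to none and the replica count is 2.
cat > ceph.conf.sample <<'EOF'
[global]
fsid = d6356d8a-35db-4e0a-8195-ccdf5d71bd43
mon_initial_members = ceph1, ceph2
mon_host = 192.168.10.11,192.168.10.12
auth_cluster_required = none
auth_service_required = none
auth_client_required = none
osd pool default size = 2
public network = 192.168.10.0/24
EOF
grep -c '= none$' ceph.conf.sample   # prints 3
grep '^osd pool default size' ceph.conf.sample
```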
3. Deploy the monitors and gather the keys
ceph-deploy --overwrite-conf mon create-initial
If this step fails, clean out leftover Ceph state under the following paths and retry:
/tmp/
/etc/ceph/
/var/lib/ceph/mon
4. Distribute the config file and admin keyring (this two-node cluster has only ceph1 and ceph2)
ceph-deploy admin ceph1 ceph2
5. Deploy the OSDs
Without enough spare disks, plain directories can stand in for OSD devices.
Create the directory on every node:
mkdir -p /data/ceph/osd1
chmod -R 777 /data
or
chown -R ceph:ceph /data
On the admin node, prepare the OSDs:
ceph-deploy osd prepare ceph1:/data/ceph/osd1 ceph2:/data/ceph/osd1
Activate the OSDs:
ceph-deploy osd activate ceph1:/data/ceph/osd1 ceph2:/data/ceph/osd1
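The per-node directory setup above can be sketched as a short script. BASE is parameterized so the commands can be rehearsed locally without root; on each real node run it as root with BASE=/data:

```shell
# Sketch of the per-node OSD directory setup from step 5.
# BASE defaults to a local demo path; use BASE=/data on the real nodes.
BASE="${BASE:-./demo-data}"
mkdir -p "$BASE/ceph/osd1"
# On the nodes: chown -R ceph:ceph "$BASE/ceph"   (or chmod -R 777 "$BASE")
ls -d "$BASE/ceph/osd1"   # confirms the directory exists
```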
6. Deploy mgr
(Note: the ceph-mgr daemon was introduced after Jewel and is only required from Luminous on; with the Jewel packages installed above this step may not apply.)
ceph-deploy mgr create ceph1:ceph1_mgr ceph2:ceph2_mgr
If it complains about a missing directory, create the corresponding one:
/var/lib/ceph/mgr/ceph-ceph1_mgr
/var/lib/ceph/mgr/ceph-ceph2_mgr
