Table of Contents
- Preface
- I. Introduction to ZooKeeper
- II. Fully Distributed ZooKeeper Setup
- 1. Unpack the installation package and configure environment variables
- 2. ZooKeeper configuration
Preface
First, create three virtual machines, hadoop1, hadoop2, and hadoop3, and configure their IP addresses and hosts files so the hosts can resolve one another; once that is done, you can set up ZooKeeper.
I. Introduction to ZooKeeper
ZooKeeper is an open-source distributed coordination service. Its goal is to encapsulate the complex and error-prone coordination tasks of distributed applications into an efficient, reliable set of primitives, and to expose them through a collection of simple, easy-to-use interfaces; a central use is avoiding single points of failure in distributed systems.
II. Fully Distributed ZooKeeper Setup
1. Unpack the installation package and configure environment variables
# Unpack the installation package
tar -zxvf apache-zookeeper-3.7.0-bin.tar.gz -C /export/servers/
# Rename the ZooKeeper directory
mv /export/servers/apache-zookeeper-3.7.0-bin /export/servers/zookeeper-3.7.0
# Edit the environment variables
vi /etc/profile
export ZK_HOME=/export/servers/zookeeper-3.7.0
export PATH=$PATH:$ZK_HOME/bin
# Reload the profile
source /etc/profile
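After reloading the profile, it is worth confirming that the two variables actually took effect; a minimal check (the path is the install directory configured above):

```shell
# Re-create the two profile lines from above and verify them
# (the path is the install directory configured in this section).
export ZK_HOME=/export/servers/zookeeper-3.7.0
export PATH=$PATH:$ZK_HOME/bin
echo "ZK_HOME is $ZK_HOME"
# On the VM, zkServer.sh should now resolve:
#   command -v zkServer.sh
```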
2. ZooKeeper configuration
cd /export/servers/zookeeper-3.7.0/conf/
# Copy ZooKeeper's template configuration file and name the copy zoo.cfg
cp zoo_sample.cfg zoo.cfg
# Edit the ZooKeeper configuration file
vi zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/export/data/zookeeper/zkdata
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
# The three quorum members: host, follower port, and election port
server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888
server.3=hadoop3:2888:3888
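In each server.N line, N must match the host's myid, the first port (2888) carries follower-to-leader traffic, and the second (3888) is used for leader election. A quick sanity check that all three quorum members are declared (a sketch: CFG points at a scratch copy here so it is self-contained; on the VMs use $ZK_HOME/conf/zoo.cfg):

```shell
# Sanity-check the quorum definition. CFG is a scratch file here so the
# sketch is self-contained; on the VMs point it at $ZK_HOME/conf/zoo.cfg.
CFG=/tmp/zoo.cfg.check
cat > "$CFG" <<'EOF'
server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888
server.3=hadoop3:2888:3888
EOF
MEMBERS=$(grep -c '^server\.' "$CFG")
echo "quorum members declared: $MEMBERS"
```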
Create the data directory on each of the three virtual machines:
mkdir -p /export/data/zookeeper/zkdata/
# Write each machine's myid (run exactly one of these, on the matching host)
echo 1 > /export/data/zookeeper/zkdata/myid    # on hadoop1
echo 2 > /export/data/zookeeper/zkdata/myid    # on hadoop2
echo 3 > /export/data/zookeeper/zkdata/myid    # on hadoop3
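Since the hostnames already encode the ids, the three echo commands can also be collapsed into one snippet run identically on every node (a sketch assuming the hadoop1..hadoop3 naming; HOST and ZKDATA are stubbed here so the sketch is self-contained):

```shell
# Derive myid from the hostname. HOST and ZKDATA are stubbed so the
# sketch runs anywhere; on a real node use HOST=$(hostname) and the
# dataDir from zoo.cfg (/export/data/zookeeper/zkdata).
HOST=${HOST:-hadoop2}
ZKDATA=${ZKDATA:-/tmp/zookeeper/zkdata}
mkdir -p "$ZKDATA"
ID=${HOST#hadoop}          # strip the "hadoop" prefix: hadoop2 -> 2
echo "$ID" > "$ZKDATA/myid"
echo "myid on $HOST: $(cat "$ZKDATA/myid")"
```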
# Copy the ZooKeeper installation to hadoop2 and hadoop3
scp -r /export/servers/zookeeper-3.7.0/ hadoop2:/export/servers/
scp -r /export/servers/zookeeper-3.7.0/ hadoop3:/export/servers/
# Copy the environment variables to hadoop2 and hadoop3
scp /etc/profile hadoop2:/etc/profile
scp /etc/profile hadoop3:/etc/profile
# Reload the environment variables (run this on hadoop2 and hadoop3 as well)
source /etc/profile
# Start the ZooKeeper service on each of the three hadoop hosts
zkServer.sh start
Check the state of the running cluster on each node with the zkServer.sh status command: hadoop1 reports Mode: follower and hadoop3 reports Mode: leader (hadoop2 is the remaining follower; which node wins the election can vary between runs).
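The role can also be pulled out of the zkServer.sh status output programmatically, which is handy in scripts; a sketch, with STATUS_OUT standing in for the real command output:

```shell
# Extract the node's role from `zkServer.sh status` output. STATUS_OUT
# is a captured sample here so the sketch runs anywhere; on a live node:
#   STATUS_OUT=$(zkServer.sh status 2>&1)
STATUS_OUT='Using config: /export/servers/zookeeper-3.7.0/bin/../conf/zoo.cfg
Mode: leader'
ROLE=$(printf '%s\n' "$STATUS_OUT" | awk -F': ' '/^Mode/{print $2}')
echo "role: $ROLE"
```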