# Setting Up a Local Hadoop HA, HBase, Spark, Flink, and ZooKeeper Cluster with Docker

This article walks through building a distributed cluster with Docker containers, covering Hadoop, HBase, Spark, Flink, and ZooKeeper. It pulls the required images, writes Docker Compose files, and sets environment variables so that each component is highly available and can communicate over a shared network. It also covers deploying Kafka and kafka-manager, and setting up and configuring the Hadoop cluster.



![在这里插入图片描述](https://img-blog.csdnimg.cn/20200401232922755.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3dlaXhpbl80MzI3MjYwNQ==,size_16,color_FFFFFF,t_70)


![在这里插入图片描述](https://img-blog.csdnimg.cn/20200401231813368.png)




---


## Kafka Cluster Setup




---



```shell
# Pull the Kafka image and the kafka-manager image
docker pull wurstmeister/kafka:2.12-2.3.1
docker pull sheepkiller/kafka-manager
```
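The compose file below joins an external Docker network named `bigdata` and assigns fixed IPv4 addresses in `172.25.0.x`. If that network does not exist yet, it has to be created before `docker-compose up`. A minimal sketch; the `172.25.0.0/16` subnet is an assumption inferred from the static addresses used below:

```shell
# Create the shared bridge network that the compose files declare as "external".
# The subnet must cover the static IPs assigned to the containers (172.25.0.x).
docker network create --driver bridge --subnet 172.25.0.0/16 bigdata
```

This is an operational step against a local Docker daemon, so the exact subnet and driver should be adjusted to match your environment.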


###### Edit the docker-compose.yml file



```yaml
version: '2'

services:
  broker1:
    image: wurstmeister/kafka:2.12-2.3.1
    restart: always              # restart automatically on failure
    hostname: broker1            # container hostname
    container_name: broker1      # container name
    privileged: true             # allow extended privileges inside the container
    ports:
      - "9091:9092"              # map container port 9092 to host port 9091
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENERS: PLAINTEXT://broker1:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker1:9092
      KAFKA_ADVERTISED_HOST_NAME: broker1
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1
      JMX_PORT: 9988             # JMX port used by kafka-manager
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./broker1:/kafka/kafka-logs-broker1
    external_links:              # link to containers outside this compose file
      - zoo1
      - zoo2
      - zoo3
    networks:
      default:
        ipv4_address: 172.25.0.14

  broker2:
    image: wurstmeister/kafka:2.12-2.3.1
    restart: always
    hostname: broker2
    container_name: broker2
    privileged: true
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_LISTENERS: PLAINTEXT://broker2:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker2:9092
      KAFKA_ADVERTISED_HOST_NAME: broker2
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1
      JMX_PORT: 9988
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./broker2:/kafka/kafka-logs-broker2
    external_links:              # link to containers outside this compose file
      - zoo1
      - zoo2
      - zoo3
    networks:
      default:
        ipv4_address: 172.25.0.15

  broker3:
    image: wurstmeister/kafka:2.12-2.3.1
    restart: always
    hostname: broker3
    container_name: broker3
    privileged: true
    ports:
      - "9093:9092"
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_LISTENERS: PLAINTEXT://broker3:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker3:9092
      KAFKA_ADVERTISED_HOST_NAME: broker3
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1
      JMX_PORT: 9988
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./broker3:/kafka/kafka-logs-broker3
    external_links:              # link to containers outside this compose file
      - zoo1
      - zoo2
      - zoo3
    networks:
      default:
        ipv4_address: 172.25.0.16

  kafka-manager:
    image: sheepkiller/kafka-manager:latest
    restart: always
    container_name: kafka-manager
    hostname: kafka-manager
    ports:
      - "9000:9000"
    links:                       # link to containers created by this compose file
      - broker1
      - broker2
      - broker3
    external_links:              # link to containers outside this compose file
      - zoo1
      - zoo2
      - zoo3
    environment:
      ZK_HOSTS: zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1
      KAFKA_BROKERS: broker1:9092,broker2:9092,broker3:9092
      APPLICATION_SECRET: letmein
      KM_ARGS: -Djava.net.preferIPv4Stack=true
    networks:
      default:
        ipv4_address: 172.25.0.10

networks:
  default:
    external:                    # use an already-created network
      name: bigdata
```



```shell
# Bring up the cluster
docker-compose up -d
```
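Once the containers are running, a quick smoke test is to create a replicated topic from inside one of the brokers and check that all three brokers hold a replica. A sketch, assuming the wurstmeister image puts the Kafka CLI scripts on the `PATH` and using the ZooKeeper chroot `/kafka1` from the compose file; the topic name `smoke-test` is just an example:

```shell
# Create a topic replicated across all three brokers
docker exec broker1 kafka-topics.sh --create \
  --zookeeper zoo1:2181/kafka1 \
  --replication-factor 3 --partitions 3 --topic smoke-test

# Describe the topic: each partition should list three replicas (brokers 1, 2, 3)
docker exec broker1 kafka-topics.sh --describe \
  --zookeeper zoo1:2181/kafka1 --topic smoke-test
```

If the describe output shows in-sync replicas on all three broker IDs, the brokers registered correctly with ZooKeeper.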


![在这里插入图片描述](https://img-blog.csdnimg.cn/20200401235440792.png)  
**We can confirm that port 9000 is indeed up on the host:**
 ![在这里插入图片描述](https://img-blog.csdnimg.cn/20200401235459198.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3dlaXhpbl80MzI3MjYwNQ==,size_16,color_FFFFFF,t_70)  
 ![在这里插入图片描述](https://img-blog.csdnimg.cn/20200401235732298.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3dlaXhpbl80MzI3MjYwNQ==,size_16,color_FFFFFF,t_70)




---


## Hadoop High-Availability Cluster Setup




---


#### Creating the cluster with docker-compose



```yaml
version: '2'

services:
  master:
    image: hadoop:latest
    restart: always              # restart automatically on failure
    hostname: master             # container hostname
    container_name: master       # container name
    privileged: true             # allow extended privileges inside the container
    networks:
      default:
        ipv4_address: 172.25.0.3

  master_standby:
    image: hadoop:latest
    restart: always
    hostname: master_standby
    container_name: master_standby
    privileged: true
    networks:
      default:
        ipv4_address: 172.25.0.4

  slave01:
```
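Before configuring Hadoop itself, it is worth confirming that the containers landed on the shared `bigdata` network at their fixed addresses and can reach each other by name. A sketch against a local Docker daemon; the service names come from the compose file above:

```shell
# Show the IP the master container received on the shared network
# (should print 172.25.0.3 per the compose file)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' master

# Check name resolution and reachability between the two masters
docker exec master ping -c 1 master_standby
```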
