1. Errors when starting nodes in pseudo-distributed mode
./start-dfs.sh
Starting namenodes on [10.1.4.57]
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Starting datanodes
ERROR: Attempting to operate on hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.
Starting secondary namenodes [10.1.4.57]
ERROR: Attempting to operate on hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.
Fix: specify the users in hadoop-env.sh:
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
Start again and everything comes up fine. Note that if one of the ***_USER variables is set to the wrong value, startup fails with an error like:
cannot set priority of datanode process 32156
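With all three variables set, the restart can be checked with jps (a minimal sketch; the exact process list depends on which daemons you run):
./stop-dfs.sh
./start-dfs.sh
jps   # should list NameNode, DataNode and SecondaryNameNode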
2. namenode starts successfully, but the web UI cannot be reached
Check the Linux firewall status:
firewall-cmd --state
CentOS 7 uses firewalld by default, unlike earlier releases that used iptables. Common firewall commands:
Stop the firewall: systemctl stop firewalld.service
Start the firewall: systemctl start firewalld.service
Disable starting on boot: systemctl disable firewalld.service
Enable starting on boot: systemctl enable firewalld.service
After stopping the firewall and disabling it on boot, http://ip:9870/ can be accessed successfully.
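If you would rather keep firewalld running, opening just the web UI port should also work (a sketch assuming the default firewalld zone; 9870 is the NameNode web port in Hadoop 3.x):
firewall-cmd --permanent --add-port=9870/tcp
firewall-cmd --reload
firewall-cmd --list-ports   # confirm 9870/tcp is now listed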
3. datanode fails to start
2018-04-23 10:33:29,644 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /tmp/hadoop-root/dfs/data/in_use.lock acquired by nodename 3692@localhost
2018-04-23 10:33:29,647 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage directory [DISK]file:/tmp/hadoop-root/dfs/data
java.io.IOException: Incompatible clusterIDs in /tmp/hadoop-root/dfs/data: namenode clusterID = CID-103c769e-5fff-427c-9913-1004480fce63; datanode clusterID = CID-659951c9-642c-4325-8dc3-79f5edbdf175
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:736)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataSt
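The two clusterIDs diverge when the NameNode is reformatted while the DataNode keeps its old storage directory. On a throwaway pseudo-distributed setup, one common fix (a sketch, assuming the data under /tmp/hadoop-root/dfs/data is disposable) is to clear the DataNode directory and restart:
./stop-dfs.sh
rm -rf /tmp/hadoop-root/dfs/data   # path taken from the log above; this deletes the DataNode's blocks
./start-dfs.sh
If the data must be kept, the clusterID recorded in the DataNode's current/VERSION file can instead be edited to match the NameNode's clusterID.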