Deployment environment
OS: Ubuntu 12.04 Server
Hadoop: CDH3U6
Machine list: namenode 192.168.71.46; datanodes 192.168.71.202, 192.168.71.203, 192.168.71.204
Installing Hadoop
Add the package repository
Create /etc/apt/sources.list.d/cloudera-3u6.list and insert:
deb http://192.168.52.100/hadoop maverick-cdh3 contrib
deb-src http://192.168.52.100/hadoop maverick-cdh3 contrib
Add the GPG key by running
curl -s http://archive.cloudera.com/debian/archive.key | sudo apt-key add -
Update the package index
apt-get update
On the namenode, install hadoop-0.20-namenode and hadoop-0.20-jobtracker
apt-get install -y --force-yes hadoop-0.20-namenode hadoop-0.20-jobtracker
On each datanode, install hadoop-0.20-datanode and hadoop-0.20-tasktracker
apt-get install -y --force-yes hadoop-0.20-datanode hadoop-0.20-tasktracker
Configure passwordless SSH login
On the namenode machine, run
ssh-keygen -t rsa
Press Enter through all the prompts. Then copy the contents of the id_rsa.pub generated in ~/.ssh to the end of /root/.ssh/authorized_keys on each datanode machine; if a machine does not have this file yet, create it manually.
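For example, assuming root SSH logins are still allowed on the datanodes, the key can be appended with something along these lines (repeat for 192.168.71.203 and 192.168.71.204), or with ssh-copy-id where available:
cat ~/.ssh/id_rsa.pub | ssh [email protected] "mkdir -p /root/.ssh && cat >> /root/.ssh/authorized_keys"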
Create the Hadoop storage directories and change their owners
mkdir /opt/hadoop
chown hdfs:hadoop /opt/hadoop
mkdir /opt/hadoop/mapred
chown mapred:hadoop /opt/hadoop/mapred
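The hdfs and mapred users referenced here are created by the CDH packages. Since hadoop.tmp.dir, dfs.data.dir and mapred.local.dir below all live under /opt/hadoop, the same directories are presumably needed on every node, so the commands can be repeated on the datanodes as well, e.g. over SSH (and likewise for .203 and .204):
ssh [email protected] "mkdir -p /opt/hadoop/mapred && chown hdfs:hadoop /opt/hadoop && chown mapred:hadoop /opt/hadoop/mapred"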
Modify the configuration files and distribute them
Change /etc/hadoop/conf/core-site.xml to:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://192.168.71.46:8020</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/hadoop</value>
</property>
</configuration>
Change /etc/hadoop/conf/hdfs-site.xml to:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.balance.bandwidthPerSec</name>
<value>10485760</value>
</property>
<property>
<name>dfs.block.size</name>
<value>134217728</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/opt/hadoop/dfs/data</value>
</property>
<property>
<name>dfs.datanode.max.xcievers</name>
<value>4096</value>
</property>
<property>
<name>dfs.namenode.handler.count</name>
<value>100</value>
</property>
</configuration>
Change /etc/hadoop/conf/mapred-site.xml to:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.child.java.opts</name>
<value>-Xmx1024m</value>
</property>
<property>
<name>mapred.job.tracker</name>
<value>192.168.71.46:8021</value>
</property>
<property>
<name>mapred.jobtracker.taskScheduler</name>
<value>org.apache.hadoop.mapred.CapacityTaskScheduler</value>
</property>
<property>
<name>mapred.queue.names</name>
<value>default,extract</value>
</property>
<property>
<name>mapred.tasktracker.map.tasks.maximum</name>
<value>44</value>
</property>
<property>
<name>mapred.tasktracker.reduce.tasks.maximum</name>
<value>22</value>
</property>
<property>
<name>mapred.local.dir</name>
<value>/opt/hadoop/mapred/local</value>
</property>
<property>
<name>mapred.system.dir</name>
<value>/user/mapred/system</value>
</property>
<property>
<name>mapreduce.jobtracker.staging.root.dir</name>
<value>/user/mapred/staging</value>
</property>
<property>
<name>mapred.temp.dir</name>
<value>/user/mapred/temp</value>
</property>
</configuration>
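Note that mapred.jobtracker.taskScheduler enables the CapacityTaskScheduler and mapred.queue.names declares two queues (default and extract). In Hadoop 0.20 the capacity scheduler also expects each queue's share to be defined in /etc/hadoop/conf/capacity-scheduler.xml; a minimal sketch might look like the following (the 50/50 split is only an assumed example, not part of this deployment):
<?xml version="1.0"?>
<configuration>
<property>
<name>mapred.capacity-scheduler.queue.default.capacity</name>
<value>50</value>
</property>
<property>
<name>mapred.capacity-scheduler.queue.extract.capacity</name>
<value>50</value>
</property>
</configuration>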
Distribute the conf folder to the datanode machines.
scp -r /etc/hadoop/conf/* root@192.168.71.202:/etc/hadoop/conf/
scp -r /etc/hadoop/conf/* root@192.168.71.203:/etc/hadoop/conf/
scp -r /etc/hadoop/conf/* root@192.168.71.204:/etc/hadoop/conf/
Limit settings
vi /etc/security/limits.conf
Add the following lines:
* soft nofile 65535
* hard nofile 131070
root soft nofile 65535
root hard nofile 131070
hdfs soft nofile 65535
hdfs hard nofile 131070
mapred soft nofile 65535
mapred hard nofile 131070
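After logging out and back in, the new open-file limit for the current shell can be checked with:
ulimit -n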
hosts and hostname settings
Edit /etc/hosts and add the hostname and corresponding IP address of every machine in the cluster.
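For example (the hostnames below are placeholders; use each machine's actual hostname):
192.168.71.46   namenode
192.168.71.202  datanode1
192.168.71.203  datanode2
192.168.71.204  datanode3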
Check the web UI
http://192.168.71.46:50070