Aim: Installation of Single Node Hadoop Cluster on Ubuntu
THEORY:
Apache Hadoop 3.1 has noticeable improvements and many bug fixes over the previous stable 3.0 release. This version brings many improvements in HDFS and MapReduce. This how-to guide will help you to set up a Hadoop 3.1.0 single-node cluster on CentOS/RHEL 7/6/5, Ubuntu 18.04, 17.10, 16.04 and 14.04, Debian 9/8/7 and LinuxMint systems. This article has been tested with Ubuntu 18.04 LTS.
1. Prerequisites
Java is the primary requirement for running Hadoop on any system, so first make sure Java is installed; a quick way to check is shown below. If Java is not installed, Step 1.1 below covers the installation. Hadoop 3.1 supports only Java 8, so if any other version is already present, uninstall it with one of the commands that follow the check.
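A quick check of the currently installed version (assuming java is already on your PATH):
java -version
If this prints a version other than 1.8.x, remove it first: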
sudo apt-get purge openjdk-* icedtea-* icedtea6-*
OR
sudo apt remove openjdk-8-jdk
Step 1.1 – Install Oracle Java 8 on Ubuntu
You need to enable an additional repository on your system to install Java 8 on an Ubuntu VPS, and then install Oracle Java 8 using apt-get. This repository contains a package named oracle-java8-installer, which is not an actual Java package; instead, it contains a script that installs Java on Ubuntu. Run the commands below to install Java 8 on Ubuntu and LinuxMint.
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
OR
sudo apt install openjdk-8-jre-headless
sudo apt install openjdk-8-jdk
Step 1.2 – Verify Java Installation
The apt repository also provides the package oracle-java8-set-default to set Java 8 as your default Java version. This package is normally installed along with the Java installation; to make sure it is present, run the command below.
sudo apt-get install oracle-java8-set-default
After successfully installing Oracle Java 8 using the steps above, let's verify the installed version using the following command.
java -version
java version "1.8.0_201"
Java(TM) SE Runtime Environment (build 1.8.0_201-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode)
Step 1.3 – Set up the JAVA_HOME and JRE_HOME Variables
Add the Java path to the JAVA_HOME variable in the .bashrc file. Go to your home directory and, in the folder options, enable "show hidden files". The .bashrc file will then be visible; open it and add the following line at the end.
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
NOTE – The Java path must be the path at which Java is installed on your machine.
NOTE – After making the changes and saving the file, run the following command to apply the changes from .bashrc to the current shell.
source ~/.bashrc
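To confirm the variable is set (a quick sanity check):
echo $JAVA_HOME
This should print /usr/lib/jvm/java-8-oracle, or whatever path you set above.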
All done, you have successfully installed Java 8 on a Linux system.
2. Create Hadoop User
We recommend creating a normal (not root) account for Hadoop to work under. Create the account using the following commands.
adduser hadoop
passwd hadoop
This sets up a new user dedicated to Hadoop, separate from the normal users.
NOTE – It is compulsory to create a separate user with the username hadoop; otherwise you may run into path and file-permission issues later.
Also run these commands from a user with admin privileges on the machine. To create the user and add it to the sudo group in one step:
sudo adduser hadoop sudo
If you have already created the user and want to give it sudo/root privileges, run the following command instead.
sudo usermod -a -G sudo hadoop
Otherwise, you can directly edit the permission lines in the sudoers file. Switch to root by running
sudo -i or su - <username>
then run the following command and add a permission line for the hadoop user to the file.
visudo
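The line to add typically grants the hadoop user full sudo rights; a common form (adjust to your own policy) is:
hadoop ALL=(ALL:ALL) ALL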
After creating the account, it is also required to set up key-based SSH to the account itself. To do this, execute the following commands.
su - hadoop
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
Let's verify key-based login. The command below should not ask for a password, but the first time it will prompt to add the host's RSA key to the list of known hosts.
ssh localhost
exit
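If ssh localhost fails with a "connection refused" error, the OpenSSH server may not be installed; on Ubuntu it can be added with:
sudo apt-get install openssh-server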
Disable all firewall restrictions
sudo ufw disable
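You can confirm the firewall is off with:
sudo ufw status
It should report "Status: inactive".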
If ufw is not available on your system, use:
service iptables stop
OR
sudo chkconfig iptables off
Sometimes it is better to manage the firewall using third-party software, e.g. YaST.
3. Download Hadoop 3.1 Archive
In this step, download the Hadoop 3.1 archive file using the command below. You can also select an alternate download mirror to increase download speed.
cd ~
wget http://www-eu.apache.org/dist/hadoop/common/hadoop-3.1.0/hadoop-3.1.0.tar.gz
tar xzf hadoop-3.1.0.tar.gz
mv hadoop-3.1.0 hadoop
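A quick check that the files are in place:
ls ~/hadoop
This should list bin, etc, sbin and the other directories of the Hadoop distribution.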
4. Setup Hadoop Pseudo-Distributed Mode
4.1. Setup Hadoop Environment Variables
First, we need to set the environment variables used by Hadoop. Edit the ~/.bashrc file and append the following values at the end of the file.
export HADOOP_HOME=/home/hadoop/hadoop
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
Now apply the changes to the current running environment:
source ~/.bashrc
Now edit the $HADOOP_HOME/etc/hadoop/hadoop-env.sh file and set the JAVA_HOME environment variable. Change the Java path as per the installation on your system; this path may vary with your operating system version and installation source, so make sure you use the correct path.
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
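If you installed OpenJDK instead of Oracle Java, the path will differ. One way to locate it (assuming the java binary is on your PATH):
readlink -f $(which java)
Strip the trailing /bin/java from the output to get the value for JAVA_HOME, e.g. /usr/lib/jvm/java-8-openjdk-amd64.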
4.2. Setup Hadoop Configuration Files
Hadoop has many of configuration files, which need to configure as per requirements of your
Hadoop infrastructure. Let’s start with the configuration with basic Hadoop single node
cluster setup. first, navigate to below location
cd $HADOOP_HOME/etc/hadoop
Edit core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
Edit hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:///home/hadoop/hadoopdata/hdfs/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:///home/hadoop/hadoopdata/hdfs/datanode</value>
</property>
</configuration>
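The namenode and datanode directories referenced above do not exist yet. Hadoop will normally create them on first use, but creating them up front (a precaution, not required by the guide) avoids permission surprises:
mkdir -p /home/hadoop/hadoopdata/hdfs/namenode /home/hadoop/hadoopdata/hdfs/datanode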
Edit mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
Edit yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
4.3. Format Namenode
Now format the namenode using the following command, and check in the output that the storage directory has been formatted successfully.
hdfs namenode -format
Sample output:
WARNING: /home/hadoop/hadoop/logs does not exist. Creating.
2018-05-02 17:52:09,678 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = localhost/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 3.1.0
...
...
...
2018-05-02 17:52:13,717 INFO common.Storage: Storage directory /home/hadoop/hadoopdata/hdfs/namenode has been successfully formatted.
2018-05-02 17:52:13,806 INFO namenode.FSImageFormatProtobuf: Saving image file /home/hadoop/hadoopdata/hdfs/namenode/current/fsimage.ckpt_0000000000000000000 using no compression
2018-05-02 17:52:14,161 INFO namenode.FSImageFormatProtobuf: Image file /home/hadoop/hadoopdata/hdfs/namenode/current/fsimage.ckpt_0000000000000000000 of size 391 bytes saved in 0 seconds .
2018-05-02 17:52:14,224 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2018-05-02 17:52:14,282 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost/127.0.1.1
************************************************************/
5. Start Hadoop Cluster
Let's start your Hadoop cluster using the scripts provided by Hadoop. Just navigate to your $HADOOP_HOME/sbin directory and execute the scripts one by one.
cd $HADOOP_HOME/sbin/
Now run start-dfs.sh script.
./start-dfs.sh
Sample output:
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [localhost]
2018-05-02 18:00:32,565 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
Now run start-yarn.sh script.
./start-yarn.sh
Sample output:
Starting resourcemanager
Starting nodemanagers
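To confirm that all daemons are up, jps (shipped with the JDK) lists the running Java processes:
jps
The output should include NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager.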
6. Access Hadoop Services in Browser
Hadoop NameNode starts on port 9870 by default. Access your server on port 9870 in your favorite web browser.
http://localhost:9870/
Now access port 8042 to get information about the cluster and all applications.
http://localhost:8042/
Access port 9864 to get details about your Hadoop node.
http://localhost:9864/
7. Test Hadoop Single Node Setup
7.1 Make the required HDFS directories using the following commands.
bin/hdfs dfs -mkdir /user
bin/hdfs dfs -mkdir /user/hadoop
7.2 Copy all files from the local file system directory /var/log/apache2 to the Hadoop distributed file system using the command below.
bin/hdfs dfs -put /var/log/apache2 logs
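To confirm the upload from the command line (before checking in the browser):
bin/hdfs dfs -ls /user/hadoop/logs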
7.3 Browse the Hadoop distributed file system by opening the URL below in the browser. You will see the copied log files in the list.
http://localhost:9870/explorer.html#/user/hadoop/logs/
Prof. Vaishali Shirsath