- Introduction to Hadoop
- Hadoop nodes & daemons
- Hadoop architecture
- Characteristics
- Hadoop features
Hadoop: the technology that powers Yahoo, Facebook, Twitter, Walmart, and others.
An open-source framework that allows distributed processing of large data sets across a cluster of commodity hardware.
An Open Source framework that allows distributed processing of large data sets across a cluster of commodity hardware.

Open Source
- Source code is freely available
- It may be redistributed and modified
An open-source framework that allows Distributed Processing of large data sets across a cluster of commodity hardware.

Distributed Processing
- Data is distributed and processed on multiple nodes/servers
- Multiple machines process the data independently
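The split-and-process-independently idea above can be sketched in a few lines of plain Python. This is only an illustration, not Hadoop code: threads stand in for cluster nodes, and `distributed_sum` is a made-up name for the combining step.

```python
# A minimal sketch of distributed processing: split a large dataset into
# chunks, let independent workers process each chunk, combine the partial
# results. In real Hadoop the workers are separate machines; here threads
# merely stand in for nodes.
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Each "node" works on its own slice of the data independently.
    return sum(chunk)

def distributed_sum(data, num_nodes=4):
    # Partition the dataset into roughly one chunk per node.
    size = max(1, len(data) // num_nodes)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=num_nodes) as pool:
        partials = list(pool.map(process_chunk, chunks))
    # Combine the independent partial results.
    return sum(partials)

print(distributed_sum(list(range(1, 101))))  # 5050
```

Because no chunk depends on another, the workers never coordinate while processing, which is exactly why this model scales across many machines.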
An open-source framework that allows distributed processing of large data sets across a Cluster of commodity hardware.

Cluster
- Multiple machines connected together
- Nodes are connected via a LAN
An open-source framework that allows distributed processing of large data sets across a cluster of Commodity Hardware.

Commodity Hardware
- Economical / affordable machines
- Typically low-performance hardware
- Open-source framework written in Java
- Inspired by Google's MapReduce programming model as well as its file system (GFS)
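The MapReduce model that inspired Hadoop can be shown with a toy word count in plain Python (no Hadoop involved): map emits (key, value) pairs, shuffle groups values by key, and reduce aggregates each group. The function names `map_phase`, `shuffle`, and `reduce_phase` are illustrative, not Hadoop API names.

```python
# A toy word count in the MapReduce style.
from collections import defaultdict

def map_phase(document):
    # Map step: emit (word, 1) for every word in the input split.
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    # Shuffle step: group all emitted values by key.
    # In real Hadoop the framework does this between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce step: aggregate the grouped values per key.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data big cluster", "big data"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
print(reduce_phase(shuffle(pairs)))  # {'big': 3, 'data': 2, 'cluster': 1}
```

In Hadoop the map and reduce steps run in parallel on many nodes, and the shuffle moves data between them over the network.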
Hadoop History
- 2002: Doug Cutting started working on Nutch
- 2003: Google published the GFS paper
- 2004: Google published the MapReduce paper
- 2005: Doug Cutting added DFS & MapReduce to Nutch
- 2006: Development of Hadoop started as a Lucene sub-project
- 2007: The New York Times converted 4 TB of image archives over 100 EC2 instances
- 2008: Hadoop became a top-level Apache project and defeated a supercomputer in the terabyte sort benchmark; Facebook launched Hive, SQL support for Hadoop
- 2009: Doug Cutting joined Cloudera
Hadoop consists of three key parts: HDFS (storage), MapReduce (processing), and YARN (resource management).
Nodes: Master Node and Slave Node.

- Master Node runs the ResourceManager and NameNode daemons
- Slave Node runs the NodeManager and DataNode daemons
[Diagram: a job (Work) is split into many independent Sub-Work units, which are processed in parallel across the nodes of the cluster]
Open Source
- Source code is freely available
- Can be redistributed
- Can be modified

[Diagram: Open Source benefits: Free, Affordable, Community, Transparent, Inter-operable, No vendor lock-in]
Distributed Processing
- Data is processed in a distributed manner on the cluster
- Multiple nodes in the cluster process the data independently

[Diagram: Centralized Processing vs. Distributed Processing]
Fault Tolerance
- Node failures are recovered automatically
- The framework takes care of failures of hardware as well as of tasks
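The recovery idea can be sketched in a few lines: if a task fails on one node, the framework simply re-runs it on another, and the user's code never sees the failure. The node names and the simulated crash below are made up for illustration; this is not Hadoop's actual scheduler.

```python
# A sketch of automatic task recovery: retry a failed task on other nodes.
def run_with_retry(task, nodes):
    for node in nodes:
        try:
            return task(node)
        except RuntimeError:
            # The framework notices the failure and retries elsewhere;
            # the business logic never has to handle it.
            continue
    raise RuntimeError("task failed on all nodes")

def flaky_task(node):
    if node == "node-1":  # pretend this node's hardware died
        raise RuntimeError("node-1 crashed")
    return f"result computed on {node}"

print(run_with_retry(flaky_task, ["node-1", "node-2", "node-3"]))
# result computed on node-2
```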
Reliability
- Data is reliably stored on the cluster of machines despite machine failures
- Failure of nodes doesn't cause data loss
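The mechanism behind this reliability is replication: each block of a file is stored on several nodes (HDFS defaults to three replicas), so losing one node loses no data. The round-robin placement below is a simplification for illustration, not HDFS's real placement policy.

```python
# A sketch of block replication and why node failure causes no data loss.
REPLICATION = 3

def place_replicas(blocks, nodes):
    # Simplified round-robin placement: each block lands on
    # REPLICATION distinct nodes.
    placement = {}
    for i, block in enumerate(blocks):
        placement[block] = [nodes[(i + r) % len(nodes)]
                            for r in range(REPLICATION)]
    return placement

def readable_after_failure(placement, dead_node):
    # A block survives as long as at least one replica is on a live node.
    return all(any(n != dead_node for n in replicas)
               for replicas in placement.values())

placement = place_replicas(["blk-1", "blk-2", "blk-3"], ["A", "B", "C", "D"])
print(readable_after_failure(placement, "A"))  # True
```

With three replicas, any single-node failure (and most double failures) still leaves every block readable; HDFS then re-replicates the lost copies in the background.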
High Availability
- Data is highly available and accessible despite hardware failure
- There is no downtime for the end-user application due to data unavailability
Scalability
- Vertical scalability: new hardware can be added to existing nodes
- Horizontal scalability: new nodes can be added on the fly
Economic
- No need to purchase a costly license
- No need to purchase costly hardware

Open Source + Commodity Hardware = Economic
Easy to Use
- Distributed-computing challenges are handled by the framework
- The client just needs to concentrate on business logic
Data Locality
- Move computation to the data instead of data to the computation
- Data is processed on the nodes where it is stored

[Diagram: the traditional approach ships data from storage servers to app servers; Hadoop ships the algorithm to the servers that store the data]
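Data locality can be sketched as a scheduling decision: given a map of which nodes hold which block, prefer a free node that already stores the block, and only fall back to shipping the data over the LAN when no such node is free. The block and node names below are invented for illustration.

```python
# A sketch of data-locality scheduling: send the computation to a node
# that already stores the block whenever possible.
block_locations = {
    "blk-1": ["node-A", "node-B"],
    "blk-2": ["node-B", "node-C"],
}

def schedule(block, free_nodes):
    # Prefer a free node that already holds the block (node-local task).
    for node in block_locations[block]:
        if node in free_nodes:
            return node
    # Otherwise fall back to any free node and pull the data over the LAN.
    return free_nodes[0]

print(schedule("blk-2", ["node-A", "node-C"]))  # node-C (data-local)
print(schedule("blk-1", ["node-C"]))            # node-C (remote-read fallback)
```

Moving a small algorithm to a node is far cheaper than moving gigabytes of data to an app server, which is why this inversion is central to Hadoop's design.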
- Every day we generate 2.5 quintillion bytes of data
- Hadoop handles huge volumes of data efficiently
- Hadoop uses the power of distributed computing
- HDFS & YARN are two main components of Hadoop
- It is highly fault tolerant, reliable & available