Acropolis Institute of Technology &
Research, Indore
Data Analytics
By: Mr. Ronak Jain
Table of Contents
UNIT-III:
PROCESSING BIG DATA: Integrating disparate data stores, mapping data to the programming framework, connecting
and extracting data from storage, transforming data for processing, subdividing data in preparation for Hadoop
MapReduce.
Hadoop Distributed File
System (HDFS)
(Figure: Google search engines in 1998 and in 2013)
Hadoop’s Developers
2005: Doug Cutting and Michael J. Cafarella developed
Hadoop to support distribution for the Nutch search
engine project.
(Photo: Doug Cutting)
The project was funded by Yahoo.
2006: Yahoo gave the project to Apache
Software Foundation.
Google Origins
(Timeline: Google publishes the GFS paper in 2003, the MapReduce paper in 2004, and the BigTable paper in 2006.)
Some Hadoop Milestones
• 2008 - Hadoop Wins Terabyte Sort Benchmark (sorted 1 terabyte
of data in 209 seconds, compared to previous record of 297 seconds)
• 2009 - Avro and Chukwa became new members of Hadoop
Framework family
• 2010 - Hadoop's HBase, Hive and Pig subprojects completed, adding
more computational power to the Hadoop framework
• 2011 - ZooKeeper completed
• 2013 - Hadoop 1.1.2 and Hadoop 2.0.3 alpha released;
Ambari, Cassandra, and Mahout added
Hadoop Framework Tools
Hadoop’s Architecture
• Distributed, with some centralization
• Main nodes of cluster are where most of the computational
power and storage of the system lies
• Main nodes run a TaskTracker to accept and reply to MapReduce
tasks, and also a DataNode to store the needed blocks as close as
possible
• Central control node runs NameNode to keep track of HDFS
directories & files, and JobTracker to dispatch compute tasks to
TaskTracker
• Written in Java, also supports Python and Ruby
Hadoop’s Architecture
• Hadoop Distributed Filesystem
• Tailored to needs of MapReduce
• Targeted towards many reads of filestreams
• Writes are more costly
• High degree of data replication (3x by default)
• No need for RAID on normal nodes
• Large blocksize (64MB)
• Location awareness of DataNodes in network
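These defaults are visible from client code. Below is a minimal Java sketch (the NameNode address and the file path are assumptions for illustration, not values from the slides) that asks HDFS for a file's block size and replication factor:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsFileInfo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Assumed NameNode address; on a real cluster this normally comes from
    // the core-site.xml on the classpath (older releases use "fs.default.name").
    conf.set("fs.defaultFS", "hdfs://namenode:9000");
    FileSystem fs = FileSystem.get(conf);

    // Assumed path; the NameNode answers this metadata query.
    FileStatus status = fs.getFileStatus(new Path("/foodir/sample.dat"));
    System.out.println("block size (bytes) : " + status.getBlockSize());   // 64 MB by default here
    System.out.println("replication factor : " + status.getReplication()); // 3 by default
    fs.close();
  }
}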
Hadoop’s Architecture
NameNode:
• Stores metadata for the files, like the directory structure of a
typical FS.
• The server holding the NameNode instance is quite crucial,
as there is only one.
• Transaction log for file deletes/adds, etc. Does not use
transactions for whole blocks or file-streams, only metadata.
• Handles creation of more replica blocks when necessary after
a DataNode failure
Hadoop’s Architecture
DataNode:
• Stores the actual data in HDFS
• Can run on any underlying filesystem (ext3/4, NTFS, etc)
• Notifies NameNode of what blocks it has
• NameNode replicates blocks: 1x on the local rack, 2x on a remote rack (3 copies by default)
Hadoop’s Architecture: MapReduce Engine
Hadoop’s Architecture
MapReduce Engine:
• JobTracker & TaskTracker
• JobTracker splits the input into smaller tasks (“Map”) and sends
them to the TaskTracker process on each node
• TaskTracker reports back to the JobTracker node with job progress,
sends results (“Reduce”) or requests new jobs
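To make the Map and Reduce roles concrete, here is a minimal word-count sketch using the org.apache.hadoop.mapreduce API as it appears in Hadoop 1.x (to match the JobTracker/TaskTracker model above); the class names and the command-line input/output paths are illustrative:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // "Map": runs on TaskTrackers, ideally on the nodes holding the input blocks.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);          // emit (word, 1)
      }
    }
  }

  // "Reduce": sums the counts emitted for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) {
        sum += v.get();
      }
      context.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "word count");  // JobTracker-era constructor
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Such a job is packaged as a jar and submitted with the hadoop jar command; the JobTracker then schedules the map tasks on TaskTrackers close to the input blocks.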
Hadoop’s Architecture
• None of these components are necessarily limited to using
HDFS
• Many other distributed file-systems with quite different
architectures work
• Many other software packages besides Hadoop's
MapReduce platform make use of HDFS
Hadoop in the Wild
• Hadoop is in use at most organizations that handle big data:
o Yahoo!
o Facebook
o Amazon
o Netflix
o Etc…
• Some examples of scale:
o Yahoo!’s Search Webmap runs on 10,000 core Linux
cluster and powers Yahoo! Web search
o FB’s Hadoop cluster hosts 100+ PB of data (July, 2012) &
growing at ½ PB/day (Nov, 2012)
Hadoop in the Wild
Three main applications of Hadoop:
• Advertisement (Mining user behavior to generate
recommendations)
• Searches (group related documents)
• Security (search for uncommon patterns)
Hadoop in the Wild
• Non-realtime large dataset computing:
o NY Times was dynamically generating PDFs of articles
from 1851-1922
o Wanted to pre-generate & statically serve articles to
improve performance
o Using Hadoop + MapReduce running on EC2 / S3,
converted 4TB of TIFFs into 11 million PDF articles in 24
hrs
Hadoop in the Wild: Facebook Messages
• Design requirements:
o Integrate display of email, SMS and
chat messages between pairs and
groups of users
o Strong control over whom users
receive messages from
o Suited for production use by
500 million people immediately after
launch
o Stringent latency & uptime
requirements
• System requirements
Hadoop in the Wild
o High write throughput
o Cheap, elastic storage
o Low latency
o High consistency (within a
single data center good
enough)
o Disk-efficient sequential and
random read performance
Hadoop in the Wild
• Classic alternatives
o These requirements are typically met using a large MySQL cluster
and Memcached caching tiers
o Content on HDFS could be loaded into MySQL or Memcached
if needed by web tier
• Problems with previous solutions
o MySQL has low random write throughput… BIG problem for
messaging!
o Difficult to scale MySQL clusters rapidly while maintaining
performance
o MySQL clusters have high management overhead, require
more expensive hardware
Hadoop in the Wild
• Facebook’s solution
o Hadoop + HBase as foundations
o Improve & adapt HDFS and HBase to scale to FB’s workload
and operational considerations
Major concern was availability: NameNode is SPOF &
failover times are at least 20 minutes
Proprietary “AvatarNode”: eliminates SPOF, makes HDFS
safe to deploy even with 24/7 uptime requirement
Performance improvements for the realtime workload: shorter RPC
timeouts, so clients fail fast and retry against a different DataNode
Hadoop Highlights
Distributed File System
Fault Tolerance
Open Data Format
Flexible Schema
Queryable Database
Why use Hadoop?
Need to process multi-petabyte datasets
Data may not have a strict schema
Expensive to build reliability into each application
Nodes fail every day
Need a common infrastructure
Very Large Distributed File System
Assumes Commodity Hardware
Optimized for Batch Processing
Runs on heterogeneous OS
Goals of HDFS
Very Large Distributed File System
10K nodes, 100 million files, 10PB
Assumes Commodity Hardware
Files are replicated to handle hardware failure
Detect failures and recover from them
Optimized for Batch Processing
Data locations exposed so that computations
can move to where data resides
Provides very high aggregate bandwidth
Distributed File System
Single Namespace for entire cluster
Data Coherency
Write-once-read-many access model
Client can only append to existing files
Files are broken up into blocks
Typically 64MB block size
Each block replicated on multiple DataNodes
Intelligent Client
Client can find location of blocks
Client accesses data directly from DataNode
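A sketch of this "intelligent client" behaviour using the HDFS Java API (the file path is an assumed example): the client asks the NameNode where each block of a file lives; the actual bytes are then streamed directly from those DataNodes.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocations {
  public static void main(String[] args) throws Exception {
    // Cluster settings are taken from the core-site.xml on the classpath.
    FileSystem fs = FileSystem.get(new Configuration());

    Path file = new Path("/foodir/sample.dat");   // assumed path
    FileStatus status = fs.getFileStatus(file);

    // Ask the NameNode which DataNodes hold each block of the file.
    BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
    for (BlockLocation block : blocks) {
      System.out.println("offset " + block.getOffset()
          + ", length " + block.getLength()
          + ", hosts " + String.join(",", block.getHosts()));
    }
    fs.close();
  }
}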
HDFS Architecture
NameNode Metadata
Metadata in Memory
The entire metadata is in main memory
No demand paging of metadata
Types of metadata
List of files
List of Blocks for each file
List of DataNodes for each block
File attributes, e.g. creation time, replication factor
A Transaction Log
Records file creations, file deletions etc
DataNode
A Block Server
Stores data in the local file system (e.g. ext3)
Stores metadata of a block (e.g. CRC)
Serves data and metadata to Clients
Block Report
Periodically sends a report of all existing blocks
to the NameNode
Facilitates Pipelining of Data
Forwards data to other specified DataNodes
Block Placement
Current Strategy
One replica on local node
Second replica on a remote rack
Third replica on same remote rack
Additional replicas are randomly placed
Clients read from nearest replicas
Would like to make this policy pluggable
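Placement itself is decided by the NameNode, but the replication factor that drives it is a per-file attribute that clients may change. A small sketch (the path and the new factor are illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SetReplication {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    // Raise one file's replication factor from the default 3 to 5; the
    // NameNode then creates the extra replicas following the strategy above.
    boolean accepted = fs.setReplication(new Path("/foodir/important.dat"), (short) 5);
    System.out.println("replication change accepted: " + accepted);
    fs.close();
  }
}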
Heartbeats
DataNodes send heartbeats to the NameNode
Once every 3 seconds
NameNode uses heartbeats to detect
DataNode failure
Replication Engine
NameNode detects DataNode failures
Chooses new DataNodes for new replicas
Balances disk usage
Balances communication traffic to DataNodes
Data Correctness
Use Checksums to validate data
Use CRC32
File Creation
Client computes checksum per 512 bytes
DataNode stores the checksum
File access
Client retrieves the data and checksum from
DataNode
If Validation fails, Client tries other replicas
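To illustrate the scheme (this is not HDFS's internal code), the sketch below computes a CRC32 checksum for every 512-byte chunk of a local file using only the standard Java library, mirroring what the client does before handing data to a DataNode:

import java.io.FileInputStream;
import java.io.IOException;
import java.util.zip.CRC32;

public class ChunkChecksums {
  static final int BYTES_PER_CHECKSUM = 512;   // chunk size described above

  public static void main(String[] args) throws IOException {
    try (FileInputStream in = new FileInputStream(args[0])) {
      byte[] chunk = new byte[BYTES_PER_CHECKSUM];
      CRC32 crc = new CRC32();
      long offset = 0;
      int read;
      while ((read = in.read(chunk)) > 0) {
        crc.reset();
        crc.update(chunk, 0, read);            // checksum only the bytes actually read
        System.out.printf("offset %8d  crc32 %08x%n", offset, crc.getValue());
        offset += read;
      }
    }
  }
}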
Data Pipelining
Client retrieves a list of DataNodes on which
to place replicas of a block
Client writes block to the first DataNode
The first DataNode forwards the data to the
next node in the Pipeline
When all replicas are written, the Client
moves on to write the next block in file
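From the client's side the pipeline is hidden behind an ordinary output stream. A sketch (assumed path and payload) of a write that triggers this block-by-block pipelining:

import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PipelinedWrite {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    // create() obtains the first block's DataNode pipeline from the NameNode;
    // the client library then streams packets to the first DataNode, which
    // forwards them down the pipeline as described above.
    try (FSDataOutputStream out = fs.create(new Path("/foodir/pipelined.txt"))) {
      out.write("hello hdfs\n".getBytes(StandardCharsets.UTF_8));
    }
    fs.close();
  }
}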
Rebalancer
Goal: % disk full on DataNodes should be
similar
Usually run when new DataNodes are added
Cluster is online when Rebalancer is active
Rebalancer is throttled to avoid network
congestion
Command line tool
Secondary NameNode
Copies FsImage and Transaction Log from
NameNode to a temporary directory
Merges FsImage and Transaction Log into a
new FsImage in temporary directory
Uploads new FsImage to the NameNode
Transaction Log on NameNode is purged
Commands for HDFS User:
User Interface
hadoop dfs -mkdir /foodir
hadoop dfs -cat /foodir/[Link]
hadoop dfs -rm /foodir/[Link]
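The same user operations are also available programmatically through the Java FileSystem API; a sketch (the file name below is an illustrative placeholder):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class ShellEquivalents {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    fs.mkdirs(new Path("/foodir"));                     // hadoop dfs -mkdir /foodir

    // hadoop dfs -cat: open the file and copy its contents to stdout
    // ("myfile.txt" is a placeholder name).
    IOUtils.copyBytes(fs.open(new Path("/foodir/myfile.txt")), System.out, 4096, false);

    fs.delete(new Path("/foodir/myfile.txt"), false);   // hadoop dfs -rm
    fs.close();
  }
}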
Commands for HDFS Administrator
hadoop dfsadmin -report
hadoop dfsadmin -decommission datanodename
Web Interface
[Link]