Why MapReduce
The Map task takes a set of data and converts it into another set of data,
where individual elements are broken down into tuples (key-value pairs).
The Reduce task takes the output from the Map as an input and combines
those data tuples (key-value pairs) into a smaller set of tuples.
Input Phase Here we have a Record Reader that translates each record in
an input file and sends the parsed data to the mapper in the form of key-value pairs.
Shuffle and Sort The Reducer task starts with the Shuffle and Sort step.
It downloads the grouped key-value pairs onto the local machine, where the
Reducer is running. The individual key-value pairs are sorted by key into a
larger data list. The data list groups the equivalent keys together so that
their values can be iterated easily in the Reducer task.
Reducer The Reducer takes the grouped key-value paired data as input
and runs a Reducer function on each group. Here, the data can be
aggregated, filtered, and combined in a number of ways, which may require a
wide range of processing. Once the execution is over, it passes zero or more
key-value pairs to the final step.
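The phases above can be sketched in plain Python. This is a simulation of the data flow only, not the Hadoop API; the word-count job, sample input, and function names are illustrative.

```python
from collections import defaultdict

def record_reader(lines):
    # Input phase: translate each record into a (key, value) pair for the
    # mapper. Here the key is the record number and the value is the text.
    return list(enumerate(lines))

def mapper(key, line):
    # Map phase: break each record down into (word, 1) tuples.
    return [(word, 1) for word in line.split()]

def shuffle_and_sort(pairs):
    # Shuffle and Sort phase: group values by key, with keys sorted, so the
    # reducer can iterate each key's values together.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return sorted(groups.items())

def reducer(key, values):
    # Reduce phase: combine the grouped values into a smaller set of tuples.
    return [(key, sum(values))]

lines = ["deer bear river", "car car river", "deer car bear"]
intermediate = [p for k, v in record_reader(lines) for p in mapper(k, v)]
output = [p for key, values in shuffle_and_sort(intermediate)
          for p in reducer(key, values)]
print(output)  # [('bear', 2), ('car', 3), ('deer', 2), ('river', 2)]
```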
Let us try to understand the two tasks, Map and Reduce, with the help of a
small diagram
MapReduce Example
Let us take a real-world example to comprehend the power of
MapReduce. Twitter receives around 500 million tweets per day, which is
nearly 6000 tweets per second. The following illustration shows how
Twitter manages its tweets with the help of MapReduce.
Tokenize Tokenizes the tweets into maps of tokens and writes them as
key-value pairs.
Filter Filters unwanted words from the maps of tokens and writes the
filtered maps as key-value pairs.
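The Tokenize and Filter steps can be sketched as two chained operations. This is a toy simulation; the stop-word list and the sample tweet are made up for illustration.

```python
STOP_WORDS = {"the", "is", "a", "to"}  # illustrative list of unwanted words

def tokenize(tweet):
    # Tokenize: break the tweet into tokens and emit them as
    # (token, 1) key-value pairs.
    return [(token.lower(), 1) for token in tweet.split()]

def filter_tokens(pairs):
    # Filter: drop unwanted words from the map of tokens and pass the
    # filtered pairs along.
    return [(token, count) for token, count in pairs
            if token not in STOP_WORDS]

pairs = filter_tokens(tokenize("The river is a long river"))
print(pairs)  # [('river', 1), ('long', 1), ('river', 1)]
```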
The MapReduce algorithm contains two important tasks, namely Map and
Reduce.
The Mapper class takes the input, tokenizes it, maps and sorts it. The output
of the Mapper class is used as input by the Reducer class, which in turn searches
matching pairs and reduces them.
Sorting
Searching
Indexing
TF-IDF
Sorting
Sorting is one of the basic MapReduce algorithms to process and analyze
data. MapReduce implements a sorting algorithm to automatically sort the
output key-value pairs from the mapper by their keys.
In the Shuffle and Sort phase, after tokenizing the values in the mapper
class, the Context class (provided by the Hadoop framework) collects the
matching-valued keys as a collection.
To collect similar key-value pairs (intermediate keys), the Mapper class takes
the help of the RawComparator class to sort the key-value pairs.
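In plain Python, the framework's sorting of intermediate pairs by key can be mimicked with sorted(). In Hadoop itself this comparison is performed by a RawComparator on the serialized bytes of the keys, which this sketch does not reproduce; the sample pairs are illustrative.

```python
# Intermediate (key, value) pairs as they might leave several mappers.
intermediate = [("what", 1), ("is", 1), ("it", 1), ("is", 1), ("what", 1)]

# The framework sorts the pairs by key before handing them to the reducer,
# so equal keys end up adjacent and their values can be iterated together.
sorted_pairs = sorted(intermediate, key=lambda kv: kv[0])
print(sorted_pairs)
# [('is', 1), ('is', 1), ('it', 1), ('what', 1), ('what', 1)]
```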
Searching
Searching plays an important role in the MapReduce algorithm. It helps in
the combiner phase (optional) and in the Reducer phase. Let us try to
understand how Searching works with the help of an example.
Example
The following example shows how MapReduce employs Searching
algorithm to find out the details of the employee who draws the highest
salary in a given employee dataset.
The Map phase processes each input file and provides the employee data in
key-value pairs (<k, v> : <emp name, salary>). See the following
illustration.
The combiner phase (searching technique) will accept the input from the
Map phase as a key-value pair with employee name and salary. Using the
searching technique, the combiner will check all the employee salaries to find
the highest-salaried employee in each file. See the following snippet.

<k: employee name, v: salary>
Max = the salary of the first employee

if(v(next employee).salary > Max){
    Max = v(salary);
}
else{
    Continue checking;
}
<gopal, 50000>
<kiran, 45000>
<manisha, 45000>
Reducer phase From each file, you will find the highest-salaried
employee. To avoid redundancy, check all the <k, v> pairs and eliminate
duplicate entries, if any. The same algorithm is used in between the four <k,
v> pairs, which are coming from four input files. The final output should be
as follows
<gopal, 50000>
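The combiner-then-reducer search above can be sketched as follows. Only the per-file maxima match the pairs shown in the text; the other employee records are invented for the example.

```python
def combiner(pairs):
    # Combiner (per file): scan all (name, salary) pairs and keep only
    # the highest-salaried employee from that file.
    best = pairs[0]
    for name, salary in pairs[1:]:
        if salary > best[1]:
            best = (name, salary)
    return best

def reducer(per_file_maxima):
    # Reducer: eliminate duplicate entries, then apply the same search
    # across the per-file maxima to find the overall highest salary.
    return max(set(per_file_maxima), key=lambda kv: kv[1])

# Illustrative per-file data; only the maxima come from the text above.
files = [
    [("gopal", 50000), ("mani", 40000)],
    [("kiran", 45000), ("abdul", 30000)],
    [("manisha", 45000), ("krishna", 25000)],
]
maxima = [combiner(f) for f in files]
print(maxima)           # [('gopal', 50000), ('kiran', 45000), ('manisha', 45000)]
print(reducer(maxima))  # ('gopal', 50000)
```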
Indexing
Normally indexing is used to point to a particular data and its address. It
performs batch indexing on the input files for a particular Mapper.
The indexing technique that is normally used in MapReduce is known
as inverted index. Search engines like Google and Bing use the inverted
indexing technique. Let us try to understand how Indexing works with
the help of a simple example.
Example
The following text is the input for inverted indexing. Here T[0], T[1], and
T[2] are the file names and their content is in double quotes.
T[0] = "it is what it is"
T[1] = "what is it"
T[2] = "it is a banana"
After indexing, we get the following output:

"a": {2}
"banana": {2}
"is": {0, 1, 2}
"it": {0, 1, 2}
"what": {0, 1}

Here "a": {2} implies the term "a" appears in the T[2] file. Similarly,
"is": {0, 1, 2} implies the term "is" appears in the files T[0], T[1], and
T[2].
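A minimal sketch of building this inverted index in Python, as a single-process simulation of the result rather than a distributed job:

```python
def inverted_index(docs):
    # Map each term to the sorted list of document ids it appears in.
    index = {}
    for doc_id, text in enumerate(docs):
        for term in text.split():
            index.setdefault(term, set()).add(doc_id)
    return {term: sorted(ids) for term, ids in sorted(index.items())}

T = ["it is what it is", "what is it", "it is a banana"]
print(inverted_index(T))
# {'a': [2], 'banana': [2], 'is': [0, 1, 2], 'it': [0, 1, 2], 'what': [0, 1]}
```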
TF-IDF
TF-IDF is a text processing algorithm which is short for Term Frequency -
Inverse Document Frequency. It is one of the common web analysis
algorithms. Here, the term 'frequency' refers to the number of times a
term appears in a document.

TF(the) = (Number of times the term 'the' appears in a document) / (Total
number of terms in the document)

IDF(the) = log(Total number of documents / Number of documents with the
term 'the' in them)
Example
Consider a document containing 1000 words, wherein the
word hive appears 50 times. The TF for hive is then (50 / 1000) = 0.05.
Now, assume we have 10 million documents and the word hive appears
in 1000 of these. Then, the IDF is calculated as log(10,000,000 / 1,000)
= 4.
The TF-IDF weight is the product of these quantities: 0.05 × 4 = 0.20.
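The arithmetic above can be checked with a short sketch, assuming the base-10 logarithm, which matches the IDF of 4 in the example:

```python
import math

def tf(term_count, total_terms):
    # Term frequency: occurrences of the term / total terms in the document.
    return term_count / total_terms

def idf(total_docs, docs_with_term):
    # Inverse document frequency, using the base-10 logarithm.
    return math.log10(total_docs / docs_with_term)

tf_hive = tf(50, 1000)             # 0.05
idf_hive = idf(10_000_000, 1000)   # 4.0
print(tf_hive * idf_hive)          # 0.2
```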