Writing a Hadoop MapReduce Program in Python
by Michael G. Noll
This article is from http://www.michael-noll.com/wiki/Writing_An_Hadoop_MapReduce_Program_In_Python
In this tutorial, I will describe how to write a simple MapReduce program for Hadoop in the Python programming language.
Contents
- 1 Motivation
- 2 What we want to do
- 3 Prerequisites
- 4 Python MapReduce code
  - 4.1 Map: mapper.py
  - 4.2 Reduce: reducer.py
  - 4.3 Test your code (cat data | map | sort | reduce)
- 5 Running the Python code on Hadoop
  - 5.1 Download example input data
  - 5.2 Copy local example data to HDFS
  - 5.3 Run the MapReduce job
- 6 Improved mapper and reducer code: using Python iterators and generators
  - 6.1 mapper.py
  - 6.2 reducer.py
- 7 Feedback
- 8 Related links
Motivation
Even though the Hadoop framework is written in Java, programs for Hadoop need not be coded in Java but can also be developed in other languages like Python or C++ (the latter since version 0.14.1). However, the documentation and the most prominent Python example on the Hadoop home page could make you think that you must translate your Python code into a Java JAR file using Jython. Obviously, this is not very convenient and can even be problematic if you depend on Python features not provided by Jython. Another issue of the Jython approach is the overhead of writing your Python program in such a way that it can interact with Hadoop; just have a look at the example in /src/examples/python/WordCount.py and you see what I mean. I still recommend having at least a look at the Jython approach and maybe even at the new C++ MapReduce API called Pipes; it's really interesting.
That said, the ground is prepared for the purpose of this tutorial: writing a Hadoop MapReduce program in a more Pythonic way, i.e. in a way you should be familiar with.
What we want to do
We will write a simple MapReduce program (see also Wikipedia) for Hadoop in Python without using Jython to translate our code to Java JAR files.
Our program will mimic the WordCount example, i.e. it reads text files and counts how often words occur. The input is text files and the output is text files, each line of which contains a word and the count of how often it occurred, separated by a tab.
Note: You can also use programming languages other than Python, such as Perl or Ruby, with the "technique" described in this tutorial. I wrote some words about what happens behind the scenes. Feel free to correct me if I'm wrong.
Prerequisites
You should have a Hadoop cluster up and running because we will get our hands dirty. If you don't have a cluster yet, my following tutorials might help you build one. The tutorials are tailored to Ubuntu Linux, but the information also applies to other Linux/Unix variants.
- Running Hadoop on Ubuntu Linux (single-node cluster)
  How to set up a single-node Hadoop cluster using the Hadoop Distributed File System (HDFS) on Ubuntu Linux
- Running Hadoop on Ubuntu Linux (multi-node cluster)
  How to set up a multi-node Hadoop cluster using the Hadoop Distributed File System (HDFS) on Ubuntu Linux
Python MapReduce code
The "trick" behind the following Python code is that we will use Hadoop Streaming (see also the wiki entry) to help us pass data between our map and reduce code via STDIN (standard input) and STDOUT (standard output). We will simply use Python's sys.stdin to read input data and print our own output to sys.stdout. That's all we need to do, because Hadoop Streaming will take care of everything else! Amazing, isn't it? Well, at least I had a "wow" experience...
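To see the stdin/stdout contract in isolation, here is a small self-contained sketch. The upper-casing script is purely my own illustration, not part of the word count example: the point is that Hadoop Streaming can run any executable that reads lines from standard input and writes lines to standard output, which we simulate here with a subprocess pipe.

```python
import subprocess
import sys

# A tiny "streaming"-style script: it upper-cases every line it reads.
# Any stdin->stdout program like this can serve as a mapper or reducer.
script = (
    "import sys\n"
    "for line in sys.stdin:\n"
    "    sys.stdout.write(line.upper())\n"
)

# Simulate the framework feeding input to the script
# and collecting whatever it writes to STDOUT.
result = subprocess.run(
    [sys.executable, "-c", script],
    input="hello hadoop\nstreaming rocks\n",
    capture_output=True,
    text=True,
)
print(result.stdout)
```

Hadoop Streaming does essentially this on every cluster node, feeding input splits to your script and collecting its output lines.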
Map: mapper.py
Save the following code in the file /home/hadoop/mapper.py. It will read data from STDIN (standard input), split it into words, and output a list of lines mapping words to their (intermediate) counts to STDOUT (standard output). The map script will not compute an (intermediate) sum of a word's occurrences. Instead, it will immediately output "<word> 1" pairs, even though a specific word might occur multiple times in the input, and just let the subsequent reduce step do the final sum count. Of course, you can change this behavior in your own scripts as you please, but we will keep it like that in this tutorial for didactic reasons :-)
Make sure the file has execution permission (chmod +x /home/hadoop/mapper.py should do the trick) or you will run into problems.
#!/usr/bin/env python

import sys

# input comes from STDIN (standard input)
for line in sys.stdin:
    # remove leading and trailing whitespace
    line = line.strip()
    # split the line into words
    words = line.split()
    # increase counters
    for word in words:
        # write the results to STDOUT (standard output);
        # what we output here will be the input for the
        # Reduce step, i.e. the input for reducer.py
        #
        # tab-delimited; the trivial word count is 1
        print '%s\t%s' % (word, 1)
Reduce: reducer.py
Save the following code in the file /home/hadoop/reducer.py. It will read the results of mapper.py from STDIN (standard input), sum the occurrences of each word to a final count, and output its results to STDOUT (standard output).
Make sure the file has execution permission (chmod +x /home/hadoop/reducer.py should do the trick) or you will run into problems.
#!/usr/bin/env python

from operator import itemgetter
import sys

# maps words to their counts
word2count = {}

# input comes from STDIN
for line in sys.stdin:
    # remove leading and trailing whitespace
    line = line.strip()
    # parse the input we got from mapper.py
    word, count = line.split('\t', 1)
    # convert count (currently a string) to int
    try:
        count = int(count)
        word2count[word] = word2count.get(word, 0) + count
    except ValueError:
        # count was not a number, so silently
        # ignore/discard this line
        pass

# sort the words lexicographically;
#
# this step is NOT required, we just do it so that our
# final output will look more like the official Hadoop
# word count examples
sorted_word2count = sorted(word2count.items(), key=itemgetter(0))

# write the results to STDOUT (standard output)
for word, count in sorted_word2count:
    print '%s\t%s' % (word, count)
Test your code (cat data | map | sort | reduce)
I recommend testing your mapper.py and reducer.py scripts manually before using them in a MapReduce job. Otherwise your jobs might complete successfully but produce no job result data at all, or not the results you would have expected. If that happens, most likely it was you (or me) who screwed up.
Here are some ideas on how to test the functionality of the map and reduce scripts.
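One way to do this without touching Hadoop or even a shell is to simulate the whole pipeline in plain Python. The sketch below reimplements the mapper and reducer logic as ordinary functions (the names map_words and reduce_counts are my own, not part of the scripts above) and pushes a sample sentence through map, sort, and reduce:

```python
from operator import itemgetter

def map_words(lines):
    # emit a (word, 1) pair for every word, mirroring mapper.py
    pairs = []
    for line in lines:
        for word in line.strip().split():
            pairs.append((word, 1))
    return pairs

def reduce_counts(pairs):
    # sum the counts per word, mirroring reducer.py;
    # the final sort is only cosmetic, as in reducer.py
    word2count = {}
    for word, count in pairs:
        word2count[word] = word2count.get(word, 0) + count
    return sorted(word2count.items(), key=itemgetter(0))

sample = ["foo foo quux labs foo bar quux"]
# sorted() plays the role of the shell's `sort` between map and reduce
for word, count in reduce_counts(sorted(map_words(sample))):
    print('%s\t%s' % (word, count))
```

For this sample input the simulation prints the same tab-separated counts (bar 1, foo 3, labs 1, quux 2) as the shell pipeline shown next.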
# very basic test
hadoop@ubuntu:~$ echo "foo foo quux labs foo bar quux" | /home/hadoop/mapper.py
foo 1
foo 1
quux 1
labs 1
foo 1
bar 1
quux 1
hadoop@ubuntu:~$ echo "foo foo quux labs foo bar quux" | /home/hadoop/mapper.py | sort | /home/hadoop/reducer.py
bar 1
foo 3
labs 1
quux 2
# using one of the ebooks as example input
# (see below on where to get the ebooks)
hadoop@ubuntu:~$ cat /tmp/gutenberg/20417-8.txt | /home/hadoop/mapper.py
The 1
Project 1
Gutenberg 1
EBook 1
of 1
[...]
(you get the idea)
Running the Python code on hadoop
Download example input data
We will use three eBooks from Project Gutenberg for this example:
- The Outline of Science, Vol. 1 (of 4), by J. Arthur Thomson
- The notebooks of Leonardo da Vinci
- Ulysses by James Joyce
Download each ebook as a plain text file in US-ASCII encoding and store the uncompressed files in a temporary directory of your choice, for example /tmp/gutenberg.
hadoop@ubuntu:~$ ls -l /tmp/gutenberg/
total 3592
-rw-r--r-- 1 hadoop hadoop 674425 2007-01-22 12:56 20417-8.txt
-rw-r--r-- 1 hadoop hadoop 1423808 2006-08-03 16:36 7ldvc10.txt
-rw-r--r-- 1 hadoop hadoop 1561677 2004-11-26 09:48 ulyss12.txt