In this example, we write a MapReduce job directly in Python: counting the frequency of each word in an input file.
The "trick" to writing MapReduce in Python is to use the Hadoop Streaming API to pass data between the map function and the reduce function via stdin (standard input) and stdout (standard output).
The only thing we need to do is read the input data from Python's sys.stdin and send our output to sys.stdout; Hadoop Streaming takes care of everything else.
1. Map function (mapper.py)
#!/usr/bin/env python
import sys

# Read lines from stdin, split each line into words,
# and emit "<word>\t1" for every occurrence of a word.
for line in sys.stdin:
    line = line.strip()
    words = line.split()
    for word in words:
        print("%s\t%s" % (word, 1))
This script reads lines from stdin, splits each line into words, and writes each word with its count to stdout. The map script does not total up the occurrences of each word; it simply outputs "<word> 1" and lets the subsequent reduce phase do the counting.
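To make the map step concrete, here is a small self-contained sketch that applies the same logic to an in-memory line instead of stdin (the helper name `map_words` is our illustration, not part of mapper.py or the Hadoop API):

```python
def map_words(line):
    # Mirror mapper.py: strip the line, split on whitespace,
    # and emit a (word, 1) pair for every occurrence.
    return [(word, 1) for word in line.strip().split()]

# Each occurrence is emitted separately, always with a count of 1.
print(map_words("foo foo quux labs foo bar quux"))
```

Note that repeated words produce repeated pairs; no aggregation happens in the map phase.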
2. Reduce function (reducer.py)
#!/usr/bin/env python
from operator import itemgetter
import sys

current_word = None
current_count = 0
word = None

for line in sys.stdin:
    line = line.strip()
    word, count = line.split('\t', 1)
    try:
        count = int(count)
    except ValueError:
        # If count is not a number, just ignore the line.
        continue
    if current_word == word:
        current_count += count
    else:
        if current_word:
            print("%s\t%s" % (current_word, current_count))
        current_count = count
        current_word = word

if word == current_word:
    # Don't forget the final output.
    print("%s\t%s" % (current_word, current_count))
This script reads the output of mapper.py from stdin, counts the total number of occurrences of each word, and writes the final result to stdout.
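Note that this streaming logic only works when all lines for the same word arrive consecutively, which is why the input must be sorted by key first. The sketch below (the function name `reduce_sorted` is ours, not from reducer.py) reproduces the same grouping logic over an in-memory list and shows what goes wrong on unsorted input:

```python
def reduce_sorted(lines):
    # Same streaming logic as reducer.py: assumes lines sorted by key.
    results = []
    current_word, current_count = None, 0
    for line in lines:
        word, count = line.strip().split('\t', 1)
        count = int(count)
        if current_word == word:
            current_count += count
        else:
            if current_word is not None:
                results.append((current_word, current_count))
            current_word, current_count = word, count
    if current_word is not None:
        results.append((current_word, current_count))
    return results

print(reduce_sorted(["bar\t1", "foo\t1", "foo\t1"]))  # sorted: "foo" is summed
print(reduce_sorted(["foo\t1", "bar\t1", "foo\t1"]))  # unsorted: "foo" appears twice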
Detail: split('\t', 1) means split at most once, on the first tab only, so the word is cleanly separated from its count.
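A quick illustration of the maxsplit argument, so the difference is visible:

```python
# split('\t') splits on every tab; split('\t', 1) splits at most once,
# so everything after the first tab stays in a single field.
print("a\tb\tc".split('\t'))     # ['a', 'b', 'c']
print("a\tb\tc".split('\t', 1))  # ['a', 'b\tc']
```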
3. Execution
(1) Local test
cat input | python mapper.py | sort -k1,1 | python reducer.py
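The whole pipeline above can be simulated in pure Python, which is handy for checking the logic before touching Hadoop. This is our own end-to-end sketch, not one of the scripts from this post; the `sort` step stands in for `sort -k1,1` (and for Hadoop's shuffle/sort):

```python
from itertools import groupby
from operator import itemgetter

def run_pipeline(text):
    # Map phase: emit (word, 1) for every word, as mapper.py does.
    pairs = [(w, 1) for line in text.splitlines() for w in line.split()]
    # Shuffle/sort phase: equivalent to `sort -k1,1` between the scripts.
    pairs.sort(key=itemgetter(0))
    # Reduce phase: sum the counts per word over the sorted stream.
    return {word: sum(c for _, c in group)
            for word, group in groupby(pairs, key=itemgetter(0))}

print(run_pipeline("foo foo quux labs foo bar quux"))
```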
(2) Test on Hadoop
/home/work/tools/hadoop/bin/hadoop streaming \
-D mapred.job.name="test-job" \
-D mapred.job.priority=NORMAL \
-D mapred.reduce.tasks=1 \
-mapper 'python mapper.py' \
-reducer 'python reducer.py' \
-file /home/work/code/mapper.py \
-file /home/work/code/reducer.py \
-input hdfs://data/input \
-output hdfs://data/output
[Repost] Writing MapReduce in Python, with wordcount as an example