Hadoop Streaming Example (python)

Source: Internet
Author: User

I used to write MapReduce programs in Java. Here's an example of implementing MapReduce in Python via Hadoop Streaming.

Task Description:

There are two directories on HDFS, /a and /b. Each line of data has three tab-separated columns: the first is an ID, the second is the business type (assume rows under /a carry type a and rows under /b carry type b), and the third is a JSON string. For example:

A line of /a: 1234567  a  {"name": "jiufeng", "age": "...", "sex": "male", "school": "...", "status": ["111", "...", "001"], ...}

A line of /b: 12345  b  {"a": "abc", "b": "adr", "xxoo": "e", ...}

The task: find every ID whose /a record contains "111" in its "status" list and that also appears in /b.

So, first the mapper: it should emit the ID and type for every /a row whose "status" list contains "111", and the ID and type for every /b row. The Python code, mapper.py:

#!/usr/bin/env python
# coding=utf-8

import sys
import json

def mapper():
    for line in sys.stdin:
        line = line.strip()
        id, tag, content = line.split('\t')
        if tag == 'a':
            # keep /a rows only when "111" appears in the "status" list
            jstr = json.loads(content)
            active = jstr.get('status', [])
            if "111" in active:
                print '%s\t%s' % (id, tag)
        if tag == 'b':
            # every /b row is passed through
            print '%s\t%s' % (id, tag)

if __name__ == '__main__':
    mapper()
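As a quick local sanity check (not part of the original post), the same filtering logic can be exercised in plain Python 3, with made-up sample rows in the formats described above:

```python
import json

def map_line(line):
    """Return the (id, tag) pairs the mapper would emit for one input line."""
    id_, tag, content = line.strip().split('\t')
    if tag == 'a':
        # keep /a rows only when "111" appears in the "status" list
        if "111" in json.loads(content).get('status', []):
            return ['%s\t%s' % (id_, tag)]
        return []
    if tag == 'b':
        # every /b row is passed through
        return ['%s\t%s' % (id_, tag)]
    return []

# made-up sample rows: two from /a, one from /b
rows = [
    '1\ta\t{"status": ["111", "001"]}',
    '2\ta\t{"status": ["002"]}',
    '1\tb\t{"a": "abc"}',
]
emitted = [out for r in rows for out in map_line(r)]
print(emitted)  # ['1\ta', '1\tb']
```

Row 2 is dropped because its "status" list does not contain "111"; everything else is emitted as id-tag pairs for the reducer.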

mapper.py parses each input line and writes the rows that satisfy the criteria to standard output. Next, the reducer.py:

#!/usr/bin/env python
# coding=utf-8

import sys

def reducer():
    tag_a = 0
    tag_b = 0
    pre_id = ''
    for line in sys.stdin:
        line = line.strip()
        current_id, tag = line.split('\t')
        if current_id != pre_id:
            # the key changed: emit the previous id if it had both tags
            if tag_a == 1 and tag_b == 1:
                tag_a = 0
                tag_b = 0
                print '%s' % pre_id
            else:
                tag_a = 0
                tag_b = 0
            pre_id = current_id
        if tag == 'a':
            if tag_a == 0:
                tag_a = 1
        if tag == 'b':
            if tag_b == 0:
                tag_b = 1
    # don't forget the last group
    if tag_a == 1 and tag_b == 1:
        print '%s' % pre_id

if __name__ == '__main__':
    reducer()

Unlike the Java API, where a reducer receives a key together with all of its values at once, a streaming reducer reads one key-value line at a time. Fortunately, lines with the same key arrive consecutively, so it is enough to detect when the key changes and finish off the previous group at that point.
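Because the shuffle phase delivers sorted input, the hand-rolled key-change tracking above can also be expressed with itertools.groupby. A Python 3 sketch (not from the original post, with made-up sample input):

```python
from itertools import groupby

def reduce_lines(lines):
    """Emit every id whose group of sorted 'id<TAB>tag' lines has both tags."""
    pairs = (line.strip().split('\t') for line in lines)
    result = []
    # groupby only merges consecutive equal keys, which is exactly
    # what sorted streaming input guarantees
    for id_, group in groupby(pairs, key=lambda p: p[0]):
        tags = {tag for _, tag in group}
        if {'a', 'b'} <= tags:  # the id appeared in both /a and /b
            result.append(id_)
    return result

# sample input in the order the shuffle phase would deliver it
sorted_input = ['1\ta', '1\tb', '2\ta', '3\tb']
print(reduce_lines(sorted_input))  # ['1']
```

Only id 1 appears with both tags, so only it survives the join.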

Then arrange for Hadoop to execute the job, schedule.py:

#!/usr/bin/env python
# coding=utf-8

import subprocess
import os

def mr_job():
    mypath = os.path.dirname(os.path.abspath(__file__))
    inputpath1 = '/b/*'
    inputpath2 = '/a/*'
    outputpath = '/out/'
    mapper = mypath + '/mapper.py'
    reducer = mypath + '/reducer.py'
    # note: subprocess does not expand shell variables, so $HADOOP_HOME
    # must be replaced with the actual installation path
    cmds = ['$HADOOP_HOME/bin/hadoop', 'jar',
            '$HADOOP_HOME/contrib/streaming/hadoop-streaming-1.2.1.jar',
            '-numReduceTasks', '...',
            '-input', inputpath1,
            '-input', inputpath2,
            '-output', outputpath,
            '-mapper', mapper,
            '-reducer', reducer,
            '-file', mapper,
            '-file', reducer]
    # ship every file in the script directory along with the job
    for f in os.listdir(mypath):
        cmds.append('-file')
        cmds.append(mypath + '/' + f)
    # clear the old output directory, then submit the streaming job
    cmd = ['$HADOOP_HOME/bin/hadoop', 'fs', '-rmr', outputpath]
    subprocess.call(cmd)
    subprocess.call(cmds)

def main():
    mr_job()

if __name__ == '__main__':
    main()

schedule.py submits the MapReduce job by invoking the hadoop-streaming jar through a shell command. Via the -file options, the command uploads the developed scripts and distributes them to the individual worker nodes. $HADOOP_HOME is the installation directory of Hadoop. The file names of the mapper and reducer Python scripts do not matter, and neither do the function names inside them, because the commands to execute are specified explicitly when the job is configured.
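Before submitting, a streaming job is often smoke-tested locally with a pipeline like `cat a/* b/* | ./mapper.py | sort | ./reducer.py`. The sketch below (Python 3, made-up data, not from the original post) simulates the same map → sort → reduce flow in-process:

```python
import json

def mapper(lines):
    """Yield 'id<TAB>tag' for qualifying /a rows and all /b rows."""
    for line in lines:
        id_, tag, content = line.strip().split('\t')
        if tag == 'a' and "111" in json.loads(content).get('status', []):
            yield '%s\t%s' % (id_, tag)
        elif tag == 'b':
            yield '%s\t%s' % (id_, tag)

def reducer(lines):
    """Collect tags per id and keep the ids seen under both /a and /b."""
    seen = {}  # id -> set of tags
    for line in lines:
        id_, tag = line.split('\t')
        seen.setdefault(id_, set()).add(tag)
    return [id_ for id_, tags in sorted(seen.items()) if tags == {'a', 'b'}]

raw = [
    '100\ta\t{"status": ["111"]}',
    '100\tb\t{"a": "x"}',
    '200\ta\t{"status": ["000"]}',
    '200\tb\t{"a": "y"}',
]
shuffled = sorted(mapper(raw))  # sorted() stands in for the shuffle phase
print(reducer(shuffled))  # ['100']
```

Id 200 is filtered out by the mapper (no "111" in its status list), so only id 100 reaches the reducer with both tags.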

The above is a very simple example of Hadoop Streaming with Python.
