Hadoop Streaming with Python: handling LZO file problems

Source: Internet
Author: User
Tags: stdin, wrapper, python script

A small task: I did not want to write a Java MapReduce program, so I used Hadoop Streaming plus Python to process the data line by line. I ran into a few problems along the way and am noting them down here.

If you run into the same scenario later, you can use this approach with confidence.

I wrote the mapper and reducer in PyCharm on Windows and uploaded them directly to the Linux server, where they would not run, always reporting:

./mapper.py: No such file or directory

There was no obvious reason the file could not be found; it later turned out to be caused by the difference between Windows and Linux line endings.

A lesson learned.

If a Python or shell script is written on Windows and will run on Linux, you need to convert its line endings first: dos2unix filename
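If dos2unix is not installed, sed can do the same conversion. A minimal sketch of the problem and the fix (the demo file name is made up; on GNU/Linux sed understands \r and -i):

```shell
# A CRLF shebang makes the kernel look for "python\r" instead of
# "python", so the script "cannot be found" even though the file exists.
printf '#!/usr/bin/env python\r\nprint("hi")\r\n' > crlf_demo.py

# Strip the trailing carriage returns; equivalent to: dos2unix crlf_demo.py
sed -i 's/\r$//' crlf_demo.py

# Verify that no CR bytes remain
if grep -q "$(printf '\r')" crlf_demo.py; then echo "still CRLF"; else echo "clean"; fi
```

The same symptom shows up for any interpreted script shipped from Windows, so it is worth checking line endings before blaming Hadoop.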


Take a look at the Python script:

mapper.py

#!/usr/bin/env python
#coding: utf8
import json
import sys
import re

wrapper = ["Fqa", "Fqq", "GTT", "Zxb", "Zxa"]

for line in sys.stdin:
    try:
        line = line.decode("utf8")
        # the JSON payload is the third whitespace-separated field
        str = re.split("\\s+", line)[2].strip()
        s = json.loads(str)
        if len(s["Data"]["TTS"]) == 0:
            continue
        # find the minimum PR over all TTS entries
        minpr = sys.maxint
        for prline in s["Data"]["TTS"]:
            pr = prline["PR"]
            minpr = min(minpr, pr)
        for l in s["Data"]["TTS"]:
            if l["CL"] in wrapper:
                if l["PR"] == minpr:
                    print "%s-%s,%s\t%s" % (l["DC"], l["AC"], l["CL"], "1")
                else:
                    print "%s-%s,%s\t%s" % (l["DC"], l["AC"], l["CL"], "0")
    except Exception, ex:
        pass

reducer.py:

#!/usr/bin/env python
import sys

tcount = 0.0
lcount = 0.0
lastkey = ""
for line in sys.stdin:
    key, val = line.split('\t')
    if lastkey != key and lastkey != "":
        print "%s\t%d\t%d\t%f" % (lastkey, tcount, lcount, lcount / tcount)
        lastkey = key
        tcount = 0.0
        lcount = 0.0
    elif lastkey == "":
        lastkey = key
    tcount += 1
    if val.strip() == "1":
        lcount += 1

# emit the final key group, which the loop above never flushes
if lastkey != "":
    print "%s\t%d\t%d\t%f" % (lastkey, tcount, lcount, lcount / tcount)
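Before submitting a job, a mapper/reducer pair like the above can be smoke-tested locally by emulating the shuffle phase with sort. A sketch with stand-ins for the real scripts (printf plays the mapper, awk plays the reducer, and the sample keys are made up for illustration):

```shell
# Emulate the MapReduce data flow locally:
#   mapper output -> sort (shuffle) -> reducer
printf 'LLA-AMS,ZXL\t1\nLLA-AMS,ZXL\t0\nPEK-SHA,Fqa\t1\n' \
  | sort -k1,1 \
  | awk -F'\t' '
      # same grouping logic as the Python reducer above
      $1 != last { if (last != "") printf "%s\t%d\t%d\t%f\n", last, t, l, l/t
                   last = $1; t = 0; l = 0 }
      { t++; if ($2 == "1") l++ }
      END { if (last != "") printf "%s\t%d\t%d\t%f\n", last, t, l, l/t }' \
  > local_output.txt
```

With the real scripts the same idea is: cat sample.txt | ./mapper.py | sort -k1,1 | ./reducer.py — this catches most script bugs without waiting for a cluster round trip.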

The points you need to pay attention to in the code are:

#!/usr/bin/env python
#coding: utf8
line = line.decode("utf8")
try:
    ...
except Exception, ex:
    pass

These points need attention; otherwise a small problem can cause the whole task to fail.

In particular, if there is dirty data in the input, the Python script will throw an exception; if the script does not handle that exception, the task errors out like this:

minRecWrittenToEnableSkip_=9223372036854775807 HOST=null USER=datadev HADOOP_USER=null
last tool output: |LLA-AMS,ZXL	0|

java.io.IOException: Broken pipe
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:345)
        at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
        at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
        at java.io.DataOutputStream.write(DataOutputStream.java:107)
        at org.apache.hadoop.streaming.io.TextInputWriter.writeUTF8(TextInputWriter.java:72)
        at org.apache.hadoop.streaming.io.TextInputWriter.writeValue(TextInputWriter.java:51)
        at org.apache.hadoop.streaming.PipeMapper.map(PipeMapper.java:106)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
        at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)

Once the exception propagates up to the Hadoop framework process, the framework naturally treats it as an error and exits the task.
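The same defensive pattern carries over to Python 3, which today's clusters are more likely to run. A sketch (not the author's original script: the field names "key"/"value" and the record layout are made up for the example) showing explicit decoding plus a per-record try/except so one dirty line cannot kill the whole streaming task:

```python
# Python 3 sketch of the defensive mapper pattern: decode explicitly,
# and wrap per-record parsing in try/except to skip dirty data.
import json


def parse_records(stream):
    """Yield one tab-separated output record per parseable input line.

    `stream` yields raw bytes, like sys.stdin.buffer in a real mapper.
    The "key"/"value" field names are hypothetical.
    """
    for raw in stream:
        try:
            line = raw.decode("utf-8")               # explicit decode
            payload = json.loads(line.split("\t", 1)[1])
            yield "%s\t%s" % (payload["key"], payload["value"])
        except Exception:
            continue  # dirty data: skip the record instead of crashing


sample = [
    b'0\t{"key": "a", "value": 1}\n',
    b'not even json\n',                              # dirty line, skipped
]
print(list(parse_records(sample)))                   # only the clean record survives
```

In a real mapper the loop would consume sys.stdin.buffer and print each record; the essential part is that decoding and parsing both happen inside the try block.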

Now look at the job submission script, which is also important:

#!/bin/bash
export HADOOP_HOME=/home/q/hadoop-2.2.0

sudo -u flightdev hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-2.2.0.jar \
        -D mapred.job.queue.name=queue1 \
        -D stream.map.input.ignoreKey=true \
        -inputformat com.hadoop.mapred.DeprecatedLzoTextInputFormat \
        -input /input/date=2014-11-17.lzo/* \
        -output /output/20141117 \
        -mapper mapper.py \
        -file mapper.py \
        -reducer reducer.py \
        -file reducer.py

There are two points to note here:

-D stream.map.input.ignoreKey=true \
-inputformat com.hadoop.mapred.DeprecatedLzoTextInputFormat \

These tell streaming to read the LZO files with the LZO-aware input format and to drop the record key (the byte offset of each line) before feeding data to the mapper, so the offset does not pollute the input.
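To make the effect of ignoreKey concrete, here is a fabricated illustration (the offsets and records are made up) of what the mapper's stdin looks like with and without the option:

```python
# Simulates the key/value stream a text input format hands to a
# streaming mapper, with and without stream.map.input.ignoreKey.
def streaming_stdin(records, ignore_key):
    """records: (byte_offset, line) pairs as the input format emits them."""
    for offset, line in records:
        if ignore_key:
            yield line                        # value only
        else:
            yield "%d\t%s" % (offset, line)   # key TAB value


records = [(0, "fieldA fieldB"), (14, "fieldC fieldD")]

print(list(streaming_stdin(records, ignore_key=False)))
# without ignoreKey, every line gains a leading offset field,
# shifting all the real fields one position to the right
print(list(streaming_stdin(records, ignore_key=True)))
```

A mapper like the one above, which indexes into whitespace-split fields, would silently read the wrong column if the offset key were left in.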




