Hadoop Streaming Learning


Original post address: http://cp1985chenpeng.iteye.com/blog/1312976


1. Overview

Hadoop Streaming is a programming tool that ships with Hadoop. It allows users to use any executable file or script as the mapper and reducer, for example:

    $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar \
        -input /user/test/input \
        -output /user/test/output \
        -mapper "mymapper.sh" \
        -reducer "myreducer.sh"

-input and the other options are the parameters of this command; they are described in detail in section 3.

2. Principle

The mapper and reducer read user data from standard input, process it line by line, and write results to standard output. The streaming tool creates a MapReduce job, submits it to the TaskTrackers, and monitors the progress of the whole job.

If an executable file or script is used as the mapper, each mapper task launches it as a separate process when the task starts. As the task runs, it splits its input into lines and feeds each line to the standard input of that process. Meanwhile, the mapper task collects the standard output of the process and converts each line it receives into a key/value pair, which becomes the mapper's output. By default, the part of a line up to the first tab character is the key, and the rest of the line (excluding the tab) is the value. If a line contains no tab, the entire line is the key and the value is empty.

The reducer works in the same way.
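To make this protocol concrete, here is a minimal word-count sketch in shell. Only the script names mymapper.sh and myreducer.sh come from the example above; the script bodies are illustrative assumptions, not code from the original post. The mapper emits one "word<tab>1" line per word, and the reducer sums the counts per key, relying on the framework to deliver its input sorted by key:

    #!/bin/bash
    # mymapper.sh (illustrative): split input into words, emit "word<TAB>1".
    tr -s '[:space:]' '\n' | awk 'NF { print $0 "\t1" }'

    #!/bin/bash
    # myreducer.sh (illustrative): input arrives sorted by key; sum counts per word.
    awk -F '\t' '
      $1 != prev { if (NR > 1) print prev "\t" sum; prev = $1; sum = 0 }
      { sum += $2 }
      END { if (NR > 0) print prev "\t" sum }
    '

Both scripts must be marked executable (chmod +x) and shipped to the cluster, for example with the -file option described in section 3.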

3. Streaming Command Description

3.1 Streaming Command

Run a streaming MapReduce program with the following command:

    $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar [args]

where args are the streaming parameters. They are listed below; a complete example combining several of them follows the list.

-input <path>: HDFS path of the map input data. path may be a file or a directory and may use the * wildcard; -input can be specified multiple times to pass several files or directories as input.

-output <path>: HDFS path of the reduce output. path must not already exist, and the user running the job must have permission to create it; -output can be used only once.

-mapper <filename>: the mapper executable or script; must be specified exactly once.

-reducer <filename>: the reducer executable or script; must be specified exactly once.

-file <file>: distribute a local file by uploading it to HDFS and then shipping it to every node;

-cacheFile <file>: distribute a file that is already on HDFS to every node;

-cacheArchive <file>: distribute a compressed archive stored on HDFS to every node;

-numReduceTasks <num>: the number of reduce tasks. With -numReduceTasks 0 or -reducer NONE, no reducer program runs and the mapper output becomes the output of the whole job.

-D name=value: set a job configuration parameter, for example:

1) mapred.map.tasks: the number of map tasks

2) mapred.reduce.tasks: the number of reduce tasks

3) stream.map.input.field.separator / stream.map.output.field.separator: the field separator for map task input/output; the default is \t.

4) stream.num.map.output.key.fields: the number of fields the key occupies in a map output record

5) stream.reduce.input.field.separator / stream.reduce.output.field.separator: the field separator for reduce task input/output; the default is \t.

6) stream.num.reduce.output.key.fields: the number of fields the key occupies in a reduce output record

-combiner <javaClass>: combiner Java class; package the corresponding class file into a jar and distribute it with -file.

-partitioner <javaClass>: Partitioner Java class;

-inputformat <javaClass>: InputFormat Java class used to read the input data; it must implement the InputFormat interface. If not specified, TextInputFormat is used by default.

-outputformat <javaClass>: OutputFormat Java class used to write the output data; it must implement the OutputFormat interface. If not specified, TextOutputFormat is used by default.

-cmdenv name=value: pass an additional environment variable to the mapper and reducer programs; name is the variable name, value its value.

-verbose: print details such as which files are distributed and the actual job configuration values; useful for debugging.
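Putting several of these options together, a hypothetical invocation for the word-count scripts from section 2 might look like the following (the paths and the MY_ENV variable are illustrative assumptions, not from the original post). Note that -D settings are generic options and must appear before the streaming-specific options:

    $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar \
        -D mapred.reduce.tasks=2 \
        -input /user/test/input \
        -output /user/test/output \
        -mapper "mymapper.sh" \
        -reducer "myreducer.sh" \
        -file mymapper.sh \
        -file myreducer.sh \
        -cmdenv MY_ENV=debug \
        -verbose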
