Hadoop Streaming parameter settings

Hadoop Streaming usage
Usage: $HADOOP_HOME/bin/hadoop jar \
       $HADOOP_HOME/hadoop-streaming.jar [options]
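For example, a streaming word count can be dry-run locally before submitting it to a cluster, with `sort` standing in for the shuffle phase (file paths and HDFS directories below are placeholder assumptions, not from the original text):

```shell
# Local dry run of a streaming word count; `sort` plays the role of the shuffle.
# Mapper: one "word<TAB>1" line per word; reducer: sum the counts per key.
printf 'hello world\nhello hadoop\n' \
  | tr ' ' '\n' | awk '{print $0 "\t1"}' \
  | sort \
  | awk -F'\t' '{count[$1] += $2} END {for (k in count) print k "\t" count[k]}' \
  | sort

# On a cluster, the equivalent job would be submitted roughly as
# (input/output paths are placeholders):
# $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar \
#   -input  /user/test/input \
#   -output /user/test/output \
#   -mapper  ./mapper.sh \
#   -reducer ./reducer.sh \
#   -file ./mapper.sh -file ./reducer.sh
```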
Options
(1) -input: input file path
(2) -output: output file path
(3) -mapper: user-supplied mapper program; can be an executable file or a script
(4) -reducer: user-supplied reducer program; can be an executable file or a script
(5) -file: files to package with the submitted job, such as input files used by the mapper or reducer (configuration files, dictionaries, and so on)
(6) -partitioner: user-defined Partitioner program
(7) -combiner: user-defined Combiner program (must be implemented in Java)
(8) -D: job properties (formerly -jobconf), in particular:
1) mapred.map.tasks: number of map tasks
2) mapred.reduce.tasks: number of reduce tasks
3) stream.map.input.field.separator / stream.map.output.field.separator: field separator for map task input/output; defaults to \t
4) stream.num.map.output.key.fields: number of fields in a map output record that make up the key
5) stream.reduce.input.field.separator / stream.reduce.output.field.separator: field separator for reduce task input/output; defaults to \t
6) stream.num.reduce.output.key.fields: number of fields in a reduce output record that make up the key
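The separator and key-field settings work together: with stream.map.output.field.separator set to "." and stream.num.map.output.key.fields set to 2, a map output line "f1.f2.f3.f4" is split into the key "f1.f2" and the value "f3.f4". The awk snippet below is only an illustration of that split, not Hadoop's own code:

```shell
# Reproduce the key/value split that streaming performs with
#   -D stream.map.output.field.separator=.
#   -D stream.num.map.output.key.fields=2
echo 'f1.f2.f3.f4' | awk -F'.' '{
  key = $1 "." $2                              # first 2 fields form the key
  val = $3
  for (i = 4; i <= NF; i++) val = val "." $i   # remaining fields form the value
  print "key=" key " value=" val
}'
# → key=f1.f2 value=f3.f4
```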


In addition, Hadoop itself comes with some handy mappers and reducers:
(1) Hadoop's aggregate functions

Aggregate provides a special reducer class and a special combiner class, along with a series of "aggregators" (such as "sum", "max", "min", and so on) that aggregate a sequence of values. Users can use aggregate to define a mapper plug-in class that produces an "aggregatable item" for each key/value pair input to the mapper. The combiner/reducer then aggregates these items with the appropriate aggregator. To use aggregate, you only need to specify "-reducer aggregate".
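With "-reducer aggregate", each mapper output line takes the form "aggregator:key\tvalue", e.g. LongValueSum to sum the values per key. The aggregate reducer itself is Java, but its summing behavior can be mimicked locally; the final awk step below is an illustrative stand-in, not Hadoop's implementation:

```shell
# Mapper side: tag each word with the LongValueSum aggregator.
printf 'apple\nbanana\napple\n' \
  | awk '{print "LongValueSum:" $0 "\t1"}' \
  | sort \
  | awk -F'\t' '{                    # local stand-in for "-reducer aggregate"
      sub(/^LongValueSum:/, "", $1)  # strip the aggregator tag
      sum[$1] += $2
    } END {for (k in sum) print k "\t" sum[k]}' \
  | sort
# → apple<TAB>2, banana<TAB>1
```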


(2) Field selection (similar to 'cut' in Unix)
Hadoop's utility class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce helps users process text data efficiently, like the 'cut' tool in Unix. The map function in this class treats each input key/value pair as a list of fields. The user can specify the field separator (tab by default) and select any segment of the field list (made up of one or more fields) as the key or value of the map output. Similarly, the reduce function in this class treats each input key/value pair as a list of fields, and the user can select any segment as the key or value of the reduce output.
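The effect on tab-separated text is the same as Unix cut, as in this local sketch (on a cluster the class above would instead be named via -mapper/-reducer and configured with a field-selection property):

```shell
# Keep only fields 2 and 4 of each tab-separated record, cut-style.
printf 'a\tb\tc\td\ne\tf\tg\th\n' | cut -f2,4
# prints "b<TAB>d" and "f<TAB>h"
```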
