Hadoop Streaming usage
Usage: $HADOOP_HOME/bin/hadoop jar \
$HADOOP_HOME/hadoop-streaming.jar [options]
Options
(1)-input: Input file path
(2)-output: Output file path
(3)-mapper: User-written mapper program; can be an executable file or a script
(4)-reducer: User-written reducer program; can be an executable file or a script
(5)-file: Ships a file with the submitted job; this can be the mapper or reducer program itself, or a file they read at run time, such as a configuration file or a dictionary.
(6)-partitioner: User-defined Partitioner program
(7)-combiner: User-defined Combiner program (must be implemented in Java)
(8)-D: Sets job properties (formerly -jobconf), specifically:
1) mapred.map.tasks: number of map tasks
2) mapred.reduce.tasks: number of reduce tasks
3) stream.map.input.field.separator / stream.map.output.field.separator: field delimiter for map task input/output data; the default delimiter is \t.
4) stream.num.map.output.key.fields: number of fields in a map task output record that make up the key
5) stream.reduce.input.field.separator / stream.reduce.output.field.separator: field delimiter for reduce task input/output data; the default is \t.
6) stream.num.reduce.output.key.fields: number of fields in a reduce task output record that make up the key
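The separator and key-field properties above describe a simple line-based protocol: a mapper reads records from stdin and writes key/value lines to stdout, the framework sorts by key, and a reducer reads the sorted lines. A minimal word-count sketch of that protocol in Python (an illustration, not part of the original text; in a real job each function would be a separate script passed to -mapper and -reducer):

```python
import sys
from itertools import groupby

def map_lines(lines):
    """Mapper: emit one 'word\\t1' record per word, using the default \\t delimiter."""
    for line in lines:
        for word in line.strip().split():
            yield f"{word}\t1"

def reduce_lines(sorted_lines):
    """Reducer: input arrives grouped by key; sum the counts for each key."""
    def key_of(line):
        return line.split("\t", 1)[0]
    for word, group in groupby(sorted_lines, key=key_of):
        total = sum(int(line.split("\t", 1)[1]) for line in group)
        yield f"{word}\t{total}"

if __name__ == "__main__":
    # Hadoop performs the sort between the map and reduce phases;
    # sorted() stands in for it when running the sketch locally.
    for out in reduce_lines(sorted(map_lines(sys.stdin))):
        print(out)
```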
In addition, Hadoop ships with some handy mappers and reducers:
(1) Hadoop aggregation function
Aggregate provides a special reducer class and a special combiner class, plus a series of "aggregators" (such as "sum", "max", "min", etc.) that aggregate a sequence of values. To use it, the user writes a mapper plug-in that produces an "aggregatable item" for each input key/value pair; the combiner/reducer then combines these items with the appropriate aggregator. To enable aggregate, simply specify "-reducer aggregate".
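With the aggregate package, each mapper output line names the aggregator to apply in the form "aggregator:key\tvalue". A sketch of such a mapper plug-in in Python (the script itself is hypothetical; the "LongValueSum" aggregator name is from the aggregate package):

```python
import sys

def aggregate_mapper(lines):
    """Emit 'LongValueSum:word\\t1' records; with '-reducer aggregate',
    the built-in reducer sums the values for each word."""
    for line in lines:
        for word in line.strip().split():
            yield f"LongValueSum:{word}\t1"

if __name__ == "__main__":
    for out in aggregate_mapper(sys.stdin):
        print(out)
```

The job then needs no user-written reducer: "-reducer aggregate" is enough.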
(2) Field selection (similar to "cut" in Unix)
Hadoop's tool class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce helps users process text data efficiently, like the "cut" tool in Unix. The map function in this tool class treats each input key/value pair as a list of fields. The user can specify the field delimiter (tab by default) and select any segment of the field list (consisting of one or more fields) as the key or value of the map output. Similarly, the reduce function in the tool class treats each input key/value pair as a list of fields, and the user can select any segment as the key or value of the reduce output.
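The field-selection idea can be shown with a rough Python analogue (an illustration only, not Hadoop's implementation): split each record on a delimiter and pick chosen field indices as the key and value, like Unix cut. The index choices below are arbitrary examples.

```python
def select_fields(record, key_fields, value_fields, sep="\t"):
    """Split a record on sep and rebuild it as key<sep>value,
    where key and value are chosen field indices (cut-style selection)."""
    fields = record.rstrip("\n").split(sep)
    key = sep.join(fields[i] for i in key_fields)
    value = sep.join(fields[i] for i in value_fields)
    return f"{key}{sep}{value}"
```

For example, selecting field 2 as the key and fields 0-1 as the value turns the record "a\tb\tc" into "c\ta\tb".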