This section describes various Hadoop Streaming parameters.
An example of submitting a Hadoop Streaming job:
$HADOOP_HOME/bin/hadoop streaming \
    -input /user/test/input \
    -output /user/test/output \
    -mapper "mymapper.sh" \
    -reducer "myreducer.sh" \
    -file /home/work/mymapper.sh \
    -file /home/work/myreducer.sh \
    -jobconf mapred.job.name="file-demo"
The preceding command submits a Hadoop Streaming job. The input is /user/test/input and the output is written to /user/test/output. The map program is mymapper.sh and the reduce program is myreducer.sh. Note that these two script files must be distributed to the cluster nodes with the -file option. The job name is specified on the last line.
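The original post does not show the contents of the two scripts, so here is a hypothetical word-count pair as an illustration. A streaming mapper reads lines from stdin and writes tab-separated key/value pairs to stdout; the reducer receives those pairs sorted by key. This convention means the scripts can be tested locally with an ordinary shell pipeline (map | sort | reduce) before submitting them to the cluster:

```shell
#!/bin/sh
# Hypothetical mymapper.sh: emit each word as "word<TAB>1".
cat > mymapper.sh <<'EOF'
#!/bin/sh
awk '{for (i = 1; i <= NF; i++) print $i "\t" 1}'
EOF

# Hypothetical myreducer.sh: input arrives sorted by key,
# so sum the counts for each word.
cat > myreducer.sh <<'EOF'
#!/bin/sh
awk -F'\t' '{count[$1] += $2} END {for (w in count) print w "\t" count[w]}'
EOF
chmod +x mymapper.sh myreducer.sh

# Simulate the streaming framework locally: map | sort | reduce.
# (Final sort is only to make the output order deterministic.)
printf 'hello world\nhello hadoop\n' | ./mymapper.sh | sort | ./myreducer.sh | sort
```

Running the pipeline locally like this is a quick way to catch script bugs without waiting for a cluster job to fail.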
There are also more complex use cases, such as limiting the number of concurrent tasks, which can be done with:
-jobconf mapred.job.map.capacity=M -jobconf mapred.job.reduce.capacity=N
The preceding options allow at most M map tasks and N reduce tasks to run simultaneously. If M or N is 0 or unspecified, the corresponding capacity is unlimited; the default is 0, i.e., no restriction. We recommend setting both the map and reduce capacity to prevent a job from occupying excessive resources.
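For example, the earlier submit command could be extended with illustrative capacity values (10 concurrent map tasks, 5 concurrent reduce tasks). This is a sketch only: the values are arbitrary, and the command requires a running Hadoop cluster with the paths shown above.

```shell
$HADOOP_HOME/bin/hadoop streaming \
    -input /user/test/input \
    -output /user/test/output \
    -mapper "mymapper.sh" \
    -reducer "myreducer.sh" \
    -file /home/work/mymapper.sh \
    -file /home/work/myreducer.sh \
    -jobconf mapred.job.map.capacity=10 \
    -jobconf mapred.job.reduce.capacity=5 \
    -jobconf mapred.job.name="file-demo"
```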
Only the most basic usage has been covered here. Hadoop Streaming has many more advanced features, including powerful options for controlling sorting, which I will not go into in this article. If you need them, leave me a comment and ask; if I have run into the same problem, I will try to provide a solution. If an error occurs while a job is running, refer to my other article on Hadoop error codes.