Hadoop Streaming supports using shell commands as the mapper and reducer. Note, however, that a pipeline of multiple commands, such as cat and grep, cannot be passed directly; to combine commands you must wrap them in a script, as sketched below.
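A minimal sketch of such a wrapper script (the script name mapper.sh and the commands inside it are illustrative, not part of the original example):

$ cat mapper.sh
#!/bin/sh
# Read records from standard input, keep only lines containing "hello",
# then emit the first tab-separated field of each matching line.
# Wrapping the pipeline in a script lets Streaming run it as one mapper command.
grep hello | cut -f 1

The script would then be shipped with the job and used as the mapper, e.g. -mapper mapper.sh -file mapper.sh.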
The following example uses grep to search a large data set:
1. Put the data to be searched into HDFS:
$ hadoop fs -put localfile /user/hadoop/hadoopfile
Usage: hadoop fs -put ...
Copies one or more source paths from the local file system to the destination file system. It can also read input from standard input and write it to the destination file system.
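As a side note, the stdin form avoids the intermediate local file. A minimal illustration (in most Hadoop versions, the "-" argument tells -put to read from standard input; the path is the one used above):

$ echo "hello world" | hadoop fs -put - /user/hadoop/hadoopfile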
2. Run the search:
$ hadoop streaming -input /user/hadoop/hadoopfile -output /user/hadoop/result -mapper "grep hello" -jobconf mapred.job.name="grep-test" -jobconf stream.non.zero.exit.is.failure=false -jobconf mapred.reduce.tasks=1
Note:
-input /user/hadoop/hadoopfile: directory of files to be processed
-output /user/hadoop/result: directory for storing the processing results
-mapper "grep hello": the map program
-jobconf mapred.job.name="grep-test": the job name
-jobconf stream.non.zero.exit.is.failure=false: do not treat a non-zero exit code from the map/reduce program as a failure. By default, a mapper or reducer that exits with a non-zero status is considered to have failed; the task is retried, and after the default limit of four attempts the entire job fails. This setting matters here because grep exits with 1 when no matching line is found (a quick local check is shown after these notes).
-jobconf mapred.reduce.tasks=1: the number of reduce tasks. It can also be set to 0, which tells the map/reduce framework not to create a reducer task at all.
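The grep exit-status behavior is easy to verify locally (an illustrative check, not part of the job itself):

$ echo "no match here" | grep hello; echo $?
1
$ echo "hello world" | grep hello; echo $?
hello world
0

Without stream.non.zero.exit.is.failure=false, the exit code 1 from any mapper whose input split contains no "hello" would cause that task to be marked as failed.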
3. View the result:
$ hadoop fs -cat /user/hadoop/result/part-00000
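With mapred.reduce.tasks=1 there is a single part file; with more reducers the output is split across several part-NNNNN files, which can be read with a glob or copied back locally (paths as above, purely illustrative):

$ hadoop fs -cat /user/hadoop/result/part-*
$ hadoop fs -get /user/hadoop/result ./result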