The previous article described in detail how to set up a Hadoop environment. Today we will introduce how to run WordCount, the classic first example, in that environment.
We run the WordCount example that ships with Hadoop in pseudo-distributed mode to walk through the MapReduce process. Note that the program reads its input from and writes its output to HDFS, so the files we create below live on the distributed file system:
1. Prepare the WordCount input files
First, create a "file" folder in the "/home/hadoop" directory. In it, create two files to upload, file1.txt and file2.txt, and set the content of file1.txt to "Hello World" and that of file2.txt to "Hello hadoop".
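These files can be created from the shell; a minimal sketch, assuming the two files go inside the new "file" folder:

mkdir /home/hadoop/file
echo "Hello World" > /home/hadoop/file/file1.txt
echo "Hello hadoop" > /home/hadoop/file/file2.txt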
2. Create an input folder named input on HDFS
hadoop fs -mkdir input
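To confirm the directory was created, you can list the contents of your HDFS home directory:

hadoop fs -ls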
3. Upload the prepared test files to the input directory of the HDFS file system:
hadoop fs -put /home/hadoop/file/file1.txt input
hadoop fs -put /home/hadoop/file/file2.txt input
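You can verify the upload by listing the input directory; both files should appear:

hadoop fs -ls input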
4. Run WordCount
(hadoop-0.20.2-examples.jar is an example jar that ships with Hadoop. The jar name varies with the Hadoop version, so check the actual name in your installation directory.)
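For example, to look up the jar name, assuming the HADOOP_HOME variable points at your installation and the examples jar sits in its root directory, as in Hadoop 0.20.2:

ls $HADOOP_HOME/hadoop-*-examples.jar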
hadoop jar hadoop-0.20.2-examples.jar wordcount input outputo
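When the job completes, you can list the output directory; the result is written to a part-r-00000 file (the exact file list may vary by Hadoop version):

hadoop fs -ls outputo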
5. View the results
hadoop dfs -cat outputo/part-r-00000
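Given the two input files above ("Hello World" and "Hello hadoop"), WordCount should count Hello twice and World and hadoop once each, so the output should look like:

Hello	2
World	1
hadoop	1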
6. Finally, enter the bin directory and run stop-all.sh to stop Hadoop.
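A minimal sketch, assuming HADOOP_HOME points at your installation:

cd $HADOOP_HOME/bin
./stop-all.sh
jps    # check that the Hadoop daemons are no longer listed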