The command to stop the historyserver is as follows:
(Screenshot 51.png: http://s3.51cto.com/wyfs02/M02/4D/AD/wKioL1RXI_iBF3KwAAC-PTC7sXk177.jpg)
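The command itself survives only in the screenshot above. On Hadoop 2.x it is typically the JobHistory Server daemon script; the exact script name below is an assumption based on the Hadoop version this series uses:

```shell
# Stop the MapReduce JobHistory Server on the node where it runs.
# Assumes $HADOOP_HOME/sbin is on the PATH (Hadoop 2.x daemon script).
mr-jobhistory-daemon.sh stop historyserver
```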
Step 4: Verify the Hadoop distributed cluster
First, create two directories on the HDFS file system. The creation process is as follows:
(Screenshot 52.png: http://s3.51cto.com/wyfs02/M01/4D/AE/wKiom1RXI5mjRTSFAADMDdFoZzE955.jpg)
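The creation commands are visible only in the screenshot; a likely sequence, assuming the standard `hadoop fs` client and the directory names used later in this article, is:

```shell
# Create the HDFS input directory for the wordcount example
hadoop fs -mkdir -p /data/wordcount

# Create the parent of the output location. Note that the job's own
# output directory (/output/wordcount) must NOT exist before the run,
# or the MapReduce job will refuse to start.
hadoop fs -mkdir -p /output
```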
The /data/wordcount directory in HDFS stores the input data files for the wordcount example that ships with Hadoop, and the program's results are written to the /output/wordcount directory. Through the web console, we can see that the two folders were created successfully:
(Screenshot 53.png: http://s3.51cto.com/wyfs02/M02/4D/AD/wKioL1RXI_jiCeUUAAKjM9eHuEg297.jpg)
Next, upload the local data file to the HDFS folder:
(Screenshot 54.png: http://s3.51cto.com/wyfs02/M02/4D/AE/wKiom1RXI5minCMOAACtC2h6bcQ964.jpg)
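The exact file uploaded is shown only in the screenshot. As an illustration (the local file name here is an assumption), the upload uses `hadoop fs -put`:

```shell
# Copy a local text file into the HDFS input directory.
# README.txt is a placeholder; substitute the data file used in the article.
hadoop fs -put README.txt /data/wordcount/
```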
Through the web console, we can see that the file was uploaded successfully:
(Screenshot 55.png: http://s3.51cto.com/wyfs02/M00/4D/AD/wKioL1RXI_nALIrcAAPWJ1ijMyU126.jpg)
You can also use Hadoop's HDFS commands to view this information from the terminal:
(Screenshot 56.png: http://s3.51cto.com/wyfs02/M02/4D/AE/wKiom1RXI5qQ1B8KAALCAm5Icpg010.jpg)
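The terminal check shown in the screenshot is typically done with the HDFS listing and cat commands (the file name below is the same illustrative placeholder as above):

```shell
# List the contents of the input directory on HDFS
hadoop fs -ls /data/wordcount

# Print an uploaded file directly from HDFS (file name is illustrative)
hadoop fs -cat /data/wordcount/README.txt
```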
To run the wordcount example provided by Hadoop, issue the following command:
(Screenshot 57.png: http://s3.51cto.com/wyfs02/M00/4D/AD/wKioL1RXI_mAm7NpAAD8jNy3C_A428.jpg)
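The command in the screenshot is not preserved as text; a sketch of the standard invocation, assuming a Hadoop 2.x layout under `$HADOOP_HOME`, is:

```shell
# Run the bundled wordcount example. The examples jar path and version
# number vary between Hadoop releases, so a wildcard is used here.
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
  wordcount /data/wordcount /output/wordcount
```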
The running process is as follows:
(Screenshot 58.png: http://s3.51cto.com/wyfs02/M00/4D/AE/wKiom1RXI5rh5sqQAAac3yM0xpU732.jpg)
(Screenshot 59.png: http://s3.51cto.com/wyfs02/M01/4D/AD/wKioL1RXI_nwoyBQAARnBeVGBBI013.jpg)
(Screenshot 60.png: http://s3.51cto.com/wyfs02/M01/4D/AE/wKiom1RXI5vzD2mdAAKbb05EY5o005.jpg)
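Once the job finishes, the results can also be inspected from the command line. This is a sketch: the `part-r-00000` file name assumes the job ran with a single reducer.

```shell
# A successful job leaves a _SUCCESS marker next to the part files
hadoop fs -ls /output/wordcount

# Print the word counts produced by the (single) reducer
hadoop fs -cat /output/wordcount/part-r-00000
```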
This article is from the Spark Asia Pacific Research Institute blog. Please be sure to keep this source: http://rockyspark.blog.51cto.com/2229525/1571250
[Spark Asia Pacific Research Institute Series] The path to Spark practice: Chapter 1, building a Spark cluster (Step 5) (6)