The command to stop the historyserver daemon is as follows:
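On a standard Hadoop 2.x installation, the JobHistory Server is stopped with the sbin helper script shown below; the exact path depends on where Hadoop is installed, so treat this as a sketch rather than the exact command from the original screenshots.

# Stop the MapReduce JobHistory Server daemon
$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh stop historyserver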
Step 4: Verify the Hadoop Distributed Cluster
First, create two directories on the HDFS file system. The creation process is as follows:
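The commands below are a minimal sketch of the directory creation, assuming the paths named in this article (/data/wordcount for the input data and /output as the parent of the result directory). Note that MapReduce refuses to start a job whose output directory already exists, so /output/wordcount itself is left for the job to create.

# Create the HDFS directory that will hold the wordcount input files
hadoop fs -mkdir -p /data/wordcount
# Create the parent directory for the job output (the job creates /output/wordcount itself)
hadoop fs -mkdir -p /output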
/data/wordcount in HDFS is used to store the input data files for the wordcount example that ships with Hadoop, and the job writes its results to the /output/wordcount directory. Through the web console, we can confirm that the two folders were created successfully:
Next, upload the local data file to the HDFS folder:
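As a sketch, any local text file can serve as input; the file name below (for example, Hadoop's own README.txt) is only an illustration.

# Copy a local text file into the HDFS input directory
hadoop fs -put README.txt /data/wordcount/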
Through the web console, we can confirm that the file was uploaded successfully:
You can also use Hadoop's HDFS commands to view this information from the command-line terminal:
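For example, the following commands list the input directory and print the uploaded file (the file name is the illustrative one used above):

# List the contents of the input directory
hadoop fs -ls /data/wordcount
# Print the uploaded file to the terminal
hadoop fs -cat /data/wordcount/README.txt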
To run the wordcount example provided by Hadoop, execute the following command:
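A sketch of the command is shown below; the examples jar name and version vary between Hadoop releases, so adjust the path to match your installation.

# Run the built-in wordcount example on the uploaded data,
# writing the result to /output/wordcount (which must not exist yet)
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
  wordcount /data/wordcount /output/wordcount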
The running process is as follows: