STEP1: Start the Spark cluster (the procedure is covered in detail in the third lecture); once it is up, its Web UI looks as follows:
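For reference, a minimal sketch of the start command run on the master node; the installation path is an assumption, and older releases keep the script under bin/ instead of sbin/:

    cd /usr/local/spark        # assumed Spark installation directory
    ./sbin/start-all.sh        # starts the standalone master and the workers listed in conf/slaves

By default the standalone master's Web UI is served on port 8080 of the master node.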
STEP2: Start the Spark shell:
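A sketch of the launch command, again assuming the installation directory above; depending on the Spark version, the master to connect to is taken from the MASTER environment variable, conf/spark-defaults.conf, or an explicit --master option:

    ./bin/spark-shell          # opens an interactive Scala shell connected to the cluster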
You can now observe the shell's state through the following Web console:
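(This is the application UI that the Spark shell starts on the driver; by default it listens on port 4040 of the machine running the shell.)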
STEP3: Copy the file "README.md" from the Spark installation directory to the HDFS system.
Start a new command terminal on the master node and change to the Spark installation directory:
We copy the file to the root folder of HDFS:
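A sketch of the commands; the installation path is an assumption, and the hadoop client is assumed to be on the PATH:

    cd /usr/local/spark               # assumed Spark installation directory
    hadoop fs -put README.md /        # upload README.md to the HDFS root directory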
At this point, looking at the Web console, we find that the file has been successfully uploaded to HDFS:
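The same check can also be done from the command line (again assuming the hadoop client is on the PATH):

    hadoop fs -ls /                   # README.md should now appear in the HDFS root directory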
STEP4: Use the Spark shell to write code that operates on the "README.md" we just uploaded.
First, let's look at "sc" in the shell environment, an environment variable the shell produces for us automatically:
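A sketch of what this looks like in the shell; the printed value is illustrative and varies by Spark version:

    scala> sc
    res0: org.apache.spark.SparkContext = org.apache.spark.SparkContext@...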
We can see that sc is an instance of SparkContext, which the system generates automatically when the Spark shell is launched. SparkContext is the channel through which our code is submitted to the cluster or run locally; whenever we write Spark code, whether it is to run locally or on a cluster, we must have a SparkContext instance.
Next, we read the file "README.md":
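A sketch of the read; the HDFS URI is an assumption and should match your NameNode address (here assumed to be master:9000):

    scala> val file = sc.textFile("hdfs://master:9000/README.md")   // returns an RDD of the file's lines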
We save the result of the read in the variable file; file is in fact a MappedRDD. In Spark code, everything is based on operations on RDDs.
Next, we filter out all the lines containing the word "Spark" from the file we just read:
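A sketch of the filter; the variable name sparks is an illustrative choice:

    scala> val sparks = file.filter(line => line.contains("Spark"))   // keep only the lines that mention "Spark"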
A FilteredRDD is generated at this point.
Next, let's count how many lines contain "Spark":
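Continuing the sketch above, count is the action that actually triggers the job:

    scala> sparks.count   // returns the number of matching lines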
From the execution result we find that "Spark" appears in 15 lines altogether.
At this point, we look at the Web console of the Spark shell:
The console shows that we have submitted a job and that it completed successfully; clicking on it shows the execution details:
So how do we verify that the Spark shell's count of 15 lines containing "Spark" in this README.md file is correct? The method is very simple: we can count with the wc command that comes with Ubuntu, as follows:
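A sketch of the check, run from the Spark installation directory; piping grep into wc like this counts matching lines:

    grep Spark README.md | wc -l      # prints the number of lines containing "Spark"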
The result here is also 15, the same as the count from the Spark shell.