Step 2: Use the Spark cache mechanism to observe the efficiency improvement
Building on the above, we execute the following statement:
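The statement itself appeared as a screenshot in the original post; the spark-shell sketch below reproduces it, assuming that in the previous step the "sparks" variable was defined by filtering Spark's README.md for lines containing the word "Spark":

  scala> // assumed earlier definition: val sparks = sc.textFile("README.md").filter(_.contains("Spark"))
  scala> sparks.count    // re-runs the whole computation from the source file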
It turns out that the calculation again produces the same result, 15.
Now go to the Web console:
The console clearly shows that we have performed the "count" operation twice.
Now we call the "cache" operation on the "sparks" variable:
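In the spark-shell this is a single call; the original showed it as a screenshot, so the line below is a sketch:

  scala> sparks.cache()    // marks the RDD for caching; nothing is computed yet, because cache is lazy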
Run the count operation again and check the Web console:
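Again sketched from the screenshot, this is simply the same count; note that this first count after cache() is the one that actually fills the cache:

  scala> sparks.count    // still reads from the source file and populates the cache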
At this point we can see that the three count operations performed so far took 0.7 s, 0.3 s, and 0.5 s respectively.
Now we perform the count operation a fourth time and check the effect in the Web console:
The console shows that this fourth count operation took only 17 ms, roughly 30 times faster than the first three operations. This is the huge speed-up brought by caching, and cache-based computing is one of the core capabilities of Spark!
Step 3: Build the Spark IDE development environment
Step 1: At present, the preferred IDE for Spark development worldwide is IntelliJ IDEA. Download IntelliJ IDEA:
Download the latest version 13.1.4:
For the version selection, the official site offers the following options:
Here we select the free "Community Edition" for Linux, which fully meets Scala development needs of any complexity.
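If you prefer to fetch it from the command line, a download along the following lines should work; the URL is an assumption based on the usual JetBrains naming for the 13.1.4 Community Edition tarball:

  # URL assumed; verify it against the official download page
  wget http://download.jetbrains.com/idea/ideaIC-13.1.4.tar.gz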
After the download is complete, save it to the following local location:
Step 2: Install IDEA and configure the IDEA environment variables
Create the "/usr/local/idea" directory:
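For example (assuming root or sudo privileges on the machine):

  sudo mkdir -p /usr/local/idea    # -p avoids an error if the directory already exists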
Decompress the downloaded IDEA package into this directory:
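A sketch, assuming the downloaded tarball is named ideaIC-13.1.4.tar.gz and sits in the current directory:

  sudo tar -zxvf ideaIC-13.1.4.tar.gz -C /usr/local/idea    # extracts into /usr/local/idea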
After the installation is complete, to make the commands in the bin directory easy to use, we add it to "~/.bashrc":
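For example, lines like the following can be appended to ~/.bashrc; the extracted directory name (written here as idea-IC-<build>) depends on the exact build, so replace it with the directory that tar actually created:

  # IntelliJ IDEA (replace idea-IC-<build> with the real extracted directory name)
  export IDEA_HOME=/usr/local/idea/idea-IC-<build>
  export PATH=$PATH:$IDEA_HOME/bin

Then run "source ~/.bashrc" (or open a new terminal) so the change takes effect.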