Spark is a cluster computing platform that originated at the University of California, Berkeley's AMPLab. It is a rare all-rounder: built on in-memory computing, its performance exceeds Hadoop's, and starting from multi-pass iterative batch processing it also embraces data warehousing, stream processing, and graph computation. Spark is now a top-level open source project of the Apache Foundation, with huge community support (more active developers than Hadoop MapReduce), and its technology is maturing.
Scala environment variable configuration: download version 2.11.7 from http://spark.apache.org/download, then configure the environment variables after installation.
After downloading version 4.1.0 from http://scala-ide.org, place the contents of its features and plugins folders into the corresponding Eclipse folders. After restarting Eclipse, Scala support appears, and you can create Scala projects, packages, and classes.
Create a new system variable pointing to the Scala installation directory, then append the Scala bin folder to the system Path variable.
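As a sketch of what this looks like, assuming Scala was installed to C:\scala and the variable is named SCALA_HOME by convention (both are assumptions; adjust to your actual installation path):

    SCALA_HOME = C:\scala                  (new system variable; path is an assumption)
    Path       = ...;%SCALA_HOME%\bin      (append the bin folder to the existing Path)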
Once the environment variables are configured, we can verify the setup in the console; for example, we can also check the version of the installed Java JDK.
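For example, running the following commands in a new console window prints the installed Scala and Java versions (the exact output depends on the versions you installed):

    scala -version
    java -version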
After the environment variable configuration is complete, we can try things out; for example, we can print "Hello Scala" and perform addition operations, such as:
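A minimal sketch of such a session in the Scala REPL, which is started by typing scala at the console (the name res0 is assigned automatically by the REPL to the unnamed result):

    scala> println("Hello Scala")
    Hello Scala

    scala> 1 + 2
    res0: Int = 3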
Next, we write our first program in Scala.
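As a hedged sketch of such a first program (the object name HelloScala and the file name are our own choices for illustration), a standalone version of the REPL example looks like this:

    // HelloScala.scala -- a first standalone Scala program
    object HelloScala {
      def main(args: Array[String]): Unit = {
        // Print a greeting, then the result of a simple addition
        println("Hello Scala")
        println(1 + 2)
      }
    }

It can be compiled with scalac HelloScala.scala and run with scala HelloScala, or run from a Scala project in Eclipse as set up above.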