Apache Spark iterates quickly, but its basic framework and core components have remained consistent across versions. To study the Spark source code I therefore chose the Apache Spark 1.0.0 release; by analyzing how its major modules work, we can understand how Spark runs.
We start by stepping through the Spark source with the LocalWordCount example program:
LocalWordCount first sets the master and app name via SparkConf, then instantiates a SparkContext from that SparkConf, reads a local file through the SparkContext, splits the text to count word occurrences, and finally prints the result.
```scala
package org.apache.spark.examples

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

object LocalWordCount {
  def main(args: Array[String]) = {
    val sparkConf = new SparkConf().setMaster("local").setAppName("Local Word Count")
    val sc = new SparkContext(sparkConf)
    val file = sc.textFile("README.md")
    val count = file.flatMap(line => line.split(" "))
                    .map(word => (word, 1))
                    .reduceByKey(_ + _)
    count foreach println
  }
}
```
For compiling the Apache Spark source and importing it into IntelliJ IDEA, see: http://8liang.cn/intellij-idea-spark-development/
Following @jerrylead's Spark 0.7 architecture diagram, Apache Spark can be divided into four main modules, scheduler, RDD, deploy, and storage, along with a number of smaller components; a short sketch after the list maps the word-count steps onto these modules:
- Scheduler: resource and task scheduling
- RDD: the distributed dataset abstraction and its operations
- Deploy: deployment modes
- Storage: data storage
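To make the decomposition concrete, here is a minimal sketch that annotates each step of the word-count program with the module it mainly exercises. The object name `ModuleWalkthrough` and the comments are my own orientation notes, not an exact call trace of the Spark 1.0.0 internals.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._   // needed in Spark 1.0.0 for reduceByKey (PairRDDFunctions)

object ModuleWalkthrough {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local").setAppName("Module Walkthrough")
    val sc = new SparkContext(conf)              // deploy/scheduler: the master URL "local" selects an
                                                 //   in-process backend instead of a real cluster
    val file = sc.textFile("README.md")          // rdd: builds a lazily evaluated RDD over the file
    val counts = file.flatMap(line => line.split(" "))   // rdd: transformations only record lineage,
                     .map(word => (word, 1))             //      nothing executes yet
                     .reduceByKey(_ + _)                  // storage: the shuffle this implies moves data
                                                          //   through the block manager
    counts.foreach(println)                      // scheduler: the action submits a job; the DAGScheduler
                                                 //   splits it into stages and tasks
    sc.stop()
  }
}
```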
Spark uses a master-slave architecture; communication between master and slaves, and between submodules, follows the Scala actor model (Akka in Spark 1.0.0).
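As an illustration of that message-passing style, below is a minimal Akka sketch of a worker registering with a master. It is only a toy modeled on the pattern: the actor and message names here are hypothetical, and I assume an Akka 2.3-era API (`system.shutdown()`); Spark's real protocol lives in the deploy module (see `DeployMessages`) with different fields and many more message types.

```scala
import akka.actor.{Actor, ActorRef, ActorSystem, Props}

// Hypothetical messages, not Spark's actual deploy protocol
case class RegisterWorker(workerId: String, cores: Int)
case class RegisteredWorker(masterUrl: String)

class ToyMaster extends Actor {
  def receive = {
    case RegisterWorker(id, cores) =>
      println(s"master: worker $id registered with $cores cores")
      sender ! RegisteredWorker("spark://toy-master:7077")
  }
}

class ToyWorker(master: ActorRef) extends Actor {
  // register with the master as soon as the worker actor starts
  override def preStart(): Unit = master ! RegisterWorker("worker-1", cores = 4)
  def receive = {
    case RegisteredWorker(url) =>
      println(s"worker: acknowledged by master at $url")
  }
}

object ActorSketch {
  def main(args: Array[String]): Unit = {
    val system = ActorSystem("toy-cluster")
    val master = system.actorOf(Props[ToyMaster], "master")
    system.actorOf(Props(new ToyWorker(master)), "worker")
    Thread.sleep(1000)    // let the two messages exchange before shutting down
    system.shutdown()     // Akka 2.3-era call; newer Akka versions use system.terminate()
  }
}
```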