Preface
Spark is a popular framework that has succeeded Hadoop in the field of distributed computing. I have recently been studying the basics of Spark, so I summarize them here and compare Spark with Hadoop.
What is Spark?
Spark is an open-source, general-purpose distributed computing framework introduced by UC Berkeley's AMPLab in 2009. It uses a computing model similar to Hadoop's, but improves on it in several design concepts. In summary, Spark is a fast cluster computing technology that:
- Extends Hadoop's MapReduce model with a richer set of operations (see the word-count sketch after this list)
- Supports more distributed computing scenarios, including interactive queries and stream processing
- Performs memory-based cluster computing, which greatly improves computation speed
- Provides a more efficient fault-tolerance mechanism
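As a rough illustration of the extended MapReduce model, here is a minimal word-count sketch in Scala. It assumes the spark-shell, where `sc` (the SparkContext) is predefined, and `input.txt` is a hypothetical local file:

```scala
// Run in the spark-shell, where `sc` is predefined; "input.txt" is a hypothetical input file
val counts = sc.textFile("input.txt")
  .flatMap(line => line.split("\\s+"))   // map phase: break lines into words
  .map(word => (word, 1))                // emit (word, 1) pairs
  .reduceByKey(_ + _)                    // reduce phase: sum counts per word

counts.collect().foreach(println)
```

The entire MapReduce-style job is expressed as a short chain of transformations on a single collection, rather than as separate map and reduce classes.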
Given the current trend in Spark's development, more and more enterprises will adopt Spark, and in the open-source community Spark's code activity has already overtaken Hadoop's. Below are the current enterprise users of Spark, as well as Spark's code activity from the 2015 report.
Key structure in Spark: RDD
RDD (Resilient Distributed Dataset) is a read-only, partitioned collection of records built on top of a distributed file system. RDDs are stored in memory, and the operations in a Spark compute task are based on RDDs. The read-only nature of an RDD means that its state is immutable: it cannot be modified in place, and a new RDD can only be generated through a series of transformations from the original on-disk data or from other RDDs. "Partitioned" means that the elements of an RDD are split by key and saved across multiple nodes; when data is lost, only the data of the lost partitions needs to be recomputed, without affecting the entire system. Because Spark's RDD-based design keeps some intermediate data in memory, it saves the time Hadoop spends reading data back from local disk and improves computation speed, which makes Spark especially well suited to iterative computing scenarios.
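A minimal sketch of these RDD properties, again assuming the spark-shell with `sc` predefined; the data and variable names are purely illustrative:

```scala
import org.apache.spark.HashPartitioner

// Illustrative key-value records
val base = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))

// Transformations never modify `base`; each one returns a new, immutable RDD
val doubled = base.mapValues(_ * 2)

// Key-based partitioning spreads the records across partitions/nodes
val partitioned = doubled.partitionBy(new HashPartitioner(4))

// Caching keeps the partitions in memory, so iterative jobs avoid repeated disk reads
partitioned.cache()
println(partitioned.reduceByKey(_ + _).collect().mkString(", "))
```

Note that `base` is still available and unchanged after all of these calls; every step produced a new RDD rather than mutating an existing one.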
Spark's fault tolerance mechanism
Fault tolerance is a problem that cannot be neglected in distributed computing, and Spark also makes a breakthrough here, based mainly on the properties of RDDs. RDD fault tolerance is called the lineage mechanism: each RDD stores enough lineage information to restore its data partitions from stable storage. The lineage is the sequence of coarse-grained transformations, such as filter, map, and join, applied to a particular dataset, recording how an RDD is generated from other datasets.
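A small sketch of lineage in practice, assuming the spark-shell with `sc` predefined and made-up data: Spark's `toDebugString` prints the chain of transformations it would replay to rebuild a lost partition.

```scala
// Illustrative (user, score) records
val events = sc.parallelize(Seq(("user1", 3), ("user2", 5), ("user1", 7)))

// Each coarse-grained transformation is recorded in the lineage, not the data it produces
val cleaned = events.filter { case (_, score) => score > 0 }
val perUser = cleaned.reduceByKey(_ + _)

// Print the lineage graph Spark would replay to recompute a lost partition
println(perUser.toDebugString)
```

Because only the transformations are recorded, recovery recomputes just the affected partitions instead of replicating every intermediate dataset.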
Spark vs. Hadoop
Spark Distributed Computing Framework