"http://www.aliyun.com/zixun/aggregation/37954.html" Spark is a distributed data rapid analysis project developed by the University of California, Berkeley AMP Its core technology is flexible Resilient distributed datasets, which provide a richer MapReduce model than Hadoop, enable rapid iteration of datasets in memory to support complex data mining algorithms and graph computation algorithms.
Spark is written in Scala and uses Mesos as its underlying scheduling framework. It integrates tightly with Hadoop and EC2, reading files directly from HDFS or S3 for computation and writing the results back to HDFS or S3, making it part of the Hadoop and Amazon cloud-computing ecosystems. Spark is a small and elegant project: its core consists of only 63 Scala files, a model of conciseness.
Spark's dependencies
MapReduce model: As a distributed computing framework, Spark follows the MapReduce model. It traces its lineage to Google's MapReduce and to Hadoop, so it is clearly not a radical invention but an incremental one: keeping the basic idea unchanged, it borrows from and builds on its predecessors while adding improvements that greatly raise MapReduce's efficiency.

Functional programming: Spark is written in Scala, and Scala is also its supported language, in part because Scala supports functional programming. This keeps the Spark codebase concise, and it also makes development on top of Spark particularly simple: a complete MapReduce job in Hadoop requires a Mapper class and a Reducer class, whereas Spark only needs a corresponding map function and reduce function, which greatly reduces the amount of code (see the sketch after this list).

Mesos: Spark hands the concerns of distributed execution over to Mesos and does not deal with them itself, which is another reason its code can stay so lean.

HDFS and S3: Spark supports two distributed storage systems, HDFS and S3, arguably the two most mainstream choices. The file-system read and write functions are provided by Spark itself and implemented in distributed fashion with the help of Mesos.
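To make that contrast concrete, here is a minimal word-count sketch in Scala. It is an illustration under assumptions, not a definitive program: the input and output paths are placeholders, and on a real deployment the master string would name the Mesos master rather than "local".

    import org.apache.spark.SparkContext
    import org.apache.spark.SparkContext._   // brings reduceByKey into scope via implicits

    object WordCount {
      def main(args: Array[String]) {
        val sc = new SparkContext("local", "WordCount")   // "local" is a placeholder master
        sc.textFile("hdfs://namenode/input.txt")          // placeholder HDFS path; an s3:// path also works
          .flatMap(_.split(" "))                          // the "map" side: emit one word per record
          .map(word => (word, 1))
          .reduceByKey(_ + _)                             // the "reduce" side: sum the counts per word
          .saveAsTextFile("hdfs://namenode/counts")       // write the results back to HDFS
        sc.stop()
      }
    }

Where Hadoop would require a Mapper class, a Reducer class and a job driver, the whole job here is a handful of function literals passed to Spark.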
Spark compared with Hadoop
In-memory computation: Spark keeps intermediate data in memory, which makes it more efficient for iterative computation, so it is better suited than Hadoop to the iteration-heavy workloads of machine learning and data mining. This is possible because of Spark's core abstraction, the RDD.

Generality: Spark is more general than Hadoop, which offers only Map and Reduce operations. Spark provides many types of dataset operations: transformations such as map, filter, flatMap, sample, groupByKey, reduceByKey, union, join, cogroup, mapValues, sort and partitionBy, and actions such as count, collect, reduce, lookup and save. These diverse operation types make it convenient to develop higher-level applications. The communication model between processing nodes is no longer limited to Hadoop's single data-shuffle pattern: users can name and materialize intermediate results and control their storage and partitioning, so the programming model is more flexible than Hadoop's. However, because of the nature of RDDs, Spark is not suited to applications that make asynchronous fine-grained state updates, such as the storage layer of a web service or an incremental web crawler and indexer; it does not fit application models built on incremental modification.

Fault tolerance: fault tolerance when computing distributed datasets is achieved through checkpoints. There are two approaches, checkpointing the data itself or logging the updates that produced it, and users can choose which one to use.

Usability: Spark improves usability by providing rich Scala, Java and Python APIs and an interactive shell. A sketch of the transformation/action split follows.
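The sketch below illustrates two of the points above: transformations are lazy descriptions of a dataset while actions trigger computation, and caching keeps data in memory across iterations. The HDFS path and the toy one-dimensional clustering loop are illustrative assumptions, not part of Spark's API beyond the calls shown.

    import org.apache.spark.SparkContext
    import org.apache.spark.SparkContext._

    val sc = new SparkContext("local", "IterativeDemo")  // placeholder master and app name

    // Transformations such as map are lazy: they only describe the dataset.
    val points = sc.textFile("hdfs://namenode/points.txt")  // placeholder path
                   .map(_.toDouble)
                   .cache()                                 // keep the RDD in memory between passes

    // Actions such as count and reduce actually run a job on the cluster.
    var center = 0.0
    for (i <- 1 to 10) {
      val near = points.filter(p => math.abs(p - center) < 10.0)  // transformation: still lazy
      if (near.count() > 0)                                       // action: triggers computation
        center = near.reduce(_ + _) / near.count()                // each pass reads from memory, not HDFS
    }
    println("center = " + center)

In a Hadoop MapReduce job each of these ten passes would be a separate job re-reading its input from HDFS; here the cached RDD is reused, which is exactly the iterative workload Spark is designed for.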