Spark Project Tungsten Reading Notes
The stated goal of Project Tungsten is "Bringing Apache Spark Closer to Bare Metal". My understanding: the point is not to let the hardware become the bottleneck of Spark performance, but to exploit hardware resources (CPU, memory, I/O, network) to the fullest.
Tungsten has three major initiatives.
1. Memory Management and Binary Processing: leverage application semantics to manage memory explicitly, eliminating the overhead of the JVM object model and garbage collection.
My understanding is that Spark uses sun.misc.Unsafe to manage memory directly instead of relying on the JVM garbage-collection mechanism. This feature is available in Spark 1.4 and 1.5. The Unsafe-based hash map (both on-heap and off-heap variants) can process about 1 million aggregation operations per second per thread, outperforming java.util.HashMap.
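To make the Unsafe idea concrete, here is a minimal sketch of the kind of raw, GC-free memory access sun.misc.Unsafe provides. This is not Spark's actual internals (Spark wraps this in its own MemoryManager and Platform classes); it only shows the primitive operations involved:

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class OffHeapDemo {
    public static void main(String[] args) throws Exception {
        // sun.misc.Unsafe has no public constructor; the singleton is
        // obtained via reflection on its private "theUnsafe" field.
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);

        // Allocate 16 bytes outside the JVM heap -- invisible to the GC.
        long addr = unsafe.allocateMemory(16);
        unsafe.putLong(addr, 42L);          // write a long at the raw address
        long value = unsafe.getLong(addr);  // read it back
        System.out.println(value);          // prints 42
        unsafe.freeMemory(addr);            // manual deallocation, no GC involved
    }
}
```

Because allocation and deallocation are explicit, data stored this way never contributes to GC pauses, which is exactly the overhead Tungsten is trying to avoid.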
2. Cache-aware Computation: algorithms and data structures that exploit the memory hierarchy.
Sorting is made cache-friendly by exploiting the CPU's L1/L2/L3 caches: a key prefix is stored alongside each record pointer, so most comparisons read sequential memory instead of chasing random pointers (which is how the cache hit rate is increased). This makes sorting up to 3x faster than in previous versions, and it benefits sort, sort-merge join, and high-cardinality aggregation performance.
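The prefix-sort idea can be sketched as follows. This is a toy illustration, not Spark's UnsafeExternalSorter: instead of sorting pointers to records (one random memory access per comparison), we sort an array of (keyPrefix, recordIndex) pairs packed into longs, so comparisons stay within a compact, cache-resident array:

```java
import java.util.Arrays;

public class PrefixSortSketch {
    public static void main(String[] args) {
        String[] records = {"cherry", "apple", "banana"};

        // Pack each entry into one long:
        // high 32 bits = first character of the key (the "prefix"),
        // low 32 bits  = index of the record it points to.
        long[] packed = new long[records.length];
        for (int i = 0; i < records.length; i++) {
            packed[i] = ((long) records[i].charAt(0) << 32) | i;
        }

        // Sorting compares prefixes without dereferencing the records at all.
        Arrays.sort(packed);

        for (long p : packed) {
            System.out.println(records[(int) (p & 0xFFFFFFFFL)]);
        }
    }
}
```

In the real implementation the full record is only consulted when two prefixes tie; for most data, the sort loop never leaves the packed array, which is what keeps it in cache.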
3. Code Generation: use code generation to exploit modern compilers and CPUs.
Code generation moves expression evaluation from record-at-a-time interpretation toward vectorized evaluation, processing multiple records at a time. The code-generated serializer roughly doubles shuffle performance compared with Kryo (in a test scenario shuffling 8 million rows).
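The following is a hedged illustration of why code generation helps; it is not Spark's actual Janino-based codegen. An interpreted expression tree pays a virtual call per operator per row, while "generated" code collapses the whole expression into one tight loop the JIT compiler can optimize:

```java
public class CodegenSketch {
    interface Expr { long eval(long row); }  // interpreted, tree-walking form

    public static void main(String[] args) {
        // Expression tree for (row + 1) * 2, evaluated via virtual dispatch:
        // each operator is a separate node and a separate call per row.
        Expr plusOne = row -> row + 1;
        Expr times2  = row -> plusOne.eval(row) * 2;

        long interpreted = 0, generated = 0;
        for (long row = 0; row < 1000; row++) {
            interpreted += times2.eval(row);

            // What generated code amounts to: the same expression inlined
            // directly into the loop -- no virtual calls, no tree walk.
            generated += (row + 1) * 2;
        }
        System.out.println(interpreted == generated); // prints true
    }
}
```

Both paths compute the same result; the difference is that the inlined form gives the compiler and CPU (branch prediction, pipelining, potential SIMD) a straight-line loop to work with.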
References:
https://databricks.com/blog/2015/04/28/project-tungsten-bringing-spark-closer-to-bare-metal.html
http://stackoverflow.com/questions/37505638/understanding-spark-physical-plan