In Spark, the most basic principle is that each task processes a partition of an RDD.
1. The advantages of the mapPartitions operation:
With a normal map, if a partition holds 10,000 records, your function is invoked and executed 10,000 times, once per record.
With mapPartitions, however, a task invokes the function only once, and the function receives all of the partition's data (as an iterator) in a single call. Because it executes only once, the per-call overhead is lower and the performance is relatively high.
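As a minimal sketch of the two operations (the dataset, partition count, and names here are made up for illustration):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object MapVsMapPartitions {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("MapVsMapPartitions").setMaster("local[2]")
    val sc = new SparkContext(conf)
    // 10,000 records split across 4 partitions, mirroring the example above.
    val rdd = sc.parallelize(1 to 10000, numSlices = 4)

    // map: the function body runs once per record -- 10,000 invocations.
    val viaMap = rdd.map(x => x * 2)

    // mapPartitions: the function body runs once per partition -- 4 invocations,
    // each receiving an Iterator over that partition's records.
    val viaMapPartitions = rdd.mapPartitions { iter =>
      // Per-partition setup (e.g., opening a DB connection) would go here, once.
      iter.map(x => x * 2)
    }

    println(s"map: ${viaMap.sum()}, mapPartitions: ${viaMapPartitions.sum()}")
    sc.stop()
  }
}
```

This is also why mapPartitions pays off most when there is expensive per-call setup that can be hoisted to once per partition.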
2. The shortcomings of mapPartitions:
With a normal map, each invocation of the function processes a single record. If memory runs low, say after 1,000 records have already been processed, the records that are done can be reclaimed by garbage collection, or freed in other ways, to make room.
So normal map operations usually do not cause an OOM (out-of-memory) exception.
With the mapPartitions operation, however, a large volume of data, say a partition holding a million records, is handed to the function in one go. Memory may suddenly run out, and there is no way to free up space, so the job can hit an OOM, a memory overflow.
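A common way this OOM shows up is when the function materializes the whole iterator instead of staying lazy. A hedged sketch of the two patterns (expensiveTransform and the object name are hypothetical placeholders):

```scala
import org.apache.spark.rdd.RDD

object PartitionMemoryPatterns {
  // Hypothetical per-record work; stands in for the real transformation.
  def expensiveTransform(x: Int): Int = x * 2

  // Risky: toList materializes the entire partition in memory at once.
  def risky(rdd: RDD[Int]): RDD[Int] = rdd.mapPartitions { iter =>
    val buffered = iter.toList                // whole partition buffered here;
    buffered.map(expensiveTransform).iterator // a very large partition can OOM
  }

  // Safer: keep the iterator lazy so records stream through one at a time.
  def safer(rdd: RDD[Int]): RDD[Int] = rdd.mapPartitions { iter =>
    iter.map(expensiveTransform)              // no full-partition buffer is built
  }
}
```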
3. When it is more suitable to use the mapPartitions family of operations:
When the amount of data is not particularly large, you can use the mapPartitions family of operations; the performance is very good and there is a real boost. For example, in one performance-tuning exercise, a job that took 15 minutes came down to 12 minutes, and in another case 10 minutes came down to 9 minutes.
But there have also been problems in practice: a single use of mapPartitions led directly to an OOM, a memory overflow, and the job crashed.
In a project, first estimate the RDD's total data volume, the amount of data in each partition, and the memory resources allocated to each executor. Then judge whether all of a partition's data can fit in memory at once. If it looks feasible, it is worth a try; if the job runs through, fine, and there will definitely be a performance boost.
But if, after trying it for a while, the job hits an OOM, then give up and fall back to the normal map.
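A rough sizing check along these lines might look like the following sketch (the function name and the default threshold are assumptions for illustration, not project values):

```scala
import org.apache.spark.rdd.RDD

object PartitionSizing {
  // Rough go/no-go check before committing to mapPartitions. The default
  // threshold below is an assumed example value, not a measured constant.
  def canTryMapPartitions(rdd: RDD[_], maxRecordsPerPartition: Long = 1000000L): Boolean = {
    val total = rdd.count()                  // one pass to measure the volume
    val parts = math.max(rdd.getNumPartitions, 1)
    val perPartition = total / parts
    println(s"~$perPartition records/partition across $parts partitions")
    perPartition <= maxRecordsPerPartition   // compare against executor headroom
  }
}
```

Executor memory itself is fixed at submit time, e.g. `spark-submit --executor-memory 4g ...`, so the comparison has to be made against whatever was allocated there.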