Optimization ideas in the Spark SQL project

Selection of storage format: do you use row-oriented or column-oriented storage? A column store costs more writes and more time when writing, but it is much faster when queried.

Selection of compression format: weigh compression speed against whether the compressed files can be split. Compression takes less storage space and speeds up data transfer. The default compression format in Spark is snappy.

Optimization of the code:

Choose high-performance operators: use foreachPartition(partitionOfRecords => ...) instead of foreach. Handling one partition at a time lets us gather the partition's records first and then write them to MySQL in a single PreparedStatement (pstmt) batch, so an entire partition is written at once instead of one row per insert (a sketch of this pattern follows at the end of this note).

Reuse existing data: when the project implements several functions at the same time (three in this case), check during the computation whether the data the functions produce overlaps. If it does, extract the shared part once so that all the functions can use it (equivalent to caching the intermediate data).

Optimization of parameters:

Degree of parallelism: spark.sql.shuffle.partitions defaults to 200; it configures the number of shuffle partitions, which corresponds to the number of tasks. If the job runs too slowly, change this value in the conf passed at startup (for example when launching on YARN).

Partition field type inference: spark.sql.sources.partitionColumnTypeInference.enabled is on by default; when on, Spark automatically infers the type of the partition columns. Turn it off to improve performance (a configuration sketch also follows below).
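The per-partition batch write described above might look roughly like the sketch below. The JDBC URL, credentials, table name, and column names (day, cms_id, times) are placeholders for illustration, not taken from the original project, and the MySQL JDBC driver jar is assumed to be on the executor classpath.

```scala
import java.sql.{Connection, DriverManager, PreparedStatement}
import org.apache.spark.sql.{DataFrame, Row}

// Sketch: write each partition to MySQL as one JDBC batch instead of one insert per row.
// URL, credentials, table and column names are placeholders, not from the original project.
def saveToMySQL(result: DataFrame): Unit = {
  result.rdd.foreachPartition { partitionOfRecords: Iterator[Row] =>
    if (partitionOfRecords.hasNext) {
      val connection: Connection = DriverManager.getConnection(
        "jdbc:mysql://localhost:3306/spark_demo", "user", "password")
      connection.setAutoCommit(false)  // accumulate the whole partition, then commit once
      val pstmt: PreparedStatement = connection.prepareStatement(
        "insert into day_video_access_stat(day, cms_id, times) values (?, ?, ?)")
      partitionOfRecords.foreach { record =>
        pstmt.setString(1, record.getAs[String]("day"))
        pstmt.setLong(2, record.getAs[Long]("cms_id"))
        pstmt.setLong(3, record.getAs[Long]("times"))
        pstmt.addBatch()               // queue the row; nothing is sent yet
      }
      pstmt.executeBatch()             // one round trip for the entire partition
      connection.commit()
      pstmt.close()
      connection.close()
    }
  }
}
```

Compared with foreach, the connection and the prepared statement are created once per partition rather than once per record, and executeBatch sends the whole partition in one round trip.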
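For the two parameters above, a minimal configuration sketch is shown below; the application name and the value 400 are illustrative choices, not recommendations. The same keys can also be passed with --conf at spark-submit time, for example when starting the job on YARN.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch: both keys are standard Spark SQL configuration properties.
val spark = SparkSession.builder()
  .appName("SparkSQLProject")  // illustrative application name
  .config("spark.sql.shuffle.partitions", "400")  // default is 200; shuffle partitions = number of tasks
  .config("spark.sql.sources.partitionColumnTypeInference.enabled", "false")  // skip partition-column type inference
  .getOrCreate()
```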