Spark SQL Performance Optimization

1. Set the degree of parallelism for the shuffle stage: spark.sql.shuffle.partitions (set via sqlContext.setConf()).
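A minimal sketch of what this looks like, assuming the Spark 1.x SQLContext API referenced throughout this post; the application name, the value 400, and the small students table are purely illustrative.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object ShufflePartitionsExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("ShufflePartitionsExample"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    // Raise the shuffle parallelism (the default is 200) before running queries
    // that shuffle data, such as joins and aggregations.
    sqlContext.setConf("spark.sql.shuffle.partitions", "400")

    // Tiny illustrative table, registered so the GROUP BY below has something to shuffle.
    val students = sc.parallelize(Seq(("Tom", 20), ("Jerry", 21), ("Anna", 20)))
      .toDF("name", "age")
    students.registerTempTable("students")

    // This aggregation now runs its reduce side with 400 tasks.
    sqlContext.sql("SELECT age, COUNT(*) FROM students GROUP BY age").show()

    sc.stop()
  }
}
```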
2. When building the Hive data warehouse, choose column data types sensibly: if a value fits in INT, do not declare the column as BIGINT. Oversized data types only add unnecessary memory overhead.
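A sketch of the idea in DDL form, assuming a Hive-enabled context (HiveContext, which needs the spark-hive dependency) built on the sc from the previous sketch; the students_dw table layout is illustrative.

```scala
import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc)

// 'age' fits comfortably in INT, so declaring it BIGINT would only waste memory
// whenever the column is scanned or cached; 'id' is a large key that genuinely needs BIGINT.
hiveContext.sql(
  """CREATE TABLE IF NOT EXISTS students_dw (
    |  id   BIGINT,
    |  name STRING,
    |  age  INT
    |) STORED AS PARQUET""".stripMargin)
```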
3. When writing SQL, name the columns you actually need, for example SELECT name FROM students; do not write SELECT *.
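The same idea in both SQL and DataFrame form, assuming the sqlContext and the students temp table from the sketch under tip 1:

```scala
// Name only the columns you need; with columnar formats (Parquet, the in-memory
// column cache) this also means only those columns are actually read.
val names = sqlContext.sql("SELECT name FROM students")

// Equivalent DataFrame form.
val namesDf = sqlContext.table("students").select("name")

names.show()
```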
4. Process query results in parallel: if a Spark SQL query returns a large result set (for example, more than 1,000 rows), do not collect() it all to the driver and process it there. Instead, process the results in parallel on the executors with the foreach() operator.
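A minimal sketch, again assuming the sqlContext and students table from tip 1; the filter and the per-row handling are placeholders for whatever post-processing the job actually needs.

```scala
val result = sqlContext.sql("SELECT name, age FROM students WHERE age >= 20")

// Instead of result.collect(), which pulls every row back to the driver,
// handle the rows in parallel on the executors.
result.foreach { row =>
  val name = row.getString(0)
  val age  = row.getInt(1)
  // ... process (name, age) here, e.g. write it to an external store ...
  println(s"$name -> $age")
}

// If each partition needs a shared resource (such as a database connection),
// foreachPartition() opens it once per partition instead of once per row.
result.foreachPartition { rows =>
  rows.foreach(row => println(row.getString(0)))
}
```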
5. Cache tables: a table that is used more than once in SQL statements can be cached with sqlContext.cacheTable(tableName) or DataFrame.cache(). Spark SQL stores the cached table in an in-memory columnar format, so it can scan only the columns a query needs and automatically tunes compression to minimize memory usage and GC overhead. sqlContext.uncacheTable(tableName) removes a table from the cache. With sqlContext.setConf(), the spark.sql.inMemoryColumnarStorage.batchSize parameter (default 10000) configures the batch size of the columnar storage.
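A sketch using the same sqlContext and students table; the batch size of 20000 is only an example of overriding the default.

```scala
// Optionally change how many rows go into each column batch (default 10000).
sqlContext.setConf("spark.sql.inMemoryColumnarStorage.batchSize", "20000")

// Cache the table once, in the in-memory columnar format.
sqlContext.cacheTable("students")

// Both queries now read from the column cache and scan only the columns they use.
sqlContext.sql("SELECT age, COUNT(*) FROM students GROUP BY age").show()
sqlContext.sql("SELECT name FROM students WHERE age > 20").show()

// Release the memory once the table is no longer needed.
sqlContext.uncacheTable("students")
```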
6. Broadcast join tables: spark.sql.autoBroadcastJoinThreshold, default 10485760 (10 MB). When memory is sufficient you can increase it; the parameter sets the maximum size, in bytes, of a table that will be broadcast to all executors when performing a join, which can improve performance.
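A sketch of raising the threshold, reusing the sqlContext and students table from tip 1; the 100 MB value and the small classes lookup table are illustrative. Note that Spark SQL decides whether to broadcast based on its size estimate for the table (for Hive tables this comes from table statistics).

```scala
import sqlContext.implicits._

// Allow tables estimated at up to ~100 MB (instead of the default 10 MB) to be broadcast.
sqlContext.setConf("spark.sql.autoBroadcastJoinThreshold", (100 * 1024 * 1024).toString)

// A small dimension table: when Spark SQL estimates it below the threshold,
// it is shipped to every executor and the join avoids shuffling the larger side.
val classes = sc.parallelize(Seq((20, "sophomore"), (21, "junior"))).toDF("age", "grade")
classes.registerTempTable("classes")

sqlContext.sql(
  """SELECT s.name, c.grade
    |FROM students s
    |JOIN classes c ON s.age = c.age""".stripMargin).show()
```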
7. Project Tungsten: spark.sql.tungsten.enabled, default true; when enabled, Spark SQL manages memory explicitly through the Tungsten engine.
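Since Tungsten is already on by default in recent 1.x releases, this is only a sketch of toggling it explicitly, for example to rule it out while debugging:

```scala
// Explicitly enable (or set to "false" to disable) Tungsten's memory management.
sqlContext.setConf("spark.sql.tungsten.enabled", "true")
```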