There is a problem with Spark SQL 1.2.x:
When a query reads multiple Parquet files whose fields have exactly the same names and types but appear in a different order (for example, one file is name string, id int and the other is id int, name string), the query fails with an exception thrown while merging the files' metadata.
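As a minimal sketch of how this shows up, assuming a directory /data/events that holds Parquet files written by different jobs with the two field orders above (the path and table name are made up for illustration):

    // Spark 1.2 API; the path and table name are hypothetical.
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    val events = sqlContext.parquetFile("/data/events")   // directory containing both files
    events.registerTempTable("events")
    // On 1.2.x this fails while merging the Parquet footers' metadata.
    sqlContext.sql("SELECT id, name FROM events").collect()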
In 1.3 this problem has already been fixed. A workaround for 1.2.x is the following:
In the Spark source file sql/core/src/main/scala/org/apache/spark/sql/parquet/ParquetTableOperations.scala, find the method override def getSplits(configuration: Configuration, footers: JList[Footer]): JList[ParquetInputSplit] and locate the following code:
    if (globalMetaData == null) {
      val splits = mutable.ArrayBuffer.empty[ParquetInputSplit]
      return splits
    }
Change the val globalMetaData declaration above this check to var globalMetaData.
Then, after this check, add the following lines:
    val startTime = System.currentTimeMillis()
    val metadata = configuration.get(RowWriteSupport.SPARK_ROW_SCHEMA)
    val mergedMetadata = globalMetaData
      .getKeyValueMetaData
      .updated(RowReadSupport.SPARK_METADATA_KEY, setAsJavaSet(Set(metadata)))
    globalMetaData = new GlobalMetaData(
      globalMetaData.getSchema,
      mergedMetadata,
      globalMetaData.getCreatedBy)
    val endTime = System.currentTimeMillis()
    logInfo("\n*** updated GlobalMetaData in " + (endTime - startTime) + " ms. ***\n")
Of these lines, the assignments of metadata, mergedMetadata, and globalMetaData are the essential part; those three lines are taken from Spark 1.3. The other three lines just log how long this code takes to run.
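To see what those three lines accomplish in isolation, here is a standalone sketch (the key name and schema strings are placeholders, not the real values): the global metadata collects, under the Spark schema key, every distinct schema string found in the footers, so files that differ only in field order produce conflicting entries; the fix replaces them with the single schema the query expects.

    // Illustration only; the key and schema strings below are placeholders.
    import scala.collection.JavaConversions.setAsJavaSet

    val sparkMetadataKey = "spark.row.metadata"        // stands in for RowReadSupport.SPARK_METADATA_KEY
    val querySchema = """{"fields":["name","id"]}"""   // stands in for configuration.get(RowWriteSupport.SPARK_ROW_SCHEMA)

    // Shape of getKeyValueMetaData when two files disagree on field order:
    // one key mapped to two different schema strings.
    val keyValueMetaData: Map[String, java.util.Set[String]] = Map(
      sparkMetadataKey -> setAsJavaSet(Set("""{"fields":["name","id"]}""",
                                            """{"fields":["id","name"]}""")))

    // The fix collapses the conflicting values into the single expected schema.
    val merged = keyValueMetaData.updated(sparkMetadataKey, setAsJavaSet(Set(querySchema)))

In the actual patch this merged map is wrapped into a new GlobalMetaData instance, which is why the val globalMetaData declaration has to become a var.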
Then recompile the Spark source code:
    mvn -Phadoop-2.4 -Dhadoop.version=2.4.0 -Phive -Phive-thriftserver -DskipTests clean package
For details, see http://spark.apache.org/docs/1.2.1/building-spark.html
I tested the compiled Spark on a single server: the problem was solved, queries ran smoothly, and there was no noticeable impact on performance. Reading 600 Parquet files, the added lines took only about 1 ms.