Spark SQL JSON Data Processing

Background

This article can be regarded as a companion piece to "A Little Exploration of Hive JSON Data Processing". To speed up ad hoc query analysis on our platform, we installed a Spark server on our Hadoop cluster and had it share metadata with our Hive data warehouse. Our users can therefore run Hive SQL either through HiveServer2, which analyzes the data with MapReduce jobs, or through the Spark server, which analyzes the same data with a Spark application via Spark SQL (Hive SQL).

Apart from the difference between the MapReduce and Spark compute models, the advantage of the Spark server is that its container processes are resident: compute resources are reserved in advance, so a SQL statement can begin executing as soon as it is received, giving faster response times.

Since the Spark server and HiveServer2 share metadata, we would like to mask the difference between the two at the SQL level. Spark officially claims compatibility with most Hive SQL statements, but exceptions often show up in actual use. This article discusses one of them: the exception thrown by Spark SQL when it calls the Hive built-in function json_tuple.

We again use the sample data table myjson from "A Little Exploration of Hive JSON Data Processing" to illustrate the problem.

(1) Executing the Hive SQL statement through HiveServer2 returns the expected result;
(2) Executing the same statement through the Spark server fails, with the terminal reporting:

    Error: java.lang.ClassNotFoundException: json_tuple (state=,code=0)

The Spark server log records the corresponding exception stack trace (not reproduced here). There is also a discussion of this problem on the internet: http://mail-archives.us.apache.org/mod_mbox/spark-user/201504.mbox/%[email protected]COM%3E. That discussion suspects that the corresponding jar package cannot be found; the actual problem, however, is that the function name is wrongly parsed as a class name: json_tuple is the function name, and the class it corresponds to is org.apache.hadoop.hive.ql.udf.generic.GenericUDTFJSONTuple.

This exception directly prevents us from parsing JSON data with the Hive function json_tuple through the Spark server. To achieve the final query result on the table myjson from "A Little Exploration of Hive JSON Data Processing", we need to use the Hive UDF get_json_object instead: the scheme that combined json_tuple with func.json_array becomes a scheme that combines get_json_object with func.json_array, as sketched below.
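A minimal sketch of both approaches follows. The column name json, the JSON layout (an object with a string field name and an array field score), and the behavior of the custom function func.json_array (assumed here, as in the earlier article, to turn a JSON array string into a Hive array so that it can be exploded) are illustrative assumptions:

    -- Works through HiveServer2, but fails through the Spark server with
    -- java.lang.ClassNotFoundException: json_tuple
    SELECT t.name, t.score
    FROM myjson
    LATERAL VIEW json_tuple(myjson.json, 'name', 'score') t AS name, score;

    -- Workaround: extract each field with get_json_object, then explode the
    -- JSON array via the custom function func.json_array
    SELECT get_json_object(m.json, '$.name') AS name, s.score
    FROM myjson m
    LATERAL VIEW explode(func.json_array(get_json_object(m.json, '$.score'))) s AS score;

Note that get_json_object must be called once per extracted field, so the JSON text is parsed repeatedly, whereas json_tuple parses it only once for all fields; this is part of why the workaround is clumsier than the original scheme.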
It can be seen that this scheme is more complex, but it does solve the practical problem.
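As a closing note: in Hive itself, the mapping from a function name to its implementing class (the mismatch at the heart of the exception above) can be inspected with the standard DESCRIBE FUNCTION EXTENDED statement, whose output, depending on the Hive version, includes the implementing class:

    -- Shows the description of json_tuple; recent Hive versions also report
    -- org.apache.hadoop.hive.ql.udf.generic.GenericUDTFJSONTuple as its class
    DESCRIBE FUNCTION EXTENDED json_tuple;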