Spark SQL in practice: JDBC access is not supported in Spark 0.9.x; Spark 1.1 adds JDBC support.
Versions used: spark-1.1.0 + scala-2.10.4 + hive-0.12.0
Note: mismatched versions can cause all kinds of problems; the components may fail to work together, or queries may return wrong results!
Official Spark 1.1 release date: 2014/9/11
1. Spark 1.1 adds the start-thriftserver.sh service, which allows direct connections over JDBC/ODBC
sbin/start-thriftserver.sh
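If the listening host or port needs to be set explicitly, the server can also be started with options (a minimal sketch; the host hadoop0 and port 10000 are assumptions based on this cluster, adjust to your environment):
sbin/start-thriftserver.sh \
  --master spark://hadoop0:7077 \
  --hiveconf hive.server2.thrift.bind.host=hadoop0 \
  --hiveconf hive.server2.thrift.port=10000
Clients can then connect with the JDBC URL jdbc:hive2://hadoop0:10000.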
2. Spark SQL CLI command-line terminal
bin/spark-sql --master spark://hadoop0:7077 --executor-memory 1g
spark.sql.shuffle.partitions (default: 200) controls the number of partitions used when shuffling data for joins and aggregations
spark-sql> SET spark.sql.shuffle.partitions=10;
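The setting applies to any query that shuffles data, for example an aggregation (a sketch only; src is a hypothetical Hive table, substitute one that exists in your metastore):
spark-sql> SELECT key, count(*) FROM src GROUP BY key;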
Deployment:
A. spark-site.xml configuration
B. hive-site.xml file: for the details, refer to the Hive installation guide
Note: ZooKeeper needs to be configured in hive-site.xml to keep sessions synchronized, and the HiveServer2 options need to be configured as well! The configuration that uses a remote MySQL database as the metastore is needed without question!
C. Hive's hive-site.xml file needs to be copied into Spark's conf directory, and HIVE_CONF_DIR needs to be configured.
I do not know whether configuring only one of these would be enough; testing that was too much trouble, so I configured all of them. In addition, the lib directory of the two external Hive installations needs to contain the MySQL JDBC driver jar, as sketched below.
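A minimal sketch of these copy steps, assuming the usual HIVE_HOME/SPARK_HOME layout; all paths and the MySQL connector version are assumptions, adjust to your installation:
cp $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf/        # copy Hive's config into Spark's conf directory
cp mysql-connector-java-5.1.31-bin.jar $HIVE_HOME/lib/    # the MySQL JDBC driver must be in Hive's lib
export HIVE_CONF_DIR=$SPARK_HOME/conf                     # e.g. set in conf/spark-env.sh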
1. Errors when starting the Hive metastore (deploying Hive 0.12):
Error 1:
In hive-site.xml, the malformed closing tag in <value>auth</auth> has to be corrected so that the element reads <value>auth</value>
Error 2:
MetaException (message: Version information not found in metastore.)
Fix: add the following property to hive-site.xml:
<property>
<name>hive.metastore.schema.verification</name>
<value>false</value>
</property>
2. Start the Spark 1.1 cluster
sbin/start-all.sh
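A quick sanity check, assuming the JDK's jps tool is on the PATH:
jps
(the master node should show a Master process, and each worker node a Worker process)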
3. Start the Spark SQL CLI
bin/spark-sql --master spark://hadoop0:7077 --executor-memory 1g
4. Start the start-thriftserver.sh service
5. Use a JDBC remote client to log in and query data:
After the remote connection succeeds, check the output on the host where the start-thriftserver.sh service was started
6. Query data from the JDBC client
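As a concrete JDBC client, the beeline tool shipped with Spark 1.1 can be used (a sketch; hadoop0 and port 10000 are the assumed Thrift server host and default port, and show tables is only a smoke test):
bin/beeline
beeline> !connect jdbc:hive2://hadoop0:10000
(beeline asks for a username and password; in non-secure mode enter your username and a blank password)
0: jdbc:hive2://hadoop0:10000> show tables;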
Summary:
1. Compared with using Shark, not much changes: programs can run on Spark SQL completely unmodified; only the underlying SQL parsing is now done by Spark's own Catalyst engine! In the WebUI, though, you can see a big difference when executing statements!
2. Parsing SQL does not feel any faster than HiveQL, and the time difference compared with Shark (which is based on HiveQL) is not large; that may be because my machine is rather poor, or perhaps there are better performance-tuning schemes! I am still looking into it.
3. For writing a JDBC program that connects to Spark SQL, refer to:
https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients
Deployment of Spark 1.1, the Spark SQL CLI, and Spark SQL JDBC applications