Reading Hive data with SparkSQL
Our Spark comes from Cloudera's CDH and was installed and deployed automatically. While learning SparkSQL recently, I came across SparkSQL on Hive. The following describes how to read Hive data through SparkSQL.
(Note: if you did not use CDH's automatic installation and deployment, you may need to compile the Spark source code yourself so that it supports Hive.
The compilation is also very simple. You only need to execute the following command in SPARK_SRC_HOME (the home directory of the source code):
./make-distribution.sh --tgz -Phadoop-2.2 -Pyarn -DskipTests -Dhadoop.version=2.6.0-cdh5.4.4 -Phive
After compilation, several jar packages will be added under the lib directory.)
The following describes my usage:
1. To enable Spark to connect to the existing Hive data warehouse, we need to copy Hive's hive-site.xml file into Spark's conf directory; through this configuration file, Spark can find the Hive metadata and the data storage location.
Because my Spark was installed and deployed automatically, I first needed to find out where CDH places hive-site.xml. After some exploration, the default path of the file turned out to be /etc/hive/conf.
Similarly, Spark's conf directory is /etc/spark/conf.
Copy that hive-site.xml into the spark/conf directory, as described above; a concrete command is shown below.
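On a CDH host the copy is a single command (paths taken from above; whether you need sudo depends on your permissions):
sudo cp /etc/hive/conf/hive-site.xml /etc/spark/conf/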
If the Hive metadata is stored in MySQL, we also need to prepare the MySQL JDBC driver, such as mysql-connector-java-5.1.22-bin.jar.
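One common way to make the driver visible to Spark (a sketch; the /path/to location is an assumption, adjust it to wherever you keep the driver jar) is to pass it on the command line when launching the shell or submitting a job:
spark-shell --driver-class-path /path/to/mysql-connector-java-5.1.22-bin.jar --jars /path/to/mysql-connector-java-5.1.22-bin.jar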
2. Write test code
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("Spark-Hive").setMaster("local")
val sc = new SparkContext(conf)
// Create a HiveContext
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
sqlContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'") // pay attention to the data delimiter
sqlContext.sql("LOAD DATA INPATH '/user/liujiyu/spark/kv1.txt' INTO TABLE src")
sqlContext.sql("SELECT * FROM src").collect().foreach(println)
sc.stop()
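If you just want to verify the Hive connection interactively, you can run roughly the same thing in spark-shell, where the SparkContext sc already exists (a minimal sketch, assuming hive-site.xml is already in spark/conf):
val hiveCtx = new org.apache.spark.sql.hive.HiveContext(sc) // sc is provided by spark-shell
hiveCtx.sql("SHOW TABLES").collect().foreach(println) // should list the tables in the Hive warehouse, e.g. src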
3. Problems I ran into:
(1) If the hive-site.xml is not copied to the spark/conf directory, instantiating the HiveContext fails with an error.
Analysis: the error message shows that Spark cannot find the location of the Hive metadata, so it cannot instantiate the corresponding metastore client.
The solution is to copy the hive-site.xml to the spark/conf directory.
(2) If sc.stop() is not added to the test code, the following error occurs:
ERROR scheduler.LiveListenerBus: Listener EventLoggingListener threw an exception
java.lang.reflect.InvocationTargetException
This problem is solved by adding sc.stop() as the last line of the code; a more defensive variant is sketched below.
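A slightly more robust pattern than calling sc.stop() as the last statement (my own suggestion, not part of the original code) is to guard it with try/finally, so the context is stopped even when a query throws:
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("Spark-Hive").setMaster("local")
val sc = new SparkContext(conf)
try {
  val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
  sqlContext.sql("SELECT * FROM src").collect().foreach(println)
} finally {
  sc.stop() // always runs, so the event-logging listener shuts down cleanly
}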