Hive User Interface (II): Connecting to a Hive Instance Using the Hive JDBC Driver

Source: Internet
Author: User

Questions Guide:

1. What three user interfaces does Hive provide?

2. When using HiveServer, which service do you need to start first?

3. What is the HiveServer start command?

4. Through which service does HiveServer provide remote JDBC access?

5. How do you change the default port of HiveServer?

6. Which packages are required for a Hive JDBC driver connection?

7. How do HiveServer2 and HiveServer differ in usage?

Hive provides three user interfaces: the CLI, HWI, and a client. The client uses the JDBC driver to operate on Hive remotely via Thrift. HWI provides a web interface for remote access to Hive; for details, you can refer to my other blog post: Hive User Interface (I): Operating and Using the Hive Web Interface (HWI). The most common way to use Hive, however, is the CLI. The following describes operating Hive through the JDBC driver; my Hive version is hive-0.13.1.

Hive JDBC driver connections come in two kinds: the early one is HiveServer, the newer one is HiveServer2. The former has many problems, such as security and concurrency issues; the latter solves them well. Let me first introduce the usage of HiveServer.

First, start the metadata service (metastore)

To connect to Hive in any way, you must first start the Hive metadata service; otherwise, HQL operations cannot be performed.

[hadoopUser@secondmgt ~]$ hive --service metastore
Starting Hive Metastore Server
15/01/11 20:11:56 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
15/01/11 20:11:56 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
15/01/11 20:11:56 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
15/01/11 20:11:56 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
15/01/11 20:11:56 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
Second, start the HiveServer service

HiveServer uses the Thrift service to give clients a remote connection port; it must be started before JDBC can connect to Hive.

[hadoopUser@secondmgt ~]$ hive --service hiveserver
Starting Hive Thrift Server
15/01/12 10:22:54 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
15/01/12 10:22:54 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
15/01/12 10:22:54 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
15/01/12 10:22:54 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
15/01/12 10:22:54 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
HiveServer's default port is 10000. You can use hive --service hiveserver -p 10002 to change the default port; this is also the JDBC connection port.
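Since the -p flag changes the port that HiveServer listens on, the JDBC URL must use the same number. A minimal sketch (the host and database name are placeholders taken from the examples in this post):

```java
public class HivePortExample {

    // Build a HiveServer (hive1) JDBC URL; the port must match the -p value
    // used when the service was started.
    static String hiveUrl(String host, int port, String db) {
        return "jdbc:hive://" + host + ":" + port + "/" + db;
    }

    public static void main(String[] args) {
        // Started with: hive --service hiveserver -p 10002
        System.out.println(hiveUrl("192.168.2.133", 10002, "hive"));
        // prints jdbc:hive://192.168.2.133:10002/hive
    }
}
```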

Note: HiveServer cannot be used at the same time as the HWI service.

Third, create a Hive project in the IDE

We use Eclipse as the development IDE. Create a Hive project in Eclipse and import the packages related to Hive JDBC remote connections; the required packages are as follows:

        hive-jdbc-0.13.1.jar
        commons-logging-1.1.3.jar
        hive-exec-0.13.1.jar
        hive-metastore-0.13.1.jar
        hive-service-0.13.1.jar
        libfb303-0.9.0.jar
        slf4j-api-1.6.1.jar
        hadoop-common-2.2.0.jar
        log4j-1.2.16.jar
        slf4j-nop-1.6.1.jar
        httpclient-4.2.5.jar
        httpcore-4.2.5.jar
Fourth, writing the connection and query code

package com.gxnzx.hive;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class HiveServer2 {

    private static Connection conn = null;

    public static void main(String[] args) {
        try {
            // Load the HiveServer (hive1) JDBC driver
            Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");

            conn = DriverManager.getConnection("jdbc:hive://192.168.2.133:10000/hive", "hadoopUser", "");

            Statement st = conn.createStatement();

            String sql1 = "select name, age from log";

            ResultSet rs = st.executeQuery(sql1);
            while (rs.next()) {
                System.out.println(rs.getString(1) + " " + rs.getString(2));
            }
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}
Here, org.apache.hadoop.hive.jdbc.HiveDriver is the HiveServer JDBC driver class name, and the connection is created with DriverManager.getConnection("jdbc:hive://<host>:<port>/<database>", user, password). The query output is:

    Tom     ...
    Jack    ...
    haoning ...
    Hadoop  ...
    Rose    23
Fifth, the difference between HiveServer2 and HiveServer

HiveServer2 improves on HiveServer in terms of security and concurrency. Their JDBC usage also differs, mainly in the following respects:

1. The service start command is different; first start the HiveServer2 service:

[hadoopUser@secondmgt ~]$ hive --service hiveserver2
Starting HiveServer2
15/01/12 10:13:42 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
15/01/12 10:13:42 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
15/01/12 10:13:42 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
15/01/12 10:13:42 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
15/01/12 10:13:42 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive

2. The driver class name is different:

HiveServer -> org.apache.hadoop.hive.jdbc.HiveDriver

HiveServer2 -> org.apache.hive.jdbc.HiveDriver
3. The connection is created differently:

HiveServer -> DriverManager.getConnection("jdbc:hive://<host>:<port>/<database>", user, password)

HiveServer2 -> DriverManager.getConnection("jdbc:hive2://<host>:<port>/<database>", user, password)
4. A complete example:

package com.gxnzx.hive;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class HiveJdbcTest {

    private static Connection conn = null;

    public static void main(String[] args) {
        try {
            // Load the HiveServer2 JDBC driver
            Class.forName("org.apache.hive.jdbc.HiveDriver");

            conn = DriverManager.getConnection("jdbc:hive2://192.168.2.133:10000/hive", "hadoopUser", "");

            Statement st = conn.createStatement();

            String sql1 = "select name, age from log";

            ResultSet rs = st.executeQuery(sql1);
            while (rs.next()) {
                System.out.println(rs.getString(1) + " " + rs.getString(2));
            }
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}
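The driver-class and URL-scheme pairing above is easy to mix up, so it can help to keep both modes in a small lookup. This is a sketch of my own, not part of the Hive API; the host, port, and database are the placeholders used throughout this post:

```java
import java.util.Map;

public class HiveDriverInfo {

    // Driver class and JDBC URL scheme for each server mode
    static final Map<String, String[]> MODES = Map.of(
            "hiveserver",  new String[]{"org.apache.hadoop.hive.jdbc.HiveDriver", "jdbc:hive"},
            "hiveserver2", new String[]{"org.apache.hive.jdbc.HiveDriver",        "jdbc:hive2"});

    // Build the connection URL for the given mode
    static String url(String mode, String host, int port, String db) {
        return MODES.get(mode)[1] + "://" + host + ":" + port + "/" + db;
    }

    public static void main(String[] args) {
        System.out.println(url("hiveserver2", "192.168.2.133", 10000, "hive"));
        // prints jdbc:hive2://192.168.2.133:10000/hive
    }
}
```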
Attached: related exceptions and solutions

Exception or Error One

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
Failed to load class org.slf4j.impl.StaticLoggerBinder
Official solution

This error is reported when the org.slf4j.impl.StaticLoggerBinder class could not be loaded into memory. This happens when no appropriate SLF4J binding can be found on the class path. Placing one (and only one) of slf4j-nop.jar, slf4j-simple.jar, slf4j-log4j12.jar, slf4j-jdk14.jar, or logback-classic.jar on the class path should solve the problem.

As of SLF4J version 1.6, in the absence of a binding, SLF4J defaults to a no-operation (NOP) logger implementation.
Import any one of slf4j-nop.jar, slf4j-simple.jar, slf4j-log4j12.jar, slf4j-jdk14.jar, or logback-classic.jar into the project's lib directory; the SLF4J packages can be downloaded from the SLF4J bindings page.
Exception or Error Two

Job submission failed with exception 'org.apache.hadoop.security.AccessControlException (Permission denied: user=anonymous, access=EXECUTE, inode="/tmp":hadoopUser:supergroup:drwx------
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:234)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:187)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:150)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5185)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5167)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:5123)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1338)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1317)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:528)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:348)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59576)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
When executing the program, the above error is reported because my connection initially looked like the line below, with no user supplied. The user here should not be a Hive user but a Hadoop user:

conn = DriverManager.getConnection("jdbc:hive2://192.168.2.133:10000/hive", "", "");

Workaround:

conn = DriverManager.getConnection("jdbc:hive2://192.168.2.133:10000/hive", "hadoopUser", "");
hadoopUser is my Hadoop user; after adding it, the connection works normally.
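If you cannot change the JDBC user, an alternative (coarser) workaround is to open up the HDFS /tmp staging directory so anonymous jobs can traverse it. This is a sketch, assuming the hadoop client is on the PATH and you have superuser rights on HDFS; passing the Hadoop user in getConnection, as above, is the cleaner fix:

```shell
# Open HDFS /tmp so jobs submitted by other users can stage files there.
# Looser than the per-user fix above; use with care on shared clusters.
hadoop fs -chmod -R 777 /tmp
```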

For more information, please refer to the official documentation: HiveServer2 Clients.
