Use of HiveServer2 with Beeline & Hive JDBC Programming

Overview of HiveServer2 and Beeline

URL: https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients

Hive itself is just a client, so in production it does not need to be deployed across the whole cluster.

There are several major classes of Hive clients:
- Hive CLI
- Web UI: operating on Hive tables via Hue/Zeppelin
- JDBC through HiveServer2

The concept of HiveServer2:
Start a server process on the Hive machine; clients can then reach it via IP + port.
After that, many clients can connect to this server and work against it.
Connections can be made via JDBC, ODBC, or Beeline.

Start HiveServer2:

$> cd $HIVE_HOME/bin
$> ./hiveserver2
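
HiveServer2 listens on a Thrift port (10000 by default). Before pointing a client at it, a quick reachability check confirms the server is up; the following is a minimal sketch, assuming the hadoop003 host name and the default port used in the Beeline examples below:

import java.net.InetSocketAddress;
import java.net.Socket;

public class HiveServer2PortCheck {
    public static void main(String[] args) throws Exception {
        String host = "hadoop003";  // assumed host name, taken from the Beeline examples below
        int port = 10000;           // HiveServer2 default Thrift port
        try (Socket socket = new Socket()) {
            // Fails with an exception if HiveServer2 is not listening yet
            socket.connect(new InetSocketAddress(host, port), 3000);
            System.out.println("HiveServer2 is reachable at " + host + ":" + port);
        }
    }
}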
Use of Beeline

Start Beeline:

## -n specifies the login user name (the user name used to log in to the current machine)
## -u specifies the connection string
## Every time a command runs successfully in Beeline, the window in which hiveserver2 was started prints an OK
## If the command is wrong, the hiveserver2 window throws an exception
$> ./beeline -u jdbc:hive2://hadoop003:10000/default -n hadoop

Operating effect: (screenshot of the Beeline session)

Setting the Port Yourself with HiveServer2 and Beeline

To set the port for HiveServer2:

$> ./hiveserver2 --hiveconf hive.server2.thrift.port=4444

To start the Beeline client:

$> ./beeline -u jdbc:hive2://hadoop003:4444/default -n hadoop
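
A JDBC client has to use the same custom port in its connection URL. The following is a minimal sketch, assuming the hadoop003 host name and the 4444 port configured above:

import java.sql.Connection;
import java.sql.DriverManager;

public class CustomPortConnect {
    public static void main(String[] args) throws Exception {
        // The JDBC URL mirrors the Beeline connection string: jdbc:hive2://<host>:<port>/<database>
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection con = DriverManager.getConnection(
                "jdbc:hive2://hadoop003:4444/default", "hadoop", "")) {
            System.out.println("Connected: " + !con.isClosed());
        }
    }
}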
Hive JDBC Programming

Note:
JDBC is a client access method, so a server, namely HiveServer2, needs to be started first.

To build a project using Maven, the pom.xml file looks like this:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.zhaotao.bigdata</groupId>
  <artifactId>hive-train</artifactId>
  <version>1.0</version>
  <packaging>jar</packaging>
  <name>hive-train</name>
  <url>http://maven.apache.org</url>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>
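
The pom.xml listing above is cut off after the properties block. To compile and run the JDBC program below, the Hive JDBC driver (and matching Hadoop client libraries) must also be on the classpath; the following is a sketch of what the missing dependencies section might look like, where the artifact versions are assumptions and should be matched to the actual cluster:

  <dependencies>
    <!-- Hive JDBC driver; version is an assumption and should match the Hive installation -->
    <dependency>
      <groupId>org.apache.hive</groupId>
      <artifactId>hive-jdbc</artifactId>
      <version>1.1.0</version>
    </dependency>
    <!-- Hadoop client libraries; version is an assumption and should match the cluster -->
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>2.6.0</version>
    </dependency>
  </dependencies>
</project>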

Program:

package com.zhaotao.bigdata.hive;

import java.sql.*;

/**
 * Hive JDBC Java API access (CRUD operations)
 * Created by Tao on 2017/10/7.
 */
public class HiveJdbc {
    private static String driverName = "org.apache.hive.jdbc.HiveDriver";

    public static void main(String[] args) throws SQLException {
        try {
            Class.forName(driverName);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            System.exit(1);
        }
        // Replace "root" here with the name of the user the queries should run as
        Connection con = DriverManager.getConnection("jdbc:hive2://192.168.26.133:4444/default", "root", "");
        Statement stmt = con.createStatement();

        // Used to test the creation of a new table
        String tableName = "testhivedrivertable";
        // For querying an existing table, use emp instead:
        // String tableName = "emp";

        // If the following operations are performed from Windows, a permissions problem can occur,
        // because the permissions of the files in HDFS are not sufficient for the Windows user.
        // Workaround: modify the permissions.

        // Drop the table
        stmt.execute("drop table if exists " + tableName);
        // Create the table
        stmt.execute("create table " + tableName + " (key int, value string)");

        // Show the table name
        String sql = "show tables '" + tableName + "'";
        System.out.println("Running: " + sql);
        ResultSet res = stmt.executeQuery(sql);
        if (res.next()) {
            System.out.println(res.getString(1));
        }

        // Describe the table
        sql = "describe " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(res.getString(1) + "\t" + res.getString(2));
        }

        // Load data into the Hive table
        String filepath = "/opt/data/emp.txt";
        sql = "load data local inpath '" + filepath + "' into table " + tableName;
        System.out.println("Running: " + sql);
        stmt.execute(sql);

        // Query the Hive table
        sql = "select * from " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(String.valueOf(res.getInt(1)) + "\t" + res.getString(2));
        }

        // Count the rows. Because this Hive QL runs MapReduce, watch out for a permissions problem:
        // at run time, temporary files are stored under the /tmp directory in HDFS.
        // Typical error: Permission denied: user=root, access=EXECUTE, inode="/tmp/hadoop-yarn":hadoop:supergroup:drwx------
        // Workaround: modify the execute permission of /tmp, or run the program with a consistent user name.
        sql = "select count(1) from " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(res.getString(1));
        }
    }
}
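
The program above never closes its ResultSet, Statement, or Connection. The following is a minimal sketch of the same connection handling using try-with-resources (standard java.sql behavior; the address and user are the same assumptions used above), so the resources are released even if a query fails:

import java.sql.*;

public class HiveJdbcTryWithResources {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        // Connection, Statement, and ResultSet are all AutoCloseable,
        // so they are closed automatically in reverse order when the block exits.
        try (Connection con = DriverManager.getConnection(
                     "jdbc:hive2://192.168.26.133:4444/default", "root", "");
             Statement stmt = con.createStatement();
             ResultSet res = stmt.executeQuery("show tables")) {
            while (res.next()) {
                System.out.println(res.getString(1));
            }
        }
    }
}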
