Use JDBC to access hive programs in the Eclipse environment (hive-0.12.0 + hadoop-2.4.0 cluster)


First, in Eclipse choose New > Other > Map/Reduce Project to create the project.

The project automatically includes the jar packages of the associated Hadoop installation.

In addition, you need to import the Hive jars and the MySQL connector jar separately:

hive/lib/*.jar

mysql-connector-java-5.1.24-bin.jar

Second, start HiveServer

Command: bin/hive --service hiveserver &

Sometimes this command fails repeatedly with the error: Could not create ServerSocket on address 0.0.0.0/0.0.0.0:10000.

Workaround: specify the port number at startup:

bin/hive --service hiveserver -p 10002

If you do not hit this error, you can skip this step, but then change the port number in the code below from 10002 to the default 10000.
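Before running the JDBC client, it can help to confirm that HiveServer is actually listening on the chosen port. A minimal sketch (the class name PortCheck is illustrative, not part of the original code; the host and port match the commands above):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {

    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    public static boolean isListening(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // 192.168.1.200:10002 matches the hiveserver started with "-p 10002" above.
        System.out.println(isListening("192.168.1.200", 10002, 2000)
                ? "HiveServer port is open"
                : "HiveServer port is not reachable");
    }
}
```

If this prints "HiveServer port is not reachable", fix the hiveserver startup before debugging the JDBC code.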


Third, the Java test code in Eclipse

If you created a plain Java project instead, the program fails with: Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/io/Writable

Workaround: this happens because the Hadoop jars are not on the classpath. Recreate the project and choose Other > Map/Reduce Project as the project type, which automatically includes the Hadoop development environment.
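A quick way to confirm that the Hadoop jars made it onto the classpath is a reflective lookup like the following (ClasspathCheck is an illustrative name, not part of the original code):

```java
public class ClasspathCheck {

    // Returns true if the named class can be loaded from the current classpath.
    public static boolean hasClass(String name) {
        try {
            Class.forName(name);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // This is the class the NoClassDefFoundError above complains about.
        System.out.println("org.apache.hadoop.io.Writable on classpath: "
                + hasClass("org.apache.hadoop.io.Writable"));
    }
}
```

The same check works for the Hive driver class (org.apache.hadoop.hive.jdbc.HiveDriver) used in the test code below.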

You need to prepare the user_info.txt file in the /home/hadoop/file/ directory, with the following content (tab-separated):

1001 Jack 30
1002 Tom 25
1003 Kate 20
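The sample file can also be generated programmatically; a small sketch (the class name WriteUserInfo is illustrative, and the path is the one used above, adjust as needed):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

public class WriteUserInfo {

    // Writes tab-separated rows matching the table schema (id, name, age).
    public static void write(Path target) throws IOException {
        List<String> rows = Arrays.asList(
                "1001\tJack\t30",
                "1002\tTom\t25",
                "1003\tKate\t20");
        Files.write(target, rows);
    }

    public static void main(String[] args) throws IOException {
        write(Paths.get("/home/hadoop/file/user_info.txt"));
    }
}
```

Note that the file must exist on the machine where the Hive server runs, because the code below uses "load data local inpath".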

//-------------------- HiveTest.java --------------------
package test;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class HiveQuery {

    private static String driverName = "org.apache.hadoop.hive.jdbc.HiveDriver";

    public static void main(String[] args) throws SQLException {
        try {
            Class.forName(driverName);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            System.exit(1);
        }

        // Port 10002 matches "-p 10002" used when starting hiveserver above;
        // use 10000 if hiveserver was started with the default port.
        Connection con = DriverManager.getConnection(
                "jdbc:hive://192.168.1.200:10002/default", "", "");
        Statement stmt = con.createStatement();
        String tableName = "testhivedrivertable";
        stmt.executeQuery("drop table " + tableName);
        ResultSet res = stmt.executeQuery("create table " + tableName
                + " (id int, name string, age string)"
                + " row format delimited fields terminated by '\t'"
                + " lines terminated by '\n'");

        // show tables
        String sql = "show tables '" + tableName + "'";
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        if (res.next()) {
            System.out.println(res.getString(1));
        }

        // describe table
        sql = "describe " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(res.getString(1) + "\t" + res.getString(2)
                    + "\t" + res.getString(3));
        }

        // load data into table
        // NOTE: the file path must be local to the Hive server,
        // and the file must be tab-separated to match the table definition
        String filepath = "/home/hadoop/file/user_info.txt";
        sql = "load data local inpath '" + filepath + "' into table " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);

        // select * query
        sql = "select * from " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(String.valueOf(res.getInt(1)) + "\t"
                    + res.getString(2) + "\t" + res.getString(3));
        }

        // regular hive query
        sql = "select count(1) from " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(res.getString(1));
        }
    }
}
//-------------------- End --------------------

Fourth, the results

Running: show tables 'testhivedrivertable'

testhivedrivertable

Running: describe testhivedrivertable

id	int

name	string

age	string

Running: load data local inpath '/home/hadoop/file/user_info.txt' into table testhivedrivertable

Running: select * from testhivedrivertable

1001	Jack	30

1002	Tom	25

1003	Kate	20

Running: select count(1) from testhivedrivertable

3

