Cause: The problem above is usually caused by the hive script under the bin/ directory.
Explanation: assume the Hive source is checked out to a local hive-trunk directory and compiled without specifying the "target.dir" attribute; if the HIVE_HOME variable points to the hive-trunk directory, $HIVE_
default startup port, which is also the JDBC connection port.
Note: HiveServer cannot be used together with the HWI service at the same time.
III. Creating a Hive project in the IDE
We use Eclipse as the development IDE: create a Hive project in Eclipse and import the Hive JDBC remote c
The Kylin 2.3 release enables JDBC data sources (you can generate Hive tables directly from SQL, eliminating the hassle of manually loading data into Hive and building Hive tables). Description: the JDBC data source is essentially a Hive
Hive Interface Introduction (Web UI/JDBC)
Experiment Introduction
This experiment covers two Hive interfaces: the Web UI and JDBC.
I. Experimental Environment
1. Environment Login
Automatic login without a password; the system user name is Shiyanlou and the password is Shiyanlou.
2. Environment Introduction
First, in Eclipse choose New > Other > Map/Reduce Project. The project automatically contains the associated Hadoop jar packages. In addition, you will need to import the Hive and MySQL connector jars separately: hive/lib/*.jar and mysql-connector-j
Part I: Building a Hive JDBC development environment. Steps: create a new project Hivetest; import the Hive dependency packages; start the Thrift service from the Hive command line: hive --service hiveserver. Part II: Introduction to the basic operation objects. Connection: the Connection object
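As a minimal sketch of obtaining the Connection object described above (assuming the legacy HiveServer started with hive --service hiveserver on its default port 10000, the org.apache.hadoop.hive.jdbc.HiveDriver driver from the Hive JDBC jars, and a placeholder host name localhost):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class HiveConnectionDemo {
        public static void main(String[] args) throws Exception {
            // Register the legacy HiveServer JDBC driver shipped with Hive's JDBC jars.
            Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");

            // "localhost" is a placeholder; use the host where `hive --service hiveserver`
            // was started. 10000 is HiveServer's default Thrift/JDBC port.
            Connection conn = DriverManager.getConnection(
                    "jdbc:hive://localhost:10000/default", "", "");
            System.out.println("Connected: " + !conn.isClosed());
            conn.close();
        }
    }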
The hive-site.xml key configuration can be overridden by setting $HIVE_SERVER2_THRIFT_PORT. 2) Start the metastore: start the metastore first by typing at the command line: hive --service metastore. 3) Start the service: hive --service hiveserver2 > /dev/null. The above command starts the HiveServer2 service.
Taking elasticsearch-hadoop-2.1.2.jar as an example, here are several ways to add a third-party jar to Hive. 1. Add it in the Hive shell:
[hadoop@hadoopcluster78 bin]$ ./hive
Logging initialized using configuration in file:/home/hadoop/apache/
Source: http://blog.csdn.net/bluishglc/article/details/46005269. We often need to add a third-party jar package to Hive, or a UDF jar package that we wrote ourselves. Two configuration options are involved in Hive
After the jar file containing a Hive UDF/UDAF/UDTF function is developed, you must add the jar to the Hive environment before it can be used. It can be added in the following three ways:
1. Use the add jar path/test.jar; statement to add it.
The disadvantage
Using HiveServer2 and Beeline
Overview
URL: https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients
Hive is just a client; in production there is no need to deploy it as a cluster.
There are several major classes of Hive clients: Hive, WebUI (operating on Hive tab
Hive with JDBC example: when developing a Hive program using JDBC, you must first turn on Hive's remote service interface. Under the Hive installation directory, use the following command to start it: bin/hive --service hiveserver
Start HiveServer2: cd $HIVE_HOME/bin and run hiveserver2 (the default port is 10000). To start it on a specific port instead: hiveserver2 --hiveconf hive.server2.thrift.port=14000. Connect to HiveServer2 using Beeline: cd $HIVE_HOME/bin and run beeline -u jdbc:hive2://hadoop000:10000. Parameter description: hadoop000 is the name of the host where HiveServer2 is running.
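For reference, a rough Java sketch of the same HiveServer2 connection that Beeline makes above (assuming the hadoop000 host and port 10000 from the command, the HiveServer2 driver class org.apache.hive.jdbc.HiveDriver, and a hypothetical user name; this is an illustrative sketch, not code from the original article):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class HiveServer2ConnectDemo {
        public static void main(String[] args) throws Exception {
            // HiveServer2 JDBC driver; note the hive2 URL scheme, unlike the old HiveServer.
            Class.forName("org.apache.hive.jdbc.HiveDriver");

            // Same endpoint that `beeline -u jdbc:hive2://hadoop000:10000` connects to.
            // If hiveserver2 was started with hive.server2.thrift.port=14000, use that port instead.
            Connection conn = DriverManager.getConnection(
                    "jdbc:hive2://hadoop000:10000/default", "hadoop", "");
            System.out.println("HiveServer2 connection open: " + !conn.isClosed());
            conn.close();
        }
    }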
1. To develop against Hive's JDBC interface, you need to import the following jars:
commons-logging-1.0.4.jar
hadoop-common-2.6.0.jar
hive_exec.jar
hive_jdbc.jar
hive_metastore.jar
hive_service.jar
httpclient-4.2.5.jar
httpcore-4.2.5.jar
libfb303.jar
log4j-1.2.16.jar
slf4j-api-1.7.5.jar
slf4j-log4j12-1.7.5.jar
2. An example program is developed as follows:
Package CO
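The original example program is cut off here. The following is a minimal sketch of what such a JDBC query program might look like (assumptions: the legacy HiveServer running on localhost:10000, the jars listed above on the classpath, and a hypothetical table named src; this is not the original article's code):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveJdbcQueryDemo {
        public static void main(String[] args) throws Exception {
            // Legacy HiveServer driver and URL (jdbc:hive://...), matching hive --service hiveserver.
            Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
            Connection conn = DriverManager.getConnection(
                    "jdbc:hive://localhost:10000/default", "", "");
            Statement stmt = conn.createStatement();

            // "src" is a placeholder table name; replace it with an existing Hive table.
            ResultSet rs = stmt.executeQuery("SELECT * FROM src LIMIT 10");
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }

            rs.close();
            stmt.close();
            conn.close();
        }
    }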
hive> load data local inpath '/home/centos/customers.txt' into table t2;  -- load into a Hive table from a local file; LOCAL means the file is uploaded from the local filesystem
Copying tables:
mysql> create table tt as select * from users;  -- copies the table, carrying both data and table structure
mysql> create table tt like users;  -- copies the table, carrying only the table structure, without data
hive> create table tt as select * from users;
hive> create table tt like users;
Hive supports implementing custom SQL functions in Java code, with the following steps:
1. Inherit UDF and implement the evaluate function; its parameters and return value can be overloaded (see the sketch after these steps).
2. jdbc:hive2://ht03:10000/default> add jar /opt/hive-udf/udf-str.jar;
3. create temporary function tostring as 'com.htdc.etl.server.demo.UDFDemo';
4. select demo('1', '2') from car limit 1;
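To illustrate step 1, here is a minimal UDF sketch. The class and its behavior are assumptions (the article's com.htdc.etl.server.demo.UDFDemo is not shown in the source); it extends the old-style org.apache.hadoop.hive.ql.exec.UDF base class and overloads evaluate:

    import org.apache.hadoop.hive.ql.exec.UDF;

    // Hypothetical UDF that concatenates its string arguments,
    // roughly matching the ('1', '2') usage shown in step 4.
    public class UDFDemo extends UDF {

        // evaluate() with two string arguments.
        public String evaluate(String a, String b) {
            if (a == null || b == null) {
                return null;
            }
            return a + b;
        }

        // Overloaded evaluate() with a single argument (step 1 allows overloading).
        public String evaluate(String a) {
            return a;
        }
    }

After packaging the class into a jar, it would be registered and called as in steps 2-4: add jar, create temporary function, then invoke the function from a query.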