Lesson 54: Hive Cluster Installation and Testing


I. Hive cluster installation

1. Install Hadoop, and start HDFS and YARN.


2. Download Hive 1.2.1

http://apache.fayea.com/hive/hive-1.2.1/

apache-hive-1.2.1-bin.tar.gz

Upload the file to the cluster.

3. Installing hive

root@spark-master:~# ls
apache-hive-1.2.1-bin.tar.gz  core  LINKS-ANON.TXTAAA  Public  Templates  Videos  Pictures  Documents  Downloads  Music  Desktop
root@spark-master:~# mkdir /usr/local/hive
root@spark-master:~# tar -zxvf apache-hive-1.2.1-bin.tar.gz -C /usr/local/hive/
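Note that the `-C` flag tells tar to extract into the target directory instead of the current one. A minimal rehearsal of the same pattern, using throwaway paths under /tmp:

```shell
# Demo of tar's -C flag with throwaway files (all /tmp paths here are hypothetical)
mkdir -p /tmp/tar-demo/src /tmp/tar-demo/dest
echo "hello" > /tmp/tar-demo/src/f.txt
# Pack from src, then extract into dest — mirroring tar -zxvf ... -C /usr/local/hive/
tar -czf /tmp/tar-demo/a.tar.gz -C /tmp/tar-demo/src f.txt
tar -zxf /tmp/tar-demo/a.tar.gz -C /tmp/tar-demo/dest
ls /tmp/tar-demo/dest
```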

Rename the Hive directory:

root@spark-master:~# cd /usr/local/hive/
root@spark-master:/usr/local/hive# ls
apache-hive-1.2.1-bin
root@spark-master:/usr/local/hive# mv apache-hive-1.2.1-bin/ apache-hive-1.2.1

4. Install MySQL

root@spark-master:/usr/local/hive# dpkg -l | grep mysql
root@spark-master:/usr/local/hive# apt-get install mysql-server

In the dialog box that pops up, enter a password for the MySQL root user.



After the installation succeeds, initialize the database

root@spark-master:/usr/bin# /usr/bin/mysql_secure_installation


Log in to MySQL

root@spark-master:/usr/bin# mysql -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 48
Server version: 5.5.47-0ubuntu0.14.04.1 (Ubuntu)

Copyright (c) Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>


Grant remote-access privileges to the root user so that clients on other machines (for example, a Windows client) can connect to the MySQL database:

mysql> select user,host from mysql.user;
+------------------+-----------+
| user             | host      |
+------------------+-----------+
| root             | 127.0.0.1 |
| root             | ::1       |
| debian-sys-maint | localhost |
| root             | localhost |
+------------------+-----------+
4 rows in set (0.00 sec)

mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'vincent' WITH GRANT OPTION;
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> select user,host from mysql.user;
+------------------+-----------+
| user             | host      |
+------------------+-----------+
| root             | %         |
| root             | 127.0.0.1 |
| root             | ::1       |
| debian-sys-maint | localhost |
| root             | localhost |
+------------------+-----------+
5 rows in set (0.00 sec)


Modify MySQL's listening address

root@spark-master:/usr/bin# vi /etc/mysql/my.cnf

Comment out line 47:

#bind-address = 127.0.0.1
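Editing my.cnf by hand works, but the same change can be scripted with sed. A sketch rehearsed on a throwaway copy (on the server, the target would be /etc/mysql/my.cnf; `sed -i` as shown assumes GNU sed):

```shell
# Rehearse the edit on a throwaway copy instead of the real /etc/mysql/my.cnf
cnf=/tmp/my.cnf.demo
printf '[mysqld]\nbind-address = 127.0.0.1\n' > "$cnf"
# Comment out the bind-address line so MySQL listens on all interfaces
sed -i 's/^bind-address/#bind-address/' "$cnf"
cat "$cnf"
```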


Restart MySQL

Connect to the MySQL database using Navicat.



5. Configure Hive

5.1 Modify hive-env.sh and add the following:

root@spark-master:/usr/local/hive/apache-hive-1.2.1/conf# pwd
/usr/local/hive/apache-hive-1.2.1/conf
root@spark-master:/usr/local/hive/apache-hive-1.2.1/conf# cp hive-env.sh.template hive-env.sh
root@spark-master:/usr/local/hive/apache-hive-1.2.1/conf# vi hive-env.sh
export HIVE_HOME=/usr/local/hive/apache-hive-1.2.1/
export HIVE_CONF_DIR=/usr/local/hive/apache-hive-1.2.1/conf

5.2 Modify hive-config.sh

root@spark-master:/usr/local/hive/apache-hive-1.2.1/bin# vi hive-config.sh

Add the following at the end of the file:

export JAVA_HOME=/usr/lib/java/jdk1.8.0_60
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.6.0
export SPARK_HOME=/usr/local/spark/spark-1.6.0-bin-hadoop2.6/
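To confirm the exports take effect, you can source the file and echo the variables. Sketched here against a throwaway copy rather than the real hive-config.sh:

```shell
# Write the same three exports to a throwaway file and source it
cat > /tmp/hive-config-demo.sh <<'EOF'
export JAVA_HOME=/usr/lib/java/jdk1.8.0_60
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.6.0
export SPARK_HOME=/usr/local/spark/spark-1.6.0-bin-hadoop2.6/
EOF
. /tmp/hive-config-demo.sh
echo "JAVA_HOME=$JAVA_HOME"
echo "HADOOP_HOME=$HADOOP_HOME"
```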

5.3 Modify hive-site.xml

root@spark-master:/usr/local/hive/apache-hive-1.2.1/conf# cp hive-default.xml.template hive-site.xml
root@spark-master:/usr/local/hive/apache-hive-1.2.1/conf# vi hive-site.xml

For simplicity, keep only the following four configuration items:

  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://spark-master:3306/hive?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>Username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>vincent</value>
    <description>password to use against metastore database</description>
  </property>
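A quick sanity check that all four javax.jdo.option properties made it into the file, sketched against a condensed throwaway copy (the real file is .../conf/hive-site.xml):

```shell
# Condensed throwaway copy of the four metastore properties
cat > /tmp/hive-site-demo.xml <<'EOF'
<configuration>
  <property><name>javax.jdo.option.ConnectionURL</name><value>jdbc:mysql://spark-master:3306/hive?createDatabaseIfNotExist=true</value></property>
  <property><name>javax.jdo.option.ConnectionDriverName</name><value>com.mysql.jdbc.Driver</value></property>
  <property><name>javax.jdo.option.ConnectionUserName</name><value>root</value></property>
  <property><name>javax.jdo.option.ConnectionPassword</name><value>vincent</value></property>
</configuration>
EOF
# All four metastore properties should be present
grep -c '<name>javax.jdo.option' /tmp/hive-site-demo.xml
```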


5.4 Upload the MySQL driver file mysql-connector-java-5.1.13-bin.jar to Hive's lib directory

root@spark-master:/usr/local/hive/apache-hive-1.2.1/lib# ls mysql-connector-java-5.1.13-bin.jar
mysql-connector-java-5.1.13-bin.jar


6. Start HDFS and YARN

root@spark-master:/usr/local/hadoop/hadoop-2.6.0/sbin# ./start-dfs.sh
root@spark-master:/usr/local/hadoop/hadoop-2.6.0/sbin# ./start-yarn.sh
root@spark-master:/usr/local/hadoop/hadoop-2.6.0/sbin# jps
16336 ResourceManager
15974 NameNode
16183 SecondaryNameNode
16574 Jps
root@spark-master:/usr/local/hadoop/hadoop-2.6.0/sbin#
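A small script to check that the expected daemons are up can save debugging later. The jps output is simulated below; on the cluster you would replace the variable with `jps_out="$(jps)"`:

```shell
# Simulated jps output; on a real node use: jps_out="$(jps)"
jps_out="16336 ResourceManager
15974 NameNode
16183 SecondaryNameNode
16574 Jps"
missing=""
for d in NameNode SecondaryNameNode ResourceManager; do
  echo "$jps_out" | grep -q "$d" || missing="$missing $d"
done
# Empty means all required daemons were found
echo "missing:${missing:-none}"
```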

7. Start the Hive client

root@spark-master:/usr/local/hive/apache-hive-1.2.1/bin# ./hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/spark/spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/usr/local/hive/apache-hive-1.2.1/lib/hive-common-1.2.1.jar!/hive-log4j.properties
[ERROR] Terminal initialization failed; falling back to unsupported
java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected

Hive fails to start: the jline classes conflict, because Hadoop's YARN lib directory ships an older jline than the one Hive expects.

Workaround:

root@spark-master:/usr/local/hive/apache-hive-1.2.1/lib# cp jline-2.12.jar $HADOOP_HOME/share/hadoop/yarn/lib/
# Delete the old version
root@spark-master:/usr/local/hadoop/hadoop-2.6.0/share/hadoop/yarn/lib# rm jline-0.9.94.jar
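The jar swap above can be rehearsed with empty placeholder jars before touching the real directories (the real source is Hive's lib directory and the real target is $HADOOP_HOME/share/hadoop/yarn/lib):

```shell
# Placeholder jars stand in for the real ones; all /tmp paths are hypothetical
mkdir -p /tmp/jline-demo/hive-lib /tmp/jline-demo/yarn-lib
touch /tmp/jline-demo/hive-lib/jline-2.12.jar /tmp/jline-demo/yarn-lib/jline-0.9.94.jar
# Copy Hive's newer jline into YARN's lib, then remove the old version
cp /tmp/jline-demo/hive-lib/jline-2.12.jar /tmp/jline-demo/yarn-lib/
rm /tmp/jline-demo/yarn-lib/jline-0.9.94.jar
ls /tmp/jline-demo/yarn-lib
```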


Start Hive again:

root@spark-master:/usr/local/hive/apache-hive-1.2.1/bin# ./hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/spark/spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/usr/local/hive/apache-hive-1.2.1/lib/hive-common-1.2.1.jar!/hive-log4j.properties
hive>

Test

hive> create table t1(a string, b int);
OK
Time taken: 1.67 seconds
hive> show tables;
OK
t1
Time taken: 0.29 seconds, Fetched: 1 row(s)

Now check whether Hive's metadata has appeared in MySQL.



The metadata is in sync!




This article is from the "Ding Dong" blog; please keep this source: http://lqding.blog.51cto.com/9123978/1750967

