Hive Cluster Installation and Configuration


Before installing Hive, you need to install the JDK and set up a Hadoop cluster.

JDK and Hadoop installation are not covered here; this walkthrough uses JDK 1.7 and Hadoop 2.6.0.
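To confirm the prerequisites before proceeding, a quick check (assuming java and hadoop are already on the PATH of each node):

java -version     # should report 1.7.x
hadoop version    # should report 2.6.0
jps               # the HDFS/YARN daemons should already be running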


System environment: three virtual machines running in VMware.

Node1

Node2

Node3


Download the Hive package from the official website, then unpack it in the Hadoop directory:

tar -zxvf ./hive.............tar.gz


Then configure the Hive environment variables in /etc/profile:

vi /etc/profile


export HIVE_HOME=/usr/hadoop/hive

export PATH=$PATH:$HIVE_HOME/bin

source /etc/profile
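A quick way to verify that the variables took effect (hive --version is available in this Hive release):

echo $HIVE_HOME
hive --version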



Download the MySQL source package and build it:


# wget http://dev.mysql.com/get/downloads/mysql/mysql-5.6.25.tar.gz
# tar -zxvf mysql-5.6.25.tar.gz
# cd mysql-5.6.25
# cmake \
-DCMAKE_INSTALL_PREFIX=/data/mysql \
-DMYSQL_UNIX_ADDR=/data/mysql/mysql.sock \
-DDEFAULT_CHARSET=utf8 \
-DDEFAULT_COLLATION=utf8_general_ci \
-DWITH_INNOBASE_STORAGE_ENGINE=1 \
-DWITH_ARCHIVE_STORAGE_ENGINE=1 \
-DWITH_BLACKHOLE_STORAGE_ENGINE=1 \
-DMYSQL_DATADIR=/data/mysql/data \
-DMYSQL_TCP_PORT=3306 \
-DENABLE_DOWNLOADS=1


If cmake fails with an error, the usual cause is that ncurses-devel is not installed; note that a CMakeCache.txt file is still left behind and must be deleted before rerunning cmake.
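A minimal fix, assuming a yum-based distribution such as CentOS (swap in your own package manager otherwise):

# yum install -y ncurses-devel
# rm -f CMakeCache.txt
# then rerun the cmake command above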


# make && make install


Modify Directory Permissions


# chmod +w /data/mysql/
# chown -R mysql:mysql /data/mysql/
# ln -s /data/mysql/lib/libmysqlclient.so.18 /usr/lib/libmysqlclient.so.18
# ln -s /data/mysql/mysql.sock /tmp/mysql.sock


Initializing the database


# cp /data/mysql/support-files/my-default.cnf /etc/my.cnf
# cp /data/mysql/support-files/mysql.server /etc/init.d/mysqld
# /data/mysql/scripts/mysql_install_db --user=mysql --defaults-file=/etc/my.cnf --basedir=/data/mysql --datadir=/data/mysql/data


Start the MySQL service


# chmod +x /etc/init.d/mysqld
# service mysqld start
# ln -s /data/mysql/bin/mysql /usr/bin/


Initialize password

# mysql -uroot -h127.0.0.1 -p
mysql> SET PASSWORD = PASSWORD('123456');


Create Hive User


mysql> CREATE USER 'hive' IDENTIFIED BY 'hive';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'hive'@'hadoop-master' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;


Hive User Login


[hadoop@node1 ~]$ mysql -h node1 -uhive
mysql> SET PASSWORD = PASSWORD('hive');


Creating a Hive Database


mysql> CREATE DATABASE hive;


Configure Hive


Modify the configuration file.
Go to Hive's conf directory, find hive-default.xml.template, and cp it to hive-default.xml.
Then create a separate hive-site.xml and add the following parameters:


[hadoop@node1 conf]$ pwd
/home/hadoop/hive/conf
[hadoop@node1 conf]$ vi hive-site.xml
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://node1:3306/hive?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>driver class name for a JDBC metastore</description>
</property>


<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
<description>username to use against Metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hive</value>
<description>password to use against Metastore database</description>
</property>
</configuration>


Download the JDBC Driver


[hadoop@node1 ~]$ wget http://cdn.mysql.com/Downloads/Connector-J/mysql-connector-java-5.1.36.tar.gz
[hadoop@node1 ~]$ ls
hive dfs hadoop-2.7.1 hsource tmp
[hadoop@node1 ~]$ cp mysql-connector-java-5.1.33-bin.jar apache-hive-1.2.1-bin/lib/


Hive Client Configuration


[hadoop@node1 ~]$ scp -r hive/ hadoop@node2:/usr/hadoop
[hadoop@node2 conf]$ vi hive-site.xml
<configuration>
<property>
<name>hive.metastore.uris</name>
<value>thrift://hadoop-master:9083</value>
</property>
</configuration>


Starting Hive


First, start the metastore service:


[hadoop@node1 ~]$ hive --service metastore &
[hadoop@node1 ~]$ jps
10288 RunJar # one more process: the metastore
9365 NameNode
9670 SecondaryNameNode
11096 Jps
9944 NodeManager
9838 ResourceManager
9471 DataNode
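Optionally, confirm that the metastore is listening on its default Thrift port 9083 (netstat flags may vary by distribution):

[hadoop@node1 ~]$ netstat -lnt | grep 9083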


Hive Server-Side Access


[hadoop@node1 ~]$ hive
Logging initialized using configuration in jar:file:/usr/hadoop/hive/lib/hive-common-1.2.1.jar!/hive-log4j.properties
hive> show databases;
OK
default
src
Time taken: 1.332 seconds, Fetched: 2 row(s)
hive> use src;
OK
Time taken: 0.037 seconds
hive> create table test1 (id int);
OK
Time taken: 0.572 seconds
hive> show tables;
OK
abc
test
test1
Time taken: 0.057 seconds, Fetched: 3 row(s)
hive>


Hive Client Access


[hadoop@node2 ~]$ hive
Logging initialized using configuration in jar:file:/usr/hadoop/hive/lib/hive-common-1.2.1.jar!/hive-log4j.properties
hive> show databases;
OK
default
src
Time taken: 1.022 seconds, Fetched: 2 row(s)
hive> use src;
OK
Time taken: 0.057 seconds
hive> show tables;
OK
abc
test
test1
Time taken: 0.218 seconds, Fetched: 3 row(s)
hive> create table test2 (id int, name string);
OK
Time taken: 5.518 seconds
hive> show tables;
OK
abc
test
test1
test2
Time taken: 0.102 seconds, Fetched: 4 row(s)
hive>


OK, the test is complete and the installation has been successful.


If you want to access Hive remotely from code and manipulate the Hive database, start HiveServer2:

[hadoop@node1 ~]$ hive --service hiveserver2 &

You can then manipulate the Hive database remotely over JDBC; see the sketch below.
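As a quick connectivity check, you can reach HiveServer2 with Beeline, the JDBC-based CLI that ships with Hive; a program would use the same JDBC URL with the org.apache.hive.jdbc.HiveDriver driver class. A minimal sketch, assuming HiveServer2 runs on node1 with its default port 10000 and that the hadoop user is allowed to connect (adjust host, port, and user to your setup):

# connect to HiveServer2 over JDBC (host and user are this cluster's; adjust to yours)
[hadoop@node2 ~]$ beeline -u jdbc:hive2://node1:10000 -n hadoop
0: jdbc:hive2://node1:10000> show databases;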
