Hadoop + Hive Deployment, Installation, and Configuration


Recently, as part of a specific project, I set up Hadoop + Hive. Before running Hive you must first have a working Hadoop installation. Hadoop can be deployed in three modes; in the walkthrough below I mainly use the pseudo-distributed installation mode. I am writing the process down to share it.
Preparatory work:

All of the downloaded installation packages below are extracted into the /usr/local/hadoop directory.

1. SSH into each server in turn and, as the root user, modify the hostname:
su root
vim /etc/sysconfig/network

In this file, set HOSTNAME=master. Then edit the hosts file:
vim /etc/hosts

Change localhost.localdomain to master, paired with the server's IP address, then restart the server:
reboot
On the master server, add a mapping for each host name and address:
vim /etc/hosts
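A minimal sketch of the mappings to add for a four-node cluster; the IP addresses below are placeholders for your own:
192.168.1.100 master
192.168.1.101 slave1
192.168.1.102 slave2
192.168.1.103 slave3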

Then verify connectivity with ping:
ping slave1
Similarly, modify the hostname and add the address mappings on the other three servers.

2. On each server, create the working directory and open up its permissions:
mkdir /usr/local/hadoop
chmod -R 777 /usr/local/hadoop

3. Install the JDK. Hadoop requires a JDK, so it must be installed on every server.
First check whether a JDK is already present: java -version
cd /usr/local/hadoop
Download JDK 7 from http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html
Upload the downloaded package to the /usr/local/hadoop folder on master.
tar -zxvf jdk-7u79-linux-x64.tar.gz
Configure environment variables for the JDK:
vim /etc/profile
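A sketch of the lines to append to /etc/profile, assuming the 7u79 archive above extracts to jdk1.7.0_79:
export JAVA_HOME=/usr/local/hadoop/jdk1.7.0_79
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH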

Make it take effect immediately: source /etc/profile
Check whether the installation succeeded: java -version

4. Prepare the Hadoop user:
Add the user: useradd hadoop
Set its password: passwd hadoop
Grant it ownership of the working directory: chown -R hadoop:hadoop /usr/local/hadoop

5. Passwordless SSH login configuration
In Hadoop, the NameNode starts and stops the daemons on each DataNode over SSH, so the nodes must be able to run commands on one another without entering a password. This requires configuring SSH for passwordless public-key authentication.
Switch to the hadoop user. The following configures master to SSH to slave1 without a password:
su hadoop
ssh-keygen -t rsa -P ''
Press Enter three times.
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
su root
vim /etc/ssh/sshd_config
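The usual change here, assuming a CentOS 6-era sshd_config, is to uncomment these lines (verify them against your own file):
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys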

service sshd restart
Test that a local passwordless connection succeeds:
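For example (the first connection will ask you to confirm the host key):
ssh localhost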

Then distribute id_rsa.pub to the slave1 server:
scp ~/.ssh/id_rsa.pub hadoop@slave1:~/
On the slave1 host, as the hadoop user:
su hadoop
mkdir ~/.ssh (create the .ssh folder if it does not already exist)
chmod 700 ~/.ssh
cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
Switch to the root user and, as before, edit the SSH daemon configuration:
su root
vim /etc/ssh/sshd_config
service sshd restart
On the master host, test the passwordless SSH connection to slave1:
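For example:
ssh slave1
It should log you in without prompting for a password.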

In the same way, configure master for passwordless SSH to slave2 and slave3.
The configuration so far only lets master SSH to slave1, slave2, and slave3 without a password; it does not let slave1, slave2, or slave3 SSH back to master without one.
To enable that, take slave1 as an example: as above, on the slave1 host under the hadoop user, generate id_rsa.pub, copy it to the master host, and append it to master's authorized_keys.

6. Install Hadoop (every machine in the cluster gets a Hadoop installation)
cd /usr/local/hadoop
wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz
tar -zxvf hadoop-1.2.1.tar.gz
Modify the environment variables:
su root
vim /etc/profile
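A sketch of the additions to /etc/profile, assuming the archive extracts to hadoop-1.2.1 as above:
export HADOOP_HOME=/usr/local/hadoop/hadoop-1.2.1
export PATH=$HADOOP_HOME/bin:$PATH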

Make it take effect immediately:
source /etc/profile
Modify hadoop-env.sh in Hadoop's conf folder:
cd /usr/local/hadoop/hadoop-1.2.1/conf
vim hadoop-env.sh

Uncomment the JAVA_HOME line and point it at the JDK installed earlier.
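For example, with the JDK path assumed above:
export JAVA_HOME=/usr/local/hadoop/jdk1.7.0_79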
Modify the hdfs-site.xml file under conf:
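A minimal hdfs-site.xml sketch; the replication factor of 3 is an assumption, chosen to match the three DataNodes:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>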

Modify the core-site.xml file under conf:
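A minimal core-site.xml sketch for Hadoop 1.x; the NameNode port 9000 and the tmp directory are assumptions:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>
</configuration>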

Modify mapred-site.xml under conf:
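A minimal mapred-site.xml sketch; the JobTracker port 9001 is an assumption:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
</configuration>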

Note: All four servers should be configured accordingly.

7. Verify on master:
Format HDFS:
cd /usr/local/hadoop/hadoop-1.2.1/bin
./hadoop namenode -format
./start-all.sh
jps
On master, jps should typically list NameNode, SecondaryNameNode, and JobTracker; on the slaves it should list DataNode and TaskTracker.

(ii) Hive installation (Hive must be installed on each node)
MySQL is chosen here as the metastore database; MySQL and Hive are installed on the master server.
Everything is placed under /usr/local/hadoop.
1. Download the installation file and extract it:
cd /usr/local/hadoop
wget http://mirrors.cnnic.cn/apache/hive/hive-1.2.1/apache-hive-1.2.1-bin.tar.gz
tar -zxvf apache-hive-1.2.1-bin.tar.gz

2. Configure environment variables
As the root user:
su root
vim /etc/profile
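A sketch of the Hive additions to /etc/profile, assuming the extraction path above:
export HIVE_HOME=/usr/local/hadoop/apache-hive-1.2.1-bin
export PATH=$HIVE_HOME/bin:$PATH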

Make it take effect: source /etc/profile
chown -R hadoop:hadoop /usr/local/hadoop

3. Install MySQL
yum install mysql-server
After the installation completes:
service mysqld start
mysql
If you get an error such as:
mysqladmin: connect to server at 'localhost' failed
error: 'Access denied for user 'root'@'localhost' (using password: YES)'
Solution:
service mysqld stop
mysqld_safe --skip-grant-tables &
mysql -uroot -p

use mysql;
update user set password=PASSWORD('hadoop') where user='root';
flush privileges;
quit
service mysqld restart
mysql -uroot -phadoop
or mysql -uroot -hmaster -phadoop
If you can log in successfully, the MySQL database has been installed successfully.
Create the hive user:
mysql> CREATE USER 'hive' IDENTIFIED BY 'hive';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'hive'@'master' WITH GRANT OPTION;
mysql> GRANT ALL PRIVILEGES ON *.* TO 'hive'@'master' IDENTIFIED BY 'hive';
mysql> flush privileges;
Create the hive database:
mysql> create database hive;

4. Modify the Hive configuration file:
cd /usr/local/hadoop/apache-hive-1.2.1-bin/conf
cp hive-default.xml.template hive-default.xml
vi hive-site.xml
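A sketch of the metastore-related properties for hive-site.xml; the database name, user, and password match the MySQL setup above, but verify the connection URL against your own host:
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://master:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
  </property>
</configuration>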

5. Copy the JDBC driver package
Copy MySQL's JDBC driver jar into Hive's lib directory:
cp mysql-connector-java.bin.jar /usr/local/hadoop/apache-hive-1.2.1-bin/lib

6. Distribute Hive to slave1, slave2, and slave3:
scp -r /usr/local/hadoop/apache-hive-1.2.1-bin slave1:/usr/local/hadoop/
scp -r /usr/local/hadoop/apache-hive-1.2.1-bin slave2:/usr/local/hadoop/
scp -r /usr/local/hadoop/apache-hive-1.2.1-bin slave3:/usr/local/hadoop/
Configure the environment variables on each slave as on master.

7. Test Hive
Go to the Hive installation directory and run it from the command line:
cd /usr/local/hadoop/apache-hive-1.2.1-bin/bin
hive
hive> show tables;
If this displays normally, the installation and configuration succeeded.
Note: start Hadoop before testing Hive.
To start the remote service (HiveServer2), you can run:
cd /usr/local/hadoop/apache-hive-1.2.1-bin/bin
nohup hive --service hiveserver2 &
If the terminal then appears quiet, that is normal; the service has started in the background.
