Hadoop 2.7.3 + HBase 1.2.5 + ZooKeeper 3.4.6

I. Environment Description

Personal understanding:

ZooKeeper can form a cluster on its own, but HBase cannot run a cluster by itself: it must be deployed on top of Hadoop and HDFS.

The cluster requires at least three nodes (that is, three servers): one master and two slaves, all able to ping one another over the LAN. This example assigns the node IP addresses as follows:

IP               Role
10.10.50.mongomaster
10.10.125.156    slave1
10.10.114.112    slave2

All three nodes run CentOS 6.5. To ease maintenance, it is best to use the same username, the same password, and the same Hadoop, HBase, and ZooKeeper directory layout on every node.

Note:
It is best to keep the hostname and the role name consistent. If they differ, you only need to configure the mapping in /etc/hosts.
You can change the hostname by editing the /etc/sysconfig/network file.
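For example, on CentOS 6.x the master node could be renamed like this (HOSTNAME is whatever role name you chose):

# vim /etc/sysconfig/network    # set HOSTNAME=master
# hostname master               # apply the name immediately, without a reboot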

Download the following software packages in advance:
hadoop-2.7.3.tar.gz
hbase-1.2.5-bin.tar.gz
zookeeper-3.4.6.tar.gz
jdk-8u111-linux-x64.rpm

Because this is a test environment, root is used throughout. In a production environment it is recommended to use another user such as hadoop, and to grant that user ownership of the directory:
chown -R hadoop:hadoop /data/yunva

II. Preparations

2.1 Install the JDK

Configure the JDK on all three machines. Download jdk-8u111-linux-x64.rpm and install it directly:

# rpm -ivh jdk-8u111-linux-x64.rpm

Modify the configuration file with vim /etc/profile:

export JAVA_HOME=/usr/java/jdk1.8.0_111   # adjust this for a different JDK path
export PATH=$JAVA_HOME/bin:$PATH
export HADOOP_HOME=/data/yunva/hadoop-2.7.3
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_SSH_OPTS="-p 48490"   # required when ssh does not use the default port 22; here ssh listens on port 48490

Because the JDK paths differ between the nodes in this deployment, JAVA_HOME must be set separately on each:
master
export JAVA_HOME=/usr/java/jdk1.8.0_111

slave1
export JAVA_HOME=/usr/java/jdk1.8.0_65

slave2
export JAVA_HOME=/usr/java/jdk1.8.0_102

Then reload the configuration file to make it take effect:

# source /etc/profile
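To confirm the settings took effect, a quick sanity check (the versions printed vary by node):

# java -version        # should match the JAVA_HOME set above, e.g. 1.8.0_111 on master
# echo $HADOOP_HOME    # should print /data/yunva/hadoop-2.7.3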

2.2 Add Hosts Mapping

Add the hosts mapping on each of the three nodes:

# vim /etc/hosts

The added content is as follows:

10.10.50.mongomaster
10.10.125.156 slave1
10.10.114.112 slave2

2.3 Passwordless SSH login between cluster nodes

SSH is installed by default on CentOS. If it is missing, install it first.

The cluster requires passwordless SSH access: each machine must be able to log in to itself without a password, and the master must be able to log in to the slaves without a password. Passwordless login between slaves is not required.

2.3.1 Set up passwordless login from master to slave1 and slave2

There are three main steps:
① Generate the public/private key pair
② Append the public key to the authentication file
③ Change permissions

# ssh-keygen -t rsa
# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# chmod 700 ~/.ssh && chmod 600 ~/.ssh/*

Test it. The first login may ask you to confirm with yes; after that, you can log in directly:

# ssh master
# ssh slave1
# ssh slave2

Set up passwordless self-login for slave1 and slave2 in the same way.

There is also a quicker method: generate a key pair on every server with ssh-keygen -t rsa; then, once the master can log in to master/slave1/slave2 without a password, copy the authorized_keys file straight to the other hosts.
Copy the file into the user's home .ssh directory on each machine:
# scp -P 48490 authorized_keys master:/root/.ssh/
# scp -P 48490 authorized_keys slave1:/root/.ssh/
# scp -P 48490 authorized_keys slave2:/root/.ssh/
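Alternatively, OpenSSH ships the ssh-copy-id helper, which appends the key for you. A minimal sketch; note that very old versions of the script take the port in a different form, so the -p flag here is an assumption about your OpenSSH release:

# ssh-copy-id -p 48490 root@slave1
# ssh-copy-id -p 48490 root@slave2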

III. Hadoop cluster installation and configuration

Unpack the hadoop, hbase, and zookeeper installation packages into the /data/yunva/ folder and rename the directories as needed.
The installation directories are:
/data/yunva/hadoop-2.7.3
/data/yunva/hbase-1.2.5
/data/yunva/zookeeper-3.4.6
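A sketch of the unpacking step, assuming the tarballs from section II were downloaded into /data/yunva:

# cd /data/yunva
# tar -zxf hadoop-2.7.3.tar.gz
# tar -zxf hbase-1.2.5-bin.tar.gz    # extracts to hbase-1.2.5
# tar -zxf zookeeper-3.4.6.tar.gz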

3.1 Modify the Hadoop configuration

The configuration files are all under the /data/yunva/hadoop-2.7.3/etc/hadoop/ directory.

3.1.1 core-site.xml

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>

3.1.2 hadoop-env.sh
Add the JDK path. If the JDK path differs between servers, modify it on each node separately:
export JAVA_HOME=/usr/java/jdk1.8.0_111

3.1.3 hdfs-site.xml
Create the hadoop name and data directories first:
# mkdir -p /data/yunva/hadoop-2.7.3/hadoop/name
# mkdir -p /data/yunva/hadoop-2.7.3/hadoop/data

<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/data/yunva/hadoop-2.7.3/hadoop/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/data/yunva/hadoop-2.7.3/hadoop/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>

3.1.4 mapred-site.xml

# mv mapred-site.xml.template mapred-site.xml

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
</configuration>

3.1.5 Modify the slaves file, replacing localhost
# cat /data/yunva/hadoop-2.7.3/etc/hadoop/slaves

slave1
slave2

Note: the configuration on all three machines is placed at the same path (if the JDK paths differ, JAVA_HOME must be modified separately).
The scp command conveniently transfers files from local to remote (or remote to local):

scp -P 48490 -r /data/yunva/hadoop-2.7.3/ slave1:/data/yunva
scp -P 48490 -r /data/yunva/hadoop-2.7.3/ slave2:/data/yunva
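A quick way to confirm the copy landed, reusing the same non-default ssh port:

# ssh -p 48490 slave1 ls /data/yunva/hadoop-2.7.3/etc/hadoop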
3.2 Start the Hadoop cluster

Go to the /data/yunva/hadoop-2.7.3/ directory on the master and run:

# bin/hadoop namenode -format

This formats the namenode; it is performed only once, before the service is started for the first time.

Then start hadoop:

# sbin/start-all.sh

The jps command should show three processes in addition to Jps itself:

# jps

30613 NameNode
30807 SecondaryNameNode
887 Jps
30972 ResourceManager
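The slave daemons can be checked the same way. A sketch, assuming jps is on the remote PATH and using the custom ssh port; each slave should list at least DataNode:

# ssh -p 48490 slave1 jps
# ssh -p 48490 slave2 jps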

hbase-env.sh (the Java path differs per node and must be modified accordingly)

master

export JAVA_HOME=/usr/java/jdk1.8.0_111
export HBASE_CLASSPATH=/data/yunva/hadoop-2.7.3/etc/hadoop/
export HBASE_MANAGES_ZK=false
export HBASE_SSH_OPTS="-p 48490"   # required when ssh does not use the default port 22; here ssh listens on 48490

slave1

export JAVA_HOME=/usr/java/jdk1.8.0_65
export HBASE_CLASSPATH=/data/yunva/hadoop-2.7.3/etc/hadoop/
export HBASE_MANAGES_ZK=false
export HBASE_SSH_OPTS="-p 48490"

slave2

export JAVA_HOME=/usr/java/jdk1.8.0_102
export HBASE_CLASSPATH=/data/yunva/hadoop-2.7.3/etc/hadoop/
export HBASE_MANAGES_ZK=false
export HBASE_SSH_OPTS="-p 48490"

IV. ZooKeeper cluster installation and configuration

For ZooKeeper 3.4.6 cluster and standalone deployment on CentOS 6.5, see the detailed guide at https://www.bkjia.com/Linux/2018-03/151439.htm
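As a minimal sketch (not taken from the linked guide), a three-node ensemble consistent with the hbase.zookeeper.quorum and clientPort values used in section V can be described in /data/yunva/zookeeper-3.4.6/conf/zoo.cfg; the dataDir location below is an assumption:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/yunva/zookeeper-3.4.6/data    # assumed data directory
clientPort=2181
server.1=master:2888:3888
server.2=slave1:2888:3888
server.3=slave2:2888:3888

Each node also needs a myid file under dataDir containing its server number, e.g. echo 1 > /data/yunva/zookeeper-3.4.6/data/myid on master (2 on slave1, 3 on slave2).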

V. HBase cluster installation and configuration
Configuration file directory: /data/yunva/hbase-1.2.5/conf

5.1 hbase-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_111   # configure separately if the JDK paths differ
export HBASE_CLASSPATH=/data/yunva/hadoop-2.7.3/etc/hadoop/
export HBASE_MANAGES_ZK=false
export HBASE_SSH_OPTS="-p 48490"   # modify if ssh does not use the default port 22

5.2 hbase-site.xml (identical on all nodes)

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>master</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master,slave1,slave2</value>
  </property>
  <property>
    <name>zookeeper.session.timeout</name>
    <value>60000000</value>
  </property>
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
</configuration>

5.3 Modify regionservers

Add the slave list to the regionservers file:

slave1
slave2

5.4 distribute and synchronize installation packages

Copy the entire hbase installation directory to all slave servers:

$ scp -P 48490 -r /data/yunva/hbase-1.2.5 slave1:/data/yunva/
$ scp -P 48490 -r /data/yunva/hbase-1.2.5 slave2:/data/yunva/

VI. Start the Cluster

1. Start ZooKeeper (on each of the three nodes)

/data/yunva/zookeeper-3.4.6/bin/zkServer.sh start
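To confirm the ensemble formed, running zkServer.sh status on each node should report one leader and two followers:

/data/yunva/zookeeper-3.4.6/bin/zkServer.sh status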

2. Start Hadoop (on the master)

/data/yunva/hadoop-2.7.3/sbin/start-all.sh

3. Start HBase (on the master)

/data/yunva/hbase-1.2.5/bin/start-hbase.sh

4. After startup, the processes on the master and slaves are as follows:

[root@master ~]# jps
Jps
SecondaryNameNode   # hadoop process
NameNode            # hadoop master process
ResourceManager     # hadoop process
HMaster             # hbase master process
ZooKeeperMain       # zookeeper process

[root@slave1 ~]# jps
Jps
ZooKeeperMain       # zookeeper process
DataNode            # hadoop slave process
HRegionServer       # hbase slave process

5. Enter the hbase shell to verify.

# cd /data/yunva/hbase-1.2.5/
[root@test6_vedio hbase-1.2.5]# bin/hbase shell
09:51:51,479 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data/yunva/hbase-1.2.5/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data/yunva/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.5, rd7b05f79dee10e0ada614765bb354b93d615a157, Wed Mar  1 00:34:48 CST 2017

hbase(main):001:0> list
TABLE
0 row(s) in 0.2620 seconds

=> []

hbase(main):003:0> create 'scores', 'case', 'Course'
0 row(s) in 1.3300 seconds

=> Hbase::Table - scores
hbase(main):004:0> list
TABLE
scores
1 row(s) in 0.0100 seconds

=> ["scores"]
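As a quick follow-up, writing and reading one cell confirms the region servers are serving requests. A sketch using the 'Course' column family created above; the row key 'tom', qualifier 'math', and value '90' are made-up examples:

hbase(main):005:0> put 'scores', 'tom', 'Course:math', '90'
hbase(main):006:0> scan 'scores'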

6. Enter the zookeeper shell to verify.

[root@test6_vedio zookeeper-3.4.6]# bin/zkCli.sh -server localhost:2181

10:04:33,083 [myid:] - INFO [main:Environment@100] - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 GMT
10:04:33,088 [myid:] - INFO [main:Environment@100] - Client environment:host.name=test6_vedio
10:04:33,088 [myid:] - INFO [main:Environment@100] - Client environment:java.version=1.8.0_111
10:04:33,091 [myid:] - INFO [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
10:04:33,091 [myid:] - INFO [main:Environment@100] - Client environment:java.home=/usr/java/jdk1.8.0_111/jre
10:04:33,091 [myid:] - INFO [main:Environment@100] - Client environment:java.class.path=/data/yunva/zookeeper-3.4.6/bin/../build/classes:/data/yunva/zookeeper-3.4.6/bin/../build/lib/*.jar:/data/yunva/zookeeper-3.4.6/bin/../lib/slf4j-log4j12-1.6.1.jar:/data/yunva/zookeeper-3.4.6/bin/../lib/slf4j-api-1.6.1.jar:/data/yunva/zookeeper-3.4.6/bin/../lib/netty-3.7.0.Final.jar:/data/yunva/zookeeper-3.4.6/bin/../lib/log4j-1.2.16.jar:/data/yunva/zookeeper-3.4.6/bin/../lib/jline-0.9.94.jar:/data/yunva/zookeeper-3.4.6/bin/../zookeeper-3.4.6.jar:/data/yunva/zookeeper-3.4.6/bin/../src/java/lib/*.jar:/data/yunva/zookeeper-3.4.6/bin/../conf:
10:04:33,091 [myid:] - INFO [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
10:04:33,091 [myid:] - INFO [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
10:04:33,091 [myid:] - INFO [main:Environment@100] - Client environment:java.compiler=<NA>
10:04:33,092 [myid:] - INFO [main:Environment@100] - Client environment:os.name=Linux
10:04:33,092 [myid:] - INFO [main:Environment@100] - Client environment:os.arch=amd64
10:04:33,092 [myid:] - INFO [main:Environment@100] - Client environment:os.version=2.6.32-431.11.25.el6.ucloud.x86_64
10:04:33,092 [myid:] - INFO [main:Environment@100] - Client environment:user.name=root
10:04:33,092 [myid:] - INFO [main:Environment@100] - Client environment:user.home=/root
10:04:33,092 [myid:] - INFO [main:Environment@100] - Client environment:user.dir=/data/yunva/zookeeper-3.4.6
10:04:33,094 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@446cdf90
Welcome to ZooKeeper!
10:04:33,128 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
10:04:33,209 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@852] - Socket connection established to localhost/127.0.0.1:2181, initiating session
10:04:33,218 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1235] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x35bb23d68ba0003, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /

[zookeeper, hbase]

[zk: localhost:2181(CONNECTED) 0] ls /hbase
[replication, meta-region-server, rs, splitWAL, backup-masters, table-lock, flush-table-proc, region-in-transition, online-snapshot, master, running, recovering-regions, draining, namespace, hbaseid, table]

Visiting the default HTTP management port pages shows the cluster status.
Hadoop:
http://IP:8088/cluster

HBase:
http://IP:16010/master-status

HDFS:
http://IP:50070/dfshealth.html#tab-overview
