HBase and ZooKeeper Fully Distributed Installation

Source: Internet
Author: User
Tags: zookeeper

Last time I installed and configured a Hadoop cluster; over the past two days I installed HBase so I could learn with some real data. The tutorials online are numerous and a bit messy, and it took a long time of fumbling around, so I am recording my own configuration experience here. Along the way I read a few blog posts that I found helpful, and share them below:
Installation and configuration tutorials:
http://www.linuxidc.com/Linux/2012-12/76947.htm
http://blog.csdn.net/lskyne/article/details/8900608
Introduction to the Taobao application (background reading):
http://www.iteye.com/magazines/83

one. ZooKeeper installation and configuration: detailed steps

The Hadoop cluster is already configured: the master node runs one NameNode and one DataNode (the DataNodes are set in the slaves file); the mit02 node is a DataNode. First, on the master node: download and unzip the ZooKeeper install package to /usr/local/hadoop/, go into the conf/ folder, copy zoo_sample.cfg to zoo.cfg, and modify zoo.cfg as follows:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# The directory where the snapshot is stored.
# Do not use /tmp for storage; /tmp here is just
# for example's sake.
dataDir=/usr/local/hadoop/zk
# Note: create a new zk folder, and inside it a new myid file whose
# content is 1 or 2 (matching the server hosts below)
dataLogDir=/usr/local/hadoop/zk
# The port at which the clients will connect
clientPort=2181
server.1=master:2888:3888
server.2=mit02:2888:3888
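The myid file mentioned in the comment above is easy to get wrong: ZooKeeper expects a file literally named myid (lowercase) inside dataDir, containing only the server id. A minimal sketch, using /tmp/zk-demo as a stand-in for the /usr/local/hadoop/zk path above:

```shell
# Create the dataDir and the myid file for server.1
# (run the same with "2" on mit02, matching server.2).
# /tmp/zk-demo stands in for /usr/local/hadoop/zk in this demo.
DATADIR=/tmp/zk-demo
mkdir -p "$DATADIR"
echo 1 > "$DATADIR/myid"
cat "$DATADIR/myid"   # prints: 1
```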
Finally, scp the ZooKeeper folder to the other node (remember to modify the myid content there).
two. HBase installation and configuration: detailed steps

First download the HBase installation package on the master node (make sure its version matches your Hadoop version). Go into the conf/ folder, open the hbase-site.xml file, and modify it as follows:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>master:60000</value>
  </property>
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master,mit02</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/usr/local/hadoop/zk</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.master.info.port</name>
    <value>60010</value>
  </property>
</configuration>
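Before copying hbase-site.xml to the other nodes, it is worth confirming the file is well-formed XML, since a stray character there prevents HBase from starting. A hedged sketch (the file path below is a demo stand-in, not your real conf/hbase-site.xml, and it uses python3 only as a convenient XML parser):

```shell
# Write a minimal demo config, then confirm it parses as XML.
cat > /tmp/hbase-site-demo.xml <<'EOF'
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
  </property>
</configuration>
EOF
python3 - <<'EOF'
import xml.etree.ElementTree as ET
root = ET.parse('/tmp/hbase-site-demo.xml').getroot()
print(root.tag)   # prints: configuration
EOF
```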
 

3. Modify hbase-env.sh and add the following:

export JAVA_HOME=/opt/software/java/jdk1.8.0_65
export HADOOP_HOME=/usr/local/hadoop
export HBASE_HOME=/usr/local/hadoop/hbase

Actually, I did not know at first what this pids setting was for. Then, when starting HBase, I found that as long as startup succeeds the pid files are generated, and when HBase shuts down (or hits a problem and closes) the files are deleted automatically, so they give you an intuitive way to tell whether HBase is running. You can therefore set this path to somewhere that is convenient for you to check.

# The directory where pid files are stored. /tmp by default.
export HBASE_PID_DIR=/usr/local/hadoop/pids
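Following that observation, a quick way to check whether the daemons look alive is simply to list the pid directory. A sketch, with /tmp/demo-pids standing in for the /usr/local/hadoop/pids path above, and a fabricated pid file for the demo:

```shell
# List pid files to see which HBase daemons appear to be running.
# /tmp/demo-pids stands in for /usr/local/hadoop/pids; the pid file
# name and contents here are made up for the demo.
PID_DIR=/tmp/demo-pids
mkdir -p "$PID_DIR"
echo 12345 > "$PID_DIR/hbase-hadoop-master.pid"
ls "$PID_DIR"   # lists: hbase-hadoop-master.pid
```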

Another key point: the default value here is true, which means ZooKeeper is managed by HBase itself. At first I wanted to save effort and leave it that way without installing ZooKeeper separately, but startup was not normal and errors appeared when shutting HBase down; I do not know the exact reason. Some people online say the managed ZooKeeper only works in standalone and pseudo-distributed modes. I wanted a standalone ZooKeeper cluster anyway, to learn more, so I set this to false.

# Tell HBase whether it should manage its own instance of ZooKeeper or not.
export HBASE_MANAGES_ZK=false

4. Add the node hostnames to the regionservers file:

master
mit02

5. scp the HBase folder to the corresponding directory on the other nodes.

three. Starting and stopping HBase

1. Set the environment variables; add the following to /etc/profile:

export ZOOKEEPER_HOME=/usr/local/hadoop/zookeeper
export HBASE_HOME=/usr/local/hadoop/hbase
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME:$HADOOP_HOME/sbin:$HIVE_HOME/bin:$ZOOKEEPER_HOME/bin:$HBASE_HOME/bin:$SQOOP_HOME/bin
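After editing /etc/profile, run `source /etc/profile` and check that the new bin directories actually landed on PATH before trying the start scripts. A small sketch (the HBase path matches the layout assumed above):

```shell
# Simulate the profile edit, then verify $HBASE_HOME/bin ended up on PATH.
export HBASE_HOME=/usr/local/hadoop/hbase
export PATH="$PATH:$HBASE_HOME/bin"
case ":$PATH:" in
  *":$HBASE_HOME/bin:"*) echo "hbase bin on PATH" ;;
  *)                     echo "hbase bin missing" ;;
esac
```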

2. Start Hadoop on the master node: start-all.sh

3. Start ZooKeeper on each node: zkServer.sh start

4. Start HBase: start-hbase.sh

5. Run jps to view the current process status:

6512 ResourceManager
5984 NameNode
6144 DataNode
15570 HRegionServer
6647 NodeManager
15256 QuorumPeerMain
6349 SecondaryNameNode
16605 Jps
15406 HMaster
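With a standalone ZooKeeper, the processes to look for on the master are HMaster, HRegionServer and QuorumPeerMain. A hedged way to check is to grep the jps output for them; here the listing above is simulated in a variable, and on a real node you would use `jps_output=$(jps)` instead:

```shell
# Check that the expected daemons appear in a jps listing.
# jps_output simulates the listing above; replace with: jps_output=$(jps)
jps_output="15570 HRegionServer
15256 QuorumPeerMain
15406 HMaster"
for d in HMaster HRegionServer QuorumPeerMain; do
  if echo "$jps_output" | grep -q "$d"; then
    echo "$d: running"
  else
    echo "$d: NOT running"
  fi
done
```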

6. View from the web: http://master:60010/master-status
7. Shut down HBase: stop-hbase.sh
Shut down ZooKeeper: zkServer.sh stop
Shut down Hadoop: stop-all.sh
