Pseudo-distributed cluster environment: Hadoop, HBase, and ZooKeeper setup (All)

Source: Internet
Author: User
Tags: zookeeper, ssh, file permissions, iptables, firewall
Environment Description

1. Operating system: CentOS 6.5

2. jdk-7u51-linux-x64.tar.gz

3. hadoop-1.1.2.tar.gz

4. hbase-0.94.7-security.tar.gz

5. zookeeper-3.4.5.tar.gz

Setting the IP address

Set static IP

Execute command

vim /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE="eth0"
BOOTPROTO="static"
ONBOOT="yes"
TYPE="Ethernet"
IPADDR="192.168.40.137"
GATEWAY="192.168.40.2"
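The edit above can be sketched with a heredoc. This writes to a local example file for illustration; in real use, write to /etc/sysconfig/network-scripts/ifcfg-eth0 as root and then run service network restart:

```shell
# Sketch: the ifcfg-eth0 contents above, written via a heredoc.
# In real use, write to /etc/sysconfig/network-scripts/ifcfg-eth0 as root,
# then apply with: service network restart
cat > ifcfg-eth0.example <<'EOF'
DEVICE="eth0"
BOOTPROTO="static"
ONBOOT="yes"
TYPE="Ethernet"
IPADDR="192.168.40.137"
GATEWAY="192.168.40.2"
EOF
grep -c '=' ifcfg-eth0.example   # one KEY="value" pair per line: prints 6
```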

Turn off the firewall

On the master, execute command: service iptables stop
Verify: service iptables status

Disable the firewall from running automatically at boot: chkconfig iptables off
Verify: chkconfig --list | grep iptables
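The firewall steps can be bundled into one small script. As a sketch, each command is echoed through a `run` wrapper so it reads as a dry run; change the wrapper body to `"$@"` to execute for real (CentOS 6, as root):

```shell
# Dry-run sketch of the firewall shutdown; 'run' only echoes each command.
# Change the wrapper to:  run() { "$@"; }  to execute for real as root.
run() { echo "+ $*"; }
run service iptables stop    # stop the running firewall now
run chkconfig iptables off   # disable autostart at boot
# verify afterwards with: service iptables status ; chkconfig --list | grep iptables
```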

Set host name

Execute command

(1) hostname m
(2) vi /etc/sysconfig/network
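The persistent half of this change is the HOSTNAME line in /etc/sysconfig/network. A sketch on a local stand-in copy of that file (edit the real file in practice). Note that the later configuration files refer to the host as m, so m must also resolve to the machine's IP, typically via a line such as `192.168.40.137 m` in /etc/hosts:

```shell
# Sketch: edit HOSTNAME in a local stand-in for /etc/sysconfig/network.
printf 'NETWORKING=yes\nHOSTNAME=localhost.localdomain\n' > network.example
sed -i 's/^HOSTNAME=.*/HOSTNAME=m/' network.example
grep HOSTNAME network.example        # prints: HOSTNAME=m
# Also make "m" resolvable, e.g. append to /etc/hosts:  192.168.40.137 m
```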

Set up SSH password-free login

Execute command

(1) ssh-keygen -t rsa
(2) cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
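The two key commands can be exercised against a throwaway directory first (a sketch; the real setup targets ~/.ssh, and authorized_keys should be mode 600 or sshd may refuse it):

```shell
# Sketch: generate a passphrase-less RSA key and authorize it, in a temp dir.
d=$(mktemp -d)
ssh-keygen -t rsa -N '' -f "$d/id_rsa" -q    # -N '' means no passphrase
cp "$d/id_rsa.pub" "$d/authorized_keys"
chmod 600 "$d/authorized_keys"               # sshd rejects loose permissions
ls "$d"
```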
Verify: ssh m

Install JDK

Execute command

(1) cd /usr/local (note the file permission problem under Linux)
(2) extract jdk-7u51-linux-x64.tar.gz, then mv the extracted JDK directory to jdk
(3) vi /etc/profile and add the following:

export JAVA_HOME=/usr/local/jdk
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar:$CLASSPATH

(4) source /etc/profile
Verify: java -version
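A quick sanity check on the /etc/profile additions above; this sketch sets the variables in the current shell rather than sourcing /etc/profile, and /usr/local/jdk is the install path assumed by this guide:

```shell
# Sketch: set the JDK variables as in /etc/profile and confirm PATH picks them up.
export JAVA_HOME=/usr/local/jdk
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar:$CLASSPATH
case ":$PATH:" in
  *":$JAVA_HOME/bin:"*) echo "PATH ok" ;;   # java -version should now resolve
  *)                    echo "PATH missing JAVA_HOME/bin" ;;
esac
```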

Installing Hadoop

Execute command

(1) tar -zxvf hadoop-1.1.2.tar.gz
(2) mv hadoop-1.1.2 hadoop
(3) vi /etc/profile and add the following:

export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export PATH=.:$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH

(4) source /etc/profile
(5) Modify the configuration files in the conf directory
1. hadoop-env.sh

export JAVA_HOME=/usr/local/jdk

2. core-site.xml

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://m:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>
</configuration>

3. hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>

4. mapred-site.xml

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>m:9001</value>
  </property>
</configuration>
(6) hadoop namenode -format
(7) $HADOOP_HOME/bin/start-all.sh

Verify: (1) Execute the command jps; you should see 5 new Java processes: NameNode, SecondaryNameNode, DataNode, JobTracker, and TaskTracker. (2) View in a browser: http://m:50070 and http://m:50030
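The jps verification in step (1) can be scripted. This sketch checks a captured jps listing for the five expected daemons; the sample `running` text stands in for real output, and in practice you would use running=$(jps):

```shell
# Sketch: check that all five Hadoop 1.x daemons appear in jps output.
running="1234 NameNode
2345 SecondaryNameNode
3456 DataNode
4567 JobTracker
5678 TaskTracker"            # sample output; in practice: running=$(jps)
ok=1
for d in NameNode SecondaryNameNode DataNode JobTracker TaskTracker; do
  echo "$running" | grep -qw "$d" || { echo "missing: $d"; ok=0; }
done
[ "$ok" -eq 1 ] && echo "all 5 daemons running"
```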

2 Installing ZooKeeper

The ZK server cluster size should be no fewer than 3 nodes, and the system time across the servers must be consistent.

In the /usr/local directory of m, unzip zookeeper-3.4.5.tar.gz and set the environment variable.

In the conf directory, rename zoo_sample.cfg to zoo.cfg. Edit the file: execute vi zoo.cfg, modify dataDir=/usr/local/zk/data, and add a new line server.0=m:2888:3888.

Create the folder: mkdir /usr/local/zk/data. In the data directory, create a file myid with a value of 0.

Start: execute the command zkServer.sh start on each of the three nodes. Test: execute the command zkServer.sh status on each of the three nodes.
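Put together, the resulting zoo.cfg looks roughly like this. A sketch: dataDir and server.0 are from the text above, while the other keys are the stock defaults shipped in zoo_sample.cfg:

```
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/usr/local/zk/data
server.0=m:2888:3888
```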

3 Installing HBase

Unzip, rename, and set the environment variable HBASE_HOME. Modify the file $HBASE_HOME/conf/hbase-env.sh as follows:

export JAVA_HOME=/usr/local/jdk
export HBASE_MANAGES_ZK=true

Modify the file hbase-site.xml as follows:

<property>
	  <name>hbase.rootdir</name>
	  <value>hdfs://m:9000/hbase</value>
	</property>
	<property>
	  <name>hbase.cluster.distributed</name>
	  <value>true</value>
	</property>
	<property>
	  <name>hbase.zookeeper.quorum</name>
	  <value>m</value>
	</property>
	<property>
	  <name>dfs.replication</name>
	  <value>1</value>
	</property>
(Optional) Modify the regionservers file; for the specific operation, see the PPT.

Start: execute the command start-hbase.sh. Note: before starting HBase, start Hadoop first, to ensure that HBase can write its data.
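One thing worth checking before start-hbase.sh: hbase.rootdir must point inside the HDFS instance named by fs.default.name in core-site.xml. A sketch of that check using the values from the configs above:

```shell
# Sketch: hbase.rootdir must live under the fs.default.name HDFS URI.
fs_default="hdfs://m:9000"            # from core-site.xml
hbase_root="hdfs://m:9000/hbase"      # from hbase-site.xml
case "$hbase_root" in
  "$fs_default"/*) echo "rootdir matches HDFS" ;;
  *)               echo "MISMATCH: fix hbase.rootdir or fs.default.name" ;;
esac
```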
