Hadoop & HBase cluster configuration

Source: Internet
Author: User
Tags: ssh, port

Servers:

nodea -----> master

nodeb -----> slave

nodec -----> slave

Create a hadoop account

sudo useradd -d /home/hadoop -m -s /bin/bash hadoop

sudo passwd hadoop

Enter any password when prompted.

Install necessary environment

Install the JDK: sudo apt-get install sun-java6-jdk

After the installation completes, the JDK is located at /usr/lib/jvm/java-6-sun.

Set up passwordless SSH login.

ssh-keygen -t rsa

Press Enter at each prompt. This creates a .ssh directory in the home directory containing id_rsa and id_rsa.pub.

Upload the id_rsa.pub files of nodeb and nodec to nodea.

scp id_rsa.pub hadoop@nodea:~/.ssh/nodeb

scp id_rsa.pub hadoop@nodea:~/.ssh/nodec

Then append nodea's own id_rsa.pub, together with the uploaded nodeb and nodec keys, to authorized_keys on nodea.

hadoop@nodea:~/.ssh$ cat id_rsa.pub > authorized_keys

hadoop@nodea:~/.ssh$ cat nodeb >> authorized_keys

hadoop@nodea:~/.ssh$ cat nodec >> authorized_keys

Note the append operator (>>) for the second and third files; a plain > would overwrite the previous keys.
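The append step can also be wrapped in a small helper so it is repeatable when keys are rotated. A minimal sketch; the function name merge_keys and its directory argument are illustrative (on the cluster the directory would be ~/.ssh on nodea):

```shell
# merge_keys: rebuild authorized_keys from the local public key plus the
# uploaded nodeb/nodec key files sitting in the same directory.
merge_keys() {
  local dir="$1"
  cat "$dir/id_rsa.pub" "$dir/nodeb" "$dir/nodec" > "$dir/authorized_keys"
  chmod 600 "$dir/authorized_keys"  # sshd rejects overly permissive key files
}
```

Rebuilding the file from scratch each time avoids the duplicate entries that repeated appends would leave behind.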

Then distribute the authorized_keys file to the .ssh directory on nodeb and nodec.

scp authorized_keys hadoop@nodec:~/.ssh

scp authorized_keys hadoop@nodeb:~/.ssh

Verify passwordless login:

ssh localhost

ssh nodec

And so on: verify that every server can log on to the others over SSH without a password.
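The verification can be done in one loop. A sketch (check_hosts is an illustrative helper, not part of Hadoop; the connect command is a parameter so the loop can be exercised without a real cluster):

```shell
# check_hosts: attempt a key-based login to every listed host. BatchMode makes
# ssh fail immediately instead of prompting for a password, so any host that
# still asks for one shows up as FAILED.
check_hosts() {
  local ssh_cmd="$1"; shift
  local failed=0
  for host in "$@"; do
    if "$ssh_cmd" -o BatchMode=yes "$host" true 2>/dev/null; then
      echo "$host OK"
    else
      echo "$host FAILED"
      failed=1
    fi
  done
  return "$failed"
}
# e.g. check_hosts ssh nodea nodeb nodec
```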

Install and configure hadoop

Download hadoop

wget http://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/stable/hadoop-1.0.4.tar.gz

Decompress the package and start configuring hadoop.

Modify the conf/hadoop-env.sh file.

Set JAVA_HOME to the JDK path:

export JAVA_HOME=/usr/lib/jvm/java-6-sun

If sshd listens on a non-default port, put the matching ssh option here; with the default port the options can simply be cleared:

export HADOOP_SSH_OPTS=""

Change where the pid files are stored:

export HADOOP_PID_DIR=/home/hadoop/hadoop-1.0.4/pids

Modify the conf/core-site.xml to add the following properties:

<property>
    <name>fs.default.name</name>
    <value>hdfs://nodea:9000</value>
</property>

Modify conf/hdfs-site.xml to add the following properties

<property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/data/namenode</value>
</property>
<property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/data/datanode</value>
</property>
<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
<property>
    <name>dfs.support.append</name>
    <value>true</value>
</property>
<property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
</property>

Modify the conf/mapred-site.xml file to add the following properties:

<property>
    <name>mapred.job.tracker</name>
    <value>nodea:9001</value>
</property>

Modify the conf/masters file:

Replace localhost with nodea.

Modify the conf/slaves file, replacing localhost with:

nodeb

nodec

Finally, synchronize the configured hadoop directory to all servers, for example:

scp -r hadoop-1.0.4 hadoop@nodeb:~

scp -r hadoop-1.0.4 hadoop@nodec:~
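With more slaves, the per-host scp commands are easier to maintain as a loop. A sketch (sync_dir is an illustrative helper; the copy command is a parameter, normally scp, so the loop can be tested without a cluster):

```shell
# sync_dir: copy a directory into the hadoop user's home on each listed host.
sync_dir() {
  local copy_cmd="$1" dir="$2"; shift 2
  for host in "$@"; do
    "$copy_cmd" -r "$dir" "hadoop@$host:~" || return 1  # stop on first failure
  done
}
# e.g. sync_dir scp hadoop-1.0.4 nodeb nodec
```

Adding a new slave then only requires appending its hostname to the list.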

Run hadoop:

1. Format namenode

bin/hadoop namenode -format

2. Run hadoop

bin/start-all.sh

Configure HBase

Download HBase:

wget http://labs.mop.com/apache-mirror/hbase/stable/hbase-0.94.2.tar.gz

Decompress the downloaded file and configure HBase.

First, remove hadoop-core-1.0.3.jar from the lib directory under the HBase directory, then replace it with the jar from the hadoop version you installed, so that HBase and the cluster run the same Hadoop code:

rm lib/hadoop-core-1.0.3.jar

cp /home/hadoop/hadoop-1.0.4/hadoop-core-1.0.4.jar lib/

Configure the conf/hbase-env.sh file:

export JAVA_HOME=/usr/lib/jvm/java-6-sun

export HBASE_SSH_OPTS=""

export HBASE_PID_DIR=/home/hadoop/hbase-0.94.2/pids

export HBASE_MANAGES_ZK=true

Then modify the hbase-site.xml File

<property>
    <name>hbase.rootdir</name>
    <value>hdfs://nodea:9000/hbase</value>
</property>
<property>
    <name>hbase.zookeeper.quorum</name>
    <value>nodea</value>
</property>
<property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hadoop/hbase-0.94.2/zookeeper</value>
</property>
<property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
</property>
<property>
    <name>hbase.master</name>
    <value>nodea:6000</value>
</property>
<property>
    <name>dfs.support.append</name>
    <value>true</value>
</property>

Modify the conf/regionservers file, replacing localhost with:

nodeb

nodec

Then synchronize the hbase directory to nodec and nodeb:

scp -r hbase-0.94.2/ hadoop@nodeb:~

scp -r hbase-0.94.2/ hadoop@nodec:~

Run hbase

Before running hbase, check whether hadoop is in safe mode:

bin/hadoop dfsadmin -safemode get

If the result is:

Safe mode is ON

Then start hbase after hadoop exits safe mode.

You can also force it out of safe mode with:

bin/hadoop dfsadmin -safemode leave

Safe mode exists to protect data integrity while block reports are still coming in, so forcing HDFS out of it carries some risk.

Run hbase after hadoop exits safe mode.

bin/start-hbase.sh
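The safe-mode check can be automated: poll until HDFS reports safe mode OFF and only then start HBase. A sketch (wait_safemode is an illustrative helper, not a Hadoop command; the hadoop binary path is a parameter so the loop can be tested with a stand-in):

```shell
# wait_safemode: poll dfsadmin until HDFS reports safe mode OFF.
# "bin/hadoop dfsadmin -safemode get" prints "Safe mode is ON" or
# "Safe mode is OFF" on Hadoop 1.x.
wait_safemode() {
  local hadoop_cmd="${1:-bin/hadoop}"
  until "$hadoop_cmd" dfsadmin -safemode get | grep -q OFF; do
    sleep 5  # wait before polling again
  done
}
# e.g. wait_safemode bin/hadoop && bin/start-hbase.sh
```

This avoids both starting HBase too early and forcing safe mode off by hand.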
