Hadoop + HBase installation manual for CentOS

Before installation

Hadoop distributes both file storage and task processing. Its architecture therefore uses two kinds of servers with different responsibilities: a master server and slave servers. This installation manual covers setting up both.

Installation assumptions

This manual installs the Hadoop architecture on two servers, and assumes that:

1. The two servers are named master and slave;

2. Both servers run CentOS 5.x, with a version number of at least 5.4;

3. master will serve as the master server, and slave as the slave server;

4. Both master and slave are running normally and connected to the Internet;

5. The wget command works on both master and slave;

6. Both master and slave have sufficient disk space;

7. Root access is available on both master and slave;

8. The master IP address is 192.168.229.133, and the slave IP address is 192.168.229.134.

Installation & Configuration

Note: This part covers the setup common to master and slave. Perform the following operations on both master and slave.

1. Set hosts and hostname

Add the following to /etc/hosts on both master and slave:

192.168.229.133 master

192.168.229.134 slave
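
With these entries in place, each machine should be able to reach the other by name. A quick optional check:

ping -c 1 slave

[Run on master; likewise run ping -c 1 master on slave]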

Modify the hostname of master (on CentOS 5 the persistent hostname is set in /etc/sysconfig/network):

vi /etc/sysconfig/network

HOSTNAME=master

Modify the hostname of slave in the same way:

vi /etc/sysconfig/network

HOSTNAME=slave
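
This setting takes effect at the next boot; to apply it to the running session immediately (an optional extra step), use the hostname command as well:

hostname master

[On master; run hostname slave on slave]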

2. Download, install, and configure JDK 1.6. The command is:

wget 'http://…/jdk-6u26-linux-i586-rpm.bin'
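
The download is a self-extracting RPM installer; assuming the file name above, it can be made executable and run as follows:

chmod +x jdk-6u26-linux-i586-rpm.bin

./jdk-6u26-linux-i586-rpm.bin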

[Wait until the JDK is properly installed. Assume that the JDK installation path is /usr/java/jdk1.6.0_26]

ln -s /usr/java/jdk1.6.0_26 /usr/java/jdk

[Configure Java environment variables]

vi /etc/profile

[Add at the end of the file]

export JAVA_HOME=/usr/java/jdk
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH:$HOME/bin

[Save and exit to make the settings take effect]

source /etc/profile
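
To confirm that the JDK and the environment variables are in place, a quick optional check:

java -version

echo $JAVA_HOME

[Both should report the 1.6.0_26 version and the /usr/java/jdk path set above]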

3. Install OpenSSH with the following command:

yum install openssh

[Set up passwordless SSH]

ssh-keygen -t dsa -P '' -f /root/.ssh/id_dsa

cat /root/.ssh/id_dsa.pub >> /root/.ssh/authorized_keys

[Copy master's id_dsa.pub to slave and name it master_id_dsa.pub]

[On slave, run cat master_id_dsa.pub >> /root/.ssh/authorized_keys]
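
One way to carry out the two bracketed steps above, sketched with scp (assuming root logins between the machines still accept passwords at this point):

scp /root/.ssh/id_dsa.pub root@slave:/root/master_id_dsa.pub

ssh root@slave 'cat /root/master_id_dsa.pub >> /root/.ssh/authorized_keys'

[Afterwards, ssh slave from master should log in without a password; if it does not, check that /root/.ssh is mode 700 and authorized_keys is mode 600]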

4. Download and install Hadoop with the following commands:

wget 'http://labs.renren.com/apache-mirror/hadoop/core/hadoop-0.20.2/hadoop-0.20.2.tar.gz'

tar zxvf hadoop-0.20.2.tar.gz

cp -r hadoop-0.20.2 /opt/hadoop

[Configure hadoop environment variables]

vi /etc/profile

[Add at the end of the file]

export HADOOP_HOME=/opt/hadoop
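
Optionally, Hadoop's bin directory can also be appended to PATH (not required here, since the steps below use full paths), and the profile re-sourced so the new variable takes effect:

export PATH=$PATH:$HADOOP_HOME/bin

source /etc/profile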

[Configure hadoop]

cd /opt/hadoop/conf

vi hadoop-env.sh

[Add at the end]

export JAVA_HOME=/usr/java/jdk

vi core-site.xml

[Add under <configuration> node]

<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hadoop-${user.name}</value>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://master:9000</value>
</property>

vi mapred-site.xml

[Add under <configuration> node]

<property>
  <name>mapred.job.tracker</name>
  <value>master:9001</value>
</property>

vi masters

[Replace the existing content with the following line; do not append it]

master

vi slaves

[Replace the existing content with the following line; do not append it]

slave

5. Download and install HBase with the following commands:

wget 'http://labs.renren.com/apache-mirror/hbase/hbase-0.90.3/hbase-0.90.3.tar.gz'

tar zxvf hbase-0.90.3.tar.gz

cp -r hbase-0.90.3 /opt/hbase

[Edit the hbase configuration file]

cd /opt/hbase/conf

vi hbase-env.sh

[Add at the end of the file]

export JAVA_HOME=/usr/java/jdk
export HADOOP_CONF_DIR=/opt/hadoop/conf
export HBASE_HOME=/opt/hbase
export HBASE_LOG_DIR=/var/hadoop/hbase-logs
export HBASE_PID_DIR=/var/hadoop/hbase-pids
export HBASE_MANAGES_ZK=true
export HBASE_CLASSPATH=$HBASE_CLASSPATH:/opt/hadoop/conf
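
Since HBASE_LOG_DIR and HBASE_PID_DIR point under /var/hadoop, make sure those directories exist before starting HBase (a precaution; HBase may fail to start if it cannot write to them):

mkdir -p /var/hadoop/hbase-logs /var/hadoop/hbase-pids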

vi hbase-site.xml

[Add under <configuration> node]

<property>
  <name>hbase.rootdir</name>
  <value>hdfs://master:9000/hbase</value>
</property>
<property>
  <name>hbase.tmp.dir</name>
  <value>/home/hbase-${user.name}</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>slave</value>
</property>
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/home/hbase-data</value>
</property>

vi regionservers

[Replace the existing content with the following]

slave

[Replace HBase's bundled Hadoop jar with the jar from the installed Hadoop, so that HBase and the cluster run the same Hadoop version]

rm /opt/hbase/lib/hadoop-core-0.20*

cp /opt/hadoop/hadoop-0.20.2-core.jar ./
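
With everything in place, a minimal smoke test from master (assuming the paths used above): format HDFS, start Hadoop, then start HBase:

/opt/hadoop/bin/hadoop namenode -format

/opt/hadoop/bin/start-all.sh

/opt/hbase/bin/start-hbase.sh

[Running jps on each node should then show NameNode, SecondaryNameNode, JobTracker, and HMaster on master, and DataNode, TaskTracker, HQuorumPeer, and HRegionServer on slave]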
