Hadoop 1.2.1 installation tutorial in fully distributed mode


Assume that there are three machines whose IP addresses and corresponding host names are:

192.168.12.18 localhost.localdomain

192.168.2.215 rhel5530g

192.168.35.198 mddb01

1: Add the IP address and host name mappings to the /etc/hosts file of each machine, that is, add the above three lines to the hosts file. Note: in an actual installation, you often also need to change each machine's host name. The configured hosts file content is as follows:
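Based on the three mappings listed above, the /etc/hosts file on each node would contain:

```
192.168.12.18 localhost.localdomain
192.168.2.215 rhel5530g
192.168.35.198 mddb01
```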

 

2: Configure ssh password-less access:

Run the following command:

ssh-keygen -t dsa -P "" -f ~/.ssh/id_dsa

cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Then merge the public keys of all nodes into one authorized_keys file and copy it to every node. After configuration, the authorized_keys file should have identical content on every node.
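One way to distribute the merged file (a sketch; it assumes the combined authorized_keys has already been assembled on the current node, and uses the host names from step 1):

```shell
# Push the merged authorized_keys to each of the other nodes
scp ~/.ssh/authorized_keys rhel5530g:~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys mddb01:~/.ssh/authorized_keys
```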

 

Make sure passwordless SSH works before continuing. You can test it by running ssh followed by a host name, for example ssh rhel5530g. If it is configured correctly, you will be logged in to the rhel5530g machine without a password prompt.

3: Decompress the Hadoop archive.

4: Go to the conf directory and configure the Hadoop files. The following files need to be configured: masters, slaves, hadoop-env.sh, hdfs-site.xml, core-site.xml, and mapred-site.xml.

 

First configure the masters file:

 

Here we use 192.168.12.18 as the master node, that is, the namenode node.
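Accordingly, the masters file contains the master node's address:

```
192.168.12.18
```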

 

Then configure the slaves file:

 

Here, 192.168.2.215 and 192.168.35.198 are used as the datanode nodes.
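The slaves file therefore lists one datanode address per line:

```
192.168.2.215
192.168.35.198
```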

 

Configure the hadoop-env.sh file:

 

This file sets the Java installation path (JAVA_HOME).
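Only the JAVA_HOME line needs to be changed in hadoop-env.sh; the path below is a placeholder and should point at your actual JDK installation:

```shell
# Hypothetical JDK path; replace with the actual Java installation directory
export JAVA_HOME=/usr/java/jdk1.6.0_45
```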

 

Configure the hdfs-site.xml file:
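A minimal hdfs-site.xml for this cluster might look like the following; the replication factor of 2 is an assumption that matches the two datanodes:

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
```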

 

Configure core-site.xml:
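In a typical Hadoop 1.x setup, core-site.xml points fs.default.name at the namenode; the port 9000 and the temporary directory below are assumptions:

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.12.18:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/software/hadoop/tmp</value>
  </property>
</configuration>
```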

 

Configure mapred-site.xml:
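For Hadoop 1.x, mapred-site.xml names the jobtracker address, which runs on the master node (consistent with the port 50030 web interface used later in this tutorial); port 9001 is an assumption:

```xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>192.168.12.18:9001</value>
  </property>
</configuration>
```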

 

5. Copy the file to another machine:

scp -r /data/software/hadoop/ rhel5530g:/data/software/hadoop/

scp -r /data/software/hadoop/ mddb01:/data/software/hadoop/

Here we first configure the files on the localhost.localdomain machine, and then copy them to the other machines.

 

6: format namenode:

cd /data/software/hadoop/hadoop-1.2.1/bin/

./hadoop namenode -format

If the output contains "...... has been successfully formatted", the formatting succeeded.

 

7: Start Hadoop: enter the bin directory and run ./start-all.sh. After it completes, run jps on the master node; if the NameNode, SecondaryNameNode, and JobTracker processes are listed, the startup succeeded.

 

Then run jps on the slave nodes. If the DataNode and TaskTracker processes are listed, the startup succeeded.

 

You can also view it in a browser:

http://192.168.12.18:50070/dfshealth.jsp

http://192.168.12.18:50030/jobtracker.jsp

http://192.168.35.198:50060/tasktracker.jsp

http://192.168.2.215:50060/tasktracker.jsp

 

You may also need to disable the firewall:

service iptables stop

