Hadoop 2.6.0 Fully Distributed Deployment and Installation


First, prepare the software environment:

hadoop-2.6.0.tar.gz

CentOS-5.11-i386

jdk-6u24-linux-i586

Master:  hadoop02  192.168.20.129

Slave01: hadoop03  192.168.20.130

Slave02: hadoop04  192.168.20.131

Second, install the JDK, SSH, and Hadoop (on hadoop02 first)

For JDK

chmod u+x jdk-6u24-linux-i586.bin
./jdk-6u24-linux-i586.bin
mv jdk1.6.0_24 /home/jdk

Note: verify that the JDK installed successfully with:

# java -version

For SSH

ssh-keygen -t rsa
cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys

Note: verify that passwordless SSH login works with:

# ssh localhost

For Hadoop

tar -zxvf hadoop-2.6.0.tar.gz
mv hadoop-2.6.0 /home/hadoop

# vim /etc/profile

export JAVA_HOME=/home/jdk
export HADOOP_HOME=/home/hadoop
export PATH=.:$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH

# source /etc/profile
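A quick optional check that the new variables took effect (hadoop version is a standard command shipped in /home/hadoop/bin):

# echo $JAVA_HOME

# hadoop version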

# vim /etc/hosts

192.168.20.129 hadoop02
192.168.20.130 hadoop03
192.168.20.131 hadoop04
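Optionally, confirm that each hostname now resolves, using plain ping:

# ping -c 1 hadoop03

# ping -c 1 hadoop04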

Third, configure the Hadoop environment (on hadoop02 first). All of the following files live under /home/hadoop/etc/hadoop/.

1) Configuration file 1: hadoop-env.sh

export JAVA_HOME=/home/jdk

2) Configuration file 2: yarn-env.sh

export JAVA_HOME=/home/jdk

3) Configuration file 3: slaves (one worker hostname per line)

hadoop03
hadoop04

4) Configuration file 4: core-site.xml

<configuration>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/data/hadoop-${user.name}</value>
        </property>
        <property>
                <name>fs.default.name</name>
                <value>hdfs://hadoop02:9000</value>
        </property>
</configuration>
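Because hadoop.tmp.dir points at /data/hadoop-${user.name}, the /data directory must exist and be writable on every node. A minimal preparation step, assuming the daemons run as root as in this walkthrough:

# mkdir -p /data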

5) Configuration file 5: hdfs-site.xml

<configuration>
        <property>
                <name>dfs.http.address</name>
                <value>hadoop02:50070</value>
        </property>
        <property>
                <name>dfs.namenode.secondary.http-address</name>
                <value>hadoop02:50090</value>
        </property>
        <property>
                <name>dfs.replication</name>
                <value>1</value>
        </property>
</configuration>

6) Configuration file 6: mapred-site.xml (in a stock 2.6.0 tarball, copy mapred-site.xml.template to mapred-site.xml first)

<configuration>
        <property>
                <name>mapred.job.tracker</name>
                <value>hadoop02:9001</value>
        </property>
        <property>
                <name>mapred.map.tasks</name>
                <value>20</value>
        </property>
        <property>
                <name>mapred.reduce.tasks</name>
                <value>4</value>
        </property>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.address</name>
                <value>hadoop02:10020</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.webapp.address</name>
                <value>hadoop02:19888</value>
        </property>
</configuration>

7) Configuration file 7: yarn-site.xml

<configuration>
        <property>
                <name>yarn.resourcemanager.address</name>
                <value>hadoop02:8032</value>
        </property>
        <property>
                <name>yarn.resourcemanager.scheduler.address</name>
                <value>hadoop02:8030</value>
        </property>
        <property>
                <name>yarn.resourcemanager.webapp.address</name>
                <value>hadoop02:8088</value>
        </property>
        <property>
                <name>yarn.resourcemanager.resource-tracker.address</name>
                <value>hadoop02:8031</value>
        </property>
        <property>
                <name>yarn.resourcemanager.admin.address</name>
                <value>hadoop02:8033</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
                <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
</configuration>
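Before distributing the files, it can help to syntax-check the edited XML (assuming xmllint from the libxml2 package is available, as it is on most CentOS installs):

# cd /home/hadoop/etc/hadoop

# xmllint --noout core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml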

IV. Configure hadoop03 and hadoop04

Their configuration is identical to hadoop02, so copy everything over:

scp -r /root/.ssh/ root@192.168.20.130:/root/.ssh/
scp -r /root/.ssh/ root@192.168.20.131:/root/.ssh/
scp /etc/profile root@192.168.20.130:/etc/
scp /etc/profile root@192.168.20.131:/etc/
scp /etc/hosts root@192.168.20.130:/etc/
scp /etc/hosts root@192.168.20.131:/etc/
scp -r /home/jdk root@192.168.20.130:/home/
scp -r /home/jdk root@192.168.20.131:/home/
scp -r /home/hadoop root@192.168.20.130:/home/
scp -r /home/hadoop root@192.168.20.131:/home/
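After copying, it is worth confirming passwordless login and the copied JDK on each slave (the explicit java path is used so the check works even before the new /etc/profile is sourced there):

# ssh hadoop03 '/home/jdk/bin/java -version'

# ssh hadoop04 '/home/jdk/bin/java -version'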

V. Start the Hadoop cluster

1) Format the NameNode (only needed once, when the cluster is first set up):

/home/hadoop/bin/hdfs namenode -format

2) Start HDFS:

/home/hadoop/sbin/start-dfs.sh

At this point the processes running on the master are NameNode and SecondaryNameNode.

The processes running on slave1 and slave2 are DataNode (see the jps check below).
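You can list the running daemons with jps, which ships with the JDK; run it on the master and, over SSH, on each slave:

# jps

# ssh hadoop03 /home/jdk/bin/jps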

3) Start YARN:

/home/hadoop/sbin/start-yarn.sh

At this point the processes running on the master are NameNode, SecondaryNameNode, and ResourceManager.

The processes running on slave1 and slave2 are DataNode and NodeManager.

4) Check the startup results

To view cluster status:

hdfs dfsadmin -report

View the HDFS web UI:

http://192.168.20.129:50070
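The YARN ResourceManager web UI should likewise be reachable at the port configured in yarn-site.xml above:

http://192.168.20.129:8088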



VI. Summary: an error encountered during the experiment

15/05/11 13:41:55 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

The cause is that /home/hadoop/lib/native/libhadoop.so.1.0.0 in the 2.6.0 distribution is compiled for 64-bit systems, while this deployment runs on a 32-bit system. The warning does not affect cluster operation. You can confirm the library's architecture with:

# file libhadoop.so.1.0.0




This article is from the "Yang zi" blog; please be sure to keep this source: http://kupig.blog.51cto.com/8929318/1650233
