Hadoop Cluster Environment Installation and Deployment


1. Environment preparation:

Install the CentOS 6.5 operating system

Download the Hadoop 2.7 tarball

Download the JDK 1.8 tarball

2. Modify the /etc/hosts file and configure trust:

Add the following to the /etc/hosts file:

192.168.1.61 host61

192.168.1.62 host62

192.168.1.63 host63

Configure passwordless SSH trust between the servers
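One common way to set up this trust is a passwordless SSH key pair for the account that will run Hadoop. The sketch below is an assumption, not part of the original article: it generates a key locally and only prints the `ssh-copy-id` commands for review, since the remote hosts are reachable only inside the cluster.

```shell
# Hedged sketch: passwordless SSH between the nodes listed in /etc/hosts.
# The ssh-copy-id commands are printed rather than executed, because they
# need the remote hosts (host61-host63) to be up and reachable.
mkdir -p "$HOME/.ssh"
KEY="$HOME/.ssh/id_rsa"
[ -f "$KEY" ] || ssh-keygen -t rsa -N "" -f "$KEY" -q   # generate a key once
for host in host61 host62 host63; do
    echo "ssh-copy-id hadoop@$host"   # repeat on every node for full mesh trust
done
```

Run the printed commands on each node so every host can log in to every other host without a password.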

3. Add the user, unpack the files, and configure the environment variables:

useradd hadoop

passwd hadoop

tar -zxvf hadoop-2.7.1.tar.gz

mv hadoop-2.7.1 /usr/local

ln -s hadoop-2.7.1 hadoop

chown -R hadoop:hadoop hadoop-2.7.1

tar -zxvf jdk-8u60-linux-x64.tar.gz

mv jdk1.8.0_60 /usr/local

ln -s jdk1.8.0_60 jdk

chown -R root:root jdk1.8.0_60


echo 'export JAVA_HOME=/usr/local/jdk' >> /etc/profile

echo 'export PATH=/usr/local/jdk/bin:$PATH' > /etc/profile.d/java.sh
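The exports can be sanity-checked before relying on them. The sketch below is a hedged illustration: it sources a throwaway copy under /tmp (combining both lines written above) rather than touching /etc, so it can be run anywhere.

```shell
# Hedged sketch: verify the exports by sourcing a scratch copy of the
# profile fragment (same values as the lines written to /etc above).
cat > /tmp/java.sh <<'EOF'
export JAVA_HOME=/usr/local/jdk
export PATH=/usr/local/jdk/bin:$PATH
EOF
. /tmp/java.sh
echo "JAVA_HOME=$JAVA_HOME"   # prints: JAVA_HOME=/usr/local/jdk
```

On the real hosts, log out and back in (or `source /etc/profile`) so the variables take effect.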

4. Modify the Hadoop configuration files:

1) Modify the hadoop-env.sh file:

cd /usr/local/hadoop/etc/hadoop

sed -i 's%#export JAVA_HOME=${JAVA_HOME}%export JAVA_HOME=/usr/local/jdk%g' hadoop-env.sh
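The substitution can be rehearsed on a scratch file before touching the real hadoop-env.sh. This sketch assumes (as the sed pattern above does) that the stock file carries a commented-out JAVA_HOME line:

```shell
# Hedged sketch: exercise the sed substitution on a scratch copy first.
printf '#export JAVA_HOME=${JAVA_HOME}\n' > /tmp/hadoop-env.sh
sed -i 's%#export JAVA_HOME=${JAVA_HOME}%export JAVA_HOME=/usr/local/jdk%g' /tmp/hadoop-env.sh
cat /tmp/hadoop-env.sh   # prints: export JAVA_HOME=/usr/local/jdk
```

The `%` delimiter avoids having to escape the slashes in the replacement path.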

2) Modify core-site.xml and add the following at the end:

<configuration>

<property>

<name>fs.default.name</name>

<value>hdfs://host61:9000/</value>

</property>

<property>

<name>hadoop.tmp.dir</name>

<value>/home/hadoop/temp</value>

</property>

</configuration>

3) Modify the hdfs-site.xml file:

<configuration>

<property>

<name>dfs.replication</name>

<value>3</value>

</property>

</configuration>

4) Modify mapred-site.xml:

<configuration>

<property>

<name>mapred.job.tracker</name>

<value>host61:9001</value>

</property>

</configuration>
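Note that mapred.job.tracker is a Hadoop 1.x property. On Hadoop 2.7 with YARN (started below via start-yarn.sh), MapReduce jobs are normally directed to YARN with the following property in mapred-site.xml; this fragment is a supplement not present in the original article:

```xml
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
```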

5) Configure the masters file:

host61

6) Configure the slaves file:

host62

host63

5. Configure host62 and host63 in the same way
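One way to keep the three nodes identical is to push the finished configuration tree from host61. The sketch below is hypothetical: it prints the scp commands for review rather than executing them, since the other hosts are reachable only inside the cluster.

```shell
# Hedged sketch: print the commands that would sync the Hadoop
# configuration directory from host61 to the other nodes.
CONF_DIR=/usr/local/hadoop/etc/hadoop
for host in host62 host63; do
    echo "scp -r $CONF_DIR/ hadoop@$host:$CONF_DIR/"
done
```

Alternatively, repeat steps 3 and 4 by hand on host62 and host63 as the article suggests.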

6. Format the distributed file system:

/usr/local/hadoop/bin/hadoop namenode -format

7. Running Hadoop

1) /usr/local/hadoop/sbin/start-dfs.sh

2) /usr/local/hadoop/sbin/start-yarn.sh


8. Check:

[root@host61 sbin]# jps

4532 ResourceManager

4197 NameNode

4793 Jps

4364 SecondaryNameNode

[root@host62 ~]# jps

32052 DataNode

32133 NodeManager

32265 Jps

[root@host63 local]# jps

6802 NodeManager

6963 Jps

6717 DataNode


This article is from the "Webseven" blog; please keep this source: http://webseven.blog.51cto.com/4388012/1699359

