Installing hadoop-2.3.0-cdh5.1.2: the whole process


"To do a good job, one must first sharpen one's tools." Without further ado: download Hadoop from http://archive.cloudera.com/cdh5/cdh/5/ and pick the appropriate version. This article walks through the installation of the hadoop-2.3.0-cdh5.1.2 version. (The installation environment is three Linux virtual machines built in VMware 10.)

1. Hadoop is Apache's large open-source distributed offline-computing framework, written in Java. To build a Hadoop environment (whether pseudo-distributed or a multi-machine cluster), a JDK must be installed on every server first.

To keep a simple process simple, here is a brief description of the JDK installation and configuration (hadoop-2.3.0 is said to need JDK 1.7+, so that is what we install; version: jdk-7u67-linux-i586.tar.gz). Before installing the new JDK, remember to find any JDK that shipped with the Linux distribution and remove it first; if you are unsure how, look up the procedure for your distribution.
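For example, on an RPM-based distribution such as CentOS (an assumption; adapt for dpkg/apt on Debian or Ubuntu), the bundled OpenJDK can be found and removed like this:

rpm -qa | grep -i jdk    # list any pre-installed JDK packages
# rpm -e --nodeps java-1.7.0-openjdk-<version>    # remove whatever was listed (placeholder version)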


A. Extract to the /usr/java directory:

tar xvf jdk-7u67-linux-i586.tar.gz -C /usr/java

B. vi /etc/profile

export JAVA_HOME=/usr/java/jdk1.7.0_67
export CLASSPATH=/usr/java/jdk1.7.0_67/lib
export PATH=$JAVA_HOME/bin:$PATH

C. source /etc/profile    # makes the configuration take effect without restarting the server

D. java -version    # verify that the JDK is properly installed
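If the JDK is installed correctly, java -version should print something close to the following (exact build numbers may vary):

java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) Client VM (build 24.65-b04, mixed mode)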

2. We first plan three machines and the role each will play:

Host name  IP               Role
master     192.168.140.128  NameNode, ResourceManager
slave1     192.168.140.129  DataNode, NodeManager
slave2     192.168.140.130  DataNode, NodeManager

3. Modify the hostname (root privileges):

vi /etc/sysconfig/network

What to modify: HOSTNAME=master (the two slave hosts also need this change, each given its own name)

At the same time edit /etc/hosts (again on both slave hosts as well, with the same entries). Map master to its LAN IP rather than to 127.0.0.1: if the hostname resolves to the loopback address, the NameNode binds to 127.0.0.1 and the slaves cannot reach it:

192.168.140.128 master
192.168.140.129 slave1
192.168.140.130 slave2

Restart after modification: reboot
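A quick sanity check after the reboot (run on master; relies only on the hosts entries above):

hostname           # should print "master"
ping -c 1 slave1   # should resolve to 192.168.140.129
ping -c 1 slave2   # should resolve to 192.168.140.130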

4. Create the hadoop user (including on the two slaves):

useradd hadoop

passwd hadoop

5. Configure SSH password-free login on master:

su - hadoop    # switch to the hadoop user

ssh-keygen -t rsa    # press Enter at every prompt to generate the key pair

cd /home/hadoop/.ssh/
ls    # check that the two files id_rsa and id_rsa.pub were generated

6. Synchronize the SSH key to the two slaves. Log in to each slave as the hadoop user and create the .ssh directory:

mkdir /home/hadoop/.ssh

Then copy the public key from master to each slave and rename it there (shown for slave1; repeat for slave2):

scp id_rsa.pub hadoop@slave1:/home/hadoop/.ssh/    # run on master

mv id_rsa.pub authorized_keys    # run on the slave, inside /home/hadoop/.ssh/
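sshd is strict about permissions; if the passwordless login still prompts for a password, tighten them on each slave (a common fix, stated here as a precaution):

chmod 700 /home/hadoop/.ssh
chmod 600 /home/hadoop/.ssh/authorized_keys

Then verify from master; neither command should ask for a password:

ssh hadoop@slave1 hostname
ssh hadoop@slave2 hostname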

7. Create the Hadoop installation directory (as root):

mkdir -p /data/hadoop

8. Unzip the downloaded Hadoop installation package into the installation directory (as root):

tar xvf hadoop-2.3.0-cdh5.1.2.tar.gz -C /data/hadoop

9. Assign the installation directory permissions to the hadoop user (as root):

chown -R hadoop:hadoop /data/hadoop/

10. Configure the Hadoop installation path (as root):

vi /etc/profile    (add the following at the end)

export HADOOP_HOME=/data/hadoop
export PATH=$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH

source /etc/profile    # let the configuration take effect
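To confirm the new PATH works (assuming the tarball's bin directory sits directly under /data/hadoop, as the paths above imply):

hadoop version    # should report Hadoop 2.3.0-cdh5.1.2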
11. On master, go to /data/hadoop/etc/hadoop:

vi slaves

slave1
slave2

vi masters

master
12. Modify the following files, adding the corresponding content between the <configuration> tags:

A. vi core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/data/hadoop/tmp</value>
  </property>
</configuration>

B. vi hdfs-site.xml

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/data/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/data/hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>

C. vi yarn-site.xml

<configuration>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>

D. vi mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>

That completes the configuration files.
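Note: in many Hadoop 2.x distributions mapred-site.xml does not exist out of the box, only mapred-site.xml.template; if that is the case with this tarball, create it before editing:

cd /data/hadoop/etc/hadoop
cp mapred-site.xml.template mapred-site.xml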


13. Synchronize the Hadoop installation files from master to slave1 and slave2 (as the hadoop user; this assumes steps 7 and 9 were also run on both slaves, so /data/hadoop exists there and is writable by hadoop):

cd /data/hadoop

scp -r /data/hadoop/* hadoop@slave1:/data/hadoop/    # sync to slave1

scp -r /data/hadoop/* hadoop@slave2:/data/hadoop/    # sync to slave2
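A quick check that the copy landed, using the passwordless SSH set up in step 6:

ssh hadoop@slave1 ls /data/hadoop/bin/hadoop
ssh hadoop@slave2 ls /data/hadoop/bin/hadoop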

14. Finally, go to the /data/hadoop/bin directory and format HDFS:

./hadoop namenode -format    # formats the NameNode (this prepares HDFS; it does not start the daemons)

15. If there are no error messages, Hadoop is basically ready. The last few sections of the format log, arbitrarily excerpted:

15/01/13 18:08:10 INFO util.GSet: VM type = 32-bit
15/01/13 18:08:10 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
15/01/13 18:08:10 INFO util.GSet: capacity = 2^19 = 524288 entries
15/01/13 18:08:10 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
15/01/13 18:08:10 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
15/01/13 18:08:10 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
15/01/13 18:08:10 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/01/13 18:08:10 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
15/01/13 18:08:10 INFO util.GSet: Computing capacity for map NameNodeRetryCache
15/01/13 18:08:10 INFO util.GSet: VM type = 32-bit
15/01/13 18:08:10 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
15/01/13 18:08:10 INFO util.GSet: capacity = 2^16 = 65536 entries
15/01/13 18:08:10 INFO namenode.AclConfigFlag: ACLs enabled? false
Re-format filesystem in Storage Directory /data/hadoop/dfs/name ? (Y or N) Y
15/01/13 18:08:17 INFO namenode.FSImage: Allocated new BlockPoolId: BP-729401054-127.0.0.1-1421143697660
15/01/13 18:08:17 INFO common.Storage: Storage directory /data/hadoop/dfs/name has been successfully formatted.
15/01/13 18:08:18 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
15/01/13 18:08:18 INFO util.ExitUtil: Exiting with status 0
15/01/13 18:08:18 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/127.0.0.1
************************************************************/
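Formatting only prepares HDFS; to actually bring the cluster up, a minimal sketch (assuming this tarball ships the standard Hadoop 2.x sbin scripts) is to start the daemons from master and check them with jps:

cd /data/hadoop/sbin
./start-dfs.sh     # starts the NameNode on master and DataNodes on slave1/slave2
./start-yarn.sh    # starts the ResourceManager on master and NodeManagers on the slaves
jps                # on master, expect NameNode, SecondaryNameNode and ResourceManager

Given the yarn-site.xml above, the ResourceManager web UI should then answer at http://master:8088.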

Having been a programmer for a long time, my personality is dull and my words are dry; this simple description is only meant as a record. All advice is welcome.


Original article (permanent link): http://www.linuxidc.com/Linux/2015-01/111740.htm

