Hadoop 2.6 stand-alone installation

Source: Internet
Author: User

One. Installation environment
Hardware: virtual machine
Operating system: CentOS 6.4, 64-bit
IP: 10.51.121.10
Host name: datanode-4
Installation user: root

Two. Installing the JDK
Install JDK 1.6 or later; jdk1.6.0_45 is used here.
Download: http://www.oracle.com/technetwork/java/javase/downloads/index.html
1. Download jdk1.6.0_45-linux-x64.gz and unzip it to /usr/lib/jdk1.6.0_45.
2. Add the following configuration to /root/.bash_profile:

export JAVA_HOME=/usr/lib/jdk1.6.0_45
export PATH=$JAVA_HOME/bin:$PATH

3. Make the environment variables take effect: # source ~/.bash_profile
4. Verify the installation: # java -version

java version "1.6.0_45"
Java(TM) SE Runtime Environment (build 1.6.0_45-b06)
Java HotSpot(TM) 64-Bit Server VM (build 20.45-b01, mixed mode)

Three. Configure passwordless SSH login

$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Verify SSH: # ssh localhost
You should be able to log in without entering a password.
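If ssh localhost still prompts for a password, a common culprit on CentOS is file permissions: sshd ignores keys whose directory or file is group- or world-accessible. A minimal fix-up sketch (the SSH_DIR variable is introduced here only so the path can be overridden; it is not part of the original steps):

```shell
# Tighten permissions so sshd will accept the authorized_keys file.
# sshd silently ignores keys when ~/.ssh is not mode 700 or the file not 600.
SSH_DIR="${SSH_DIR:-$HOME/.ssh}"
mkdir -p "$SSH_DIR"
touch "$SSH_DIR/authorized_keys"
chmod 700 "$SSH_DIR"
chmod 600 "$SSH_DIR/authorized_keys"
echo "ssh dir mode: $(stat -c '%a' "$SSH_DIR")"
```

After this, re-run the ssh localhost check above.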

Four. Install Hadoop 2.6
1. Download Hadoop 2.6
Download: http://mirrors.hust.edu.cn/apache/hadoop/common/stable2/hadoop-2.6.0.tar.gz

2. Unzip and install
1) Copy hadoop-2.6.0.tar.gz to the /root/hadoop directory,
then unzip it with # tar -xzvf hadoop-2.6.0.tar.gz. The unzipped directory is /root/hadoop/hadoop-2.6.0.
2) Under the /root/hadoop/ directory, create the tmp, hdfs/name, and hdfs/data directories by executing the following commands:
# mkdir /root/hadoop/tmp
# mkdir /root/hadoop/hdfs
# mkdir /root/hadoop/hdfs/data
# mkdir /root/hadoop/hdfs/name
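The four mkdir calls above can also be collapsed into a single mkdir -p, which creates parent directories as needed. In this sketch HADOOP_DATA defaults to a directory under the current path rather than /root/hadoop, so it can be tried without root privileges (an assumption; substitute /root/hadoop to match the article):

```shell
# Create the tmp and HDFS name/data directories in one call.
# HADOOP_DATA is a stand-in for /root/hadoop from the article.
HADOOP_DATA="${HADOOP_DATA:-$PWD/hadoop}"
mkdir -p "$HADOOP_DATA/tmp" "$HADOOP_DATA/hdfs/name" "$HADOOP_DATA/hdfs/data"
ls -R "$HADOOP_DATA"
```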

3) Set environment variables: # vi ~/.bash_profile

# set hadoop path
export HADOOP_HOME=/root/hadoop/hadoop-2.6.0
export PATH=$PATH:$HADOOP_HOME/bin

4) Make the environment variables take effect: # source ~/.bash_profile
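To confirm the variables actually took effect, a quick sanity check can verify that $HADOOP_HOME/bin ended up on the PATH. HADOOP_HOME is defaulted below only so the sketch runs standalone; it mirrors the value set in .bash_profile above:

```shell
# Verify HADOOP_HOME is set and its bin directory is on PATH.
HADOOP_HOME="${HADOOP_HOME:-/root/hadoop/hadoop-2.6.0}"
export PATH="$PATH:$HADOOP_HOME/bin"
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "PATH ok: $HADOOP_HOME/bin" ;;
  *) echo "PATH missing $HADOOP_HOME/bin"; exit 1 ;;
esac
```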

3. Hadoop configuration
Enter the $HADOOP_HOME/etc/hadoop directory and configure hadoop-env.sh and the other files. The configuration files involved are:
hadoop-2.6.0/etc/hadoop/hadoop-env.sh
hadoop-2.6.0/etc/hadoop/yarn-env.sh
hadoop-2.6.0/etc/hadoop/core-site.xml
hadoop-2.6.0/etc/hadoop/hdfs-site.xml
hadoop-2.6.0/etc/hadoop/mapred-site.xml
hadoop-2.6.0/etc/hadoop/yarn-site.xml

1) Configure hadoop-env.sh

# The java implementation to use.
# export JAVA_HOME=${JAVA_HOME}
export JAVA_HOME=/usr/lib/jdk1.6.0_45

2) Configure yarn-env.sh

# export JAVA_HOME=/home/y/libexec/jdk1.6.0/
export JAVA_HOME=/usr/lib/jdk1.6.0_45

3) Configure core-site.xml
Add the following configuration:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
    <description>HDFS URI, in the form hdfs://namenode-host:port</description>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/root/hadoop/tmp</value>
    <description>Local Hadoop temp folder on the NameNode</description>
  </property>
</configuration>

4) Configure hdfs-site.xml
Add the following configuration

<configuration>
  <!-- hdfs-site.xml -->
  <property>
    <name>dfs.name.dir</name>
    <value>/root/hadoop/hdfs/name</value>
    <description>Where the NameNode stores the HDFS namespace metadata</description>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/root/hadoop/hdfs/data</value>
    <description>Physical storage location of the DataNode's data blocks</description>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Replica count; the default is 3, and it should not exceed the number of DataNode machines</description>
  </property>
</configuration>

5) Configure mapred-site.xml
Add the following configuration:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

6) Configure yarn-site.xml
Add the following configuration:

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>${yarn.resourcemanager.hostname}:8099</value>
  </property>
</configuration>

4. Start Hadoop
1) Format the NameNode

$ bin/hdfs namenode -format

2) Start the Namenode and DataNode daemons

$ sbin/start-dfs.sh

3) Start the ResourceManager and NodeManager daemons

$ sbin/start-yarn.sh

5. Verify the startup
1) Execute the jps command; if the following processes are present, Hadoop has started normally:

# jps
54679 NameNode
54774 DataNode
15741 Jps
9664 Master
55214 NodeManager
55118 ResourceManager
54965 SecondaryNameNode
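Instead of eyeballing the jps output, the check can be scripted: grep the listing for each expected daemon. JPS_OUTPUT below is a captured sample standing in for the real command; on a live node replace it with $(jps). This is an illustrative sketch, not part of the original article:

```shell
# Check a jps-style listing for the daemons a single-node setup needs.
# On a real node, capture live output instead: JPS_OUTPUT="$(jps)"
JPS_OUTPUT="${JPS_OUTPUT:-54679 NameNode
54774 DataNode
54965 SecondaryNameNode
55118 ResourceManager
55214 NodeManager}"
missing=""
for daemon in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  # -w matches whole words, so "NameNode" does not match "SecondaryNameNode"
  printf '%s\n' "$JPS_OUTPUT" | grep -qw "$daemon" || missing="$missing $daemon"
done
if [ -z "$missing" ]; then
  echo "all expected daemons present"
else
  echo "missing daemons:$missing"
fi
```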

2) In a browser, open http://datanode-4:8099/ to see YARN's ResourceManager interface. Note: the default port is 8088; here yarn.resourcemanager.webapp.address was set to ${yarn.resourcemanager.hostname}:8099.
