Configure Hadoop 2.3.0 on Ubuntu


Environment: Ubuntu 12.04

Hadoop version: 2.3.0

I. Download hadoop-2.3.0.tar.gz and decompress it.
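A minimal way to fetch and unpack the release into the path used by the configuration below is shown here; the mirror URL and the /usr/local install location are assumptions, so substitute your own mirror or directory if they differ:

wget https://archive.apache.org/dist/hadoop/common/hadoop-2.3.0/hadoop-2.3.0.tar.gz
sudo tar -xzf hadoop-2.3.0.tar.gz -C /usr/local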

II. Modify the configuration files. They are located under the ${hadoop-2.3.0}/etc/hadoop path.

1. core-site.xml



<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop-2.3.0/tmp/hadoop-${user.name}</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:8020</value>
  </property>
</configuration>


2. hdfs-site.xml



<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/hadoop-2.3.0/tmp/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/local/hadoop-2.3.0/tmp/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>


3. mapred-site.xml



<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>


4. yarn-site.xml



<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>localhost</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>

III. Command startup

The Hadoop scripts are in the ${hadoop-2.3.0}/bin and ${hadoop-2.3.0}/sbin directories, so the commands can be run directly from those paths.

You can also configure environment variables so that the commands can be run from anywhere.

Edit /etc/profile (with sudo) and add the following lines:


export HADOOP_HOME=/usr/local/hadoop-2.3.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
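After saving the file, load it into the current shell and check that the Hadoop binaries resolve; for example:

source /etc/profile
hadoop version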

Initialize the Hadoop file system (format the NameNode):

hdfs namenode -format
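If formatting succeeds, the NameNode metadata directory configured in hdfs-site.xml should now be populated; a quick sanity check (the exact file names vary slightly between versions) is:

ls /usr/local/hadoop-2.3.0/tmp/dfs/name/current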

IV. Start and stop Hadoop

1. Start Script 1:

sujx@ubuntu:~$ hadoop-daemon.sh start namenode
starting namenode, logging to /opt/hadoop-2.2.0/logs/hadoop-sujx-namenode-ubuntu.out
sujx@ubuntu:~$ hadoop-daemon.sh start datanode
starting datanode, logging to /opt/hadoop-2.2.0/logs/hadoop-sujx-datanode-ubuntu.out
sujx@ubuntu:~$ hadoop-daemon.sh start secondarynamenode
starting secondarynamenode, logging to /opt/hadoop-2.2.0/logs/hadoop-sujx-secondarynamenode-ubuntu.out
sujx@ubuntu:~$ jps
9310 SecondaryNameNode
9345 Jps
9140 NameNode
9221 DataNode
sujx@ubuntu:~$ yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /opt/hadoop-2.2.0/logs/yarn-sujx-resourcemanager-ubuntu.out
sujx@ubuntu:~$ yarn-daemon.sh start nodemanager
starting nodemanager, logging to /opt/hadoop-2.2.0/logs/yarn-sujx-nodemanager-ubuntu.out
sujx@ubuntu:~$ jps
9310 SecondaryNameNode
9651 NodeManager
9413 ResourceManager
9140 NameNode
9709 Jps
9221 DataNode
sujx@ubuntu:~$

2. Start Script 2:

sujx@ubuntu:~$ start-dfs.sh
Starting namenodes on [hd2-single]
hd2-single: starting namenode, logging to /opt/hadoop-2.2.0/logs/hadoop-sujx-namenode-ubuntu.out
hd2-single: starting datanode, logging to /opt/hadoop-2.2.0/logs/hadoop-sujx-datanode-ubuntu.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/hadoop-2.2.0/logs/hadoop-sujx-secondarynamenode-ubuntu.out
sujx@ubuntu:~$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop-2.2.0/logs/yarn-sujx-resourcemanager-ubuntu.out
hd2-single: starting nodemanager, logging to /opt/hadoop-2.2.0/logs/yarn-sujx-nodemanager-ubuntu.out
sujx@ubuntu:~$ jps
11414 SecondaryNameNode
10923 NameNode
11141 DataNode
12038 Jps
11586 ResourceManager
11811 NodeManager
sujx@ubuntu:~$

3. Start Script 3:


sujx@ubuntu:~$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [hd2-single]
hd2-single: starting namenode, logging to /opt/hadoop-2.2.0/logs/hadoop-sujx-namenode-ubuntu.out
hd2-single: starting datanode, logging to /opt/hadoop-2.2.0/logs/hadoop-sujx-datanode-ubuntu.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/hadoop-2.2.0/logs/hadoop-sujx-secondarynamenode-ubuntu.out
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop-2.2.0/logs/yarn-sujx-resourcemanager-ubuntu.out
hd2-single: starting nodemanager, logging to /opt/hadoop-2.2.0/logs/yarn-sujx-nodemanager-ubuntu.out
sujx@ubuntu:~$ jps
14156 NodeManager
14445 Jps
13267 NameNode
13759 SecondaryNameNode
13485 DataNode
13927 ResourceManager
sujx@ubuntu:~$

In fact, these three start methods produce the same result; the scripts call one another internally. The corresponding stop scripts are just as simple:
1. End Script 1:
sujx@ubuntu:~$ yarn-daemon.sh stop nodemanager
sujx@ubuntu:~$ yarn-daemon.sh stop resourcemanager
sujx@ubuntu:~$ hadoop-daemon.sh stop secondarynamenode
sujx@ubuntu:~$ hadoop-daemon.sh stop datanode
sujx@ubuntu:~$ hadoop-daemon.sh stop namenode
2. End Script 2:
sujx@ubuntu:~$ stop-yarn.sh
sujx@ubuntu:~$ stop-dfs.sh
3. End Script 3:
sujx@ubuntu:~$ stop-all.sh
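After the stop scripts finish, jps should list only the Jps process itself; any remaining NameNode, DataNode, SecondaryNameNode, ResourceManager, or NodeManager entry indicates a daemon that did not shut down.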

View the status of the Hadoop NameNode and HDFS: http://localhost:50070/

View job running status: http://localhost:8088/

HDFS address and port accessed by clients: 8020

YARN (ResourceManager) address and port accessed by clients: 8032
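As a quick smoke test of the client-facing HDFS port, create a directory and list the root of the file system; /user/sujx is only an example path:

hdfs dfs -mkdir -p /user/sujx
hdfs dfs -ls /
hdfs dfs -ls hdfs://localhost:8020/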

At this point, the standalone pseudo-distributed deployment is complete.
