Build Hadoop 2.x (2.6.2) on Ubuntu system


The official Chinese translation of the Hadoop Quick Start tutorial covers a very old release. The directory layout has changed in newer versions of Hadoop, so some configuration files have moved; for example, the conf directory mentioned in the Quick Start no longer exists in new releases. Many tutorials on the web likewise target the old versions. This tutorial covers the build process for Hadoop 2.x (2.6.2) on an Ubuntu system. If you want to understand each step in depth, you will need to consult other materials.

Quick Start for the English version: http://hadoop.apache.org/docs/r2.6.2/hadoop-project-dist/hadoop-common/SingleCluster.html

Prerequisites

(1) Ubuntu operating system (Ubuntu 14.04 is used in this tutorial)

(2) Installing the JDK

$ sudo apt-get install openjdk-7-jdk
$ java -version
java version "1.7.0_25"
OpenJDK Runtime Environment (IcedTea 2.3.12) (7u25-2.3.12-4ubuntu3)
OpenJDK 64-Bit Server VM (build 23.7-b01, mixed mode)
$ cd /usr/lib/jvm
$ ln -s java-7-openjdk-amd64 jdk
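The symlink gives the JDK a stable, version-independent path (/usr/lib/jvm/jdk) that the configuration below relies on. An optional sanity check is to confirm the link points at the real JDK directory:

$ ls -l /usr/lib/jvm/jdk
# expect something like: /usr/lib/jvm/jdk -> java-7-openjdk-amd64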

(3) Installing SSH

$ sudo apt-get install openssh-server
Add a Hadoop user group and user (optional)
$ sudo addgroup hadoop
$ sudo adduser --ingroup hadoop hduser
$ sudo adduser hduser sudo
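These three commands create a hadoop group, add a new hduser account inside that group, and give hduser sudo rights. An optional check that the account ended up in the right groups:

$ groups hduser
hduser : hadoop sudo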

After creating the user, log out and log back in to Ubuntu as hduser.

Setting up an SSH key
$ ssh-keygen -t rsa -P ''
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ ssh localhost
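If ssh localhost still asks for a password, the usual cause is permissions: OpenSSH ignores keys when ~/.ssh or authorized_keys is group- or world-writable. Tightening them is safe in any case:

$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys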
Download Hadoop 2.6.2
$ cd ~
$ wget https://archive.apache.org/dist/hadoop/common/hadoop-2.6.2/hadoop-2.6.2.tar.gz
$ sudo tar vxzf hadoop-2.6.2.tar.gz -C /home/hduser
$ cd /home/hduser
$ sudo mv hadoop-2.6.2 hadoop
$ sudo chown -R hduser:hadoop hadoop
Configuring the Hadoop environment variables

(1) Modifying system environment variables

$ cd ~
$ vi .bashrc
# paste the following at the end of .bashrc
export JAVA_HOME=/usr/lib/jvm/jdk/
export HADOOP_INSTALL=/home/hduser/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
#end of paste
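The new variables only apply to fresh shells. To load them into the current session right away and confirm they took effect:

$ source ~/.bashrc
$ echo $HADOOP_INSTALL
/home/hduser/hadoop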

(2) Modifying the Hadoop environment variables

$ cd /home/hduser/hadoop/etc/hadoop
$ vi hadoop-env.sh
# The only change that is required is JAVA_HOME; everything else can be left alone
export JAVA_HOME=/usr/lib/jvm/jdk/
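If you would rather script this edit than open vi, a one-line sed substitution does the same thing (an alternative, assuming the default hadoop-env.sh layout where the export appears at the start of a line):

$ sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/lib/jvm/jdk/|' /home/hduser/hadoop/etc/hadoop/hadoop-env.sh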

After the configuration is complete, log in to Ubuntu again (close the terminal, then open a new one).

Enter the command below to check whether the installation succeeded:

$ hadoop version
Hadoop 2.6.2
.........
Configure Hadoop

(1) core-site.xml

$ cd /home/hduser/hadoop/etc/hadoop
$ vi core-site.xml
# Copy the code below between <configuration> and </configuration>
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>
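Note that fs.default.name is deprecated in Hadoop 2.x in favor of fs.defaultFS, although the old key still works. You can confirm the value Hadoop actually resolves with:

$ hdfs getconf -confKey fs.defaultFS
hdfs://localhost:9000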

(2) yarn-site.xml

$ vi yarn-site.xml
# Copy the code below between <configuration> and </configuration>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

(3) mapred-site.xml

$ mv mapred-site.xml.template mapred-site.xml
$ vi mapred-site.xml
# Copy the code below between <configuration> and </configuration>
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

(4) hdfs-site.xml

$ cd ~
$ mkdir -p mydata/hdfs/namenode
$ mkdir -p mydata/hdfs/datanode
$ cd /home/hduser/hadoop/etc/hadoop
$ vi hdfs-site.xml
# Copy the code below between <configuration> and </configuration>
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/hduser/mydata/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/hduser/mydata/hdfs/datanode</value>
</property>
To format a new distributed file system:
$ cd ~
$ hdfs namenode -format
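A successful format writes fresh metadata into the namenode directory configured in hdfs-site.xml. An optional way to verify:

$ ls ~/mydata/hdfs/namenode/current
# expect files such as VERSION and fsimage_*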
Start the Hadoop services
$ start-dfs.sh
....
$ start-yarn.sh
....
$ jps
# If the configuration succeeded, you will see output similar to the following
2583 DataNode
2970 ResourceManager
3461 Jps
3177 NodeManager
2361 NameNode
2840 SecondaryNameNode
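Besides jps, the daemons expose web UIs on the Hadoop 2.6 default ports: the NameNode at http://localhost:50070 and the ResourceManager at http://localhost:8088. An optional check from the terminal (assuming curl is installed):

$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50070
200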
Running the Hadoop sample
hduser@ubuntu:~$ cd /home/hduser/hadoop
hduser@ubuntu:/home/hduser/hadoop$ hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.2.jar pi 2 5
# Then you will see output similar to the following
Number of Maps  = 2
Samples per Map = 5
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Wrote input for Map #0
Wrote input for Map #1
Starting Job
INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
INFO input.FileInputFormat: Total input paths to process : 2
INFO mapreduce.JobSubmitter: number of splits:2
INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
...
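As a further smoke test beyond the pi example, you can exercise HDFS directly by copying the bundled configuration files in and listing them back; these hdfs dfs subcommands are standard in 2.x:

$ hdfs dfs -mkdir -p /user/hduser/input
$ hdfs dfs -put /home/hduser/hadoop/etc/hadoop/*.xml /user/hduser/input
$ hdfs dfs -ls /user/hduser/input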

Reference article: http://codesfusion.blogspot.sg/2013/10/setup-hadoop-2x-220-on-ubuntu.html

Script code: https://github.com/ericduq/hadoop-scripts/blob/master/make-single-node.sh
