Install and configure a Hadoop 2.2.0 cluster on Ubuntu (64-bit)


Following the previous article on compiling Hadoop 2.2.0, this article describes in detail how to install and configure a Hadoop cluster on Ubuntu 12.04 64-bit server.

To emphasize again: the Hadoop 2.2 package downloaded from the Apache official website contains executables built for 32-bit Linux. If you need to deploy on a 64-bit system, you must download the source code separately and compile it yourself. For detailed compilation steps, see the article on compiling Hadoop 2.2.0.

For convenience, we will build a small cluster with three hosts.

OS of the three hosts: Ubuntu 12.04 64-bit server

The division of labor for the three machines is as follows:

Master: NameNode/ResourceManager

Slave1: DataNode/NodeManager

Slave2: DataNode/NodeManager

Assume that the IP addresses of the three VMs are as follows; they will be used later.

Master: 129.1.77.6

Slave1: 129.1.77.5

Slave2: 129.1.77.7

The following describes how to install and configure Hadoop.

1. First, create the same user on the three machines (this is the basic requirement of Hadoop)

To create a user, follow these steps:

(1) sudo addgroup hadoop

(2) sudo adduser --ingroup hadoop haduser

Edit the /etc/sudoers file and add the line haduser ALL=(ALL) ALL below the root ALL=(ALL) ALL line. If this line is not added, haduser cannot perform sudo operations.
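For example, a minimal sketch of this edit (assuming the stock sudoers layout on Ubuntu 12.04; always edit sudoers through visudo so that a syntax error cannot lock you out of sudo):

sudo visudo
# then add the following line below the existing root entry:
haduser ALL=(ALL) ALL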

2. Prerequisites:

1) Ensure that the JDK has been installed on all three machines and that the environment variables are correctly configured (a sketch of setting them is shown after this list);

2) OpenSSH is installed on all three hosts, and SSH is configured for passwordless login.
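As an illustration only (the path /usr/jdk1.7.0_45 matches the JDK used later in this article; adjust it to your own installation), the JDK environment variables can be appended to haduser's ~/.bashrc on each machine:

# add to /home/haduser/.bashrc, then run: source ~/.bashrc
export JAVA_HOME=/usr/jdk1.7.0_45
export PATH=$JAVA_HOME/bin:$PATH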

3. Install SSH

3.1 The ssh client is usually installed by default. If it is not, or the version is old, you can reinstall it:

sudo apt-get install ssh

3.2 Set up local login without a password

After the installation is complete, a hidden .ssh folder is generated in the ~ directory (the current user's home directory, i.e. /home/haduser); hidden files can be listed with ls -a. If this folder does not exist, create it yourself (mkdir .ssh).

The procedure is as follows:

1. Enter the .ssh folder

2. Run ssh-keygen -t rsa and press Enter at each prompt (this generates the key pair)

3. Append id_rsa.pub to the authorized keys (cat id_rsa.pub >> authorized_keys)

4. Restart the SSH service for the change to take effect.

Note: The preceding operations must be performed on each machine.
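A minimal sketch of the whole procedure, run as haduser on the master (slave1 and slave2 are the host names defined in step 4 below; ssh-copy-id ships with OpenSSH on Ubuntu, but the public key can equally be appended to each slave's authorized_keys by hand with scp and cat):

cd ~/.ssh
ssh-keygen -t rsa                      # press Enter at each prompt
cat id_rsa.pub >> authorized_keys      # allow passwordless login to the local machine
ssh-copy-id haduser@slave1             # push the master's public key to the slaves
ssh-copy-id haduser@slave2
sudo service ssh restart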

3.4 Now you can log on to the slave nodes through SSH without a password. Check whether you can log in to slave1 and slave2 from the master host without a password. Run the following commands:

$ ssh slave1

$ ssh slave2

4. Set /etc/hosts and /etc/hostname on the three hosts respectively.

The hosts file defines the mapping between host names and IP addresses:

127.0.0.1 localhost

129.1.77.6 master

129.1.77.5 slave1

129.1.77.7 slave2

The hostname file defines the host name of the Ubuntu machine, for example master (or slave1, slave2).
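As a sketch (using the IP addresses assumed above), the hosts entries can be appended and the host name set on each machine as follows; log out and back in, or reboot, for the new host name to be picked up everywhere:

# run on every node
sudo sh -c 'echo "129.1.77.6 master" >> /etc/hosts'
sudo sh -c 'echo "129.1.77.5 slave1" >> /etc/hosts'
sudo sh -c 'echo "129.1.77.7 slave2" >> /etc/hosts'
# run on the master (use slave1 / slave2 on the corresponding slave)
sudo sh -c 'echo "master" > /etc/hostname'
sudo hostname master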

5. After completing the preceding steps, you can install Hadoop.

Perform the following steps while logged on as haduser.

Since the configuration on each machine in the Hadoop cluster is basically the same, we first configure and deploy everything on the NameNode and then copy it to the other nodes, so the installation is effectively performed once and replicated to every machine. However, pay attention to whether each system in the cluster is 64-bit or 32-bit.

5.1 Download and decompress the hadoop-2.2.0.tar.gz file compiled on the 64-bit machine, and copy the resulting hadoop-2.2.0 directory to the /home/haduser/hadoop path.
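For example, a minimal sketch of unpacking on the master and later pushing the identical tree to the slaves (assuming the tarball from the 64-bit build sits in haduser's home directory; the scp step is done after the configuration below is finished):

tar -xzf hadoop-2.2.0.tar.gz
mv hadoop-2.2.0 /home/haduser/hadoop
# once the configuration files below have been edited, copy the same directory to the slaves
scp -r /home/haduser/hadoop haduser@slave1:/home/haduser/
scp -r /home/haduser/hadoop haduser@slave2:/home/haduser/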

5.2 HDFS installation and configuration

1) Configure /home/haduser/hadoop/etc/hadoop/hadoop-env.sh

Replace export JAVA_HOME=${JAVA_HOME} with the following:

export JAVA_HOME=/usr/jdk1.7.0_45 (use your own JDK path)

Similarly, configure yarn-env.sh and add to it:

export JAVA_HOME=/usr/jdk1.7.0_45 (use your own JDK path)

2) Configure etc/hadoop/core-site.xml file content:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000/</value>
    <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The URI's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The URI's authority is used to determine the host, port, etc. for a filesystem.</description>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp/hadoop-${user.name}</value>
    <description></description>
  </property>
</configuration>

3) Configure etc/hadoop/hdfs-site.xml file content:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/haduser/hadoop/storage/hadoop2/hdfs/name</value>
    <description>Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently.</description>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/haduser/hadoop/storage/hadoop2/hdfs/data1,/home/haduser/hadoop/storage/hadoop2/hdfs/data2,/home/haduser/hadoop/storage/hadoop2/hdfs/data3</value>
    <description>Comma-separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/haduser/hadoop/storage/hadoop2/hdfs/tmp/hadoop-${user.name}</value>
    <description>A base for other temporary directories.</description>
  </property>
</configuration>
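The NameNode and DataNode directories referenced above must be writable by haduser; HDFS creates some of them when the NameNode is formatted and the daemons start, but creating them up front on every node avoids permission problems. A sketch, using the exact paths from this configuration:

mkdir -p /home/haduser/hadoop/storage/hadoop2/hdfs/name
mkdir -p /home/haduser/hadoop/storage/hadoop2/hdfs/data1
mkdir -p /home/haduser/hadoop/storage/hadoop2/hdfs/data2
mkdir -p /home/haduser/hadoop/storage/hadoop2/hdfs/data3
mkdir -p /home/haduser/hadoop/storage/hadoop2/hdfs/tmp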

5.3 YARN installation and configuration

Configure the content of the etc/hadoop/yarn-site.xml file:

<?xml version="1.0"?>

<configuration>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
    <description>host is the hostname of the resource manager and
    port is the port on which the NodeManagers contact the Resource Manager.
    </description>
  </property>

  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
    <description>host is the hostname of the resourcemanager and port is the port
    on which the Applications in the cluster talk to the Resource Manager.
    </description>
  </property>

  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
    <description>In case you do not want to use the default scheduler</description>
  </property>

  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
    <description>the host is the hostname of the ResourceManager and the port is the port on
    which the clients can talk to the Resource Manager.</description>
  </property>

  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>${hadoop.tmp.dir}/nodemanager/local</value>
    <description>the local directories used by the nodemanager</description>
  </property>

  <property>
    <name>yarn.nodemanager.address</name>
    <value>0.0.0.0:8034</value>
    <description>the nodemanagers bind to this port</description>
  </property>

  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>10240</value>
    <description>the amount of memory on the NodeManager in MB</description>
  </property>

  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>${hadoop.tmp.dir}/nodemanager/remote</value>
    <description>directory on hdfs where the application logs are moved to</description>
  </property>

  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>${hadoop.tmp.dir}/nodemanager/logs</value>
    <description>the directories used by Nodemanagers as log directories</description>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    <description>shuffle service that needs to be set for Map Reduce to run</description>
  </property>
</configuration>

 

