Hadoop 2.2.0 Cluster Installation


This article explains how to install Hadoop 2.2.0 on a Linux cluster and covers some important settings.


1. Network Settings

Disable Firewall

service iptables stop

Disable IPv6

Open /etc/modprobe.d/dist.conf and add:

alias net-pf-10 off

alias ipv6 off

After the system is restarted, run the following command:

lsmod | grep ipv6

Check whether the ipv6 module is no longer loaded
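A small sketch of this section as a whole, assuming a RHEL/CentOS-style system where the iptables service and chkconfig are available (the chkconfig step is an addition for keeping the firewall off across reboots, not from this article):

# stop the firewall now and keep it disabled across reboots
service iptables stop
chkconfig iptables off

# after adding the alias lines and rebooting, confirm ipv6 is gone
lsmod | grep ipv6    # no output means the module is not loaded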

2. Installation and Configuration

2.1 Preparations before installation

Before installing Hadoop, install SSH and configure key-based password-free login between the nodes, install JDK 1.7, and configure JAVA_HOME. For details on these steps, see other documents (a short sketch of the SSH setup follows at the end of this subsection); here only the JAVA_HOME and HADOOP_HOME configuration in /etc/profile is given as a reference:

JAVA_HOME=/usr/java/jdk1.7.0_51
HADOOP_HOME=/usr/local/hadoop
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/sbin
export JAVA_HOME HADOOP_HOME PATH

Note: For convenience, $HADOOP_HOME/sbin is added to the PATH. To avoid interference from the Windows .cmd scripts of the same name when typing commands, you can delete them with rm -f $HADOOP_HOME/bin/*.cmd; rm -f $HADOOP_HOME/sbin/*.cmd
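As a minimal sketch of the key-based password-free login mentioned at the start of this subsection (the hadoop user and the node name slave1 are assumptions for illustration, not names from this article):

# generate an RSA key pair without a passphrase (run once, e.g. on the namenode host)
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

# copy the public key to every other node in the cluster
ssh-copy-id hadoop@slave1

# verify: should print the remote hostname without prompting for a password
ssh hadoop@slave1 hostname

After editing /etc/profile, run source /etc/profile (or log in again) so that the new JAVA_HOME, HADOOP_HOME, and PATH take effect in the current shell.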

2.2 Configure necessary environment variables

The installation in this article follows this convention (or preference): the installation program lives in /usr/local (or /opt), while generated files and related data files are placed centrally in /var/hadoop. Decompress the release package into /usr/local (or /opt), then edit ${HADOOP_HOME}/etc/hadoop/hadoop-env.sh and ${HADOOP_HOME}/etc/hadoop/yarn-env.sh, and find and modify or add the following environment variables in both files:

export JAVA_HOME=/your/java/home
export HADOOP_LOG_DIR=/var/hadoop/logs
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_PREFIX}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib"

Not all of the preceding environment variables are strictly required. For the first one, the original file reads export JAVA_HOME=${JAVA_HOME}; however, when the cluster is started, a "JAVA_HOME is not set and could not be found" error may be reported. From the comment on this item we understand that, in a cluster environment, even if JAVA_HOME is correctly configured on each node, it is better to set it explicitly again here. The second item specifies the log directory; the default location is the logs folder under the installation directory, but following the convention above, this installation places the log files under /var/hadoop/logs. Add the third and fourth items as needed; if the problem described in section 4.2 appears, these two items are required!
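Because HADOOP_LOG_DIR points outside the installation directory, that directory has to exist and be writable by the account that starts the daemons before the first start. A small sketch (the hadoop user and group are assumptions):

# create the log directory declared above and hand it over to the account that runs Hadoop
mkdir -p /var/hadoop/logs
chown -R hadoop:hadoop /var/hadoop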

2.3 Configure ${HADOOP_HOME}/etc/hadoop/core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://YOUR-NAMENODE:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/var/hadoop</value>
    </property>
</configuration>

The default configuration values for core-site.xml can be found at: http://hadoop.apache.org/docs/r2.2.0/hadoop-project-dist/hadoop-common/core-default.xml

For a new cluster, the only item that must be modified is fs.defaultFS, which specifies the entry point of the file system. In effect, it tells all datanodes which namenode to talk to, so that communication between the namenode and the various datanodes can be established.

In addition, as agreed above, we set hadoop.tmp.dir to /var/hadoop. Looking at core-default.xml, we can see that almost every directory-related configuration item defaults to a sub-folder created under ${hadoop.tmp.dir}, so this installation simply changes hadoop.tmp.dir from its original default value of /tmp/hadoop-${user.name} to /var/hadoop, putting all files generated and used by Hadoop under /var/hadoop and avoiding mixing them with other files in the /tmp directory.
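Two quick checks are worth doing after editing core-site.xml: the hostname used in fs.defaultFS must resolve on every node, and Hadoop should report the values you expect. A sketch (the IP address is an assumption; hdfs lives under $HADOOP_HOME/bin, which this article does not add to the PATH, hence the full path):

# make sure the namenode hostname resolves on this node, e.g. via /etc/hosts
echo "192.168.1.10  YOUR-NAMENODE" >> /etc/hosts
ping -c 1 YOUR-NAMENODE

# ask Hadoop which values it actually picked up from core-site.xml
$HADOOP_HOME/bin/hdfs getconf -confKey fs.defaultFS
$HADOOP_HOME/bin/hdfs getconf -confKey hadoop.tmp.dir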


