Hadoop 2.5.0 single-node and multi-node installation tutorial


1. Install the JDK
You can use the whereis java command to view the Java installation path, or which java to view the Java executable path. Update the /etc/profile file by adding the following lines at the end:
export JAVA_HOME=/usr/lib/jdk/jdk1.8.0_20
export JRE_HOME=/usr/lib/jdk/jdk1.8.0_20/jre
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
export CLASSPATH=$CLASSPATH:.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
Run source /etc/profile to reload the system's environment variables.
Finally, set the system's default JDK:

$ sudo update-alternatives --install /usr/bin/java java /usr/lib/jdk/jdk1.8.0_20/bin/java 300
$ sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jdk/jdk1.8.0_20/bin/javac 300
$ sudo update-alternatives --config java
$ sudo update-alternatives --config javac
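
After registering the alternatives, a quick check (a minimal sketch, assuming the JDK path above) confirms that the expected JDK is active:

$ java -version
$ javac -version
$ echo $JAVA_HOME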


2. Install SSH (the passwordless SSH service was already installed and configured in our cluster; in a multi-node setup, passwordless SSH login is required between the nodes)
Note:
In Linux, the .ssh directory under $HOME is owned by the user, and its permissions must be 700 (only the owner can access it).
The authorized_keys file in the .ssh directory is owned by the user, and its permissions must be 644 (passwordless access only works with 644; otherwise ssh still prompts for a password).
Configure passwordless login between the two compute nodes (here, so that the master node can log in to the slave without a password); the full sequence is sketched below:
1. Generate the public and private keys: ssh-keygen -t rsa -P ""
2. Copy the master's public key to the slave's authorized_keys file (as hduser on master): ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser@slave
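
Putting the steps together (a minimal sketch run as hduser on master, using the example hosts master and slave from above):

# generate an RSA key pair with an empty passphrase
ssh-keygen -t rsa -P "" -f $HOME/.ssh/id_rsa
# append the public key to slave's authorized_keys
ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser@slave
# make sure the permissions match the note above
chmod 700 $HOME/.ssh
ssh hduser@slave 'chmod 700 ~/.ssh; chmod 644 ~/.ssh/authorized_keys'
# verify: this should print the slave's hostname without prompting for a password
ssh hduser@slave hostname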


3. Extract the Hadoop archive into the /opt directory.
tar -zxvf hadoop-2.5.0.tar.gz -C /opt

4. Modify the Hadoop directory permissions and owner
sudo chown -R hu:hu hadoop-2.5.0
sudo chmod -R 755 hadoop-2.5.0


5. Configure Hadoop (the following is the single-node configuration; in a multi-node setup, you only need to configure these environment variables on the master node)

Add the Java installation directory in ${HADOOP_HOME}/etc/hadoop/hadoop-env.sh:
export JAVA_HOME=/usr/lib/jdk/jdk1.8.0_20

Set user environment variables so that Hadoop can be operated conveniently from the shell. Add the following settings to the ~/.bashrc file:

export JAVA_HOME=/usr/lib/jdk/jdk1.8.0_20
export HADOOP_DEV_HOME=/opt/hadoop-2.5.0
export HADOOP_MAPRED_HOME=${HADOOP_DEV_HOME}
export HADOOP_COMMON_HOME=${HADOOP_DEV_HOME}
export HADOOP_HDFS_HOME=${HADOOP_DEV_HOME}
export YARN_HOME=${HADOOP_DEV_HOME}
export HADOOP_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop
export HDFS_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop
export YARN_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop
export HADOOP_LOG_DIR=${HADOOP_DEV_HOME}/logs
export PATH=${HADOOP_DEV_HOME}/bin:${HADOOP_DEV_HOME}/sbin:$PATH
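
After saving ~/.bashrc, reload it and confirm that the hadoop command resolves (hadoop version is a standard command shipped with the distribution):

source ~/.bashrc
which hadoop
hadoop version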

Modify mapred-site.xml
Under ${HADOOP_HOME}/etc/hadoop/, rename mapred-site.xml.template to mapred-site.xml (a sample command follows the listing) and add the following content:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
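
The rename mentioned above can be done, for example, with:

cd ${HADOOP_HOME}/etc/hadoop
cp mapred-site.xml.template mapred-site.xml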

Modify core-site.xml
In ${HADOOP_HOME}/etc/hadoop/, modify core-site.xml:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000/</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop-data/tmp/hadoop-${user.name}</value>
    <description>A base for other temporary directories.</description>
  </property>
</configuration>
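
The directory referenced by hadoop.tmp.dir must be writable by the Hadoop user (hu from step 4); one way to prepare it:

sudo mkdir -p /opt/hadoop-data/tmp
sudo chown -R hu:hu /opt/hadoop-data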

Modify yarn-site.xml

In ${HADOOP_HOME}/etc/hadoop/, modify yarn-site.xml:

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>

Modify hdfs-site.xml

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/hdfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
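
As with hadoop.tmp.dir, the name and data directories referenced above need to exist and be owned by the Hadoop user; for example:

sudo mkdir -p /hdfs/name /hdfs/data
sudo chown -R hu:hu /hdfs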

Modify slaves

Add the IP address or hostname of each slave node to the slaves file, for example, master.
If there are multiple NodeManagers, add them all to the file, one per line (an example is shown below).
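
For instance, with the master and slave hostnames used in the SSH step, ${HADOOP_HOME}/etc/hadoop/slaves would contain one host per line:

master
slave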

