Hadoop Pseudo-Distributed Installation

Source: Internet
Author: User

Hadoop pseudo-distributed mode is generally used for learning and testing; it is generally not used in production environments. (If there are any mistakes, criticism and corrections are welcome.)

1. Installation environment

Install Linux in a virtual machine on Windows. CentOS is used as an example, and the Hadoop version is hadoop-1.1.2.

2. Configure the Linux virtual machine

2.1 Make sure that the VMnet1 NIC on Windows and the NIC on the Linux virtual machine are in the same network segment (ping between them to confirm they can reach each other)

2.2 Modify the host name

It is best to modify it (a consistent naming scheme makes the machines easier to manage). Command: vim /etc/sysconfig/network
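As a sketch, the file might look like this after editing (the hostname hadoop01 is an assumption; choose your own):

```shell
# /etc/sysconfig/network -- sets the hostname at boot
# (the name "hadoop01" is an example; use your own)
NETWORKING=yes
HOSTNAME=hadoop01
```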

2.3 Modify the IP address

If you are not familiar with Linux commands, use the graphical interface instead (recommended).

Command: vim /etc/sysconfig/network-scripts/ifcfg-eth0

Modify IPADDR, NETMASK, and GATEWAY.
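A minimal sketch of the resulting file (all addresses are assumptions; use the same segment as VMnet1):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- static IP configuration
# (addresses below are examples; match your VMnet1 segment)
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.101
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
```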

2.4 Modify the mapping between host names and IP addresses

Command: vim /etc/hosts
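For example, assuming the IP address and hostname used above, /etc/hosts would gain one line:

```
192.168.1.101   hadoop01
```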

2.5 Disable the firewall (including its automatic startup)

Command: chkconfig iptables off

2.6 Restart Linux

Command: reboot

3. Install the JDK

3.1 Upload the JDK

3.2 Add execution permission

Command: chmod u+x <jdk file> (the JDK package you uploaded)

3.3 Decompress

Decompress the package to a chosen directory (keep all files under one directory for easier management).

Command: tar -zxvf <jdk file> -C <target directory>

3.4 Add environment variables

Command: vim /etc/profile

3.5 Refresh (make the configured environment variables take effect)

Command: source /etc/profile
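A sketch of the lines to append to /etc/profile (the JDK path /usr/local/jdk1.7.0 is an assumption; use the directory you extracted to):

```shell
# Appended to /etc/profile (the JDK path is an assumption)
export JAVA_HOME=/usr/local/jdk1.7.0
export PATH=$PATH:$JAVA_HOME/bin
```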

4. Install Hadoop (pseudo-distributed)

4.1 Upload Hadoop

4.2 Make sure you have execution permission, then decompress it (again, keep files under one directory)

Command: tar -zxvf <hadoop file> -C <target directory>

4.3 Configure Hadoop (modify 4 configuration files) in the hadoop-1.1.2/conf directory

If you are not familiar with vim, use a tool such as Notepad++ to modify them.

4.3.1 hadoop-env.sh

On line 9, remove the comment and set JAVA_HOME.
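After uncommenting, line 9 looks like this (the JDK path is an assumption; match what you set in /etc/profile):

```shell
# conf/hadoop-env.sh, line 9, after removing the leading '#'
# (the JDK path is an assumption)
export JAVA_HOME=/usr/local/jdk1.7.0
```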

4.3.2 core-site.xml

<configuration>
    <!-- Specify the HDFS namenode address -->
    <property>
        <name>fs.default.name</name>
        <value>hdfs://[configured hostname]:9000</value>
    </property>
    <!-- Specify the directory for files generated while Hadoop is running -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/../hadoop-1.1.2/tmp</value>
    </property>
</configuration>

4.3.3 hdfs-site.xml

<configuration>
    <!-- Set the number of HDFS replicas -->
    <property>
        <name>dfs.replication</name>
        <value>1</value>
        <!-- The default in distributed mode is 3, but for testing and learning, 1 is enough. -->
    </property>
</configuration>

4.3.4 mapred-site.xml

<configuration>
    <!-- Specify the MapReduce jobtracker address -->
    <property>
        <name>mapred.job.tracker</name>
        <value>[configured hostname]:9001</value>
    </property>
</configuration>

4.4 Add Hadoop environment variables

Command: vim /etc/profile
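A sketch of the lines to append (the Hadoop path is an assumption; use the directory you extracted to):

```shell
# Appended to /etc/profile (the Hadoop path is an assumption)
export HADOOP_HOME=/usr/local/hadoop-1.1.2
export PATH=$PATH:$HADOOP_HOME/bin
```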

4.5 Format HDFS

Command: hadoop namenode -format

4.6 Start Hadoop

Command: start-all.sh

4.7 Verify that Hadoop started successfully

Command: jps

The following five processes should be listed:

NameNode

SecondaryNameNode

DataNode

JobTracker

TaskTracker

It can also be verified through a browser:

http://<linux ip>:50070 (HDFS management interface)

http://<linux ip>:50030 (MapReduce management interface)

However, you must first add the mapping between the Linux host name and IP address to the hosts file under C:\Windows\System32\drivers\etc on Windows.

5. Configure passwordless SSH login

SSH is Secure Shell.

Command to generate an SSH key pair: ssh-keygen -t rsa (press Enter 4 times in a row)

A hidden .ssh directory appears under /root. Go to /root/.ssh/; it now contains two files (id_rsa and id_rsa.pub), the private key and the public key. Then execute the following command:

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Hello hadoop, success.

Ready for development!

