Hadoop 1.2.1 Pseudo-Distributed Installation Guide

Source: Internet
Author: User
Tags: hadoop, fs

1. Pseudo-Distributed Installation

1.1 Modify IP

(1) Open the virtual network card of VMware or VirtualBox

(2) Set the network connection mode to host-only in VMware or VirtualBox

(3) In Linux, modify the IP: there is a network icon in the upper-right corner of the desktop; right-click it and select Edit Connections ....

The IP must be in the same segment as the IP of the virtual network card under Windows, and the gateway must be set.

(4) Restart the network card: execute the command service network restart, and watch for errors such as a "no suitable adapter" error.

(5) Verify: Execute command ifconfig
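As a reference, the static IP can also be set directly in the interface config file. This is a minimal sketch; the addresses match the 192.168.80.x segment used later in this guide, the gateway value is an assumption, and a temp file stands in for /etc/sysconfig/network-scripts/ifcfg-eth0 so the sketch can be run anywhere:

```shell
# Sketch: write a static host-only IP configuration. On the VM this content
# would go in /etc/sysconfig/network-scripts/ifcfg-eth0; a temp file is used
# here for illustration. GATEWAY is an assumed example value.
IFCFG=$(mktemp)
cat > "$IFCFG" <<'EOF'
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.80.100
NETMASK=255.255.255.0
GATEWAY=192.168.80.1
EOF
grep '^IPADDR=' "$IFCFG"
```

After editing the real file, `service network restart` applies the change.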

1.2 Shutting down the firewall

(1) Execute command service iptables stop to shut down the firewall

(2) Verify: Execute command service iptables status

1.3 Turning off automatic startup of the firewall

(1) Execute command chkconfig iptables off

(2) Verify: Execute command chkconfig --list | grep iptables
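The verification step above can be scripted. A minimal sketch, using a hypothetical sample of the `chkconfig --list | grep iptables` output rather than running chkconfig itself:

```shell
# Sketch: confirm from chkconfig-style output that no runlevel still starts
# the firewall. `line` is a hypothetical sample of what
# `chkconfig --list | grep iptables` prints after `chkconfig iptables off`.
line='iptables        0:off   1:off   2:off   3:off   4:off   5:off   6:off'
if printf '%s\n' "$line" | grep -q ':on'; then
  echo "iptables still enabled at boot"
else
  echo "iptables disabled at boot"
fi
```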

1.4 Modify hostname

(1) Execute command hostname master to modify the hostname for the current session

(2) Verify: Execute command hostname

(3) Execute command vi /etc/sysconfig/network

Modify the HOSTNAME line in the file: HOSTNAME=master

(4) Verify: Execute command reboot -h now to restart the machine
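The edit in step (3) can also be made non-interactively with sed instead of vi. A minimal sketch, with a temp file standing in for /etc/sysconfig/network:

```shell
# Sketch: set HOSTNAME=master without opening an editor. A temp file with
# typical default contents stands in for /etc/sysconfig/network here.
F=$(mktemp)
printf 'NETWORKING=yes\nHOSTNAME=localhost.localdomain\n' > "$F"
sed -i 's/^HOSTNAME=.*/HOSTNAME=master/' "$F"
grep '^HOSTNAME=' "$F"    # prints: HOSTNAME=master
```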

1.5 Binding the IP to the hostname

(1) Execute command vi /etc/hosts

Add a line at the end of the file: 192.168.80.100 master

(2) Verify: ping master

(3) Configure the hostname-to-IP mapping on Windows as well, in the file

C:\Windows\System32\drivers\etc\hosts

192.168.80.100 master
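The /etc/hosts edit in step (1) can be scripted so that repeated runs do not add duplicate lines. A minimal sketch, with a temp file standing in for /etc/hosts:

```shell
# Sketch: append the master entry to a hosts file only if it is not already
# there. A temp file stands in for /etc/hosts so this is safe to run anywhere.
HOSTS=$(mktemp)
add_host_entry() {
  entry="$1"; file="$2"
  # -x: whole-line match, -F: fixed string; append only when absent
  grep -qxF "$entry" "$file" || printf '%s\n' "$entry" >> "$file"
}
add_host_entry '192.168.80.100 master' "$HOSTS"
add_host_entry '192.168.80.100 master' "$HOSTS"   # second call changes nothing
cat "$HOSTS"
```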

1.6 SSH password-free login

(1) Execute the command ssh-keygen -t rsa (then press Enter all the way through) to generate the key pair, located in /root/.ssh/

(2) Execute command cp /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys to generate the authorization file

(3) Verify: ssh localhost (or ssh followed by the hostname)
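The two key-setup steps above can be sketched as a script. To keep it safe to run anywhere, a temp directory stands in for /root/.ssh, and -N '' skips the passphrase prompts that the guide answers by pressing Enter:

```shell
# Sketch of the password-free SSH setup; a temp dir stands in for /root/.ssh.
DIR=$(mktemp -d)
ssh-keygen -t rsa -N '' -f "$DIR/id_rsa" -q     # empty passphrase, no prompts
cp "$DIR/id_rsa.pub" "$DIR/authorized_keys"     # authorize our own public key
chmod 600 "$DIR/authorized_keys"                # sshd requires strict permissions
ls "$DIR"
```

On the real machine the files would live in /root/.ssh/, after which `ssh localhost` should log in without a password.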

1.7 Installing the JDK

(1) Use WinSCP to copy the JDK and Hadoop archives to /home/big_data/zip on Linux

(2) cp /home/big_data/zip/* /home/big_data/

(3) cd /home/big_data

(4) tar -zxvf jdk-7u60-linux-i586.tar.gz

(5) Rename: mv jdk1.7.0_60 jdk

(6) Execute command vi /etc/profile to set environment variables

Add two lines:

export JAVA_HOME=/home/big_data/jdk
export PATH=.:$JAVA_HOME/bin:$PATH

Save and exit.

Execute command source /etc/profile

(7) Verify: Execute command java -version
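Besides java -version, the PATH change itself can be checked. A minimal sketch using the paths from this guide:

```shell
# Sketch: verify that the JDK bin directory really ended up on the PATH
# after sourcing /etc/profile (paths are the ones used in this guide).
export JAVA_HOME=/home/big_data/jdk
export PATH=.:$JAVA_HOME/bin:$PATH
# split PATH on ':' and look for an exact match of $JAVA_HOME/bin
echo "$PATH" | tr ':' '\n' | grep -x "$JAVA_HOME/bin"
```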

1.8 Installing Hadoop

(1) Execute command tar -zxvf hadoop-1.2.1.tar.gz to decompress

(2) Execute command mv hadoop-1.2.1 hadoop

(3) Execute command vi /etc/profile to set environment variables

Add one line: export HADOOP_HOME=/home/big_data/hadoop

Modify the PATH line to: export PATH=.:$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH

Save and exit.

Execute command source /etc/profile

(4) Verify: Execute command hadoop

(5) Modify the configuration files located in conf/: hadoop-env.sh, core-site.xml, hdfs-site.xml, mapred-site.xml

<1> File hadoop-env.sh, line 9 (specify the installation path of the JDK):

export JAVA_HOME=/home/big_data/jdk/

<2> File core-site.xml (Hadoop's core configuration file, used to configure the NameNode address and port):

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
    <description>change to your own hostname</description>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/big_data/hadoop/tmp</value>
  </property>
</configuration>

<3> File hdfs-site.xml (configures replication, i.e. the number of copies of the data):

<configuration>
  <property>
    <!-- number of replicas; the default is 3 -->
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <!-- create the directory first: mkdir -p /home/big_data/hadoop/hdfs -->
    <name>dfs.name.dir</name>
    <value>/home/big_data/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/big_data/hadoop/hdfs/data</value>
  </property>
  <property>
    <!-- whether permission checking is enabled -->
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>

For the superuser (the identity the NameNode process runs as), the system does not perform any permission checks regardless of this setting.

<4> File mapred-site.xml (configures the JobTracker address and port):

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
    <description>change to your own hostname</description>
  </property>
</configuration>

(6) Execute command hadoop namenode -format to format the Hadoop file system (HDFS)

If a prompt like the following appears:

[root@master hdfs]# hadoop namenode -format

14/07/18 05:25:26 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master/192.168.80.100
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 15:23:09 PDT 2013
STARTUP_MSG:   java = 1.7.0_60
************************************************************/
Re-format filesystem in /home/big_data/hadoop/hdfs/name ? (Y or N) n
Format aborted in /home/big_data/hadoop/hdfs/name
14/07/18 05:25:31 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.80.100
************************************************************/

it means the name directory was formatted before. Remove the old data first with rm -rf /home/big_data/hadoop/hdfs/* and then run the format again.

(7) Execute command start-all.sh to start Hadoop

(8) Verification:

<1> Execute command jps to view the Java processes; you should find 5 Hadoop processes, namely NameNode, SecondaryNameNode, DataNode, JobTracker, TaskTracker
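That check can be scripted. A minimal sketch, using a hypothetical sample of jps output (the PIDs are invented); on the real machine you would pipe `jps` itself:

```shell
# Sketch: confirm all five expected daemons appear in jps output.
# jps_out is a hypothetical sample; PIDs are invented for illustration.
jps_out='1301 NameNode
1412 SecondaryNameNode
1358 DataNode
1489 JobTracker
1545 TaskTracker
1602 Jps'
for d in NameNode SecondaryNameNode DataNode JobTracker TaskTracker; do
  # -w matches whole words, so "NameNode" does not match "SecondaryNameNode"
  printf '%s\n' "$jps_out" | grep -qw "$d" && echo "$d is running"
done
```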

<2> View via browser: http://master:50070 and http://master:50030

(To access these addresses from Windows, the hosts file in the C:\Windows\System32\drivers\etc directory must be modified as described in section 1.5.)

1.9 Removing the warning prompt

[root@master ~]# hadoop fs -ls /

Warning: $HADOOP_HOME is deprecated.

Here's how:

[root@master ~]# vi /etc/profile (add one line)

# /etc/profile

export HADOOP_HOME_WARN_SUPPRESS=1

export JAVA_HOME=/home/big_data/jdk

export HADOOP_HOME=/home/big_data/hadoop

export PATH=.:$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH

[root@master ~]# source /etc/profile (takes effect immediately)

Let's finish with a quick code test:

mkdir input

cp conf/* input/

bin/hadoop jar hadoop-examples-1.2.1.jar wordcount file:///home/big_data/hadoop/input output

hadoop fs -cat output/p*
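To get a feel for what the WordCount example computes, here is a tiny plain-shell stand-in: split text into words, then count the occurrences of each (the two input lines are made up for illustration):

```shell
# Stand-in for WordCount: one word per line, then count duplicates.
# The sample input is invented; Hadoop would do this per file split, in parallel.
printf 'hadoop fs\nhadoop jar\n' | tr ' ' '\n' | sort | uniq -c | sort -rn
```

The real job writes the equivalent counts as part-* files into the HDFS output directory, which is what `hadoop fs -cat output/p*` prints.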

Well, if you see this result, you can smile and look forward to my next blog post. Haha.

