"The hadoop2.4.0 of Hadoop"--a pseudo-distributed installation configuration based on CentOS


Today I finally finished setting up the whole Hadoop 2.4 development environment, including connecting Eclipse on Windows 7 to Hadoop; the Eclipse configuration and testing drove me up the wall.

First, the result. Hadoop's pseudo-distributed installation and configuration is straightforward: just follow the steps, and with a little background you should have no trouble. The Eclipse configuration, on the other hand, took a very long time to get right, with unexpected errors along the way; the next post will cover that painful process.



Today, let's go through the installation and configuration of Hadoop 2.4.

1. Environment preparation:

System: CentOS

JDK version: JDK 7

The system must have the SSH service installed.

CentOS configuration: append the following to the end of /etc/profile. (This profile carries over the settings from my earlier post on compiling the Hadoop 2.4 source: http://blog.csdn.net/enson16855/article/details/35568049)

export JAVA_HOME=/usr/java/jdk1.7.0_60
export PATH="$JAVA_HOME/bin:$PATH"
export MAVEN_HOME=/home/hadoop/soft/apache-maven-3.2.1
export PATH="$MAVEN_HOME/bin:$PATH"
export ANT_HOME=/home/hadoop/soft/apache-ant-1.9.4
export PATH="$ANT_HOME/bin:$PATH"
export HADOOP_PREFIX=/home/hadoop/soft/hadoop/hadoop-2.4.0
export CLASSPATH=".:$JAVA_HOME/lib:$CLASSPATH"
export PATH="$JAVA_HOME/:$HADOOP_PREFIX/bin:$PATH"
export HADOOP_PREFIX PATH CLASSPATH
export LD_LIBRARY_PATH=$HADOOP_PREFIX/lib/native/

Note: this assumes hadoop-2.4.0 has already been downloaded and extracted to the chosen folder (mine: /home/hadoop/soft/hadoop).

Download: http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.4.0/
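
Not part of the original post, but a quick sanity check (paths assumed to match the exports above) confirms the new variables took effect:

source /etc/profile
echo $HADOOP_PREFIX    # should print /home/hadoop/soft/hadoop/hadoop-2.4.0
java -version          # should report the JDK 7 build
hadoop version         # found via the PATH export above; should report Hadoop 2.4.0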



2. Configure Hadoop

hadoop-env.sh:

export JAVA_HOME=/usr/java/jdk1.7.0_60
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_PREFIX}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib"
core-site.xml:

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://192.168.0.167:9000</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/hadoop/soft/hadoop/hadoop-2.4.0/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/hadoop/soft/hadoop/hadoop-2.4.0/dfs/data</value>
    </property>
</configuration>
hdfs-site.xml:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/hadoop/soft/hadoop/hadoop-2.4.0/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/hadoop/soft/hadoop/hadoop-2.4.0/dfs/data</value>
    </property>
</configuration>
mapred-site.xml: this file does not exist in 2.4.0; you can create a new one, or simply copy mapred-site.xml.template (a command sketch follows the config below).

<configuration>
    <property>
        <name>mapreduce.jobtracker.address</name>
        <value>192.168.0.167:9001</value>
    </property>
</configuration>
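
A minimal way to create the file from the template, assuming the install path used above:

cd $HADOOP_PREFIX/etc/hadoop
cp mapred-site.xml.template mapred-site.xml    # then add the property shown above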

yarn-site.xml:

<configuration>
<!-- Site specific YARN configuration properties -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
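
All of these files live under etc/hadoop inside the Hadoop install directory. Not in the original post, but if xmllint (from libxml2) happens to be installed, a quick syntax check catches malformed XML before startup:

cd $HADOOP_PREFIX/etc/hadoop
xmllint --noout core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml    # no output means the files are well formed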

3. Passwordless SSH login setup

Commands (I switched to the root user here; I was too lazy to keep using a separate user):

ssh-keygen -t rsa -P ""
Just press Enter at each prompt.

cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys

Try ssh localhost; if it logs you in and prints the system information without asking for a password, it worked. (The first attempt may still prompt for a password, which is the system account's password.)
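
One caveat not mentioned in the original post: sshd often rejects keys when the .ssh directory or authorized_keys file has loose permissions, so tightening them is a safe extra step before testing:

chmod 700 /root/.ssh
chmod 600 /root/.ssh/authorized_keys
ssh localhost    # should now log in without prompting for a password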


4. Format HDFS:

Command:
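
(The command appeared only as a screenshot in the original post; for Hadoop 2.4, run from the Hadoop install directory, the standard command is the following.)

./bin/hdfs namenode -format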

The output (shown as a screenshot in the original post) indicates that the format succeeded.

5. Start Hadoop

Command:

./sbin/start-all.sh
Newer Hadoop versions actually discourage starting everything with start-all.sh like this; the recommendation is to start things step by step with start-dfs.sh and the related scripts. Since this is just an experiment, I did not bother.
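
For reference, a sketch of the step-by-step start (the scripts live in the same sbin directory):

./sbin/start-dfs.sh     # starts NameNode, SecondaryNameNode and DataNode
./sbin/start-yarn.sh    # starts ResourceManager and NodeManager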

Command to stop everything:

./sbin/stop-all.sh

A successful startup looks like the following:


The basic processes after startup, as reported by the jps command, should be:

SecondaryNameNode
DataNode
NodeManager
Jps
ResourceManager
NameNode


Browser access: http://localhost:50070 (the NameNode / HDFS web UI)


http://localhost:8088 is the cluster and application management page (the YARN ResourceManager web UI).
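
Not in the original post, but a quick command-line smoke test (the directory name is just an example) confirms HDFS is actually usable:

./bin/hdfs dfs -mkdir /test    # create a directory in HDFS
./bin/hdfs dfs -ls /           # the new directory should show up in the listing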


And with that, the whole setup works.

"The hadoop2.4.0 of Hadoop"--a pseudo-distributed installation configuration based on CentOS

Related Article

Contact Us

The content source of this page is from Internet, which doesn't represent Alibaba Cloud's opinion; products and services mentioned on that page don't have any relationship with Alibaba Cloud. If the content of the page makes you feel confusing, please write us an email, we will handle the problem within 5 days after receiving your email.

If you find any instances of plagiarism from the community, please send an email to: info-contact@alibabacloud.com and provide relevant evidence. A staff member will contact you within 5 working days.

A Free Trial That Lets You Build Big!

Start building with 50+ products and up to 12 months usage for Elastic Compute Service

  • Sales Support

    1 on 1 presale consultation

  • After-Sales Support

    24/7 Technical Support 6 Free Tickets per Quarter Faster Response

  • Alibaba Cloud offers highly flexible support services tailored to meet your exact needs.