Hadoop Standalone Installation

Prerequisites:

1. Ubuntu 10.10 installed successfully (personally, I don't think it is worth spending too much time on the OS install; we are not installing just for the sake of installing).

2. JDK installed successfully (JDK 1.6.0_23 for Linux; an illustrated installation walkthrough: http://freewxy.iteye.com/blog/882784 ).

3. hadoop-0.21.0.tar.gz downloaded (http://apache.etoak.com//hadoop/core/hadoop-0.21.0/ ).

Installing Hadoop

1. First, copy hadoop-0.21.0.tar.gz into /usr/local (sudo cp <path to hadoop-0.21.0.tar.gz> /usr/local).

2. Change into /usr/local and unpack hadoop-0.21.0.tar.gz.

3. For ease of management and of future Hadoop version upgrades, rename the unpacked folder to hadoop (see the command sketch after this list).
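
A minimal command sketch for steps 1-3 (assuming the tarball was downloaded to ~/Downloads; adjust that path to wherever you saved it):

$ sudo cp ~/Downloads/hadoop-0.21.0.tar.gz /usr/local   # step 1: copy the tarball into /usr/local
$ cd /usr/local
$ sudo tar -xzf hadoop-0.21.0.tar.gz                    # step 2: unpack it
$ sudo mv hadoop-0.21.0 hadoop                          # step 3: rename for easier management and upgrades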

For convenience, create a hadoop user group and a hadoop user of the same name (commands are sketched at the end of this section):

1. Create a user group named hadoop.

2. Create a user named hadoop and place it in the hadoop group (some of the prompted fields can be left blank; just press Enter through them).

3. Grant the user sudo rights: open the sudoers file under /etc and add the line shown below.

(An alternative is to switch to the root user and change the permissions of sudoers directly, but be careful with this: after editing, set the file back to read-only, otherwise it ends badly; a whole crowd of us has tripped over this more than once.)

Under the existing line root ALL=(ALL) ALL, add the following text:

hadoop ALL=(ALL) ALL

(The /etc/sudoers file is what the sudo command consults to check execution permissions.)

Then execute: sudo chown hadoop /usr/local/hadoop (this assigns ownership of the hadoop folder to the hadoop user).
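
The corresponding commands, sketched (visudo is used here instead of editing /etc/sudoers by hand, which sidesteps the read-only pitfall above; -R is added to chown so files inside the folder change owner too):

$ sudo addgroup hadoop                     # create the hadoop group
$ sudo adduser --ingroup hadoop hadoop     # create the hadoop user in that group; Enter through the optional prompts
$ sudo visudo                              # then add below "root ALL=(ALL) ALL":
                                           #     hadoop ALL=(ALL) ALL
$ sudo chown -R hadoop /usr/local/hadoop   # hand the hadoop tree over to the hadoop user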

Install ssh (requires network access). (To learn about ssh: http://freewxy.iteye.com/blog/910820 )

1. Install openssh-server.

2. Create an ssh key, of type RSA; when asked for the key's save path, accept the default.

3. Add the ssh key to the trusted list (the authorized keys) and enable it.

4. Verify the ssh configuration (see the sketch after this list).
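
A sketch of steps 1-4 (the empty passphrase is an assumption that keeps the Hadoop start scripts from prompting later; use a passphrase plus ssh-agent if you prefer):

$ sudo apt-get install openssh-server               # step 1: install the ssh server
$ ssh-keygen -t rsa -P ""                           # step 2: generate an RSA key; accept the default path ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys   # step 3: add our own public key to the trusted list
$ ssh localhost                                     # step 4: should now log in without a password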

Configure Hadoop

0. First browse the hadoop folder and take a look at what is inside it.
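
For orientation, the pieces used in the following steps live here (layout as unpacked from the 0.21.0 tarball):

$ cd /usr/local/hadoop
$ ls conf    # hadoop-env.sh, core-site.xml, mapred-site.xml, ...
$ ls bin     # hadoop, start-all.sh, stop-all.sh, ...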

1. Open conf/hadoop-env.sh. Find the line #export JAVA_HOME=..., remove the #, and set it to the path of the local JDK.
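
For example, if the JDK from the prerequisites lives under /usr/lib/jvm (that exact path is an assumption; point it at wherever your JDK actually is):

# in conf/hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/jdk1.6.0_23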

2. Open conf/core-site.xml and configure it as follows:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/tmp</value>
  </property>
</configuration>

3. Open mapred-site.xml under the conf directory and configure it as follows:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>

Run Tests:

1. Switch to the hadoop user and format the namenode.

You may run into an error here (I went back and forth over this step quite a few times); if you do, fix the cause and execute the format command again.
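
The commands for this step, sketched:

$ su hadoop                      # switch to the hadoop user
$ cd /usr/local/hadoop
$ bin/hadoop namenode -format    # format the HDFS namenode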

2. Start Hadoop.
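
One way to start everything (0.21 also offers bin/start-dfs.sh and bin/start-mapred.sh separately):

$ bin/start-all.sh    # starts the namenode, datanode, secondary namenode, jobtracker and tasktracker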

3. Verify that Hadoop started successfully.
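
A common check is jps from the JDK; if the daemons came up, it should list something like the following:

$ jps
# NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker, Jps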

Run your own wordcount example (how exciting!)

1. Prepare a file to run wordcount on: create test.txt (type in some random strings, then save and exit).

2. Upload the test file from the previous step into the firsttest directory of the DFS file system (if DFS does not yet contain a directory named firsttest, one is created automatically; use bin/hadoop dfs -ls to see which directories already exist in DFS).

3. Run wordcount on firsttest (it counts the words in all files under firsttest and writes the statistics to the result folder, which is created automatically if it does not exist).

4. View the results (see the command sketch after this list).
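
A command sketch for steps 1-4, run from /usr/local/hadoop (the examples jar name matches the 0.21.0 tarball; check the actual file name in your unpacked directory, and use bin/hadoop dfs -ls result if the output file is named differently):

$ echo "hello hadoop hello world" > test.txt                                     # step 1: any throwaway text will do
$ bin/hadoop dfs -copyFromLocal test.txt firsttest                               # step 2: upload into DFS
$ bin/hadoop jar hadoop-mapred-examples-0.21.0.jar wordcount firsttest result    # step 3: run wordcount
$ bin/hadoop dfs -cat result/part-r-00000                                        # step 4: print the word counts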
