Hadoop Stand-Alone Setup: An Illustrated Guide


Prerequisites:

1. Ubuntu 10.10 installed successfully (personally, I don't think it is worth spending too much time on the OS installation itself; we are not installing the system just for the sake of installing it)

2. JDK installed successfully (jdk1.6.0_23 for Linux; an illustrated installation walkthrough is at http://freewxy.javaeye.com/blog/882784)

3. hadoop-0.21.0.tar.gz downloaded (http://apache.etoak.com//hadoop/core/hadoop-0.21.0/)
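
Before going further, it is worth a quick check that the JDK from prerequisite 2 is on the PATH (a sanity check added here, not part of the original figures):

$ java -version    # should report something like java version "1.6.0_23"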

Install Hadoop

1. First copy hadoop-0.21.0.tar.gz into /usr/local (sudo cp <path to the archive> /usr/local), as shown in Figure 1

2. Enter the /usr/local directory and extract hadoop-0.21.0.tar.gz, as shown in Figure 2

3. For ease of management and future Hadoop version upgrades, rename the extracted folder to hadoop, as shown in Figure 3
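
Put together, steps 1 through 3 come down to the following (a minimal sketch; the download location ~/Downloads and the extracted directory name hadoop-0.21.0 are assumptions):

$ sudo cp ~/Downloads/hadoop-0.21.0.tar.gz /usr/local
$ cd /usr/local
$ sudo tar -xzf hadoop-0.21.0.tar.gz
$ sudo mv hadoop-0.21.0 hadoop    # rename for easier management and upgrades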

For convenience, create a hadoop group and a user of the same name (the commands for this whole section are sketched after the sudoers notes below):

1. Create a user group named hadoop, as shown in Figure 4

2. Create a user named hadoop under the hadoop group, as shown in Figure 5 (some of the prompted information can be left blank; just press Enter)

3. (1) Grant the user sudo rights: open the /etc/sudoers file and add the line given in (2) below, as shown in Figure 6


(Another approach is to switch to the root user and change the permissions on sudoers directly, but this must be done carefully: after editing, change the file back to read-only, or there will be tears. This has bitten us more than once.)

(2) Add the following line under root ALL=(ALL) ALL:

hadoop ALL=(ALL) ALL

As shown in Figure 7.


(/etc/sudoers is the file sudo consults when deciding whether a user is authorized to execute a command)

Execute the command: $ sudo chown hadoop /usr/local/hadoop (this assigns ownership of the hadoop folder to the hadoop user)
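
The group, user, sudoers, and ownership steps above boil down to the following sketch (using visudo rather than editing /etc/sudoers by hand is a safer alternative, since it validates the syntax before saving):

$ sudo addgroup hadoop                    # create the hadoop group
$ sudo adduser --ingroup hadoop hadoop    # create the hadoop user in that group
$ sudo visudo                             # add the line: hadoop ALL=(ALL) ALL
$ sudo chown hadoop /usr/local/hadoop     # hand the hadoop folder to the hadoop user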

Install ssh (networking required; for background on ssh see http://freewxy.javaeye.com/blog/910820). A command sketch follows the four steps below.

1. Install openssh-server, as shown in Figure 8

2. Create an ssh key of type rsa, as shown in Figure 9

Fill in the key save path when prompted, as shown in Figure 10

3. Add the ssh key to the trusted list (authorized_keys) and enable it, as shown in Figure 11

4. Verify the ssh configuration, as shown in Figure 12
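
A minimal sketch of steps 1 through 4, assuming the default key path and an empty passphrase (the empty passphrase is what makes the passwordless login work):

$ sudo apt-get install openssh-server
$ ssh-keygen -t rsa -P ""                          # accept the default path ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  # add our own key to the trusted list
$ ssh localhost                                    # should now log in without a password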

Configure Hadoop

0. Browse what the hadoop folder contains, as shown in Figure 13

1. Open conf/hadoop-env.sh, as shown in Figure 14

Configure conf/hadoop-env.sh: find the line #export JAVA_HOME=..., remove the leading #, and append the path of the local JDK, as shown in Figure 15
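
The uncommented line might end up looking like this (the JDK path below is an assumption based on the jdk1.6.0_23 prerequisite; adjust it to wherever your JDK actually lives):

export JAVA_HOME=/usr/lib/jvm/jdk1.6.0_23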


2. Open conf/core-site.xml

Configure it with the following content:

XML configuration:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/tmp</value>
  </property>
</configuration>
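
Since hadoop.tmp.dir points at /home/hadoop/tmp, it helps to make sure that directory exists and is writable by the hadoop user before formatting the namenode (Hadoop can usually create it itself, so treat this as a precaution):

$ sudo mkdir -p /home/hadoop/tmp
$ sudo chown hadoop /home/hadoop/tmp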

3. Open conf/mapred-site.xml

Configure it with the following content:

XML configuration:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>

Run the test:

1. Switch to the hadoop user and format the namenode, as shown in Figure 18 (the full command sequence is sketched after step 3)

You may encounter the following error (this has happened more than once in this process), as shown in Figure 19

If so, execute the command shown in Figure 20, then rerun the format command from Figure 18

2. Start hadoop, as shown in Figure 21

3. Verify that hadoop started successfully, as shown in Figure 22
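
The format/start/verify sequence as a sketch (assuming Hadoop lives in /usr/local/hadoop and everything runs as the hadoop user):

$ su - hadoop
$ cd /usr/local/hadoop
$ bin/hadoop namenode -format    # format the namenode; answer Y if prompted
$ bin/start-all.sh               # start namenode, datanode, jobtracker, tasktracker
$ jps                            # the listed Java processes show which daemons are up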

Run the wordcount example (so exciting!)

1. Prepare the file wordcount needs, as shown in Figure 23 (type an arbitrary string into test.txt, save, and exit); the full command sequence for steps 1 through 4 is sketched after step 4


2. Upload the test file to the firstTest directory of the dfs file system, as shown in Figure 24 (if dfs does not contain a firstTest directory, a directory of that name is created automatically; use the command bin/hadoop dfs -ls to list the directories that already exist in the dfs file system)

3. Run wordcount, as shown in Figure 25 (wordcount is run over all files under firstTest, and the results are written to the result folder; if the result folder does not exist, it is created automatically)

4. View the results, as shown in Figure 26
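
The whole wordcount run as a sketch (the examples jar name hadoop-mapred-examples-0.21.0.jar and the output file name part-r-00000 are assumptions based on the 0.21.0 release layout):

$ echo "hello hadoop hello world" > ~/test.txt       # any string will do
$ cd /usr/local/hadoop
$ bin/hadoop dfs -put ~/test.txt firstTest/test.txt  # firstTest is created if absent
$ bin/hadoop jar hadoop-mapred-examples-0.21.0.jar wordcount firstTest result
$ bin/hadoop dfs -cat result/part-r-00000            # print the per-word counts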

And the stand-alone version is done ~ ~
