Build a Hadoop pseudo-distributed environment on CentOS 6.5


I have been learning Hadoop recently. If you want to learn a technology, setting up its environment is a very important first step. My first attempts failed and I almost gave up trying, but later attempts succeeded. Many things are really not that difficult; it just depends on whether you have the heart to keep at them.

My Hadoop environment setup follows PowerXing's excellent tutorial: http://www.powerxing.com/. Below I go through a few problems I ran into while building the environment.

Problem one: I set up the environment incorrectly, or the cluster IDs generated for the NameNode and DataNode no longer match. Can I simply run ./bin/hdfs namenode -format again?

Answer: No, because by this point Hadoop has already generated a tmp folder under its installation directory, and the old IDs live there. If you want to reformat, first stop all daemons with ./sbin/stop-all.sh, remove the tmp folder, and then run ./bin/hdfs namenode -format.
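The answer above can be sketched as the following command sequence, run from the Hadoop installation directory. This is only a sketch: the `./tmp` path assumes the default `hadoop.tmp.dir` location from the tutorial; check your own core-site.xml before deleting anything, since reformatting erases all HDFS data.

```shell
# Stop all Hadoop daemons first
./sbin/stop-all.sh

# Remove the stale tmp directory so the old NameNode/DataNode cluster IDs
# are cleared (path is an assumption; verify hadoop.tmp.dir in core-site.xml)
rm -rf ./tmp

# Reformat the NameNode; a fresh cluster ID is generated
./bin/hdfs namenode -format
```

After reformatting, restart the daemons with ./sbin/start-dfs.sh and ./sbin/start-yarn.sh.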


Question two: After running ./sbin/start-dfs.sh and ./sbin/start-yarn.sh, HDFS commands work fine from the Linux command line, but when I enter IP (or hostname):50070 in a browser, the Hadoop file system web interface cannot be reached. In this case you should turn off the firewall:

1. service iptables stop — stop the firewall

2. chkconfig iptables off — keep it from starting at boot

3. chkconfig iptables --list — view its status
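Once iptables is stopped, you can quickly verify that the NameNode web UI port answers before trying the browser. A minimal check, assuming the NameNode runs on the local machine (replace `localhost` with your IP or hostname):

```shell
# Request just the HTTP headers from the NameNode web UI on port 50070;
# a response such as "HTTP/1.1 200 OK" means the port is reachable
curl -I http://localhost:50070/
```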


Uploading and downloading files with Hadoop:

./bin/hdfs dfs -ls lists files in a directory on HDFS. For example, ./bin/hdfs dfs -ls / views the files under the HDFS root directory, and ./bin/hdfs dfs -ls /user views the file list in the /user directory.


./bin/hdfs dfs -get /output /tmp/name1234 (here /output is the file under the HDFS root directory, and /tmp/name1234 is a local directory)

Explanation: the /output file on HDFS is copied to /tmp/name1234 on the local machine; the local copy is then the file named /tmp/name1234.
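The download step above, run from the Hadoop installation directory, looks like this (a sketch assuming a running cluster; /output and /tmp/name1234 are the example paths from the text):

```shell
# Copy the HDFS path /output to the local path /tmp/name1234
./bin/hdfs dfs -get /output /tmp/name1234

# Verify the local copy exists
ls -l /tmp/name1234
```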


./bin/hdfs dfs -put ./etc/hadoop/*.xml /input (the local pattern ./etc/hadoop/*.xml matches all files ending in .xml under the ./etc/hadoop directory of the current directory; /input is a directory on HDFS)

Explanation: all files ending in .xml under the local ./etc/hadoop directory are uploaded to /input on HDFS.
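The upload step can be sketched as follows, again from the Hadoop installation directory (assuming a running cluster; the `-mkdir -p` line is an addition to make sure the target directory exists first):

```shell
# Create the target directory on HDFS if it does not exist yet
./bin/hdfs dfs -mkdir -p /input

# Upload every .xml file from the local ./etc/hadoop directory into /input
./bin/hdfs dfs -put ./etc/hadoop/*.xml /input

# Confirm the upload by listing /input
./bin/hdfs dfs -ls /input
```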
