Change the default hadoop.tmp.dir path in a Hadoop pseudo-distributed environment

Source: Internet
Author: User
Tags: hadoop, fs

hadoop.tmp.dir is a base configuration that the Hadoop file system depends on; many other paths are derived from it. Its default location is under /tmp (per user), but storing data under /tmp is unsafe, because files there may be deleted when Linux restarts.

After following the steps in the Single Node Setup section of the Hadoop Getting Started guide, the pseudo-distributed cluster is running. How do you change the default hadoop.tmp.dir path and make the change take effect? Follow these steps:

1. Edit conf/core-site.xml and add the following property:

 
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/had/hadoop/data</value>
  <description>A base for other temporary directories.</description>
</property>
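For context, this property sits inside the <configuration> element of core-site.xml. A minimal sketch of the whole file for a pseudo-distributed setup might look like the following; the fs.default.name entry is the standard single-node default (consistent with the localhost:9000 address seen in the error log below), not something the original article spells out:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Standard pseudo-distributed default; matches the port 9000 in the log -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <!-- The change described in this article -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/had/hadoop/data</value>
    <description>A base for other temporary directories.</description>
  </property>
</configuration>
```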

2. Stop Hadoop: bin/stop-all.sh

3. Reformat the NameNode: bin/hadoop namenode -format

Note: This step is important; otherwise, the NameNode cannot start.

4. Start Hadoop: bin/start-all.sh

5. Test: bin/hadoop fs -put conf conf
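Taken together, steps 2–5 amount to the following command sequence, run from the Hadoop installation directory (a sketch; on some versions, namenode -format prompts interactively for confirmation):

```sh
# Run from the Hadoop installation directory.
bin/stop-all.sh                  # 2. stop all daemons
bin/hadoop namenode -format      # 3. reformat the NameNode (only while Hadoop is stopped)
bin/start-all.sh                 # 4. start the daemons again
bin/hadoop fs -put conf conf     # 5. smoke test: copy the conf directory into HDFS
jps                              # verify that NameNode, DataNode, etc. are running
```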

Conclusion: The third step is particularly important. At first I used the wrong command, bin/hadoop fs -format, to format the file system, and the following error was reported:

11/11/20 17:14:14 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
11/11/20 17:14:15 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
11/11/20 17:14:16 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
11/11/20 17:14:17 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).
11/11/20 17:14:18 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s).
11/11/20 17:14:19 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s).
11/11/20 17:14:20 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s).
11/11/20 17:14:21 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s).
11/11/20 17:14:22 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s).
11/11/20 17:14:23 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s).
Bad connection to FS. command aborted. exception: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused

Running the jps command to view the Java processes showed that there was no NameNode.

Finally, I found that the wrong command had been used. bin/hadoop namenode -format is the command that formats the file system, and it must be run before Hadoop is started.

In general, you can change the default hadoop.tmp.dir by following the steps in this article.

This article records the author's mistakes in practice and their solutions, in the hope that they will be useful to later readers.
