hadoop.tmp.dir is the base configuration that the Hadoop file system depends on; many other paths derive from it. Its default location is /tmp/hadoop-${user.name}, but storing data under /tmp is unsafe, because the files there may be deleted after a Linux restart.
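For reference, the stock default (as shipped in core-default.xml in Hadoop releases of this era) looks roughly like this:

```xml
<property>
  <name>hadoop.tmp.dir</name>
  <value>/tmp/hadoop-${user.name}</value>
  <description>A base for other temporary directories.</description>
</property>
```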
After following the steps in the Single Node Setup section of the Hadoop Getting Started guide, the pseudo-distributed file system is running. How can you change the default hadoop.tmp.dir path and make it take effect? Follow these steps:
1. Edit conf/core-site.xml and add the following property:
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/had/hadoop/data</value>
  <description>A base for other temporary directories.</description>
</property>
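If you prefer to script the change rather than hand-edit the XML, the edit can be sketched programmatically. The snippet below is an illustration using Python's xml.etree.ElementTree, not a tool the article describes; the helper name set_hadoop_property and the starting XML are assumptions for the example.

```python
import xml.etree.ElementTree as ET

def set_hadoop_property(xml_text, name, value, description=""):
    """Return core-site.xml text with the given property added or updated."""
    root = ET.fromstring(xml_text)
    # If the property already exists, just change its value in place.
    for prop in root.findall("property"):
        if prop.findtext("name") == name:
            prop.find("value").text = value
            break
    else:
        # Otherwise append a new <property> block.
        prop = ET.SubElement(root, "property")
        ET.SubElement(prop, "name").text = name
        ET.SubElement(prop, "value").text = value
        if description:
            ET.SubElement(prop, "description").text = description
    return ET.tostring(root, encoding="unicode")

# A minimal, empty configuration as a starting point (hypothetical).
original = "<configuration></configuration>"
updated = set_hadoop_property(
    original, "hadoop.tmp.dir", "/home/had/hadoop/data",
    "A base for other temporary directories.")
print(updated)
```

The same helper can be pointed at the real conf/core-site.xml by reading the file first and writing the returned text back.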
2. Stop Hadoop: bin/stop-all.sh
3. Reformat the NameNode: bin/hadoop namenode -format
Note: this step is important; otherwise, the NameNode cannot start.
4. Start Hadoop: bin/start-all.sh
5. Test the file system: bin/hadoop fs -put conf conf
Conclusion: step 3 is particularly important. At first I used the wrong command, bin/hadoop fs -format, for formatting, and the following error was reported:
11/11/20 17:14:14 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
11/11/20 17:14:15 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
11/11/20 17:14:16 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
11/11/20 17:14:17 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).
11/11/20 17:14:18 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s).
11/11/20 17:14:19 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s).
11/11/20 17:14:20 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s).
11/11/20 17:14:21 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s).
11/11/20 17:14:22 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s).
11/11/20 17:14:23 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s).
Bad connection to FS. command aborted. exception: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
Running the jps command to view the Java processes showed that there was no NameNode.
It turned out that the wrong command had been used: the file system must be formatted with bin/hadoop namenode -format, and this must be done before Hadoop is started.
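The "Connection refused" lines in the log are the generic TCP symptom of nothing listening on the target port: the NameNode never started, so no process was bound to localhost:9000, and every IPC retry was rejected immediately. A minimal sketch of the same condition (using an arbitrary free local port rather than 9000, so it is side-effect-free; this is an illustration, not Hadoop's own code):

```python
import socket

# Find a port with no listener: bind an ephemeral port, note its number, close it.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
free_port = probe.getsockname()[1]
probe.close()

# Connecting to a port with no server behind it fails with "Connection refused" --
# the same condition the Hadoop IPC client was retrying on.
try:
    client = socket.create_connection(("127.0.0.1", free_port), timeout=2)
    client.close()
    refused = False
except ConnectionRefusedError:
    refused = True

print("connection refused:", refused)
```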
In general, you can change the default hadoop.tmp.dir by following the steps in this article.
This article records the mistakes the author made in practice and their solutions, in the hope that they will be useful to later readers.