While installing the Hadoop cluster today, after all the nodes had been configured, the following command was executed:
hadoop@name-node:~/hadoop$ bin/hadoop fs -ls
The NameNode reported the following error:
11/04/02 17:16:12 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/04/02 17:16:13 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
11/04/02 17:16:14 INFO ipc.Client: Retrying connect to server: /10.42.43.55:9000. Already tried 0 time(s).
11/04/02 17:16:15 INFO ipc.Client: Retrying connect to server: /10.42.43.55:9000. Already tried 1 time(s).
11/04/02 17:16:16 INFO ipc.Client: Retrying connect to server: /10.42.43.55:9000. Already tried 2 time(s).
11/04/02 17:16:17 INFO ipc.Client: Retrying connect to server: /10.42.43.55:9000. Already tried 3 time(s).
11/04/02 17:16:18 INFO ipc.Client: Retrying connect to server: /10.42.43.55:9000. Already tried 4 time(s).
11/04/02 17:16:19 INFO ipc.Client: Retrying connect to server: /10.42.43.55:9000. Already tried 5 time(s).
11/04/02 17:16:20 INFO ipc.Client: Retrying connect to server: /10.42.43.55:9000. Already tried 6 time(s).
11/04/02 17:16:21 INFO ipc.Client: Retrying connect to server: /10.42.43.55:9000. Already tried 7 time(s).
11/04/02 17:16:22 INFO ipc.Client: Retrying connect to server: /10.42.43.55:9000. Already tried 8 time(s).
11/04/02 17:16:23 INFO ipc.Client: Retrying connect to server: /10.42.43.55:9000. Already tried 9 time(s).
Bad connection to FS. Command aborted.
After a long search, the following solution was found:
hadoop@name-node:~$ sudo vim /etc/hosts
Delete the line "127.0.0.1 name-node", save the file, and restart the Hadoop cluster; the problem is solved. With that entry present, the hostname name-node resolves to the loopback address, so the NameNode binds its RPC port (9000) to 127.0.0.1 and clients on other nodes cannot reach it.
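The fix above can be sketched non-destructively. The block below works on a throwaway copy of /etc/hosts rather than the real file (the hostname "name-node" is taken from this post; on a real system you would edit /etc/hosts itself with sudo and then restart the cluster):

```shell
# Illustration only: demonstrate the /etc/hosts fix on a copy,
# so nothing on the real system is changed.
cp /etc/hosts /tmp/hosts.demo

# Simulate the offending entry that maps the cluster hostname to loopback:
echo '127.0.0.1 name-node' >> /tmp/hosts.demo

# The fix: delete any line mapping name-node to 127.0.0.1.
sed -i '/^127\.0\.0\.1[[:space:]]\{1,\}name-node/d' /tmp/hosts.demo

# Confirm the entry is gone:
grep '127\.0\.0\.1.*name-node' /tmp/hosts.demo || echo "removed"
```

After applying the same deletion to the real /etc/hosts and restarting the cluster, "bin/hadoop fs -ls" should connect normally.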