SSH was already set up. While running an example from the book, I hit a connection refused exception, as shown below:
12/04/09 01:00:54 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 0 time(s).
12/04/09 01:00:56 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 1 time(s).
12/04/09 01:00:57 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 2 time(s).
12/04/09 01:00:58 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 3 time(s).
12/04/09 01:00:59 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 4 time(s).
12/04/09 01:01:00 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 5 time(s).
12/04/09 01:01:01 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 6 time(s).
12/04/09 01:01:02 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 7 time(s).
12/04/09 01:01:03 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 8 time(s).
12/04/09 01:01:04 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 9 time(s).
Exception in thread "main" java.net.ConnectException: Call to localhost/127.0.0.1:8020 failed on connection exception: java.net.ConnectException: Connection refused
	at org.apache.hadoop.ipc.Client.wrapException(Client.java:1095)
	at org.apache.hadoop.ipc.Client.call(Client.java:1071)
	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
	at $Proxy1.getProtocolVersion(Unknown Source)
	at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
	at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
	at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
	at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
	at com.lan.hadoop.FileSystemCat.main(FileSystemCat.java:15)
	...
I remembered seeing this exception before, when the namenode had not been started. But today `hadoop fs -ls` and `hadoop fs -mkdir` both worked normally, and `jps` showed the namenode process running. Yet `telnet localhost 8020` failed, and I could not figure out why. Finally Dong asked me whether the namenode port was really 8020; I had no answer. Looking at core-site.xml, I found I had configured fs.default.name as hdfs://localhost:9000, so the port number in my code was wrong. After changing the port to 9000 in the code, it worked. Thanks, Dong ~~~ I had been stuck for almost two hours without knowing what was going on.
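For reference, a minimal sketch of the core-site.xml entry that caused the mismatch (the hostname and port here are from my setup; yours may differ):

```
<!-- core-site.xml: fs.default.name tells HDFS clients which namenode host:port to contact -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```

Whatever URI is configured here must match what the client code passes, e.g. `FileSystem.get(URI.create("hdfs://localhost:9000/..."), conf)`; if the code says port 8020 while the config says 9000, you get exactly the "Connection refused" retries above.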