Highlights of problems encountered during Hadoop learning


12:25:47,472 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = Xiaohua-PC/192.168.1.100
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
12:25:47,550 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Problem binding to localhost/127.0.0.1:9000 : Address already in use: bind
    at org.apache.hadoop.ipc.Server.bind(Server.java:190)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:253)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:1026)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:488)
    at org.apache.hadoop.ipc.RPC.getServer(RPC.java:450)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:191)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
Caused by: java.net.BindException: Address already in use: bind
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:344)
    at sun.nio.ch.Net.bind(Net.java:336)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:199)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.apache.hadoop.ipc.Server.bind(Server.java:188)
Solution: run netstat -an (which lists all connections and listening ports) to find the process occupying port 9000, then kill it. The port is usually held by a stale process left behind by starting Hadoop repeatedly. To narrow the output, pipe it through grep:

netstat -an | grep 9000
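
A minimal sketch of the check-and-kill sequence, assuming a Linux host whose netstat supports the -p flag (the PID shown in the comment is hypothetical):

    # list listeners on port 9000; -p prints the owning PID (may require root)
    netstat -anp | grep 9000
    #   tcp  0  0 0.0.0.0:9000  0.0.0.0:*  LISTEN  4231/java
    kill 4231    # stop the stale JVM, then start the NameNode again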

A "not a regular file" error appeared while transferring a folder with scp. After a long study, it turned out that the -r flag was missing: scp copies single files by default, so directories must be sent recursively with scp -r.
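
For example, with a hypothetical host and path:

    # -r copies the directory and everything under it recursively
    scp -r ~/hadoop-0.20.2 user@192.168.1.101:/home/user/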


2013-08-16 01:57:39,447 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call to master.hadoop/192.168.70.135:9000 failed on local exception: java.net.NoRouteToHostException: No route to host
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1107)
    at org.apache.hadoop.ipc.Client.call(Client.java:1075)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at $Proxy5.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:370)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:429)
    at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:331)
    at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:296)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:356)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:299)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1582)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1521)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1539)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)
Caused by: java.net.NoRouteToHostException: No route to host
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:489)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:434)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:560)
    at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:184)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1206)
    at org.apache.hadoop.ipc.Client.call(Client.java:1050)
    ... 14 more
Solution: disable the firewall on the target machine and restart it. In a LAN cluster, "No route to host" usually means the firewall is rejecting the connection rather than an actual routing failure.
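
A sketch of the commands, assuming an iptables-based distribution such as CentOS 6 (typical for Hadoop 1.x-era clusters); opening only port 9000 is a narrower alternative to switching the firewall off entirely:

    service iptables stop      # stop the firewall for the current session
    chkconfig iptables off     # keep it disabled across reboots
    # or, instead, allow just the NameNode port:
    # iptables -I INPUT -p tcp --dport 9000 -j ACCEPT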

While starting the Hadoop cluster to test HBase, we found that only two of the three DataNode instances came up. The log of the DataNode that failed to start contained the following exception:

2012-09-07 23:58:51,240 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is shutting down: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.protocol.UnregisteredDatanodeException:
Data node 192.168.100.11:50010 is attempting to report storage ID DS-1282452139-218.196.207.181-50010-1344220553439. Node 192.168.100.12:50010 is expected to serve this storage.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDatanode(FSNamesystem.java:4608)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.processReport(FSNamesystem.java:3460)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.blockReport(NameNode.java:1001)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    at org.apache.hadoop.ipc.Client.call(Client.java:1070)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at $Proxy5.blockReport(Unknown Source)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:958)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1458)
    at java.lang.Thread.run(Thread.java:722)

This exception occurs because the storage IDs of two DataNode instances conflict: both nodes report the same storage ID to the NameNode. In my case the likely cause is that I set up the second node by copying the first installation wholesale, data directory included.

The solution is simply to delete the data directory on the machine that threw the exception and restart its DataNode; it will then register with a fresh storage ID.
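
A sketch of the steps on the failing node, assuming the default Hadoop 1.x data directory under /tmp (check dfs.data.dir in hdfs-site.xml for the real location before deleting anything):

    hadoop-daemon.sh stop datanode      # stop only the failing DataNode
    rm -rf /tmp/hadoop-$USER/dfs/data   # discard the cloned storage, blocks included
    hadoop-daemon.sh start datanode     # it re-registers with a fresh storage ID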



2013-08-19 19:24:47,433 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.0.4
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012
************************************************************/
2013-08-19 19:24:47,686 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-08-19 19:24:47,706 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-08-19 19:24:47,707 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-08-19 19:24:47,707 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2013-08-19 19:24:48,036 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-08-19 19:24:48,730 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /tmp/hadoop-lei/dfs/data: namenode namespaceID = 103896252; datanode namespaceID = 726256761
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:299)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1582)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1521)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1539)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)
2013-08-19 19:24:48,747 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at localhost.localdomain/127.0.0.1
************************************************************/
Solution: delete /tmp/hadoop-lei/dfs/data/current/VERSION on the DataNode (or edit it so its namespaceID matches the NameNode's value), then restart the DataNode. Reference: http://blog.csdn.net/wanghai__/article/details/5752199
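
Both variants in shell form, using the path and namespaceIDs from the log above:

    # option 1 (what the post suggests): remove the stale VERSION file, then restart the DataNode
    rm /tmp/hadoop-lei/dfs/data/current/VERSION
    # option 2: keep the file but align its namespaceID with the NameNode's
    sed -i 's/namespaceID=726256761/namespaceID=103896252/' /tmp/hadoop-lei/dfs/data/current/VERSION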
Files cannot be submitted to HDFS.

Description: a Hadoop program compiled with Eclipse on Windows and run against the cluster fails with the following error:

11/10/28 16:05:53 INFO mapred.JobClient: Running job: job_201110281103_0003
11/10/28 16:05:54 INFO mapred.JobClient:  map 0% reduce 0%
11/10/28 16:06:05 INFO mapred.JobClient: Task Id : attempt_201110281103_0003_m_000002_0, Status : FAILED
org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=drwho, access=WRITE, inode="hadoop":hadoop:supergroup:rwxr-xr-x
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)

Solution:

http://www.cnblogs.com/acmy/archive/2011/10/28/2227901.html
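
For reference, the two common fixes for this error (stated from general Hadoop 1.x experience, not taken from the linked post): loosen the permissions of the HDFS directory named in the error, or set dfs.permissions to false in hdfs-site.xml on a test cluster. The chmod variant, with a hypothetical target path:

    # run as the HDFS superuser; opens the directory to writes from any user
    hadoop fs -chmod -R 777 /user/hadoop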

The Eclipse plugin reports: "Invalid Hadoop runtime specified; please click 'Configure Hadoop install directory' or fill in library location input field."


Solution:

This happens because Eclipse does not know where the Hadoop installation (and its jar files) lives. In Eclipse, open Window -> Preferences -> Hadoop Map/Reduce and select the Hadoop root directory.

