Hadoop 2 Installation Error Log

Source: Internet
Author: User
Tags: constructor, failover, prepare, socket, zookeeper

Error 1: While uploading files to HDFS, the file stays stuck in the copying/uploading state, wasting a great deal of time. The error log is as follows:

2015-06-30 09:29:45,020 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /home/lin/hadoop-2.5.2/tmp/dfs/name/current/edits_inprogress_0000000000000000114 -> /home/lin/hadoop-2.5.2/tmp/dfs/name/current/edits_0000000000000000114-0000000000000000127
2015-06-30 09:29:45,020 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 128
2015-06-30 09:29:48,876 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30002 milliseconds
2015-06-30 09:29:48,877 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 2 millisecond(s).
2015-06-30 09:30:18,876 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2015-06-30 09:30:18,876 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
2015-06-30 09:30:48,876 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2015-06-30 09:30:48,876 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
2015-06-30 09:31:18,878 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30002 milliseconds
2015-06-30 09:31:18,879 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).


2015-06-30 09:25:43,935 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /home/lin/hadoop-2.5.2/journal/mycluster/current/edits_inprogress_0000000000000000001 -> /home/lin/hadoop-2.5.2/journal/mycluster/current/edits_0000000000000000001-0000000000000000002
2015-06-30 09:27:44,814 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /home/lin/hadoop-2.5.2/journal/mycluster/current/edits_inprogress_0000000000000000003 -> /home/lin/hadoop-2.5.2/journal/mycluster/current/edits_0000000000000000003-0000000000000000113
2015-06-30 09:29:45,016 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /home/lin/hadoop-2.5.2/journal/mycluster/current/edits_inprogress_0000000000000000114 -> /home/lin/hadoop-2.5.2/journal/mycluster/current/edits_0000000000000000114-0000000000000000127
2015-06-30 09:31:45,158 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /home/lin/hadoop-2.5.2/journal/mycluster/current/edits_inprogress_0000000000000000128 -> /home/lin/hadoop-2.5.2/journal/mycluster/current/edits_0000000000000000128-0000000000000000129
2015-06-30 09:33:45,331 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /home/lin/hadoop-2.5.2/journal/mycluster/current/edits_inprogress_0000000000000000130 -> /home/lin/hadoop-2.5.2/journal/mycluster/current/edits_0000000000000000130-0000000000000000131
2015-06-30 09:35:45,457 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /home/lin/hadoop-2.5.2/journal/mycluster/current/edits_inprogress_0000000000000000132 -> /home/lin/hadoop-2.5.2/journal/mycluster/current/edits_0000000000000000132-0000000000000000133
2015-06-30 09:37:45,575 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /home/lin/hadoop-2.5.2/journal/mycluster/current/edits_inprogress_0000000000000000134 -> /home/lin/hadoop-2.5.2/journal/mycluster/current/edits_0000000000000000134-0000000000000000136
2015-06-30 09:39:45,779 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /home/lin/hadoop-2.5.2/journal/mycluster/current/edits_inprogress_0000000000000000137 -> /home/lin/hadoop-2.5.2/journal/mycluster/current/edits_0000000000000000137-0000000000000000138
2015-06-30 09:41:47,915 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /home/lin/hadoop-2.5.2/journal/mycluster/current/edits_inprogress_0000000000000000139 -> /home/lin/hadoop-2.5.2/journal/mycluster/current/edits_0000000000000000139-0000000000000000140
2015-06-30 09:43:48,061 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /home/lin/hadoop-2.5.2/journal/mycluster/current/edits_inprogress_0000000000000000141 -> /home/lin/hadoop-2.5.2/journal/mycluster/current/edits_0000000000000000141-0000000000000000142
2015-06-30 09:45:48,365 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /home/lin/hadoop-2.5.2/journal/mycluster/current/edits_inprogress_0000000000000000143 -> /home/lin/hadoop-2.5.2/journal/mycluster/current/edits_0000000000000000143-0000000000000000144

Analysis: the messages above point to the JournalNode layer. This problem typically occurs when the JournalNode configuration is wrong, or when some of the JournalNode processes were never started. Carefully checking the configuration and confirming that every JournalNode process is running avoids this error.
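For reference, the quorum-journal settings live in hdfs-site.xml. A minimal sketch follows; the JournalNode hostnames are placeholders, while the cluster name (mycluster) and the edits directory follow the paths that appear in the log above:

```xml
<!-- hdfs-site.xml: quorum journal configuration (hostnames are placeholders) -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://journal1:8485;journal2:8485;journal3:8485/mycluster</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/home/lin/hadoop-2.5.2/journal</value>
</property>
```

Before starting the NameNodes, `jps` on each listed host should show a JournalNode process; `hadoop-daemon.sh start journalnode` starts one by hand if it is missing.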

Error 2: When starting HBase, the HMaster process dies. The error is as follows:

2015-06-29 20:38:39,287 INFO [master1:60000.activeMasterManager] master.RegionStates: Onlined 1588230740 on ubuntu.slave1,16020,1435580161840
2015-06-29 20:38:39,288 INFO [master1:60000.activeMasterManager] master.ServerManager: AssignmentManager hasn't finished failover cleanup; waiting
2015-06-29 20:38:39,289 INFO [master1:60000.activeMasterManager] master.HMaster: hbase:meta assigned=0, rit=false, location=ubuntu.slave1,16020,1435580161840
2015-06-29 20:38:39,532 INFO [master1:60000.activeMasterManager] hbase.MetaMigrationConvertingToPB: hbase:meta doesn't have any entries to update.
2015-06-29 20:38:39,532 INFO [master1:60000.activeMasterManager] hbase.MetaMigrationConvertingToPB: META already up-to date with PB serialization
2015-06-29 20:38:39,771 INFO [master1:60000.activeMasterManager] master.AssignmentManager: Clean cluster startup. Assigning user regions
2015-06-29 20:38:39,991 INFO [master1:60000.activeMasterManager] master.AssignmentManager: Joined the cluster in 459ms, failover=false
2015-06-29 20:38:40,006 INFO [master1:60000.activeMasterManager] master.TableNamespaceManager: Namespace table not found. Creating...
2015-06-29 20:38:40,093 FATAL [master1:60000.activeMasterManager] master.HMaster: Failed to become active master
org.apache.hadoop.hbase.TableExistsException: hbase:namespace
	at org.apache.hadoop.hbase.master.handler.CreateTableHandler.checkAndSetEnablingTable(CreateTableHandler.java:151)
	at org.apache.hadoop.hbase.master.handler.CreateTableHandler.prepare(CreateTableHandler.java:124)
	at org.apache.hadoop.hbase.master.TableNamespaceManager.createNamespaceTable(TableNamespaceManager.java:233)
	at org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:86)
	at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:871)
	at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:722)
	at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:165)
	at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1428)
	at java.lang.Thread.run(Thread.java:745)
2015-06-29 20:38:40,095 FATAL [master1:60000.activeMasterManager] master.HMaster: Master server abort: loaded coprocessors are: []
2015-06-29 20:38:40,095 FATAL [master1:60000.activeMasterManager] master.HMaster: Unhandled exception. Starting shutdown.
org.apache.hadoop.hbase.TableExistsException: hbase:namespace
	at org.apache.hadoop.hbase.master.handler.CreateTableHandler.checkAndSetEnablingTable(CreateTableHandler.java:151)
	at org.apache.hadoop.hbase.master.handler.CreateTableHandler.prepare(CreateTableHandler.java:124)
	at org.apache.hadoop.hbase.master.TableNamespaceManager.createNamespaceTable(TableNamespaceManager.java:233)
	at org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:86)
	at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:871)
	at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:722)
	at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:165)
	at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1428)
	at java.lang.Thread.run(Thread.java:745)
2015-06-29 20:38:40,095 INFO [master1:60000.activeMasterManager] regionserver.HRegionServer: STOPPED: Unhandled exception. Starting shutdown.
2015-06-29 20:38:40,096 INFO [master/master1/192.168.1.107:60000] regionserver.HRegionServer: Stopping infoServer
2015-06-29 20:38:40,110 INFO [master/master1/192.168.1.107:60000] mortbay.log: Stopped SelectChannelConnector@0.0.0.0:16010
2015-06-29 20:38:40,212 INFO [master/master1/192.168.1.107:60000] regionserver.HRegionServer: Stopping server master1,60000,1435581499646
2015-06-29 20:38:40,218 INFO [master/master1/192.168.1.107:60000] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x14e3f345e250008
2015-06-29 20:38:40,234 INFO [master/master1/192.168.1.107:60000] zookeeper.ZooKeeper: Session: 0x14e3f345e250008 closed
2015-06-29 20:38:40,234 INFO [master/master1/192.168.1.107:60000-EventThread] zookeeper.ClientCnxn: EventThread shut down
2015-06-29 20:38:40,295 INFO [master1,60000,1435581499646.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor: master1,60000,1435581499646.splitLogManagerTimeoutMonitor exiting
2015-06-29 20:38:40,338 INFO [master/master1/192.168.1.107:60000] regionserver.HRegionServer: stopping server master1,60000,1435581499646; all regions closed.
2015-06-29 20:38:40,338 INFO [master1,60000,1435581499646-BalancerChore] balancer.BalancerChore: master1,60000,1435581499646-BalancerChore exiting
2015-06-29 20:38:40,339 INFO [master1,60000,1435581499646-ClusterStatusChore] balancer.ClusterStatusChore: master1,60000,1435581499646-ClusterStatusChore exiting

Analysis: the failure is in HBase's namespace table. HBase maintains a metadata table for its own metadata management; here that table already exists yet conflicts with something. With what, exactly? We know HBase relies on ZooKeeper to coordinate its metadata management, so it is reasonable to suspect that stale state in ZooKeeper caused the problem. Removing all the system data and reformatting Hadoop resolved the issue.
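One way to carry out that kind of reset can be sketched as follows. This is destructive (it erases all HBase data) and the ZooKeeper server address is only an example, not a record of the exact commands used here:

```shell
stop-hbase.sh                          # stop HBase first
zkCli.sh -server ubuntu.master:2181    # then, inside the ZooKeeper shell: rmr /hbase
hdfs dfs -rm -r /hbase                 # remove HBase's root directory in HDFS
start-hbase.sh                         # HBase recreates hbase:namespace on a clean start
```

With both the /hbase znode and the HDFS data gone, the HMaster no longer sees a pre-existing hbase:namespace table on startup.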

Error 3: An error occurred while the client was reading data. The error is as follows:

ERROR: java.lang.RuntimeException: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=7, exceptions:
Thu May 17:38:13 CST, org.apache.hadoop.hbase.client.ScannerCallable@16e7eff, java.net.ConnectException: Connection refused
Thu May 17:39:45 CST, org.apache.hadoop.hbase.client.ScannerCallable@16e7eff, java.net.ConnectException: Connection refused
Thu May 17:40:04 CST, org.apache.hadoop.hbase.client.ScannerCallable@16e7eff, org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find region for SEC, 508872:12:12:6030,99999999999999 after 7 tries.
Thu May 17:40:23 CST, org.apache.hadoop.hbase.client.ScannerCallable@16e7eff, org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find region for SEC, 508872:12:12:6030,99999999999999 after 7 tries.
Thu May 17:40:43 CST, org.apache.hadoop.hbase.client.ScannerCallable@16e7eff, org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find region for SEC, 508872:12:12:6030,99999999999999 after 7 tries.
Thu May 17:41:03 CST, org.apache.hadoop.hbase.client.ScannerCallable@16e7eff, org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find region for SEC, 508872:12:12:6030,99999999999999 after 7 tries.
Thu May 17:41:26 CST, org.apache.hadoop.hbase.client.ScannerCallable@16e7eff, org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find region for SEC, 508872:12:12:6030,99999999999999 after 7 tries.

INFO zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x14d4b624e860006, likely server has closed socket, closing socket connection and attempting reconnect
15/05/13 18:39:48 INFO zookeeper.ClientCnxn: Opening socket connection to server ubuntu.slave6/192.168.1.122:2181
15/05/13 18:39:48 INFO zookeeper.ClientCnxn: Socket connection established to ubuntu.slave6/192.168.1.122:2181, initiating session
15/05/13 18:40:01 INFO zookeeper.ClientCnxn: Client session timed out, have not heard from server in 13335ms for sessionid 0x14d4b624e860006, closing socket connection and attempting reconnect
15/05/13 18:40:02 INFO zookeeper.ClientCnxn: Opening socket connection to server ubuntu.slave5/192.168.1.124:2181
15/05/13 18:40:02 INFO zookeeper.ClientCnxn: Socket connection established to ubuntu.slave5/192.168.1.124:2181, initiating session
15/05/13 18:40:07 INFO zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x14d4b624e860006, likely server has closed socket, closing socket connection and attempting reconnect
15/05/13 18:40:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ubuntu.master/192.168.1.103:2181
15/05/13 18:40:07 INFO zookeeper.ClientCnxn: Socket connection established to ubuntu.master/192.168.1.103:2181, initiating session
15/05/13 18:40:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ubuntu.master/192.168.1.103:2181, sessionid = 0x14d4b624e860006, negotiated timeout = 40000

15/05/17 09:39:56 INFO zookeeper.ClientCnxn: Socket connection established to ubuntu.master/192.168.1.103:2181, initiating session
15/05/17 09:39:56 INFO zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x14d5f7dc853000c, likely server has closed socket, closing socket connection and attempting reconnect
15/05/17 09:39:57 INFO zookeeper.ClientCnxn: Opening socket connection to server ubuntu.master/192.168.1.103:2181. Will not attempt to authenticate using SASL (unable to locate login configuration)

Analysis: the client connection was refused. Checking the system processes revealed that the RegionServer process on one node had died while the client was connecting to the RegionServer on that node, which caused the error. Restarting the process resolved the problem.
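The check and fix can be sketched as follows, run on the suspect node (the daemon script name is the standard one shipped with HBase):

```shell
jps                                    # HRegionServer should appear on every worker node
hbase-daemon.sh start regionserver     # restart the dead RegionServer on the affected node
```

If `jps` shows no HRegionServer on a node that should host one, the client's region lookups against that node will fail exactly as in the log above.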

Error 4: A problem that arose during programming. The error is as follows:

org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 634 actions: servers with issues: ubuntu.slave5:60020,
	at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1674)
	at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1450)
	at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
	at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:772)
	at org.apache.hadoop.hbase.client.HTable.put(HTable.java:748)
	at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:123)
	at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:84)
	at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:587)
	at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
	at newframe.NewFrameTwo$Reduce.reduce(NewFrameTwo.java:176)
	at newframe.NewFrameTwo$Reduce.reduce(NewFrameTwo.java:1)
	at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
	at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:649)
	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:417)
	at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
	at org.apache.hadoop.mapred.Child.main(Child.java:249)

This happened mainly because I was running against version 2 while still using code written for version 1, and the version 2 interface had changed slightly. Checking the specific function interfaces and correcting the code resolved the problem.
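As one illustration of this kind of interface drift, here is a hedged sketch assuming the HBase client API is the part that changed; the table, row, column family, and value names are invented for the example. The pre-1.0 `HTable` constructor style was replaced by connection-factory calls in the 1.0+ client:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PutExample {
    public static void main(String[] args) throws Exception {
        // Old (pre-1.0) style:  HTable table = new HTable(conf, "mytable");
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("mytable"))) {
            Put put = new Put(Bytes.toBytes("row1"));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes("value"));
            table.put(put);   // buffered writes are flushed when the Table is closed
        }
    }
}
```

Comparing the old and new signatures side by side like this makes it easy to spot which calls need updating when moving code between versions.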

Error 5: A jar package cannot be found.



Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
	at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
	... more
Caused by: java.lang.NoClassDefFoundError: io/netty/channel/EventLoopGroup
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:348)
	at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:1844)
	at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1809)
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1903)
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1929)
	at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:631)
	... more
Caused by: java.lang.ClassNotFoundException: io.netty.channel.EventLoopGroup
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	... more

The Netty package is missing from the program's classpath, so the classes that depend on it cannot be found. Adding the Netty jar to the program resolves the problem.
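A sketch of the classpath fix; the exact Netty jar name/version and the `MyClient` class are examples, not taken from the original setup:

```shell
# Ship the Netty jar that HBase bundles in its lib directory:
export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$HBASE_HOME/lib/netty-all-4.0.23.Final.jar"
# Or, for a standalone client, put HBase's whole lib directory on the classpath:
java -cp "myapp.jar:$HBASE_HOME/lib/*" MyClient
```

In an IDE or build tool, the equivalent is adding the Netty artifact (package `io.netty`) as a dependency of the project.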

Error 6: An error caused by missing ZooKeeper parameters.

Create table: sed
15/04/29 22:32:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/04/29 22:32:56 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
15/04/29 22:32:56 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=hconnection
15/04/29 22:32:56 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 6240@dell-pc
15/04/29 22:32:56 INFO zookeeper.ClientCnxn: Opening socket connection to server 127.0.0.1/127.0.0.1:2181. Will not attempt to authenticate using SASL (unable to locate login configuration)
15/04/29 22:32:57 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused: no further information
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
15/04/29 22:32:57 WARN zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
15/04/29 22:32:57 INFO util.RetryCounter: Sleeping 2000ms before retry #1...
15/04/29 22:32:58 INFO zookeeper.ClientCnxn: Opening socket connection to server 127.0.0.1/127.0.0.1:2181. Will not attempt to authenticate using SASL (unable to locate login configuration)
15/04/29 22:32:59 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused: no further information
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)

The ZooKeeper parameters were not configured in the program, so the client fell back to localhost:2181. Passing the specific ZooKeeper parameters to the program through the Configuration class resolved the problem.
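A minimal sketch of that fix; the property keys are the standard HBase client settings, while the quorum hostnames below are examples drawn from the cluster names in the earlier logs:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

Configuration conf = HBaseConfiguration.create();
// Without these, the client defaults to localhost:2181, as seen in the log above.
conf.set("hbase.zookeeper.quorum", "ubuntu.master,ubuntu.slave5,ubuntu.slave6");
conf.set("hbase.zookeeper.property.clientPort", "2181");
```

Alternatively, placing the cluster's hbase-site.xml on the program's classpath lets `HBaseConfiguration.create()` pick these values up automatically.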
