Exception One:
2014-03-13 11:10:23,665 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: linux-hadoop-38/10.10.208.38:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-03-13 11:10:24,667 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: linux-hadoop-38/10.10.208.38:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-03-13 11:10:25,667 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: linux-hadoop-38/10.10.208.38:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-03-13 11:10:26,669 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: linux-hadoop-38/10.10.208.38:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-03-13 11:10:27,670 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: linux-hadoop-38/10.10.208.38:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-03-13 11:10:28,671 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: linux-hadoop-38/10.10.208.38:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-03-13 11:10:29,672 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: linux-hadoop-38/10.10.208.38:9000. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-03-13 11:10:30,674 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: linux-hadoop-38/10.10.208.38:9000. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-03-13 11:10:31,675 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: linux-hadoop-38/10.10.208.38:9000. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-03-13 11:10:32,676 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: linux-hadoop-38/10.10.208.38:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-03-13 11:10:32,677 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: linux-hadoop-38/10.10.208.38:9000
Solution:
1. Pinging linux-hadoop-38 succeeds, but telnet linux-hadoop-38 9000 fails, which indicates the firewall on that host is blocking the NameNode port.
2. On the linux-hadoop-38 host, stop the firewall with /etc/init.d/iptables stop, which prints:
iptables: Flushing firewall rules: [ OK ]
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Unloading modules: [ OK ]
3. Restart the DataNode; see the sketch below.
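A minimal shell sketch of steps 1-3, assuming a CentOS/RHEL-style init system as implied by the /etc/init.d/iptables path above (the chkconfig line, which keeps the firewall off across reboots, is an assumption):

# 1. Host is reachable, but the NameNode RPC port is not.
ping -c 3 linux-hadoop-38
telnet linux-hadoop-38 9000      # hangs or is refused while the firewall is up

# 2. On linux-hadoop-38: stop iptables now and keep it off after reboots.
/etc/init.d/iptables stop
chkconfig iptables off           # assumption: chkconfig is available (CentOS/RHEL)

# 3. Restart the DataNode that was failing to connect.
hadoop-daemon.sh stop datanode
hadoop-daemon.sh start datanode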
Exception Two:
2014-03-13 11:26:30,788 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool BP-1257313099-10.10.208.38-1394679083528 (storage id DS-743638901-127.0.0.1-50010-1394616048958) service to linux-hadoop-38/10.10.208.38:9000
java.io.IOException: Incompatible clusterIDs in /usr/local/hadoop/tmp/dfs/data: namenode clusterID = CID-8e201022-6faa-440a-b61c-290e4ccfb006; datanode clusterID = clustername
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:916)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:887)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:309)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:218)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:660)
at java.lang.Thread.run(Thread.java:662)
Solution:
1. On the master, the directory configured as dfs.namenode.name.dir in hdfs-site.xml contains a current folder holding a VERSION file, which reads as follows:
#Thu 10:51:23 CST 2014
namespaceID=1615021223
clusterID=CID-8e201022-6faa-440a-b61c-290e4ccfb006
cTime=0
storageType=NAME_NODE
blockpoolID=BP-1257313099-10.10.208.38-1394679083528
layoutVersion=-40
2. On the slave, the directory configured as hadoop.tmp.dir in core-site.xml contains a dfs/data/current directory, which also holds a VERSION file, with this content:
#Wed Mar 17:23:04 CST 2014
storageID=DS-414973036-10.10.208.54-50010-1394616184818
clusterID=clustername
cTime=0
storageType=DATA_NODE
layoutVersion=-40
3. Comparing the two files, the clusterID values obviously differ, and that mismatch is what causes the failure. Delete the stale content on the slave, restart the DataNode, and the problem is fixed; see the sketch below.
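A minimal shell sketch of the check and fix, using the paths quoted above (the namenode directory is a placeholder for whatever dfs.namenode.name.dir points at on your master):

# Compare the two clusterIDs.
grep clusterID /path/to/namenode/current/VERSION                 # placeholder path
grep clusterID /usr/local/hadoop/tmp/dfs/data/current/VERSION

# Option A (what the article does): wipe the slave's data directory.
# Destructive: any blocks stored there are lost.
rm -rf /usr/local/hadoop/tmp/dfs/data

# Option B: rewrite the slave's clusterID to match the namenode's.
sed -i 's/^clusterID=.*/clusterID=CID-8e201022-6faa-440a-b61c-290e4ccfb006/' \
    /usr/local/hadoop/tmp/dfs/data/current/VERSION

# Restart the DataNode.
hadoop-daemon.sh stop datanode
hadoop-daemon.sh start datanode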
References: http://www.linuxidc.com/Linux/2014-03/98598.htm
Exception Three:
2014-03-13 12:34:46,828 FATAL org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Failed to initialize mapreduce_shuffle
java.lang.RuntimeException: No class defiend for mapreduce_shuffle
at org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices.init(AuxServices.java:94)
at org.apache.hadoop.yarn.service.CompositeService.init(CompositeService.java:58)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.init(ContainerManagerImpl.java:181)
at org.apache.hadoop.yarn.service.CompositeService.init(CompositeService.java:58)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.init(NodeManager.java:185)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:328)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:351)
2014-03-13 12:34:46,830 FATAL org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting NodeManager
java.lang.RuntimeException: No class defiend for mapreduce_shuffle
at org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices.init(AuxServices.java:94)
at org.apache.hadoop.yarn.service.CompositeService.init(CompositeService.java:58)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.init(ContainerManagerImpl.java:181)
at org.apache.hadoop.yarn.service.CompositeService.init(CompositeService.java:58)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.init(NodeManager.java:185)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:328)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:351)
2014-03-13 12:34:46,846 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: ResourceCalculatorPlugin is unavailable on this system. org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl is disabled.
Solution:
1. yarn-site.xml is misconfigured:
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
2. Modify the value to:
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce.shuffle</value>
</property>
3. Restart the service; see the note and sketch below.
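Note that the accepted value flipped between releases: Hadoop 2.2 and later expect mapreduce_shuffle, while the 2.0/2.1 line this article targets used mapreduce.shuffle, so match the spelling to your version. A minimal restart sketch (yarn-daemon.sh is the standard Hadoop 2.x sbin script):

# Restart the NodeManager so the corrected yarn-site.xml takes effect.
yarn-daemon.sh stop nodemanager
yarn-daemon.sh start nodemanager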
Warning:
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Solution:
See http://www.linuxidc.com/Linux/2014-03/98599.htm; a sketch of a common fix follows.
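Beyond the linked article, a common fix is to point the JVM at Hadoop's native libraries explicitly. A minimal sketch for hadoop-env.sh, assuming the /usr/local/hadoop prefix used elsewhere in this article:

# Let Hadoop find its native .so files instead of falling back to builtin-java classes.
export HADOOP_COMMON_LIB_NATIVE_DIR=/usr/local/hadoop/lib/native
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=/usr/local/hadoop/lib/native"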
Exception Four:
14/03/13 17:25:41 ERROR lzo.GPLNativeCodeLoader: Could not load native gpl library
java.lang.UnsatisfiedLinkError: no gplcompression in java.library.path
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1734)
at java.lang.Runtime.loadLibrary0(Runtime.java:823)
at java.lang.System.loadLibrary(System.java:1028)
at com.hadoop.compression.lzo.GPLNativeCodeLoader.<clinit>(GPLNativeCodeLoader.java:32)
at com.hadoop.compression.lzo.LzoCodec.<clinit>(LzoCodec.java:67)
at com.hadoop.compression.lzo.LzoIndexer.<init>(LzoIndexer.java:36)
at com.hadoop.compression.lzo.LzoIndexer.main(LzoIndexer.java:134)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
14/03/13 17:25:41 ERROR lzo.LzoCodec: Cannot load native-lzo without native-hadoop
14/03/13 17:25:43 INFO lzo.LzoIndexer: [INDEX] LZO Indexing file /test2.lzo, size 0.00 GB...
Exception in thread "main" java.lang.RuntimeException: native-lzo library not available
at com.hadoop.compression.lzo.LzopCodec.createDecompressor(LzopCodec.java:91)
at com.hadoop.compression.lzo.LzoIndex.createIndex(LzoIndex.java:222)
at com.hadoop.compression.lzo.LzoIndexer.indexSingleFile(LzoIndexer.java:117)
at com.hadoop.compression.lzo.LzoIndexer.indexInternal(LzoIndexer.java:98)
at com.hadoop.compression.lzo.LzoIndexer.index(LzoIndexer.java:52)
at com.hadoop.compression.lzo.LzoIndexer.main(LzoIndexer.java:137)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
Solution: clearly, the native-lzo library is missing.
Compile and install LZO (see http://www.linuxidc.com/linux/2014-03/98601.htm); a build sketch follows.
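A minimal build sketch under stated assumptions (the lzo 2.06 tarball and the default /usr/local/lib prefix are assumptions; the linked article covers the full procedure, including building hadoop-lzo itself):

# Build and install the LZO library.
tar zxf lzo-2.06.tar.gz && cd lzo-2.06
./configure --enable-shared
make && make install                 # installs to /usr/local/lib by default

# Make the shared library visible to the dynamic linker.
echo /usr/local/lib >> /etc/ld.so.conf.d/lzo.conf
ldconfig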
Exception Five:
14/03/17 10:23:59 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/hadoop/.staging/job_1394702706596_0003
java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found.
at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:134)
at org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:174)
at org.apache.hadoop.mapreduce.lib.input.TextInputFormat.isSplitable(TextInputFormat.java:58)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:276)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:468)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:485)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:369)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1269)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1266)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1266)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1287)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:84)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:68)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
Caused by: java.lang.ClassNotFoundException: Class com.hadoop.compression.lzo.LzoCodec not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1680)
at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:127)
... more
Interim solution:
Copy /usr/local/hadoop/lib/hadoop-lzo-0.4.10.jar to /usr/local/jdk/lib and restart the machine; a sketch follows.
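As a sketch, the interim copy from the article plus an arguably cleaner alternative that leaves the JDK untouched (the HADOOP_CLASSPATH line in hadoop-env.sh is an assumption, not what the article did):

# Interim fix from the article: copy the jar into the JDK's lib directory.
cp /usr/local/hadoop/lib/hadoop-lzo-0.4.10.jar /usr/local/jdk/lib/

# Alternative: put the jar on Hadoop's own classpath via hadoop-env.sh instead.
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/usr/local/hadoop/lib/hadoop-lzo-0.4.10.jar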