Hadoop installation: a variety of exceptions and their solutions


Exception one:

2014-03-13 11:10:23,665 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Linux-hadoop-38/10.10.208.38:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-03-13 11:10:24,667 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Linux-hadoop-38/10.10.208.38:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-03-13 11:10:25,667 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Linux-hadoop-38/10.10.208.38:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-03-13 11:10:26,669 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Linux-hadoop-38/10.10.208.38:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-03-13 11:10:27,670 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Linux-hadoop-38/10.10.208.38:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-03-13 11:10:28,671 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Linux-hadoop-38/10.10.208.38:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-03-13 11:10:29,672 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Linux-hadoop-38/10.10.208.38:9000. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-03-13 11:10:30,674 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Linux-hadoop-38/10.10.208.38:9000. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-03-13 11:10:31,675 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Linux-hadoop-38/10.10.208.38:9000. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-03-13 11:10:32,676 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Linux-hadoop-38/10.10.208.38:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-03-13 11:10:32,677 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: Linux-hadoop-38/10.10.208.38:9000
Solution:
1. ping Linux-hadoop-38 succeeds, but telnet Linux-hadoop-38 9000 fails: the firewall on the NameNode host is blocking port 9000.
2. On the Linux-hadoop-38 host, stop the firewall with /etc/init.d/iptables stop, which prints:
iptables: Flushing firewall rules: [ OK ]
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Unloading modules: [ OK ]

3. Restart the DataNode; the sketch below recaps the commands.
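
To make the diagnosis repeatable and the fix survive a reboot, here is a minimal sketch for a CentOS 6-era box (the chkconfig step and the port-only alternative in the comment are additions beyond the original steps):

ping -c 3 Linux-hadoop-38      # the host itself is reachable
telnet Linux-hadoop-38 9000    # blocked while iptables is up
/etc/init.d/iptables stop      # stop the firewall immediately
chkconfig iptables off         # keep it off across reboots
# Safer alternative: open only the NameNode port instead of disabling the firewall:
#   iptables -I INPUT -p tcp --dport 9000 -j ACCEPT && service iptables save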

Exception two:

2014-03-13 11:26:30,788 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool BP-1257313099-10.10.208.38-1394679083528 (storage id DS-743638901-127.0.0.1-50010-1394616048958) service to Linux-hadoop-38/10.10.208.38:9000
java.io.IOException: Incompatible clusterIDs in /usr/local/hadoop/tmp/dfs/data: namenode clusterID = CID-8e201022-6faa-440a-b61c-290e4ccfb006; datanode clusterID = clustername
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:916)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:887)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:309)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:218)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:660)
at java.lang.Thread.run(Thread.java:662)
Solution:
1. On the master, the directory configured as dfs.namenode.name.dir in hdfs-site.xml contains a current folder holding a VERSION file:
#Thu Mar 13 10:51:23 CST 2014
namespaceID=1615021223
clusterID=CID-8e201022-6faa-440a-b61c-290e4ccfb006
cTime=0
storageType=NAME_NODE
blockpoolID=BP-1257313099-10.10.208.38-1394679083528
layoutVersion=-40
2. On the slave, the directory configured as hadoop.tmp.dir in core-site.xml contains a dfs/data/current directory with its own VERSION file:
#Wed Mar 12 17:23:04 CST 2014
storageID=DS-414973036-10.10.208.54-50010-1394616184818
clusterID=clustername
cTime=0
storageType=DATA_NODE
layoutVersion=-40
3. Comparing the two files, the clusterIDs clearly differ. Delete the mismatched storage directory on the slave and restart the DataNode; a concrete sketch follows.
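
On the slave that works out to something like the following; a sketch assuming the storage path from the exception message, that $HADOOP_HOME points at the install, and a fresh cluster whose DataNode holds no blocks worth keeping (otherwise, edit the clusterID line in the slave's VERSION file to match the master instead of deleting):

grep clusterID /usr/local/hadoop/tmp/dfs/data/current/VERSION   # confirm the mismatch
rm -rf /usr/local/hadoop/tmp/dfs/data                           # wipe the stale DataNode storage
$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode               # re-registers with the NameNode's clusterID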

Reference: http://www.linuxidc.com/Linux/2014-03/98598.htm

Exception three:

2014-03-13 12:34:46,828 FATAL org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Failed to initialize mapreduce_shuffle
java.lang.RuntimeException: No class defiend for mapreduce_shuffle
at org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices.init(AuxServices.java:94)
at org.apache.hadoop.yarn.service.CompositeService.init(CompositeService.java:58)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.init(ContainerManagerImpl.java:181)
at org.apache.hadoop.yarn.service.CompositeService.init(CompositeService.java:58)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.init(NodeManager.java:185)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:328)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:351)
2014-03-13 12:34:46,830 FATAL org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting NodeManager
java.lang.RuntimeException: No class defiend for mapreduce_shuffle
at org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices.init(AuxServices.java:94)
at org.apache.hadoop.yarn.service.CompositeService.init(CompositeService.java:58)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.init(ContainerManagerImpl.java:181)
at org.apache.hadoop.yarn.service.CompositeService.init(CompositeService.java:58)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.init(NodeManager.java:185)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:328)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:351)
2014-03-13 12:34:46,846 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: ResourceCalculatorPlugin is unavailable on this system. org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl is disabled.
Solution:
1. yarn-site.xml contained a misconfigured value:
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
2. Revised to:
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce.shuffle</value>
</property>
3. Restart the NodeManager (see the sketch below).
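
Two hedged notes: on this release line the aux-service name has a companion class property (yarn.nodemanager.aux-services.mapreduce.shuffle.class, normally org.apache.hadoop.mapred.ShuffleHandler) worth checking at the same time, and on Hadoop 2.2+ the rename goes the other way (mapreduce_shuffle becomes the valid name), so the right value depends on the release. A sketch of the check-and-restart, assuming the standard Hadoop 2.x sbin layout:

grep -A 1 'aux-services' $HADOOP_HOME/etc/hadoop/yarn-site.xml   # confirm the corrected value
$HADOOP_HOME/sbin/yarn-daemon.sh stop nodemanager
$HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager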

Caveat:
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform ... using builtin-java classes where applicable

Solution:
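
The reference below has the full fix (rebuilding the native libraries for this platform). As a stopgap, pointing Hadoop at a lib/native directory that matches the machine's architecture usually silences the warning; a hedged sketch for hadoop-env.sh, assuming the libraries sit under $HADOOP_HOME/lib/native:

# Tell Hadoop where the platform-specific native libraries live
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"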

Reference: http://www.linuxidc.com/Linux/2014-03/98599.htm

Exception four:

14/03/13 17:25:41 ERROR lzo.GPLNativeCodeLoader: Could not load native gpl library
java.lang.UnsatisfiedLinkError: no gplcompression in java.library.path
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1734)
at java.lang.Runtime.loadLibrary0(Runtime.java:823)
at java.lang.System.loadLibrary(System.java:1028)
at com.hadoop.compression.lzo.GPLNativeCodeLoader.<clinit>(GPLNativeCodeLoader.java:32)
at com.hadoop.compression.lzo.LzoCodec.<clinit>(LzoCodec.java:67)
at com.hadoop.compression.lzo.LzoIndexer.<init>(LzoIndexer.java:36)
at com.hadoop.compression.lzo.LzoIndexer.main(LzoIndexer.java:134)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
14/03/13 17:25:41 ERROR lzo.LzoCodec: Cannot load native-lzo without native-hadoop
14/03/13 17:25:43 INFO lzo.LzoIndexer: [INDEX] LZO Indexing file /test2.lzo, size 0.00 GB...
Exception in thread "main" java.lang.RuntimeException: native-lzo library not available
at com.hadoop.compression.lzo.LzopCodec.createDecompressor(LzopCodec.java:91)
at com.hadoop.compression.lzo.LzoIndex.createIndex(LzoIndex.java:222)
at com.hadoop.compression.lzo.LzoIndexer.indexSingleFile(LzoIndexer.java:117)
at com.hadoop.compression.lzo.LzoIndexer.indexInternal(LzoIndexer.java:98)
at com.hadoop.compression.lzo.LzoIndexer.index(LzoIndexer.java:52)
at com.hadoop.compression.lzo.LzoIndexer.main(LzoIndexer.java:137)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)

Solution: clearly the native-lzo library is missing.
Compile and install lzo / hadoop-lzo (a rough sketch follows): http://www.linuxidc.com/Linux/2014-03/98601.htm
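
For reference, a rough sketch of that build on a CentOS-style box; the repository URL, versions, and target paths here are assumptions (the linked article has the exact steps the author followed):

yum install -y lzo lzo-devel gcc                      # native LZO library and headers
git clone https://github.com/twitter/hadoop-lzo.git   # hadoop-lzo sources (assumed repo)
cd hadoop-lzo && mvn clean package -Dmaven.test.skip=true
# Put the freshly built pieces where Hadoop looks for them (assumed 64-bit Linux layout)
cp target/native/Linux-amd64-64/lib/libgplcompression.* $HADOOP_HOME/lib/native/
cp target/hadoop-lzo-*.jar $HADOOP_HOME/lib/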

Exception five:

14/03/17 10:23:59 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/hadoop/.staging/job_1394702706596_0003
java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found.
at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:134)
at org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:174)
at org.apache.hadoop.mapreduce.lib.input.TextInputFormat.isSplitable(TextInputFormat.java:58)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:276)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:468)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:485)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:369)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1269)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1266)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1266)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1287)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:84)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:68)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
Caused by: java.lang.ClassNotFoundException: Class com.hadoop.compression.lzo.LzoCodec not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1680)
at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:127)
... 26 more
Interim solution:
Copy /usr/local/hadoop/lib/hadoop-lzo-0.4.10.jar into /usr/local/jdk/lib and restart; a cleaner alternative is sketched below.
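
A cleaner interim fix (an alternative not in the original) is to put the jar on Hadoop's own classpath rather than under the JDK; a sketch assuming the jar path above, added to hadoop-env.sh or exported before submitting the job:

export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/usr/local/hadoop/lib/hadoop-lzo-0.4.10.jar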
