Spark compile-time issues

I had already compiled and deployed the Spark environment, but when I tried to run a simple wordcount test program, it unexpectedly exited. After searching the internet for a long time I found the blog post below; after recompiling and reinstalling, everything worked normally.

I'm using Spark 1.0.0 and Hadoop 2.4.1 here.
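
For context, a minimal sketch of the kind of wordcount test I was running in spark-shell (the HDFS path matches the one used below; the exact program is illustrative):

scala> val lines = sc.textFile("hdfs://master:9001/spark/spark02/directory/")
scala> val counts = lines.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
scala> counts.take(10).foreach(println)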

scala> val rdd1 = sc.textFile("hdfs://master:9001/spark/spark02/directory/")
14/07/19 17:09:36 INFO MemoryStore: ensureFreeSpace(138763) called with curMem=0, maxMem=309225062
14/07/19 17:09:36 INFO MemoryStore: Block broadcast_0 stored as values to memory (estimated size 135.5 KB, free 294.8 MB)
rdd1: org.apache.spark.rdd.RDD[String] = MappedRDD[1] at textFile at <console>:12

scala> 14/07/19 17:09:45 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@...:42733/user/Executor#-2006581551] with ID 1
14/07/19 17:09:48 INFO BlockManagerInfo: Registering block manager slave01:60074 with 593.9 MB RAM


scala> rdd1.toDebugString
java.lang.VerifyError: class org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$SetOwnerRequestProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
        at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
        at java.lang.Class.getDeclaredMethods0(Native Method)
        at java.lang.Class.privateGetDeclaredMethods(Class.java:2531)
        at java.lang.Class.privateGetPublicMethods(Class.java:2651)
        at java.lang.Class.privateGetPublicMethods(Class.java:2661)
        at java.lang.Class.getMethods(Class.java:1467)
        at sun.misc.ProxyGenerator.generateClassFile(ProxyGenerator.java:426)
        at sun.misc.ProxyGenerator.generateProxyClass(ProxyGenerator.java:323)
        at java.lang.reflect.Proxy.getProxyClass0(Proxy.java:636)
        at java.lang.reflect.Proxy.newProxyInstance(Proxy.java:722)
        at org.apache.hadoop.ipc.ProtobufRpcEngine.getProxy(ProtobufRpcEngine.java:92)
        at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:537)
        at org.apache.hadoop.hdfs.NameNodeProxies.createNNProxyWithClientProtocol(NameNodeProxies.java:328)
        at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:235)
        at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:139)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:510)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:453)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:136)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2433)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:287)
        at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:221)
        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:270)
        at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:172)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.rdd.RDD.toDebugString(RDD.scala)
        at $iwC$$iwC.<init>(<console>:15)
        at $iwC$$iwC.<init>(<console>:20)
        ...
        at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:936)
        at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:884)
        at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
        at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:884)
        at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:982)
        at org.apache.spark.repl.Main$.main(Main.scala:31)
        at org.apache.spark.repl.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:292)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:55)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

A quick Google search turned up this:

I've received the same error with Spark built using Maven. It turns out that mesos-0.13.0 depends on protobuf-2.4.1, which is causing the clash at runtime. The protobuf included by Akka is shaded and doesn't cause any problems.

The solution is to update the Mesos dependency to 0.18.0 in Spark's pom.xml. Rebuilding the JAR with this configuration solves the issue.

-anant
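
Before rebuilding, a quick way to see which jar a protobuf class is actually loaded from is to ask the classloader in spark-shell. This diagnostic is my sketch, not part of the original post or the quoted answer:

scala> // Where does the protobuf runtime come from?
scala> classOf[com.google.protobuf.Message].getProtectionDomain.getCodeSource.getLocation
scala> // And the Hadoop-generated class named in the VerifyError above?
scala> Class.forName("org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos").getProtectionDomain.getCodeSource.getLocation

If both come out of the same assembly jar, that jar was built with the wrong protobuf: Hadoop 2.x generates its protocol classes against protobuf 2.5.0, and running them on a bundled protobuf 2.4.1 produces exactly this getUnknownFields VerifyError.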

I looked at the pom.xml file: in spark-1.0.0 the Mesos dependency is indeed already at version 0.18.1, so a too-old Mesos version should not be the problem here. However, looking further through pom.xml, I did find two different protobuf versions.

In one place, protobuf 2.4.1 was still being used.
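A general-purpose way to track down duplicate protobuf versions in a Maven build (my suggestion, not a step from the original post) is to run, from the Spark source root:

mvn dependency:tree -Dincludes=com.google.protobuf:protobuf-java

This prints every module's dependency path to protobuf-java, so a stray 2.4.1 sitting next to 2.5.0 shows up immediately.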


My spark-1.0.0 was compiled with Maven, and I suspect something went wrong during that build, causing the error above. Since I couldn't find any other solution, I recompiled Spark with make-distribution.sh. After a long wait the build finally finished; I unpacked the new distribution, redeployed it, and ran the same command again, and the error above was gone.
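
For the record, for spark-1.0.0 that rebuild is along these lines (the flags follow the Spark 1.0 build instructions; adjust the Hadoop version to match your cluster):

./make-distribution.sh --hadoop 2.4.1 --with-yarn --tgz

This produces a fresh dist/ directory (and a tarball, with --tgz) whose assembly jar is built against the matching Hadoop and protobuf versions.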

There may be a problem with Maven-compiled Spark.

Transferred from: http://blog.csdn.net/smallboy2011/article/details/37965083


This article is from the "Gentleman's book" blog; please be sure to keep this source: http://youling87.blog.51cto.com/5271987/1619806
