Zeppelin using the Spark interpreter


Zeppelin ships with a local Spark by default and does not depend on any cluster: just download the binary package, unzip it, and it is ready to use.
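For reference, a minimal install might look like the following (the package name follows the Apache download convention for 0.6.2; adjust the version as needed):

wget http://archive.apache.org/dist/zeppelin/zeppelin-0.6.2/zeppelin-0.6.2-bin-all.tgz
tar -zxf zeppelin-0.6.2-bin-all.tgz
cd zeppelin-0.6.2-bin-all
# start the daemon; the web UI listens on port 8080 by default
bin/zeppelin-daemon.sh start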

To use a separate Spark cluster in YARN mode instead, configure the following.

Configuration:

vi conf/zeppelin-env.sh

Add:

export SPARK_HOME=/usr/crh/current/spark-client
export SPARK_SUBMIT_OPTIONS="--driver-memory 512M --executor-memory 1G"
export HADOOP_CONF_DIR=/etc/hadoop/conf
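These variables only take effect after Zeppelin is restarted, for example:

# restart Zeppelin so the interpreter is relaunched with the new environment
bin/zeppelin-daemon.sh restart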

Zeppelin Interpreter Configuration

Note: restart the interpreter after changing its settings.

In the interpreter settings, set the master property under Properties:
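For YARN client mode, a typical Properties table looks roughly like this (only master is essential here; the other values are illustrative):

master                        yarn-client
spark.app.name                Zeppelin
spark.executor.memory         1g
zeppelin.spark.useHiveContext true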

New Notebook

Tip: a few months ago Zeppelin was at version 0.5.6; the latest is now 0.6.2. In Zeppelin 0.5.6 every notebook paragraph must start with %spark, while in 0.6.2 a paragraph with no interpreter directive runs as Scala by default.
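For example, the same Scala paragraph in the two versions (sc is the SparkContext that the interpreter provides):

%spark
sc.version    // Zeppelin 0.5.6: the %spark directive is required

sc.version    // Zeppelin 0.6.2: no directive needed, paragraphs default to Scala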

If %spark is omitted in Zeppelin 0.5.6, the paragraph fails with the following error:

Connect to 'databank:4300' failed

Running the SQL paragraph:

%spark.sql
select count(*) from tc.gjl_test0

Error:

com.fasterxml.jackson.databind.JsonMappingException: Could not find creator property with name 'id' (in class org.apache.spark.rdd.RDDOperationScope)
 at [Source: {"id": "2", "name": "ConvertToSafe"}; line: 1, column: 1]
    at com.fasterxml.jackson.databind.JsonMappingException.from(JsonMappingException.java:148)
    at com.fasterxml.jackson.databind.DeserializationContext.mappingException(DeserializationContext.java:843)
    at com.fasterxml.jackson.databind.deser.BeanDeserializerFactory.addBeanProps(BeanDeserializerFactory.java:533)
    at com.fasterxml.jackson.databind.deser.BeanDeserializerFactory.buildBeanDeserializer(BeanDeserializerFactory.java:220)
    at com.fasterxml.jackson.databind.deser.BeanDeserializerFactory.createBeanDeserializer(BeanDeserializerFactory.java:143)
    at com.fasterxml.jackson.databind.deser.DeserializerCache._createDeserializer2(DeserializerCache.java:409)
    at com.fasterxml.jackson.databind.deser.DeserializerCache._createDeserializer(DeserializerCache.java:358)
    at com.fasterxml.jackson.databind.deser.DeserializerCache._createAndCache2(DeserializerCache.java:265)
    at com.fasterxml.jackson.databind.deser.DeserializerCache._createAndCacheValueDeserializer(DeserializerCache.java:245)
    at com.fasterxml.jackson.databind.deser.DeserializerCache.findValueDeserializer(DeserializerCache.java:143)
    at com.fasterxml.jackson.databind.DeserializationContext.findRootValueDeserializer(DeserializationContext.java:439)
    at com.fasterxml.jackson.databind.ObjectMapper._findRootDeserializer(ObjectMapper.java:3666)
    at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3558)
    at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2578)
    at org.apache.spark.rdd.RDDOperationScope$.fromJson(RDDOperationScope.scala:85)
    at org.apache.spark.rdd.RDDOperationScope$$anonfun$5.apply(RDDOperationScope.scala:136)
    at org.apache.spark.rdd.RDDOperationScope$$anonfun$5.apply(RDDOperationScope.scala:136)
    at scala.Option.map(Option.scala:145)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:136)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
    at org.apache.spark.sql.execution.ConvertToSafe.doExecute(rowFormatConverters.scala:56)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:187)
    at org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:165)
    at org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:174)
    at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
    at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
    at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:2086)
    at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$execute$1(DataFrame.scala:1498)
    at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$collect(DataFrame.scala:1505)
    at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1375)
    at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1374)
    at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:2099)
    at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1374)
    at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1456)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.zeppelin.spark.ZeppelinContext.showDF(ZeppelinContext.java:297)
    at org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:144)
    at org.apache.zeppelin.interpreter.ClassloaderInterpreter.interpret(ClassloaderInterpreter.java:57)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:300)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:169)
    at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:134)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Cause: a Jackson version conflict: Zeppelin bundles Jackson 2.5.x, while this Spark build expects 2.4.4.

Enter the /opt/zeppelin-0.5.6-incubating-bin-all directory:

# ls lib | grep jackson
jackson-annotations-2.5.0.jar
jackson-core-2.5.3.jar
jackson-databind-2.5.3.jar

Replace these jars with the following versions:

# ls lib | grep jackson
jackson-annotations-2.4.4.jar
jackson-core-2.4.4.jar
jackson-databind-2.4.4.jar
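A minimal sketch of the swap, assuming the 2.4.4 jars have been obtained separately (for example from Maven Central; the backup and download paths are illustrative):

cd /opt/zeppelin-0.5.6-incubating-bin-all
# move the conflicting 2.5.x jars out of Zeppelin's classpath
mkdir -p /tmp/jackson-backup
mv lib/jackson-annotations-2.5.0.jar lib/jackson-core-2.5.3.jar \
   lib/jackson-databind-2.5.3.jar /tmp/jackson-backup/
# drop in the 2.4.4 versions and restart so the change is picked up
cp /path/to/jackson-annotations-2.4.4.jar /path/to/jackson-core-2.4.4.jar \
   /path/to/jackson-databind-2.4.4.jar lib/
bin/zeppelin-daemon.sh restart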

The test query now succeeds.

