Exception when running a Scala-packaged Spark jar from Java
15/04/14 23:57:08 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/04/14 23:57:23 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/04/14 23:57:38 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/04/14 23:57:39 INFO AppClient$ClientActor: Executor updated: app-20150414235011-0003/9 is now EXITED (Command exited with code 1)
15/04/14 23:57:39 INFO SparkDeploySchedulerBackend: Executor app-20150414235011-0003/9 removed: Command exited with code 1
15/04/14 23:57:39 ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: Master removed our application: FAILED
15/04/14 23:57:39 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
15/04/14 23:57:39 INFO TaskSchedulerImpl: Cancelling stage 0
15/04/14 23:57:39 INFO DAGScheduler: Failed to run count at SparkSelect03.scala:55
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Master removed our application: FAILED
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1049)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1033)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1031)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1031)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:635)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:635)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:635)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1234)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
    at akka.actor.ActorCell.invoke(ActorCell.scala:456)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
    at akka.dispatch.Mailbox.run(Mailbox.scala:219)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Question 1:
15/04/14 23:57:08 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/04/14 23:57:23 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/04/14 23:57:38 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
Analysis: Is this caused by insufficient memory?
My spark-env.sh configuration is as follows:
export JAVA_HOME=/home/hadoop/jdk1.7.0_75
export SCALA_HOME=/home/hadoop/scala-2.11.6
export HADOOP_HOME=/home/hadoop/hadoop-2.3.0-cdh5.0.2
export HADOOP_CONF_DIR=/home/hadoop/hadoop-2.3.0-cdh5.0.2/etc/hadoop
export SPARK_CLASSPATH=/home/hadoop/hbase-0.96.1.1-cdh5.0.2/lib/*
export SPARK_MASTER_IP=master
export SPARK_MASTER_PORT=17077
export SPARK_MASTER_WEBUI_PORT=18080
export SPARK_WORKER_CORES=1
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_WEBUI_PORT=18081
export SPARK_WORKER_INSTANCES=1
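One common cause of this warning is the application requesting more memory or cores than any registered worker offers (though the executor exiting with code 1 in the log above suggests the worker's executor stderr is also worth checking). With SPARK_WORKER_MEMORY=1g and SPARK_WORKER_CORES=1, the job has to stay within those limits. A minimal Scala sketch of such a configuration, assuming the app class is SparkSelect03 and the master URL follows SPARK_MASTER_IP/SPARK_MASTER_PORT above:

import org.apache.spark.{SparkConf, SparkContext}

// Keep requested resources within what the single worker offers
// (SPARK_WORKER_MEMORY=1g, SPARK_WORKER_CORES=1); otherwise the job
// waits forever and the warning repeats until the master gives up.
val conf = new SparkConf()
  .setAppName("SparkSelect03")          // assumed application name
  .setMaster("spark://master:17077")    // from SPARK_MASTER_IP and SPARK_MASTER_PORT
  .set("spark.executor.memory", "512m") // below the 1g worker limit
  .set("spark.cores.max", "1")          // within the single worker core
val sc = new SparkContext(conf)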
Question 2:
15/04/14 23:57:39 INFO DAGScheduler: Failed to run count at SparkSelect03.scala:55
The code at that line (SparkSelect03.scala:55) is:
val count = hbaseRDD.count()
println("HBase RDD Count: " + count)
hbaseRDD.cache()
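For context, an hbaseRDD like this is typically built with SparkContext.newAPIHadoopRDD over TableInputFormat. A minimal sketch, assuming a hypothetical table name; note that cache() only benefits actions that run after it, so caching before the first count() lets later actions reuse the scanned rows instead of re-reading HBase:

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("SparkSelect03"))

val hbaseConf = HBaseConfiguration.create()
hbaseConf.set(TableInputFormat.INPUT_TABLE, "some_table") // assumed table name

// Scan the HBase table as an RDD of (row key, row result) pairs
val hbaseRDD = sc.newAPIHadoopRDD(
  hbaseConf,
  classOf[TableInputFormat],
  classOf[ImmutableBytesWritable],
  classOf[Result])

hbaseRDD.cache() // cache before the first action so later actions reuse it
val count = hbaseRDD.count()
println("HBase RDD Count: " + count)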
Question 3:
In thread "main" Org.apache.spark.SparkException:Job aborted due-stage failure:master removed our APPLICATION:FA Iled
If you have encountered something similar or know how to solve it, please leave a message below.
In short: running the packaged jar from Java to query HBase through Spark fails with the error: Job aborted due to stage failure: Master removed our application: FAILED