Fixing a spark-shell on YARN startup error: bin/spark-shell --master yarn-client fails because class ExecutorLauncher cannot be found


Article source: http://www.dataguru.cn/thread-331456-1-1.html


Today I tried to start spark-shell in yarn-client mode and it failed with the following error:


[hadoop@localhost spark-1.0.1-bin-hadoop2]$ bin/spark-shell --master yarn-client
Spark assembly has been built with Hive, including Datanucleus jars on classpath
14/07/22 17:28:46 INFO spark.SecurityManager: Changing view acls to: hadoop
14/07/22 17:28:46 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop)
14/07/22 17:28:46 INFO spark.HttpServer: Starting HTTP Server
14/07/22 17:28:46 INFO server.Server: jetty-8.y.z-SNAPSHOT
14/07/22 17:28:46 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:49827
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.0.1
      /_/

Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_55)
Type in expressions to have them evaluated.
Type :help for more information.
14/07/22 17:28:51 WARN spark.SparkConf:
SPARK_CLASSPATH was detected (set to '/home/hadoop/spark-1.0.1-bin-hadoop2/lib/*.jar').
This is deprecated in Spark 1.0+.

Please instead use:
 - ./spark-submit with --driver-class-path to augment the driver classpath
 - spark.executor.extraClassPath to augment the executor classpath

14/07/22 17:28:51 WARN spark.SparkConf: Setting 'spark.executor.extraClassPath' to '/home/hadoop/spark-1.0.1-bin-hadoop2/lib/*.jar' as a work-around.
14/07/22 17:28:51 WARN spark.SparkConf: Setting 'spark.driver.extraClassPath' to '/home/hadoop/spark-1.0.1-bin-hadoop2/lib/*.jar' as a work-around.
14/07/22 17:28:51 INFO spark.SecurityManager: Changing view acls to: hadoop
14/07/22 17:28:51 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop)
14/07/22 17:28:51 INFO slf4j.Slf4jLogger: Slf4jLogger started
14/07/22 17:28:51 INFO Remoting: Starting remoting
14/07/22 17:28:51 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://spark@localhost:41257]
14/07/22 17:28:51 INFO Remoting: Remoting now listens on addresses: [akka.tcp://spark@localhost:41257]
14/07/22 17:28:51 INFO spark.SparkEnv: Registering MapOutputTracker
14/07/22 17:28:51 INFO spark.SparkEnv: Registering BlockManagerMaster
14/07/22 17:28:51 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-local-20140722172851-5d58
14/07/22 17:28:51 INFO storage.MemoryStore: MemoryStore started with capacity 294.9 MB.
14/07/22 17:28:51 INFO network.ConnectionManager: Bound socket to port 36159 with id = ConnectionManagerId(localhost,36159)
14/07/22 17:28:51 INFO storage.BlockManagerMaster: Trying to register BlockManager
14/07/22 17:28:51 INFO storage.BlockManagerInfo: Registering block manager localhost:36159 with 294.9 MB RAM
14/07/22 17:28:51 INFO storage.BlockManagerMaster: Registered BlockManager
14/07/22 17:28:51 INFO spark.HttpServer: Starting HTTP Server
14/07/22 17:28:51 INFO server.Server: jetty-8.y.z-SNAPSHOT
14/07/22 17:28:51 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:57197
14/07/22 17:28:51 INFO broadcast.HttpBroadcast: Broadcast server started at http://localhost:57197
14/07/22 17:28:51 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-9b5a359c-37cf-4530-85d6-fcdbc534bc84
14/07/22 17:28:51 INFO spark.HttpServer: Starting HTTP Server
14/07/22 17:28:51 INFO server.Server: jetty-8.y.z-SNAPSHOT
14/07/22 17:28:51 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:34888
14/07/22 17:28:52 INFO server.Server: jetty-8.y.z-SNAPSHOT
14/07/22 17:28:52 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
14/07/22 17:28:52 INFO ui.SparkUI: Started SparkUI at http://localhost:4040
14/07/22 17:28:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
--args is deprecated. Use --arg instead.
14/07/22 17:28:52 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/07/22 17:28:53 INFO yarn.Client: Got Cluster metric info from ApplicationsManager (ASM), number of NodeManagers: 1
14/07/22 17:28:53 INFO yarn.Client: Queue info ... queueName: default, queueCurrentCapacity: 0.0, queueMaxCapacity: 1.0, queueApplicationCount = 1, queueChildQueueCount = 0
14/07/22 17:28:53 INFO yarn.Client: Max mem capabililty of a single resource in this cluster 8192
14/07/22 17:28:53 INFO yarn.Client: Preparing Local resources
14/07/22 17:28:53 INFO yarn.Client: Uploading file:/home/hadoop/spark/assembly/target/scala-2.10/spark-assembly_2.10-0.9.1-hadoop2.2.0.jar to hdfs://localhost:9000/user/hadoop/.sparkStaging/application_1406018656679_0002/spark-assembly_2.10-0.9.1-hadoop2.2.0.jar
14/07/22 17:28:54 INFO yarn.Client: Setting up the launch environment
14/07/22 17:28:54 INFO yarn.Client: Setting up container launch context
14/07/22 17:28:54 INFO yarn.Client: Command for starting the Spark ApplicationMaster: List($JAVA_HOME/bin/java, -server, -Xmx512m, -Djava.io.tmpdir=$PWD/tmp, -Dspark.tachyonStore.folderName="spark-10325217-bdb0-4213-8ae8-329940b98b95", -Dspark.yarn.secondary.jars="", -Dspark.home="/home/hadoop/spark", -Dspark.repl.class.uri="http://localhost:49827", -Dspark.driver.host="localhost", -Dspark.app.name="Spark shell", -Dspark.jars="", -Dspark.fileserver.uri="http://localhost:34888", -Dspark.executor.extraClassPath="/home/hadoop/spark-1.0.1-bin-hadoop2/lib/*.jar", -Dspark.master="yarn-client", -Dspark.driver.port="41257", -Dspark.driver.extraClassPath="/home/hadoop/spark-1.0.1-bin-hadoop2/lib/*.jar", -Dspark.httpBroadcast.uri="http://localhost:57197", -Dlog4j.configuration=log4j-spark-container.properties, org.apache.spark.deploy.yarn.ExecutorLauncher, --class, notused, --jar , null, --args 'localhost:41257', --executor-memory, 1024, --executor-cores, 1, --num-executors, 2, 1>, <LOG_DIR>/stdout, 2>, <LOG_DIR>/stderr)
14/07/22 17:28:54 INFO yarn.Client: Submitting application to ASM
14/07/22 17:28:54 INFO impl.YarnClientImpl: Submitted application application_1406018656679_0002 to ResourceManager at /0.0.0.0:8032
14/07/22 17:28:54 INFO cluster.YarnClientSchedulerBackend: Application report from ASM:
     appMasterRpcPort: 0
     appStartTime: 1406021334568
     yarnAppState: ACCEPTED
14/07/22 17:28:55 INFO cluster.YarnClientSchedulerBackend: Application report from ASM:
     appMasterRpcPort: 0
     appStartTime: 1406021334568
     yarnAppState: ACCEPTED
14/07/22 17:28:56 INFO cluster.YarnClientSchedulerBackend: Application report from ASM:
     appMasterRpcPort: 0
     appStartTime: 1406021334568
     yarnAppState: ACCEPTED
14/07/22 17:28:57 INFO cluster.YarnClientSchedulerBackend: Application report from ASM:
     appMasterRpcPort: 0
     appStartTime: 1406021334568
     yarnAppState: ACCEPTED
14/07/22 17:28:58 INFO cluster.YarnClientSchedulerBackend: Application report from ASM:
     appMasterRpcPort: 0
     appStartTime: 1406021334568
     yarnAppState: ACCEPTED
14/07/22 17:28:59 INFO cluster.YarnClientSchedulerBackend: Application report from ASM:
     appMasterRpcPort: 0
     appStartTime: 1406021334568
     yarnAppState: FAILED
org.apache.spark.SparkException: Yarn application already ended,might be killed or not able to launch application master.
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApp(YarnClientSchedulerBackend.scala:105)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:82)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:136)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:318)
    at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:957)
    at $iwC$$iwC.<init>(<console>:8)
    at $iwC.<init>(<console>:14)
    at <init>(<console>:16)
    at .<init>(<console>:20)
    at .<clinit>(<console>)
    at .<init>(<console>:7)
    at .<clinit>(<console>)
    at $print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:788)
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1056)
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:614)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:645)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:609)
    at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:796)
    at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:841)
    at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:753)
    at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:121)
    at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:120)
    at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:263)
    at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:120)
    at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:56)
    at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:913)
    at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:142)
    at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:56)
    at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:104)
    at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:56)
    at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:930)
    at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:884)
    at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:884)
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:884)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:982)
    at org.apache.spark.repl.Main$.main(Main.scala:31)
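The original thread breaks off here, but the log above already shows the likely culprit: spark-shell is launched from /home/hadoop/spark-1.0.1-bin-hadoop2, yet the YARN client uploads the old spark-assembly_2.10-0.9.1-hadoop2.2.0.jar built under /home/hadoop/spark and passes -Dspark.home="/home/hadoop/spark" to the ApplicationMaster. The 1.0.1 launch command then asks YARN to run org.apache.spark.deploy.yarn.ExecutorLauncher, a class the 0.9.1 assembly does not contain, so the ApplicationMaster never starts and the application is reported as FAILED. The sketch below is one plausible fix, assuming stale SPARK_JAR/SPARK_HOME settings left over from the 0.9.1 build are what drag in the wrong assembly; the jar name and paths are illustrative, so substitute whatever actually sits in your 1.0.1 lib/ directory:

# Make the YARN client ship the assembly that matches the running Spark version.
unset SPARK_CLASSPATH        # deprecated in Spark 1.0+, as the warning in the log notes
export SPARK_HOME=/home/hadoop/spark-1.0.1-bin-hadoop2
# Assumed jar name; use the spark-assembly jar found under $SPARK_HOME/lib
export SPARK_JAR=$SPARK_HOME/lib/spark-assembly-1.0.1-hadoop2.2.0.jar
cd $SPARK_HOME
bin/spark-shell --master yarn-client

Once the uploaded assembly matches the driver's version, YARN should be able to find ExecutorLauncher on the container classpath and the application should move past ACCEPTED instead of failing while launching the ApplicationMaster.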
