Developing a MapReduce Program on Windows and Submitting It Remotely to a Hadoop Cluster: YARN Scheduling Exception


Why I am sharing this: devoting a whole post to one problem may seem extravagant, but a Baidu search turned up very few related articles, and I only found the solution after digging through the logs.

The problem: a MapReduce program developed on Windows was submitted to the cluster but hung indefinitely and never produced a result.

MapReduce program

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class Test {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://master:9000/");
        // Jar containing the Mapper/Reducer classes, built locally on Windows
        conf.set("mapreduce.job.jar", "D:/intelij-workspace/aaron-bigdata/aaorn-mapreduce/target/aaorn-mapreduce-1.0-SNAPSHOT.jar".trim());
        conf.set("mapreduce.framework.name", "yarn");
        conf.set("yarn.resourcemanager.hostname", "master");
        // Required when submitting from Windows to a Linux cluster
        conf.set("mapreduce.app-submission.cross-platform", "true");

        Job job = Job.getInstance(conf);
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LongWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);
        FileInputFormat.setInputPaths(job, "hdfs://master:9000/input/");
        FileOutputFormat.setOutputPath(job, new Path("hdfs://master:9000/output3/"));
        job.waitForCompletion(true);
    }
}

Run output

[QC] INFO [main] org.apache.hadoop.yarn.client.RMProxy.createRMProxy(98) | Connecting to ResourceManager at master/192.168.56.100:8032
[QC] WARN [main] org.apache.hadoop.mapreduce.JobResourceUploader.uploadFiles(64) | Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
[QC] INFO [main] org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(283) | Total input paths to process : 2
[QC] INFO [main] org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(198) | number of splits:2
[QC] INFO [main] org.apache.hadoop.mapreduce.JobSubmitter.printTokens(287) | Submitting tokens for job: job_1496627557122_0004
[QC] INFO [main] org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(273) | Submitted application application_1496627557122_0004
[QC] INFO [main] org.apache.hadoop.mapreduce.Job.submit(1294) | The url to track the job: http://master:8088/proxy/application_1496627557122_0004/
[QC] INFO [main] org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(1339) | Running job: job_1496627557122_0004
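The WARN line above is Hadoop's own hint: the driver does not implement the Tool interface, so generic command-line options (-D, -conf, -files, ...) are never parsed. A minimal sketch of the same driver restructured around Tool/ToolRunner might look like the following; WordCountMapper and WordCountReducer are the classes from the program above, and input/output paths are taken from args instead of being hard-coded:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountDriver extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        // getConf() already contains any -D overrides parsed by ToolRunner
        Job job = Job.getInstance(getConf());
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner strips generic options before handing args to run()
        System.exit(ToolRunner.run(new Configuration(), new WordCountDriver(), args));
    }
}
```

This is only a sketch: it assumes the Hadoop client jars are on the classpath and a reachable cluster, so it is not runnable standalone. The warning itself is harmless and is not the cause of the hang described below.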

Master (NameNode) log

java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
        at sun.nio.ch.IOUtil.read(IOUtil.java:197)
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
        at org.apache.hadoop.ipc.Server.channelRead(Server.java:2603)
        at org.apache.hadoop.ipc.Server.access$2800(Server.java:136)
        at org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1481)
        at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:771)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:637)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:608)
Slave log exception (port 8031 is the ResourceManager's resource-tracker service, so this is the NodeManager failing to register):

2017-06-05 09:49:40,464 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-06-05 09:49:41,464 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-06-05 09:49:42,465 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-06-05 09:49:43,467 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-06-05 09:49:44,468 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-06-05 09:49:45,470 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-06-05 09:49:46,472 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-06-05 09:49:47,474 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
Note: my Hadoop cluster consists of Master (NameNode), Slave1, Slave2, and Slave3.

Solution: add the following to yarn-site.xml on all of the Slave machines. I had previously added it only on the Master, so the NodeManagers on the slaves did not know the ResourceManager's hostname, fell back to the default 0.0.0.0 for the resource-tracker address (port 8031), and could never register; with no registered NodeManagers, the submitted job had nowhere to run and simply hung.
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>
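One way to apply the fix, sketched below under the assumption of a standard $HADOOP_HOME layout, passwordless SSH between the nodes, and the hostnames slave1/slave2/slave3 from the cluster above (adjust all of these to your own setup):

```shell
# Push the corrected yarn-site.xml from the master to every slave
for host in slave1 slave2 slave3; do
    scp "$HADOOP_HOME/etc/hadoop/yarn-site.xml" "$host:$HADOOP_HOME/etc/hadoop/"
done

# Restart YARN so the NodeManagers pick up the new ResourceManager address
"$HADOOP_HOME/sbin/stop-yarn.sh"
"$HADOOP_HOME/sbin/start-yarn.sh"

# All slave NodeManagers should now appear as RUNNING
yarn node -list
```

If `yarn node -list` shows all three NodeManagers, the retry loop against 0.0.0.0:8031 is gone and the job submitted from Windows should proceed past the "Running job" line.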
