MapReduce DistributedCache program: resolving the failure to run in Eclipse on Windows

Source: Internet
Author: User
Tags: symlink

Hadoop's DistributedCache (here via the new API) is often used when writing MapReduce programs, but when such a program is executed from Eclipse on Windows, it fails with errors similar to the following:

2016-03-03 10:53:21,424 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:&lt;clinit&gt;) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-03-03 10:53:22,152 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1019)) - session.id is deprecated. Instead, use dfs.metrics.session-id
2016-03-03 10:53:22,152 INFO  [main] jvm.JvmMetrics (JvmMetrics.java:init) - Initializing JVM Metrics with processName=JobTracker, sessionId=
2016-03-03 10:53:24,366 WARN  [main] mapreduce.JobSubmitter (JobSubmitter.java:copyAndConfigureFiles) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2016-03-03 10:53:26,447 WARN  [main] mapreduce.JobSubmitter (JobSubmitter.java:copyAndConfigureFiles(259)) - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2016-03-03 10:53:26,487 INFO  [main] input.FileInputFormat (FileInputFormat.java:listStatus(281)) - Total input paths to process : 4
2016-03-03 10:53:30,876 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(396)) - number of splits:4
2016-03-03 10:53:31,065 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:printTokens(479)) - Submitting tokens for job: job_local1862629830_0001
2016-03-03 10:53:31,133 WARN  [main] conf.Configuration (Configuration.java:loadProperty(2368)) - file:/tmp/hadoop-root/mapred/staging/root1862629830/.staging/job_local1862629830_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2016-03-03 10:53:31,142 WARN  [main] conf.Configuration (Configuration.java:loadProperty(2368)) - file:/tmp/hadoop-root/mapred/staging/root1862629830/.staging/job_local1862629830_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
2016-03-03 10:53:31,953 INFO  [main] mapred.LocalDistributedCacheManager (LocalDistributedCacheManager.java:symlink(207)) - Creating symlink: \tmp\hadoop-root\mapred\local\1456973611218\part-r-00003 <- f:\javawork2014\mapreduce/part-r-00003
2016-03-03 10:53:32,004 WARN  [main] fs.FileUtil (FileUtil.java:symLink(824)) - Fail to create symbolic links on Windows. The default security settings in Windows disallow non-elevated administrators and all non-administrators from creating symbolic links. This behavior can be changed in the Local Security Policy management console
2016-03-03 10:53:32,004 WARN  [main] mapred.LocalDistributedCacheManager (LocalDistributedCacheManager.java:symlink(209)) - Failed to create symlink: \tmp\hadoop-root\mapred\local\1456973611218\part-r-00003 <- f:\javawork2014\mapreduce/part-r-00003
2016-03-03 10:53:32,005 INFO  [main] mapred.LocalDistributedCacheManager (LocalDistributedCacheManager.java:setup(171)) - Localized hdfs://node1:8020/usr/output/weibo1/part-r-00003 as file:/tmp/hadoop-root/mapred/local/1456973611218/part-r-00003
2016-03-03 10:53:32,012 INFO  [main] mapred.LocalDistributedCacheManager (LocalDistributedCacheManager.java:symlink(207)) - Creating symlink: \tmp\hadoop-root\mapred\local\1456973611219\part-r-00000 <- f:\javawork2014\mapreduce/part-r-00000
2016-03-03 10:53:32,050 WARN  [main] fs.FileUtil (FileUtil.java:symLink(824)) - Fail to create symbolic links on Windows. The default security settings in Windows disallow non-elevated administrators and all non-administrators from creating symbolic links. This behavior can be changed in the Local Security Policy management console
2016-03-03 10:53:32,052 WARN  [main] mapred.LocalDistributedCacheManager (LocalDistributedCacheManager.java:symlink(209)) - Failed to create symlink: \tmp\hadoop-root\mapred\local\1456973611219\part-r-00000 <- f:\javawork2014\mapreduce/part-r-00000
2016-03-03 10:53:32,052 INFO  [main] mapred.LocalDistributedCacheManager (LocalDistributedCacheManager.java:setup(171)) - Localized hdfs://node1:8020/usr/output/weibo2/part-r-00000 as file:/tmp/hadoop-root/mapred/local/1456973611219/part-r-00000
2016-03-03 10:53:32,172 WARN  [main] conf.Configuration (Configuration.java:loadProperty(2368)) - file:/tmp/hadoop-root/mapred/local/localRunner/root/job_local1862629830_0001/job_local1862629830_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2016-03-03 10:53:32,177 WARN  [main] conf.Configuration (Configuration.java:loadProperty(2368)) - file:/tmp/hadoop-root/mapred/local/localRunner/root/job_local1862629830_0001/job_local1862629830_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
2016-03-03 10:53:32,182 INFO  [main] mapreduce.Job (Job.java:submit(1289)) - The url to track the job: http://localhost:8080/
2016-03-03 10:53:32,183 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1334)) - Running job: job_local1862629830_0001
2016-03-03 10:53:32,184 INFO  [Thread-12] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(471)) - OutputCommitter set in config null
2016-03-03 10:53:32,190 INFO  [Thread-12] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(489)) - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2016-03-03 10:53:32,236 INFO  [Thread-12] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(448)) - Waiting for map tasks
2016-03-03 10:53:32,237 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(224)) - Starting task: attempt_local1862629830_0001_m_000000_0
2016-03-03 10:53:32,262 INFO  [LocalJobRunner Map Task Executor #0] util.ProcfsBasedProcessTree (ProcfsBasedProcessTree.java:isAvailable(181)) - ProcfsBasedProcessTree currently is supported only on Linux.
2016-03-03 10:53:32,306 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:initialize(587)) - Using ResourceCalculatorProcessTree : [...]
2016-03-03 10:53:32,310 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:runNewMapper(733)) - Processing split: hdfs://node1:8020/usr/output/weibo1/part-r-00001:0+195718
2016-03-03 10:53:32,319 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:createSortingCollector(388)) - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2016-03-03 10:53:32,344 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:setEquator(1182)) - (EQUATOR) 0 kvi 26214396(104857584)
2016-03-03 10:53:32,344 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(975)) - mapreduce.task.io.sort.mb: 100
2016-03-03 10:53:32,344 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(976)) - soft limit at 83886080
2016-03-03 10:53:32,344 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(977)) - bufstart = 0; bufvoid = 104857600
2016-03-03 10:53:32,344 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(978)) - kvstart = 26214396; length = 6553600

................................
2016-03-03 10:53:32,614 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1437)) - Starting flush of map output
2016-03-03 10:53:32,626 INFO  [Thread-12] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(456)) - map task executor complete.
2016-03-03 10:53:32,734 WARN  [Thread-12] mapred.LocalJobRunner (LocalJobRunner.java:run(560)) - job_local1862629830_0001
java.lang.Exception: java.io.FileNotFoundException: part-r-00003 (The system cannot find the file specified.)
	at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
	at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.io.FileNotFoundException: part-r-00003 (The system cannot find the file specified.)
	at java.io.FileInputStream.open(Native Method)
	at java.io.FileInputStream.<init>(FileInputStream.java:146)
	at java.io.FileInputStream.<init>(FileInputStream.java:101)
	at java.io.FileReader.<init>(FileReader.java:58)
	at com.laoxiao.mr.tf.LastMapper.setup(LastMapper.java:49)
	at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
	at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
2016-03-03 10:53:33,186 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1355)) - Job job_local1862629830_0001 running in uber mode : false
2016-03-03 10:53:33,188 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1362)) - map 0% reduce 0%
2016-03-03 10:53:33,191 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1375)) - Job job_local1862629830_0001 failed with state FAILED due to: NA
2016-03-03 10:53:33,197 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1380)) - Counters: 0

Careful analysis of the output shows that the job fails because the machine cannot link the cached data into the local working directory: the "Fail to create symbolic links on Windows" warnings above mean the symlinks were never created, so the mapper later cannot find part-r-00003 by name.
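The FileNotFoundException follows directly from how a mapper typically opens a cached file. Judging from the stack trace, LastMapper.setup() opens the file by its bare symlink name (something like new FileReader("part-r-00003")), and a bare relative name is resolved against the JVM's working directory, which is exactly where the missing symlink should have been. A small plain-Java sketch of that resolution rule (file names are illustrative, not from the original code):

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileReader;

public class BareNameResolution {
    public static void main(String[] args) {
        // A relative name like "part-r-00003" is resolved against user.dir,
        // the directory the JVM was started from -- where the DistributedCache
        // symlink would normally have been created.
        File bare = new File("part-r-00003");
        File expected = new File(System.getProperty("user.dir"), "part-r-00003");
        System.out.println(bare.getAbsoluteFile().equals(expected.getAbsoluteFile()));

        // If the symlink was never created there, opening the bare name throws
        // exactly the exception seen in the log above.
        try {
            new FileReader("definitely-missing-cache-file");
            System.out.println("opened");
        } catch (FileNotFoundException e) {
            System.out.println("FileNotFoundException");
        }
    }
}
```

So the symlink failure does not abort the job immediately; it only surfaces later, when the mapper tries to open the file by the name the symlink should have had.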

Workaround

1. Package the program into a jar and run it on the Linux server with the hadoop jar command.

2. Alternatively, turn off UAC in Windows 7 (so that symlink creation is allowed for your account), after which the MapReduce job can be run in local test mode under Eclipse.
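A third, code-level option (my own sketch, not from the original post) is to make the mapper independent of the symlink: match the file you want against the registered cache URIs by its "#link" fragment or base name, and open it via its full localized path. The helper below is plain Java so the matching logic stands alone; in a real mapper you would feed it context.getCacheFiles() and the localized paths reported by the framework (e.g. the deprecated DistributedCache.getLocalCacheFiles(conf)) -- treat those API names as assumptions to verify against your Hadoop version.

```java
import java.net.URI;

public class CacheFileResolver {
    /**
     * Given the name a mapper wants (e.g. "part-r-00003"), the cache URIs the
     * driver registered, and the localized paths reported by the framework,
     * return the localized path whose URI matches by "#link" fragment or by
     * base name. Returns null when nothing matches.
     */
    public static String resolve(String wanted, URI[] cacheUris, String[] localizedPaths) {
        for (int i = 0; i < cacheUris.length && i < localizedPaths.length; i++) {
            String link = cacheUris[i].getFragment();          // the "#link" alias, if any
            String path = cacheUris[i].getPath();
            String base = path.substring(path.lastIndexOf('/') + 1);
            if (wanted.equals(link) || wanted.equals(base)) {
                return localizedPaths[i];
            }
        }
        return null;
    }

    public static void main(String[] args) {
        URI u = URI.create("hdfs://node1:8020/usr/output/weibo1/part-r-00003#part-r-00003");
        System.out.println(resolve("part-r-00003",
                new URI[] { u },
                new String[] { "/tmp/hadoop-root/mapred/local/1456973611218/part-r-00003" }));
    }
}
```

In setup() this would be roughly new FileReader(resolve("part-r-00003", cacheUris, localizedPaths)): the file is opened by its real local path, so no symlink is needed and the code works the same on Windows and Linux.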

In addition, even after applying either of the above methods, you may still see an error similar to the following:

java.lang.Exception: java.io.FileNotFoundException: file:\tmp\hadoop-root\mapred\local\1456979002661\tb_dim_city.dat (The filename, directory name, or volume label syntax is incorrect.)
	at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
	at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.io.FileNotFoundException: file:\tmp\hadoop-root\mapred\local\1456979002661\tb_dim_city.dat (The filename, directory name, or volume label syntax is incorrect.)
	at java.io.FileInputStream.open(Native Method)
	at java.io.FileInputStream.<init>(FileInputStream.java:146)
	at java.io.FileInputStream.<init>(FileInputStream.java:101)
	at java.io.FileReader.<init>(FileReader.java:58)
	at mapJoin.MapSideJoin_LeftOuterJoin$LeftOutJoinMapper.setup(MapSideJoin_LeftOuterJoin.java:100)
	at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
	at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)

A possible cause is the suffix (e.g. .dat) of the file name uploaded to HDFS; try removing the suffix from the file and running again.
