The console does not print progress information when running a MapReduce program in Eclipse

Source: Internet
Author: User
Tags: deprecated, flush, posix, shuffle, sort, split, log4j

The following information is typically printed on the console:

log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

No MapReduce progress information appears after that.

This usually happens because log4j, the module that prints this log output, has not been given any configuration. To fix it, create a new file named "log4j.properties" in the project's src directory and fill it with the following:

log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] - %m%n
log4j.appender.logfile=org.apache.log4j.FileAppender
log4j.appender.logfile.File=target/spring.log
log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
log4j.appender.logfile.layout.ConversionPattern=%d %p [%c] - %m%n

With this in place, all of Hadoop's log messages at INFO level and above are printed to the console while the job runs.
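If you prefer not to add a properties file, a programmatic alternative is to configure log4j from the driver before the job is submitted. This is a minimal sketch, assuming the log4j 1.2 classes bundled with Hadoop are on the classpath; the class name Log4jConsoleSetup and its init() method are illustrative, not part of the original article:

import org.apache.log4j.BasicConfigurator;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class Log4jConsoleSetup {
    // Installs a default ConsoleAppender on the root logger and raises the
    // level to INFO, which is enough for the MapReduce progress messages to show up.
    public static void init() {
        BasicConfigurator.configure();
        Logger.getRootLogger().setLevel(Level.INFO);
    }
}

Call Log4jConsoleSetup.init() at the top of the driver's main() method. The properties-file approach is still preferable for anything beyond quick experiments, because it can be changed without recompiling.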

Once the configuration is in place, the console output looks like this:

2014-05-25 20:11:47,738 INFO [org.apache.hadoop.conf.Configuration.deprecation] - session.id is deprecated. Instead, use dfs.metrics.session-id
2014-05-25 20:11:47,741 INFO [org.apache.hadoop.metrics.jvm.JvmMetrics] - Initializing JVM Metrics with processName=JobTracker, sessionId=
2014-05-25 20:11:49,079 WARN [org.apache.hadoop.mapreduce.JobSubmitter] - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2014-05-25 20:11:49,346 INFO [org.apache.hadoop.mapreduce.lib.input.FileInputFormat] - Total input paths to process : 3
2014-05-25 20:11:49,496 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - number of splits:3
2014-05-25 20:11:50,155 INFO [org.apache.hadoop.mapreduce.JobSubmitter] - Submitting tokens for job: job_local196560459_0001
2014-05-25 20:11:50,386 WARN [org.apache.hadoop.conf.Configuration] - file:/tmp/hadoop-lichao/mapred/staging/lichao196560459/.staging/job_local196560459_0001/job.xml: an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-05-25 20:11:50,424 WARN [org.apache.hadoop.conf.Configuration] - file:/tmp/hadoop-lichao/mapred/staging/lichao196560459/.staging/job_local196560459_0001/job.xml: an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-05-25 20:11:50,925 WARN [org.apache.hadoop.conf.Configuration] - file:/tmp/hadoop-lichao/mapred/local/localRunner/lichao/job_local196560459_0001/job_local196560459_0001.xml: an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-05-25 20:11:50,951 WARN [org.apache.hadoop.conf.Configuration] - file:/tmp/hadoop-lichao/mapred/local/localRunner/lichao/job_local196560459_0001/job_local196560459_0001.xml: an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-05-25 20:11:50,973 INFO [org.apache.hadoop.mapreduce.Job] - The url to track the job: http://localhost:8080/
2014-05-25 20:11:50,976 INFO [org.apache.hadoop.mapreduce.Job] - Running job: job_local196560459_0001
2014-05-25 20:11:50,983 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter set in config null
2014-05-25 20:11:51,011 INFO [org.apache.hadoop.mapred.LocalJobRunner] - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2014-05-25 20:11:51,252 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for map tasks
2014-05-25 20:11:51,256 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local196560459_0001_m_000000_0
2014-05-25 20:11:51,464 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : [ ]
2014-05-25 20:11:51,481 INFO [org.apache.hadoop.mapred.MapTask] - Processing split: hdfs://localhost:9000/data/score_in/chinese:0+16
2014-05-25 20:11:51,557 INFO [org.apache.hadoop.mapred.MapTask] - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2014-05-25 20:11:52,000 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local196560459_0001 running in uber mode : false
2014-05-25 20:11:52,006 INFO [org.apache.hadoop.mapreduce.Job] - map 0% reduce 0%
2014-05-25 20:11:52,983 INFO [org.apache.hadoop.mapred.MapTask] - (EQUATOR) 0 kvi 26214396(104857584)
2014-05-25 20:11:52,984 INFO [org.apache.hadoop.mapred.MapTask] - mapreduce.task.io.sort.mb: 100
2014-05-25 20:11:52,984 INFO [org.apache.hadoop.mapred.MapTask] - soft limit at 83886080
2014-05-25 20:11:52,984 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufvoid = 104857600
2014-05-25 20:11:52,984 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396; length = 6553600
2014-05-25 20:11:53,421 INFO [org.apache.hadoop.mapred.LocalJobRunner] -
2014-05-25 20:11:53,433 INFO [org.apache.hadoop.mapred.MapTask] - Starting flush of map output
2014-05-25 20:11:53,434 INFO [org.apache.hadoop.mapred.MapTask] - Spilling map output
2014-05-25 20:11:53,434 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufend = 18; bufvoid = 104857600
2014-05-25 20:11:53,434 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396(104857584); kvend = 26214388(104857552); length = 9/6553600
2014-05-25 20:11:53,474 INFO [org.apache.hadoop.mapred.MapTask] - Finished spill 0
2014-05-25 20:11:53,487 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local196560459_0001_m_000000_0 is done. And is in the process of committing
2014-05-25 20:11:53,535 INFO [org.apache.hadoop.mapred.LocalJobRunner] - map
2014-05-25 20:11:53,535 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local196560459_0001_m_000000_0' done.
2014-05-25 20:11:53,536 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local196560459_0001_m_000000_0
2014-05-25 20:11:53,536 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local196560459_0001_m_000001_0
2014-05-25 20:11:53,545 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : [ ]
2014-05-25 20:11:53,550 INFO [org.apache.hadoop.mapred.MapTask] - Processing split: hdfs://localhost:9000/data/score_in/english:0+16
2014-05-25 20:11:53,553 INFO [org.apache.hadoop.mapred.MapTask] - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2014-05-25 20:11:54,024 INFO [org.apache.hadoop.mapreduce.Job] - map 100% reduce 0%
2014-05-25 20:11:55,083 INFO [org.apache.hadoop.mapred.MapTask] - (EQUATOR) 0 kvi 26214396(104857584)
2014-05-25 20:11:55,083 INFO [org.apache.hadoop.mapred.MapTask] - mapreduce.task.io.sort.mb: 100
2014-05-25 20:11:55,083 INFO [org.apache.hadoop.mapred.MapTask] - soft limit at 83886080
2014-05-25 20:11:55,083 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufvoid = 104857600
2014-05-25 20:11:55,083 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396; length = 6553600
2014-05-25 20:11:55,104 INFO [org.apache.hadoop.mapred.LocalJobRunner] -
2014-05-25 20:11:55,104 INFO [org.apache.hadoop.mapred.MapTask] - Starting flush of map output
2014-05-25 20:11:55,104 INFO [org.apache.hadoop.mapred.MapTask] - Spilling map output
2014-05-25 20:11:55,105 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufend = 18; bufvoid = 104857600
2014-05-25 20:11:55,105 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396(104857584); kvend = 26214388(104857552); length = 9/6553600
2014-05-25 20:11:55,113 INFO [org.apache.hadoop.mapred.MapTask] - Finished spill 0
2014-05-25 20:11:55,121 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local196560459_0001_m_000001_0 is done. And is in the process of committing
2014-05-25 20:11:55,135 INFO [org.apache.hadoop.mapred.LocalJobRunner] - map
2014-05-25 20:11:55,135 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local196560459_0001_m_000001_0' done.
2014-05-25 20:11:55,136 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local196560459_0001_m_000001_0
2014-05-25 20:11:55,136 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local196560459_0001_m_000002_0
2014-05-25 20:11:55,146 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : [ ]
2014-05-25 20:11:55,150 INFO [org.apache.hadoop.mapred.MapTask] - Processing split: hdfs://localhost:9000/data/score_in/math:0+16
2014-05-25 20:11:55,152 INFO [org.apache.hadoop.mapred.MapTask] - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2014-05-25 20:11:57,197 INFO [org.apache.hadoop.mapred.MapTask] - (EQUATOR) 0 kvi 26214396(104857584)
2014-05-25 20:11:57,197 INFO [org.apache.hadoop.mapred.MapTask] - mapreduce.task.io.sort.mb: 100
2014-05-25 20:11:57,197 INFO [org.apache.hadoop.mapred.MapTask] - soft limit at 83886080
2014-05-25 20:11:57,197 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufvoid = 104857600
2014-05-25 20:11:57,197 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396; length = 6553600
2014-05-25 20:11:57,219 INFO [org.apache.hadoop.mapred.LocalJobRunner] -
2014-05-25 20:11:57,220 INFO [org.apache.hadoop.mapred.MapTask] - Starting flush of map output
2014-05-25 20:11:57,220 INFO [org.apache.hadoop.mapred.MapTask] - Spilling map output
2014-05-25 20:11:57,220 INFO [org.apache.hadoop.mapred.MapTask] - bufstart = 0; bufend = 18; bufvoid = 104857600
2014-05-25 20:11:57,220 INFO [org.apache.hadoop.mapred.MapTask] - kvstart = 26214396(104857584); kvend = 26214388(104857552); length = 9/6553600
2014-05-25 20:11:57,227 INFO [org.apache.hadoop.mapred.MapTask] - Finished spill 0
2014-05-25 20:11:57,236 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local196560459_0001_m_000002_0 is done. And is in the process of committing
2014-05-25 20:11:57,250 INFO [org.apache.hadoop.mapred.LocalJobRunner] - map
2014-05-25 20:11:57,250 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local196560459_0001_m_000002_0' done.
2014-05-25 20:11:57,250 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local196560459_0001_m_000002_0
2014-05-25 20:11:57,251 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Map task executor complete.
2014-05-25 20:11:57,266 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Waiting for reduce tasks
2014-05-25 20:11:57,266 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Starting task: attempt_local196560459_0001_r_000000_0
2014-05-25 20:11:57,329 INFO [org.apache.hadoop.mapred.Task] - Using ResourceCalculatorProcessTree : [ ]
2014-05-25 20:11:57,354 INFO [org.apache.hadoop.mapred.ReduceTask] - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@7f262312
2014-05-25 20:11:57,475 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - MergerManager: memoryLimit=1302488704, maxSingleShuffleLimit=325622176, mergeThreshold=859642560, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2014-05-25 20:11:57,501 INFO [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] - attempt_local196560459_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2014-05-25 20:11:57,714 INFO [org.apache.hadoop.mapreduce.task.reduce.LocalFetcher] - localfetcher#1 about to shuffle output of map attempt_local196560459_0001_m_000001_0 decomp: 26 len: 30 to MEMORY
2014-05-25 20:11:57,732 INFO [org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput] - Read 26 bytes from map-output for attempt_local196560459_0001_m_000001_0
2014-05-25 20:11:57,744 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - closeInMemoryFile -> map-output of size: 26, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory -> 26
2014-05-25 20:11:57,757 INFO [org.apache.hadoop.mapreduce.task.reduce.LocalFetcher] - localfetcher#1 about to shuffle output of map attempt_local196560459_0001_m_000000_0 decomp: 26 len: 30 to MEMORY
2014-05-25 20:11:57,760 INFO [org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput] - Read 26 bytes from map-output for attempt_local196560459_0001_m_000000_0
2014-05-25 20:11:57,760 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - closeInMemoryFile -> map-output of size: 26, inMemoryMapOutputs.size() -> 2, commitMemory -> 26, usedMemory -> 52
2014-05-25 20:11:57,764 WARN [org.apache.hadoop.io.ReadaheadPool] - Failed readahead on ifile
EBADF: Bad file descriptor
	at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
	at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:263)
	at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:142)
	at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:206)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
2014-05-25 20:11:57,769 INFO [org.apache.hadoop.mapreduce.task.reduce.LocalFetcher] - localfetcher#1 about to shuffle output of map attempt_local196560459_0001_m_000002_0 decomp: 26 len: 30 to MEMORY
2014-05-25 20:11:57,778 INFO [org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput] - Read 26 bytes from map-output for attempt_local196560459_0001_m_000002_0
2014-05-25 20:11:57,778 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - closeInMemoryFile -> map-output of size: 26, inMemoryMapOutputs.size() -> 3, commitMemory -> 52, usedMemory -> 78
2014-05-25 20:11:57,784 INFO [org.apache.hadoop.mapreduce.task.reduce.EventFetcher] - EventFetcher is interrupted.. Returning
2014-05-25 20:11:57,784 WARN [org.apache.hadoop.io.ReadaheadPool] - Failed readahead on ifile
EBADF: Bad file descriptor
	at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
	at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:263)
	at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:142)
	at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:206)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
2014-05-25 20:11:57,793 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 3 / 3 copied.
2014-05-25 20:11:57,793 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - finalMerge called with 3 in-memory map-outputs and 0 on-disk map-outputs
2014-05-25 20:11:57,829 INFO [org.apache.hadoop.mapred.Merger] - Merging 3 sorted segments
2014-05-25 20:11:57,830 INFO [org.apache.hadoop.mapred.Merger] - Down to the last merge-pass, with 3 segments left of total size: 66 bytes
2014-05-25 20:11:57,840 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merged 3 segments, bytes to disk to satisfy reduce memory limit
2014-05-25 20:11:57,841 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merging 1 files, bytes from disk
2014-05-25 20:11:57,844 INFO [org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl] - Merging 0 segments, 0 bytes from memory into reduce
2014-05-25 20:11:57,844 INFO [org.apache.hadoop.mapred.Merger] - Merging 1 sorted segments
2014-05-25 20:11:57,847 INFO [org.apache.hadoop.mapred.Merger] - Down to the last merge-pass, with 1 segments left of total size: 70 bytes
2014-05-25 20:11:57,850 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 3 / 3 copied.
2014-05-25 20:11:57,970 INFO [org.apache.hadoop.conf.Configuration.deprecation] - mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
2014-05-25 20:11:58,616 INFO [org.apache.hadoop.mapred.Task] - Task:attempt_local196560459_0001_r_000000_0 is done. And is in the process of committing
2014-05-25 20:11:58,628 INFO [org.apache.hadoop.mapred.LocalJobRunner] - 3 / 3 copied.
2014-05-25 20:11:58,629 INFO [org.apache.hadoop.mapred.Task] - Task attempt_local196560459_0001_r_000000_0 is allowed to commit now
2014-05-25 20:11:58,667 INFO [org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter] - Saved output of task 'attempt_local196560459_0001_r_000000_0' to hdfs://localhost:9000/data/score_out3/_temporary/0/task_local196560459_0001_r_000000
2014-05-25 20:11:58,672 INFO [org.apache.hadoop.mapred.LocalJobRunner] - reduce > reduce
2014-05-25 20:11:58,673 INFO [org.apache.hadoop.mapred.Task] - Task 'attempt_local196560459_0001_r_000000_0' done.
2014-05-25 20:11:58,673 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Finishing task: attempt_local196560459_0001_r_000000_0
2014-05-25 20:11:58,673 INFO [org.apache.hadoop.mapred.LocalJobRunner] - Reduce task executor complete.
2014-05-25 20:11:59,034 INFO [org.apache.hadoop.mapreduce.Job] - map 100% reduce 100%
2014-05-25 20:11:59,035 INFO [org.apache.hadoop.mapreduce.Job] - Job job_local196560459_0001 completed successfully
2014-05-25 20:11:59,099 INFO [org.apache.hadoop.mapreduce.Job] - Counters: 38
	File System Counters
		FILE: Number of bytes read=3536
		FILE: Number of bytes written=883072
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=144
		HDFS: Number of bytes written=16
		HDFS: Number of read operations=37
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=6
	Map-Reduce Framework
		Map input records=9
		Map output records=9
		Map output bytes=54
		Map output materialized bytes=90
		Input split bytes=321
		Combine input records=0
		Combine output records=0
		Reduce input groups=3
		Reduce shuffle bytes=90
		Reduce input records=9
		Reduce output records=3
		Spilled Records=18
		Shuffled Maps =3
		Failed Shuffles=0
		Merged Map outputs=3
		GC time elapsed (ms)=188
		CPU time spent (ms)=0
		Physical memory (bytes) snapshot=0
		Virtual memory (bytes) snapshot=0
		Total committed heap usage (bytes)=1585971200
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters
		Bytes Read=48
	File Output Format Counters
		Bytes Written=16
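For reference, the progress lines ("map 0% reduce 0%", the counter summary, and so on) are emitted through log4j by the job client when the job is submitted in verbose mode. The following is a minimal driver sketch; the class name ProgressDemoDriver and the command-line arguments are hypothetical and only illustrate where waitForCompletion(true) fits in, they are not taken from the original article:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ProgressDemoDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "progress demo");
        job.setJarByClass(ProgressDemoDriver.class);
        // No mapper or reducer is set, so the identity Mapper/Reducer run;
        // that is enough to produce the map/reduce progress output.
        FileInputFormat.addInputPath(job, new Path(args[0]));    // e.g. an HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // must not already exist
        // waitForCompletion(true) runs in verbose mode: progress and the final
        // counters are logged via log4j, which is why nothing shows up on the
        // console until log4j has an appender configured.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}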
 

