Error when using Hive to analyze LZO data

Source: Internet
Author: User
Before the map job is launched, the small text files are merged into LZO files by a combine input format (CombineFileInputFormat), with the following job settings:

    conf.setInt("mapred.min.split.size", 1);
    // 600 MB max split, so that each compressed output file ends up around 120 MB
    conf.setLong("mapred.max.split.size", 600000000L);
    conf.set("mapred.output.compression.codec", "com.hadoop.compression.lzo.LzopCodec");
    conf.set("mapred.output.compression.type", "BLOCK");
    conf.setBoolean("mapred.output.compress", true);
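For context, here is a minimal sketch of a driver these settings could live in. This is not the author's code: the job name, class names, and argument handling are illustrative, and the author's combine input format and map logic are omitted.

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class LzoMergeJob {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(LzoMergeJob.class);
            conf.setJobName("merge-text-to-lzo"); // illustrative name

            // Split sizing: a large max split lets many small files combine
            // into one map task, so each compressed output file lands
            // around 120 MB.
            conf.setInt("mapred.min.split.size", 1);
            conf.setLong("mapred.max.split.size", 600000000L);

            // Emit block-compressed LZO output.
            conf.set("mapred.output.compression.codec",
                    "com.hadoop.compression.lzo.LzopCodec");
            conf.set("mapred.output.compression.type", "BLOCK");
            conf.setBoolean("mapred.output.compress", true);

            // The author's CombineFileInputFormat subclass and mapper would be
            // registered here via conf.setInputFormat(...) / conf.setMapperClass(...).
            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));
            JobClient.runJob(conf);
        }
    }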
Hive is then used to query the LZO directory, and the map tasks fail with the following task log:

    2014-03-03 17:00:01,494 WARN com.hadoop.compression.lzo.LzopInputStream: IOException in getCompressedData; likely LZO corruption.
    java.io.IOException: Compressed length 2004251197 exceeds max block size 67108864 (probably corrupt file)
        at com.hadoop.compression.lzo.LzopInputStream.getCompressedData(LzopInputStream.java:286)
        at com.hadoop.compression.lzo.LzopInputStream.decompress(LzopInputStream.java:256)
        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:83)
        at java.io.InputStream.read(InputStream.java:82)
        at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:209)
        at org.apache.hadoop.util.LineReader.readLine(LineReader.java:173)
        at org.apache.hadoop.util.LineReader.readLine(LineReader.java:308)
        at com.hadoop.mapred.DeprecatedLzoLineRecordReader.<init>(DeprecatedLzoLineRecordReader.java:64)
        at com.hadoop.mapred.DeprecatedLzoTextInputFormat.getRecordReader(DeprecatedLzoTextInputFormat.java:158)
        at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:65)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:355)
        at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.<init>(HadoopShimsSecure.java:316)
        at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getRecordReader(HadoopShimsSecure.java:430)
        at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:540)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:395)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:333)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
        at org.apache.hadoop.mapred.Child.main(Child.java:262)
    2014-03-03 17:00:01,501 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
    2014-03-03 17:00:01,503 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException: java.lang.reflect.InvocationTargetException
    2014-03-03 17:00:01,503 WARN org.apache.hadoop.mapred.Child: Error running child
    java.io.IOException: java.lang.reflect.InvocationTargetException
        at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
        at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
        at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:369)
        at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.<init>(HadoopShimsSecure.java:316)
        at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getRecordReader(HadoopShimsSecure.java:430)
        at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:540)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:395)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:333)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
        at org.apache.hadoop.mapred.Child.main(Child.java:262)
    Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:355)
        ... 10 more
    Caused by: java.io.IOException: Compressed length 2004251197 exceeds max block size 67108864 (probably corrupt file)
        at com.hadoop.compression.lzo.LzopInputStream.getCompressedData(LzopInputStream.java:286)
        at com.hadoop.compression.lzo.LzopInputStream.decompress(LzopInputStream.java:256)
        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:83)
        at java.io.InputStream.read(InputStream.java:82)
        at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:209)
        at org.apache.hadoop.util.LineReader.readLine(LineReader.java:173)
        at org.apache.hadoop.util.LineReader.readLine(LineReader.java:308)
        at com.hadoop.mapred.DeprecatedLzoLineRecordReader.<init>(DeprecatedLzoLineRecordReader.java:64)
        at com.hadoop.mapred.DeprecatedLzoTextInputFormat.getRecordReader(DeprecatedLzoTextInputFormat.java:158)
        at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:65)
        ... more
After searching through many articles, I finally found these settings in the job's job.xml:

    mapred.input.format.class=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat
    hive.hadoop.supports.splittable.combineinputformat=true

Setting hive.hadoop.supports.splittable.combineinputformat to false made the query run normally.
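The workaround can be applied per session with "set hive.hadoop.supports.splittable.combineinputformat=false;" in the Hive CLI, or persisted in hive-site.xml. A minimal sketch of the latter, assuming a standard Hive configuration file:

    <property>
      <name>hive.hadoop.supports.splittable.combineinputformat</name>
      <value>false</value>
    </property>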
The reason is that LZO-compressed files are not natively splittable; to make them splittable, an index must be built for each .lzo file. Here each LZO file is relatively small (around 120 MB), so no indexes were built and the files must not be split. With hive.hadoop.supports.splittable.combineinputformat=true, however, Hive generates splits inside the un-indexed .lzo files anyway, and a record reader that starts at an arbitrary offset misreads the bytes there as an LZO block header, which produces exactly the "probably corrupt file" error shown above.
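If splitting were actually needed (for example, with much larger .lzo files), the usual approach is to build the index mentioned above. A minimal sketch using hadoop-lzo's LzoIndexer; the class and its index(Path) method come from the hadoop-lzo library and may differ between versions:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;

    import com.hadoop.compression.lzo.LzoIndexer;

    public class IndexLzoFiles {
        public static void main(String[] args) throws Exception {
            // Build a .lzo.index file next to each .lzo file under the given
            // path, which lets LZO input formats split files at block boundaries.
            Configuration conf = new Configuration();
            LzoIndexer indexer = new LzoIndexer(conf);
            indexer.index(new Path(args[0])); // directories are walked recursively
        }
    }

The same indexer is typically invoked from the shell as "hadoop jar /path/to/hadoop-lzo.jar com.hadoop.compression.lzo.LzoIndexer <hdfs-path>".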

