New- and old-API issues encountered when sorting with TotalOrderPartitioner on a hadoop-2.2.0 cluster


My project was recently migrated from hadoop-0.20.2 to hadoop-2.2.0. The older MapReduce jobs were written against the old API, and most of them ran unchanged in the new environment; only the two jobs involved in a global sort failed, both with the same kind of error:



1. wrong key class: org.apache.hadoop.io.LongWritable is not class com.cmri.bcpdm.v2.filters.sort.NewText
       at org.apache.hadoop.io.SequenceFile$RecordCompressWriter.append(SequenceFile.java:1380)
       at org.apache.hadoop.mapreduce.lib.partition.InputSampler.writePartitionFile(InputSampler.java:340)
       at org.apache.hadoop.mapred.lib.InputSampler.writePartitionFile(InputSampler.java:49)
       at com.cmri.bcpdm.v2.filters.sort.Sort.run(Sort.java:295)

2. wrong key class: org.apache.hadoop.io.LongWritable is not class org.apache.hadoop.io.IntWritable
       at org.apache.hadoop.io.SequenceFile$RecordCompressWriter.append(SequenceFile.java:1380)
       at org.apache.hadoop.mapreduce.lib.partition.InputSampler.writePartitionFile(InputSampler.java:340)
       at org.apache.hadoop.mapred.lib.InputSampler.writePartitionFile(InputSampler.java:49)
       at com.cmri.bcpdm.v2.filters.counttransform.CountTransform.run(CountTransform.java:223)

At first I couldn't figure out what was going on. After a long search online, I found the Sort.java program among the examples in the Hadoop source package; carefully comparing the old and new versions made it clear the old code needed to be rewritten against the new API. The old API lives in the org.apache.hadoop.mapred package, the new one in org.apache.hadoop.mapreduce. (As far as I can tell, the stack traces above point at the cause: writePartitionFile appends keys read through the job's InputFormat into a SequenceFile declared with the map output key class, so when the two key classes disagree, append() throws exactly this "wrong key class" error.)
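For reference, a minimal new-API total-order sort driver looks roughly like the sketch below. This is modeled on the shipped Sort.java example, not the original project's code; the class name and paths are placeholders, and it needs a Hadoop 2.x runtime to compile and run.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;
import org.apache.hadoop.mapreduce.lib.partition.InputSampler;
import org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner;

public class NewApiSort {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "sort");   // new API: o.a.h.mapreduce.Job
        job.setJarByClass(NewApiSort.class);

        // Identity mapper/reducer (the defaults) plus TotalOrderPartitioner
        // give a globally sorted output across all reducers.
        job.setInputFormatClass(SequenceFileInputFormat.class);
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        job.setPartitionerClass(TotalOrderPartitioner.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // Sample the input to build the partition file. Note: the sampler reads
        // keys through the job's InputFormat, so the input key class must match
        // the map output key class, or append() fails with "wrong key class".
        InputSampler.Sampler<Text, Text> sampler =
                new InputSampler.RandomSampler<>(0.1, 10000, 10);
        TotalOrderPartitioner.setPartitionFile(job.getConfiguration(),
                new Path(args[1] + "_partitions"));
        InputSampler.writePartitionFile(job, sampler);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```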


After rewriting the job along the lines of the example, the errors above went away, but a new one appeared:

Can't read partitions file
Caused by: java.io.FileNotFoundException: File _partition.lst does not exist.

In other words, the partition file could not be found.

More googling finally turned up the answer: everywhere the code passed the original conf, it has to pass Job.getConfiguration() instead. The original post didn't explain why, but the reason seems to be that the new-API Job takes a copy of the Configuration it is given, so settings such as the partition file path written to the original conf never reach the running job.
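Concretely, the fix looks like the fragment below (a sketch, not the project's actual code; the path is the TotalOrderPartitioner default name):

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.partition.InputSampler;
import org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner;

public class PartitionFileFix {
    public static void configure(Job job) throws Exception {
        Path partitionFile = new Path("_partition.lst");

        // WRONG (old habit): writing into the Configuration that was passed to
        // Job.getInstance(conf). The Job holds its own copy, so the tasks never
        // see this path and fail with "Can't read partitions file":
        //   TotalOrderPartitioner.setPartitionFile(conf, partitionFile);

        // RIGHT: write into the Job's own configuration.
        TotalOrderPartitioner.setPartitionFile(job.getConfiguration(), partitionFile);

        InputSampler.Sampler<Text, Text> sampler =
                new InputSampler.RandomSampler<>(0.1, 10000, 10);
        InputSampler.writePartitionFile(job, sampler);
    }
}
```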

PS: while rewriting with the new API I hit another problem. Calls like conf.set("name", "value") that set job-global variables must come before the Job is defined, i.e. as early as possible. Otherwise the setting may not take effect, and the map or reduce tasks read null for that variable, causing errors later on.
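This behavior is easy to reproduce in miniature. The toy classes below are NOT Hadoop code, just a model of it: they mimic the way the new-API Job snapshots its Configuration at construction time, which is why a conf.set() placed after the Job is created is invisible to the tasks.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model (not Hadoop itself) of Job copying its Configuration.
class ToyConf {
    private final Map<String, String> props = new HashMap<>();
    ToyConf() {}
    ToyConf(ToyConf other) { props.putAll(other.props); }  // copy constructor
    void set(String key, String value) { props.put(key, value); }
    String get(String key) { return props.get(key); }
}

class ToyJob {
    private final ToyConf snapshot;
    ToyJob(ToyConf conf) { this.snapshot = new ToyConf(conf); }  // snapshot taken here
    ToyConf getConfiguration() { return snapshot; }              // what the tasks read
}

public class ConfOrderingDemo {
    public static void main(String[] args) {
        ToyConf conf = new ToyConf();
        conf.set("my.early", "visible");   // set BEFORE the job: tasks see it
        ToyJob job = new ToyJob(conf);
        conf.set("my.late", "lost");       // set AFTER the job: tasks read null

        System.out.println(job.getConfiguration().get("my.early"));  // visible
        System.out.println(job.getConfiguration().get("my.late"));   // null
    }
}
```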


This article is from the "7389921" blog, please be sure to keep this source http://7399921.blog.51cto.com/7389921/1584544
