Hadoop ships with a set of Writable implementations that meet most requirements, but in some cases we need to build a new implementation tailored to our own needs. With a custom Writable, we have complete control over the binary representation and the sort order.
To demonstrate how to create a custom Writable type, we will write an implementation that represents a pair of strings:
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;

public class TextPair implements WritableComparable<TextPair> {

    private Text first;
    private Text second;

    public TextPair() {
        set(new Text(), new Text());
    }

    public TextPair(String first, String second) {
        set(new Text(first), new Text(second));
    }

    public TextPair(Text first, Text second) {
        set(first, second);
    }

    public void set(Text first, Text second) {
        this.first = first;
        this.second = second;
    }

    public Text getFirst() {
        return first;
    }

    public Text getSecond() {
        return second;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        first.write(out);
        second.write(out);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        first.readFields(in);
        second.readFields(in);
    }

    @Override
    public int hashCode() {
        return first.hashCode() * 163 + second.hashCode();
    }

    @Override
    public boolean equals(Object o) {
        if (o instanceof TextPair) {
            TextPair tp = (TextPair) o;
            return first.equals(tp.first) && second.equals(tp.second);
        }
        return false;
    }

    @Override
    public String toString() {
        return first + "\t" + second;
    }

    @Override
    public int compareTo(TextPair tp) {
        int cmp = first.compareTo(tp.first);
        if (cmp != 0) {
            return cmp;
        }
        return second.compareTo(tp.second);
    }
}
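The write/readFields contract above can be exercised without a cluster: serialize the pair to a byte stream, then read the fields back in the same order. The sketch below mirrors that round trip using only the JDK; writeUTF/readUTF stand in for Text's VInt-prefixed UTF-8 encoding, and the class name PairRoundTrip is made up for this illustration.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// A minimal, Hadoop-free sketch of the Writable round trip.
public class PairRoundTrip {

    // Mirrors TextPair.write(DataOutput): emit both fields in order.
    public static byte[] serialize(String first, String second) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(baos);
        out.writeUTF(first);
        out.writeUTF(second);
        return baos.toByteArray();
    }

    // Mirrors TextPair.readFields(DataInput): read them back in the same order.
    public static String[] deserialize(byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        return new String[] { in.readUTF(), in.readUTF() };
    }
}
```

The essential point the round trip demonstrates is that readFields must consume fields in exactly the order write produced them; there is no field naming or tagging in the stream.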
Implementing a RawComparator for speed
A further optimization is possible. Keys in MapReduce must be compared, and since they arrive already serialized, the naive approach is to deserialize each key into an object and then call compareTo on it. That is inefficient. Can we instead compare the serialized bytes directly? The answer is yes.
We just need to locate each member field within TextPair's serialized bytes and compare those fields directly:
public static class Comparator extends WritableComparator {

    private static final Text.Comparator TEXT_COMPARATOR = new Text.Comparator();

    public Comparator() {
        super(TextPair.class);
    }

    @Override
    public int compare(byte[] b1, int s1, int l1,
                       byte[] b2, int s2, int l2) {
        try {
            // Length of the first Text field = size of its VInt length
            // prefix + the value of that VInt (the encoded string length).
            int firstL1 = WritableUtils.decodeVIntSize(b1[s1]) + readVInt(b1, s1);
            int firstL2 = WritableUtils.decodeVIntSize(b2[s2]) + readVInt(b2, s2);
            int cmp = TEXT_COMPARATOR.compare(b1, s1, firstL1, b2, s2, firstL2);
            if (cmp != 0) {
                return cmp;
            }
            return TEXT_COMPARATOR.compare(b1, s1 + firstL1, l1 - firstL1,
                                           b2, s2 + firstL2, l2 - firstL2);
        } catch (IOException e) {
            throw new IllegalArgumentException(e);
        }
    }
}
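The idea behind raw comparison can be illustrated without Hadoop at all. The sketch below encodes each field with a one-byte length prefix (a simplification of Text's VInt prefix, assuming strings shorter than 128 bytes) and compares the payload bytes directly, skipping the prefix just as Text.Comparator does; the names RawCompareSketch, encode, and compareEncoded are invented for this example.

```java
import java.nio.charset.StandardCharsets;

// Simplified illustration of byte-level comparison of serialized fields.
public class RawCompareSketch {

    // Encode a short string as [length byte][UTF-8 bytes].
    // Assumes the encoded length fits in one byte (< 128).
    public static byte[] encode(String s) {
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
        byte[] out = new byte[utf8.length + 1];
        out[0] = (byte) utf8.length;
        System.arraycopy(utf8, 0, out, 1, utf8.length);
        return out;
    }

    // Compare two encoded fields without deserializing: skip the length
    // prefix, compare payload bytes as unsigned values, shorter wins ties.
    public static int compareEncoded(byte[] b1, byte[] b2) {
        int l1 = b1[0] & 0xff;
        int l2 = b2[0] & 0xff;
        int n = Math.min(l1, l2);
        for (int i = 0; i < n; i++) {
            int cmp = (b1[1 + i] & 0xff) - (b2[1 + i] & 0xff);
            if (cmp != 0) {
                return cmp;
            }
        }
        return l1 - l2;
    }
}
```

For ASCII data this byte order matches String ordering, which is why the raw comparator can skip deserialization entirely; note that the prefix must be skipped, since comparing it would sort shorter strings before longer ones regardless of content.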
Custom comparators
Sometimes, in addition to the default comparator, you may need a custom comparator that produces a different sort order. Consider the following example, which compares only the first field (the name); the two compare overloads have the same meaning, both comparing by the name alone:
public static class FirstComparator extends WritableComparator {

    private static final Text.Comparator TEXT_COMPARATOR = new Text.Comparator();

    public FirstComparator() {
        super(TextPair.class);
    }

    @Override
    public int compare(byte[] b1, int s1, int l1,
                       byte[] b2, int s2, int l2) {
        try {
            int firstL1 = WritableUtils.decodeVIntSize(b1[s1]) + readVInt(b1, s1);
            int firstL2 = WritableUtils.decodeVIntSize(b2[s2]) + readVInt(b2, s2);
            // Only the first field participates in the comparison.
            return TEXT_COMPARATOR.compare(b1, s1, firstL1, b2, s2, firstL2);
        } catch (IOException e) {
            throw new IllegalArgumentException(e);
        }
    }

    @Override
    public int compare(WritableComparable a, WritableComparable b) {
        if (a instanceof TextPair && b instanceof TextPair) {
            return ((TextPair) a).getFirst().compareTo(((TextPair) b).getFirst());
        }
        return super.compare(a, b);
    }
}
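The effect of a first-field-only comparator is that pairs sharing a first field end up adjacent after sorting, which is what grouping in the reduce phase relies on. The plain-Java sketch below shows the same ordering rule applied to String[] pairs; the class and method names (FirstFieldSort, sortByFirst) are invented for this illustration.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical plain-Java analogue of FirstComparator: order pairs by
// the first field only, ignoring the second field entirely.
public class FirstFieldSort {

    public static final Comparator<String[]> FIRST_ONLY =
            Comparator.comparing(p -> p[0]);

    // Returns a copy of the input sorted by first field only.
    public static List<String[]> sortByFirst(List<String[]> pairs) {
        List<String[]> copy = new ArrayList<>(pairs);
        copy.sort(FIRST_ONLY);
        return copy;
    }
}
```

Because Java's list sort is stable, pairs with equal first fields keep their original relative order, mirroring how a grouping comparator leaves secondary order to be controlled separately.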