Read custom Writable type values in a SequenceFile

Source: Internet
Author: User
Tags: hadoop, fs

1) Hadoop allows programmers to create custom data types. A custom key type must implement WritableComparable, because keys participate in sorting; a value type only needs to implement Writable. The following defines a DoubleArrayWritable that extends ArrayWritable:

package matrix;

import org.apache.hadoop.io.*;

public class DoubleArrayWritable extends ArrayWritable {
    public DoubleArrayWritable() {
        super(DoubleWritable.class);
    }

    public double[] convert2double(DoubleWritable[] w) {
        double[] value = new double[w.length];
        for (int i = 0; i < value.length; i++) {
            value[i] = Double.valueOf(w[i].get());
        }
        return value;
    }
}
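Under the hood, ArrayWritable gives DoubleArrayWritable its serialization for free. The Writable contract it relies on boils down to two methods, write(DataOutput) and readFields(DataInput). The following is a minimal stand-alone sketch of that contract using only the JDK (a hypothetical class with no Hadoop dependency, shown just to illustrate how a double array round-trips through the two methods):

```java
import java.io.*;

// Sketch of the Writable-style contract: a length-prefixed array of doubles.
// (Hypothetical stand-alone class; the real interface is org.apache.hadoop.io.Writable.)
class DoubleArraySketch {
    private double[] values = new double[0];

    public void set(double[] v) { values = v; }
    public double[] get() { return values; }

    // Corresponds to Writable.write(DataOutput): length first, then each element.
    public void write(DataOutput out) throws IOException {
        out.writeInt(values.length);
        for (double d : values) out.writeDouble(d);
    }

    // Corresponds to Writable.readFields(DataInput): read back in the same order.
    public void readFields(DataInput in) throws IOException {
        int n = in.readInt();
        values = new double[n];
        for (int i = 0; i < n; i++) values[i] = in.readDouble();
    }

    public static void main(String[] args) throws IOException {
        DoubleArraySketch w = new DoubleArraySketch();
        w.set(new double[] {1.5, 2.5, 3.5});

        // Serialize to an in-memory buffer, then deserialize into a fresh object.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        w.write(new DataOutputStream(bos));

        DoubleArraySketch r = new DoubleArraySketch();
        r.readFields(new DataInputStream(new ByteArrayInputStream(bos.toByteArray())));
        System.out.println(java.util.Arrays.toString(r.get())); // prints [1.5, 2.5, 3.5]
    }
}
```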

2) The following code reads the transb.txt file line by line, converts each line's values into a DoubleArrayWritable, and writes them into a SequenceFile.

package convert;

/**
 * Created with IntelliJ IDEA.
 * User: hadoop
 * Date: 16-1-19
 * Time: 3:09 PM
 */
import java.io.File;
import java.io.IOException;
import java.net.URI;

import org.apache.commons.io.FileUtils;
import org.apache.commons.io.LineIterator;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;

import matrix.DoubleArrayWritable;

public class SequenceFileWriteDemo {
    public static void main(String[] args) throws IOException {
        String uri = "/home/hadoop/srcdata/bdoublearrayseq";
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(uri), conf);
        Path path = new Path(uri);
        IntWritable key = new IntWritable();
        DoubleArrayWritable value = new DoubleArrayWritable();
        SequenceFile.Writer writer = null;
        try {
            writer = SequenceFile.createWriter(fs, conf, path,
                    key.getClass(), value.getClass());

            final LineIterator it2 = FileUtils.lineIterator(
                    new File("/home/hadoop/srcdata/transb.txt"), "UTF-8");
            try {
                int i = 0;
                String[] strings;
                DoubleWritable[] arrayDoubleWritables;
                while (it2.hasNext()) {
                    ++i;
                    final String line = it2.nextLine();
                    key.set(i);
                    strings = line.split("\t");
                    arrayDoubleWritables = new DoubleWritable[strings.length];
                    for (int j = 0; j < arrayDoubleWritables.length; j++) {
                        arrayDoubleWritables[j] = new DoubleWritable(Double.valueOf(strings[j]));
                    }
                    value.set(arrayDoubleWritables);
                    writer.append(key, value);
                }
            } finally {
                it2.close();
            }
        } finally {
            IOUtils.closeStream(writer);
        }
        System.out.println("OK");
    }
}
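Since the point of the post is reading those values back out again, a matching reader is worth sketching. The following is a sketch that mirrors the writer above, using the same old-style SequenceFile.Reader constructor and the convert2double helper from step 1 (the class name SequenceFileReadDemo and the paths are illustrative):

```java
package convert;

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Writable;

import matrix.DoubleArrayWritable;

public class SequenceFileReadDemo {
    public static void main(String[] args) throws IOException {
        String uri = "/home/hadoop/srcdata/bdoublearrayseq"; // same path the writer used
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(uri), conf);
        SequenceFile.Reader reader = null;
        try {
            reader = new SequenceFile.Reader(fs, new Path(uri), conf);
            IntWritable key = new IntWritable();
            DoubleArrayWritable value = new DoubleArrayWritable();
            while (reader.next(key, value)) {
                // ArrayWritable.get() returns Writable[]; cast each element back
                // to DoubleWritable so convert2double can unwrap the row.
                Writable[] raw = value.get();
                DoubleWritable[] dw = new DoubleWritable[raw.length];
                for (int j = 0; j < raw.length; j++) {
                    dw[j] = (DoubleWritable) raw[j];
                }
                double[] row = value.convert2double(dw);
                System.out.println(key.get() + " -> " + java.util.Arrays.toString(row));
            }
        } finally {
            IOUtils.closeStream(reader);
        }
    }
}
```

Note that reader.next(key, value) deserializes into the DoubleArrayWritable instance in place, which is why Hadoop must be able to load the matrix.DoubleArrayWritable class; that is exactly what fails in step 3 below until the jar is put on the classpath.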

3) Upload the seq file, then view its contents with the command:

hadoop fs -text /lz/data/transbseq

This fails with the error:

java.lang.RuntimeException: java.io.IOException: WritableName can't load class: matrix.DoubleArrayWritable

4) The reason is that the newly defined double-array type lives in a third-party package that Hadoop cannot load on its own. Package the DoubleArrayWritable source above into a jar, then add the jar's path to HADOOP_CLASSPATH in the hadoop-env.sh file on the master node; multiple jars are separated by colons (:), as in any Java classpath:

export HADOOP_CLASSPATH=/home/hadoop/doublearraywritable.jar
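The jar itself can be produced with the standard JDK tools. A sketch, assuming the class is compiled against the cluster's Hadoop jars (the paths are illustrative; `hadoop classpath` prints the classpath the cluster itself uses):

```shell
# Compile the custom Writable against the Hadoop jars, then package it.
javac -classpath "$(hadoop classpath)" matrix/DoubleArrayWritable.java
jar cf /home/hadoop/doublearraywritable.jar matrix/DoubleArrayWritable.class

# In hadoop-env.sh on the master node; colon-separate multiple jars.
export HADOOP_CLASSPATH=/home/hadoop/doublearraywritable.jar
```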

5) Then run hadoop fs -text /lz/data/transbseq again to see the contents of the file.

Reference:

http://www.eveningdrum.com/2014/05/04/hadoop%E4%BD%BF%E7%94%A8%E7%AC%AC%E4%B8%89%E6%96%B9%E4%BE%9D%E8%B5%96jar%E5%8C%85/
