Hadoop Programming Tips (7)---Defining the output file format and outputting to different folders

Source: Internet
Author: User

Code test environment: Hadoop 2.4

Application scenario: this technique is useful whenever a custom output data format is required, including customizing how the output records are written, the output path, the output file names, and so on.

The output file formats built into Hadoop are:

1) FileOutputFormat<K,V>: the commonly used parent class;

2) TextOutputFormat<K,V>: the default format, which writes key/value pairs as text;

3) SequenceFileOutputFormat<K,V>: serialized (binary) file output;

4) MultipleOutputs<K,V>: can route output data to different folders;

5) NullOutputFormat<K,V>: discards all output (the equivalent of writing to /dev/null); useful when all the logic is handled inside the MapReduce job itself and the job writes its own results, so no framework output is needed;

6) LazyOutputFormat<K,V>: only creates a file once write() is actually invoked, so if write() is never called, no empty file is produced;
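As a sketch of how these built-in formats are selected (the `choose` helper and its `kind` argument are hypothetical, only there to show each call side by side), each format is plugged in through the job configuration:

```java
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class OutputFormatChoices {
    // Hypothetical helper: shows how each built-in output format is wired
    // into a Job that is assumed to be configured elsewhere.
    public static void choose(Job job, String kind) {
        switch (kind) {
        case "text":      // default: key/value pairs written as text lines
            job.setOutputFormatClass(TextOutputFormat.class);
            break;
        case "sequence":  // binary, splittable; good for chaining jobs
            job.setOutputFormatClass(SequenceFileOutputFormat.class);
            break;
        case "null":      // discard all framework output
            job.setOutputFormatClass(NullOutputFormat.class);
            break;
        case "lazy":      // create output files only on the first write()
            LazyOutputFormat.setOutputFormatClass(job, TextOutputFormat.class);
            break;
        }
    }
}
```

This is job wiring only; it needs the Hadoop libraries on the classpath and a configured Job to run.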

Steps:

As with custom input formats, you can define your own output format by following these steps:

1) Define a class that extends OutputFormat; in practice one usually extends FileOutputFormat;

2) Implement its getRecordWriter method, returning a RecordWriter instance;

3) Define a class that extends RecordWriter and implement its write method, which writes each <key, value> pair to the output file.


Example 1 (change the default output file name prefix and the default key/value separator):

Input data:


Define your own CustomOutputFormat (replaces the default file name prefix):

    package fz.outputformat;

    import java.io.IOException;

    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.RecordWriter;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class CustomOutputFormat extends FileOutputFormat<LongWritable, Text> {

        private String prefix = "custom_";

        @Override
        public RecordWriter<LongWritable, Text> getRecordWriter(TaskAttemptContext job)
                throws IOException, InterruptedException {
            // create a new writable file under the job's output directory
            Path outputDir = FileOutputFormat.getOutputPath(job);
            String subfix = job.getTaskAttemptID().getTaskID().toString();
            // file name: prefix + last 5 characters of the task id
            Path path = new Path(outputDir.toString() + "/" + prefix
                    + subfix.substring(subfix.length() - 5, subfix.length()));
            FSDataOutputStream fileOut = path.getFileSystem(job.getConfiguration()).create(path);
            return new CustomRecordWriter(fileOut);
        }
    }
Define your own CustomRecordWriter (specifies the key/value delimiter):

    package fz.outputformat;

    import java.io.IOException;
    import java.io.PrintWriter;

    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.RecordWriter;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;

    public class CustomRecordWriter extends RecordWriter<LongWritable, Text> {

        private PrintWriter out;
        private String separator = ",";  // custom key/value delimiter

        public CustomRecordWriter(FSDataOutputStream fileOut) {
            out = new PrintWriter(fileOut);
        }

        @Override
        public void write(LongWritable key, Text value)
                throws IOException, InterruptedException {
            out.println(key.get() + separator + value.toString());
        }

        @Override
        public void close(TaskAttemptContext context)
                throws IOException, InterruptedException {
            out.close();
        }
    }

The main (driver) class:

    package fz.outputformat;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    public class FileOutputFormatDriver extends Configured implements Tool {

        public static void main(String[] args) throws Exception {
            ToolRunner.run(new Configuration(), new FileOutputFormatDriver(), args);
        }

        @Override
        public int run(String[] arg0) throws Exception {
            if (arg0.length != 3) {
                System.err.println("Usage:\nfz.outputformat.FileOutputFormatDriver <in> <out> <numReducer>");
                return -1;
            }
            Configuration conf = getConf();
            Path in = new Path(arg0[0]);
            Path out = new Path(arg0[1]);
            boolean delete = out.getFileSystem(conf).delete(out, true);
            System.out.println("deleted " + out + "? " + delete);

            Job job = Job.getInstance(conf, "FileOutputFormat test job");
            job.setJarByClass(getClass());
            job.setInputFormatClass(TextInputFormat.class);
            job.setOutputFormatClass(CustomOutputFormat.class);

            // identity mapper and reducer; only the output format is custom
            job.setMapperClass(Mapper.class);
            job.setMapOutputKeyClass(LongWritable.class);
            job.setMapOutputValueClass(Text.class);
            job.setOutputKeyClass(LongWritable.class);
            job.setOutputValueClass(Text.class);
            job.setNumReduceTasks(Integer.parseInt(arg0[2]));
            job.setReducerClass(Reducer.class);

            FileInputFormat.setInputPaths(job, in);
            FileOutputFormat.setOutputPath(job, out);
            return job.waitForCompletion(true) ? 0 : -1;
        }
    }

To view the output:

The output shows that both the file format and the file name match expectations.


Example 2 (output data to different folders based on the key and value):
Define the main class (what actually changes is how the output is configured):

    package fz.multipleoutputformat;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;
    import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    public class FileOutputFormatDriver extends Configured implements Tool {

        public static void main(String[] args) throws Exception {
            ToolRunner.run(new Configuration(), new FileOutputFormatDriver(), args);
        }

        @Override
        public int run(String[] arg0) throws Exception {
            if (arg0.length != 3) {
                System.err.println("Usage:\nfz.multipleoutputformat.FileOutputFormatDriver <in> <out> <numReducer>");
                return -1;
            }
            Configuration conf = getConf();
            Path in = new Path(arg0[0]);
            Path out = new Path(arg0[1]);
            boolean delete = out.getFileSystem(conf).delete(out, true);
            System.out.println("deleted " + out + "? " + delete);

            Job job = Job.getInstance(conf, "FileOutputFormat test job");
            job.setJarByClass(getClass());
            job.setInputFormatClass(TextInputFormat.class);
            // register the two named outputs that the reducer writes to
            MultipleOutputs.addNamedOutput(job, "ignore", TextOutputFormat.class,
                    LongWritable.class, Text.class);
            MultipleOutputs.addNamedOutput(job, "other", TextOutputFormat.class,
                    LongWritable.class, Text.class);

            job.setMapperClass(Mapper.class);
            job.setMapOutputKeyClass(LongWritable.class);
            job.setMapOutputValueClass(Text.class);
            job.setOutputKeyClass(LongWritable.class);
            job.setOutputValueClass(Text.class);
            job.setNumReduceTasks(Integer.parseInt(arg0[2]));
            job.setReducerClass(MultipleReducer.class);

            FileInputFormat.setInputPaths(job, in);
            FileOutputFormat.setOutputPath(job, out);
            return job.waitForCompletion(true) ? 0 : -1;
        }
    }
Define your own reducer (custom logic is needed here, because the data is routed to different outputs based on the key and value):

    package fz.multipleoutputformat;

    import java.io.IOException;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

    public class MultipleReducer extends Reducer<LongWritable, Text, LongWritable, Text> {

        private MultipleOutputs<LongWritable, Text> out;

        @Override
        public void setup(Context cxt) {
            out = new MultipleOutputs<LongWritable, Text>(cxt);
        }

        @Override
        public void reduce(LongWritable key, Iterable<Text> value, Context cxt)
                throws IOException, InterruptedException {
            for (Text v : value) {
                if (v.toString().startsWith("ignore")) {
                    // route to the "ignore" named output, base path "ign"
                    out.write("ignore", key, v, "ign");
                } else {
                    // everything else goes to the "other" named output
                    out.write("other", key, v, "oth");
                }
            }
        }

        @Override
        public void cleanup(Context cxt) throws IOException, InterruptedException {
            out.close();
        }
    }

To view the output:


The output shows that records are indeed written to different files according to their values. Notice, however, that the default part files are still generated, each with a size of 0; for now this remains unresolved in the article.
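The empty default part files appear because the job's standard output format still creates one part file per reducer even though every record goes through MultipleOutputs. A common remedy (not part of the original article) is LazyOutputFormat, item 6 in the list above, which defers file creation until the first write(). A minimal sketch of the driver change, assuming the same Job setup as in Example 2:

```java
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class LazyOutputConfig {
    // Sketch: instead of job.setOutputFormatClass(TextOutputFormat.class),
    // wrap the format so default part files are created only if something
    // is actually written to them. With all records going through
    // MultipleOutputs, no empty part-r-* files should be left behind.
    public static void configure(Job job) {
        LazyOutputFormat.setOutputFormatClass(job, TextOutputFormat.class);
    }
}
```

This is a configuration fragment; it needs the Hadoop libraries and a configured Job to run.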


Summary: by defining your own output format you can meet special custom requirements, but in general the built-in Hadoop output formats suffice, so the technique in Example 1 has limited applicability.

Simply using Hadoop's built-in MultipleOutputs, on the other hand, lets you write output to different folders based on different characteristics of the data, which is very practical.
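One note on the folder behaviour: the baseOutputPath argument of MultipleOutputs.write may contain "/" characters, and a path such as "ign/part" places the output in an actual ign/ subdirectory of the job output rather than only prefixing the file names. A sketch of that variant (the `route` helper is hypothetical; it assumes the same reducer setup as in Example 2):

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

public class FolderRouting {
    // Hypothetical helper, called from Reducer.reduce(): embedding '/' in
    // baseOutputPath creates real subdirectories under the job output.
    static void route(MultipleOutputs<LongWritable, Text> out, LongWritable key, Text v)
            throws IOException, InterruptedException {
        if (v.toString().startsWith("ignore")) {
            out.write("ignore", key, v, "ign/part"); // -> <outdir>/ign/part-r-00000
        } else {
            out.write("other", key, v, "oth/part");  // -> <outdir>/oth/part-r-00000
        }
    }
}
```

The named outputs ("ignore", "other") must still be registered with MultipleOutputs.addNamedOutput in the driver, as in Example 2.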


Share, grow, be happy.

When reprinting, please cite the blog address: http://blog.csdn.net/fansy1990

