A simple test program for Hadoop file compression and decompression:
package org.myorg;

import java.io.*;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.util.ReflectionUtils;

public class StreamCompressor {
    public static void main(String[] args) throws Exception {
        // The codec class name is passed on the command line
        String codecClassName = args[0];
        Class<?> codecClass = Class.forName(codecClassName);
        Configuration conf = new Configuration();
        CompressionCodec codec =
                (CompressionCodec) ReflectionUtils.newInstance(codecClass, conf);

        // Compress the string and write it to the file "text"
        CompressionOutputStream out =
                codec.createOutputStream(new FileOutputStream(new File("text")));
        String str = "Try compress and decompress";
        byte[] bytes = str.getBytes();
        out.write(bytes);
        out.finish();
        out.close();

        // Decompress the data from the "text" file and print it to the console
        InputStream in =
                codec.createInputStream(new FileInputStream(new File("text")));
        BufferedInputStream bfin = new BufferedInputStream(in);
        byte[] buf = new byte[1024];
        int len = bfin.read(buf);
        System.out.println(new String(buf, 0, len));
        bfin.close();
    }
}
1. The args[0] value is org.apache.hadoop.io.compress.GzipCodec.
2. First, create a compressed output stream out and write the data ("Try compress and decompress") to it, then finish and close the stream. At this point the data has been compressed into the file "text".
3. Then open a decompressing input stream on the "text" file and print its contents to the console. The output result is as follows; the meaning of some of the warning messages is still unclear.
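The same compress-then-decompress round trip can be sketched with the JDK's built-in GZIP streams, so it runs without a Hadoop installation. This is only an illustrative sketch; the class and method names below are made up for the example and are not part of the original program, and it compresses into an in-memory buffer instead of a file:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipRoundTrip {

    // Compress a string with GZIP, then decompress it and return the result.
    static String roundTrip(String text) throws IOException {
        // Compress into an in-memory buffer instead of a file
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        GZIPOutputStream out = new GZIPOutputStream(compressed);
        out.write(text.getBytes("UTF-8"));
        out.close(); // close() also writes the GZIP trailer

        // Decompress and read the bytes back
        GZIPInputStream in = new GZIPInputStream(
                new ByteArrayInputStream(compressed.toByteArray()));
        ByteArrayOutputStream restored = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        int len;
        while ((len = in.read(buf)) != -1) {
            restored.write(buf, 0, len);
        }
        in.close();
        return restored.toString("UTF-8");
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip("Try compress and decompress"));
    }
}
```

The structure mirrors the Hadoop version: a wrapping output stream compresses on write, and a wrapping input stream decompresses on read; only the codec source differs (java.util.zip here, a reflectively loaded CompressionCodec in the Hadoop program).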