Use Protocol Buffers with LZO in Hadoop (2)

1. LZO Introduction

LZO is an encoding that offers a good compression ratio together with extremely fast compression speed. Its features:

  • Decompression is very fast.
  • Compression is lossless; the compressed data can be restored exactly.
  • LZO is block-based, so a file can be split into chunks and decompressed in parallel (after indexing; see the command below).

For installation instructions, refer to this article: lzo Installation
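
Note that in Hadoop an .lzo file only becomes splittable after an index has been built for it. With the hadoop-lzo jar referenced later in this article, the indexer is typically run like this (the input path is illustrative):

    hadoop jar /home/app_admin/apps/hadoop/lib/hadoop-lzo-0.4.16.jar \
        com.hadoop.compression.lzo.LzoIndexer /log/20120501

This writes a .lzo.index file next to each .lzo file, which MapReduce then uses to compute input splits.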

2. How to write a MapReduce program that reads and writes Protocol Buffers + LZO files

Here I used two classes from elephant-bird-1.0.jar:

mapred/input/LzoProtobufBlockInputFormat.java reads a proto + LZO data file and parses it into proto objects; it is mainly used as the input format of MapReduce programs.

mapred/output/LzoProtobufBlockOutputFormat.java takes proto objects and stores them as a proto + LZO data file; it is mainly used as the output format of MapReduce programs.
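
On the read side, the input format hands each deserialized message to the mapper wrapped in a ProtobufWritable. Below is a minimal sketch of such a mapper, assuming the generated class com.Logformat.Log from step 1 below; what it does with each record is purely illustrative:

    import java.io.IOException;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    import com.Logformat.Log;
    import com.twitter.elephantbird.mapreduce.io.ProtobufWritable;

    // Sketch: consumes records produced by LzoProtobufBlockInputFormat, which
    // delivers each deserialized protobuf message wrapped in a ProtobufWritable.
    public class ReadLogMapper
            extends Mapper<LongWritable, ProtobufWritable<Log>, Text, IntWritable> {

        private static final IntWritable ONE = new IntWritable(1);

        @Override
        protected void map(LongWritable key, ProtobufWritable<Log> value, Context context)
                throws IOException, InterruptedException {
            Log log = value.get();  // the deserialized proto object
            context.write(new Text(log.toString()), ONE);  // illustrative use only
        }
    }

In the driver you would pair this with LzoProtobufBlockInputFormat.setClassConf(Log.class, job.getConfiguration()) and the corresponding job.setInputFormatClass(...), mirroring the output-side setup shown in step 2.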

1. Generate the Java file with the Protocol Buffers compiler, protoc (described in the previous chapter)

Assume this generated file is called Log.java and that the message simply mirrors the fields of each input log line.
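
For illustration only, such a definition might look like the following (the field names are invented; protobuf-java-2.3.0 implies proto2 syntax, and java_multiple_files makes the message a top-level com.Logformat.Log class, matching the driver below):

    // log.proto -- hypothetical example; the real fields will differ
    option java_package = "com.Logformat";
    option java_multiple_files = true;

    message Log {
      optional string ip     = 1;
      optional string url    = 2;
      optional int32  status = 3;
    }

It would be compiled with protoc --java_out=<source dir> log.proto.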

2. Call hadoop to import data in the specified format:

hadoop jar /home/app_admin/load.jar com.Test -libjars /home/app_admin/lib/protobuf-java-2.3.0.jar,/home/app_admin/lib/netty-3.5.5.Final.jar,/home/app_admin/lib/elephant-bird-core-3.0.2.jar,/home/app_admin/lib/slf4j-api-1.6.4.jar,/home/app_admin/lib/slf4j-log4j12-1.6.4.jar,/home/app_admin/lib/commons-lang-2.4.jar,/home/app_admin/lib/guava-11.0.1.jar $d

The com.Test driver sets up the job like this:

    Job job = new Job(getConf(), "load");
    job.setJarByClass(com.Test.class);
    job.setOutputKeyClass(org.apache.hadoop.io.Text.class);
    job.setOutputValueClass(org.apache.hadoop.io.IntWritable.class);
    job.setMapperClass(com.TestMapper.class);
    job.setNumReduceTasks(0);  // map-only import job
    job.setInputFormatClass(org.apache.hadoop.mapreduce.lib.input.TextInputFormat.class);

    // Tell elephant-bird which protobuf class the output format serializes.
    LzoProtobufBlockOutputFormat.setClassConf(com.Logformat.Log.class, job.getConfiguration());
    job.setOutputFormatClass(com.twitter.elephantbird.mapreduce.output.LzoProtobufBlockOutputFormat.class);

    // Date to process: the first argument (yyyyMMdd) if given, else yesterday.
    Date date;
    if (args.length >= 1) {
        String st = args[0];
        SimpleDateFormat sdf = new SimpleDateFormat("yyyyMMdd");
        try {
            date = sdf.parse(st);
        } catch (ParseException ex) {
            throw new RuntimeException("input format error," + st);
        }
    } else {
        Calendar cal = Calendar.getInstance();
        cal.add(Calendar.DAY_OF_MONTH, -1);
        date = cal.getTime();
    }

    SimpleDateFormat sdf = new SimpleDateFormat("yyyyMMdd");
    String inputdir = "/log/raw/" + sdf.format(date) + "/*access.log*.gz";
    logger.info("inputdir = " + inputdir);
    String outfilename = "/log/" + new SimpleDateFormat("yyyyMMdd").format(date);
    logger.info("outfile dir = " + outfilename);

    // Delete any previous output, then wire up the paths and run the job.
    Path outFile = new Path(outfilename);
    FileSystem.get(job.getConfiguration()).delete(outFile, true);
    FileInputFormat.addInputPath(job, new Path(inputdir));
    FileOutputFormat.setOutputPath(job, outFile);
    job.waitForCompletion(true);
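
The article does not show com.TestMapper itself. Here is a minimal sketch of what it could look like, assuming the hypothetical Log fields from step 1 and tab-separated input lines. Note that LzoProtobufBlockOutputFormat serializes the ProtobufWritable value and ignores the key, so the Text/IntWritable output classes declared above are never exercised in this map-only job:

    import java.io.IOException;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    import com.Logformat.Log;
    import com.twitter.elephantbird.mapreduce.io.ProtobufWritable;

    // Sketch: turns each plain-text log line into a Log proto for the
    // proto + LZO output format. The field layout is assumed, not authoritative.
    public class TestMapper
            extends Mapper<LongWritable, Text, NullWritable, ProtobufWritable<Log>> {

        private final ProtobufWritable<Log> out = ProtobufWritable.newInstance(Log.class);

        @Override
        protected void map(LongWritable key, Text line, Context context)
                throws IOException, InterruptedException {
            String[] f = line.toString().split("\t");  // assumed field separator
            Log log = Log.newBuilder()
                         .setIp(f[0])
                         .setUrl(f[1])
                         .setStatus(Integer.parseInt(f[2]))
                         .build();
            out.set(log);
            context.write(NullWritable.get(), out);
        }
    }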

3. Use Pig to process the data

    register /home/app_admin/apps/hadoop/lib/hadoop-lzo-0.4.16.jar;
    register /home/app_admin/piglib/*.jar;  /* the com.Logformat.Log class lives here */
    register /home/app_admin/loadplainlog/loadplainlog-1.0.0.jar;

    /* first load one day's access log from hadoop */
    a = load '/log/$logdate/*.lzo' using com.twitter.elephantbird.pig.load.ProtobufPigLoader('com.Logformat.Log');

The MapReduce job above only uses the map function to import the data; all further processing is done in the Pig script, so no reduce phase is needed.
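
The loader exposes the proto fields as Pig fields by name, so the script can continue directly from a. A hypothetical continuation, using the field names assumed earlier:

    /* count requests per URL -- field names are illustrative */
    b = foreach a generate url;
    c = group b by url;
    d = foreach c generate group, COUNT(b) as hits;
    dump d;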
