MapReduce Programming for Learning

MapReduce consists of two phases: map and reduce. Each phase takes key-value pairs as input and produces key-value pairs as output.

The key-value pair format in the map stage is determined by the input format. With the default TextInputFormat, each line is processed as one record: the key is the byte offset of the line from the beginning of the file, and the value is the text of the line. The key-value pair format output by the map phase must match the key-value pair format expected as input by the reduce phase.
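To make that type chain concrete, here is a minimal sketch (not the weather example yet; the class names TypeChainSketch, LineMapper, and SumReducer are ours) showing how the default TextInputFormat hands the mapper a LongWritable offset and a Text line, and how the mapper's output types must line up with the reducer's input types:

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class TypeChainSketch {

    // Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>: with TextInputFormat, KEYIN is the
    // byte offset of the line within the file and VALUEIN is the line itself
    public static class LineMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            // Emit a constant key and the line length; any (Text, IntWritable) pair would do
            context.write(new Text("line"), new IntWritable(line.getLength()));
        }
    }

    // Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT>: KEYIN/VALUEIN must match the
    // mapper's KEYOUT/VALUEOUT above
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int total = 0;
            for (IntWritable v : values) {
                total += v.get();
            }
            context.write(key, new IntWritable(total));
        }
    }
}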

Let's start with an example. Suppose we need to process a batch of weather data in the following format:

Records are stored as ASCII text, one record per line.
Counting characters from 0, characters 15 to 18 are the year.
Characters 25 to 29 are the temperature, where character 25 is the sign (+/-).

Example text:

0067011990999991950051507+0000+
0043011990999991950051512+0022+
0043011990999991950051518-0011+
0043012650999991949032412+0111+
0043012650999991949032418+0078+
0067011990999991937051507+0001+
0043011990999991937051512-0002+
0043011990999991945051518+0001+
0043012650999991945032412+0002+
0043012650999991945032418+0078+
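Before writing the mapper, it is worth sanity-checking those offsets on the first sample record with a quick standalone snippet (the class name is ours; this is not part of the MapReduce job):

public class RecordParseCheck {
    public static void main(String[] args) {
        String record = "0067011990999991950051507+0000+";  // first sample record
        String year = record.substring(15, 19);             // "1950"
        int temperature;
        if (record.charAt(25) == '+') {
            // skip the '+' sign and parse the digits at positions 26-29
            temperature = Integer.parseInt(record.substring(26, 30));
        } else {
            // keep the '-' sign so parseInt yields a negative value
            temperature = Integer.parseInt(record.substring(25, 30));
        }
        System.out.println(year + " " + temperature);        // prints: 1950 0
    }
}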

On to the code:

package hadoop;

import java.io.IOException;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

/**
 * Created by root on 4/23/16.
 */
public class HadoopTest extends Configured implements Tool {

    // The mapper extracts the year and the temperature from each input record
    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {

        // Implement the map function
        @Override
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            String year = line.substring(15, 19);
            int airTemperature;
            if (line.charAt(25) == '+') {
                airTemperature = Integer.parseInt(line.substring(26, 30));
            } else {
                airTemperature = Integer.parseInt(line.substring(25, 30));
            }
            context.write(new Text(year), new IntWritable(airTemperature));
        }
    }

    // The reducer emits the maximum temperature seen for each year
    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int maxValue = Integer.MIN_VALUE;
            for (IntWritable value : values) {
                maxValue = Math.max(maxValue, value.get());
            }
            context.write(key, new IntWritable(maxValue));
        }
    }

    @Override
    public int run(String[] args) throws Exception {
        // For testing, the input and output paths are hard-coded here
        String inputPath = "/usr/local/hadooptext.txt";
        String outputPath = "/usr/local/hadoopout";

        // Declare a Job object; getConf() returns the Hadoop configuration inherited from Configured
        Job job = new Job(getConf());

        // Set the job name
        job.setJobName("AvgSorce");

        // Set the key class for the job output
        job.setOutputKeyClass(Text.class);

        // Set the mapper (the default is the identity mapper); here we use the Map class above
        job.setMapperClass(Map.class);

        // A combiner can be understood as a mini reducer: it cuts down the data sent over the
        // network and the work left for the reducer (see the standalone check after this listing)
        job.setCombinerClass(Reduce.class);

        // Set the reducer so the Reduce class above runs in the reduce phase
        // (otherwise the default identity reducer would be used)
        job.setReducerClass(Reduce.class);

        // Set the value class for the job output
        job.setOutputValueClass(IntWritable.class);

        // Set the input format
        job.setInputFormatClass(TextInputFormat.class);

        // Set the input and output directories
        FileInputFormat.setInputPaths(job, new Path(inputPath));
        FileOutputFormat.setOutputPath(job, new Path(outputPath));

        boolean success = job.waitForCompletion(true);
        return success ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        int ret = ToolRunner.run(new HadoopTest(), args);
        System.exit(ret);
    }
}
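A note on the combiner line: reusing the reducer as a combiner is only correct because taking a maximum is associative and commutative, so the max of per-split maxima equals the max over all values. The tiny standalone check below (plain Java, no Hadoop; the class name and the split boundaries are ours, with values taken from the sample temperatures) illustrates the idea:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CombinerCheck {
    public static void main(String[] args) {
        // Temperatures from the sample data, pretending they arrived in two input splits
        List<Integer> split1 = Arrays.asList(0, 22, -11, 111, 78);
        List<Integer> split2 = Arrays.asList(1, -2, 1, 2, 78);

        // Path 1: a single reducer sees every value
        List<Integer> all = new ArrayList<>(split1);
        all.addAll(split2);
        int withoutCombiner = max(all);

        // Path 2: a combiner reduces each split first, then the reducer reduces the combiner outputs
        int withCombiner = max(Arrays.asList(max(split1), max(split2)));

        System.out.println(withoutCombiner + " == " + withCombiner);  // prints: 111 == 111
    }

    // Same logic as the reduce function: keep the running maximum
    private static int max(List<Integer> values) {
        int result = Integer.MIN_VALUE;
        for (int v : values) {
            result = Math.max(result, v);
        }
        return result;
    }
}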

Execution Result:
