Hadoop Reading Notes series: http://blog.csdn.net/caicongyang/article/category/2166855
1. Description:
Find the maximum value in a given file of numbers (one per line).
2. Code:
TopApp.java
package suanfa;

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/**
 * <p>
 * Title: TopApp.java
 * Package: suanfa
 * </p>
 * <p>
 * Description: find the maximum value among 10,000,000 numbers
 * </p>
 * @author tom.cai
 * @created 2014-12-2 10:28:33 PM
 * @version V1.0
 */
public class TopApp {
	private static final String INPUT_PATH = "hdfs://192.168.80.100:9000/top_input";
	private static final String OUT_PATH = "hdfs://192.168.80.100:9000/top_out";

	public static void main(String[] args) throws Exception {
		Configuration conf = new Configuration();
		final FileSystem fileSystem = FileSystem.get(new URI(INPUT_PATH), conf);
		// Delete the output directory if it already exists, so the job can rerun.
		final Path outPath = new Path(OUT_PATH);
		if (fileSystem.exists(outPath)) {
			fileSystem.delete(outPath, true);
		}

		final Job job = new Job(conf, TopApp.class.getSimpleName());
		FileInputFormat.setInputPaths(job, INPUT_PATH);
		job.setMapperClass(MyMapper.class);
		job.setReducerClass(MyReducer.class);
		job.setOutputKeyClass(LongWritable.class);
		job.setOutputValueClass(NullWritable.class);
		FileOutputFormat.setOutputPath(job, outPath);
		job.waitForCompletion(true);
	}

	static class MyMapper extends Mapper<LongWritable, Text, LongWritable, NullWritable> {
		// Start below any possible input so the first value becomes the max.
		long max = Long.MIN_VALUE;

		@Override
		protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
			long temp = Long.parseLong(value.toString());
			if (temp > max) {
				max = temp;
			}
		}

		// Emit only the local maximum once, after all records of the split are mapped.
		@Override
		protected void cleanup(Context context) throws IOException, InterruptedException {
			context.write(new LongWritable(max), NullWritable.get());
		}
	}

	static class MyReducer extends Reducer<LongWritable, NullWritable, LongWritable, NullWritable> {
		long max = Long.MIN_VALUE;

		@Override
		protected void reduce(LongWritable key, Iterable<NullWritable> values, Context context) throws IOException, InterruptedException {
			// Each key is one mapper's local maximum; keep the largest.
			long temp = key.get();
			if (temp > max) {
				max = temp;
			}
		}

		// Write the single global maximum after all local maxima are reduced.
		@Override
		protected void cleanup(Context context) throws IOException, InterruptedException {
			context.write(new LongWritable(max), NullWritable.get());
		}
	}
}
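To see why this works, note the two-stage logic: each mapper keeps only its running maximum and emits it once in cleanup(), and the reducer then takes the maximum of those per-mapper maxima. A small plain-Java simulation of that logic (the class and method names here are illustrative, not part of the original job):

```java
import java.util.Arrays;
import java.util.List;

public class TopLocalSim {
	// Simulate one mapper: track a running max over its split,
	// as if emitting it once in cleanup().
	static long mapperMax(List<Long> split) {
		long max = Long.MIN_VALUE;
		for (long v : split) {
			if (v > max) {
				max = v;
			}
		}
		return max;
	}

	// Simulate the reducer: the global max is just the max of the local maxima.
	static long reducerMax(List<Long> mapperOutputs) {
		return mapperMax(mapperOutputs);
	}

	public static void main(String[] args) {
		List<Long> split1 = Arrays.asList(3L, 9999999L, 42L);
		List<Long> split2 = Arrays.asList(100L, 7L);
		long global = reducerMax(Arrays.asList(mapperMax(split1), mapperMax(split2)));
		System.out.println(global); // prints 9999999
	}
}
```

Because each mapper sends a single value instead of its whole split, the reducer only ever sees as many records as there are map tasks, which is what makes this pattern cheap even for 10,000,000 input numbers.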
Everyone is welcome to discuss and study together!
If you find this useful, feel free to bookmark it!
Record and share, so that you and I grow together! You are welcome to visit my other blog posts at: http://blog.csdn.net/caicongyang
Hadoop Reading Notes (13): the Top algorithm in MapReduce