Turn a Hadoop program into a jar package and run it from the Linux command line (using a word-count program as the example)

Custom Mapper

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

/**
 * Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT> (the four generic parameters):
 * KEYIN    i.e. k1: the starting position of each line (its byte offset).
 * VALUEIN  i.e. v1: the text content of each line.
 * KEYOUT   i.e. k2: each word of each line (the same word can appear
 *                   several times; nothing is grouped yet at this stage).
 * VALUEOUT i.e. v2: the number of occurrences of each word within each
 *                   line, a fixed value of 1 here.
 *
 * 1.1 Read from the file:
 *         hello java
 *         hello hadoop
 *     and convert it into the form <0, hello java>, <11, hello hadoop>.
 *     0 and 11 are the offsets at which each line starts; every character,
 *     including spaces and line breaks, advances the offset by one.
 * 1.2 Convert <0, hello java>, <11, hello hadoop> into
 *     <hello, 1>, <java, 1>, <hello, 1>, <hadoop, 1>.
 */
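The article's listing stops at the comment and omits the class body. Below is a minimal sketch consistent with the Javadoc above and with the MyMapper name the driver registers via job.setMapperClass; splitting each line on single spaces is an assumption about the tokenization (imports as listed above):

public class MyMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // key is the byte offset of the line, value is the line's text.
        // Emit <word, 1> for every word in the line; splitting on a single
        // space is an assumed tokenization.
        for (String word : value.toString().split(" ")) {
            context.write(new Text(word), new LongWritable(1L));
        }
    }
}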


Custom Reducer

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

/**
 * Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT> (the four generic parameters):
 * KEYIN    i.e. k2: each word of each line (not yet grouped at this stage).
 * VALUEIN  i.e. v2: the number of occurrences of each word within each
 *                   line, a fixed value of 1 here.
 * KEYOUT   i.e. k3: the distinct words of the whole file (grouped here).
 * VALUEOUT i.e. v3: the total number of occurrences of each distinct word
 *                   in the whole file.
 *
 * File content:
 *     hello java
 *     hello hadoop
 * The parameters received by the reduce method are the grouped results of
 * the map output: <hello, {1, 1}>, <java, {1}>, <hadoop, {1}>.
 */

Program Driver

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountTest {

    static String INPUT_PATH = "";
    static String OUT_PATH = "";

    public static void main(String[] args) throws Exception {
        INPUT_PATH = args[0];
        OUT_PATH = args[1];

        Configuration conf = new Configuration();

        // Delete the output directory if it already exists.
        FileSystem fileSystem = FileSystem.get(new URI(INPUT_PATH), conf);
        if (fileSystem.exists(new Path(OUT_PATH))) {
            // The second argument true deletes a folder recursively;
            // false deletes a single file.
            fileSystem.delete(new Path(OUT_PATH), true);
        }

        // The second parameter, the job name, is optional.
        Job job = new Job(conf, WordCountTest.class.getSimpleName());

        // ---------- Required for packaging and running as a jar ----------
        job.setJarByClass(WordCountTest.class);

        // 1.1 Specify the input file directory.
        FileInputFormat.setInputPaths(job, INPUT_PATH);
        // 1.2 Specify the custom mapper class.
        job.setMapperClass(MyMapper.class);
        // 1.3 Partitioning, 1.4 sorting and grouping, and 1.5 (optional)
        // all use the defaults here.
        // 2.1 Node allocation is handled by the framework; nothing to do.
        // 2.2 Specify the custom reducer class.
        job.setReducerClass(MyReducer.class);

        // Specify the key and value types of the reducer output. These two
        // lines cannot be omitted: without them the mapper's output types
        // cannot be determined.
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        // 2.3 Specify the output path.
        FileOutputFormat.setOutputPath(job, new Path(OUT_PATH));

        // Submit the job to the JobTracker and wait for it to finish.
        job.waitForCompletion(true);
    }
}
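A side note on the Job constructor: new Job(conf, name) is deprecated on Hadoop 2.x and later. If you build against a newer release, the equivalent factory call is:

    Job job = Job.getInstance(conf, WordCountTest.class.getSimpleName());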

1. In the Eclipse project, select the entry class of the program to be packaged, right-click, and choose Export

2. Under the Java folder, click the JAR file option


3. Select the Java files to be packed into the jar and the output directory for the jar


4. Click Next


5. Select the program's entry point, then click Finish


6. Copy the jar to the Linux environment, and enter the following command on the Linux command line

hadoop jar jar.jar hdfs://hadoop:9000/hello hdfs://hadoop:9000/testout

The first path is the input to read (the file whose words are to be counted); the second path is the output location (where the word-count result is written).
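If the job completes successfully, the result is written as a part file under the output directory. Assuming the default output file name part-r-00000 (the standard name for the first reducer's output), it can be inspected with:

    hadoop fs -cat hdfs://hadoop:9000/testout/part-r-00000

For the two sample lines used above, this prints each word with its total count, tab-separated and sorted by key:

    hadoop	1
    hello	2
    java	1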
