Configure your own Maven repository in Eclipse
1. Install Maven (for managing the local repository and jar packages)
-1. Unzip the Maven installation package
-2. Add Maven to the environment variables in /etc/profile
-3. Copy the conf/settings.xml file from the Maven directory into the ~/.m2 folder
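The three steps above can be sketched as shell commands; the archive name and install prefix here are assumptions, so adjust them to the Maven version you downloaded:

```shell
# Unzip the Maven installation package (assumed archive name and prefix)
tar -xzf apache-maven-3.6.3-bin.tar.gz -C /usr/local

# Add Maven to the environment variables in /etc/profile
cat >> /etc/profile <<'EOF'
export MAVEN_HOME=/usr/local/apache-maven-3.6.3
export PATH=$MAVEN_HOME/bin:$PATH
EOF
source /etc/profile

# Copy the default settings.xml into the local ~/.m2 folder
mkdir -p ~/.m2
cp $MAVEN_HOME/conf/settings.xml ~/.m2/

mvn -v   # should print the Maven version if the setup worked
```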
2. Install Eclipse
-1. Unzip the Eclipse installation file
-2. Execute the eclipse-inst file
-3. Follow the installer's steps
3. Configure your Maven repository in Eclipse
1. Window >> Preferences >> Maven >> Installations (add the Maven directory to use, from step 1.1)
Add >> select the path from step 1.1
2. Window >> Preferences >> Maven >> User Settings (select the configuration file for the local repository, from step 1.3)
User Settings >> select the file from step 1.3
4. Create a new Maven project
-new >> Maven Project >> Create a simple project >> next >> next >> Group Id: reversed domain name >> Artifact Id: project name >> finish
-Modify the pom.xml file
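The pom.xml modification mainly adds the Hadoop dependency so the Configuration/Tool/MapReduce classes used below resolve; a minimal sketch (the version shown matches the hadoop-mapreduce-examples-2.2.0.jar used later, but it should match your cluster):

```xml
<dependencies>
  <!-- Provides org.apache.hadoop.conf.*, mapreduce.*, util.Tool, etc.
       Keep the version in sync with your Hadoop cluster. -->
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.2.0</version>
  </dependency>
</dependencies>
```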
Write a small program to test
Create a new ConfTest class in the hadoop_test package under src/main/java
package hadoop_test;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class ConfTest extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // Load the Hadoop configuration this run was started with
        Configuration conf = getConf();
        return 0;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("hello world!!!");
        int status = ToolRunner.run(new ConfTest(), args);
        System.exit(status);
    }
}
Package: in the terminal, change into the project folder that contains pom.xml and execute mvn clean install. In the target folder you can find a jar package (hadoop_test-0.0.1-SNAPSHOT.jar). If the instruction hadoop jar hadoop_test-0.0.1-SNAPSHOT.jar hadoop_test.ConfTest prints "hello world!!!", the setup is basically a success. You can also test the WordCount example that ships with Hadoop, specifically via ./bin/hadoop jar $HADOOP_PREFIX/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount input output
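The packaging and smoke test described above, written out as concrete commands (the project path is an assumption; the jar name follows the Artifact Id and version of the project):

```shell
cd /path/to/hadoop_test   # the folder that contains pom.xml
mvn clean install         # builds target/hadoop_test-0.0.1-SNAPSHOT.jar

# Run the jar through Hadoop; it should print "hello world!!!"
hadoop jar target/hadoop_test-0.0.1-SNAPSHOT.jar hadoop_test.ConfTest
```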
Finally, write a program that reads files on HDFS, runs MapReduce over them, and writes the results back to HDFS
Class:

package hadoop_test;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCount extends Configured implements Tool {

    static class WordCountMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {

        private Text word = new Text();
        private IntWritable one = new IntWritable(1);

        /**
         * key:     offset of the current line
         * value:   the current line
         * context: execution context of the map method
         */
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer words = new StringTokenizer(value.toString(), " ");
            while (words.hasMoreTokens()) {
                word.set(words.nextToken());
                context.write(word, one);
            }
        }
    }

    static class WordCountReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable counter = new IntWritable();

        /**
         * key:     the word to count
         * values:  all count markers emitted for this word
         * context: execution context of the reduce method
         */
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int count = 0;
            for (IntWritable one : values) {
                count += one.get();
            }
            counter.set(count);
            context.write(key, counter);
        }
    }

    @Override
    public int run(String[] args) throws Exception {
        // Get the configuration the program was started with
        Configuration conf = getConf();
        String inputPath = conf.get("input");
        String outputPath = conf.get("output");

        // Build a new job
        Job job = Job.getInstance(conf, "Word Frequency Count");
        job.setJarByClass(WordCount.class);

        // Set the mapper class and the key/value types of the map output
        job.setMapperClass(WordCountMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        // Set the reducer class and the key/value types of the reduce output
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // Read the input as text files and write the result as text files
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        // Set the input and output directories
        TextInputFormat.addInputPath(job, new Path(inputPath));
        TextOutputFormat.setOutputPath(job, new Path(outputPath));

        // Submit the job to the cluster and wait for completion
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        int status = ToolRunner.run(new WordCount(), args);
        System.exit(status);
    }
}
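The per-line counting done by the map and reduce methods uses only plain Java; a standalone sketch of that logic (no Hadoop needed, class name is hypothetical) makes it easy to see what one line of input turns into:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringTokenizer;

public class TokenizeDemo {
    // Mirrors map (split on spaces, emit each word) plus
    // reduce (sum the ones per word) for a single line of text.
    static Map<String, Integer> count(String line) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        StringTokenizer words = new StringTokenizer(line, " ");
        while (words.hasMoreTokens()) {
            counts.merge(words.nextToken(), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(count("hello world hello hadoop"));
        // prints {hello=2, world=1, hadoop=1}
    }
}
```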
Execute the instruction hadoop jar hadoop_test-0.0.1-SNAPSHOT.jar hadoop_test.WordCount -Dinput=hdfs:/usr/hadoop/maven* -Doutput=hdfs:/usr/hadoop/maven1 (pay attention to the file paths and the /usr/local part at this point)
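Before running the job the input files have to exist on HDFS, and afterwards the result can be read back; a sketch with assumed file names (the HDFS paths follow the command above):

```shell
# Put some text files onto HDFS as job input (file names are assumptions)
hdfs dfs -mkdir -p /usr/hadoop
hdfs dfs -put maven-notes*.txt /usr/hadoop/

# Run the job, then inspect the word counts written back to HDFS
hadoop jar hadoop_test-0.0.1-SNAPSHOT.jar hadoop_test.WordCount \
    -Dinput=hdfs:/usr/hadoop/maven* -Doutput=hdfs:/usr/hadoop/maven1
hdfs dfs -cat /usr/hadoop/maven1/part-r-00000
```

Note that the output directory must not exist before the job runs, or Hadoop will refuse to start it.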
Well, at this point our environment is basically built successfully; some of the details from these past days will be added gradually.
Reference address (Maven configuration section): https://www.cnblogs.com/cenzhongman/p/7093672.html (will be removed if it infringes)