Java Programmer's Big Data Path (3): Using Maven to Build a Hadoop Project


Background

Since Hadoop projects tend to be fairly large, we use a build tool to manage them; here we use Maven. You could, of course, use another popular build tool such as Gradle instead.

Below is a summary of the process I used to develop the Maven project in IntelliJ IDEA.

Create a Maven Project

First, create a new Maven project.

You do not need to tick "Create from archetype". Fill in the GroupId and ArtifactId for your project, then fill in the project name and project location.
After clicking Finish, the Maven project is created.

Configure pom.xml

Next, add the dependencies. I'm using Hadoop 2.7.4, the latest stable version at the time of writing.
You need to add the following dependencies: hadoop-common, hadoop-hdfs, hadoop-mapreduce-client-core, hadoop-mapreduce-client-jobclient, and log4j.
The modified pom.xml looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>mavenHadoop</groupId>
    <artifactId>mavenHadoop</artifactId>
    <version>1.0-SNAPSHOT</version>

    <repositories>
        <repository>
            <id>apache</id>
            <url>http://maven.apache.org</url>
        </repository>
    </repositories>

    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.7.4</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.7.4</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-core</artifactId>
            <version>2.7.4</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
            <version>2.7.4</version>
        </dependency>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.17</version>
        </dependency>
    </dependencies>
</project>
Configure log4j

Configure log4j to print logs, which makes debugging easier. In a Maven project the log4j.properties file usually goes under src/main/resources:

log4j.rootLogger = DEBUG,stdout

### output messages to the console ###
log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target = System.out
log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern = [%-5p] %d{yyyy-MM-dd HH:mm:ss,SSS} method:%l%n%m%n

Once log4j is configured, make sure Hadoop is running; if it is not, start it first.

Configure Run/Debug Configurations

After starting Hadoop, configure the run parameters: select the class that contains the main function, and fill in the input file and the output path as program arguments. (For details, see the article: Java Programmer's Big Data Path (2): Create the first Hadoop program.)
You can now run WordCount locally. Before running, make sure the output path does not exist. To avoid having to delete the output path by hand every time, you can do it in code:

import java.io.File;

public class FileUtil {
    public static boolean deleteDir(String path) {
        File dir = new File(path);
        if (dir.exists()) {
            for (File f : dir.listFiles()) {
                if (f.isDirectory()) {
                    // recurse with the full path, not just the file name
                    deleteDir(f.getPath());
                } else {
                    f.delete();
                }
            }
            dir.delete();
            return true;
        } else {
            System.out.println("File does not exist");
            return false;
        }
    }
}
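As a self-contained illustration, the sketch below implements the same recursive delete with java.nio.file (which also handles nested directories) and exercises it on a simulated output directory; the class name and the part-r-00000 file name are mine, chosen to mimic a typical MapReduce output.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class DeleteOutputDemo {
    // Recursively delete a directory tree; deepest entries first,
    // so each directory is empty by the time it is deleted.
    static boolean deleteDir(String path) throws IOException {
        Path dir = Path.of(path);
        if (!Files.exists(dir)) {
            return false;
        }
        try (Stream<Path> walk = Files.walk(dir)) {
            walk.sorted(Comparator.reverseOrder()) // children before parents
                .forEach(p -> p.toFile().delete());
        }
        return true;
    }

    public static void main(String[] args) throws IOException {
        // Simulate a stale WordCount output directory.
        Path out = Files.createTempDirectory("wordcount-output");
        Files.createFile(out.resolve("part-r-00000"));
        System.out.println(deleteDir(out.toString())); // prints "true"
        System.out.println(Files.exists(out));         // prints "false"
    }
}
```

Calling such a helper on the output path at the top of main keeps repeated local runs from failing with "output directory already exists".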

Read HDFS Files

If the input path in the previous step is configured as an HDFS path, the files in HDFS are read directly, but the FileUtil class above can no longer delete the output directory. I will share how to invoke HDFS programmatically in a later article.

Reference Articles
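For reference, here is a minimal sketch of deleting an output path that may live on HDFS, using Hadoop's FileSystem API. This is only an assumption-laden preview, not the full treatment promised above: it relies on the hadoop-common dependency from the pom.xml, and the NameNode URI and output path are illustrative.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsCleanup {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Illustrative path; resolves to HDFS or the local file system
        // depending on the URI scheme.
        Path out = new Path("hdfs://localhost:9000/user/demo/output");
        FileSystem fs = out.getFileSystem(conf);
        if (fs.exists(out)) {
            fs.delete(out, true); // true = delete recursively
        }
    }
}
```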

http://www.cnblogs.com/licheng/p/6833342.html
