Spark + Eclipse + Java + Maven Windows development environment setup and getting-started example (with detailed code)

Source: Internet
Author: User
Tags: knowledge base

http://blog.csdn.net/xiefu5hh/article/details/51707529

Tags: spark, eclipse, java, maven, windows. Published 2016-06-18. Category: Spark.

Copyright notice: this is the blogger's original article; do not reproduce without the blogger's permission.


Preface

This article documents a Spark beginner's first steps: taking the Java example from the quick start on the official website, building it as a Maven application, and running it. As a starting point I recommend a good documentation library, the CSDN Spark knowledge base, which collects all sorts of Spark material from getting started to mastery: http://lib.csdn.net/base/spark

Congratulations, you have passed the Spark written test; now you can start the practical driving lessons.

Roughly speaking, Spark is divided into:
1. Spark Core
2. Spark Streaming (real-time stream processing)
3. Spark SQL (SQL support)
4. MLlib (machine learning)
plus a few other pieces we can set aside for now. This article only introduces a Spark Core example; the other parts are left for you to explore.

Environment preparation:

A Windows computer (hardly rare; otherwise how would we play?). The following software also needs to be installed. You can choose your own versions; these are mine:
1) JDK, version 1.7
2) Maven, version 3.2.3: http://maven.apache.org/
3) Spark, version spark-1.1.0: http://spark.apache.org/
4) Eclipse
Note that the bitness (32-bit or 64-bit) of the installed software should match the bitness of your operating system.
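For reference, the environment-variable configuration covered in the next section can also be scripted from a cmd prompt. The install paths below are hypothetical; adjust them to wherever you actually unzipped each tool:

```bat
:: Hypothetical install locations; substitute your own paths.
setx JAVA_HOME "C:\Program Files\Java\jdk1.7.0_79"
setx MAVEN_HOME "D:\spark_tools\apache-maven-3.2.3"
setx SPARK_HOME "D:\spark_tools\spark-1.1.0-bin-hadoop1"
:: HADOOP_HOME points at the Spark directory so winutils.exe in its bin is found.
setx HADOOP_HOME "D:\spark_tools\spark-1.1.0-bin-hadoop1"
:: Append each bin directory to PATH (expanded once, at set time).
setx PATH "%PATH%;%JAVA_HOME%\bin;%MAVEN_HOME%\bin;%SPARK_HOME%\bin"
```

Open a new cmd window afterwards so the variables take effect.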
1. JDK installation: download and install; the specific steps are simple enough to omit.
2. Maven installation: download and unzip; the specific steps are simple enough to omit.
3. Spark installation: download and unzip; the specific steps are simple enough to omit.
4. Eclipse installation: download and unzip. These steps are not so simple and cannot be omitted, because Eclipse needs Maven support: download a Maven-enabled Eclipse (the Luna release of Eclipse ships with Maven integration). Also download winutils and place it in the bin directory of Spark.

Configure environment variables: you need JAVA_HOME, HADOOP_HOME (pointed at the Spark directory so winutils can be found), SPARK_HOME, and MAVEN_HOME, and you must add each of their bin directories to PATH. Then test in cmd: mvn -v, java -version, and spark-shell should all run; if they do, the installation succeeded.

Project construction

The goal: count how many lines in a file contain a given character. Create a Maven project and modify pom.xml as follows:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.dt.spark</groupId>
<artifactId>SparkApps</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>jar</packaging>
<name>SparkApps</name>
<url>http://maven.apache.org</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>3.8.1</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.10</artifactId>
<version>1.6.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.10</artifactId>
<version>1.6.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-hive_2.10</artifactId>
<version>1.6.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming_2.10</artifactId>
<version>1.6.0</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>2.6.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming-kafka_2.10</artifactId>
<version>1.6.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-graphx_2.10</artifactId>
<version>1.6.0</version>
</dependency>
<dependency>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
<version>2.2-beta-5</version>
</dependency>
<dependency>
<groupId>commons-lang</groupId>
<artifactId>commons-lang</artifactId>
<version>2.3</version>
</dependency>
</dependencies>
<build>
<sourceDirectory>src/main/java</sourceDirectory>
<testSourceDirectory>src/test/java</testSourceDirectory>
<plugins>
<plugin>
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
<archive>
<manifest>
<mainClass></mainClass>
</manifest>
</archive>
</configuration>
<executions>
<execution>
<id>make-assembly</id>
<phase>package</phase>
<goals>
<goal>single</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<version>1.3.1</version>
<executions>
<execution>
<goals>
<goal>exec</goal>
</goals>
</execution>
</executions>
<configuration>
<executable>java</executable>
<includeProjectDependencies>false</includeProjectDependencies>
<classpathScope>compile</classpathScope>
<mainClass>com.dt.spark.SparkApps.App</mainClass>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>


<configuration>
<source>1.6</source>
<target>1.6</target>
</configuration>
</plugin>
</plugins>
</build>
</project>

After this modification, Maven needs to download quite a few artifacts to build the workspace; while it runs you can go eat a meal and jog a couple of laps. Then write the code:
package com.dt.spark.SparkApps;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;

public class App {
    public static void main(String[] args) {
        String readme = "d:\\spark\\CHANGES.txt";
        SparkConf conf = new SparkConf().setAppName("Tiger's first spark app");
        JavaSparkContext sc = new JavaSparkContext(conf);
        JavaRDD<String> logData = sc.textFile(readme).cache();
        // count the lines that contain the letter "a"
        long num = logData.filter(new Function<String, Boolean>() {
            public Boolean call(String s) {
                return s.contains("a");
            }
        }).count();
        System.out.println("The count of word a is " + num);
    }
}
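For intuition about what the Spark job computes, here is a plain-Java sketch of the same filter-and-count logic using Java 8 streams (no Spark needed; the article itself targets JDK 1.7, so this is for illustration only, and the sample lines are made up):

```java
import java.util.Arrays;
import java.util.List;

public class LineCount {
    // Count the lines that contain the given substring -- the same predicate
    // the Spark job applies with filter(...).count(), just without the cluster.
    public static long countLinesContaining(List<String> lines, String needle) {
        return lines.stream().filter(s -> s.contains(needle)).count();
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList("apache", "spark", "xyz");
        // "apache" and "spark" contain 'a'; "xyz" does not
        System.out.println(countLinesContaining(lines, "a")); // prints 2
    }
}
```

Spark distributes exactly this kind of per-line predicate across partitions of the RDD; the semantics are the same.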

Run the program

Compile the application: enter the project directory in the workspace and run mvn compile to compile the code.

Package: run mvn package; Maven automatically packages the application code into a jar. After a successful run, a jar named simapp-1.0-SNAPSHOT.jar is generated under the target folder of the project directory.

Run the program: use spark-submit to run the application locally. (Running it in Eclipse with the following parameters always errors for me; I am still investigating.) In cmd, enter the path of the spark-submit script in the Spark package's bin directory, the class name (including the package name), and the path of the jar package:

d:\spark_tools\apache-maven-3.2.3\maven_project\simapp> d:\spark_tools\spark-1.1.0-bin-hadoop1\bin\spark-submit --class "AA" --master local[4] bb

Here AA is the program's main class path, com.xxxxxx.xxxx, and bb is the path to your packaged jar under target, as configured by pom.xml.

In the last line of the output you will see the program's result:

the Count of Word a is 20132

Congratulations, you have earned your Spark driver's license and can get on the road. Be careful out there and keep practicing. If you have questions, leave a comment; discussion is welcome and I will reply actively!
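As for the errors when running inside Eclipse: one common workaround (my suggestion, not from the original article) is to set the master in code, so the job runs in-process without spark-submit. SparkConf.setMaster is a standard Spark API; this fragment mirrors the example above and still requires the Spark dependencies from pom.xml on the classpath:

```java
// Run locally inside the IDE: local[4] uses 4 worker threads in-process.
SparkConf conf = new SparkConf()
        .setAppName("Tiger's first spark app")
        .setMaster("local[4]");
JavaSparkContext sc = new JavaSparkContext(conf);
```

With the master set in code you can launch the class directly as a Java application in Eclipse; remove setMaster again before submitting to a real cluster with spark-submit.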

