Building a Spark Development Environment with IntelliJ IDEA

How should you learn Spark? Starting from zero, first go to the official website to get a quick sense of what Spark does. Next, set up a development environment and start with WordCount. Once past that first step, you can move on to other algorithms. Finally, don't give up; keep studying further.

So the first problem to solve is how to set up the development environment.

1. Make sure the JDK is installed on your machine and that the JAVA_HOME environment variable is configured (a quick way to verify this is sketched below).
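The following Scala snippet is a minimal sanity check, not part of the original steps: it prints the JDK version the JVM is running on and the JAVA_HOME value visible to the process, so you can confirm step 1 before continuing.

object EnvCheck {
  def main(args: Array[String]): Unit = {
    // Version of the JVM that is running this program
    println("java.version = " + sys.props("java.version"))
    // JAVA_HOME as seen in the process environment, if it is set
    println("JAVA_HOME    = " + sys.env.getOrElse("JAVA_HOME", "<not set>"))
  }
}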

2. Install IntelliJ IDEA (download it from the JetBrains website). Version 15.0 has good Scala support.

3. Install the Scala plugin. IntelliJ prompts you to install plugins on first launch; if you missed the prompt, it does not matter: open Settings, find Plugins, search for "scala", and install it.

4. Set up the Spark development environment.

4.1 Download the Spark jar package from the Spark downloads page. For example, I downloaded the 1.5.0 release of Spark pre-built for Hadoop 2.4.

4.2 Unzip the downloaded package; we need spark-assembly-1.5.0-hadoop2.4.0.jar from its lib directory.

4.3 Create a new Scala project: File -> New Project -> Scala -> Next, fill in the project name and SDK -> Finish.

4.4 In the project, open File -> Project Structure -> Libraries, click "+", select Java, locate spark-assembly-1.5.0-hadoop2.4.0.jar, and import it. You can now write Spark programs in Scala.

4.5 Most of the time we need Maven or SBT to manage dependencies; here I use Maven. IntelliJ 15.0 also supports Maven well; simply configure the Maven repository address (for reference, an SBT equivalent is sketched below).
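If you would rather use SBT, the other option mentioned above, a minimal build.sbt along the following lines declares the same Spark dependency; the coordinates are the standard ones for Spark core 1.5.0 built against Scala 2.10, and the project name is only an assumption for illustration.

// build.sbt -- a minimal sketch for an SBT-managed Spark project
name := "spark-test"

version := "0.1"

// Spark 1.5.0 is built against Scala 2.10 by default
scalaVersion := "2.10.4"

// "provided" keeps Spark out of the packaged jar, since the cluster already ships it (see step 6.1)
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.5.0" % "provided"

Note that a dependency marked "provided" is not on the runtime classpath when you run from the IDE, so you may want to drop that qualifier while developing locally and add it back when packaging for the cluster.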

5. Happy coding.
Insert the following code:

package main.scala

import org.apache.spark.{SparkConf, SparkContext}

object SimpleApp {
  def main(args: Array[String]) {
    val logFile = "d:/ideaprojects/spark-test/README.md" // Should be some file on your system
    val conf = new SparkConf().setAppName("Simple Application").setMaster("local")
    val sc = new SparkContext(conf)
    val logData = sc.textFile(logFile, 2).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
  }
}

6. Package the jar and run it on the cluster.

6.1 If the pom.xml file contains Hadoop or Spark dependencies, comment them out before packaging. The cluster already provides those jars, so leaving them out keeps the package small and avoids jar version conflicts.

6.2 In IntelliJ, click File -> Project Structure -> Artifacts -> "+" -> JAR -> "From modules with dependencies...", fill in the module, main class, output path, and so on, then click OK to create the jar configuration.

6.3 In IntelliJ, click Build -> Build Artifacts..., select the jar you just configured, and build it.

6.4 Upload the packaged jar to a path on the server.

6.5 Run the submit command:

spark-submit --class <main-class> --master <master-url> <path-to-your-jar> [application arguments]
