Open the project in IDEA, right-click on src/main/scala, and create a Scala class named SimpleApp with the following content:
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

object SimpleApp {
  def main(args: Array[String]) {
    // points at the README.md uploaded to HDFS in the steps below; adjust the URI to your HDFS path
    val logFile = "hdfs:///README.md"
    val conf = new SparkConf().setAppName("Simple Application")
    val sc = new SparkContext(conf)
    val logData = sc.textFile(logFile, 2).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
  }
}
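For the code to compile locally, the Spark library must be on the project classpath. The post drives the build through IDEA, so the following build.sbt is only a sketch of the equivalent sbt setup (the project name, version, and Scala version are assumptions; the Spark version comes from the cluster path used below):

name := "study-scala"
version := "1.0"
scalaVersion := "2.10.4"
// "provided" keeps the Spark jars out of the packaged artifact,
// matching the note below about excluding third-party jars
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.2.0" % "provided"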
Packaging files:
File --> Project Structure --> click Artifacts --> click the green plus --> JAR --> From modules with dependencies
Check the Output Layout and make sure no third-party jar packages are included: the job runs in the Spark cluster environment, so the Spark dependencies are already provided and do not need to be bundled.
Re-build:
Build --> Build Artifacts... --> Build (or Rebuild).
After the build finishes, you will find study-scala.jar in the D:\mygit\study-scala\out\artifacts\study_scala_jar directory.
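Optionally, before uploading, the artifact can be sanity-checked with the JDK's jar tool to confirm that SimpleApp.class was actually packaged:

jar tf D:\mygit\study-scala\out\artifacts\study_scala_jar\study-scala.jar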
Upload the jar to the myapp directory under SPARK_HOME on the Spark cluster server.
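One way to copy the jar up is scp, assuming an scp client is available on the Windows development machine; <cluster-host> is a placeholder for the actual server address:

scp D:\mygit\study-scala\out\artifacts\study_scala_jar\study-scala.jar spark@<cluster-host>:/home/spark/opt/spark-1.2.0-bin-hadoop2.4/myapp/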
Upload /home/spark/opt/spark-1.2.0-bin-hadoop2.4/README.md to HDFS.
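For example, with the Hadoop client on the cluster server (the HDFS destination path here is an assumption and must match the logFile URI used in SimpleApp):

hadoop fs -put /home/spark/opt/spark-1.2.0-bin-hadoop2.4/README.md /README.md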
To submit a spark task:
./bin/spark-submit --class "SimpleApp" --master local[4] myapp/study-scala.jar
The execution result is a: 60, b: 29.
Developing Spark code locally, uploading it to the Spark cluster, and running it there (based on the Spark website documentation).