Spark 1.1.1 Submitting applications


Submitting Applications

The spark-submit script in Spark's bin directory is used to launch applications on a cluster. It can use all of Spark's supported cluster managers through a uniform interface, so you don't have to configure your application specially for each one.

Bundling Your Application's Dependencies

If your code depends on other projects, you will need to package them alongside your application in order to distribute the code to a Spark cluster. To do this, create an assembly jar (or "uber" jar) containing your code and its dependencies. Both sbt and Maven have assembly plugins. When creating assembly jars, list Spark and Hadoop as provided dependencies; these need not be bundled since they are provided by the cluster manager at runtime. Once you have an assembled jar you can call the bin/spark-submit script as shown here while passing your jar.
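
As a rough sketch, assuming an sbt project with the sbt-assembly plugin already configured (the class name, master URL, and jar path below are placeholders, not from this page), building and submitting might look like:

# Build the assembly ("uber") jar; requires the sbt-assembly plugin to be set up.
sbt assembly

# Submit the assembled jar; class, master URL, and jar path are hypothetical.
./bin/spark-submit \
  --class com.example.MyApp \
  --master spark://207.184.161.138:7077 \
  target/scala-2.10/my-app-assembly-1.0.jar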

For Python, you can use the --py-files argument of spark-submit to add .py, .zip or .egg files to be distributed with your application. If you depend on multiple Python files, we recommend packaging them into a .zip or .egg.
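
For example, a minimal sketch (the script and archive names here are hypothetical):

# Distribute a zipped package and an extra module alongside the main script.
./bin/spark-submit \
  --master local[4] \
  --py-files deps.zip,extra_module.py \
  my_script.py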

Launching Applications with spark-submit

Once a user application is bundled, it can be launched using the bin/spark-submit script. This script takes care of setting up the classpath with Spark and its dependencies, and can support the different cluster managers and deploy modes that Spark supports:

./bin/spark-submit \
  --class <main-class> \
  --master <master-url> \
  --deploy-mode <deploy-mode> \
  --conf <key>=<value> \
  ... # other options
  <application-jar> \
  [application-arguments]

Some of the commonly used options are:

  • --class: The entry point for your application (e.g. org.apache.spark.examples.SparkPi)
  • --master: The master URL for the cluster (e.g. spark://23.195.26.187:7077)
  • --deploy-mode: Whether to deploy your driver on the worker nodes (cluster) or locally as an external client (client) (default: client) *
  • --conf: Arbitrary Spark configuration property in key=value format. For values that contain spaces, wrap "key=value" in quotes (as shown in the sketch after this list).
  • application-jar: Path to a bundled jar including your application and all dependencies. The URL must be globally visible inside of your cluster, for instance, an hdfs:// path or a file:// path that is present on all nodes.
  • application-arguments: Arguments passed to the main method of your main class, if any
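
As a minimal sketch of the quoting rule for --conf, using a real Spark property but otherwise placeholder values:

# Quote "key=value" when the value contains spaces (here, two JVM options).
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master local[4] \
  --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" \
  /path/to/examples.jar \
  100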

* A common deployment strategy is to submit your application from a gateway machine that is physically co-located with your worker machines (e.g. the master node in a standalone EC2 cluster). In this setup, client mode is appropriate. In client mode, the driver is launched directly within the client spark-submit process, with the input and output of the application attached to the console. Thus, this mode is especially suitable for applications that involve the REPL (e.g. the Spark shell).

Alternatively, if your application is submitted from a machine far from the worker machines (e.g. locally on your laptop), it is common to use cluster mode to minimize network latency between the drivers and the executors. Note that cluster mode is currently not supported for standalone clusters, Mesos clusters, or Python applications.

For Python applications, simply pass a .py file in the place of <application-jar> instead of a JAR, and add Python .zip, .egg or .py files to the search path with --py-files.

To enumerate all options available to spark-submit, run it with --help. Here are a few examples of common options:

# Run application locally on 8 cores
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master local[8] \
  /path/to/examples.jar \
  100

# Run on a Spark standalone cluster
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://207.184.161.138:7077 \
  --executor-memory 20G \
  --total-executor-cores 100 \
  /path/to/examples.jar \
  1000

# Run on a YARN cluster
export HADOOP_CONF_DIR=XXX
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn-cluster \  # can also be `yarn-client` for client mode
  --executor-memory 20G \
  --num-executors 50 \
  /path/to/examples.jar \
  1000

# Run a Python application on a cluster
./bin/spark-submit \
  --master spark://207.184.161.138:7077 \
  examples/src/main/python/pi.py \
  1000

Master URLs

The master URL passed to Spark can be in one of the following formats:

Master URL           Meaning
local                Run Spark locally with one worker thread (i.e. no parallelism at all).
local[K]             Run Spark locally with K worker threads (ideally, set this to the number of cores on your machine).
local[*]             Run Spark locally with as many worker threads as logical cores on your machine.
spark://HOST:PORT    Connect to the given Spark standalone cluster master. The port must be whichever one your master is configured to use, which is 7077 by default.
mesos://HOST:PORT    Connect to the given Mesos cluster. The port must be whichever one you have configured it to use, which is 5050 by default. Or, for a Mesos cluster using ZooKeeper, use mesos://zk://....
yarn-client          Connect to a YARN cluster in client mode. The cluster location is found based on the HADOOP_CONF_DIR variable.
yarn-cluster         Connect to a YARN cluster in cluster mode. The cluster location is found based on the HADOOP_CONF_DIR variable.

Loading Configuration from a File

The spark-submit script can load default Spark configuration values from a properties file and pass them on to your application. By default it will read options from conf/spark-defaults.conf in the Spark directory. For more detail, see the section on loading default configurations.

Loading default Spark configurations this way can obviate the need for certain flags to spark-submit. For instance, if the spark.master property is set, you can safely omit the --master flag from spark-submit. In general, configuration values explicitly set on a SparkConf take the highest precedence, then flags passed to spark-submit, then values in the defaults file.
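
As a sketch, a hypothetical conf/spark-defaults.conf might contain (the values below are placeholders, not from this page):

# conf/spark-defaults.conf -- example values
spark.master            spark://207.184.161.138:7077
spark.executor.memory   2g

With spark.master set as above, ./bin/spark-submit --class org.apache.spark.examples.SparkPi /path/to/examples.jar could then be run without an explicit --master flag.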

If you are ever unclear where configuration options are coming from, you can print out fine-grained debugging information by running spark-submit with the --verbose option.
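
For example (a sketch; the class and jar path are placeholders):

# --verbose prints fine-grained debugging output about the options in effect.
./bin/spark-submit --verbose \
  --class org.apache.spark.examples.SparkPi \
  --master local[2] \
  /path/to/examples.jar 10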

Advanced Dependency Management

When using spark-submit, the application jar along with any jars included with the --jars option will be automatically transferred to the cluster. Spark uses the following URL schemes to allow different strategies for disseminating jars (see the sketch after this list):

    • file: - Absolute paths and file:/ URIs are served by the driver's HTTP file server, and every executor pulls the file from the driver HTTP server.
    • hdfs:, http:, https:, ftp: - These pull down files and JARs from the URI as expected.
    • local: - A URI starting with local:/ is expected to exist as a local file on each worker node. This means that no network IO will be incurred, and it works well for large files/jars that are pushed to each worker, or shared via NFS, GlusterFS, etc.
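
For instance, a minimal sketch that mixes schemes (all jar names and paths below are hypothetical):

# dep1.jar is served from the driver's HTTP file server, dep2.jar is pulled from HDFS,
# and dep3.jar is expected to already exist at that path on every worker node.
./bin/spark-submit \
  --class com.example.MyApp \
  --master spark://207.184.161.138:7077 \
  --jars file:/opt/libs/dep1.jar,hdfs://namenode:8020/libs/dep2.jar,local:/opt/libs/dep3.jar \
  /path/to/app.jar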

Note that JARs and files are copied to the working directory of each SparkContext on the executor nodes. This can use up a significant amount of space over time and will need to be cleaned up. With YARN, cleanup is handled automatically, and with Spark standalone, automatic cleanup can be configured with the spark.worker.cleanup.appDataTtl property.
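
On a standalone cluster this is typically set on each worker, for example via SPARK_WORKER_OPTS in conf/spark-env.sh (a sketch; the spark.worker.cleanup.enabled flag and the TTL value shown are assumptions, not from this page):

# conf/spark-env.sh on each standalone worker (sketch)
SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true -Dspark.worker.cleanup.appDataTtl=604800"  # 7 days, in seconds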

For Python, the equivalent --py-files option can be used to distribute .egg, .zip and .py libraries to executors.

More Information

Once you have deployed your application, the cluster mode overview describes the components involved in distributed execution, and how to monitor and debug applications.
