Spark Master High Availability (HA) Deployment

Source: Internet
Author: User

For highly available (HA) deployment, Spark offers two options:

    • Single-node recovery with the local file system

Used primarily for development or test environments. You provide a directory in which Spark saves the registration information of Spark applications and workers, and it writes their recovery state to that directory. If the master fails, you can restart the master process (sbin/start-master.sh) to restore the registration information of the running Spark applications and workers.
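As a minimal sketch (the directory /data/spark/recovery is just an example path; it must exist and be writable on the master host), the restart-based recovery flow looks like this:

```shell
# Example only: /data/spark/recovery is an assumed path on the master host.
# In $SPARK_HOME/conf/spark-env.sh:
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=FILESYSTEM \
  -Dspark.deploy.recoveryDirectory=/data/spark/recovery"

# After the master process dies, restart it on the same host;
# it reads the saved state and re-registers applications and workers:
$SPARK_HOME/sbin/start-master.sh
```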

    • Standby masters with ZooKeeper

Used in production. The basic principle is that one master is elected leader through ZooKeeper while the other masters remain in standby.

By connecting the standalone cluster to the same ZooKeeper ensemble and launching multiple masters, you can have one master elected as leader while the others stay in standby, using the election and state-saving features provided by ZooKeeper. If the incumbent master dies, another master is elected, recovers the old master's state, and resumes scheduling. The entire recovery process can take 1-2 minutes.
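A minimal sketch of bringing up a ZooKeeper-backed standby pair; the master hosts n1 and n2 and the ZooKeeper ensemble addresses are assumptions for illustration:

```shell
# Assumed layout: masters on n1 and n2, ZooKeeper ensemble on n1:2181,n2:2181,n3:2181.
# Both masters share the same setting in $SPARK_HOME/conf/spark-env.sh:
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER \
  -Dspark.deploy.zookeeper.url=n1:2181,n2:2181,n3:2181"

# Start a master on each host; ZooKeeper elects one leader and
# the other stays in standby until the leader fails:
$SPARK_HOME/sbin/start-master.sh   # run on n1 and on n2

# Workers list every master so they can follow a failover:
$SPARK_HOME/sbin/start-slave.sh spark://n1:7077,n2:7077
```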

The following parameters configure the highly available deployments described above:

spark.deploy.recoveryMode sets which recovery mode to use (FILESYSTEM | ZOOKEEPER).

When using FILESYSTEM (single-point file system recovery), the following parameter is required:

spark.deploy.recoveryDirectory: the directory in which Spark saves recovery state.

When using ZOOKEEPER, the following parameters are required:

spark.deploy.zookeeper.url: the ZooKeeper connection string (e.g. 192.168.1.100:2181,192.168.1.101:2181).
spark.deploy.zookeeper.dir: the directory in ZooKeeper where recovery state is stored (default: /spark).

The simplest way to apply the configuration is to add the following to the $SPARK_HOME/conf/spark-env.sh file:

export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=FILESYSTEM -Dspark.deploy.recoveryDirectory=/data/spark/recovery"

or

export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=n1:2181,n2:2181,n3:2181 -Dspark.deploy.zookeeper.dir=/temp/spark"

Precautions when using the ZooKeeper method:

    • In ZooKeeper mode, new task submissions may fail with an error during a master switchover, but running tasks are unaffected.
    • When submitting a task, the specified master address must list all masters, in a format like: spark://n1:7077,n2:7077,n3:7077
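For example, a submission against the HA cluster might look like the following; the application class and jar path are placeholders:

```shell
# Placeholder class and jar path. The comma-separated list names every master;
# the client tries each address until it reaches the current leader.
spark-submit \
  --master spark://n1:7077,n2:7077,n3:7077 \
  --class org.example.MyApp \
  /path/to/myapp.jar
```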

References:

http://www.cnblogs.com/hseagle/p/3673147.html

https://spark.apache.org/docs/0.9.0/spark-standalone.html#standby-masters-with-zookeeper
