Reference: the official Spark documentation, section "High Availability".
There are two modes in total: one based on the file system, and one based on ZooKeeper.
1. File-system based: the master's state is synchronized to a directory on disk. When the master dies, another master can be started to read the directory's contents, so data for running Spark applications is not lost. As the documentation says, set the following parameter in spark-env.sh and restart:
SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=FILESYSTEM -Dspark.deploy.recoveryDirectory=/home/hadoop/apps/spark-1.3.0-bin-hadoop2.3/tmp"
(Test: enter the Spark shell, define a variable and assign it a value, kill the master process, then restart the master; the variable defined earlier can still be used. This test proves nothing, however: even if every master is dead, the variable remains usable, because it lives in the driver.)
Summary: 1. The master must be restarted manually. 2. As observed, the directory set by spark.deploy.recoveryDirectory exists only on the master node and is not replicated to other nodes; it is unclear whether it can be placed on HDFS, so there is a single-point risk.
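The manual-recovery behavior in point 1 can be exercised roughly as follows (a sketch; the recovery path is the one configured above, and $SPARK_HOME is assumed to point at the Spark install):

```shell
# Sketch of manual failover with FILESYSTEM recovery (assumes the config above).
RECOVERY_DIR=/home/hadoop/apps/spark-1.3.0-bin-hadoop2.3/tmp

# 1. While an application runs, the master serializes its state into this
#    directory (files named like app_* and worker_*):
#      ls "$RECOVERY_DIR"
# 2. Kill the master process. Nothing takes over automatically in this mode,
#    so restart it by hand:
#      "$SPARK_HOME/sbin/stop-master.sh"
#      "$SPARK_HOME/sbin/start-master.sh"
# 3. The restarted master reads the files back and re-registers the running
#    workers and applications.
echo "recovery dir: $RECOVERY_DIR"
```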
2. ZooKeeper based: the state is stored in ZooKeeper, and there are several standby masters. Per the documentation, set the following parameter in spark-env.sh and restart:
SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=hadoop1:2181,hadoop2:2181,hadoop3:2181 -Dspark.deploy.zookeeper.dir=/spark"
1. Master failover is automatic. 2. The masters must be listed in Spark's spark-defaults.conf:
spark.master spark://hadoop2:7077,hadoop3:7077
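One way to see what ZooKeeper is holding for Spark is to list the configured znode. A sketch, assuming the zkCli.sh tool from the ZooKeeper distribution and the URL/dir values configured above:

```shell
# ZooKeeper-mode values as shell variables (taken from the spark-env.sh above).
ZK_URL="hadoop1:2181,hadoop2:2181,hadoop3:2181"
ZK_DIR="/spark"

# With both masters up, inspect the persisted state with the ZooKeeper CLI:
#   zkCli.sh -server "$ZK_URL" ls "$ZK_DIR"
# Expected children: leader_election (the active-master election) and
# master_status (serialized application/driver/worker state).
echo "zookeeper: $ZK_URL$ZK_DIR"
```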
How do I verify that the HA configuration was successful?
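One verification approach is to query each master's status before and after killing the active one. A sketch, on the assumption that the standalone master's web UI exposes a /json endpoint on port 8080 whose payload includes a status field:

```shell
# Query a master's status via its web UI JSON endpoint (assumed port 8080).
master_status() {
  curl -s "http://$1:8080/json" | grep -o '"status" *: *"[A-Z]*"'
}

# Usage against the masters configured above:
#   master_status hadoop2   # the active master reports "status" : "ALIVE"
#   master_status hadoop3   # the standby reports "status" : "STANDBY"
# Then kill the ALIVE master's process; within the failover timeout the
# standby should report ALIVE, and a running spark-shell should keep working.
```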
Spark Standalone-Mode HA