How do I start multiple executors on a worker node of a Spark cluster?
By default, a worker in a Spark standalone cluster starts only one executor, i.e. runs a single CoarseGrainedExecutorBackend process. The Worker controls the start and stop of the CoarseGrainedExecutorBackend by holding an ExecutorRunner object.
So how do you start multiple executors? This can be done by setting the following parameters:
1. Set the number of CPU cores used by each executor to 4:
spark.executor.cores 4
2. Limit the total number of CPU cores the application may use; with a limit of 12, 3 executors (12 / 4) will be started:
spark.cores.max 12
3. Set the memory of each executor to 12 GB:
spark.executor.memory 12g
With the above settings, 3 executors will be started, each using 4 CPU cores and 12 GB of RAM, for a total of 12 CPU cores and 36 GB of RAM on the worker.
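For reference, the same three settings can also be applied programmatically via SparkConf before the context is created. The sketch below is only a minimal example for a standalone cluster; the application name, class name, and master URL are placeholders:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class MultiExecutorExample {
    public static void main(String[] args) {
        // Same settings as the three properties listed above.
        SparkConf conf = new SparkConf()
                .setAppName("multi-executor-example")   // placeholder application name
                .setMaster("spark://master:7077")       // standalone master URL
                .set("spark.executor.cores", "4")       // CPU cores per executor
                .set("spark.cores.max", "12")           // total cores for the application
                .set("spark.executor.memory", "12g");   // memory per executor

        JavaSparkContext sc = new JavaSparkContext(conf);
        // The standalone master can now launch 12 / 4 = 3 executors.
        sc.stop();
    }
}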
The relevant constants in the Spark 1.6 source are:
protected final String EXECUTOR_MEMORY = "--executor-memory";
protected final String TOTAL_EXECUTOR_CORES = "--total-executor-cores";
protected final String EXECUTOR_CORES = "--executor-cores";
You can also pass these options when submitting a job:
spark-submit --class com.dyq.spark.MyClass --master spark://master:7077 --total-executor-cores 24 --executor-cores 4 --executor-memory 12g
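Assuming 4 cores per executor as in the earlier example, --total-executor-cores 24 gives 24 / 4 = 6 executors, each with 12 GB of memory, provided the workers in the cluster have enough free cores and RAM.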
Tip
In practice, we have found that with Spark versions earlier than 1.5 there are cases where the requested resources cannot be obtained.
Finally, here is the full set of spark-submit options, as defined in SparkSubmitOptionParser:
SparkSubmitOptionParser defines one String constant per spark-submit option.
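As a rough, abbreviated sketch (the exact list and grouping are in the org.apache.spark.launcher.SparkSubmitOptionParser class of the spark-launcher module), the constants look like this:

// Abbreviated excerpt; see SparkSubmitOptionParser in Spark 1.6 for the full list.
protected final String CLASS = "--class";
protected final String CONF = "--conf";
protected final String DEPLOY_MODE = "--deploy-mode";
protected final String DRIVER_MEMORY = "--driver-memory";
protected final String EXECUTOR_MEMORY = "--executor-memory";
protected final String MASTER = "--master";
protected final String NAME = "--name";
protected final String PROPERTIES_FILE = "--properties-file";
protected final String TOTAL_EXECUTOR_CORES = "--total-executor-cores";
protected final String EXECUTOR_CORES = "--executor-cores";
// ... plus YARN-specific and no-argument options such as --num-executors,
// --queue, --verbose, and --version.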