This problem occurs when you run a Spark program and the log repeatedly prints:
WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
The job then makes no progress. Logging in to the web UI shows the application status as WAITING, with 0 cores allocated, and memory equal to the value of --executor-memory (or -Dspark.executor.memory).
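For instance, a submission along these lines reproduces the symptom whenever the requested executor memory exceeds what any worker offers (the host name, class, jar, and the 8g figure below are made-up placeholders, not from the original setup):

    # Hypothetical job submission: asks for 8g per executor,
    # more than any worker in this example cluster can provide
    spark-submit \
      --master spark://master-host:7077 \
      --executor-memory 8g \
      --class com.example.MyApp \
      myapp.jar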
This happens because the resources currently available in the cluster cannot satisfy what the application requested.
First check how much memory each worker is allowed to hand out to executors, which is configured in spark-env.sh (SPARK_WORKER_MEMORY). If the application requests more executor memory than this value, the master will not place executors on that worker; it will only schedule them on workers that can satisfy the request.
Of course, since the nodes in a cluster are usually configured identically, in practice no worker can satisfy the request: the root cause is simply that the requested executor memory exceeds the memory each worker offers.
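To confirm what the workers currently offer, one quick check (assuming the default conf location) is:

    # Show the per-worker memory limit, if it has been set explicitly
    grep SPARK_WORKER_MEMORY $SPARK_HOME/conf/spark-env.sh

The same number is also visible in the Memory column of each worker row on the master web UI (port 8080 by default).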
Increase the per-worker memory value in spark-env.sh (provided the machines actually have that much memory, of course) and restart the workers, and the problem will be solved.
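A minimal sketch of the fix, assuming your machines have the memory to spare (the 4g and 4 values are examples, not recommendations):

    # conf/spark-env.sh on every worker node
    export SPARK_WORKER_MEMORY=4g   # total memory this worker may give to executors
    export SPARK_WORKER_CORES=4     # optional: cores this worker may give out

Restart the workers so the new limits take effect (sbin/stop-slaves.sh and sbin/start-slaves.sh in older releases; newer Spark versions name these scripts stop-workers.sh and start-workers.sh). Alternatively, leave spark-env.sh alone and shrink the application's request instead, e.g. --executor-memory 2g.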