Introduction to the Spark Cluster Manager


Spark can run on a variety of cluster managers, and it accesses the other machines in a cluster through the cluster manager.
Spark supports three cluster managers. If you just want to get Spark running, you can use Spark's built-in Standalone cluster manager (standalone deploy mode). If you want Spark to share a cluster with other applications, there are two cluster managers you can use: Hadoop YARN or Apache Mesos.

1. Standalone Cluster Manager

The Spark Standalone cluster manager provides a simple way to run applications on a cluster. To use the cluster launch scripts, follow these steps:
1. Copy the compiled Spark to the same directory on all nodes of the cluster, for example /home/opt/spark.
2. Set up passwordless SSH login from the cluster's master node to the other machines.
3. Edit the master node's conf/slaves file and add the hostnames of all the worker nodes.
4. Run sbin/start-all.sh on the master node to start the cluster; the cluster management interface is then available at http://masternode:8080. (A minimal example follows this list.)
5. To stop the cluster, run sbin/stop-all.sh on the master node.
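As a sketch, assuming a master node named masternode and two workers named worker1 and worker2 (placeholder hostnames), the whole sequence might look like this:

# On the master node: list the worker hostnames in conf/slaves (worker1/worker2 are placeholders)
echo "worker1" >> conf/slaves
echo "worker2" >> conf/slaves
# Start the cluster; the web UI comes up at http://masternode:8080
sbin/start-all.sh
# Submit an application to the standalone master (7077 is the default master port)
bin/spark-submit --master spark://masternode:7077 YourApp
# Stop the cluster when done
sbin/stop-all.sh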

2. Hadoop YARN

YARN is the cluster manager introduced in Hadoop 2.0. It lets multiple data processing frameworks run on a shared resource pool, and it is usually installed on the same physical nodes as Hadoop's distributed storage system, HDFS. Running Spark on a cluster configured with YARN is therefore a good choice: when a Spark program runs on the storage nodes, it can access data in HDFS quickly.
Steps for using YARN in Spark:

1. Find your Hadoop configuration directory and set it as the environment variable HADOOP_CONF_DIR:
export HADOOP_CONF_DIR="..."
Then submit your job as follows:
spark-submit --master yarn YourApp

2. Configure resource usage (see the example after this list)
(1) --executor-memory sets the amount of memory used by each executor
(2) --executor-cores sets the number of cores each executor process claims from YARN
(3) --num-executors sets a fixed number of executors for the Spark application; the default is 2
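Putting these flags together, a minimal sketch of a YARN submission (the executor count and sizes below are placeholder values, not recommendations):

# Hypothetical sizing: 4 executors, each with 2 cores and 2 GB of memory
spark-submit --master yarn \
  --num-executors 4 \
  --executor-cores 2 \
  --executor-memory 2g \
  YourApp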

3. Apache Mesos

Mesos is a general-purpose cluster manager that can run both analytics workloads and long-running services.
To run Spark on Mesos, submit your application as follows:
spark-submit --master mesos://masternode:5050 YourApp

1. Mesos scheduling modes
Mesos scheduling comes in two modes: coarse-grained and fine-grained.
Coarse-grained mode: Spark allocates a fixed number of CPUs to each executor in advance and does not release these resources until the application ends. Coarse-grained scheduling can be enabled by setting spark.mesos.coarse to true, as shown below.
Fine-grained mode (the default): the number of CPU cores occupied by an executor process changes dynamically as tasks execute.
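For example, coarse-grained mode could be enabled at submit time like this (masternode is a placeholder hostname, as above):

spark-submit --master mesos://masternode:5050 \
  --conf spark.mesos.coarse=true \
  YourApp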

2. Configure resource usage (see the example below)
(1) --executor-memory sets the memory for each executor
(2) --total-executor-cores sets the total number of cores the application occupies
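Combining both flags, a sketch of a Mesos submission with explicit resource limits (the sizes are placeholders):

# Hypothetical: cap the application at 10 cores total, with 2 GB of memory per executor
spark-submit --master mesos://masternode:5050 \
  --executor-memory 2g \
  --total-executor-cores 10 \
  YourApp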
