Hadoop scheduler summary

With the growing popularity of MapReduce, its open-source implementation, Hadoop, has become increasingly widely used. One component of the Hadoop system is particularly important: the scheduler, which distributes the system's idle resources among jobs according to a given policy. In Hadoop the scheduler is a pluggable module, so you can design your own scheduler to suit your actual application requirements. Three schedulers are commonly used in Hadoop:
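As a concrete illustration, on the classic MRv1 JobTracker the active scheduler is selected through the mapred.jobtracker.taskScheduler configuration property (normally set in mapred-site.xml). Below is a minimal sketch, assuming a Hadoop 1.x client with the relevant contrib scheduler jar on the classpath:

    // Sketch: pointing the JobTracker at a pluggable scheduler class.
    // org.apache.hadoop.mapred.JobQueueTaskScheduler is the built-in
    // FIFO default in Hadoop 1.x.
    import org.apache.hadoop.conf.Configuration;

    public class SchedulerConfig {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            conf.set("mapred.jobtracker.taskScheduler",
                     "org.apache.hadoop.mapred.FairScheduler");
            // For the Capacity Scheduler use
            // "org.apache.hadoop.mapred.CapacityTaskScheduler" instead.
            System.out.println(conf.get("mapred.jobtracker.taskScheduler"));
        }
    }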

(1) Default scheduler: FIFO

The default scheduler in Hadoop. It selects the job to execute first by job priority and, within the same priority, by arrival time.
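The ordering rule can be pictured as a two-key comparator. A toy sketch (not Hadoop source; the Job class and values here are made up for the example):

    import java.util.*;

    public class FifoOrder {
        // Higher priorities sort first because of the ordinal order below.
        enum Priority { VERY_HIGH, HIGH, NORMAL, LOW, VERY_LOW }

        static class Job {
            final String id; final Priority pri; final long submitTime;
            Job(String id, Priority pri, long submitTime) {
                this.id = id; this.pri = pri; this.submitTime = submitTime;
            }
        }

        public static void main(String[] args) {
            List<Job> queue = new ArrayList<>(Arrays.asList(
                new Job("job_3", Priority.NORMAL, 300),
                new Job("job_1", Priority.HIGH,   200),
                new Job("job_2", Priority.NORMAL, 100)));
            // Primary key: priority; tie-breaker: earlier submission time.
            queue.sort(Comparator.<Job>comparingInt(j -> j.pri.ordinal())
                                 .thenComparingLong(j -> j.submitTime));
            queue.forEach(j -> System.out.println(j.id)); // job_1, job_2, job_3
        }
    }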

(2) Capacity Scheduler

Supports multiple queues. Each queue can be configured with a certain share of resources and uses FIFO scheduling internally. To prevent jobs from a single user from monopolizing a queue's resources, the scheduler limits the resources available to jobs submitted by the same user. During scheduling, a queue is selected first: for each queue, compute the ratio of the number of running tasks to the computing resources allocated to the queue, and choose the queue with the smallest ratio. A job is then selected within that queue by job priority and submission time, subject to the per-user resource limits and memory limits.
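A minimal sketch of the queue-selection step (the queue names, slot counts, and task counts are made up for the example):

    import java.util.*;

    public class QueuePick {
        static class Queue {
            final String name; final int runningTasks; final int capacitySlots;
            Queue(String n, int running, int capacity) {
                name = n; runningTasks = running; capacitySlots = capacity;
            }
            // Ratio of running tasks to the queue's allocated capacity.
            double usedRatio() { return (double) runningTasks / capacitySlots; }
        }

        public static void main(String[] args) {
            List<Queue> queues = Arrays.asList(
                new Queue("prod",  45, 60),   // 75% used
                new Queue("dev",   10, 30),   // 33% used -> chosen
                new Queue("adhoc",  9, 10));  // 90% used
            Queue least = Collections.min(queues,
                    Comparator.comparingDouble(Queue::usedRatio));
            System.out.println("Schedule from queue: " + least.name);
        }
    }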

(3) Fair Scheduler

Similar to the Capacity Scheduler, it supports multiple queues and multiple users, and the resources in each queue can be configured. Jobs in the same queue share all of the queue's resources fairly. For the algorithm, see my blog post "Hadoop Fair Scheduler algorithm analysis".
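A toy sketch of fair sharing within a single queue: each of N running jobs is entitled to an equal share of the queue's slots, and a freed slot goes to the job furthest below its share (the real scheduler also handles weights, minimum shares, and preemption; see reference [2] for the design):

    import java.util.*;

    public class FairShare {
        static class Job {
            final String id; final int runningTasks;
            Job(String id, int runningTasks) {
                this.id = id; this.runningTasks = runningTasks;
            }
        }

        public static void main(String[] args) {
            int queueSlots = 90;
            List<Job> jobs = Arrays.asList(
                new Job("jobA", 40), new Job("jobB", 20), new Job("jobC", 10));
            double fairShare = (double) queueSlots / jobs.size(); // 30 each
            // The next free slot goes to the job with the largest deficit.
            Job neediest = Collections.max(jobs, Comparator.comparingDouble(
                    (Job j) -> fairShare - j.runningTasks));
            System.out.println("Next slot goes to: " + neediest.id); // jobC
        }
    }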

In fact, there are more than these three kinds of Hadoop schedulers; recently, many Hadoop schedulers targeting new application scenarios have emerged.

(4) LATE scheduler for heterogeneous clusters

Existing Hadoop schedulers are built on the assumption of a homogeneous cluster, specifically:

1) The performance of each node in the cluster is identical.

2) A reduce task proceeds in three phases: copy, sort, and reduce, and each phase takes 1/3 of the total time.

3) Tasks of the same type in the same job are started in batches and finish at roughly the same time.

Under these assumptions, the existing Hadoop scheduler has a major defect, mainly in its algorithm for detecting straggler tasks: if a task's progress falls more than 20% behind the average progress of tasks of the same type, it is treated as a straggler (such a task determines the job completion time, so its execution time should be shortened as much as possible), and a speculative (backup) task is launched for it. In a heterogeneous cluster, however, the execution time of the same task can differ significantly from node to node, so this rule easily spawns a large number of unnecessary backup tasks.
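A sketch of this 20% rule, assuming task progress is reported on a 0..1 scale:

    import java.util.*;

    public class StragglerCheck {
        public static void main(String[] args) {
            // Progress of running tasks of the same type (illustrative).
            double[] progress = {0.85, 0.90, 0.88, 0.40};
            double avg = Arrays.stream(progress).average().orElse(0.0);
            for (double p : progress) {
                if (p < avg - 0.20) {  // more than 20% behind the average
                    System.out.printf(
                        "progress %.2f: straggler, launch a backup task%n", p);
                }
            }
        }
    }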

The LATE (Longest Approximate Time to End, reference [4]) scheduler solves this problem to some extent. It defines three thresholds: SpeculativeCap, the maximum number of speculative tasks running simultaneously in the system (the authors recommend 10% of the total number of slots); SlowNodeThreshold (the authors recommend 25%), a node score below which no speculative task is launched on that node (see the paper for how the score is computed); and SlowTaskThreshold (the authors recommend 25%), below which a task, relative to tasks of the same type in the same batch, is considered slow enough to speculate on. The scheduling policy is: when a node has an idle slot and the total number of backup tasks in the system is smaller than SpeculativeCap, (1) if the node is a slow node (its score is below SlowNodeThreshold), ignore the request; (2) sort the currently running tasks by estimated remaining completion time; (3) pick the task with the longest remaining time whose progress is below SlowTaskThreshold and launch a backup task for it.
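A simplified sketch of the LATE decision. The remaining time of a task is estimated as (1 - progress) / progress_rate, as in the paper; for simplicity the two thresholds are passed in as plain values here, whereas the paper derives them as percentiles of the observed node scores and task progress rates:

    import java.util.*;

    public class LateSketch {
        static class Task {
            final String id; final double progress; final double elapsedSec;
            Task(String id, double progress, double elapsedSec) {
                this.id = id; this.progress = progress; this.elapsedSec = elapsedSec;
            }
            double rate()     { return progress / elapsedSec; }     // progress/sec
            double timeLeft() { return (1.0 - progress) / rate(); } // seconds
        }

        // Decide which running task, if any, gets a backup when a node
        // with a free slot asks for work.
        static Task pickBackup(List<Task> running, double nodeScore,
                               double slowNodeThreshold, double slowTaskThreshold,
                               int backupsRunning, int speculativeCap) {
            if (backupsRunning >= speculativeCap) return null; // system-wide cap
            if (nodeScore < slowNodeThreshold)    return null; // skip slow nodes
            return running.stream()
                    .filter(t -> t.rate() < slowTaskThreshold) // slow tasks only
                    .max(Comparator.comparingDouble(Task::timeLeft))
                    .orElse(null);                             // longest time to end
        }

        public static void main(String[] args) {
            List<Task> running = Arrays.asList(
                    new Task("t1", 0.90, 100), new Task("t2", 0.30, 100));
            Task pick = pickBackup(running, 0.8, 0.5, 0.005, 2, 5);
            System.out.println(pick == null ? "no backup" : "backup for " + pick.id);
        }
    }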

(5) Deadline Scheduler and Constraint-based Scheduler for real-time jobs

These schedulers are designed for time-constrained jobs (deadline jobs), that is, jobs that must be completed by a given deadline. They can be divided into two kinds: soft real-time schedulers, which allow a job to overrun its deadline to some extent, and hard real-time schedulers, which require the job to be completed strictly on time.

The Deadline Scheduler (reference [5]) is mainly used for soft real-time jobs. It dynamically adjusts the resources allocated to a job based on the job's running progress and remaining time, so that the job completes before its deadline whenever possible.
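A hedged sketch of the core calculation with illustrative numbers: spread the job's estimated remaining work evenly over the time left until its deadline to obtain the number of concurrent slots to allocate (a simplification of the approach in reference [5]):

    public class DeadlineEstimate {
        public static void main(String[] args) {
            int remainingTasks   = 120;    // tasks still to run
            double avgTaskSec    = 30.0;   // observed mean task duration
            double secToDeadline = 600.0;  // time left before the deadline
            // Remaining work is remainingTasks * avgTaskSec slot-seconds;
            // finishing it in secToDeadline needs this many parallel slots:
            int slotsNeeded =
                (int) Math.ceil(remainingTasks * avgTaskSec / secToDeadline);
            System.out.println("Slots to allocate: " + slotsNeeded); // 6
        }
    }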

The Constraint-based Scheduler (reference [6]) is mainly used for hard real-time jobs. Based on the deadlines of jobs and the running status of the real-time jobs currently in the system, it predicts whether a newly submitted real-time job can be completed before its deadline; if not, it reports this to the user and asks them to re-adjust the job's deadline.
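A hedged sketch of such an admission check, again with illustrative numbers: a new job is admitted only if the slot-time left over after the already-admitted jobs can cover its estimated work before its deadline; otherwise it is rejected so the user can renegotiate the deadline:

    public class AdmissionCheck {
        // Can this many tasks of avgTaskSec each fit into the spare capacity?
        static boolean canAdmit(double freeSlotSec, int tasks, double avgTaskSec) {
            return tasks * avgTaskSec <= freeSlotSec;
        }

        public static void main(String[] args) {
            double capacitySlotSec = 100 * 600;   // 100 slots over the next 600 s
            double reservedSlotSec = 40_000;      // promised to admitted jobs
            double free = capacitySlotSec - reservedSlotSec; // 20,000 left
            boolean ok = canAdmit(free, 500, 30.0);          // needs 15,000
            System.out.println(ok ? "admit job" : "reject: deadline infeasible");
        }
    }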

-----------------------------------------

References:

[1] Capacity Scheduler introduction: http://hadoop.apache.org/common/docs/r0.19.2/capacity_scheduler.html

Download: http://hadoop.apache.org/common/docs/r0.20.0/capacity_scheduler.pdf

[2] Fair Scheduler introduction: http://hadoop.apache.org/common/docs/r0.20.2/fair_scheduler.html

Download: http://svn.apache.org/repos/asf/hadoop/mapreduce/trunk/src/contrib/fairscheduler/designdoc/fair_scheduler_design_doc.pdf

[3] Fair Scheduler paper: M. Zaharia, D. Borthakur, J. S. Sarma, K. Elmeleegy, S. Shenker, and I. Stoica, "Job scheduling for multi-user MapReduce clusters," EECS Department, University of California, Berkeley, Tech. Rep., Apr. 2009.

[4] C. Tian, H. Zhou, Y. He, and L. Zha, "A dynamic MapReduce scheduler for heterogeneous workloads," in Proceedings of the 2009 Eighth International Conference on Grid and Cooperative Computing, ser. GCC '09. Washington, DC, USA: IEEE Computer Society, 2009, pp. 218-224. [Online]. Available: http://dx.doi.org/10.1109/GCC.2009.19

[5] Deadline Scheduler paper: J. Polo, D. Carrera, Y. Becerra, J. Torres, E. Ayguadé, M. Steinder, and I. Whalley, "Performance-driven task co-scheduling for MapReduce environments," in Network Operations and Management Symposium (NOMS), 2010 IEEE, 2010, pp. 373-380.

[6] Constraint-based Scheduler paper: K. Kc and K. Anyanwu, "Scheduling Hadoop jobs to meet deadlines," in 2nd IEEE International Conference on Cloud Computing Technology and Science (CloudCom), 2010, pp. 388-392.

 

From http://dongxicheng.org/mapreduce/hadoop-schedulers/
