Yarn Resource Scheduler
1. Capacity Scheduler
Design objective: divide cluster resources by queue so that a distributed cluster can be shared by multiple users and multiple applications, allow resources to migrate dynamically between queues, prevent resources from being monopolized by any single application or user, and improve cluster throughput and utilization.
Core idea: traditionally, each team runs its own independent cluster, holding a group of machines "for a rainy day" and carrying its own management overhead, which usually means poor resource utilization and significant administrative cost. The CapacityScheduler instead organizes queue resources elastically: all queues draw from a common pool, and resources flow between queues based on demand, so the idle resources of one queue can be used by another, "busier" queue.
Characteristics:
(1) Resource capacity guarantee
Queues effectively partition the cluster's resources: every application is submitted to a specific queue, and the resources an application can use are bounded by the resources its queue owns. Administrators can configure a soft limit (guaranteed capacity) and an optional hard limit (maximum capacity) on each queue.
(2) Hierarchical queues
Hierarchical queues let an organization share its free resources among its own sub-queues before other queues may use them, providing more control and predictability.
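The capacity guarantees and limits above are expressed in capacity-scheduler.xml. A minimal sketch (the queue names `dev` and `prod` and the percentages are invented for illustration):

```xml
<!-- capacity-scheduler.xml: two child queues under root.
     Queue names and percentages are illustrative. -->
<configuration>
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>dev,prod</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.dev.capacity</name>
    <!-- soft limit: dev is guaranteed 30% of the cluster -->
    <value>30</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.dev.maximum-capacity</name>
    <!-- optional hard limit: elastic growth capped at 60% -->
    <value>60</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.prod.capacity</name>
    <value>70</value>
  </property>
</configuration>
```

With this layout, `dev` can elastically borrow up to 60% of the cluster when `prod` is idle, but never more.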
2. How the Fair Scheduler differs from the Capacity Scheduler
A. Fair sharing of resources: within each queue, the Fair Scheduler can allocate resources to applications according to a FIFO, Fair, or DRF policy. The Fair policy divides resources evenly, and by default every queue allocates resources that way.
B. Resource preemption: when a queue has spare resources, the scheduler lends them to other queues; when a new application is submitted to that queue, the scheduler reclaims resources for it. To minimize wasted computation, the scheduler uses a wait-then-enforce strategy: only if the resources are not returned within a configured timeout does it preempt, killing some tasks in the queues that are using more than their share in order to free resources.
C. Load balancing: the Fair Scheduler provides a task-count-based load-balancing mechanism that spreads the system's tasks across nodes as evenly as possible. Users can also design a load-balancing mechanism of their own to suit their needs.
D. Flexible per-queue scheduling policy: the Fair Scheduler lets administrators set the scheduling policy for each queue individually (currently FIFO, Fair, or DRF).
E. Better response time for small applications: because of the max-min fairness algorithm, small jobs obtain resources quickly and run to completion.
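Points B and D are driven by the Fair Scheduler's allocation file, fair-scheduler.xml. A minimal sketch (the queue names and timeout values are invented; timeouts are in seconds):

```xml
<!-- fair-scheduler.xml: per-queue policies (point D) and
     preemption timeouts (point B). Names/values are illustrative. -->
<allocations>
  <queue name="batch">
    <schedulingPolicy>drf</schedulingPolicy>
    <!-- preempt if below fair share for 60s -->
    <fairSharePreemptionTimeout>60</fairSharePreemptionTimeout>
  </queue>
  <queue name="interactive">
    <schedulingPolicy>fair</schedulingPolicy>
    <!-- preempt sooner if below the guaranteed minimum share -->
    <minSharePreemptionTimeout>30</minSharePreemptionTimeout>
  </queue>
  <defaultQueueSchedulingPolicy>fair</defaultQueueSchedulingPolicy>
</allocations>
```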
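The max-min fairness mentioned in E can be sketched in a few lines of Python (a toy allocator, not YARN code; the capacity and demand numbers are made up):

```python
def max_min_fair(capacity, demands):
    """Allocate `capacity` across `demands` by max-min fairness:
    any demand that fits under an equal share is fully satisfied,
    and its surplus is redistributed among the larger demands."""
    allocation = [0.0] * len(demands)
    remaining = float(capacity)
    # Indices of still-unsatisfied demands, smallest demand first.
    pending = sorted(range(len(demands)), key=lambda i: demands[i])
    while pending:
        share = remaining / len(pending)  # equal split of what is left
        i = pending[0]
        if demands[i] <= share:
            # Small demand: satisfy it fully; surplus returns to the pool.
            allocation[i] = demands[i]
            remaining -= demands[i]
            pending.pop(0)
        else:
            # All remaining demands exceed the equal share: cap them all.
            for i in pending:
                allocation[i] = share
            break
    return allocation

# Small jobs (2 and 2.6) are fully satisfied right away;
# the two big ones split the remaining capacity evenly (2.7 each).
print(max_min_fair(10, [2, 2.6, 4, 5]))
```

This is exactly why small jobs finish fast: their full demand fits under the equal share, so they never have to wait for the large jobs.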
Erlang Process Scheduling
Erlang process scheduling is preemptive. To keep scheduling fair across processes, each process is given a budget of reductions; each unit of work the process performs decrements the count by 1, and when the count reaches 0 the process is no longer scheduled in that slice. Once a process's time slice is used up, the Erlang scheduler forcibly takes the CPU away from it, whether or not it has finished executing, and hands it to the next process to be scheduled.
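The reduction-budget preemption above can be sketched as a toy round-robin simulation (Python rather than Erlang; the 2000-reduction slice approximates the BEAM's traditional budget, and the process names and workloads are invented):

```python
from collections import deque

def run(processes, budget=2000):
    """Round-robin over runnable processes. Each process runs until it
    finishes or burns `budget` reductions, then is preempted and moved
    to the back of the run queue. `processes` maps pid -> reductions of
    work still needed; returns the order in which pids got the CPU."""
    queue = deque(processes.items())
    trace = []
    while queue:
        pid, work = queue.popleft()
        trace.append(pid)
        work -= min(work, budget)  # burn at most one slice of reductions
        if work > 0:
            # Budget exhausted: preempt even though the process isn't done.
            queue.append((pid, work))
    return trace

# The long process cannot monopolize the scheduler: it is preempted
# after each 2000-reduction slice and interleaved with the short one.
print(run({"long": 5000, "short": 1500}))  # → ['long', 'short', 'long', 'long']
```

Note the contrast with cooperative scheduling: the long process is forced off the CPU mid-computation, so the short process gets the CPU after at most one slice.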
A thing or two about Yarn resource scheduling and Erlang process scheduling