Process scheduling Summary

Source: Internet
Author: User
Tags: semaphore

Process scheduling

Basic properties
1. A process is dynamic: it is created, runs, and is eventually terminated.
2. Several different processes can include the same program.
3. A process has three basic states, which can be converted into one another.
4. Processes execute concurrently, taking turns occupying the processor.

Basic states
1. Wait (blocked) state: waiting for some event to complete.
2. Ready state: waiting for the system to allocate the processor.
3. Running state: currently occupying the processor.

Running → Wait: usually caused by waiting for a peripheral, waiting for a resource such as main memory to be allocated, or waiting for manual intervention.
Wait → Ready: the awaited condition has been fulfilled; the process can run once it is assigned the processor.
Running → Ready: caused not by the process itself but by an external event that forces it to give up the processor, for example its time slice is exhausted or a higher-priority process preempts the processor.
Ready → Running: the system selects a process from the ready queue according to some strategy; once it occupies the processor it becomes running.

High-level, intermediate, and low-level scheduling
From submission to completion, a job typically passes through three levels of scheduling:
- High-level scheduling, also called job scheduling, decides which jobs in the backlog are loaded into memory to run.
- Low-level scheduling, also called process scheduling, decides which process in the ready queue obtains the CPU.
- Intermediate scheduling, introduced together with virtual memory, swaps processes between main memory and the swap area on external storage.

Scheduling modes
Non-preemptive mode: once the processor has been assigned to a process, that process keeps running until it completes or is blocked by some event, at which point the processor is assigned to another process.
Preemptive mode: while a process is running, the system may, based on some principle, take away the processor that was assigned to it and give it to another process. Common preemption principles are: priority, shortest-process-first, and time slice.

For example, suppose three processes P1, P2, and P3 require 20, 4, and 2 units of time respectively. If they execute non-preemptively in the order P1, P2, P3, their turnaround times are 20, 24, and 26 units, for an average turnaround time of 23.33 units. Under preemptive scheduling based on the time-slice principle (with a slice of 2 units), the turnaround times of P1, P2, and P3 become 26, 10, and 6 units, for an average of 14 units.

Metrics used to measure process scheduling performance include turnaround time, response time, and the CPU-I/O execution period.

Scheduling algorithms

I. First-come-first-served and shortest-job (process)-first scheduling

1. First-come-first-served scheduling algorithm

The first-come-first-served (FCFS) scheduling algorithm is the simplest scheduling algorithm; it can be used for both job scheduling and process scheduling. When used for job scheduling, each scheduling pass selects the one or more jobs that entered the backlog queue earliest, loads them into memory, allocates their resources, creates their processes, and places them in the ready queue. When used for process scheduling, each pass selects the process that entered the ready queue earliest and assigns it the processor; the process then runs until it completes or is blocked by an event, and only then gives up the processor.
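The FCFS run and the time-slice example from earlier (P1 = 20, P2 = 4, P3 = 2 units) can be reproduced in a short sketch; the function names and the assumption that all processes arrive at time 0 are illustrative, not from the original text:

```python
from collections import deque

def fcfs_turnaround(bursts):
    """Turnaround times under first-come-first-served (all arrive at t=0)."""
    t, result = 0, []
    for burst in bursts:
        t += burst          # each process runs to completion before the next starts
        result.append(t)
    return result

def rr_turnaround(bursts, quantum):
    """Turnaround times under round robin with the given time slice."""
    remaining = list(bursts)
    queue = deque(range(len(bursts)))
    t, finish = 0, [0] * len(bursts)
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)     # unfinished: back to the tail of the ready queue
        else:
            finish[i] = t
    return finish

print(fcfs_turnaround([20, 4, 2]))           # [20, 24, 26] -> average 23.33
print(rr_turnaround([20, 4, 2], quantum=2))  # [26, 10, 6]  -> average 14
```

This matches the numbers quoted above: FCFS averages 23.33 units, round robin with a 2-unit slice averages 14.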

2. Shortest-job (process)-first scheduling algorithm

The shortest-job (process)-first algorithm, written SJ(P)F, gives priority to short jobs or short processes; the two variants are used for job scheduling and process scheduling respectively. Shortest-job-first (SJF) selects from the backlog queue the one or more jobs with the shortest estimated run time and loads them into memory to run. Shortest-process-first (SPF) selects from the ready queue the process with the shortest estimated run time, assigns it the processor, and lets it execute immediately until it either completes or is blocked by an event and gives up the processor, after which scheduling takes place again.
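A minimal sketch of the non-preemptive SJF decision, using hypothetical job names and assuming every job is already in the backlog queue at time 0:

```python
def sjf_schedule(jobs):
    """Non-preemptive shortest-job-first. jobs: {name: estimated_run_time}.
    Returns (execution order, turnaround times), all jobs present at t=0."""
    order = sorted(jobs, key=jobs.get)     # shortest estimated run time first
    t, turnaround = 0, {}
    for name in order:
        t += jobs[name]                    # runs to completion, no preemption
        turnaround[name] = t
    return order, turnaround

order, tat = sjf_schedule({"J1": 20, "J2": 4, "J3": 2})
print(order)  # ['J3', 'J2', 'J1']
print(tat)    # {'J3': 2, 'J2': 6, 'J1': 26}
```

Note how the long job J1 is pushed to the back; this is the starvation risk for long jobs that the high-response-ratio algorithm below addresses.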

II. Highest-priority-first scheduling

1. Types of priority scheduling algorithms

The highest-priority-first (FPF) scheduling algorithm was introduced to take care of urgent jobs, giving them preferential treatment once they enter the system. It is commonly used in batch systems as a job scheduling algorithm, serves as a process scheduling algorithm in many operating systems, and can also be used in real-time systems. When used for job scheduling, the system selects the several highest-priority jobs from the backlog queue and loads them into memory. When used for process scheduling, it assigns the processor to the highest-priority process in the ready queue; at this point the algorithm further divides into the following two variants.

1) Non-preemptive priority algorithm

With this variant, once the system has allocated the processor to the highest-priority process in the ready queue, that process continues to run until it completes, or until some event causes it to give up the processor, at which point the system can reassign the processor to the (new) highest-priority process. This scheduling algorithm is mainly used in batch systems, and can also be applied to some real-time systems whose timing requirements are not stringent.

2) Preemptive priority scheduling algorithm

With this variant, the system likewise assigns the processor to the process with the highest priority and lets it execute. During its execution, however, as soon as another process with a higher priority appears, the process scheduler immediately stops the current process and reassigns the processor to the new highest-priority process. Thus, whenever a new process i enters the ready queue, its priority Pi is compared with the priority Pj of the executing process j. If Pi ≤ Pj, process j continues to execute; but if Pi > Pj, process j is stopped immediately, a process switch is performed, and process i begins executing. This preemptive priority algorithm better serves urgent jobs, so it is often used in demanding real-time systems, as well as in batch and time-sharing systems with high performance requirements.
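The Pi/Pj comparison above can be sketched as a small discrete simulation; the event tuples and process names are hypothetical, and larger numbers mean higher priority:

```python
import heapq

def preemptive_priority(procs):
    """Minimal preemptive-priority simulation. procs: list of
    (arrival_time, priority, burst, name); larger number = higher priority.
    Returns the finish time of each process."""
    procs = sorted(procs)                  # order by arrival time
    ready, finish = [], {}
    t, i = 0, 0
    while i < len(procs) or ready:
        if not ready:                      # CPU idle: jump to the next arrival
            t = max(t, procs[i][0])
        while i < len(procs) and procs[i][0] <= t:
            at, pr, burst, name = procs[i]
            heapq.heappush(ready, (-pr, at, name, burst))  # max-heap on priority
            i += 1
        negpr, at, name, rem = heapq.heappop(ready)
        next_arrival = procs[i][0] if i < len(procs) else float("inf")
        run = min(rem, next_arrival - t)   # run until done or possible preemption
        t += run
        rem -= run
        if rem > 0:
            heapq.heappush(ready, (negpr, at, name, rem))  # was preempted
        else:
            finish[name] = t
    return finish

# B (priority 3) arrives at t=2 and preempts A (priority 1):
print(preemptive_priority([(0, 1, 5, "A"), (2, 3, 2, "B")]))  # {'B': 4, 'A': 7}
```

At every arrival the running process is re-compared with the newcomer, exactly the Pi > Pj test described in the text.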

2. High response ratio priority scheduling algorithm

In batch systems, shortest-job-first is a good algorithm; its main disadvantage is that long jobs are not guaranteed to run. If we introduce the dynamic priority described above for each job, letting a job's priority grow with its waiting time at some rate a, then once a long job has waited a certain amount of time it is certain to be assigned the processor.

Since the sum of waiting time and required service time is the system's response time to the job, the priority equals the response ratio Rp:

Rp = (waiting time + required service time) / required service time = 1 + waiting time / required service time

It can be seen from the above formula:

(1) If jobs have waited the same amount of time, the shorter the required service time, the higher the priority; the algorithm therefore favors short jobs.

(2) If the required service time is the same, a job's priority is determined by its waiting time: the longer the wait, the higher the priority, so the algorithm behaves like first-come-first-served.

(3) For long jobs, priority rises with waiting time; once a job has waited long enough, its priority becomes high enough to obtain the processor. In short, the algorithm favors short jobs while still respecting arrival order, and never starves long jobs. It therefore achieves a good trade-off. Of course, the response ratio of every waiting job must be computed before each scheduling decision, which increases system overhead.
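The response ratio defined above, and the resulting selection rule, can be sketched as follows; the job names and times are illustrative:

```python
def response_ratio(wait, service):
    """Rp = (waiting time + required service time) / required service time."""
    return (wait + service) / service

def hrrn_pick(jobs, now):
    """jobs: {name: (arrival_time, service_time)} in the backlog queue;
    pick the job with the highest response ratio at time `now`."""
    return max(jobs, key=lambda n: response_ratio(now - jobs[n][0], jobs[n][1]))

# At t=10: A has waited 10 units for 10 units of service (Rp = 2.0),
# B has waited 5 units for 2 units of service (Rp = 3.5) -> B is selected.
print(hrrn_pick({"A": (0, 10), "B": (5, 2)}, now=10))  # B
```

As the text notes, this recomputation over all waiting jobs at every dispatch is where the extra overhead comes from.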

III. Time-slice-based round-robin scheduling

1. The round-robin (time-slice rotation) method

1) Fundamentals

In early round-robin scheduling, the system queues all ready processes first-come-first-served and, on each scheduling pass, allocates the CPU to the process at the head of the queue for one time slice. Time slices range from a few milliseconds to a few hundred milliseconds. When the slice runs out, a timer raises a clock interrupt, which signals the scheduler to stop the process and send it to the tail of the ready queue; the processor is then assigned to the new head of the queue, which likewise executes for one time slice. This guarantees that every process in the ready queue receives a slice of processor time within a bounded interval. In other words, the system can respond to every user's request within a bounded time.
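The head-to-tail rotation described above can be traced with a small queue sketch (hypothetical process names, all ready at time 0):

```python
from collections import deque

def rr_trace(bursts, quantum):
    """Trace of (process, start, end) slices under round robin.
    bursts: {name: cpu_time}, all processes ready at t=0."""
    remaining = dict(bursts)
    queue = deque(bursts)              # FCFS order of the ready queue
    t, trace = 0, []
    while queue:
        p = queue.popleft()            # head of the ready queue gets the CPU
        run = min(quantum, remaining[p])
        trace.append((p, t, t + run))
        t += run
        remaining[p] -= run
        if remaining[p] > 0:
            queue.append(p)            # clock interrupt: back to the tail
    return trace

print(rr_trace({"A": 3, "B": 1}, quantum=2))
# [('A', 0, 2), ('B', 2, 3), ('A', 3, 4)]
```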

2. Multilevel Feedback Queue scheduling algorithm

Each of the algorithms above has limitations. Shortest-process-first, for example, favors short processes while neglecting long ones, and if process lengths are not indicated in advance, neither shortest-process-first nor length-based preemptive scheduling can be used at all. The multilevel feedback queue algorithm needs no advance knowledge of process execution times and can satisfy the needs of many kinds of processes, so it is now widely regarded as a good process scheduling algorithm. Its operation, in a system that adopts it, is described below.

(1) Multiple ready queues are set up, each with a different priority: the first queue has the highest priority, the second the next highest, and so on downward. The queues also differ in the size of the time slice they grant: the higher a queue's priority, the smaller the slice its processes receive. For example, the second queue's time slice is twice that of the first, and in general queue i+1's slice is twice that of queue i.

(2) A newly arriving process is placed at the tail of the first queue and waits for dispatch under the FCFS principle. When its turn comes, if it can finish within that queue's time slice it completes and leaves the system; if it has not finished when the slice ends, the scheduler moves it to the tail of the second queue, where it again waits for FCFS dispatch. If it still has not finished after running one slice in the second queue, it is moved to the third queue, and so on. Once a long job (process) has descended from the first queue to the nth (last) queue, it runs in the nth queue under round robin.

(3) The scheduler dispatches processes in the second queue only when the first queue is empty, and processes in queue i run only when queues 1 through i-1 are all empty. If the processor is serving a process from queue i and a new process enters a higher-priority queue (1 through i-1), the new process preempts the processor: the scheduler puts the running process back at the tail of queue i and assigns the processor to the newcomer.
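Steps (1) and (2) can be sketched as follows; this simplified model assumes all processes arrive at time 0 and does not model rule (3)'s preemption by new arrivals, and the names and parameters are illustrative:

```python
from collections import deque

def mlfq(bursts, base_quantum=1, levels=3):
    """Simplified multilevel feedback queue: the quantum doubles at each lower
    level; an unfinished process is demoted one level; the lowest level is
    plain round robin. bursts: {name: cpu_time}, all arriving at t=0."""
    queues = [deque() for _ in range(levels)]
    for name in bursts:
        queues[0].append(name)         # every new process enters the top queue
    remaining = dict(bursts)
    t, finish = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        name = queues[level].popleft()
        quantum = base_quantum * (2 ** level)               # slice doubles per level
        run = min(quantum, remaining[name])
        t += run
        remaining[name] -= run
        if remaining[name] == 0:
            finish[name] = t
        else:
            queues[min(level + 1, levels - 1)].append(name)  # demote

    return finish

print(mlfq({"A": 1, "B": 5}, base_quantum=1, levels=2))  # {'A': 1, 'B': 6}
```

The short process A finishes in the first queue after one slice, while the long process B sinks to the bottom queue and completes under round robin, as the text describes.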

Causes of process scheduling
    • The executing process completes, or can no longer continue because of some event;
    • The executing process suspends itself because of an I/O request;
    • A primitive operation such as a P operation, a block primitive, or a suspend primitive is performed during process communication or synchronization;
    • Under preemptive scheduling, a process with higher priority than the current one enters the ready queue;
    • Under round robin, the time slice runs out.
Usually the system organizes its scheduling queues first-come-first-served or by priority; RQ denotes the ready-queue pointer and EP the pointer to the executing process.

Functions of process scheduling
The specific functions of process scheduling can be summarized as follows.

Recording the execution status of every process in the system. In preparation for process scheduling, the process management module must record the execution and status characteristics of each process in that process's PCB. Based on each process's status and resource requirements, it also links the PCBs into the corresponding queues and moves them between queues dynamically. By tracking changes to the PCBs, the process scheduling module knows the execution status of every process in the system and, at the appropriate moment, selects a process from the ready queue to occupy the processor.

Choosing the process that gets the processor. The main function of process scheduling is to select, according to some strategy, a process in the ready state and give it the processor to execute. Different system design goals lead to different selection strategies: for example, static priority-number scheduling with low overhead, or the round robin and multilevel-feedback round robin methods suited to time-sharing systems. These selection strategies determine the performance of the scheduling algorithm.

Performing process context switches. A process's context includes its state, the values of its variables and data structures, the machine registers, the PCB, and its program and data. A process always executes within its context.
When the executing process gives up the processor for some reason, the system performs a context switch so that another process can execute. It first checks whether a context switch is allowed (in some cases it is not, for example while the system is executing a primitive that must not be interrupted). It then saves enough information about the process being switched out that the process can be resumed correctly when it is later switched back in. After the CPU state has been saved, the scheduler selects a new process in the ready state and installs that process's context, so that control of the CPU passes to the selected process.

Timing of process scheduling
When does process scheduling occur? This depends on the cause of the scheduling and on the scheduling mode. The causes are as follows:
    1. The executing process completes. If no new ready process is selected to execute at this point, processor time is wasted.
    2. The executing process calls a blocking primitive to put itself into a sleeping or other blocked state.
    3. The executing process invokes a P primitive operation and is blocked because resources are insufficient, or invokes a V primitive operation that activates processes waiting for the resource.
    4. The executing process is blocked by an I/O request.
    5. In a time-sharing system, the time slice has been exhausted.
    6. A system call made by a user process returns; the system-level work can be considered complete, and a new user process may be selected to run.
    7. A process in the ready queue acquires a priority higher than that of the currently executing process, which also triggers process scheduling.
All of the above can trigger process scheduling. There are two modes of occupying the CPU:

Preemptive: if a process in the ready queue has a priority higher than that of the currently executing process, process scheduling occurs immediately and the processor is handed over.

Non-preemptive: even if the ready queue holds a process with higher priority than the current one, the current process keeps the processor until it enters a blocked or sleeping state of its own accord (by invoking a primitive operation or waiting for I/O) or its time slice runs out.

Process context
A process's context consists of its text segment, data segment, hardware register contents, and data structures. The hardware registers mainly include the program counter (PC), which holds the virtual address of the next instruction the CPU will execute; the processor status register (PS), which reflects the hardware state associated with the process; and the general registers (R) and stack pointer register (S), which hold parameters passed during procedure calls (or system calls). The data structures include all the tables, arrays, and linked lists used to manage and control the process's execution, including the PCB. A context switch is performed whenever process scheduling occurs. The steps to switch a process (context) are:
    1. Save the context of the processor, including program counters and other registers
    2. Update the PCB that is running the process with new status and other relevant information
    3. Move the original process to the appropriate queue (ready or blocked)
    4. Select another process to execute
    5. Update the PCB of the selected process
    6. Reload the CPU context from the selected process
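The six steps above can be sketched on toy PCB dictionaries; this is a hypothetical model for illustration, not a real kernel interface:

```python
def context_switch(cpu, current, ready_queue, pick_next):
    """cpu: register file as a dict; current: PCB of the running process;
    pick_next: scheduling policy choosing a PCB from the ready queue."""
    current["context"] = dict(cpu)   # 1. save the processor context
    current["state"] = "ready"       # 2. update the running process's PCB
    ready_queue.append(current)      # 3. move it to the appropriate queue
    nxt = pick_next(ready_queue)     # 4. select another process
    ready_queue.remove(nxt)
    nxt["state"] = "running"         # 5. update the selected process's PCB
    cpu.clear()
    cpu.update(nxt["context"])       # 6. reload the CPU context
    return nxt

cpu = {"pc": 100}
p1 = {"name": "P1", "state": "running", "context": {}}
p2 = {"name": "P2", "state": "ready", "context": {"pc": 200}}
ready = [p2]
nxt = context_switch(cpu, p1, ready, pick_next=lambda q: q[0])
print(nxt["name"], cpu["pc"], p1["context"]["pc"])  # P2 200 100
```

After the switch, P1's saved program counter sits in its PCB and P2's saved context has been restored into the CPU, mirroring steps 1 and 6.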
Performance evaluation
Process scheduling is a low-level activity inside the system, but its quality directly affects the performance of job scheduling. How, then, should process scheduling be evaluated? The turnaround time and average turnaround time that characterize job scheduling reflect process scheduling performance only to some extent: a job's execution time includes time the process spends waiting (including in the ready state), and how long a process waits depends on the scheduling policy and on when the awaited events occur. Evaluating process scheduling performance is therefore an important part of operating system design.

Process scheduling performance can be measured qualitatively and quantitatively. Qualitatively, the first concern is the reliability of dispatch, including whether a process dispatch can corrupt data structures; this requires great care in choosing when to schedule and in saving the CPU state. Simplicity is another important measure: because the scheduler runs between processes and must perform context switches, an overly cumbersome and complex scheduler consumes substantial system overhead, and response time rises noticeably when user processes make frequent system calls. Quantitative evaluation includes CPU utilization and the ratio of a process's waiting time in the ready queue to its execution time. In practice, the stochastic model of arrivals into the ready queue is hard to determine, and context switching itself affects process execution efficiency, so analyzing process scheduling mathematically is very difficult.
In general, process scheduling performance is evaluated mostly by simulation or by measuring system response time.

Real-time systems
A real-time system differs from other operating systems in that the computer must respond to external events in a timely fashion, complete the processing of each event within strict time limits, and coordinate all real-time devices and real-time tasks. By the criticality of their timing requirements, real-time systems divide into hard real-time and soft real-time systems: in a hard real-time system the time requirements are absolute and every real-time task must finish before its deadline, while a soft real-time system is less strict and tolerates the occasional task that misses its deadline. Real-time systems are generally embedded systems, divided into real-time process control and real-time communication processing; real-time process control is used mainly in industrial and military control, where the goal is to respond to external requests within a strict time bound with high reliability and completeness. To meet the timing requirements, the process scheduling strategy is critically important.

Priority
The simplest and most intuitive strategy is priority-based scheduling, and most real-time systems use it: each process is given a priority according to its importance, and at every dispatch the highest-priority ready process is selected to execute.

The first question is how to assign priorities. Process priorities can be assigned statically or dynamically. Static priority scheduling assigns a fixed priority to every process running in the system.
Static priority assignments can be based on application properties such as process period, user priority, or other predetermined policies. The rate-monotonic (RM) algorithm is a typical static priority scheduling algorithm: it assigns priority according to the length of each process's period, with shorter-period processes receiving higher priority. Dynamic priority scheduling assigns process priorities at run time based on the resource requirements of the processes, allowing greater flexibility in resource allocation and scheduling. In real-time systems the most widely used dynamic priority algorithm is earliest deadline first (EDF), which assigns priority to each process in the ready queue according to its deadline; the process with the nearest deadline has the highest priority.

Once priorities are assigned, the next question is when a high-priority process actually gains use of the CPU; this depends on the operating system kernel, which may be non-preemptive or preemptive. A non-preemptive kernel requires each process to give up ownership of the CPU voluntarily, with all processes cooperating to share one CPU. Asynchronous events are still handled by interrupt service routines, and an interrupt may move a high-priority process from pending to ready, but only when the running process voluntarily gives up the CPU can that high-priority process obtain it.
This creates a response-time problem: a high-priority process may have entered the ready state yet be unable to execute, so its response time is no longer bounded, which conflicts with the requirements of a real-time system. Real-time operating systems therefore generally require a preemptive kernel: when a running process causes a higher-priority process to become ready, the current process's use of the CPU is taken away, or the process is suspended, and the higher-priority process immediately gains control of the CPU; if an interrupt service routine readies a higher-priority process, then when the interrupt completes the interrupted process is suspended and the higher-priority process begins to run. Under such a kernel, multiple processes may be in a concurrent state, and conflicts arise when they share resources, so semaphores are needed to guarantee correct use of critical resources: any process that wants to use a critical resource must obtain the semaphore for that resource before entering the critical section, and otherwise cannot execute the critical-section code.

Priority-based preemptive scheduling completes the basic architecture, but a risk of missed deadlines remains. Suppose the system has three processes P1, P2, and P3, with P1's priority higher than P2's and P2's higher than P3's. P1 and P2 are blocked for some reason, so the system dispatches P3. After a period of time P1 is awakened; under the priority-based preemptive policy, P1 preempts P3's CPU and executes. P1 runs for a while and then tries to enter a critical section, but P3 holds the semaphore for that critical resource, so P1 blocks and waits for P3 to release the semaphore. Some time later, P2 becomes ready, so the system dispatches P2.
If P3 never gets dispatched while P2 executes, then P1 and P3 must wait until P2 finishes, and P1 must further wait for P3 to release the semaphore it holds. That delay can easily exceed P1's deadline and cause P1 to fail. In this sequence, the use of a critical resource let a lower-priority process delay a higher-priority one: a priority inversion, which can crash a hard real-time system. The problem can be resolved with priority inheritance: when a high-priority process waits on a semaphore held by a low-priority process, the low-priority process inherits the high-priority process's priority (that is, the low-priority process is temporarily raised to the high priority), and as soon as it releases the semaphore the high-priority process was waiting for, its priority is immediately lowered back to the original level. This method effectively solves the priority inversion described above. When the high-priority process P1 tries to enter the critical section and blocks because the low-priority process P3 holds the semaphore for that critical resource, the system raises P3's priority to P1's. The intermediate-priority process P2 is then not scheduled even when ready, because P3 now has priority over P2, so P3 is scheduled to execute. When P3 releases the semaphore P1 needs, the system immediately drops P3's priority back to its original level, ensuring that P1 and P2 execute correctly. Many real-time systems, such as VxWorks, use this approach to prevent priority inversion.
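The inheritance rule can be sketched with toy semaphore operations; the process table layout and function names are hypothetical, with larger numbers meaning higher priority:

```python
def acquire(name, resource, procs):
    """Try to take a semaphore-protected resource; on contention the holder
    inherits the waiter's priority if the waiter's is higher."""
    holder = resource["holder"]
    if holder is None:
        resource["holder"] = name
        return True                              # entered the critical section
    if procs[holder]["priority"] < procs[name]["priority"]:
        procs[holder]["priority"] = procs[name]["priority"]  # inheritance
    return False                                 # the waiter blocks

def release(name, resource, procs):
    """Leaving the critical section restores the holder's base priority."""
    procs[name]["priority"] = procs[name]["base"]
    resource["holder"] = None

procs = {"P1": {"priority": 3, "base": 3}, "P3": {"priority": 1, "base": 1}}
res = {"holder": None}
acquire("P3", res, procs)          # low-priority P3 takes the resource
acquire("P1", res, procs)          # P1 blocks; P3 inherits priority 3
print(procs["P3"]["priority"])     # 3 -> an intermediate P2 could not preempt P3
release("P3", res, procs)
print(procs["P3"]["priority"])     # 1 -> back to its base priority
```

While P3 holds the inherited priority, a middle-priority P2 cannot run ahead of it, which is exactly how the inversion in the P1/P2/P3 scenario is avoided.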
Proportional sharing
Although priority-based scheduling is simple, easy to implement, and currently the most widely used process scheduling strategy in real-time systems, it no longer fits some soft real-time applications, such as real-time multimedia conferencing. In such cases we can choose a share-based scheduling algorithm, whose basic idea is to schedule a set of processes according to weights (proportions), so that each process's execution time is proportional to its weight. Proportional-share scheduling can be implemented in two ways: the first adjusts how frequently each ready process appears in the scheduling queue and always dispatches the process at the head; the second runs each ready process in turn but assigns each a time slice proportional to its weight. One problem with proportional-share scheduling is that it defines no notion of priority: all processes share CPU resources in proportion to their requests, and when the system is overloaded every process is slowed down proportionally. To guarantee that the real-time processes in the system still receive a certain amount of CPU time, a method of dynamically adjusting process weights is generally used.

Time-driven scheduling
For simple systems with stable, known inputs, a time-driven scheduling algorithm can be used; it offers good predictability for data processing. This is essentially an off-line static scheduling method fixed at design time: during the system design phase, with all processing known, the start, switch, and end times of every process are explicitly arranged. It suits small embedded systems, sensors, and similar applications.
The advantage of this scheduling algorithm is that process execution is highly predictable; its biggest drawbacks are a lack of flexibility and the possibility that the CPU sits idle even while some process still needs to execute. Different real-time requirements call for different process scheduling strategies, and these methods can also be combined to obtain a more suitable one.
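The second proportional-share implementation described above (each ready process runs once per period, with a slice proportional to its weight) can be sketched as follows; the workload names and weights are illustrative:

```python
def proportional_slices(weights, period):
    """Within each scheduling period, every process receives a time slice
    proportional to its weight. weights: {name: weight}; period: total time."""
    total = sum(weights.values())
    return {name: period * w / total for name, w in weights.items()}

# Hypothetical soft real-time mix: video gets 60% of each 100 ms period.
print(proportional_slices({"audio": 3, "video": 6, "log": 1}, period=100))
# {'audio': 30.0, 'video': 60.0, 'log': 10.0}
```

Note how overload is handled implicitly: adding another process shrinks every slice proportionally, which is why dynamic weight adjustment is needed to protect the real-time processes.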

