Common Scheduling Algorithms for Operating Systems

Source: Internet
Author: User


One. First-come, first-served scheduling algorithm

First-come, first-served (FCFS) is the simplest scheduling algorithm; it can be used for both job scheduling and process scheduling. When used for job scheduling, each invocation selects the job (or jobs) that entered the backlog queue earliest, loads them into memory, allocates resources to them, creates processes for them, and places those processes in the ready queue. When used for process scheduling, each invocation selects the process that entered the ready queue earliest, assigns the processor to it, and lets it run. The process keeps running until it completes, or until it blocks on some event and gives up the processor.

Advantages and disadvantages: the FCFS algorithm is simple but inefficient. It favors long jobs over short ones (compared with SJF and highest-response-ratio scheduling), and it favors CPU-bound jobs over I/O-bound jobs.
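The bias against short jobs can be shown with a small FCFS simulation in Python (an illustrative sketch, not code from this article; the arrival times and bursts below are made up to reproduce the classic "convoy" situation, where two short jobs wait behind one long one):

```python
def fcfs(arrivals, bursts):
    """Simulate FCFS: jobs are indexed in arrival order.
    Returns per-job (waiting_time, turnaround_time)."""
    time, results = 0, []
    for arrive, burst in zip(arrivals, bursts):
        start = max(time, arrive)       # CPU may sit idle until the job arrives
        wait = start - arrive           # time spent in the ready queue
        time = start + burst            # job runs to completion (non-preemptive)
        results.append((wait, wait + burst))
    return results

# Three jobs arriving at t=0,1,2 with bursts 24,3,3:
print(fcfs([0, 1, 2], [24, 3, 3]))   # → [(0, 24), (23, 26), (25, 28)]
```

The two 3-unit jobs wait 23 and 25 time units behind the 24-unit job, which is exactly the unfairness to short work described above.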

Two. Shortest-job-first scheduling algorithm

The shortest-job (or shortest-process) first scheduling algorithm, SJ(P)F, gives priority to short jobs or short processes. It can be used for job scheduling and process scheduling, respectively. The shortest-job-first (SJF) scheduling algorithm selects from the backlog queue one or more jobs with the shortest estimated running time and loads them into memory to run. The shortest-process-first (SPF) scheduling algorithm selects from the ready queue the process with the shortest estimated running time, assigns the processor to it, and lets it execute immediately, running until it completes or blocks on some event and gives up the processor, at which point the scheduler is invoked again.

Disadvantages: the running time of each job must be estimated in advance; the algorithm is very unfavorable to long jobs; it does not consider urgency, so urgent jobs are not guaranteed to be handled in time; and shortest-job-first scheduling is ill-suited to human-computer interaction.
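A minimal non-preemptive SJF simulation (an illustrative sketch, with made-up arrival/burst pairs) makes the selection rule concrete: among the jobs that have already arrived, always run the one with the shortest estimated burst.

```python
def sjf(jobs):
    """Non-preemptive shortest-job-first over (arrival, burst) pairs.
    Returns the average waiting time."""
    pending = sorted(jobs)              # jobs not yet arrived, by arrival time
    time, total_wait, done = 0, 0, 0
    ready = []
    while pending or ready:
        while pending and pending[0][0] <= time:
            ready.append(pending.pop(0))   # move arrivals to the ready pool
        if not ready:                      # CPU idle: jump to next arrival
            time = pending[0][0]
            continue
        ready.sort(key=lambda j: j[1])     # pick the shortest estimated burst
        arrive, burst = ready.pop(0)
        total_wait += time - arrive
        time += burst                      # run to completion
        done += 1
    return total_wait / done

print(sjf([(0, 7), (2, 4), (4, 1), (5, 4)]))   # → 4.0
```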

Three. Highest-priority-first scheduling algorithm

1. Types of priority scheduling algorithms

(1) Non-preemptive priority scheduling algorithm

Under this policy, once the system has allocated the processor to the highest-priority process in the ready queue, that process keeps running until it completes, or until some event forces it to give up the processor; only then does the system reassign the processor to another process of highest priority. This scheduling algorithm is used mainly in batch systems, and can also be applied to real-time systems whose timing requirements are not stringent.

(2) Preemptive priority scheduling algorithm

Under this policy, the system likewise assigns the processor to the highest-priority process and lets it execute. During its execution, however, as soon as another process with a higher priority appears, the process scheduler immediately stops the current process (the one that previously had the highest priority) and reassigns the processor to the newly arrived highest-priority process. Thus, whenever a new process i becomes ready, its priority Pi is compared with the priority Pj of the executing process j. If Pi ≤ Pj, process j continues to execute; but if Pi > Pj, the execution of j is stopped at once, a process switch is performed, and process i begins executing. Clearly, this preemptive priority scheduling algorithm better satisfies the needs of urgent jobs, so it is often used in demanding real-time systems, as well as in batch and time-sharing systems with high performance requirements.
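The Pi/Pj decision rule can be sketched as a small simulation (illustrative only; the job tuples and the convention that a larger number means higher priority are assumptions made for this example):

```python
import heapq

def preemptive_priority(jobs):
    """jobs: list of (arrival, priority, burst); a larger priority number wins.
    Returns the order (by index) in which jobs complete."""
    # Sort by arrival; store priority negated so heapq (a min-heap) pops the
    # highest-priority ready process first.
    jobs = sorted((a, -p, b, i) for i, (a, p, b) in enumerate(jobs))
    ready, time, order = [], 0, []
    while jobs or ready:
        if not ready:
            time = max(time, jobs[0][0])        # CPU idle: jump to next arrival
        while jobs and jobs[0][0] <= time:
            a, negp, b, i = jobs.pop(0)
            heapq.heappush(ready, (negp, i, b))
        negp, i, b = heapq.heappop(ready)
        # Run until the job finishes or the next arrival may preempt it.
        next_arrival = jobs[0][0] if jobs else float("inf")
        run = min(b, next_arrival - time)
        time += run
        if run < b:
            heapq.heappush(ready, (negp, i, b - run))  # preempted, back to ready
        else:
            order.append(i)
    return order

# Job 1 (priority 3) preempts job 0 at t=1; job 2 (priority 2) runs next:
print(preemptive_priority([(0, 1, 5), (1, 3, 2), (2, 2, 3)]))   # → [1, 2, 0]
```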

2. Highest-response-ratio priority scheduling algorithm

In batch systems, shortest-job-first is a good algorithm, but its main drawback is that long jobs cannot be guaranteed service. If we introduce the dynamic priority described above for each job, letting a job's priority grow with its waiting time at rate a, then after a long job has waited a certain time it is sure to get a chance at the processor. The priority changes according to the following rule:
priority (response ratio) = (waiting time + required service time) / required service time = response time / required service time

From this formula we can see:

(1) If jobs have waited equally long, the shorter the required service time, the higher the priority; the algorithm therefore favors short jobs.

(2) If jobs require the same service time, a job's priority is determined by its waiting time: the longer the wait, the higher the priority. For such jobs the algorithm therefore implements first-come, first-served.

(3) For long jobs, priority keeps rising with waiting time, and once a job has waited long enough its priority becomes high enough for it to obtain the processor. In short, the algorithm favors short jobs while still respecting the order in which jobs arrive, so long jobs are never starved of service indefinitely. It therefore achieves a good trade-off. Of course, with this algorithm the response ratio must be computed before every dispatch, which adds system overhead.
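The selection step can be sketched in a few lines (illustrative; the function name and sample jobs are made up). It shows point (3) directly: a long job that has waited long enough overtakes a freshly arrived short job.

```python
def hrrn_pick(now, jobs):
    """jobs: (arrival, required_service) pairs for waiting jobs.
    Response ratio = (wait + service) / service; the highest ratio runs next."""
    def ratio(job):
        arrival, service = job
        return ((now - arrival) + service) / service
    return max(jobs, key=ratio)

# At t=10: the long job (waited 10, ratio 2.25) beats the fresh short
# job (waited 1, ratio 1.5):
print(hrrn_pick(10, [(0, 8), (9, 2)]))   # → (0, 8)
```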

Four. Time-slice-based round-robin scheduling algorithm

1. Time-slice round-robin method

1) Fundamentals

In early time-slice round-robin scheduling, the system queues all ready processes in first-come, first-served order and, on each dispatch, allocates the CPU to the process at the head of the queue, letting it run for one time slice. The size of the time slice ranges from a few milliseconds to a few hundred milliseconds. When the time slice expires, a timer raises a clock interrupt, which signals the scheduler to stop the running process and move it to the tail of the ready queue; the processor is then assigned to the new head of the ready queue, which likewise runs for one time slice. This guarantees that every process in the ready queue receives a slice of processor time within a bounded interval. In other words, the system can respond to the requests of all users within a given time.
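The mechanism can be sketched as follows (an illustrative simulation assuming all processes are ready at t=0; the quantum and burst values are made up):

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: remaining CPU time per process, all ready at t=0.
    Returns the list of (pid, start, end) execution slices."""
    queue = deque(enumerate(bursts))
    time, slices = 0, []
    while queue:
        pid, left = queue.popleft()          # head of the ready queue runs
        run = min(quantum, left)
        slices.append((pid, time, time + run))
        time += run
        if left > run:                       # unfinished: back of the queue
            queue.append((pid, left - run))
    return slices

print(round_robin([5, 3, 1], 2))
# → [(0, 0, 2), (1, 2, 4), (2, 4, 5), (0, 5, 7), (1, 7, 8), (0, 8, 9)]
```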

2. Multilevel Feedback Queue scheduling algorithm

The algorithms described above each have limitations when used for process scheduling. For example, shortest-process-first favors short processes at the expense of long ones, and if process lengths are not specified in advance, neither shortest-process-first nor preemptive scheduling based on process length can be used. The multilevel feedback queue scheduling algorithm does not require the execution times of processes to be known in advance, and it can satisfy the needs of all types of processes, so it is now widely regarded as a good process scheduling algorithm. In a system using the multilevel feedback queue scheduling algorithm, scheduling proceeds as described below.

(1) Multiple ready queues are set up, each with a different priority. The first queue has the highest priority, the second the next highest, and the priorities of the remaining queues decrease one by one. The algorithm also gives each queue a different time-slice size for process execution: the higher a queue's priority, the smaller the time slice of its processes. For example, the time slice of the second queue is twice as long as that of the first queue, ..., and in general the time slice of queue i+1 is twice that of queue i.

(2) When a new process enters memory, it is first placed at the tail of the first queue and waits to be scheduled on the FCFS principle. When its turn comes, if it can finish within its time slice, it leaves the system; if it has not finished when the time slice expires, the scheduler moves it to the tail of the second queue, where it again waits, FCFS, to be dispatched. If it still has not finished after running a time slice in the second queue, it is moved to the third queue, and so on. Once a long job (process) has descended from the first queue to the nth queue, it runs in the nth queue by time-slice round-robin.

(3) The scheduler dispatches processes in the second queue only when the first queue is empty, and it runs processes in queue i only when queues 1 through i-1 are all empty. If the processor is serving a process in queue i when a new process enters one of the higher-priority queues (1 through i-1), the new process preempts the processor: the scheduler puts the running process back at the tail of queue i and assigns the processor to the newly arrived high-priority process.

The advantages of the multilevel feedback queue are:

Terminal-type (interactive) users: short jobs are served first, so response is fast.

Short batch-job users: turnaround time is short.

Long batch-job users: having already executed partially in the earlier queues, long jobs are never left unprocessed for long periods.
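The three rules above can be sketched compactly (illustrative; the quantum doubling per level follows the example in the text, and the simplifying assumption that all processes arrive at t=0 removes the preemption case from rule 3):

```python
from collections import deque

def mlfq(bursts, base_quantum=1, levels=3):
    """All processes ready at t=0. Queue i gets quantum base_quantum * 2**i;
    a process that exhausts its slice drops one level; the lowest level
    behaves as round-robin. Returns the finish time of each pid."""
    queues = [deque() for _ in range(levels)]
    for pid, burst in enumerate(bursts):
        queues[0].append((pid, burst))       # new processes join queue 0
    time, finish = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        pid, left = queues[level].popleft()
        run = min(base_quantum * 2 ** level, left)
        time += run
        if left > run:                       # slice exhausted: demote
            queues[min(level + 1, levels - 1)].append((pid, left - run))
        else:
            finish[pid] = time
    return finish

# A 1-unit job finishes in its first slice; a 6-unit job sinks through
# the queues (slices of 1, 2, then 4):
print(mlfq([1, 6]))   # → {0: 1, 1: 7}
```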

Five. Free-partition allocation algorithms

(1) First-fit algorithm. When allocating memory, the algorithm searches from the head of the free-partition chain until it finds a free partition large enough to satisfy the request. It then carves a block of the requested size out of that partition for the requester, and the remainder of the partition stays on the free-partition chain.

This algorithm preferentially uses free partitions in the low-address part of memory, so the free partitions in the high-address part are seldom touched, preserving large free areas at high addresses. This clearly creates favorable conditions for allocating large memory spaces to large jobs that arrive later. The downside is that the low-address part is repeatedly split, leaving many small, hard-to-use free areas, and since every search starts from the low end, lookup costs inevitably grow.
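A minimal first-fit sketch (illustrative; the [base, size] hole-list format is an assumption of this example, not the article's):

```python
def first_fit(free_list, request):
    """free_list: ordered list of [base, size] holes, lowest address first.
    Allocates from the first hole big enough; returns the base address,
    or None if no hole fits."""
    for hole in free_list:
        base, size = hole
        if size >= request:
            hole[0] += request          # carve the block from the hole's low end
            hole[1] -= request
            if hole[1] == 0:            # hole used up entirely: drop it
                free_list.remove(hole)
            return base
    return None

holes = [[0, 100], [300, 500], [1000, 200]]
print(first_fit(holes, 150))   # → 300 (the first hole of at least 150)
print(holes)                   # → [[0, 100], [450, 350], [1000, 200]]
```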

(2) Circular (next-fit) first-fit algorithm. This algorithm evolved from the first-fit algorithm. When allocating memory to a process, it no longer searches from the head of the chain each time but from the free partition found last time, continuing until a partition satisfying the request is found, from which a block is carved out and allocated to the job. This algorithm spreads free partitions more evenly across memory, but large free partitions become scarce.

(3) Best-fit algorithm. The algorithm always allocates to the job the smallest free partition that satisfies its requirements.

To speed up the search, the algorithm requires that all free areas be sorted by size, forming a free-area chain in ascending order. Then the first free area found that satisfies the request is guaranteed to be the smallest suitable one. In isolation this seems optimal, but in fact it is not necessarily so. Since the leftover space after each allocation is always minimal, memory accumulates many tiny, hard-to-use free areas. The chain also has to be re-sorted after each allocation, which adds a certain amount of overhead.
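A sketch of the best-fit selection rule (illustrative; for brevity it scans the hole list with min() instead of maintaining the size-sorted chain the text describes, but the partition chosen is the same):

```python
def best_fit(free_list, request):
    """Allocates from the smallest hole that satisfies the request.
    free_list holds [base, size] holes; returns the base, or None."""
    fits = [h for h in free_list if h[1] >= request]
    if not fits:
        return None
    hole = min(fits, key=lambda h: h[1])    # tightest fit
    base = hole[0]
    hole[0] += request
    hole[1] -= request
    if hole[1] == 0:
        free_list.remove(hole)
    return base

holes = [[0, 100], [300, 500], [1000, 200]]
print(best_fit(holes, 150))    # → 1000 (size 200 is the tightest fit)
```

Note the contrast with first fit, which would have taken the 500-unit hole at 300 and left a larger remainder.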

(4) Worst-fit algorithm. The worst-fit algorithm keeps the free-area chain sorted in descending order of size and allocates directly from the first free partition on the chain. Obviously, if the first (largest) free partition cannot satisfy the request, no other free partition can. This allocation method seems unreasonable at first, but it has a strong intuitive appeal: after a program is placed in a large free area, the remaining free area is often still large, so another sizable program can fit into it.

The worst-fit algorithm's ordering is exactly the opposite of the best-fit algorithm's: its queue pointer always points to the largest free area, and every search for an allocation starts from the largest free area.

The algorithm overcomes the best-fit algorithm's drawback of leaving many small fragments, but it reduces the chance of preserving large free areas, and reclaiming free areas is just as complex as in the best-fit algorithm.

Six. Page replacement algorithms in virtual page storage management

1. Optimal page replacement algorithm (OPT): an idealized algorithm that cannot be implemented in practice. Its idea: on a page fault, evict the memory page that will never be used again, or that will not be accessed for the longest time to come.

2. FIFO page replacement algorithm: evict the page that entered memory earliest.

3. Least recently used algorithm (LRU): evict the page that has gone unused for the longest time in the recent past.

4. Least frequently used algorithm (LFU): evict the page that has been accessed the fewest times up to the present.
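An LRU fault counter can be sketched with an ordered dictionary standing in for the recency list (illustrative; the reference string below is a common textbook example, not data from this article):

```python
from collections import OrderedDict

def lru_faults(refs, frames):
    """Count page faults under LRU replacement with `frames` page frames."""
    cache = OrderedDict()                      # insertion order = recency order
    faults = 0
    for page in refs:
        if page in cache:
            cache.move_to_end(page)            # hit: mark as most recently used
        else:
            faults += 1
            if len(cache) == frames:
                cache.popitem(last=False)      # evict the least recently used
            cache[page] = True
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))   # → 6
```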

Seven. Disk scheduling algorithms

1. First-come, first-served algorithm (FCFS)

This is a relatively simple disk scheduling algorithm. It dispatches requests in the order in which processes ask to access the disk. Its advantages are fairness and simplicity: every process's request is handled in turn, and no process's request goes unmet for long. Because the algorithm does not optimize the seek path, however, it reduces device throughput and the average seek time may be long, although the variation in response time across processes' disk access requests is small.

2. Shortest seek time first algorithm (SSTF)

This algorithm always selects the process whose requested track is closest to the current head position, so that each seek takes the least time. It achieves better throughput, but it cannot guarantee the shortest average seek time. Its drawback is that response times to user requests are unequal, so response time varies widely. Under heavy load, requests for the innermost and outermost tracks may be delayed indefinitely, and some requests may never be served.
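A greedy SSTF sketch (illustrative; the head position and request queue are the common textbook example of head at track 53):

```python
def sstf(head, requests):
    """Service the pending track requests in shortest-seek-first order.
    Returns (service order, total head movement)."""
    pending, order, moved = list(requests), [], 0
    while pending:
        nearest = min(pending, key=lambda t: abs(t - head))  # greedy choice
        moved += abs(nearest - head)
        head = nearest
        order.append(nearest)
        pending.remove(nearest)
    return order, moved

print(sstf(53, [98, 183, 37, 122, 14, 124, 65, 67]))
# → ([65, 67, 37, 14, 98, 122, 124, 183], 236)
```

Note how the greedy choice drags the head back and forth (65 → 67 → 37 → 14 → 98), and how the outermost request (183) is served last — the starvation risk described above.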

3. Scan algorithm (SCAN, elevator scheduling)

The scan algorithm considers not only the distance between the requested track and the current track, but also the head's current direction of movement. For example, while the head is moving from the inside outward, the next request chosen by the scan algorithm is the one whose track is both outside the current track and closest to it. Access continues outward in this way until no tracks farther out need to be accessed, at which point the arm reverses and moves inward. Scheduling then again picks the closest requested track, now inside the current track, which prevents starvation. Because the head's pattern of movement under this algorithm resembles the operation of an elevator, it is also called the elevator scheduling algorithm. It largely overcomes the shortest-seek-time-first algorithm's problems of service concentrating on the middle tracks and of widely varying response times, while keeping that algorithm's advantages of high throughput and low average response time; but because of the back-and-forth sweep, the tracks at the two edges are still served less often than the middle tracks.
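The service order under SCAN can be sketched as follows (illustrative; it only computes the ordering, and it assumes the arm reverses once the last request in the current direction is served rather than travelling to the physical edge of the disk):

```python
def scan(head, requests, direction="up"):
    """Elevator algorithm: serve requests in the current direction,
    then reverse. Returns the service order."""
    up = sorted(t for t in requests if t >= head)           # outward sweep
    down = sorted((t for t in requests if t < head), reverse=True)  # return sweep
    return up + down if direction == "up" else down + up

print(scan(53, [98, 183, 37, 122, 14, 124, 65, 67]))
# → [65, 67, 98, 122, 124, 183, 37, 14]
```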

4. Circular scan algorithm (C-SCAN)

The circular scan algorithm is an improvement on the scan algorithm. Even if requests are evenly distributed across the tracks, when the head reaches one end of the disk and reverses, relatively few requests lie just behind it, because those tracks have only just been serviced; meanwhile the request density at the other end of the disk is quite high, and those requests have been waiting longer. To resolve this, the circular scan algorithm makes the head move in one direction only. For example, moving only from the inside outward: when the head reaches the outermost requested track, it immediately returns to the innermost requested track, the smallest track number following the largest to form a cycle, and scanning continues.
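The one-way sweep can be sketched like this (illustrative; the jump back to the smallest requested track follows the description above):

```python
def cscan(head, requests):
    """Circular SCAN: serve requests moving toward higher tracks only,
    then jump back to the lowest pending track and continue upward."""
    up = sorted(t for t in requests if t >= head)    # outward sweep
    wrap = sorted(t for t in requests if t < head)   # after the jump back
    return up + wrap

print(cscan(53, [98, 183, 37, 122, 14, 124, 65, 67]))
# → [65, 67, 98, 122, 124, 183, 14, 37]
```

Compare with SCAN on the same input: the tracks below the starting head position are now served in ascending order (14 before 37), so waiting time is distributed more evenly.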

This article is from the "11275984" blog; please retain this source attribution.

