CPU Scheduling
Because the processor is the most important resource in a computer system, improving processor utilization and overall system performance (throughput and response time) depends largely on how well processor scheduling performs. Processor scheduling is therefore one of the central issues in operating system design.
I. Levels of Processor Scheduling
1. High-level scheduling: also called job scheduling or long-term scheduling. Its main function is to move jobs from the backlog queue on external storage into memory according to some algorithm; its scheduling object is therefore the job.
① A job is a broader concept than a program. It contains not only the usual program and data but also a job control specification, according to which the system controls the program's execution. In a batch processing system, the job is the basic unit moved from external storage into memory.
② Job scheduling: based on the information in the job control block, job scheduling checks whether the system can satisfy the resource requirements of users' jobs and, following some algorithm, selects jobs from the backlog queue on external storage to bring into memory, creates processes for them, and allocates the necessary resources. The newly created processes are then inserted into the ready queue to await execution. For this reason job scheduling is also called admission scheduling. Each job-scheduling decision must settle two questions: how many jobs to admit, and which jobs to admit. In a time-sharing system, however, the commands and data the user types at the keyboard are sent directly into memory so that the system can respond promptly, so no job-scheduling mechanism of this kind is needed; some restrictive measures are still required to limit the number of users entering the system.
2. Low-level scheduling: also called process scheduling or short-term scheduling. Its main function is to decide which process in the ready queue should obtain the processor; the dispatcher then performs the actual assignment. It carries out three tasks: saving the processor's context, selecting a process according to some algorithm, and assigning the processor to that process.
① Three basic mechanisms in process scheduling: the enqueuer, the dispatcher, and the context-switching mechanism.
② Process scheduling modes: the non-preemptive mode and the preemptive mode.
1) Non-preemptive mode: once the processor is assigned to a process, the process keeps running no matter how long it takes. The system never preempts the running process's processor because of a clock interrupt or any other reason, nor does it allow another process to preempt the processor assigned to it. The processor is reassigned only when the process completes, releases the processor voluntarily, or blocks on some event.
With non-preemptive scheduling, the events that can trigger process scheduling are:
The executing process completes, or can no longer continue because of some event;
The executing process is suspended because it issues an I/O request;
A primitive operation is performed during process communication or synchronization, such as a P (wait) operation, a block primitive, or a wakeup primitive.
2) Preemptive mode: allows the scheduler, following certain principles, to pause a running process and reassign its processor to another process. The principles used are: priority, shortest job (process) first, and the time slice.
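The three tasks of low-level scheduling (save the context, select a process by some policy, assign the processor) can be sketched as follows. This is a minimal illustration with hypothetical names and a toy FCFS policy, not a real kernel dispatcher:

```python
from collections import deque

# Minimal sketch of a non-preemptive dispatcher (hypothetical names).
# Low-level scheduling: save the running process's context, pick a process
# from the ready queue by some policy, and hand it the processor.

def dispatch(ready_queue, running, policy):
    """Save the running process's context, pick the next one, switch."""
    if running is not None:
        running["context_saved"] = True      # save processor state into the PCB
    nxt = policy(ready_queue)                # select by the scheduling algorithm
    if nxt is not None:
        ready_queue.remove(nxt)
        nxt["state"] = "running"             # assign the processor
    return nxt

def fcfs_policy(ready_queue):
    """FCFS: the process at the head of the ready queue runs next."""
    return ready_queue[0] if ready_queue else None

ready = deque([{"pid": 1, "state": "ready"}, {"pid": 2, "state": "ready"}])
current = dispatch(ready, None, fcfs_policy)   # picks pid 1
```

A preemptive dispatcher would additionally be invoked on clock interrupts or on the arrival of a higher-priority process, rather than only when the running process gives up the processor.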
3. Intermediate scheduling: also called medium-term scheduling. Its main purpose is to improve memory utilization and system throughput. To this end, processes that cannot run for the time being should no longer occupy valuable memory; they are swapped out to external storage to wait, and their state is called the ready-suspended (or suspended) state. When such a process is again able to run and memory becomes somewhat free, intermediate scheduling decides to bring the ready processes on external storage back into memory, change their state to ready, and place them on the ready queue to await process scheduling.
II. Choosing a Scheduling Method and Criteria for Scheduling Algorithms
1. User-oriented principles
① Short turnaround time: the turnaround time is the interval from the moment a job is submitted to the system until the job completes. It consists of four parts: the time the job waits in the backlog queue on external storage, the time the process waits in the ready queue, the time the process executes on the CPU, and the time the process waits for I/O operations to complete.
Average turnaround time: T = (1/n)(T1 + T2 + ... + Tn).
The weighted turnaround time of a job is the ratio of its turnaround time Ti to the time Tsi the system spends actually serving it, that is, Wi = Ti/Tsi.
Average weighted turnaround time: W = (1/n)(T1/Ts1 + T2/Ts2 + ... + Tn/Tsn).
② Fast response time: the response time is the interval from when the user submits a request at the keyboard until the system first produces a response, i.e. until the result appears on the screen. It includes three parts: the time for the request typed at the keyboard to reach the processor, the time for the processor to handle the request, and the time for the response to travel back to the terminal display.
③ Deadline guarantee: the deadline is the latest time by which a task must start running, or the latest time by which it must complete.
④ Priority principle
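The turnaround-time formulas above can be checked with a small worked example. The jobs and times below are hypothetical, and the jobs are assumed to run back-to-back in FCFS order starting at t = 0:

```python
# Worked example (hypothetical jobs): turnaround time Ti = finish - submit;
# the service time Tsi is the pure execution time; weighted turnaround
# Wi = Ti / Tsi.

jobs = [
    # (submit time, service time), run back-to-back in FCFS order
    (0, 4),
    (1, 3),
    (2, 5),
]

finish = 0
turnaround, weighted = [], []
for submit, service in jobs:
    start = max(finish, submit)      # wait for the CPU to become free
    finish = start + service
    t = finish - submit              # turnaround time Ti
    turnaround.append(t)
    weighted.append(t / service)     # weighted turnaround Ti / Tsi

avg_T = sum(turnaround) / len(jobs)  # average turnaround time
avg_W = sum(weighted) / len(jobs)    # average weighted turnaround time
```

Here the turnaround times are 4, 6, and 10, so the average turnaround time is 20/3 ≈ 6.67 and the average weighted turnaround time is 5/3.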
2. System-oriented principles
① High system throughput: throughput is the number of jobs the system completes per unit time, so it is closely related to the average length of batch jobs.
② Good processor utilization
③ Balanced utilization of various resources
III. Scheduling Algorithms
1. First-Come, First-Served (FCFS) scheduling algorithm
2. Shortest-Job (Process)-First scheduling algorithm (SJF/SPF)
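The difference between FCFS and SJF can be made concrete with a short sketch. The burst times below are hypothetical, and all jobs are assumed to arrive at t = 0; SJF is modeled simply by running the jobs in order of increasing burst time:

```python
# Sketch comparing FCFS and non-preemptive SJF average waiting time for
# jobs that all arrive at t = 0 (hypothetical burst times).

def avg_waiting(bursts):
    """Average waiting time when jobs run in the given order from t = 0."""
    waits, t = [], 0
    for b in bursts:
        waits.append(t)      # each job waits until all earlier jobs finish
        t += b
    return sum(waits) / len(bursts)

bursts = [6, 8, 7, 3]                 # FCFS runs in arrival order
fcfs = avg_waiting(bursts)            # waits 0, 6, 14, 21 -> average 10.25
sjf = avg_waiting(sorted(bursts))     # waits 0, 3, 9, 16  -> average 7.0
```

Running the short job first lowers the average waiting time (7.0 vs 10.25 here), which is why SJF gives good average turnaround; its drawback is that long jobs may be starved.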
3. Highest-Priority-First scheduling algorithm
1) Scheduling algorithm types
① Non-preemptive priority algorithm
② Preemptive priority algorithm
2) Priority type
① Static priority: determined at process creation and unchanged throughout the process's lifetime.
② Dynamic priority: the priority granted during process creation can be changed as the process progresses or as the wait time increases, so as to achieve better scheduling performance.
4. Time-slice round-robin (RR) method: a desirable time-slice size is slightly larger than the time a typical interaction requires, so that most interactive requests complete within one time slice.
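Round robin can be sketched in a few lines. The burst times and quantum below are hypothetical; each process runs for at most one time slice, then goes back to the tail of the ready queue:

```python
from collections import deque

# Round-robin sketch (hypothetical bursts): each process runs for at most
# one time slice (quantum), then rejoins the tail of the ready queue.

def round_robin(bursts, quantum):
    """Return per-process completion times under round robin from t = 0."""
    queue = deque((pid, b) for pid, b in enumerate(bursts))
    t, finish = 0, {}
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)
        t += run
        if remaining > run:
            queue.append((pid, remaining - run))   # not done: back of queue
        else:
            finish[pid] = t                        # done within this slice
    return finish

done = round_robin([5, 3, 1], quantum=2)   # pid 2 finishes first, at t = 5
```

Note how process 2, whose burst (1) fits inside one slice, completes quickly even though it arrived last in the queue; this is the sense in which the quantum should be slightly larger than a typical interaction.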
5. The multi-level feedback queue scheduling algorithm works as follows:
① Set up multiple ready queues and assign each a different priority: the first queue has the highest priority, the second the next highest, and so on.
② When a new process enters memory, it is first placed at the tail of the first queue, where it waits to be scheduled according to the FCFS principle. If it has not finished when its time slice expires, the scheduler moves it to the tail of the second queue, and so on down the queues.
③ Processes in queue i are scheduled only when queues 1 through i-1 are all empty. If a new process enters a higher-priority queue while the processor is serving a process in queue i, the new process preempts the processor: the scheduler puts the running process back at the tail of queue i and assigns the processor to the new, higher-priority process.
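Steps ①-③ can be sketched as below. The quanta and burst times are hypothetical, and for brevity this sketch models only demotion between queues, not preemption by newly arriving processes:

```python
from collections import deque

# Multi-level feedback queue sketch: several ready queues, queue 0 has the
# highest priority; a process that uses up its slice drops one level; a
# lower queue runs only when all higher queues are empty.

def mlfq(bursts, quanta):
    """Run all processes to completion; return the completion order (pids)."""
    queues = [deque() for _ in quanta]
    for pid, b in enumerate(bursts):
        queues[0].append((pid, b))          # new process: tail of queue 0
    order = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        pid, remaining = queues[level].popleft()
        run = min(quanta[level], remaining)
        if remaining > run:                 # slice used up: demote one level
            nxt = min(level + 1, len(queues) - 1)
            queues[nxt].append((pid, remaining - run))
        else:
            order.append(pid)               # finished within this slice
    return order

finished = mlfq([10, 2, 6], quanta=[2, 4, 8])   # short job (pid 1) wins
```

The short process (pid 1) finishes first because it completes within one slice of the top queue, while the long process (pid 0) sinks to lower queues with longer quanta. This is how the algorithm favors short and interactive processes without knowing burst lengths in advance.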
IV. Real-Time Scheduling
1. In a real-time system, the processing capacity must be sufficient. Suppose the system has m periodic hard real-time tasks, with processing times Ci and periods Pi. On a single processor, the system is schedulable only if the following constraint holds: C1/P1 + C2/P2 + ... + Cm/Pm <= 1. For example, if the system has 6 hard real-time tasks, each with a period of 50 ms and a processing time of 10 ms, the total utilization is 6 × 10/50 = 1.2 > 1, so the system is not schedulable.
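This utilization test is easy to express directly. The sketch below checks the constraint from the text against the 6-task example:

```python
# Schedulability constraint from the text: m periodic hard real-time tasks
# with processing times Ci and periods Pi fit on one processor only if
# C1/P1 + C2/P2 + ... + Cm/Pm <= 1.

def schedulable(tasks):
    """tasks: list of (Ci, Pi) pairs. True iff total utilization <= 1."""
    return sum(c / p for c, p in tasks) <= 1

# The example from the text: 6 tasks, each with C = 10 ms and P = 50 ms.
six_tasks = [(10, 50)] * 6
utilization = sum(c / p for c, p in six_tasks)   # 6 * 0.2 = 1.2 > 1
```

Since the utilization is 1.2, the task set exceeds the processor's capacity and cannot be scheduled, exactly as the text concludes.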
2. Classification of real-time scheduling algorithms
1) Non-preemptive scheduling algorithms
① Non-preemptive round-robin scheduling algorithm
② Non-preemptive priority scheduling algorithm
2) Preemptive scheduling algorithms
① Preemptive priority scheduling algorithm based on clock interrupts
② Immediate-preemption priority scheduling algorithm
3) Earliest-Deadline-First (EDF) algorithm
4) Least-Laxity-First (LLF) algorithm
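The core of EDF is simply "the ready task with the earliest absolute deadline runs first." A minimal sketch, using a hypothetical task set and a non-preemptive reading of the rule:

```python
# EDF sketch (hypothetical tasks): among the ready tasks, always run the
# one with the earliest absolute deadline first.

def edf_order(tasks):
    """tasks: list of (name, deadline). Return the names in EDF run order."""
    return [name for name, _ in sorted(tasks, key=lambda t: t[1])]

order = edf_order([("A", 30), ("B", 10), ("C", 20)])   # B runs first
```

LLF differs in that it ranks tasks by laxity (deadline minus remaining processing time) rather than by the deadline alone, so a task with little slack is favored even if its deadline is not the earliest.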
What are the common CPU scheduling methods, and what does each basically mean?
Front-Side Bus
A bus is a set of transmission lines that carries information from one or more source components to one or more target components; in general terms, it is a common connection between several parts, used to transfer information among them. Bus speed is usually described as a frequency in MHz, and there are many bus types. The front-side bus, usually abbreviated FSB, is the bus that connects the CPU to the North Bridge chip. When buying a motherboard and CPU, pay attention to how the two combine. As long as the CPU is not overclocked, the front-side bus speed is determined by the CPU; if the motherboard does not support the front-side bus the CPU requires, the system cannot work. That is, the system works only when both the motherboard and the CPU support a given front-side bus speed, and since a CPU's default front-side bus speed is unique, the front-side bus speed of a system is determined by its CPU.
The North Bridge chip connects the components with the highest data throughput, such as the memory and the graphics card, and links to the South Bridge chip. The CPU connects to the North Bridge through the front-side bus (FSB) and then exchanges data with memory and the graphics card via the North Bridge. The front-side bus is thus the main channel for data exchange between the CPU and the outside world, so its transfer capability strongly affects overall computer performance: if the front-side bus is too slow, even a powerful CPU will not noticeably speed up the machine as a whole. The maximum data bandwidth depends on the width of the data transferred simultaneously and the transfer frequency, that is, data bandwidth = (bus frequency × data width) ÷ 8. Front-side bus frequencies on PCs include 266 MHz, 333 MHz, 400 MHz, 533 MHz, and higher. The higher the front-side bus frequency, the greater the data-transfer capacity between the CPU and the North Bridge, and the more fully the CPU can be exploited. CPU technology is developing rapidly and computing speed keeps rising; a fast enough front-side bus ensures that enough data reaches the CPU, whereas a slow front-side bus starves the CPU, limits its performance, and becomes a system bottleneck. Clearly, the faster the front-side bus, the better the system performs.
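The bandwidth formula above is easy to evaluate. The sketch below assumes a 64-bit data path, which is typical for the front-side buses discussed here:

```python
# Bandwidth formula from the text: data bandwidth = (bus frequency x data
# width) / 8. With frequency in MHz and width in bits, the result is MB/s.

def fsb_bandwidth_mb(freq_mhz, width_bits):
    """Peak FSB bandwidth in MB/s for a given frequency and data width."""
    return freq_mhz * width_bits / 8

# e.g. a 533 MHz front-side bus with a 64-bit data path:
bw = fsb_bandwidth_mb(533, 64)   # 4264 MB/s
```

The same formula explains why raising the bus frequency (or, with QDR-style techniques, the effective transfers per clock) directly raises the data-supply rate to the CPU.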
The difference between the external frequency and the front-side bus frequency: the front-side bus speed is the speed of the bus between the CPU and the North Bridge chip and, more importantly, represents the speed of data transfer between the CPU and the outside world. The external frequency (base clock) is defined by the oscillation rate of a digital pulse signal: an external frequency of N MHz means the digital pulse signal oscillates N million times per second, and it governs the frequency of the PCI and other buses. The two concepts are easily confused mainly because, for a long period (before the Pentium 4 appeared, and when it first appeared), the front-side bus frequency was the same as the external frequency, so the front-side bus was commonly referred to as the external frequency, which led to this misunderstanding. As computer technology developed, it became clear that the front-side bus frequency needed to be higher than the external frequency, so QDR (Quad Data Rate) and similar technologies were adopted. Their principle resembles AGP's 2X or 4X: they make the front-side bus frequency two, four, or even more times the external frequency. From then on, the distinction between the front-side bus and the external frequency received attention. In addition, the HyperTransport link of AMD64 is a special case among front-side buses.
External frequency
The external frequency of the CPU is usually the operating frequency of the system bus (the system clock frequency), i.e. the frequency at which data is transferred between the CPU and peripheral devices, and specifically the bus speed between the CPU and the chipset. The external frequency is the speed at which the CPU and the motherboard run in step; in most computer systems it is also the speed at which memory and the motherboard are synchronized. In this sense, the CPU's external frequency links the CPU directly to memory so that the two run synchronously.
Before the 486, CPU clock speeds were still low and the CPU clock speed generally equaled the external frequency. After the 486 appeared, however, as CPU operating frequencies kept rising, other devices in the PC (such as expansion cards and hard disks) were limited by their technology and could not withstand higher frequencies, which constrained further increases in CPU frequency. The frequency-multiplier technique was therefore introduced: it makes the CPU's internal frequency a multiple of the external frequency, so the clock speed can be raised by increasing the multiplier.
In Windows, are job scheduling and process scheduling (within processor scheduling) the same thing?
Indeed, only one process occupies a CPU core at a time; a dual-core CPU (without Hyper-Threading Technology) can run two processes at once. The CPU constantly switches between processes, and the switching rate depends on your CPU clock speed.