Knowledge points in this chapter: 1. Multiprogramming 2. Processes 3. Process states 4. The process control block 5. Process queues 6. Reentrant programs 7. Interrupts and interrupt response 8. Interrupt priority 9. Process scheduling
Self-study requirements: through this chapter you should learn how multiprogramming improves the efficiency of a computer system; how processes differ from programs; the basic states of a process and the transitions between them; process queues and process scheduling policies; and the role of interrupts.
Focuses: multiprogramming, the definition and attributes of processes, and process scheduling policies.
I. Multiprogramming (understanding)
1. What is multiprogramming?
Loading several computing jobs into the main memory of a computer system so that they execute in parallel is a design technique called "multiprogramming"; a computer system organized this way is called a "multiprogramming system" or "multiprogrammed system".
Storage protection: in a multiprogramming system, main memory holds the programs of several jobs at the same time. To avoid mutual interference, the system must provide means to ensure that each program in main memory accesses only its own region. Then no program can corrupt another program or its data during execution; in particular, an error in one program will not affect the others.
Program floating (dynamic relocation): a multiprogramming system places a special requirement on programs: a program may be moved from one area of main memory to another, and the move must not affect its execution. This technique is called "program floating".
A multiprogramming system has three basic requirements:
Use "storage protection" to ensure that the programs do not intrude on one another;
Use "program floating" so that a program can change its storage area flexibly and still run correctly;
Allocate and schedule resources according to definite policies.
2. Multiprogramming exploits the ability of the processor and the peripheral devices to work in parallel, improving efficiency. Specifically:
It improves CPU utilization;
It makes full use of peripheral devices: a computer system is configured with a variety of peripherals. Under multiprogramming, programs that use different devices can be loaded into main memory at the same time and run in parallel, so the peripherals are kept busy and the system's resources are fully used;
It exploits the parallelism between the processor and the peripherals, and among the peripherals themselves;
In general, multiprogramming effectively improves the utilization of system resources and increases the amount of work completed per unit time, thereby improving throughput.
3. The impact of multiprogramming on the number of jobs completed and on job turnaround time. Multiprogramming changes how system resources are used and improves system efficiency, but two issues deserve attention:
The execution time of an individual program may be lengthened;
The degree of multiprogramming is not proportional to system efficiency. On the surface, running more programs in parallel should raise efficiency, but the gain is not proportional, because the achievable degree of parallelism depends on the resources the system is configured with and on the users' resource demands:
(1) the size of main memory limits the number of programs that can be loaded simultaneously;
(2) the number of peripheral devices is also a constraint;
(3) several programs may demand the same resource at the same time.
In short, multiprogramming improves the utilization of system resources and increases the number of jobs completed per unit time, but the time needed to finish any individual job may be lengthened. Moreover, when choosing the degree of multiprogramming, the system's resource configuration and the users' requirements should both be considered.
II. Processes (understanding)
1. Definition of a process: one execution of a program on a data set is called a "process".
2. A process consists of three parts: the program, the data set, and the process control block.
For example, consider the user program notepad.exe (Notepad). Stored on disk, it is a program. When you run it under Windows, a Notepad process is created in memory, and the text currently being edited is that process's data set; the operating system also sets up a process control block for it. Opening a second Notepad window creates another process: both run the same program, but they are two distinct processes, and the text edited in the second window is the data set of the second process.
3. Differences and relationships between processes and programs. A program is static; a process is dynamic. A process comprises the program together with the object it operates on (the data set), and a process yields the results of the program's processing. Processes and programs do not correspond one-to-one: the same program run on different data sets forms different processes. Processes are generally divided into two categories: a process that performs an operating-system function is called a "system process", and a process that performs a user function is called a "user process".
III. Process States (understanding)
1. The three basic states of a process. Based on the situation of a process at different moments of its execution, three basic states can be distinguished:
. Waiting state: the process is waiting for some event to complete;
. Ready state: the process is waiting for the system to allocate it a processor;
. Running state: the process is executing on a processor.
2. Process state transitions
The state of a process changes continually during execution, and at any moment every process is in exactly one of the three basic states. The transitions between the states are as follows:
Running → waiting is usually caused by waiting for a peripheral, waiting for the allocation of main memory or another resource, or waiting for manual intervention.
Waiting → ready occurs when the awaited condition has been satisfied; the process can run again once it is allocated a processor.
Running → ready happens not for the process's own reasons but for external ones: the running process is moved off the processor and becomes ready, for example because its time slice is used up or a higher-priority process preempts the processor.
Ready → running occurs when the system, according to some policy, selects a process from the ready queue to occupy the processor; that process then becomes running.
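The three states and four legal transitions above can be sketched as a tiny state machine. This is an illustrative model only (the class and field names are invented, not from any real kernel):

```python
from enum import Enum

class State(Enum):
    READY = "ready"
    RUNNING = "running"
    WAITING = "waiting"

# The four legal transitions described above, with their typical causes.
TRANSITIONS = {
    (State.RUNNING, State.WAITING): "waits for an event (e.g. I/O completion)",
    (State.WAITING, State.READY):   "the awaited event has completed",
    (State.RUNNING, State.READY):   "time slice used up, or preempted",
    (State.READY,   State.RUNNING): "scheduler dispatches the process",
}

class Process:
    def __init__(self, name):
        self.name = name
        self.state = State.READY          # a newly created process starts ready

    def move_to(self, new_state):
        if (self.state, new_state) not in TRANSITIONS:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

p = Process("p1")
p.move_to(State.RUNNING)   # dispatched by the scheduler
p.move_to(State.WAITING)   # starts waiting for I/O
p.move_to(State.READY)     # I/O done
```

Note that there is no waiting → running edge: a waiting process must first become ready and then be dispatched.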
A process has four basic attributes:
. It is dynamic: it is created, runs, and is eventually destroyed.
. Several different processes may include the same program.
. It moves among three basic states, which can convert into one another.
. Concurrent processes take turns occupying the processor.
IV. The Process Control Block (understanding)
1. Basic contents of the process control block. A process control block generally holds four kinds of information:
. Identification information: a unique process name.
. Description information: the process state, the reason it is waiting, and the storage locations of the process's program and data.
. Field (context) information: the contents of the general, control, and program-status-word registers.
. Management information: the process priority and the queue pointer.
2. The role of the process control block
The process control block (PCB) is allocated by the operating system to identify each process and record its execution. A PCB is the mark of a process's existence: it records the dynamic changes of the process from creation to destruction. A process queue is in fact a chain of process control blocks. The operating system controls and manages processes through their PCBs.
The process control block serves two purposes:
(1) it records the relevant information about the process, so that the operating system's scheduler can schedule it; this information includes the identification, description, field, and management information listed above;
(2) it marks the existence of the process: the process control block is the unique identifier of the process.
V. Process Queues (understanding)
1. Linking processes into queues.
In a multiprogramming system many processes exist at the same time. With a single processor only one process can run at a time; the others are in the ready or waiting state. For ease of management, processes in the same state are often linked together into a "process queue". Because the PCB marks a process's existence and describes its dynamic characteristics, a process queue can be formed by linking the process control blocks. There are two linking methods: one-way (singly) linked and two-way (doubly) linked.
2. Basic process queues
Ready queue: a queue of ready processes linked in a certain order.
Waiting queue: a queue of processes waiting for a resource or for some event.
3. Entering and leaving a queue.
When an event changes the state of a process, the process leaves one queue and joins another.
Dequeuing: the operation by which a process leaves the queue it is in.
Enqueuing: the operation by which a process joins a specified queue.
In the system, queue management is responsible for enqueuing and dequeuing processes.
Whether the link is one-way or two-way, to enqueue or dequeue you first find the head pointer of the queue, follow the chain to locate the process to be inserted and its insertion position, or find the process to be removed, and then modify that process's queue pointer and the pointers of its neighbouring processes.
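For the one-way linked case, enqueuing and dequeuing can be sketched as follows (a minimal illustration with invented names; it keeps FIFO order by inserting at the tail and removing at the head):

```python
class PCB:
    """Minimal PCB: only the identification and the queue pointer matter here."""
    def __init__(self, pid):
        self.pid = pid
        self.next = None

class ProcessQueue:
    def __init__(self):
        self.head = None                 # the head pointer of the queue

    def enqueue(self, pcb):
        """Walk the chain to the tail and link the PCB there."""
        pcb.next = None
        if self.head is None:
            self.head = pcb
            return
        p = self.head
        while p.next is not None:
            p = p.next
        p.next = pcb                     # modify the neighbour's pointer

    def dequeue(self):
        """Unlink and return the PCB at the head (None if the queue is empty)."""
        pcb = self.head
        if pcb is not None:
            self.head = pcb.next
            pcb.next = None              # clear the removed PCB's queue pointer
        return pcb
```

A two-way linked queue would add a `prev` pointer to each PCB, which makes removal from the middle of the queue cheaper at the cost of one more pointer update per operation.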
VI. Reentrant Programs (recognition)
(1) What is a reentrant program? A program that can be called by several users at the same time is called a "reentrant" program.
(2) Properties of a reentrant program.
The code of a reentrant program must be pure: it does not modify itself during execution;
A reentrant program requires each caller to provide a workspace, which ensures that the program serves every user in the same way.
Compilers and operating-system programs are usually "reentrant": they can be called by different users at the same time, forming different processes.
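Both properties can be shown with a small example (the routine and its name are invented for illustration): the code keeps no mutable state of its own, and each caller passes in its own workspace, so simultaneous callers cannot disturb one another.

```python
def count_chars(text, workspace):
    """Reentrant routine: counts the characters of `text` into the
    caller-provided dict `workspace`. The function itself holds no
    mutable state, so it never changes between calls (pure code)."""
    for ch in text:
        workspace[ch] = workspace.get(ch, 0) + 1
    return workspace

# Two "users" call the same code, each supplying its own workspace:
ws1, ws2 = {}, {}
count_chars("aab", ws1)
count_chars("xyz", ws2)
```

A non-reentrant version would accumulate the counts in a global dict; two simultaneous callers would then corrupt each other's results.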
VII. Interrupts and Interrupt Response (understanding)
1. Definition of an interrupt.
While a process occupies the processor and runs, its execution may be suspended because of an event arising from the process itself or from outside it, so that the operating system can handle that event; at a suitable later moment the interrupted process is allowed to continue running. This is called an "interrupt".
2. Interrupt types.
By the nature of the event, interrupts fall into two categories:
. Forced interrupts, including hardware-fault interrupts, program interrupts, external interrupts, and input/output interrupts.
. Voluntary interrupts, which occur when a running process executes a supervisor-call (trap) instruction to request a system service; this kind of interrupt is also called a "supervisor-call interrupt".
The breakpoint of a voluntary interrupt is fixed, while the breakpoint of a forced interrupt can occur anywhere.
3. Interrupt response and interrupt handling.
Interrupt response (a hardware action, performed by the interrupt device)
After executing each instruction, the processor's interrupt hardware immediately checks whether an interrupt has occurred. If one has, the execution of the current process is suspended and the operating system's interrupt handler takes over the processor. This is called the "interrupt response".
During the interrupt response, the interrupt device must do three things:
Determine whether an interrupt event has occurred.
To recognize a voluntary interrupt, it only needs to check whether the operation code is a supervisor-call instruction.
To recognize a forced interrupt, it checks the contents of the interrupt register: 0 means no interrupt has occurred; non-zero means an interrupt event has occurred.
If an interrupt has occurred, protect the breakpoint information.
Every program has a program status word (PSW) reflecting its current execution state, such as the basic state, the interrupt code, and the interrupt mask bits. The processor has a "program status word register" holding the PSW of the currently running program. PSWs are distinguished as the current PSW, the old PSW, and the new PSW.
When an interrupt event occurs, saving the PSW of the interrupted process as the old PSW completes the protection of the breakpoint information.
Start the operating system's interrupt handler.
The interrupt device accomplishes this through the "switch PSW" procedure: it records the interrupt event in the interrupt-code field of the current PSW, saves the current PSW as the old PSW, and then loads the new PSW of the operating system's interrupt handler into the program status word register, making it the current PSW.
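The "switch PSW" procedure can be modelled with a toy simulation (all field names here are invented; a real PSW is a packed hardware word, not a dict):

```python
class CPU:
    def __init__(self, handler_entry):
        # current PSW: describes the program running right now
        self.current_psw = {"pc": 0, "int_code": 0}
        # old PSW: filled in when an interrupt occurs (the breakpoint)
        self.old_psw = None
        # new PSW: fixed by the OS, points at its interrupt handler
        self.new_psw = {"pc": handler_entry, "int_code": 0}

    def respond(self, int_code):
        """The three steps of 'switch PSW'."""
        self.current_psw["int_code"] = int_code   # 1. record which event occurred
        self.old_psw = dict(self.current_psw)     # 2. save current PSW as old PSW
        self.current_psw = dict(self.new_psw)     # 3. load new PSW: handler runs

cpu = CPU(handler_entry=0x1000)
cpu.current_psw["pc"] = 42        # user program is at instruction 42
cpu.respond(int_code=7)
# now cpu.old_psw["pc"] == 42 and cpu.current_psw["pc"] == 0x1000
```

The old PSW preserves both where the process was interrupted (`pc`) and why (`int_code`), which is exactly what interrupt handling later reads to analyse the cause.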
Interrupt handling (a software action, performed by the operating system)
When the operating system's interrupt handler processes the interrupt event, it does three things:
Protect the field information of the interrupted process:
store the general registers, control registers, and the old PSW in the interrupted process's process control block.
Analyse the cause of the interrupt:
the specific cause is determined from the interrupt code in the old PSW.
Handle the interrupt event:
generally only simple processing is done here; in most cases the specific handling is passed on to other program modules.
VIII. Interrupt Priority and Interrupt Masking (recognition)
1. Interrupt priority is fixed by the hardware design. The order in which the interrupt device responds to simultaneously occurring interrupt events is decided in advance and is called the "interrupt priority". It is determined by the importance and urgency of the interrupt events and is fixed when the hardware is designed. The usual order, from high to low, is: hardware-fault interrupts, voluntary interrupts, program interrupts, external interrupts, and input/output interrupts.
2. Nested interrupt handling.
3. Interrupt masking. Interrupt priority only determines the order in which the interrupt device responds to simultaneously pending interrupts. While the interrupt handler is processing one interrupt, the interrupt device may respond to another interrupt event. The handling of a lower-priority event can therefore be interrupted by the handling of a higher-priority one, so the order in which events finish being handled may differ from the order in which they were responded to, and multiple levels of nesting arise, requiring complex work such as saving several execution contexts and returning through several levels.
Interrupt masking was devised to solve this problem: before the handling of one interrupt finishes, the system either responds to no other interrupt events, or responds only to events of higher priority than the current one. Accordingly, after the interrupt device detects an interrupt event it checks the interrupt mask bits in the PSW: if the event is not masked, it responds; otherwise the response is deferred until the mask bit is cleared. Voluntary interrupts cannot be masked.
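The masking decision reduces to a small check. The sketch below is purely illustrative (the interrupt classes and the set-based mask are invented stand-ins for real PSW mask bits):

```python
# Interrupt classes, listed in the usual priority order (high to low).
HARDWARE_FAULT, SVC, PROGRAM, EXTERNAL, IO = range(5)

def should_respond(pending, masked):
    """Decide whether the interrupt device responds to a pending interrupt.

    pending: the class of the pending interrupt event
    masked:  the set of classes currently masked in the PSW
    """
    if pending == SVC:
        return True           # voluntary (supervisor-call) interrupts cannot be masked
    return pending not in masked
```

During the handling of, say, an I/O interrupt, the OS would put `IO` (and anything of lower priority) into the mask, so only more urgent events get through.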
IX. Process Scheduling (understanding)
1. The responsibility of process scheduling: following a chosen scheduling algorithm, select a process from the ready queue to occupy the processor.
2. Criteria for choosing a process scheduling algorithm:
. Improve processor utilization
. Increase throughput
. Reduce waiting time
. Shorten response time
3. Common process scheduling algorithms: first-come-first-served, priority, round-robin (time slice), and multilevel (hierarchical) scheduling.
The first-come-first-served (FCFS) algorithm selects processes to occupy the processor in the order in which they entered the ready queue.
The priority scheduling algorithm assigns each process a priority and always lets the process with the highest priority use the processor first; among processes of equal priority the processor is allocated first-come-first-served. The system usually sets process priorities according to the urgency of the task and overall system efficiency. A process's priority may be fixed, or may change dynamically as the process executes. After a high-priority process becomes ready, the system can treat the currently running process in one of two ways: "non-preemptive" or "preemptive". In the non-preemptive case, the running process keeps the processor until it finishes, unless it gives up the processor voluntarily; in the preemptive case, the system strictly guarantees that at any moment the highest-priority ready process is the one running on the processor.
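Non-preemptive priority selection with FCFS tie-breaking can be sketched with a heap (an illustrative sketch; the tuple encoding is a common idiom, not a prescribed implementation):

```python
import heapq

def priority_schedule(processes):
    """processes: list of (arrival_order, priority, name), where a larger
    priority value means more urgent. Returns the order in which the
    processes are granted the processor."""
    # Negate priority so the max-priority process pops first; the arrival
    # order is the tie-breaker, giving FCFS among equal priorities.
    heap = [(-prio, order, name) for order, prio, name in processes]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

# p2 and p3 share priority 5, so p2 (the earlier arrival) runs first:
order = priority_schedule([(1, 3, "p1"), (2, 5, "p2"), (3, 5, "p3")])
# order == ["p2", "p3", "p1"]
```

A preemptive variant would re-run this selection whenever a new process enters the ready queue, moving the current process back to ready if it is outranked.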
The round-robin (time slice) scheduling algorithm calls the maximum time a process may use the processor in one turn a "time slice". Ready processes are queued in the order in which they became ready; the first process in the queue occupies the processor, but may use only one time slice. If it has not finished when the slice expires, it is placed at the tail of the queue to wait for its next time slice, and every process rotates in this way. Round-robin scheduling is often used in time-sharing operating systems.
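The rotation described above can be simulated in a few lines (a minimal sketch with invented inputs; real schedulers also account for arrivals and blocking):

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {name: remaining CPU time}; quantum: the time slice.
    Returns the order in which the processor is granted."""
    queue = deque(bursts.items())     # ready queue, in order of readiness
    grants = []
    while queue:
        name, remaining = queue.popleft()   # head of queue gets the CPU
        grants.append(name)
        remaining -= quantum                # it may use one time slice
        if remaining > 0:
            queue.append((name, remaining)) # unfinished: back of the queue
    return grants

grants = round_robin({"p1": 3, "p2": 1, "p3": 2}, quantum=1)
# grants == ["p1", "p2", "p3", "p1", "p3", "p1"]
```

Note how p2, which needs only one slice, finishes after its first turn, while p1 keeps rejoining the tail until its three units are used up.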
The multilevel (hierarchical) scheduling algorithm has the system set up several ready queues; within each ready queue, processes occupy the processor by time-slice rotation.
4. Process switching. Process scheduling selects another process from the ready queue to occupy the processor, so one process gives up the processor and another takes it over; this is called a "process switch".
If a process changes from the running state to the waiting state, or withdraws because its work is complete, a process switch is certain to occur. If a process changes from the running or the waiting state to the ready state, a process switch does not necessarily occur.