I. Introduction to the operating system
The operating system (OS) is a set of programs that controls and manages the computer's hardware and software resources, schedules its work reasonably, and makes the machine convenient to use.
1. Objectives and role of the operating system
The main goals of configuring an operating system on a computer system are convenience, effectiveness, extensibility, and openness.
Convenience: a computer with no operating system configured is extremely difficult to use. Once an OS is configured, a user can have a compile command translate a program written in a high-level language into machine code, or operate the computer directly through the various commands the OS provides, which greatly eases use.
Effectiveness: improve the utilization of system resources and increase system throughput.
Extensibility: new functions and modules can be added easily, and existing ones can be modified.
Openness: the system follows worldwide standard specifications.
The role of the operating system:
The OS serves as the interface between the user and the computer hardware: the user uses the hardware through the OS.
The OS serves as the manager of computer system resources. These resources fall into four classes: processors, memory, I/O devices, and files (data and programs).
The OS realizes an abstraction of computer resources.
2. The development of operating systems
Manual operation mode: input and output are done by hand, and one user occupies the whole machine exclusively.
Offline input/output mode: high-speed tape is introduced, reducing CPU idle time and improving I/O speed.
Single-stream batch processing systems: jobs are processed continuously and automatically, but resources are still not fully utilized.
Multiprogrammed batch processing systems: jobs wait in a queue on external storage; a job scheduler selects several of them to load into memory. Resource utilization is high and system throughput is large, but average turnaround time is long and there is no interactive capability.
Time-sharing systems (e.g., CTSS): meet users' need for human-machine interaction. One host is connected to multiple terminals, each with a display and keyboard, allowing multiple users to use the computer interactively through their own terminals and share the host's resources.
Real-time systems: respond to external event requests in a timely manner, complete the processing of each event within a specified time, and control all real-time tasks to run in concert.
3. Basic features of the operating system
The multiprogrammed batch systems, time-sharing systems, and real-time systems described above each have their own characteristics, but they share four common features: concurrency, sharing, virtuality, and asynchrony.
Concurrency: programs in the system can execute concurrently, which lets the OS improve resource utilization and increase system throughput.
Concurrency refers to two or more events occurring within the same time interval; parallelism refers to two or more events occurring at the same instant. If the computer system has multiple processors, concurrent programs can be distributed across them and executed in parallel. Before the process concept is introduced, programs belonging to one application can only execute sequentially; by creating a process for each of several programs in memory, those programs can execute concurrently, greatly improving resource utilization and system throughput.
Sharing: resources in the system can be used by multiple concurrently executing processes in memory. Two main forms of sharing:
Mutually exclusive sharing: during a period of time, only one process is allowed to access the resource; such a resource is called a critical resource.
Simultaneous access: during a period of time, the resource can be accessed by multiple processes "simultaneously" (in fact alternately), as with a disk.
Concurrency and sharing are the two most basic features of a multiuser OS.
Virtuality: achieved through time-division multiplexing and space-division multiplexing.
Time-division multiplexing improves processor utilization by running other programs during the processor's idle time. Space-division multiplexing improves memory utilization by storing and running other programs in free areas of memory.
Asynchrony: processes move forward at unpredictable speeds; the user cannot know when a process will obtain the CPU.
4. Main functions of the operating system
The purpose of introducing the OS is to provide a good environment for running multiprogrammed workloads: to ensure they run in an orderly way, to maximize the utilization of the system's resources, and to make the system convenient for users. The OS has five groups of functions:
Processor management: process control, process synchronization, process communication, and process scheduling.
Memory management: memory allocation, memory protection, address mapping, and memory extension.
Device management: buffer management, device allocation, and device handling.
File management: management of file storage space, directory management, and file read/write management and protection.
Interfaces: the user interface and the program interface.
II. Description and control of processes
1. Processes and threads
Processes are introduced so that programs can execute concurrently and so that this concurrent execution can be controlled. To let each program that participates in concurrent execution run independently, the system configures a process control block (PCB) for each process; the PCB describes the basic situation and activity of the process and is used to control and manage it. A process entity therefore consists of three parts: the program segment, the related data segment, and the PCB.
Definitions of a process:
A process is one execution of a program. A process is the activity that occurs when a program and its data are executed sequentially on a processor. A process is the running activity, on a data set, of a program with independent functionality; it is an independent unit of system resource allocation.
Threads are introduced to reduce the time and space overhead a program pays for executing concurrently, and to improve the concurrency of the OS. When a process is created, the system allocates to it all the resources it needs except the processor and creates the corresponding PCB. When a process is revoked, its resources must first be reclaimed and then the PCB is destroyed. When switching processes, the CPU environment of the current process must be saved and the CPU environment of the new process must be set up, which costs considerable processor time.
Since the process is the owner of resources, frequently creating, revoking, and switching processes incurs large overhead. Therefore the thread is introduced as the basic unit of scheduling and dispatching, while the process remains the independent unit of resource allocation.
2. Comparing programs, processes, and threads
The first comparison is between a program and a process: a process has a PCB that a program does not have, and a process is an execution of a program.
Dynamism:
The essence of a process is the execution of the process entity. Its dynamism shows in that a process comes into being through creation, runs through scheduling, and dies through revocation; a process thus has a life cycle. A program is just an ordered set of instructions stored on some medium; it has no notion of activity in itself and is static.
Concurrency:
Multiple process entities reside in memory and can run concurrently over a period of time. A program, for which no PCB is established, cannot participate in concurrent execution.
Independence:
A process entity is a basic unit that can run independently and obtain resources independently. A program without a PCB cannot run as an independent unit.
Asynchrony:
Processes run asynchronously, each moving forward at its own unpredictable speed. If bare programs participated in concurrent execution, their results would not be reproducible.
The second comparison is between a process and a thread:
Basic unit of scheduling:
A process is the basic unit of resource allocation, while a thread is the basic unit of scheduling and dispatching. Switching threads requires saving and setting only a small number of registers. Switching between threads of the same process does not cause a process switch; switching between threads of different processes does.
Concurrency:
Not only can processes execute concurrently with one another, but multiple threads within one process can also execute concurrently.
Resource ownership:
A process is the basic unit of resource ownership in the system. A thread owns essentially no system resources, only the few resources indispensable for independent running, such as a thread control block (TCB), the program counter, a set of registers, and stacks holding local variables, a few state parameters, and return addresses.
Independence:
Each process has its own address space and other resources; apart from shared global variables, other processes may not access them. A thread owns few resources of its own, but all threads belonging to the same process share that process's address space and can access all of the process's resources.
System overhead:
The cost of creating and revoking a process is much larger than that of a thread. Threads of the same process share the same address space, so switching between them costs far less than a process switch.
Support for multiprocessor systems:
On a multiprocessor system, a traditional (single-threaded) process can run on only one processor at a time, no matter how many processors exist. For a multithreaded process, however, the threads of one process can be assigned to multiple processors and executed in parallel, speeding up the completion of the process.
3. Process states
1) The three basic states of a process.
Ready state: the process has been allocated all necessary resources except the CPU; as soon as it obtains the CPU it can execute immediately.
Execution (running) state: the process has obtained the CPU and its program is executing.
Blocked state: an executing process is temporarily unable to continue because of some event, such as an I/O request or a failure to obtain a buffer.
The three basic states transform into one another as follows: ready → execution when the process is scheduled; execution → ready when its time slice expires; execution → blocked when it requests I/O or waits for some event; blocked → ready when the awaited event completes.
2) To satisfy the process control block's need for data and operation integrity, and to increase management flexibility, the creation state and termination state are introduced.
Creation state: guarantees that a process is scheduled only after its creation is complete, ensuring the integrity of operations on the PCB. Once the required resources have been obtained and PCB initialization is finished, the process moves from the creation state to the ready state.
Termination state: the process waits for the operating system to do its aftermath work; finally the PCB is cleared and its space is returned to the system.
With creation and termination added, the process has five basic states: a new process enters the ready state once creation completes, and an executing process enters the termination state when it finishes or is aborted; the three-state transitions above are otherwise unchanged.
3) The suspended state is introduced to relieve memory pressure and support virtual memory: suspending a process moves it out of memory to external storage, ensuring that sufficient memory remains available.
After the suspended state is introduced, the ready and blocked states each split into an active version (in memory) and a static, suspended version (on external storage), with suspend and activate operations moving processes between the two.
4. Process Synchronization
The main task of the process synchronization mechanism is to coordinate the execution order of multiple related processes, so that concurrently executing processes share system resources according to certain rules and cooperate well with one another, making program execution reproducible.
1) Two forms of mutual restriction.
Indirect mutual restriction: when multiple programs execute concurrently, they restrict one another because they share system resources; multiple processes must access such resources mutually exclusively.
Direct mutual restriction: multiple processes cooperate to complete one task, so they must run in a certain chronological order; some processes can start only after others have finished.
2) Rules a synchronization mechanism should follow: idle yields entry; busy requires waiting; bounded waiting; yield the processor while waiting.
3) Hardware synchronization: disabling interrupts; implementing mutual exclusion with the test-and-set instruction; implementing mutual exclusion with the swap instruction.
4) Semaphore mechanisms.
Integer semaphore: the wait and signal operations are indivisible during execution. They are described as follows:
wait(S) {
    while (S <= 0);   /* busy-wait while no resource is available */
    S--;
}
signal(S) {
    S++;
}
wait(S) and signal(S) are two atomic operations, but they do not follow the "yield the processor while waiting" principle, because the process busy-waits. Record-type semaphore: a record-style data structure is used instead. It is described as follows:
typedef struct {
    int value;
    struct process_control_block *list;
} semaphore;

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) block(S->list);
}
signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) wakeup(S->list);
}
AND-type semaphore: allocate to a process, in one indivisible operation, all the resources it needs for its entire run, and release them all together when the process has finished with them. Semaphore set: a further generalization in which a process can request multiple units of multiple resource classes at once.
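The AND-type primitives are conventionally written as the following textbook-style pseudocode (Swait/Ssignal; block/wakeup are abstract primitives, so this is not runnable C):

```
Swait(S1, S2, ..., Sn) {
    while (TRUE) {
        if (S1 >= 1 && S2 >= 1 && ... && Sn >= 1) {
            for (i = 1; i <= n; i++) Si = Si - 1;   /* take all resources at once */
            break;
        } else {
            /* all-or-nothing allocation avoids deadlock from partial holds:
               block the process on the first Si found with Si < 1 */
        }
    }
}

Ssignal(S1, S2, ..., Sn) {
    for (i = 1; i <= n; i++) {
        Si = Si + 1;                                 /* release every resource together */
        /* move all processes waiting on Si back to the ready queue */
    }
}
```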
5) Applications of semaphores: semaphores can realize process mutual exclusion, so that multiple processes access a critical resource mutually exclusively; semaphores can also enforce precedence (predecessor) relationships between processes.
5. Classic process synchronization problems: the producer-consumer problem, the dining philosophers problem, and the reader-writer problem.
6. Process communication
Types of process communication: shared-memory systems, pipe communication systems, message-passing systems, and client-server systems.
III. Processor scheduling and deadlock
1. Levels of processor scheduling and goals of scheduling algorithms
1) Levels of processor scheduling.
High-level scheduling: also called long-term or job scheduling; its object is the job. It selects several jobs from the backlog queue on external storage, loads them into memory, creates processes for them, allocates the necessary resources, and places them in the ready queue. It is used mainly in multiprogrammed batch systems; time-sharing systems have no high-level scheduling.
Low-level scheduling: also called process scheduling or short-term scheduling; its object is the process or thread. It selects a process from the ready queue to receive the processor. This level of scheduling must be configured in multiprogrammed batch, time-sharing, and real-time systems.
Intermediate scheduling: also called memory scheduling. Processes that temporarily cannot run are moved out to external storage to wait; their state is then called the suspended state. When they become runnable again and memory has free space, intermediate scheduling decides which ready processes on external storage re-enter memory and are set to the ready state. The goal is to improve memory utilization and system throughput.
2) Goals of processor scheduling algorithms.
CPU utilization:
CPU utilization = CPU effective working time / (CPU effective working time + CPU idle waiting time)
Turnaround time: the time from when a job is submitted to the system until the job is completed.
Average turnaround time: T = (sum of the turnaround times of all jobs) / (number of jobs).
Weighted turnaround time: the ratio W = T / Ts of a job's turnaround time T to the service time Ts the system provides it.
Average weighted turnaround time: (sum of the weighted turnaround times) / (number of jobs).
2. Job scheduling
1) A job goes through three stages: reception, running, and completion, corresponding to the three job states: backlog, running, and completed.
2) The main task of job scheduling is to decide how many jobs to admit and which jobs to admit.
3) Job scheduling algorithms:
First-come, first-served (FCFS).
Shortest job first (SJF).
Priority scheduling algorithm (PSA): the system selects several of the highest-priority jobs from the backlog queue and loads them into memory.
Highest response ratio next (HRRN): considers both a job's waiting time and its required running time by introducing a dynamic priority, the response ratio:
Response ratio = (waiting time + required service time) / required service time