Python: processes and the Python Process class
I. Development of Operating Systems
1. In the earliest days of computing there was no operating system. The programmer loaded the punched tape (or cards) holding the program and data into the input device, started the input device to read them into memory, and then started the program on its data via the console switches;
after the computation finished, the printer output the results. The user took the results and removed the tape (or cards) before the next user could compute.
Manual operation had two characteristics: (1) the user had exclusive use of the whole machine, so there was no waiting for resources held by other users, but resource utilization was low; (2) the CPU sat idle waiting for the manual operations, so CPU utilization was insufficient.
2. Batch processing systems emerged to improve operating efficiency. A batch processing system is system software loaded on the computer; under its control the computer can automatically process one or more user jobs (including programs, data, and commands) in batches. Batch processing system: serial job processing, faster than manual operation.
Online batch processing: the host reads the jobs from tape itself.
Offline batch processing: tape reading and CPU computation proceed concurrently, which is faster.
3. Multiprogramming (multi-channel programming) technology allows multiple programs to enter memory at the same time and take turns running on the CPU, sharing the hardware and software resources of the system.
When one program pauses for an I/O request, the CPU immediately switches to another program.
Multiprogramming system: concurrency (programs appear to run in parallel at the macro level)
This gives rise to the operating system's various management functions.
Space-time multiplexing: memory space is isolated between programs, and the CPU appears able to handle several tasks at once.
4. Time-Sharing Technology: the processor's running time is divided into short time slices, and the processor is allocated to the online jobs in turn, one time slice at a time.
Time-sharing system: a better realization of concurrency and interactivity,
at the cost of reduced CPU efficiency (time-slice switching overhead).
5. Real-Time System
Real-time systems come in two kinds: (1) Real-time control systems. When used for the automatic control of aircraft flight or missile launch, the computer must process the data from the measurement system as quickly as possible and control the aircraft or missile in time, or present the information to decision makers through display terminals. When used to control industrial production processes such as steel rolling or petrochemical plants, the computer must likewise promptly process the data sent by the various sensors and then drive the corresponding actuators. (2) Real-time information processing systems. When used for booking airline tickets, querying flights, routes, and fares, or in banking and information retrieval systems, the computer must answer the service requests sent from terminal devices promptly and correctly. These timeliness requirements are somewhat weaker than those of the first category.
Main features of real-time operating systems: (1) Timely response: each step of receiving, analyzing, processing, and sending information must be completed within a strict time limit. (2) High reliability: redundancy measures, such as dual-machine front/back-end standby operation, should be adopted, along with any necessary security measures.
6. The three basic types of operating systems: the multiprogrammed batch system, the time-sharing system, and the real-time system.
A general-purpose operating system is one that combines two or more of these modes of operation, for example batch processing plus time-sharing, or time-sharing plus real-time processing.
7. Operating System Functions
It encapsulates the details of operating the hardware and provides easy-to-use interfaces for applications.
It schedules and manages multiple jobs and allocates hardware resources among them.
II. Processes
1. What is a process?
A process is one running activity of a program on a particular data set. It is the basic unit by which the system allocates resources and schedules work, and it is the foundation of the operating system's structure.
2. Relationship between operating systems and processes
The process is the most basic and important concept in an operating system. It was introduced with the advent of multiprogramming systems as a way to describe the internal dynamics of the system and the activity of the various programs running in it. Every multiprogramming operating system is built on top of processes.
3. How is a process scheduled?
1) First-Come, First-Served (FCFS) scheduling
The FCFS scheduling algorithm is the simplest scheduling algorithm and can be used for both job scheduling and process scheduling. FCFS favors long jobs (processes) over short ones, and it suits CPU-bound jobs better than I/O-bound jobs (processes).
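A minimal FCFS simulation in Python may make the policy concrete (the job tuples below are invented for illustration; this is a sketch, not a real scheduler):

# FCFS: serve jobs strictly in arrival order, each running to completion.
# Each job is (name, arrival_time, service_time); the values are made up.
jobs = [('A', 0, 5), ('B', 1, 3), ('C', 2, 8)]

clock = 0
for name, arrival, service in sorted(jobs, key=lambda j: j[1]):
    start = max(clock, arrival)            # a job cannot start before it arrives
    finish = start + service               # no preemption under FCFS
    print('%s: waited %d, turnaround %d' % (name, start - arrival, finish - arrival))
    clock = finish

Because each job runs to completion, a long job at the head of the queue forces every later, shorter job to wait its full length, which is why FCFS favors long, CPU-bound jobs.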
2) Shortest Job (Process) First scheduling
Shortest Job (Process) First (SJ/PF) gives scheduling priority to short jobs or short processes. The algorithm can be used for job scheduling or for process scheduling. However, it is unfavorable to long jobs, it cannot guarantee that urgent jobs (processes) are handled promptly, and a job's length is only an estimate.
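A sketch of non-preemptive shortest-job-first, under the assumption that a job's "length" is the estimated service time in its tuple; the workload is made up:

# Non-preemptive SJF: among the jobs that have already arrived, always run the shortest one.
jobs = [('A', 0, 7), ('B', 1, 4), ('C', 2, 1)]   # (name, arrival, estimated service time)

clock, order = 0, []
pending = list(jobs)
while pending:
    arrived = [j for j in pending if j[1] <= clock] or [min(pending, key=lambda j: j[1])]
    job = min(arrived, key=lambda j: j[2])       # shortest estimated job wins
    clock = max(clock, job[1]) + job[2]          # run it to completion
    pending.remove(job)
    order.append(job[0])
print('execution order:', order)                 # ['A', 'C', 'B'] with this workload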
3) Time-Slice Round-Robin scheduling
The basic idea of the round-robin (RR) method is to make each process's waiting time in the ready queue proportional to the amount of service it receives. The CPU's processing time is divided into fixed-size time slices, for example from tens to hundreds of milliseconds. If a process selected by the scheduler uses up its time slice without completing its task, it gives up the CPU, goes to the end of the ready queue, and waits for the next round of scheduling; the scheduler then dispatches the process now at the head of the ready queue.

Clearly, round robin can only be used to schedule resources that are preemptible. A preemptible resource can be taken away at any time and given to another process; the CPU is such a resource, while printers and similar devices are not. Job scheduling allocates all system resources other than the CPU, including non-preemptible ones, so job scheduling does not use round robin.

In round robin, the length of the time slice matters a great deal, because it directly affects system overhead and response time. If the slice is too short, the scheduler preempts the processor more often, which greatly increases the number of process context switches and hence system overhead. If the slice is too long, say long enough that the longest-running process in the ready queue can finish within it, round robin degenerates into FCFS. The slice length is chosen based on the system's response-time requirement and the maximum number of processes allowed in the ready queue.

A process joins the ready queue in one of three situations: (1) its time slice is used up but the process has not finished, so it returns to the tail of the queue to wait for the next round; (2) its slice is not used up, but it blocks on an I/O request or on process synchronization/mutual exclusion, and re-enters the ready queue once the block is lifted; (3) it is a newly created process. If these cases are treated differently, with different priorities and time slices, the quality and efficiency of the system's service can be further improved. For example, the ready queue can be split into several queues according to the kind of arrival and the reason a process was blocked; each queue is ordered by FCFS, processes in different queues have different priorities, and processes in the same queue share the same priority. A process then enters a different ready queue depending on whether it has just used up a slice, been woken from a blocked or sleeping state, or been newly created.
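The round-robin mechanics described above fit in a few lines of Python; the quantum and the workload below are arbitrary example values:

# Round robin: take the process at the head of the ready queue, run it for at most one
# quantum, and if it is unfinished put it back at the tail.
from collections import deque

quantum = 2
ready = deque([('A', 5), ('B', 3), ('C', 1)])    # (name, remaining CPU time), made up

while ready:
    name, remaining = ready.popleft()
    run = min(quantum, remaining)
    remaining -= run
    print('%s runs %d, %d left' % (name, run, remaining))
    if remaining:                                # slice used up but task not done:
        ready.append((name, remaining))          # back to the end of the ready queue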
4) Multi-Level Feedback Queue scheduling
Each of the scheduling algorithms above has limitations. For instance, shortest-process-first favors short processes and neglects long ones, and if process lengths are not known in advance, neither shortest-process-first nor length-based preemptive algorithms can be used at all. The multi-level feedback queue algorithm does not need to know beforehand how long each process will run, yet it can serve all kinds of processes well, so it is a widely accepted process scheduling algorithm. In a system using multi-level feedback queue scheduling, the algorithm works as follows.

(1) Several ready queues are set up and given different priorities: the first queue has the highest priority, the second the next highest, and so on downwards. The time slice granted to processes also differs per queue: the higher a queue's priority, the smaller its time slice. For example, the time slice of the second queue is twice that of the first, ..., and the time slice of the (i+1)-th queue is twice that of the i-th queue.

(2) When a new process enters memory it is placed at the tail of the first queue and waits to be scheduled according to FCFS. When it runs, if it finishes within one time slice it can leave the system; if it has not finished when the slice ends, the scheduler moves it to the tail of the second queue, where it again waits under FCFS. If it still has not finished after one slice in the second queue, it is moved to the third queue, and so on. Once a long job (process) has dropped all the way down to the n-th queue, it runs in the n-th queue round-robin style, one slice at a time.

(3) The scheduler runs processes in the second queue only when the first queue is empty, and runs processes in the i-th queue only when queues 1 through (i-1) are all empty. If, while the processor is serving a process from the i-th queue, a new process enters any higher-priority queue (1 through i-1), the new process preempts the processor: the scheduler puts the running process back at the tail of the i-th queue and gives the processor to the new, higher-priority process.
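A simplified multi-level feedback queue sketch with three queues whose time slices double at each level; preemption of a running lower-level process by a newly arriving higher-priority process is omitted to keep the example short, and the workload is invented:

# Three ready queues; unfinished processes are demoted one level, where the slice is larger.
from collections import deque

slices = [1, 2, 4]                                         # queue i+1 gets twice the slice of queue i
queues = [deque([['A', 6], ['B', 2]]), deque(), deque()]   # new processes enter queue 0

while any(queues):
    level = next(i for i, q in enumerate(queues) if q)     # highest-priority non-empty queue
    name, remaining = queues[level].popleft()
    run = min(slices[level], remaining)
    remaining -= run
    print('%s ran %d at level %d' % (name, run, level))
    if remaining:                                          # not finished: move down a level
        queues[min(level + 1, len(queues) - 1)].append([name, remaining])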
4. Parallel and concurrent processes
Parallel execution means two things run at the same instant, like two runners in a race both moving forward continuously (resources are sufficient, e.g. three threads on a quad-core CPU). Concurrency means taking turns with a limited resource: for example, a single-core CPU can only run one task at a time, so A runs for a while, then hands the CPU to B, and when B is done it goes back to A, alternating to improve efficiency. The difference: parallelism is a micro-level notion, meaning that at a precise instant different programs are executing, which requires multiple processors; concurrency means executing "at the same time" over a period of time, as when a server handles multiple sessions at once.
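One rough way to feel the difference in Python is to run the same CPU-bound work in threads (which, under CPython's GIL, execute concurrently but not in parallel) and in processes (which can use several cores in parallel). The busy-work function and the timings are only illustrative:

# CPU-bound busy work run by 4 threads vs 4 processes; compare the wall-clock times.
import time
from multiprocessing import Process
from threading import Thread

def burn():
    sum(i * i for i in range(2_000_000))          # purely CPU-bound work

if __name__ == '__main__':
    for kind in (Thread, Process):
        start = time.time()
        workers = [kind(target=burn) for _ in range(4)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        print(kind.__name__, round(time.time() - start, 2), 'seconds')

On a multi-core machine the Process run is usually noticeably faster for this kind of workload, while for I/O-bound work the difference largely disappears.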
5. Synchronous/asynchronous and blocking/non-blocking
1) Process states
(1) Ready state
When a process has been allocated all the resources it needs except the CPU, it can execute as soon as it obtains the processor. The process is then said to be in the ready state.
(2) Running (execution) state
When a process has obtained the processor and its program is executing on it, the process is in the running state.
(3) Blocked state
When a running process cannot continue because it is waiting for some event, it gives up the processor and enters the blocked state. Many kinds of events can block a process, such as waiting for I/O to complete, a buffer request that cannot yet be satisfied, or waiting for a signal.
2) synchronous and asynchronous
Synchronous means that a task which depends on another task can only proceed after the task it depends on has completed; this forms a reliable task sequence, since either both succeed or both fail and the two stay in a consistent state. Asynchronous means the dependent task does not wait: it only asks to be notified when the other task completes and carries on with its own work immediately. Whether the other task really completes cannot be determined by the task that depends on it, so this is an unreliable task sequence.
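The contrast can be sketched with the standard concurrent.futures module: the synchronous caller blocks on result(), while the asynchronous caller registers a callback and carries on (the task function is just a placeholder):

# Same task, submitted synchronously (wait for the result) and asynchronously (callback).
import time
from concurrent.futures import ThreadPoolExecutor

def task(n):
    time.sleep(0.5)
    return n * n

with ThreadPoolExecutor() as pool:
    print('sync result:', pool.submit(task, 3).result())   # caller waits here

    fut = pool.submit(task, 4)
    fut.add_done_callback(lambda f: print('async result:', f.result()))
    print('caller keeps running while the task finishes')  # notification arrives later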
3) blocking and non-blocking
The concepts of blocking and non-blocking relate to the state a program (or thread) is in while it waits for a message notification (whether that notification is synchronous or asynchronous). In other words, blocking versus non-blocking describes how the program (thread) waits for the notification.
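A small sketch of the two waiting styles using the standard queue module (the producer thread and its delay are made up for the example):

# Waiting for a message: blocking (q.get() just sleeps until it arrives) versus
# non-blocking (poll with get_nowait() and do other work in between).
import queue, threading, time

q = queue.Queue()
threading.Thread(target=lambda: (time.sleep(0.5), q.put('done')), daemon=True).start()

while True:
    try:
        msg = q.get_nowait()          # returns immediately; raises queue.Empty if nothing yet
        break
    except queue.Empty:
        print('nothing yet, doing other work...')
        time.sleep(0.1)
print('got:', msg)
# The blocking form of the same wait would simply be: msg = q.get()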
4) synchronous/asynchronous and blocking/non-blocking
1. Synchronous blocking is the least efficient: in the bank-queue analogy, you stand in the queue, concentrate on queuing, and do nothing else.
2. Asynchronous blocking: if the person waiting at the bank is notified asynchronously when it is their turn (they received a numbered slip) but cannot leave the bank to do anything else in the meantime, then the person is clearly blocked on the waiting operation. An asynchronous operation can still block; it is not blocked while handling the message, but while waiting for the notification.
3. Synchronous non-blocking is actually inefficient too. Imagine having to keep looking up from a phone call to check whether the queue has reached you. If the call and the checking of the queue are viewed as two operations of the program, the program has to keep switching back and forth between the two behaviours, so efficiency is obviously low.
4. Asynchronous non-blocking is more efficient, because the phone call is your (waiting) business and the notification is the counter's (the message-trigger mechanism's) business; the program does not switch back and forth between two operations. For example, if the person suddenly wants to go outside for a cigarette, they tell the lobby manager to call them when their number comes up, and then they are no longer blocked in the waiting operation. That is the asynchronous, non-blocking style.
6. Process Creation and Termination
1) There are several ways to create a process, that is, to set a program running.
In a general-purpose system (one that runs many applications), processes need to be created and destroyed while the system is running. New processes are created in mainly four ways:
1. System initialization (on Linux, view processes with the ps command; on Windows, with Task Manager. Foreground processes interact with the user, background processes are unrelated to any particular user, and a daemon is a background process that wakes up only when needed, e.g. for email, web pages, news, or printing).
2. A running process starts a child process (for example, nginx starting multiple worker processes, os.fork, subprocess.Popen, and so on; see the sketch after this list).
3. A new process is created in response to an interactive user request (for example, double-clicking Storm Video).
4. Initialization of a batch job (applies only to the batch processing systems of mainframes).
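A brief sketch of the second creation path above: a running Python program starting its own children, one external program via subprocess.Popen and one Python function via multiprocessing.Process (os.fork behaves similarly but is Unix-only). The command being launched is just an example:

# Two common ways for a running process to create a child process.
import subprocess
import sys
from multiprocessing import Process

def work():
    print('child doing some work')

if __name__ == '__main__':
    external = subprocess.Popen([sys.executable, '--version'])   # launch an external program
    external.wait()

    child = Process(target=work)                                  # run a Python function in a child
    child.start()
    child.join()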
2) Process Termination
1. Exit normally
2. Exit with an error
3. Fatal error
4. Killed by another process
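However a process ends, the parent can see how it ended through the child's exit code; a small multiprocessing sketch (the child behaviours are contrived for the example, and negative exit codes indicate death by signal on Unix):

# exitcode: 0 = normal exit, non-zero = exit with an error, negative (on Unix) = killed by a signal.
import sys
import time
from multiprocessing import Process

def ok():
    sys.exit(0)        # normal exit

def bad():
    sys.exit(1)        # exit with an error

if __name__ == '__main__':
    victim = Process(target=time.sleep, args=(10,))
    procs = [Process(target=ok), Process(target=bad), victim]
    for p in procs:
        p.start()
    time.sleep(0.5)
    victim.terminate()              # "killed by another process"
    for p in procs:
        p.join()
        print(p.name, 'exitcode =', p.exitcode)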
Example:
import os
import time
from multiprocessing import Process   # the Process class from the multiprocessing module


def func(i):
    time.sleep(1)
    # getpid() is the child's own process id, getppid() is the parent's process id
    print('%d: child process %d, parent process: %d' % (i, os.getpid(), os.getppid()))


if __name__ == '__main__':
    p_lst = []
    for i in range(10):
        p = Process(target=func, args=(i,))   # create a process that will run func; i is its number
        print(p)
        p.start()                             # start the child process
        p_lst.append(p)                       # keep a reference so it can be joined later
    for l in p_lst:
        l.join()                              # wait for each child process to finish
    print('__main process__')