The theory of the Python process

Source: Internet
Author: User

**Process theory: operating system background** As the name implies, a process is a program in execution: the operating system's abstraction of a running program. The concept originates in the operating system; it is one of the oldest and most important abstractions the OS provides, and everything else in the OS revolves around it. To really understand processes, you must first understand the operating system.

PS: (pseudo-)concurrency can be supported even when only one CPU is available (as was the case on early computers). Multiprogramming ("multi-channel" technology) turns a single CPU into several virtual CPUs through time multiplexing and space multiplexing, with hardware-supported isolation. Without the process abstraction, modern computing would not exist.

**Functions of the operating system**
1. Hide the complex hardware interfaces and provide a clean abstract interface.
2. Manage and schedule processes, so that many processes compete for the hardware in an orderly way.

**Multiprogramming**
1. Background: it is how concurrency is achieved on a single core. PS: hosts today are usually multi-core, and each core still uses multiprogramming. With 4 CPUs, a program running on CPU 1 that blocks on I/O waits until the I/O ends and is then rescheduled, possibly onto any of the 4 CPUs, as decided by the operating system's scheduling algorithm.
2. Space multiplexing: several programs reside in memory at the same time.
3. Time multiplexing: the programs share the CPU's time slices. PS: a process is switched out when it hits I/O, and also when it has held the CPU too long. The key is saving the process's state before switching, so that the next time it is switched back in, it can continue from exactly where it was cut off.
**What is a process** A process is the running activity of a program over some data set. It is the basic unit of resource allocation and scheduling in the system, and the foundation of the operating system's structure. In early, process-oriented computer architectures the process was the basic execution entity of the program; in contemporary, thread-oriented architectures it is the container for threads. A program is a description of instructions, data, and their organization; a process is that program's living entity.

Narrow definition: a process is an instance of a running program (an instance of a computer program that is being executed). Broad definition: a process is a running activity, over some data set, of a program with a certain independent function. It is the basic unit of dynamic execution in the operating system; in traditional operating systems the process is both the basic unit of allocation and the basic unit of execution.

**The concept of a process** First, a process is an entity. Each process has its own address space, which in general includes a text region, a data region, and a stack region. The text region stores the code executed by the processor; the data region stores variables and the memory dynamically allocated during execution; the stack region stores the instructions and local variables of active procedure calls. Second, a process is an "executing program". A program is an inanimate entity; only when the processor gives it life (the operating system executes it) does it become an active entity, and that active entity is what we call a process. The process is the most basic and important concept in the operating system. It was introduced with the appearance of multiprogramming systems, in order to describe the dynamic activity inside such systems; all multiprogrammed operating systems are built on the process.
**Why operating systems introduce the concept** From a theoretical point of view, the process is an abstraction of the running program; from an implementation point of view, it is a data structure. The aim is to describe clearly the inherent laws of a dynamic system, and to manage and schedule effectively the programs loaded into main memory to run.

**Characteristics of a process**
- Dynamic: the essence of a process is one execution of a program in a multiprogramming system; processes are created dynamically and die dynamically.
- Concurrency: any process can execute concurrently with other processes.
- Independence: a process is a basic unit able to run on its own, and the independent unit by which the system allocates resources and schedules.
- Asynchrony: because of mutual constraints between processes, execution is discontinuous; each process advances at its own independent, unpredictable speed.
- Structure: a process consists of three parts: the program, the data, and the process control block.

Several different processes can contain the same program: the same program over different data sets forms different processes and can yield different results, but the program itself cannot change during execution.

**The difference between a process and a program** A program is an ordered collection of instructions and data; by itself it has no notion of running, and is a static concept. A process is one execution of that program on a processor, a dynamic concept. A program can be kept as software for a long time, while a process has a finite lifetime: the program is permanent, the process temporary.

Note: if the same program is executed twice, there will be two processes in the operating system. That is why we can run the same piece of software twice at the same time, doing different things, without the two runs interfering.
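The note above, one program but two processes, is easy to see in Python. A minimal sketch using the standard `multiprocessing` module (the function name `work` is just for illustration):

```python
import os
from multiprocessing import Process

def work(tag):
    # Each process runs in its own address space with its own PID.
    print(f"{tag}: pid={os.getpid()}, parent={os.getppid()}")

if __name__ == "__main__":
    # The same program (function) started twice yields two distinct processes.
    p1 = Process(target=work, args=("first",))
    p2 = Process(target=work, args=("second",))
    p1.start(); p2.start()
    p1.join(); p2.join()
    print(p1.pid != p2.pid)  # → True: two processes, two PIDs
```

Both children print the same parent PID but different PIDs of their own, which is exactly the "one program, two processes" situation described above.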
**Process scheduling** To run multiple processes alternately, the operating system must schedule them; processes are not executed the moment they arrive, but according to certain rules, hence process scheduling algorithms.

**First-come, first-served** The first-come, first-served (FCFS) scheduling algorithm is one of the simplest; it can be used for both job scheduling and process scheduling. FCFS favors long jobs (processes) and penalizes short ones; it suits CPU-bound jobs but is unfavorable to I/O-bound jobs (processes).

**Shortest job first** The shortest-job (process)-first algorithm (SJ/PF) gives priority to short jobs or short processes; it too can be used for both job scheduling and process scheduling. Its drawbacks: it is unfavorable to long jobs; urgent jobs (processes) are not handled promptly; and the length of a job is only an estimate.

**Round robin** The basic idea of the round-robin (RR) method is to make the service each process receives proportional to its time waiting in the ready queue. CPU time is divided into fixed-size time slices of, for example, tens to hundreds of milliseconds. If a dispatched process uses up its time slice without completing its task, it releases the CPU and joins the end of the ready queue to await the next dispatch; the scheduler then dispatches the process at the head of the ready queue. Clearly, round robin can only be used to schedule preemptible resources: resources that can be taken away at any time and reassigned to another process. The CPU is one such resource.
Printers and similar resources, by contrast, are not preemptible. Because job scheduling allocates all system hardware resources other than the CPU, including non-preemptible ones, job scheduling does not use round robin. In round robin, the choice of time-slice length is very important. First, it directly affects system overhead and response time. If the slice is too short, the dispatcher preempts the processor more often, which significantly increases the number of context switches and thus the system overhead. Conversely, if the slice is too long, say long enough that even the longest-running process in the ready queue can finish within one slice, round robin degenerates into first-come, first-served. The slice length is therefore determined by the system's response-time requirement and the maximum number of processes allowed in the ready queue.

Under round robin, a process joins the ready queue in one of three situations. First, its time slice ran out but it has not finished, so it returns to the end of the ready queue to await its next turn. Second, its slice was not exhausted, but it blocked on an I/O request or on mutual exclusion or synchronization with another process; when the block is lifted, it returns to the ready queue. Third, it is a newly created process entering the ready queue. If these cases are treated differently, with different priorities and time slices, the quality and efficiency of system service can, intuitively, be further improved.
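The round-robin mechanism described above can be sketched as a small simulation. This is an illustrative model only (job names and CPU needs are made up), not how a real kernel implements it:

```python
from collections import deque

def round_robin(jobs, quantum):
    """Simulate round-robin scheduling.

    jobs: dict mapping a job name to its remaining CPU need (in slice units).
    Returns the order in which jobs finish.
    """
    ready = deque(jobs.items())
    finished = []
    while ready:
        name, remaining = ready.popleft()   # dispatch head of ready queue
        if remaining <= quantum:            # job completes within this slice
            finished.append(name)
        else:                               # slice exhausted: back of the queue
            ready.append((name, remaining - quantum))
    return finished

print(round_robin({"a": 3, "b": 5, "c": 1}, quantum=2))  # → ['c', 'a', 'b']
```

Note how the short job `c` finishes first even though it arrived last among the running set: each unfinished process is preempted at the end of its slice and requeued, just as the text describes.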
For example, we can divide the ready queue into several queues according to the type of process arriving at the ready queue and, for blocked processes, the reason for blocking. Each queue is ordered by the FCFS principle; processes in different queues have different priorities, while those in the same queue share the same priority. Thus when a process finishes its time slice, or is woken from sleep, or is newly created, it enters a different ready queue.

**Multilevel feedback queue** The scheduling algorithms described so far all have limitations. Shortest-job-first, for example, takes care of short processes while neglecting long ones, and if process lengths are not known in advance, neither SJF nor length-based preemptive scheduling can be used at all. The multilevel feedback queue scheduling algorithm does not need to know process execution times in advance, yet can satisfy the needs of every kind of process, so it is now regarded as a good process scheduling algorithm. In a system using it, scheduling proceeds as follows.

(1) Set up multiple ready queues and assign each a different priority. The first queue has the highest priority, the second the next highest, and the priority of each remaining queue decreases in turn. The algorithm also gives each queue a time-slice size: the higher the priority, the smaller the slice. Typically each queue's slice is twice as long as the previous queue's: the second queue's slice is double the first's, and so on.

(2) When a new process enters memory, it is first placed at the end of the first queue and waits to be dispatched by the FCFS principle.
When this process's turn comes, if it can finish within its slice it leaves the system. If it has not finished by the end of the slice, the scheduler moves it to the end of the second queue, where it again waits for dispatch by FCFS; if it is still unfinished after running one slice in the second queue, it is moved to the third queue, and so on. Once a long job (process) has descended from the first queue to the nth queue, it runs in the nth queue under round robin.

(3) The scheduler runs processes in the second queue only when the first queue is empty, and runs processes in queue i only when queues 1 through i-1 are all empty. If the processor is serving a process in queue i and a new process enters a higher-priority queue (one of queues 1 through i-1), the new process preempts the processor: the scheduler puts the running process back at the end of queue i and gives the processor to the new, higher-priority process.

**Concurrency and parallelism of processes**
Parallelism: both execute at the same moment, like two people each running forward on their own track (resources are sufficient, e.g. three threads on a quad-core CPU).
Concurrency: resources are limited and two users alternate on the same resource, like a single-lane road (a single-core CPU) that only one person can walk at a time: A walks a stretch and lets B go, B finishes and hands it back to A, alternating in order to improve efficiency.
**The difference**: parallelism is a micro-level view: at one precise instant, different programs really are executing, which requires multiple processors. Concurrency is a macro-level view: over a period of time the programs appear to execute simultaneously, as when one server handles multiple sessions.

**Synchronous, asynchronous, blocking, non-blocking: introducing the states**

(Process state diagram: https://images2017.cnblogs.com/blog/827651/201801/827651-20180110201327535-1120359184.png)

Before learning the other concepts, we first need to understand the states a process can be in. While a program runs, the operating system's scheduling algorithm moves it among several states: ready, running, and blocked.

1) Ready: the process has been allocated every necessary resource except the CPU, and could execute immediately if given a processor.
2) Running: the process has obtained a processor and its program is executing on it.
3) Blocked: the running process gives up the processor to wait for some event, and enters the blocked state. Many events can block a process, for example waiting for I/O to complete, an unsatisfied request for buffer space, or waiting for a signal.

**Synchronous and asynchronous** Synchronous means that completing one task depends on another: only when the depended-on task completes can the dependent task be considered done. This is a reliable task sequence: either both succeed or both fail, and the states of the two tasks stay consistent. Asynchronous means not waiting for the depended-on task to finish: the dependent task simply tells it what work to do and then proceeds immediately, counting itself complete as soon as its own work is done. Whether the depended-on task ultimately completes, the task that depends on it cannot be certain, so this is an unreliable task sequence.
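Python does not expose the ready/running/blocked distinction directly — those states live inside the OS scheduler — but you can at least watch a process move between "not yet started", "alive", and "terminated". A small sketch (the `worker` function and its sleep are purely illustrative):

```python
import time
from multiprocessing import Process

def worker():
    time.sleep(0.3)   # the child spends its life blocked on a timer event

if __name__ == "__main__":
    p = Process(target=worker)
    print(p.is_alive())   # → False: created, not yet known to the scheduler
    p.start()
    print(p.is_alive())   # → True: alive (ready, running, or blocked inside the OS)
    p.join()
    print(p.is_alive())   # → False: terminated
```

While `is_alive()` returns True, the OS may have the child in any of the three states described above; Python only sees whether it has exited.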
An example: I go to the bank to handle some business, and there are two ways to do it. The first: wait in line. The second: take a ticket with my number on it, and wait until the counter clerk calls my number to tell me it is my turn. The first (queuing) is synchronously waiting for the message notification: I keep watching the bank's handling of my business myself. The second (waiting to be called) is asynchronously waiting for the message notification. In asynchronous message processing, the party waiting for the notification (here, the customer) usually registers a callback mechanism, which the trigger (here, the counter clerk) uses, through some mechanism (here, the number printed on the ticket, called out loud), to find the person waiting for the event.

**Blocking and non-blocking** These two concepts describe the state of the program (thread) while it waits for a message notification, regardless of whether that wait is synchronous or asynchronous. In other words, blocking versus non-blocking is mainly about the program's own state while it waits. Continuing the example above: whether queuing or holding a ticket, if during the wait the waiter can do nothing except wait for the notification, the mechanism is blocking; in a program, this shows up as the program stuck at a function call, unable to continue. Conversely, some people like to make phone calls or send text messages while the bank works; that is non-blocking, because they (the waiters) are not stuck on the message, but do their own things while waiting.

Note: the synchronous non-blocking form is actually inefficient. Imagine having to keep looking up to check the queue while you are on the phone.
If the phone call and checking the queue position are two operations in one program, the program must switch back and forth between these two different behaviors, and the efficiency is predictably low. The asynchronous non-blocking form has no such problem, because the phone call is your (the waiter's) business and notifying you is the counter's (the message trigger's), so the program does not switch back and forth between two different operations.

**Synchronous/asynchronous vs. blocking/non-blocking**
1. Synchronous blocking is the least efficient form. In the example above, you stand in line and do nothing else.
2. Asynchronous blocking: if the customer waits asynchronously for the notification (i.e. takes a ticket) but cannot leave the bank to do other things in the meantime, then clearly this person is blocked on the waiting operation. So an asynchronous operation can still block; it just blocks while waiting for the message notification rather than while processing the message.
3. Synchronous non-blocking is actually inefficient. Imagine you are on the phone but must keep looking up to see whether the line has reached you. If the call and the queue-checking are two operations in one program, the program must keep switching between them, and efficiency is predictably low.
4. Asynchronous non-blocking is more efficient, because the call is yours (the waiter's) and the notification is the counter's (the message trigger's), so the program does not switch back and forth between two different operations. For example, the customer suddenly craves a cigarette and needs to step outside to smoke, so he tells the lobby manager: when my number comes up, please let me know. He is then not blocked on the waiting operation at all; naturally, this is the asynchronous + non-blocking way.
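A loose Python analogue of blocking versus non-blocking waiting, using the standard `subprocess` module: `wait()` parks the parent until the child exits, while a `poll()` loop lets the parent do other work between checks. The sleeping child command here is just a stand-in for real work:

```python
import subprocess
import sys
import time

cmd = [sys.executable, "-c", "import time; time.sleep(0.5)"]

# Blocking wait: the parent can do nothing else until the child exits.
child = subprocess.Popen(cmd)
child.wait()                        # blocks right here
print("blocking wait done, rc =", child.returncode)

# Non-blocking poll: the parent checks in periodically, free to work between checks.
child = subprocess.Popen(cmd)
while child.poll() is None:         # None means "still running"
    time.sleep(0.1)                 # stand-in for doing other work
print("polling loop done, rc =", child.returncode)
```

The polling loop corresponds to looking up from your phone call now and then; truly asynchronous notification would instead be a callback, e.g. handling SIGCHLD or using an event loop.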
Many people confuse synchronous with blocking, because synchronous operations often manifest as blocking; and many confuse asynchronous with non-blocking, because asynchronous operations generally do not block at the actual I/O operation.

**Process creation and termination**

**Creating a process** All hardware needs an operating system to manage it, and wherever there is an operating system there is the concept of a process, so there must be a way to create processes. Some operating systems are designed for a single application, such as a microwave oven controller: once the microwave is switched on, all its processes already exist. General-purpose systems, which run many applications, need the ability to create and destroy processes while the system runs. New processes arise in four main ways:
1. System initialization. (View processes with the ps command on Linux, or Task Manager on Windows. Foreground processes are responsible for interacting with the user; processes that run in the background are independent of the user and wake only when needed; these are called daemons, e.g. for e-mail, web pages, news, printing.)
2. A running process starts a child process (e.g. Nginx starting multiple worker processes, os.fork, subprocess.Popen).
3. An interactive user request creates a new process (e.g. the user double-clicks the Storm video player).
4. Initialization of a batch job (only on mainframe batch systems).
Whichever way it happens, a new process is always created by an existing process executing a system call that creates processes.
#Create a process
1. On UNIX the system call is fork. fork creates a copy nearly identical to the parent process: the same memory image, the same environment strings, and the same open files. (In a shell interpreter process, executing a command creates a child process this way.)
2. On Windows the system call is CreateProcess, which handles both creating the process and loading the correct program into it.

About child processes, UNIX and Windows agree and differ:
1. Alike: after creation, parent and child have their own distinct address spaces (multiprogramming requires memory isolation between processes at the physical level), and a modification by either process within its own address space does not affect the other.
2. Different: on UNIX, the child's initial address space is a copy of the parent's, which means child and parent can share a read-only memory region; on Windows, the parent's and child's address spaces differ from the very beginning.

**Termination of a process**
1. Normal exit (voluntary: e.g. the user clicks the close button of an interactive window, or the program finishes and issues an exit system call — exit on Linux, ExitProcess on Windows).
2. Exit on error (voluntary, e.g. running `python a.py` when a.py does not exist).
3. Fatal error (involuntary: executing an illegal instruction, referencing non-existent memory, dividing by zero, etc.; such exceptions can be caught with try...except).
4. Killed by another process (involuntary, e.g. kill -9).
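The fork behavior above can be seen directly from Python on a Unix system (this sketch will not run on Windows, where `os.fork` does not exist). Note how one call returns twice, and how the child's chosen exit status (7 here is arbitrary) reaches the parent:

```python
import os

# Unix-only sketch: fork() returns twice — 0 in the child and the
# child's PID in the parent — and both continue from the same point.
pid = os.fork()
if pid == 0:
    # Child: starts as a copy of the parent's memory image
    # (copy-on-write on Linux).
    print(f"child:  pid={os.getpid()}, parent={os.getppid()}")
    os._exit(7)                      # normal, voluntary exit with status 7
else:
    # Parent: block until the child terminates, then collect its status.
    _, status = os.waitpid(pid, 0)
    print(f"parent: child {pid} exited with code {os.WEXITSTATUS(status)}")
```

The `os.waitpid` call is also what prevents the finished child from lingering as a zombie; on Windows one would use `subprocess` or `multiprocessing` instead, which wrap CreateProcess.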
