!: Control and description of the process
* Process definition: A process is a dynamic execution of a program over a data set. It is the operating system's basic unit of scheduling and resource allocation, and inter-process communication, synchronization, and context switching carry relatively large overhead
* Characteristics of a process (for understanding)
1. Dynamic: A program is just code stored on the hard disk, while a process is the dynamic execution of that program over a particular data set; dynamism is therefore the most basic characteristic of a process
2. Concurrency: Multiple processes in the operating system execute concurrently over a period of time
3. Independence: Processes are relatively independent; each process has its own memory space
4. Asynchrony: Processes run asynchronously with respect to one another; each advances independently at an unpredictable speed
* States of a process
1. Three-state model: ready, blocked, running
2. Five-state model: new, ready, blocked, running, terminated
3. Seven-state model: new, active blocked, suspended blocked (after suspension), active ready, suspended ready, running, terminated
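The five-state model above can be sketched as a small Java enum. The transition table is the standard textbook one (admit, dispatch, time-slice-out/wait/exit, wake-up); the method name `nextStates` is illustrative, not any real API:

```java
import java.util.EnumSet;
import java.util.Set;

// Sketch of the five-state process model as a Java enum.
public enum ProcState {
    NEW, READY, RUNNING, BLOCKED, TERMINATED;

    // Legal next states from each state (textbook transition table).
    Set<ProcState> nextStates() {
        switch (this) {
            case NEW:     return EnumSet.of(READY);            // admitted by the OS
            case READY:   return EnumSet.of(RUNNING);          // dispatched
            case RUNNING: return EnumSet.of(READY, BLOCKED, TERMINATED); // slice out / wait / exit
            case BLOCKED: return EnumSet.of(READY);            // awaited event occurs
            default:      return EnumSet.noneOf(ProcState.class); // TERMINATED: no transitions
        }
    }

    public static void main(String[] args) {
        System.out.println(ProcState.RUNNING.nextStates()); // [READY, BLOCKED, TERMINATED]
    }
}
```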
* Process suspension
A suspended process stops running and is swapped out of memory to the hard disk
Possible causes of suspension: memory pressure, where the OS must swap out some memory contents; operating system load regulation, since without suspending some processes the system might not function properly; a request from the parent process, e.g. for inter-process synchronization; an end user's request; etc.
* Data structures in Process management
1. PCB (Process Control Block): the data structure that describes a process's state information and its execution; it is the unique sign of a process's existence, so process management essentially becomes PCB management
2. A PCB mainly holds four kinds of information: process identifier information, processor state (the processor context), process scheduling information (process state, priority, etc.), and process control information (semaphores for inter-process synchronization and communication, etc.)
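As a rough illustration, the four kinds of PCB information might be laid out like this. This is a minimal sketch; every field name and type here is an assumption for teaching purposes, not taken from any real kernel:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of a Process Control Block (PCB).
public class Pcb {
    enum State { NEW, READY, RUNNING, BLOCKED, TERMINATED }

    final int pid;                   // 1. process identifier information
    long programCounter;             // 2. processor state (processor context)
    long[] registers = new long[16]; //    saved general-purpose registers
    State state = State.NEW;         // 3. process scheduling information
    int priority;                    //    scheduling priority
    List<Integer> semaphoreIds = new ArrayList<>(); // 4. process control information

    Pcb(int pid, int priority) {
        this.pid = pid;
        this.priority = priority;
    }

    public static void main(String[] args) {
        Pcb p = new Pcb(42, 5);
        System.out.println(p.pid + " " + p.state); // 42 NEW
    }
}
```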
* Process Control
1. In Unix, processes are organized internally in a tree structure; Windows has no concept of a process hierarchy, and all processes have equal status.
2. Events that can cause a process to be created: user logon, a process requesting the creation of a child process, etc.
3. Events that can cause a process to end: normal completion; the process dying from an exception raised during its own execution, or being killed by the operating system because of it; or the process being terminated by its parent process or the operating system even though nothing went wrong inside it
* Process Synchronization
1. Synchronization using semaphore mechanisms: inter-process semaphores work much like the Semaphore class used for inter-thread synchronization in Java concurrent programming; processes synchronize by operating on the number of permits held by the semaphore
2. Synchronization using monitors: each hardware or software resource in the system can be abstracted into a data structure that characterizes the resource with a small amount of state plus the operations performed on it, ignoring internal implementation details. A shared data structure thus represents a shared resource, and the specific operations on that data structure are defined as a group of procedures; processes may manipulate the shared resource only indirectly, through this group of procedures.
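Since the text compares process semaphores to Java's `Semaphore`, here is a minimal sketch of that inter-thread case: one permit makes the semaphore act as a mutex, so two threads incrementing a shared counter lose no updates. (Real inter-process semaphores are provided by the OS, e.g. POSIX semaphores; this only illustrates the permit mechanism.)

```java
import java.util.concurrent.Semaphore;

// One-permit semaphore used as a mutex around a shared counter.
public class SemaphoreDemo {
    // Two threads each add 1000 to the counter; returns the final count.
    static int run() {
        Semaphore sem = new Semaphore(1); // one permit => mutual exclusion
        int[] counter = {0};

        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                sem.acquireUninterruptibly(); // P operation: take the permit
                try {
                    counter[0]++;             // critical section
                } finally {
                    sem.release();            // V operation: return the permit
                }
            }
        };

        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        try {
            t1.join(); t2.join();             // wait for both workers
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counter[0];
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints 2000: no lost updates
    }
}
```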
* Process Communication
Types of process communication:
* Shared-memory systems: communicating processes share certain data structures or a shared storage area and communicate with each other through these shared spaces
* Pipe communication systems: a pipe is a shared file (also called a pipe file) connecting a writing process and a reading process; the writer writes data into the pipe file and the reader reads it back out, achieving inter-process communication
* Message-passing systems: process communication relies on no shared data structure or shared storage; instead, the data to be communicated is wrapped in formatted message units, and a set of inter-process communication commands (primitives) provided by the operating system is used to exchange them
* Client-server systems: remote communication via sockets, remote procedure calls (RPC), remote method invocation, etc.
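Java's piped streams mirror the pipe idea between two threads; real OS pipes connect processes (e.g. via `pipe()` on Unix), so this is only an analogy for the write-end/read-end mechanism:

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

// Pipe sketch: a writer thread puts bytes into the pipe,
// the caller reads them back out of the other end.
public class PipeDemo {
    static String run() {
        try {
            PipedOutputStream writeEnd = new PipedOutputStream();
            PipedInputStream readEnd = new PipedInputStream(writeEnd); // connect both ends

            Thread writer = new Thread(() -> {
                try {
                    writeEnd.write("hello".getBytes());
                    writeEnd.close(); // closing signals end-of-stream to the reader
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            });
            writer.start();

            byte[] data = readEnd.readAllBytes(); // blocks until the writer closes
            writer.join();
            return new String(data);
        } catch (IOException | InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(run()); // hello
    }
}
```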
!: Thread
* Thread definition: A thread is an execution path within a process and a smaller unit of execution than a process; it is the basic unit of CPU scheduling. Different threads of the same process share the same address space, so the overhead of communication, synchronization, and context switching between threads is much smaller than between processes; in addition, creating a thread costs much less than creating a process
* For thread creation, synchronization, and communication, the Java programming language provides a detailed and powerful set of basic class libraries and higher-level utility classes that make multithreaded concurrent programming convenient (the Fork/Join framework can also be used to exploit multi-core processors for parallel programming)
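The Fork/Join framework mentioned above can be sketched with a `RecursiveTask` that sums a range of integers in parallel; the threshold and the range in `main` are arbitrary illustration values:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Fork/Join sketch: sum the integers in [lo, hi] by splitting the range.
public class SumTask extends RecursiveTask<Long> {
    static final int THRESHOLD = 1_000; // below this, sum sequentially
    final long lo, hi;                  // inclusive range to sum

    SumTask(long lo, long hi) { this.lo = lo; this.hi = hi; }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {      // small enough: sum directly
            long s = 0;
            for (long i = lo; i <= hi; i++) s += i;
            return s;
        }
        long mid = (lo + hi) / 2;        // otherwise split the range in two
        SumTask left = new SumTask(lo, mid);
        SumTask right = new SumTask(mid + 1, hi);
        left.fork();                     // run the left half asynchronously
        return right.compute() + left.join(); // compute right, then join left
    }

    public static void main(String[] args) {
        long sum = ForkJoinPool.commonPool().invoke(new SumTask(1, 1_000_000));
        System.out.println(sum); // 500000500000
    }
}
```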
!: Processor scheduling and Deadlock
* Processor (CPU) scheduling level
1. High-level scheduling: also called long-term scheduling or job scheduling; loads a program from the hard disk into memory and initializes it, possibly also adding the initialized process to the ready queue
2. Intermediate scheduling: also called memory scheduling; because of virtual memory, many processes may be suspended to the hard disk for various reasons, and intermediate scheduling reschedules these suspended processes back into memory
3. Low-level scheduling: also called short-term scheduling or process scheduling; uses some scheduling algorithm to pick one process from the ready queue and assign processor resources to it
* Target of processor scheduling algorithm
Different operating systems target different scenarios and requirements, so the goals of their processor scheduling algorithms differ, but one goal is common to all: maximize processor utilization
* Task of process scheduling
1. Save the processor context of the previous process
2. Select the next process according to some scheduling algorithm
3. Assign the processor to the selected process and let it run
* How the process is dispatched
1. Non-preemptive: once a process is assigned the processor, its execution cannot be interrupted until the process gives up the processor voluntarily, whether by finishing normally or by relinquishing it for some other reason. The big problem: processor utilization is particularly low, because other hardware resources always run slower than the processor, so the processor often sits waiting
2. Preemptive: processes are allowed to preempt processor resources, though not chaotically; preemption follows certain principles: the priority principle (a higher-priority process may preempt a lower-priority one), the short-process-first principle (a newly arrived short process may preempt a long-running process), and the time-slice principle (ready processes obtain the processor by time-slice rotation, and when the current process's time slice runs out it yields the processor, which is then dispatched to another process)
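The priority principle can be sketched with a priority-ordered ready queue: the dispatcher always picks the highest-priority ready process, so a newly arrived high-priority process is the next to run, i.e. it preempts. The process names and priority numbers here are made up:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Sketch of priority-based dispatch over a ready queue.
public class PriorityDispatch {
    record Proc(String name, int priority) {}

    // Returns which process would be dispatched before and after
    // a high-priority arrival, as "first,second".
    static String demo() {
        PriorityQueue<Proc> ready = new PriorityQueue<>(
                Comparator.comparingInt((Proc p) -> p.priority()).reversed());
        ready.add(new Proc("editor", 3));
        ready.add(new Proc("backup", 1));
        String first = ready.peek().name();  // "editor" would run next
        ready.add(new Proc("alarm", 9));     // higher-priority process arrives
        String second = ready.peek().name(); // now "alarm" runs first (preemption)
        return first + "," + second;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // editor,alarm
    }
}
```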
* Process scheduling algorithm
0. Process scheduling algorithms mainly target the preemptive scheduling mode; most implement the preemption principles above, or are refinements built on those basic algorithms
1. Round-robin (RR): a very fair allocation of the processor; each time a process gets the processor, it runs for at most one time slice. In the RR algorithm, the system places all ready processes into a ready queue in FCFS (first come, first served) order and then allocates time slices to the processes in turn
2. Priority scheduling: generally used in systems with high real-time requirements. When a process with a higher priority arrives, it can preempt the processor from a lower-priority process. A process typically has two priorities: a static priority (fixed when the process is created) and a dynamic priority (adjusted while the process runs, according to its progress, to achieve better scheduling performance)
3. Multi-queue scheduling: mainly used in multiprocessor systems; a separate ready queue is maintained for each processor, and each ready queue can use a different scheduling algorithm to suit the system's requirements
4. Multilevel feedback queue: currently considered a very good process scheduling algorithm. Basic idea: maintain multiple ready queues, each with a different priority; the first queue has the highest priority, the second the next highest, and so on down. Different queues are also given different time slices: the higher a queue's priority, the shorter its time slice. Priority scheduling is used between queues, and FCFS is used within each queue. A newly arrived process first enters the first queue; if it does not finish within that queue's time slice, it is moved to the tail of the second queue to wait, and so on down; if it reaches the last queue and still has not finished, it is scheduled there with the RR (round-robin) algorithm
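A minimal round-robin simulation (abstract time units; the burst lengths and quantum in `main` are made-up values) shows the time-slice mechanism: a process that exhausts its slice goes back to the tail of the ready queue:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy round-robin (RR) scheduling simulation.
public class RoundRobin {
    // Returns the order in which processes finish, given their CPU
    // bursts and a time quantum.
    static List<Integer> schedule(int[] bursts, int quantum) {
        Deque<int[]> ready = new ArrayDeque<>(); // each entry: {pid, remaining}
        for (int pid = 0; pid < bursts.length; pid++)
            ready.addLast(new int[]{pid, bursts[pid]});

        List<Integer> finishOrder = new ArrayList<>();
        while (!ready.isEmpty()) {
            int[] p = ready.pollFirst();          // dispatch head of the queue
            p[1] -= Math.min(quantum, p[1]);      // run for one time slice
            if (p[1] == 0) finishOrder.add(p[0]); // finished: record it
            else ready.addLast(p);                // slice used up: back of queue
        }
        return finishOrder;
    }

    public static void main(String[] args) {
        // P0 needs 3 units, P1 needs 5, P2 needs 2; quantum = 2
        System.out.println(schedule(new int[]{3, 5, 2}, 2)); // [2, 0, 1]
    }
}
```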
* Handling of deadlocks
1. Handling Methods
1.1 Preventing deadlocks: keep deadlocks from ever arising, roughly by constraints imposed at the code level
1.2 Avoiding deadlocks: unlike prevention, avoidance uses some algorithm at run time to keep deadlocks from occurring during process execution; even if no single process contains a deadlock by itself, the way processes share resources can still lead the system into deadlock. A common example is the banker's algorithm
1.3 Detecting deadlocks: take no measures to avoid deadlocks, but allow them to occur; a detection mechanism discovers the deadlock in time, and measures are then taken to free the processes from it
1.4 Resolving deadlocks: once it is clear that a deadlock has occurred, the related processes must be freed from it; common measures include killing some of the processes and reclaiming their resources so the deadlock is broken.
2. Avoid deadlocks: During process operation, the OS takes some action (algorithm) to avoid deadlocks
* Banker's algorithm: because the algorithm was originally designed for the issuing of bank loans and was later borrowed by operating systems, it is called the banker's algorithm
1. First, there are four data structures:
The Max matrix (the maximum number of units of each resource that each process may need); for example, process P0 may need at most 7 units of resource class A, 5 of class B, and 3 of class C
The Allocation matrix (the number of units of each resource that each process currently holds); for example, process P0 currently holds 0 units of class A, 1 of class B, and 0 of class C
The Need matrix (the number of units of each resource that each process still needs), equal to Max minus Allocation
The Available vector (the number of currently free units of each resource); there is only one such vector for the whole system
2. Description of the banker's algorithm:
STEP 1 (trial allocation): when a process issues a resource request, first check whether the request exceeds the process's remaining Need; if it does, report an error, since the process should not ask for more than it declared. Otherwise, check whether the free resources in Available are sufficient; if not, the process must wait. If they are sufficient, tentatively allocate the resources and update the data structures accordingly
STEP 2 (safety check): after the trial allocation, perform a safety check: with the resources now remaining, can all processes still run to completion? That is, does there exist an order of granting resources (a safe sequence) in which every process can finish normally? Concretely: after the trial allocation of STEP 1, look at how many units remain in the Available vector, find a process whose Need can be satisfied from them, let it run to completion, reclaim its resources, and update Available; then continue in the same way. If eventually every process can obtain its resources and finish, the state is safe and the requested resources are actually granted; otherwise, granting the current request would leave the system in an unsafe state, so the allocation is not made.
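The safety check of STEP 2 can be sketched directly from the four data structures. The matrices in `main` are the classic textbook example (5 processes, 3 resource classes A, B, C), not data from this article:

```java
import java.util.Arrays;

// Banker's algorithm safety check: is there a safe sequence in which
// every process can obtain its remaining Need and finish?
public class Banker {
    static boolean isSafe(int[] available, int[][] max, int[][] allocation) {
        int n = max.length, m = available.length;
        int[][] need = new int[n][m];            // Need = Max - Allocation
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++)
                need[i][j] = max[i][j] - allocation[i][j];

        int[] work = Arrays.copyOf(available, m); // currently free resources
        boolean[] finished = new boolean[n];
        boolean progress = true;
        while (progress) {
            progress = false;
            for (int i = 0; i < n; i++) {
                if (finished[i]) continue;
                boolean canRun = true;            // can Pi's Need be met from work?
                for (int j = 0; j < m; j++)
                    if (need[i][j] > work[j]) { canRun = false; break; }
                if (canRun) {                     // let Pi run to completion,
                    for (int j = 0; j < m; j++)   // then reclaim its resources
                        work[j] += allocation[i][j];
                    finished[i] = true;
                    progress = true;
                }
            }
        }
        for (boolean f : finished) if (!f) return false; // someone is stuck
        return true;                                      // safe sequence found
    }

    public static void main(String[] args) {
        int[] available = {3, 3, 2};
        int[][] max   = {{7,5,3},{3,2,2},{9,0,2},{2,2,2},{4,3,3}};
        int[][] alloc = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
        System.out.println(isSafe(available, max, alloc)); // true
    }
}
```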
Operating system basics (part 2): process control and processor scheduling