Operating System Process

Process

A process is one execution of a program that can run concurrently with other programs. It is the basic unit the system uses to allocate resources and schedule work. In other words, a process is a program in execution, a running activity of a program; it is dynamic and concurrent.

From a static perspective, a process entity consists of three parts: a program block, a process control block (PCB), and a data block.


Program block: the code for the task the process is to carry out.


Data block: the data and workspace required for program execution.

Process control block (PCB): includes the process description, control information, resource management information, and the saved CPU context, and reflects the dynamic nature of the process. Typical fields are shown below:

  • Process ID
  • Status
  • Priority
  • Control information
  • Queue
  • Access permissions
  • Saved CPU context
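
To make the PCB concrete, here is a minimal C sketch of what such a structure might look like; the field names and types are illustrative and do not come from any real kernel.

```c
/* A minimal sketch of a process control block (field names are illustrative). */
typedef struct pcb {
    int          pid;           /* process ID                                */
    int          state;         /* status: ready, running, or blocked        */
    int          priority;      /* scheduling priority                       */
    unsigned int flags;         /* control information                       */
    struct pcb  *next;          /* link used when the PCB sits in a queue    */
    int          owner_uid;     /* access permissions (owner)                */
    void        *cpu_context;   /* saved CPU context ("field protection")    */
} pcb;
```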

Process states

Ready state: the process has been allocated all necessary resources except the CPU. A newly created process enters the ready state. Multiple processes can be ready at the same time.

Running state: the process occupies the CPU and its instructions are being executed. On a single-CPU system, only one process can be running at any given time.

Blocked state: the process gives up the CPU and waits because some event it needs (such as I/O completion) has not yet occurred. Multiple processes can be blocked at the same time.

Three-state diagram of the process

The difference between the ready state and the running state is whether the process currently occupies the CPU. In general, a running process is deprived of the CPU in two situations (see the sketch after this list):

  • A higher-priority process preempts it.
  • Its time slice expires under round-robin scheduling (several processes take turns executing).
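
As a sketch of the three-state diagram, the following C snippet encodes the states and the legal transitions between them; the enum values and function name are illustrative only, not a real scheduler API.

```c
/* A minimal sketch of the three-state model and its legal transitions. */
#include <stdio.h>

typedef enum { READY, RUNNING, BLOCKED } proc_state;
typedef enum { DISPATCH, TIMEOUT_OR_PREEMPT, WAIT_EVENT, EVENT_DONE } event;

proc_state next_state(proc_state s, event e) {
    if (s == READY   && e == DISPATCH)           return RUNNING;  /* scheduler picks the process          */
    if (s == RUNNING && e == TIMEOUT_OR_PREEMPT) return READY;    /* time slice ends or it is preempted   */
    if (s == RUNNING && e == WAIT_EVENT)         return BLOCKED;  /* process waits for an event (e.g. I/O) */
    if (s == BLOCKED && e == EVENT_DONE)         return READY;    /* event occurs; must wait for the CPU again */
    return s;                                                     /* any other combination: no change     */
}

int main(void) {
    proc_state s = READY;
    s = next_state(s, DISPATCH);        /* READY   -> RUNNING */
    s = next_state(s, WAIT_EVENT);      /* RUNNING -> BLOCKED */
    s = next_state(s, EVENT_DONE);      /* BLOCKED -> READY   */
    printf("final state: %d\n", s);     /* prints 0 (READY)   */
    return 0;
}
```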

Process five-state diagram

The difference between active ready and static (suspended) ready is where the process resides: an active-ready process stays in memory, while a static-ready process has been swapped out to external storage.

Deadlock

If a process is waiting for an event that can never occur, it is deadlocked. When multiple processes are deadlocked in this way, the system enters a deadlock state.

Causes


Mutual exclusion condition: a resource can be used by only one process at a time.


Hold-and-wait condition: a process already holds some resources and, while blocked waiting for other resources, does not release the ones it holds.


No-preemption condition: some resources cannot be forcibly taken back. Once a process has obtained such a resource, the system cannot withdraw it; the resource is released only when the process has finished using it.


Circular-wait condition: multiple processes form a circular chain in which each process holds a resource that the next process in the chain is requesting. A minimal sketch of such a circular wait between two threads is shown below.
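
To make the circular-wait condition concrete, here is a minimal C/pthreads sketch in which two threads acquire two locks in opposite orders; with the sleeps in place it will almost always deadlock. The names r1, r2, p1, p2 are illustrative.

```c
/* Sketch: two threads acquiring two locks in opposite order -> circular wait. */
#include <pthread.h>
#include <unistd.h>

pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;

void *p1(void *arg) {
    (void)arg;
    pthread_mutex_lock(&r1);     /* holds r1 ...                               */
    sleep(1);
    pthread_mutex_lock(&r2);     /* ... and waits for r2, which p2 holds       */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}

void *p2(void *arg) {
    (void)arg;
    pthread_mutex_lock(&r2);     /* holds r2 ...                               */
    sleep(1);
    pthread_mutex_lock(&r1);     /* ... and waits for r1, which p1 holds: deadlock */
    pthread_mutex_unlock(&r1);
    pthread_mutex_unlock(&r2);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_join(t1, NULL);      /* with the sleeps above, this join likely never returns */
    pthread_join(t2, NULL);
    return 0;
}
```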

Resolving deadlocks


Deadlock prevention: require a process to request all the resources it needs at once, which breaks the hold-and-wait condition; or order resources into layers so that a process may request resources of the next layer only after obtaining those of the previous layer, which breaks the circular-wait condition. Prevention usually reduces system efficiency.


Deadlock avoidance: each time a resource is requested, the system checks whether granting the request would keep the system in a safe state. The typical algorithm is the Banker's algorithm, although it increases system overhead.


Deadlock detection: the first two are preventive measures taken in advance, while detection determines whether the system is already in a deadlock state. If it is, a deadlock-removal policy is executed.


Deadlock removal: used in combination with deadlock detection. The usual method is preemption: forcibly take resources from some processes and allocate them to others.


Banker's algorithm: the name is apt. Before granting a request, the system checks whether a safe sequence exists, that is, an order in which every process can still obtain the resources it needs and run to completion. Conceptually: count the available resources; find a process whose remaining need can be satisfied; let it run to completion and reclaim its resources; find the next runnable process; and so on until all processes finish and all resources are released. If no such sequence exists, the state is unsafe. A minimal sketch of this safety check is shown below.
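
The following C sketch implements only the safety check described above; the matrices (available, max_claim, allocation) are made-up illustrative data, not figures from the article.

```c
/* A minimal sketch of the Banker's safety check with illustrative data. */
#include <stdio.h>
#include <stdbool.h>

#define NPROC 3                      /* number of processes      */
#define NRES  2                      /* number of resource types */

int available[NRES]         = {1, 1};
int max_claim[NPROC][NRES]  = {{2, 1}, {1, 2}, {1, 1}};
int allocation[NPROC][NRES] = {{1, 0}, {0, 1}, {0, 0}};

bool is_safe(void) {
    int  work[NRES];
    bool finished[NPROC] = {false};
    for (int j = 0; j < NRES; j++) work[j] = available[j];

    int done = 0;
    while (done < NPROC) {
        bool progress = false;
        for (int i = 0; i < NPROC; i++) {
            if (finished[i]) continue;
            bool can_run = true;             /* remaining need = max_claim - allocation */
            for (int j = 0; j < NRES; j++)
                if (max_claim[i][j] - allocation[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                   /* let process i run to completion ...     */
                for (int j = 0; j < NRES; j++)
                    work[j] += allocation[i][j];   /* ... and reclaim its resources     */
                finished[i] = true;
                progress = true;
                done++;
            }
        }
        if (!progress) return false;         /* no process can finish: the state is unsafe */
    }
    return true;                             /* a safe sequence exists */
}

int main(void) {
    printf("the current state is %s\n", is_safe() ? "safe" : "unsafe");
    return 0;
}
```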

Precedence graph

A precedence graph is a directed acyclic graph (DAG) used to describe the ordering relationships among processes. Each node in the graph can represent a program segment, a process, or even a single statement, and a directed edge between two nodes expresses the partial-order (precedence) relationship between them. If Pi must complete before Pj can start, this is written Pi → Pj; Pi is the direct predecessor of Pj, and Pj is the direct successor of Pi. A node with no predecessor is called an initial node, and a node with no successor is called a terminal node.

For example, consider using a precedence graph to analyze the evaluation order of S = A + B * 3 / C + D * 9. The expression can be broken into the following statements:

  • S1: Z1 = B * 3
  • S2: Z2 = D * 9
  • S3: Z3 = Z1 / C
  • S4: Z4 = A + Z3
  • S5: S = Z4 + Z2

From these statements we can read off the precedence relations: S1 → S3, S3 → S4, S4 → S5, and S2 → S5. S1 and S2 have no predecessors, so they can execute concurrently, and S5 is the terminal node. A small example of one execution order that respects this graph is shown below.
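
As a small illustration (with made-up input values), the following C program evaluates the statements in an order consistent with the precedence graph.

```c
/* Sketch: S1 and S2 have no predecessors, S3 needs S1, S4 needs S3,
   and S5 needs both S4 and S2. Input values are illustrative. */
#include <stdio.h>

int main(void) {
    double A = 1, B = 2, C = 4, D = 3;
    double Z1, Z2, Z3, Z4, S;

    Z1 = B * 3;      /* S1: no predecessor                                  */
    Z2 = D * 9;      /* S2: no predecessor, could run in parallel with S1   */
    Z3 = Z1 / C;     /* S3: needs Z1 from S1                                */
    Z4 = A + Z3;     /* S4: needs Z3 from S3                                */
    S  = Z4 + Z2;    /* S5: needs Z4 from S4 and Z2 from S2                 */

    printf("S = %g\n", S);   /* 1 + 2*3/4 + 3*9 = 29.5 */
    return 0;
}
```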

P/V operations and semaphores


Mutual Exclusion and Synchronization

Mutual exclusion: a resource can be used by only one process at a time; other processes must wait.

Synchronization: multiple concurrent processes communicate, cooperate, and wait for one another because of mutual constraints, so that they execute in a certain order and at compatible speeds.


Semaphores


Semaphores are an effective mechanism for both synchronization and mutual exclusion between processes. A semaphore is an integer maintained by the system: when its value is greater than or equal to zero, it represents the number of resource units available to concurrent processes; when it is less than zero, its absolute value is the number of processes waiting for the resource. When a semaphore is created, its meaning and initial value must be specified.

Only two operations may be applied to a semaphore: P and V. Both are atomic operations that cannot be subdivided; that is, a P or V operation will not be interrupted part-way through.

P/V operations

P and V are the two operations defined on semaphores: P means request (pass) and V means release (from the Dutch passeren, to pass, and vrijgeven, to release).

As usual, before getting to the operations themselves I want to explain why P/V operations and semaphores exist. My understanding: as mentioned above, they exist to manage the relationships between processes. What do processes care about most? Resources and the CPU. How can multiple processes use resources reasonably and avoid deadlock? That is the point of P/V operations and semaphores. Let's continue and see how they adjust the relationships between processes, starting with a sketch of the two operations and then a concrete scenario.
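
The following C sketch shows the classic textbook form of the two operations; the semaphore structure and the block/wakeup steps left as comments are illustrative, and both operations are assumed to execute atomically.

```c
/* A minimal sketch of the textbook P and V operations (names illustrative).
   Both P and V must run atomically; the blocking/wakeup steps are left as
   comments because they belong to the kernel's scheduler. */
typedef struct process process;      /* opaque: a process descriptor           */

typedef struct {
    int      value;                  /* >= 0: free resources; < 0: |value| waiters */
    process *queue;                  /* processes blocked on this semaphore     */
} semaphore;

void P(semaphore *s) {               /* request a resource */
    s->value--;
    if (s->value < 0) {
        /* add the caller to s->queue and block it */
    }
}

void V(semaphore *s) {               /* release a resource */
    s->value++;
    if (s->value <= 0) {
        /* remove one process from s->queue and wake it up */
    }
}
```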

Now assume that three processes need to use the same kind of resource, of which there are only two units. To allocate these resources correctly, we keep them in a buffer.

Obviously, two resources are not enough for three processes. Assume the priorities are process A > process B > process C. Execution begins and the two resources are taken by A and B. How does C know that the buffer has no resources left? And when a resource later becomes free, how does C know one is available? This is exactly why semaphores exist. Suppose the semaphore sem has an initial value of 2. After A takes a resource, sem - 1 = 1; after B takes one, sem - 1 = 0, so no resources remain; when C tries to take one as well, sem - 1 = -1. At this point sem no longer indicates the number of free resources; its absolute value is the number of processes waiting for the resource. When process A finishes and releases the resource it holds, sem + 1 = 0 and process C can proceed.

In the allocation process above, taking a resource is the P operation and releasing it is the V operation. A runnable sketch of this scenario is shown below.
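
Here is a minimal sketch of the scenario using POSIX semaphores and pthreads: three threads (A, B, C) compete for two resource units, so sem is initialized to 2 and the third thread blocks in sem_wait until another thread posts. The thread names and timings are illustrative.

```c
/* Three threads share two resource units counted by a POSIX semaphore. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

sem_t sem;                           /* counts free resources, initialized to 2 */

void *worker(void *arg) {
    const char *name = arg;
    sem_wait(&sem);                  /* the P operation: take one unit, block if none left */
    printf("%s acquired a resource\n", name);
    sleep(1);                        /* use the resource */
    printf("%s released a resource\n", name);
    sem_post(&sem);                  /* the V operation: give the unit back */
    return NULL;
}

int main(void) {
    pthread_t a, b, c;
    sem_init(&sem, 0, 2);            /* two resource units available initially */
    pthread_create(&a, NULL, worker, (void *)"A");
    pthread_create(&b, NULL, worker, (void *)"B");
    pthread_create(&c, NULL, worker, (void *)"C");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    pthread_join(c, NULL);
    sem_destroy(&sem);
    return 0;
}
```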

Monitors

A monitor defines a data structure together with the group of operations that concurrent processes are allowed to perform on that data structure. These operations can synchronize processes and change the data inside the monitor. A monitor is like a wall: it encloses the shared variables and the procedures that operate on them, and any process that wants to access the critical resource must go through the monitor. The monitor admits only one process at a time, which achieves mutual exclusion between processes.

My understanding: a monitor marks out a region that groups the related procedures, variables, and data structures together. When a process needs those resources, it calls one of the monitor's procedures. Only one process may be inside the monitor at a time and the others must wait, which gives mutual exclusion and makes it easier than raw semaphores to guarantee the correctness of concurrent execution. A minimal monitor-style sketch is shown below.
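
A monitor is not a built-in construct in C, so the following is only a sketch of the idea using a pthread mutex and condition variable: the shared data and the only two operations on it are packaged together, and the mutex ensures that at most one thread is inside at a time. All names are illustrative.

```c
/* A minimal monitor-style sketch: shared data plus its allowed operations,
   serialized by a single lock. */
#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;            /* only one thread inside the monitor at a time */
    pthread_cond_t  not_empty;       /* wait/signal for "an item is available"       */
    int             count;           /* the shared data hidden behind the monitor    */
} monitor;

void monitor_init(monitor *m) {
    pthread_mutex_init(&m->lock, NULL);
    pthread_cond_init(&m->not_empty, NULL);
    m->count = 0;
}

void monitor_put(monitor *m) {       /* one exported operation: deposit an item */
    pthread_mutex_lock(&m->lock);
    m->count++;
    pthread_cond_signal(&m->not_empty);
    pthread_mutex_unlock(&m->lock);
}

void monitor_get(monitor *m) {       /* the other exported operation: remove an item */
    pthread_mutex_lock(&m->lock);
    while (m->count == 0)            /* wait inside the monitor until an item exists */
        pthread_cond_wait(&m->not_empty, &m->lock);
    m->count--;
    pthread_mutex_unlock(&m->lock);
}

int main(void) {
    monitor m;
    monitor_init(&m);
    monitor_put(&m);                 /* a producer deposits one item                  */
    monitor_get(&m);                 /* a consumer removes it (would block if empty)  */
    return 0;
}
```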

Knowledge is like the blind men feeling the elephant: each attempt grasps only one feature, but after many attempts the whole elephant can be pieced together.
