My operating system review--process (part 2)

Source: Internet
Author: User
Tags: semaphore, switching

The previous post reviewed the process itself: process states, the PCB, process control, and so on (My operating system review--process (part 1)). This post is the second half on processes: it reviews process synchronization, process communication, and the important concept of threads.

One, Process synchronization

What is synchronization? Synchronization means that one task waits for another to finish before it continues, rather than both executing at the same time. As we know, processes are inherently asynchronous, and this asynchrony can cause disorder in the operating system. Process synchronization, which manages the order of execution among processes, exists to resolve that disorder.

(1) Direct and indirect constraints.

There are two kinds of constraining relationships between processes: direct constraints and indirect constraints. A direct constraint arises from cooperation between processes: one process needs another process's cooperation to proceed, otherwise it blocks. For example, if the output buffer is empty, the output process blocks, because it depends on the input process to keep supplying data. An indirect constraint arises from sharing a resource that only one process may occupy at a time: while one process is using it, the others cannot.

(2) Rules a synchronization mechanism should follow.

These are the rules every synchronization mechanism must follow:

1) Free entry. When the resource is idle, a requesting process is allowed in.

2) Busy waiting. While the resource is in use, other processes must wait.

3) Bounded waiting. A process must be granted access to the resource within a finite time; it cannot be kept waiting indefinitely.

4) Yield while waiting. A running process that cannot obtain the resource should give up the processor rather than hold it.

Two, PV operations

PV operations are famous; most readers will have heard of them. They are a classic pair of operations that implement the most basic process synchronization mechanism. Why "P" and "V"? They were named by the famous computer scientist Dijkstra, who was Dutch: in Dutch, "pass" is passeren and "release" is vrijgeven. The P operation, also written wait(S), essentially acquires a resource; the V operation, also written signal(S), essentially releases one. PV operations are atomic (each runs to completion or not at all) and they always come in pairs. Let's look at how PV operations work and how they synchronize processes. PV operations are inseparable from semaphores, so first: what is a semaphore?

(1) What is a semaphore?

A semaphore is a data structure. There are several kinds: integer semaphores, record-type semaphores, AND-type semaphores, and semaphore sets. Each kind corresponds to a different data structure and to different PV operations. A semaphore together with its PV operations constitutes a semaphore mechanism, which is used to control process synchronization.

(2) Integer semaphores.

As the name implies, the data structure of an integer semaphore is a plain integer, usually written S. The PV operations on it are as follows:

(The code below is Pascal-style pseudocode.)

wait(S):   while S <= 0 do no-op;
           S := S - 1;

signal(S): S := S + 1;

Here S represents the number of available resources. The wait(S) operation busy-waits while S is less than or equal to 0; once a resource is available, it exits the loop and consumes one resource by decrementing S.

Each signal(S) operation releases one resource by incrementing S.
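As a concrete illustration, here is a minimal sketch of P/V using Python's built-in counting semaphore (`threading.Semaphore` stands in for the integer semaphore; the names `worker` and `order` are mine):

```python
import threading

# A sketch of P/V (wait/signal) with a counting semaphore.
# S starts at 2: two units of the resource are available.
S = threading.Semaphore(2)
order = []

def worker(name):
    S.acquire()            # P / wait(S): take one unit, block if none left
    order.append(name)
    S.release()            # V / signal(S): give the unit back

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(order))  # 4: every worker eventually obtained a resource
```

At most two workers hold the resource at any instant; the other two block inside `acquire()` until a `release()` occurs.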

(3) Record-type semaphores.

Compared with the integer semaphore, this semaphore adds one field: a pointer to the list of all processes waiting on it.

The PV operations differ from the integer semaphore's. In wait(), S is decremented, and if S is then negative the process blocks itself and gives up the processor instead of spinning. In signal(), S is incremented, and if S is still less than or equal to zero there are processes on the waiting list, so one of them is woken. The advantage is that a process requesting an unavailable resource no longer busy-waits indefinitely.
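A record-type semaphore can be sketched in Python as a value plus an explicit list of blocked waiters. This is an illustrative model built on `threading.Lock` and `Event`, not how any particular OS implements it:

```python
import threading
from collections import deque

# Sketch of a record-type semaphore: an integer value plus a
# waiting list, protected by a mutex.
class RecordSemaphore:
    def __init__(self, value):
        self.value = value
        self.waiters = deque()         # the "list of waiting processes"
        self.mutex = threading.Lock()

    def wait(self):                    # P operation
        self.mutex.acquire()
        self.value -= 1
        if self.value < 0:             # no resource: block, give up the CPU
            ev = threading.Event()
            self.waiters.append(ev)
            self.mutex.release()
            ev.wait()                  # sleep until signal() wakes us
        else:
            self.mutex.release()

    def signal(self):                  # V operation
        with self.mutex:
            self.value += 1
            if self.value <= 0:        # someone is waiting: wake one process
                self.waiters.popleft().set()

sem = RecordSemaphore(1)
sem.wait()                             # uncontended P: value drops to 0
sem.signal()                           # V: value returns to 1
print(sem.value)  # 1
```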

(4) AND-type semaphores.

The AND-type semaphore handles multiple critical resources. All resources a process needs are allocated to it in one step, and all are freed when it finishes running; in effect the resources the process needs are bundled together. The practice is to add an AND condition to the wait operation: the process may proceed only if every resource it needs is free, and otherwise it gets none of them.

(5) Semaphore sets.

A semaphore set handles, in one request, multiple units of each of n resource classes. Each resource in the request carries a test value (the minimum that must be available) and a demand value (how many units to take). Where the AND-type semaphore takes one unit of each of several resources, the semaphore set generalizes this to several units per resource; the AND-type semaphore is the special case in which both values are 1.
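The semaphore-set idea (often written Swait/Ssignal) can be sketched as follows. The resource names, test values, and demand values are hypothetical, and a `threading.Condition` stands in for the kernel's atomic test-and-allocate:

```python
import threading

# Sketch of a semaphore set: each request names a resource, a test
# value t (minimum that must be available) and a demand d (units to
# take). All-or-nothing: either every test passes and every resource
# is decremented, or nothing is taken. The AND-type semaphore is the
# special case t = d = 1 for every resource.
resources = {"printer": 2, "tape": 3}
cond = threading.Condition()

def swait(requests):
    # requests: list of (name, t, d) triples
    with cond:
        while any(resources[n] < t for n, t, d in requests):
            cond.wait()                # some test fails: block
        for n, t, d in requests:
            resources[n] -= d          # all tests pass: take everything

def ssignal(releases):
    with cond:
        for n, d in releases:
            resources[n] += d          # return the units
        cond.notify_all()              # re-test any blocked requests

swait([("printer", 1, 1), ("tape", 2, 2)])
print(resources)  # {'printer': 1, 'tape': 1}
```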

(6) Monitors

Because every operation a process performs on a shared resource must be bracketed by a pair of PV operations, which is easy to get wrong, the operations on a resource are packaged together with the resource itself into a module. Such a module is called a monitor. It is a resource-management module in the operating system that processes call into. You can see that the monitor embodies object-oriented thinking: data together with the only procedures allowed to operate on it.
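A minimal sketch of the monitor idea in Python, assuming a counter as the shared resource (the class and method names are mine):

```python
import threading

# Sketch of a monitor: the shared resource and the only operations
# allowed on it live in one module, and every entry takes the
# monitor's lock, so callers never write P/V pairs themselves.
class CounterMonitor:
    def __init__(self):
        self._lock = threading.Lock()  # implicit mutual exclusion
        self._count = 0                # the protected shared resource

    def increment(self):
        with self._lock:               # enter the monitor...
            self._count += 1           # ...and leave it on return

    def read(self):
        with self._lock:
            return self._count

m = CounterMonitor()
for _ in range(3):
    m.increment()
print(m.read())  # 3
```

The caller only sees `increment()` and `read()`; the locking discipline lives entirely inside the module, which is exactly the monitor's point.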

(7) Condition variables

The synchronization mechanisms above hide a serious problem: if the process holding a resource has not yet released it, other processes can only wait endlessly, and a process blocked inside a monitor would keep everyone else out. Condition variables add further conditions beyond the basic "idle, let in; finished, release" logic: for example, enter only when the resource is idle and condition X holds, and release when the work completes or condition X has been handled. While a process waits on such a condition it gives up the monitor so that other processes can enter. These additional conditions are the condition variables.
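A sketch of a condition variable using Python's `threading.Condition`: the consumer waits on "buffer not empty" and releases the lock while blocked, so the producer can get in and make the condition true (the buffer and the item are illustrative):

```python
import threading

# Sketch of a condition variable inside a monitor-like region.
buffer, cond, results = [], threading.Condition(), []

def consumer():
    with cond:
        while not buffer:      # condition: "something to consume"
            cond.wait()        # releases the lock while blocked
        results.append(buffer.pop(0))

def producer():
    with cond:
        buffer.append("item")
        cond.notify()          # condition is now true: wake a waiter

c = threading.Thread(target=consumer)
c.start()
threading.Thread(target=producer).start()
c.join()
print(results)  # ['item']
```

The `while not buffer` loop (rather than `if`) re-checks the condition after waking, which is the standard discipline for condition variables.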

Three, Process communication

Process synchronization through semaphores, as described above, is essentially a low-level communication mechanism: no substantial information is exchanged between processes. What if two processes need to exchange large amounts of information frequently? That calls for a high-level communication mechanism, of which there are three main kinds:

(1) Shared-memory systems.

Shared memory, as the name implies, means two processes communicate through a shared region of memory: one is responsible for writing, the other for reading. In fact, the semaphores above are themselves a primitive shared-memory system: a data structure shared between processes and manipulated through PV operations.
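A sketch of shared-memory communication using Python's `multiprocessing` module, assuming a POSIX system so the `fork` start method is available (the shared integer and the value 42 are arbitrary):

```python
import multiprocessing as mp

# Sketch of shared memory: parent and child share one integer in a
# common memory region, guarded by a lock for mutual exclusion.
ctx = mp.get_context("fork")    # POSIX-only start method
shared = ctx.Value("i", 0)      # a shared int, initially 0
lock = ctx.Lock()

def writer(v, lk):
    with lk:                    # mutual exclusion on the shared region
        v.value = 42            # the writing process stores the data

p = ctx.Process(target=writer, args=(shared, lock))
p.start()
p.join()
with lock:
    print(shared.value)  # 42: the reader sees what the child wrote
```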

(2) Message-passing systems.

Processes communicate through messages in an agreed format, usually a header containing addressing information and a body containing the content. Such a format is called a protocol; common network protocols take the same shape. Message passing divides into direct and indirect communication. In direct communication the two processes know of each other's existence, and the message header carries the addresses of both sender and receiver. In indirect communication messages are not handed over directly: an intermediate entity (a mailbox) buffers and forwards them, which prevents a process from blocking when the sending and receiving rates differ.
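A toy sketch of indirect (mailbox) communication, using a `queue.Queue` as the mailbox and threads as stand-ins for processes; the header/body layout is an illustrative protocol, not a standard one:

```python
import queue

# The mailbox buffers messages so sender and receiver need not run
# at the same rate.
mailbox = queue.Queue()

def send(src, dst, body):
    # Header (addresses) plus body: the agreed "protocol".
    mailbox.put({"header": {"from": src, "to": dst}, "body": body})

def receive():
    return mailbox.get()   # blocks until a message is available

send("producer", "consumer", "hello")
msg = receive()
print(msg["header"]["to"], msg["body"])  # consumer hello
```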

(3) Pipe communication.

A pipe is a shared file, a pipe file, that connects a reading process and a writing process; essentially it is a fixed-size buffer. The two processes access the pipe mutually exclusively: the reader blocks until the writer has written data, and the writer blocks until the reader has drained all the data before it can write again. Although this read-all/write-all discipline can block the processes, neither side has to maintain read/write pointers, so it is very efficient.
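A minimal sketch of pipe communication with an anonymous OS pipe; for brevity one program plays both roles here, whereas normally the write end belongs to one process and the read end to another:

```python
import os

# A pipe is a fixed-size kernel buffer with a read end and a write end.
r, w = os.pipe()
os.write(w, b"data through the pipe")  # the writing side writes...
os.close(w)                            # ...and closes its end
data = os.read(r, 1024)                # the reading side drains the data
os.close(r)
print(data.decode())  # data through the pipe
```

With the write end closed, `os.read` returns the buffered bytes; in a real two-process setup the reader would block until the writer produced data.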

Four, Threads

What is a thread? Threads were introduced to give the operating system better concurrency; a thread is like a process that owns only a minimum of resources, a lightweight process. In a multithreaded operating system the process is the basic unit of resource ownership: it contains multiple threads and supplies resources to them, but it is no longer the unit of execution itself. When a process "runs", what actually runs is one of its threads.

(1) The nature of thread execution.

Understanding threads requires a close look at concurrency. On a single processor, tasks are in essence always executed serially; concurrency is the interleaving of time slices. Threads simply subdivide the slices that were previously given to whole processes. Suppose a single-processor system has a 20 ms scheduling round and 50 runnable processes: each process then averages 20/50 = 0.4 ms per round. If each process has 10 threads, the slice is divided again, giving each thread about 0.04 ms per round. For most tasks this is still enough. That is really all threads are.
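The arithmetic above, spelled out:

```python
# Time-slice arithmetic from the paragraph above.
time_slice_ms = 20                 # one full scheduling round
processes = 50                     # runnable processes sharing it
threads_per_process = 10

per_process_ms = time_slice_ms / processes          # 0.4 ms per process
per_thread_ms = per_process_ms / threads_per_process  # 0.04 ms per thread
print(per_process_ms, per_thread_ms)
```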

(2) Thread types and implementations.

  1) User-level threads (ULT).

This kind of thread is very simple to implement: to the processor and the kernel, a switch still looks like the same process running, and the kernel does not know the threads exist. If each process is a car, then each thread is a driver, and a thread switch simply changes drivers.

So how are multiple threads multiplexed inside one process? The process contains a collection of functions dedicated to managing and controlling the execution of its threads, called the runtime system. When the process executes, it executes its runtime system, which switches among the threads it manages. Each thread's run-time information, its thread control block (TCB), is stored on the thread's own stack; on each switch the runtime fetches the TCB from the thread's stack, loads the CPU registers, and the thread runs. Note that a user-level thread cannot request system resources directly: when it needs them, the runtime system issues the call on its behalf.
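The runtime-system idea can be sketched with cooperative "threads" built from Python generators; the scheduler below is a toy round-robin runtime, not a real ULT package:

```python
from collections import deque

# Sketch of a user-level "runtime system": the kernel sees only one
# process, while this in-process scheduler switches among cooperative
# threads (generators that yield at voluntary switch points).
trace = []

def thread(name, steps):
    for i in range(steps):
        trace.append(f"{name}{i}")   # the thread does one unit of work
        yield                        # switch point: back to the runtime

def runtime_system(threads):
    ready = deque(threads)           # ready queue of thread "TCBs"
    while ready:
        t = ready.popleft()
        try:
            next(t)                  # run the thread to its next yield
            ready.append(t)          # still runnable: requeue it
        except StopIteration:
            pass                     # thread finished

runtime_system([thread("A", 2), thread("B", 2)])
print(trace)  # ['A0', 'B0', 'A1', 'B1']
```

Each `yield` plays the role of the thread handing control back so the runtime can save its state and dispatch the next thread.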

  2) Kernel-supported threads (KST).

Creation, destruction, and switching of these threads do not depend on the process: they are scheduled directly by the kernel, just as processes are. Because a thread owns almost no resources, these switches are fast. Kernel-supported threads usually run at much higher scheduling priority than user-level threads.

So how does the kernel support threads? When a process is created, the system allocates it a task data area containing several thread control blocks (TCBs). These TCBs are kept in the kernel's own space rather than in the process's user memory, and the kernel switches threads by manipulating TCBs, much as the processor switches processes via PCBs.

  3) The combined approach: lightweight processes (LWP).

The kernel creates multiple KSTs while user space creates ULTs, and the two are connected through lightweight processes (LWPs). An LWP is essentially backed by a KST; its distinguishing feature is that a ULT can attach to it, and once attached the ULT is in effect running on a KST and gains all of a KST's capabilities. LWPs are therefore usually managed as a pool. The purpose of the LWP, then, is to let user-level threads reach system resources.
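The LWP pool is loosely analogous to a thread pool that maps many user-level tasks onto a few kernel-scheduled workers; a sketch with Python's `concurrent.futures` (the task and pool size are arbitrary):

```python
from concurrent.futures import ThreadPoolExecutor

# Loosely analogous to an LWP pool: many user-level tasks are
# multiplexed onto a small, fixed pool of kernel-scheduled worker
# threads, through which the tasks ultimately reach kernel services.
with ThreadPoolExecutor(max_workers=2) as pool:        # 2 "LWPs"
    results = list(pool.map(lambda n: n * n, range(5)))  # 5 "ULT" tasks
print(results)  # [0, 1, 4, 9, 16]
```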

  

Reference: "Computer operating System (Tong)"
