Operating System Core Principles - 4. Threading Principle (Part 1): Thread Basics and Thread Synchronization


As we all know, a process is a running program, a concept invented to implement multiprogramming on the CPU. But a process can only do one thing at a time. If we want it to do two or more things at once, such as watching two movies simultaneously, we naturally think of the legendary ability to be in several places at once, just as the Monkey King could conjure up multiple copies of himself. We cannot do this in reality, but a process can, and the way it does so is the thread. Threads are the "clones" we invented to let a process do more than one thing at a time.

I. Thread Basics

1.1 The Thread Concept

A thread is a "clone" of a process: an execution context, or execution sequence, within a process. Naturally, a process can have multiple execution sequences at the same time. It is like a stage on which several actors can perform at once; the actors and the stage together make up a play. By analogy, each actor is a thread and the stage is the address space, so all the threads running in the same address space constitute a process.

In the threading model, a process has at least one thread and may have many, as shown in the figure below:

Splitting a process into threads also makes efficient use of multiprocessor and multicore computers. For example, when we use Microsoft Word, we are actually running multiple threads: one is responsible for displaying, one for receiving input, one for saving periodically, and so on. These threads cooperate, so input and display appear to happen simultaneously, without our typing a few characters and then waiting for them to show up on the screen; meanwhile, Word quietly auto-saves on a regular schedule.

1.2 Thread Management

Thread management, like process management, requires a certain foundation: maintaining the critical information about each thread. Hence, there is a thread control block.

Because threads share a process's address space, many resources are shared (and this shared part does not need to appear in the thread control block). But because threads are distinct execution sequences, some resources cannot be shared. In general, the division between shared and per-thread resources within a single process is shown in the following table:

1.3 Threading Model

Modern operating systems combine the user-level and kernel-level threading models: a user-level runtime handles switching between non-blocking threads within a process, while the kernel handles switching for threads that block; that is, thread management is implemented at both the kernel level and the user level. The number of kernel-level threads is small, while the number of user-level threads is large; each kernel-level thread can serve one or more user-level threads. In other words, user-level threads are multiplexed onto kernel-level threads.

1.4 Multi-threaded relationships

The purpose of the threading model is to achieve concurrency within a process, since a process typically contains multiple threads. Multiple threads share one stage: sometimes they interact, sometimes they perform alone. However, sharing a stage can cause unnecessary trouble, which boils down to two fundamental issues:

(1) How do threads communicate with each other?

(2) How are threads synchronized?

These two issues also exist at the process level and were introduced in the earlier section on process principles. Viewed from a higher level, different processes also share one huge space: the entire computer.

II. Thread Synchronization

2.1 The Reason for and Purpose of Synchronization

(1) Reason

The relationship between threads is a partnership, and a partnership needs agreed-upon rules, otherwise the cooperation goes wrong. For example, two threads of one process, whose operations are not synchronized, can cause thread 1 to produce an incorrect result:

This problem arises for two reasons: first, a global variable is shared between the threads; second, the relative execution order of the threads is indeterminate. As for the first point, if no resources were shared at all, it would defeat the original purpose of the process and thread design: sharing resources to increase resource utilization. As for the second point, the relative execution order of threads needs to be fixed where it matters.
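To make these two points concrete (the article's own samples are .NET; this is a minimal Java sketch, with all names invented for illustration): two threads incrementing a shared global variable without synchronization may interleave their read-modify-write steps and lose updates, while serializing the increment with a lock makes the result deterministic.

```java
public class RaceDemo {
    static int unsafeCounter = 0;            // shared global, unprotected
    static int safeCounter = 0;              // shared global, protected by a lock
    static final Object lockObj = new Object();

    // Two threads each increment both counters `iters` times; only the
    // locked counter is guaranteed to end at exactly 2 * iters.
    static int runSafe(int iters) throws InterruptedException {
        unsafeCounter = 0;
        safeCounter = 0;
        Runnable work = () -> {
            for (int i = 0; i < iters; i++) {
                unsafeCounter++;             // read-modify-write: not atomic, updates may be lost
                synchronized (lockObj) {
                    safeCounter++;           // critical section: one thread at a time
                }
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return safeCounter;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("safe = " + runSafe(100_000));  // always 200000
        System.out.println("unsafe = " + unsafeCounter);   // may be less: updates lost to the race
    }
}
```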

(2) Purpose

The purpose of thread synchronization is to ensure that no matter how thread execution is interleaved, the result of the computation is correct. In other words, it guarantees that the outcome is deterministic under multithreaded execution, while placing as few restrictions on thread execution as possible.

2.2 Ways of Synchronizing

(1) Some necessary concepts

① When two or more threads compete to execute the same piece of code or access the same resource, this is called a race.

② The shared code segments or resources that can give rise to races are called critical sections.

③ The property that at most one thread is inside a critical section at any moment is called mutual exclusion. (Only one person at a time uses the shared resource; everyone else is excluded.)

(2) Lock

① About locks

When two teachers want to use the same classroom to hold lessons, how do they coordinate? Whoever enters the classroom first locks the door, and the other teacher cannot come in and use it. That is, the lock on the classroom door guarantees mutual exclusion; in an operating system, a synchronization mechanism that guarantees mutual exclusion like this is called a lock.

For example, in .NET you can use the lock statement directly to implement thread synchronization:

    private object locker = new object();

    public void Work()
    {
        lock (locker)
        {
            // Do some work that requires thread synchronization
        }
    }

A lock has two basic operations: lock (latch) and unlock. Locking means closing the lock so that no one else can enter; unlocking means opening the lock once you have finished, so that others may go in. Unlocking takes only one step, opening the lock, but locking takes two: waiting for the lock to become free, and then acquiring it and closing it. Clearly, the two steps of the locking operation must form one atomic operation and cannot be separated.
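As a hedged sketch of why the two steps of locking must be fused (in Java, since the article's own samples are .NET): a spinlock built on an atomic compare-and-set performs "wait until open" and "acquire" in one indivisible step; as two separate steps, both threads could observe the lock as free and then both "acquire" it.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);
    static int counter = 0;

    // Latch: "wait for the lock to open" and "acquire it" are fused into
    // one atomic compareAndSet, so two threads can never both succeed.
    public void lock() {
        while (!locked.compareAndSet(false, true)) {
            Thread.yield();    // busy-wait until the lock is free
        }
    }

    // Unlock is a single step: open the lock.
    public void unlock() {
        locked.set(false);
    }

    // Demo: two threads each do 100,000 locked increments.
    static int demo() {
        SpinLock lk = new SpinLock();
        counter = 0;
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                lk.lock();
                counter++;     // critical section
                lk.unlock();
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        try { a.join(); b.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return counter;
    }

    public static void main(String[] args) {
        System.out.println(demo());   // 200000
    }
}
```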

② Sleep and wake up

When someone else holds the lock, you do not need to keep waiting for it to open; you can go to sleep, and the other party wakes you up after releasing the lock. This is the classic producer-consumer model. It is not hard to simulate producers and consumers on a computer: one thread represents the producer, another represents the consumer, and a memory buffer represents the store. The producer puts finished goods into the buffer at one end, and the consumer takes items out at the other end, as shown in the figure below:

For example, in .NET you can use Monitor.Wait() and Monitor.Pulse() to perform the sleep and wake-up operations:

First, the consumer thread:

    public void ConsumerDo()
    {
        while (true)
        {
            lock (sync)
            {
                // Step 1: Do some consuming work ...
                // Step 2: Wake up the producer thread
                Monitor.Pulse(sync);
                // Step 3: Release the lock and block the consumer thread
                Monitor.Wait(sync);
            }
        }
    }

Next, the producer thread:

    public void ProducerDo()
    {
        while (true)
        {
            lock (sync)
            {
                // Step 1: Do some producing work ...
                // Step 2: Wake up the consumer thread
                Monitor.Pulse(sync);
                // Step 3: Release the lock and block the producer thread
                Monitor.Wait(sync);
            }
        }
    }

However, with this approach the producer and the consumer may both end up asleep, unable to wake each other and make progress, and the system deadlocks. How do we solve this? We can accumulate the signals in some way instead of throwing them away. That is, if the producer sends a wake-up signal before the consumer gets the CPU and executes its sleep statement, that signal is retained, so the consumer receives it immediately and wakes up. The operating system primitive that can accumulate signals is the semaphore.

(3) Semaphore

A semaphore is a counter whose value is the number of signals currently accumulated. It supports two operations: the addition operation up and the subtraction operation down. When a down operation is performed and no signal is available, the requesting thread is suspended; when an up operation is performed, a thread waiting on that semaphore is woken up. The down and up operations, historically known as the P and V operations, are the two basic operations of the most important synchronization primitive in operating systems.
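The down/up semantics can be sketched in a few lines of Java on top of wait/notify (an illustrative toy, not how real kernels implement semaphores; all names are invented): the counter accumulates signals, down blocks while the counter is zero, and up increments the counter and wakes one waiter.

```java
public class CountingSemaphore {
    private int value;   // the number of signals currently accumulated

    public CountingSemaphore(int initial) {
        value = initial;
    }

    // down (P): if no signal has accumulated, suspend the caller;
    // otherwise consume one signal.
    public synchronized void down() throws InterruptedException {
        while (value == 0) {
            wait();          // sleep until an up() arrives
        }
        value--;
    }

    // up (V): accumulate one signal and wake one waiter, if any.
    // The signal is not lost even if nobody is waiting yet.
    public synchronized void up() {
        value++;
        notify();
    }

    public synchronized int available() {
        return value;
    }
}
```

Because up() increments the counter even when nobody is waiting, a wake-up sent before the consumer goes to sleep is retained, which is exactly what fixes the lost-wakeup deadlock described above.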

Some rooms can accommodate n people at the same time, for example a kitchen. In other words, if more than n people arrive, the extra people can only wait outside. This is like a memory area that only a fixed number of threads may use at once. The solution here is to hang n keys at the door.

Whoever goes in takes a key and hangs it back on the way out. When people find that no keys are left, they know they must queue at the door. Used this way, the mechanism is called a "semaphore," and it ensures that multiple threads do not conflict with each other.

For example, .NET provides a Semaphore class for semaphore operations. The following sample code demonstrates 4 threads that all want to execute the ThreadEntry() method at the same time, while only 2 threads are allowed in:

    class Program
    {
        // The first parameter specifies how many "seats" are currently free (threads allowed to enter now)
        // The second parameter specifies how many "seats" there are in total (the maximum number of threads allowed in at the same time)
        static Semaphore sem = new Semaphore(2, 2);
        const int ThreadSize = 4;

        static void Main(string[] args)
        {
            for (int i = 0; i < ThreadSize; i++)
            {
                Thread thread = new Thread(ThreadEntry);
                thread.Start(i + 1);
            }
            Console.ReadKey();
        }

        static void ThreadEntry(object id)
        {
            Console.WriteLine("Thread {0} requested to enter this method", id);
            // WaitOne: if there is a free "seat", occupy it; otherwise wait
            sem.WaitOne();
            Console.WriteLine("Thread {0} successfully entered this method", id);
            // Simulate the thread performing some work
            Thread.Sleep(100);
            Console.WriteLine("Thread {0} has finished executing and left", id);
            // Release: free up one "seat"
            sem.Release();
        }
    }

If a resource is compared to a "seat," the Semaphore constructor receives two parameters: the first specifies how many seats are currently free (how many threads may enter right now), and the second specifies how many seats there are in total (the maximum number of threads allowed in at the same time). The WaitOne() method occupies a seat if one is free and waits otherwise; the Release() method frees up one seat.

It is not hard to see that a mutex is just a special case of a semaphore (n = 1). In other words, the former can always be replaced by the latter.
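This equivalence can be seen directly with java.util.concurrent.Semaphore (a Java sketch; the article's own semaphore sample above is .NET): a semaphore holding a single permit behaves exactly like a lock.

```java
import java.util.concurrent.Semaphore;

public class BinarySemaphoreAsMutex {
    public static void main(String[] args) throws InterruptedException {
        Semaphore mutex = new Semaphore(1);     // n = 1: one permit, i.e. a lock
        mutex.acquire();                        // "lock": take the only permit
        System.out.println(mutex.tryAcquire()); // false -- a second entrant is excluded
        mutex.release();                        // "unlock": give the permit back
        System.out.println(mutex.tryAcquire()); // true -- entry is possible again
    }
}
```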

However, if the producer or consumer reverses the order of two up/down operations, a deadlock can still occur. In other words, when using semaphore primitives, the order of the semaphore operations is critical. So, is there a way to change this situation, entrusting the organization of these semaphore operations to a special construct and liberating the vast number of programmers from the burden? The answer is the monitor.

(4) Monitor

A monitor is, as its name suggests, something that monitors the synchronization of processes or threads. Concretely, a monitor is a combination of a set of subroutines, variables, and data structures. In other words, the code that needs synchronization is framed by the monitor: the code to be protected is placed between begin monitor and end monitor for synchronization protection, so that at any moment only one thread may be active inside the monitor.

The synchronization guarantee is provided by the compiler: when it sees begin monitor and end monitor, it knows that the enclosed code needs synchronization protection and inserts the required operating system primitives when translating it into low-level code, so that two threads can never be active inside the same monitor at the same time.

For example, .NET provides a Monitor class that can help us achieve this mutual exclusion:

    private object locker = new object();

    public void Work()
    {
        // Avoid using the private member locker directly
        // (using it directly may cause thread-safety problems)
        object temp = locker;
        Monitor.Enter(temp);
        try
        {
            // Do some work that requires thread synchronization
        }
        finally
        {
            Monitor.Exit(temp);
        }
    }

A monitor uses two synchronization mechanisms: a lock for mutual exclusion, and condition variables to control the order of execution. In a sense, a monitor is a lock plus condition variables.

About condition variables: a condition variable is something a thread can wait on, while another thread wakes the waiting thread by sending a signal on it. A condition variable therefore resembles a semaphore, but it is not one, because it has no counter to move up and down: a signal sent when no thread is waiting is simply lost.
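Java makes the "lock + condition variable" pairing explicit (a sketch assuming a simple unbounded buffer; the class and names are invented for illustration): a Condition is created from a Lock, await() atomically releases the lock and sleeps, and signal() wakes one waiter.

```java
import java.util.ArrayDeque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class SimpleBuffer {
    private final Lock lock = new ReentrantLock();          // the lock: mutual exclusion
    private final Condition notEmpty = lock.newCondition(); // the condition variable: ordering
    private final ArrayDeque<Integer> items = new ArrayDeque<>();

    public void put(int x) {
        lock.lock();
        try {
            items.add(x);
            notEmpty.signal();     // wake one thread waiting on the condition
        } finally {
            lock.unlock();
        }
    }

    public int take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) {
                notEmpty.await();  // atomically release the lock and sleep
            }
            return items.remove();
        } finally {
            lock.unlock();
        }
    }
}
```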

The biggest problem with monitors is their dependence on the compiler, because the compiler must insert the necessary synchronization primitives at the beginning and end of the monitor. In addition, a monitor only works on a single computer; to synchronize in a multi-computer environment, another mechanism is required, and that mechanism is message passing.

(5) Message Passing

Message passing synchronizes the two parties through the messages they send each other. It has two basic operations: send and receive. Both are system calls into the operating system and can be either blocking or non-blocking calls. Synchronization requires blocking calls; that is, a thread performing a receive operation must wait for a message to arrive: after calling receive, the thread is suspended, and it becomes ready again only once a message is received.
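Within a single machine, the send/receive pattern can be sketched in Java with a blocking queue standing in for the channel (an illustration of the blocking-receive semantics only, not a network protocol; names are invented): put() plays the role of send, and take() is a blocking receive that suspends the caller until a message arrives.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MessagePassingDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> channel = new LinkedBlockingQueue<>();

        Thread sender = new Thread(() -> {
            try {
                channel.put("hello");  // send: hand the message to the channel
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        sender.start();

        // receive: the thread is suspended until a message arrives
        String msg = channel.take();
        System.out.println(msg);       // prints "hello"
        sender.join();
    }
}
```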

The biggest problems with message passing are message loss and identity recognition. Because networks are unreliable, messages traveling across a network are easily lost. Identity recognition means determining whether a received message really came from the intended source. Message loss can be reduced by using the TCP protocol, though even that is not 100% reliable; identity problems can be addressed with techniques such as digital signatures and encryption.

(6) Barrier

A barrier, as its name implies, is an obstacle: a thread that reaches the barrier must stop until all threads have arrived, and only then may they move forward. Barriers are mainly used to coordinate a group of threads: sometimes a group of threads cooperates to solve one problem, so all of them need to reach the same point before moving on together.
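This stop-until-everyone-arrives behavior maps directly onto Java's CyclicBarrier (a sketch; the thread count and messages are illustrative): each thread calls await() at the barrier and is released only when all parties have arrived.

```java
import java.util.concurrent.CyclicBarrier;

public class BarrierDemo {
    public static void main(String[] args) {
        int parties = 3;
        // The barrier action runs once, after the last thread arrives.
        CyclicBarrier barrier = new CyclicBarrier(parties,
                () -> System.out.println("all threads arrived; moving forward together"));

        for (int i = 0; i < parties; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    System.out.println("thread " + id + " reached the barrier");
                    barrier.await();   // stop here until every party has arrived
                    System.out.println("thread " + id + " continues");
                } catch (Exception e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
```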

For example, this requirement arises in parallel computing, as shown in the figure below:

Zhou Xurong

Source: http://edisonchou.cnblogs.com

The copyright of this article belongs to the author and the cnblogs blog site. Reprinting is welcome, but this notice must be retained without the author's consent, and a link to the original must be given in an obvious position on the article page.

