C++ Multi-threaded Programming Learning (1)

Source: Internet
Author: User
Document directory
  • 1. Basic concepts of threads, basic thread states, and the relationships between states
  • 2. Differences between threads and processes
  • 3. Thread synchronization and mutual exclusion
  • 4. Causes of deadlocks and how to avoid deadlocks

1. Basic concepts of threads, basic thread states, and the relationships between states

(1) Thread concept

------- 1) The most direct way to understand a thread is as a "lightweight process": a basic CPU execution unit and the smallest unit of the program's execution flow. It consists of a thread ID, a program counter, a register set, and a stack.

------- 2) A thread is an entity within a process and is the basic unit that the system schedules and dispatches independently. A thread does not own system resources of its own; it holds only the few resources essential for running, but it shares all of the process's resources with the other threads of the same process.

------- 3) Different threads can execute the same program; that is, when the same service is called by different users, the operating system creates a separate thread for each of them.

------- 4) A thread is the independent scheduling unit of the processor, and multiple threads can execute concurrently. In a single-CPU computer system, threads take turns using the CPU; in a multi-CPU system, threads can run on different CPUs at the same time, and if each CPU is serving a thread of the same process simultaneously, the process's total processing time can be shortened.
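To make this concrete, here is a minimal sketch of two threads of one process executing concurrently; it assumes a C++11-capable compiler, and the worker function is purely illustrative:

#include <iostream>
#include <thread>

// Illustrative worker: each thread of the same process runs this function
// concurrently and shares the process's address space and resources.
void worker(int id)
{
    std::cout << "thread " << id << " running\n";
}

int main()
{
    // Two threads in the same process; on a multi-CPU machine they may
    // run on different CPUs at the same time.
    std::thread t1(worker, 1);
    std::thread t2(worker, 2);

    t1.join();   // wait for both threads to finish
    t2.join();
    return 0;
}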

(2) Basic thread states

A thread has three basic states: ready, running, and blocked.

The running state means the thread has obtained the processor and is executing.

The ready state means the thread has every condition needed to run and can execute as soon as it obtains the CPU.

The blocked state means the thread is waiting for some event during execution and is temporarily paused.

2. Differences between threads and processes

1) Scheduling

In a traditional operating system, the process is the basic unit of both resource ownership and independent scheduling. In an operating system that introduces threads, the thread is the basic unit of independent scheduling, while the process remains the basic unit of resource ownership. Switching between threads of the same process does not cause a process switch; switching between threads of different processes, for example from a thread in one process to a thread in another, does cause a process switch.

2) Resource ownership

In both the traditional operating system and the operating system with threads, the process is the basic unit of resource ownership. A thread does not own resources (apart from a few essential ones), but it can access the system resources of its process.

3) Concurrency

In an operating system that introduces threads, concurrent execution is possible not only between processes but also between multiple threads of the same process, which gives the operating system better concurrency and improves system throughput.

4) System overhead

When a process is created or destroyed, the system must allocate or reclaim resources for it, such as memory space and I/O devices, so the overhead is much higher than for creating or destroying a thread. Similarly, process switching involves saving the CPU environment of the currently running process and setting up the CPU environment of the newly scheduled process, whereas thread switching only requires saving and restoring a small amount of register content, so the overhead is low. In addition, because the threads of one process share that process's address space, synchronization and communication between them are very easy to implement, often even without operating-system intervention.

5) Address space and other resources

The address spaces of different processes are independent of one another. The threads of one process share that process's resources, and a process's threads are not visible to other processes.

6) Communication

Inter-process communication (IPC) requires the help of process synchronization and mutual exclusion mechanisms to ensure data consistency, whereas threads can communicate simply by reading and writing the process's data segments (such as global variables).

3. Thread synchronization and mutual exclusion

There are four main methods for thread synchronization:

------- 1) Critical section: serializes access by multiple threads to a shared resource or a section of code. It is fast and suitable for controlling data access.

A critical section is a simple way to ensure that only one thread can access a given resource at any point in time: only one thread is allowed to access the resource at a time. If multiple threads try to enter the critical section simultaneously, then once one thread has entered, the others are suspended and must wait until that thread leaves. When the critical section is released, a waiting thread can claim it and perform its operations on the shared resource atomically.

Two primitives of the critical section: EnterCriticalSection() enters the critical section; LeaveCriticalSection() leaves the critical section.
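As a minimal sketch of these primitives (the shared counter and thread count are illustrative assumptions), two threads increment a counter under a Win32 critical section:

#include <windows.h>
#include <iostream>
#include <thread>

CRITICAL_SECTION g_cs;   // protects g_counter
long g_counter = 0;      // shared resource (illustrative)

void increment()
{
    for (int i = 0; i < 100000; ++i) {
        EnterCriticalSection(&g_cs);   // only one thread may be inside at a time
        ++g_counter;
        LeaveCriticalSection(&g_cs);   // allow a waiting thread to enter
    }
}

int main()
{
    InitializeCriticalSection(&g_cs);  // must be initialized before first use

    std::thread t1(increment), t2(increment);
    t1.join();
    t2.join();

    std::cout << g_counter << std::endl;   // 200000 with the critical section in place
    DeleteCriticalSection(&g_cs);
    return 0;
}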

------- 2) Mutex: designed to coordinate threads so that they access a shared resource exclusively, one at a time.

A mutex works much like a critical section: only the thread that owns the mutex object has permission to access the resource, and because there is only one mutex object, only one thread can access the resource at any moment. After finishing its work, the thread that occupies the resource releases the mutex object so that another thread can acquire it and gain access. A mutex is more complex than a critical section: it can synchronize not only the threads of one application but, because it can be shared across processes, also safely coordinate resource sharing between threads of different programs.

A mutex involves several operation primitives:

CreateMutex() creates a mutex

OpenMutex() opens an existing mutex

ReleaseMutex() releases a mutex

WaitForMultipleObjects() waits for the mutex object(s)
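A minimal sketch of these primitives follows; it waits on a single mutex with WaitForSingleObject() rather than WaitForMultipleObjects(), and the shared counter is an illustrative assumption:

#include <windows.h>
#include <iostream>
#include <thread>

HANDLE g_hMutex;     // controls access to the shared resource
int    g_shared = 0; // shared resource (illustrative)

void worker()
{
    for (int i = 0; i < 100000; ++i) {
        WaitForSingleObject(g_hMutex, INFINITE);  // block until this thread owns the mutex
        ++g_shared;                               // exclusive access
        ReleaseMutex(g_hMutex);                   // hand the mutex to the next waiter
    }
}

int main()
{
    // Unnamed, initially unowned mutex; giving it a name would allow
    // threads in other processes to open and use it.
    g_hMutex = CreateMutex(NULL, FALSE, NULL);

    std::thread t1(worker), t2(worker);
    t1.join();
    t2.join();

    std::cout << g_shared << std::endl;
    CloseHandle(g_hMutex);
    return 0;
}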

------- 3) Semaphore: designed to allow a limited number of users to access the same resource at the same time.

A semaphore synchronizes threads differently from the previous methods: it allows several threads to use a shared resource at the same time, in the same way as P/V operations in an operating system. It specifies the maximum number of threads that may access the shared resource simultaneously: multiple threads may access the same resource at once, but the number doing so at any moment is capped. When you create a semaphore with CreateSemaphore(), you specify both the maximum resource count and the currently available resource count; normally the available count starts equal to the maximum. Each time a thread is granted access, the available count is decremented by one, and as long as the available count is greater than zero the semaphore remains signaled. When the available count drops to zero, the number of threads occupying the resource has reached the maximum, no further threads can enter, and the semaphore is no longer signaled. After a thread finishes with the resource, it should call ReleaseSemaphore() on its way out to increase the available count by one. At no time can the available count exceed the maximum count.

P/V operations were proposed by the Dutch computer scientist Dijkstra. The semaphore S is an integer: when S is greater than or equal to 0, it represents the number of resource instances available to concurrent processes; when S is less than 0, its absolute value represents the number of processes waiting to use the shared resource.

P operation (request a resource):

(1) Decrement S by 1;

(2) If the result is still greater than or equal to 0, the process continues;

(3) If the result is less than 0, the process is blocked and placed on the wait queue of this semaphore, and control passes to the process scheduler.

V operation (release a resource):

(1) Increment S by 1;

(2) If the result is greater than 0, the process continues running;

(3) If the result is less than or equal to 0, one waiting process is woken from the semaphore's wait queue, and the original process then continues running or is handed to the process scheduler.
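The P/V semantics can be sketched in portable C++ with a mutex and a condition variable; this small counting-semaphore class is purely illustrative (in this version the count never goes negative: a P that would exhaust it simply blocks until a V makes a unit available):

#include <condition_variable>
#include <mutex>

// Illustrative counting semaphore implementing the P/V semantics above.
class Semaphore {
public:
    explicit Semaphore(int initial) : count_(initial) {}

    void P()   // request a resource
    {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return count_ > 0; });  // block while no unit is available
        --count_;
    }

    void V()   // release a resource
    {
        std::lock_guard<std::mutex> lock(m_);
        ++count_;
        cv_.notify_one();   // wake one waiting thread, if any
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    int count_;
};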

A semaphore involves several operation primitives:

CreateSemaphore() creates a semaphore;

OpenSemaphore() opens an existing semaphore;

ReleaseSemaphore() releases a semaphore;

WaitForSingleObject() waits for a semaphore
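A minimal sketch of these primitives, assuming (purely for illustration) that at most three threads may use the resource at once:

#include <windows.h>
#include <iostream>
#include <thread>
#include <vector>

HANDLE g_hSem;   // counting semaphore limiting concurrent access

void useResource(int id)
{
    // Decrements the available count; blocks while the count is zero.
    WaitForSingleObject(g_hSem, INFINITE);

    std::cout << "thread " << id << " using the resource\n";
    Sleep(100);   // simulate work while holding one unit of the resource

    // Increments the available count by 1 so another thread may enter.
    ReleaseSemaphore(g_hSem, 1, NULL);
}

int main()
{
    // Initial count 3, maximum count 3: at most three threads at a time.
    g_hSem = CreateSemaphore(NULL, 3, 3, NULL);

    std::vector<std::thread> pool;
    for (int i = 0; i < 10; ++i)
        pool.emplace_back(useResource, i);
    for (auto& t : pool)
        t.join();

    CloseHandle(g_hSem);
    return 0;
}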

------- 4) Event: keeps threads synchronized by notifying them that some event has occurred, so that subsequent tasks can start executing.

The event object maintains thread synchronization through notification operations, and it can also be used to synchronize threads in different processes.
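A minimal sketch (the data value and function names are illustrative): one thread signals an event when its data is ready, and another thread waits for that notification before continuing:

#include <windows.h>
#include <iostream>
#include <thread>

HANDLE g_hEvent;    // signaled when the data is ready
int    g_data = 0;  // produced by one thread, consumed by another (illustrative)

void producer()
{
    g_data = 42;          // prepare the data
    SetEvent(g_hEvent);   // notify waiting threads that the data is ready
}

void consumer()
{
    WaitForSingleObject(g_hEvent, INFINITE);   // block until the event is signaled
    std::cout << "got " << g_data << std::endl;
}

int main()
{
    // Auto-reset event, initially non-signaled; giving it a name would let
    // threads in other processes wait on the same event.
    g_hEvent = CreateEvent(NULL, FALSE, FALSE, NULL);

    std::thread c(consumer), p(producer);
    p.join();
    c.join();

    CloseHandle(g_hEvent);
    return 0;
}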

Summary:

1) A mutex is very similar to a critical section, but a mutex can be named, which means it can be used across processes; creating a mutex therefore consumes more resources. If synchronization is needed only inside one process, using a critical section is faster and uses fewer resources. Because the mutex is a cross-process object, once it has been created it can be opened by name.

2) Mutexes, semaphores, and events can all be used across processes for synchronization; other kernel objects are unrelated to data synchronization. For processes and threads themselves, however, the handle is non-signaled while the process or thread is running and becomes signaled after it exits, so WaitForSingleObject() can be used to wait for a process or thread to exit.

3) A mutex can specify that a resource is used exclusively, but it cannot handle the following situation: suppose a user has bought a database system licensed for three concurrent accesses, and the number of threads/processes allowed to perform database operations at the same time must match the number of licenses purchased. A mutex cannot satisfy this requirement, but a semaphore, acting as a resource counter, can.

4. Causes of deadlocks and how to avoid deadlocks

The main causes of deadlock are:

1) Insufficient system resources

2) The order in which processes advance is inappropriate

3) Improper resource allocation

Four necessary conditions for deadlock:

1) Mutual exclusion: a resource can be used by only one process at a time.

2) Hold and wait: a process that is blocked requesting resources does not release the resources it already holds.

3) No preemption: resources a process has obtained cannot be forcibly taken away before the process is finished with them.

4) Circular wait: several processes form a circular chain in which each waits for a resource held by the next.

How to avoid deadlocks

In system design and process scheduling, take care not to let these four conditions hold at the same time, choose a reasonable resource-allocation algorithm, and avoid letting a process occupy system resources permanently; for example, acquiring locks in a fixed global order breaks the circular-wait condition, as the sketch below shows.
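A minimal sketch of that idea, with illustrative names: two threads that both need two locks always acquire them in the same order, so a circular wait can never form:

#include <mutex>
#include <thread>

std::mutex mtxA, mtxB;   // two resources that both threads need

void worker1()
{
    std::lock_guard<std::mutex> a(mtxA);  // always lock A first...
    std::lock_guard<std::mutex> b(mtxB);  // ...then B
    // ... use both resources ...
}

void worker2()
{
    std::lock_guard<std::mutex> a(mtxA);  // same order in every thread,
    std::lock_guard<std::mutex> b(mtxB);  // so no circular wait can occur
    // ... use both resources ...
}

int main()
{
    std::thread t1(worker1), t2(worker2);
    t1.join();
    t2.join();
    return 0;
}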
