Programming Ideas on Multi-threading and Multi-processing (2): Thread Priority and Thread Safety

Tags: mutex, semaphore

Original: http://blog.csdn.net/luoweifu/article/details/46701167
Author: Luoweifu
Please credit the author and source when reprinting.

"Multi-threading and multi-process (1)--talking about threads and processes from the perspective of an operating system" describes in detail the threads, process relationships, and performance in the operating system, which must be understood as a basis for multithreaded learning. This article goes on to talk about thread priority and thread safety.

Thread Priority

In addition to the time-slice rotation mentioned earlier, task scheduling in today's mainstream operating systems (such as Windows, Linux, and Mac OS X) also features priority scheduling (Priority Schedule). Priority scheduling determines the order in which threads take turns executing: in a system with priority scheduling, each thread has its own priority. High-priority threads are executed earlier, while low-priority threads typically run only when no higher-priority runnable thread exists.

The priority of a thread can be set manually by the user, and the system may also adjust it automatically according to the circumstances. Typically, a thread that frequently enters the waiting state (such as an IO thread), giving up the remainder of its time slice before the slice is used up, is favored by the operating system over a thread that performs heavy computation on every time slice: because a frequently waiting thread occupies only a small amount of CPU time, the operating system can handle more tasks. Threads that wait frequently are called IO-bound threads (IO Bound Thread), while threads that seldom wait are called CPU-bound threads (CPU Bound Thread). IO-bound threads are always more likely than CPU-bound threads to receive a priority boost.

Thread starvation

Under priority scheduling, a phenomenon called thread starvation can easily arise. A starved thread is one whose priority is so low that there is always some higher-priority thread waiting to run before it, so the low-priority thread never gets executed. When a CPU-bound thread has high priority, other low-priority threads can easily starve; when an IO-bound thread has high priority, other threads are less likely to starve, because the IO-bound thread spends much of its time waiting. To prevent starvation, the scheduler usually gradually raises the priority of threads that have waited too long without being executed. In this way, as long as a thread waits long enough, its priority will eventually rise high enough for it to run; it is only a matter of time before every thread executes.

In a priority scheduling environment, there are three ways a thread's priority can change:
1. The user specifies the priority (see the sketch after this list);
2. The operating system raises or lowers the priority according to how frequently the thread enters the waiting state;
3. The priority is raised because the thread has gone a long time without being executed.
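As a concrete illustration of way 1, the sketch below manually sets a thread's priority on a POSIX system through the pthread interface. The SCHED_RR policy and the priority value 10 are assumptions chosen for illustration; the actual valid range is platform-dependent and the call may require elevated privileges.

```cpp
#include <iostream>
#include <pthread.h>
#include <sched.h>
#include <thread>

int main() {
    std::thread worker([] {
        // ... thread work ...
    });

    // Ask the OS to run this thread under the round-robin policy with a
    // chosen priority; report failure (e.g. missing privileges).
    sched_param param{};
    param.sched_priority = 10;  // hypothetical priority value
    int rc = pthread_setschedparam(worker.native_handle(), SCHED_RR, &param);
    if (rc != 0) {
        std::cerr << "failed to set thread priority, error " << rc << '\n';
    }

    worker.join();
}
```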

Thread Safety and Locks

When multiple threads concurrently access the same data, doing so without appropriate precautions is very risky. Suppose you have a bank account at ICBC with 1,000,000 in it and two UnionPay cards (one in your hand, one in your girlfriend's). Suppose withdrawing money takes two steps: 1. check the account balance; 2. withdraw the cash (if the amount to withdraw is no more than the balance, the withdrawal succeeds; otherwise it fails). One day you want to take the money out to buy a house, and at the same time your girlfriend wants to buy a car (assume you have not discussed it beforehand). You withdraw 1,000,000 at ATM A while your girlfriend withdraws 800,000 at ATM B. ATM A checks the balance, finds 1,000,000, and allows the withdrawal; at the very same moment ATM B also checks the balance, also finds 1,000,000, and allows the withdrawal; so both A and B dispense the money.

A deposit of 1,000,000 turns into 1,800,000 in withdrawals, and the bank loses money (while you, of course, are smiling...)! This is the danger of unsynchronized concurrent access. To avoid it, we synchronize the access of multiple threads to the same data to ensure thread safety.
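The sketch below reproduces this race in C++ (not the article's original code): the two "ATM" threads both check the balance before either one deducts it, so the account can end up overdrawn. Amounts are in units of 10,000 for readability, and the 10 ms sleep is only there to widen the race window.

```cpp
#include <chrono>
#include <iostream>
#include <thread>

long long balance = 100;  // 100 * 10,000 = 1,000,000

void withdraw(const char* atm, long long amount) {
    if (amount <= balance) {                                       // step 1: check the balance
        std::this_thread::sleep_for(std::chrono::milliseconds(10)); // widen the race window
        balance -= amount;                                         // step 2: withdraw the cash
        std::cout << atm << " dispensed " << amount
                  << ", balance now " << balance << '\n';
    } else {
        std::cout << atm << " refused the withdrawal\n";
    }
}

int main() {
    std::thread a(withdraw, "ATM A", 100);  // you withdraw 1,000,000
    std::thread b(withdraw, "ATM B", 80);   // girlfriend withdraws 800,000
    a.join();
    b.join();
    std::cout << "final balance: " << balance << '\n';  // may be negative
}
```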

Synchronization means that while one thread is accessing the data, no other thread may access the same data; that is, only one thread can access the data at any given time, and other threads may access it only after that thread is done. The most common way to synchronize is to use a lock, also called a thread lock. A lock is a non-mandatory mechanism: every thread attempts to acquire (acquire) the lock before accessing the data or resource and releases (release) the lock when the access ends. If a thread tries to acquire a lock that is already held, it enters the waiting state until the lock is released and becomes available again.
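Applied to the ATM example, the acquire/release pattern might look like the following sketch, here using a std::mutex with a scope-bound guard: the balance check and the deduction happen while holding the lock, so only one withdrawal can be in flight at a time and the balance can no longer go negative.

```cpp
#include <iostream>
#include <mutex>
#include <thread>

long long balance = 100;        // same units as before: 100 = 1,000,000
std::mutex balance_lock;

bool safe_withdraw(long long amount) {
    std::lock_guard<std::mutex> guard(balance_lock);  // acquire the lock
    if (amount > balance) return false;               // insufficient funds: the withdrawal fails
    balance -= amount;                                // only one thread at a time reaches here
    return true;
}                                                     // the lock is released when guard leaves scope

int main() {
    std::thread a([] { std::cout << "ATM A succeeded: " << safe_withdraw(100) << '\n'; });
    std::thread b([] { std::cout << "ATM B succeeded: " << safe_withdraw(80) << '\n'; });
    a.join();
    b.join();
    std::cout << "final balance: " << balance << '\n';  // never negative
}
```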

Binary Semaphore

The binary semaphore (Binary Semaphore) is the simplest kind of lock. It has two states, occupied and non-occupied, and is suitable for resources that can only be accessed exclusively by a single thread. When the binary semaphore is non-occupied, the first thread that tries to acquire it obtains the lock and puts the semaphore into the occupied state; after that, any other thread that tries to acquire it enters the waiting state until the lock is released.
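A minimal sketch of this behaviour, assuming C++20's std::binary_semaphore: the semaphore starts in the non-occupied state (count 1); the first thread to acquire it succeeds, and any other thread blocks until it is released.

```cpp
#include <iostream>
#include <semaphore>
#include <thread>

std::binary_semaphore resource(1);  // 1 = free, 0 = occupied

void use_resource(int id) {
    resource.acquire();             // lock: blocks if another thread already holds it
    std::cout << "thread " << id << " is using the resource\n";
    resource.release();             // unlock: wakes one waiting thread, if any
}

int main() {
    std::thread t1(use_resource, 1);
    std::thread t2(use_resource, 2);
    t1.join();
    t2.join();
}
```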

Semaphore

A multi-valued semaphore, usually just called a semaphore (Semaphore), allows multiple threads to access the same resource and is a good choice for resources that may be accessed concurrently by several threads. A semaphore with an initial value of N allows N threads to access the resource concurrently. When a thread accesses the resource, it first acquires the semaphore, doing the following:
1. Decrease the value of the semaphore by 1;
2. If the value of the semaphore is now less than 0, enter the waiting state; otherwise continue execution.
When it finishes accessing the resource, the thread releases the semaphore, doing the following (see the sketch after these steps):
1. Increase the value of the semaphore by 1;
2. If the value of the semaphore is less than 1 (i.e., still at most 0), wake up one waiting thread.
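The sketch below illustrates these steps with C++20's std::counting_semaphore; the initial value of 3 and the six worker threads are arbitrary choices for illustration, so at most three workers are inside the resource section at any moment.

```cpp
#include <chrono>
#include <iostream>
#include <semaphore>
#include <thread>
#include <vector>

std::counting_semaphore<3> slots(3);   // initial value n = 3

void worker(int id) {
    slots.acquire();                   // decrement; wait if the count is exhausted
    std::cout << "thread " << id << " is accessing the resource\n";
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    slots.release();                   // increment; wake one waiting thread
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 6; ++i) threads.emplace_back(worker, i);
    for (auto& t : threads) t.join();
}
```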

Mutex

A mutex (Mutex) is similar to a binary semaphore: the resource it protects allows only one thread to access it at a time. The difference is that a semaphore can be acquired and released by any thread in the system, meaning the same semaphore may be acquired by one thread and released by another, whereas a mutex must be released by the very thread that acquired it; having some other thread release the mutex is invalid.
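The ownership difference can be sketched with C++20 primitives: releasing a std::binary_semaphore from a thread other than the one that acquired it is allowed, whereas a std::mutex must be unlocked by the thread that locked it (unlocking it from another thread is undefined behaviour).

```cpp
#include <mutex>
#include <semaphore>
#include <thread>

std::binary_semaphore sem(0);
std::mutex mtx;

int main() {
    // Semaphore: one thread waits, another thread signals -- perfectly legal.
    std::thread waiter([] { sem.acquire(); });
    std::thread signaller([] { sem.release(); });
    waiter.join();
    signaller.join();

    // Mutex: the lock and unlock must stay within one thread.
    std::thread owner([] {
        mtx.lock();
        // ... access the shared resource ...
        mtx.unlock();   // released by the same thread that locked it
    });
    owner.join();
}
```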

Critical section

The critical section (Critical Section) is a stricter synchronization mechanism than the mutex. Mutexes and semaphores are visible to every process in the system: if one process creates a mutex or semaphore, it is legal for another process to try to acquire it. A critical section, by contrast, is scoped to the process that created it; other processes cannot acquire the lock. Apart from this, a critical section has the same properties as a mutex.
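A minimal sketch, assuming the Win32 API on Windows: a CRITICAL_SECTION lives in this process's own memory, so unlike a named mutex or semaphore it cannot be shared with, or acquired by, another process.

```cpp
#include <windows.h>

CRITICAL_SECTION cs;
long long counter = 0;

DWORD WINAPI worker(LPVOID) {
    EnterCriticalSection(&cs);   // only one thread of this process at a time
    ++counter;
    LeaveCriticalSection(&cs);
    return 0;
}

int main() {
    InitializeCriticalSection(&cs);
    HANDLE threads[2];
    for (int i = 0; i < 2; ++i)
        threads[i] = CreateThread(nullptr, 0, worker, nullptr, 0, nullptr);
    WaitForMultipleObjects(2, threads, TRUE, INFINITE);
    for (HANDLE h : threads) CloseHandle(h);
    DeleteCriticalSection(&cs);
}
```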

Read/write Lock

A read-write lock (Read-Write Lock) allows multiple threads to read the same data at the same time, but allows only one thread to write it. This works because a read does not change the data and is therefore safe, while a write does change the data and is not. A read-write lock can be acquired in two ways: shared (Shared) and exclusive (Exclusive). When the lock is free, an attempt to acquire it in either way succeeds and puts the lock into the corresponding state. If the lock is in the shared state, other threads can still acquire it in shared mode, so the lock may be held by several threads at once; a thread that tries to acquire it exclusively must wait until every holder has released it. An exclusively held lock blocks any further acquisition, whether shared or exclusive. How a read-write lock is acquired is summarized below:

Read-write lock state    Acquire in shared mode    Acquire in exclusive mode
Free                     Success                   Success
Shared                   Success                   Wait
Exclusive                Wait                      Wait

Table 1: Acquiring a read-write lock in each state
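A minimal sketch using C++17's std::shared_mutex as the read-write lock: readers take it in shared mode and may overlap, while the writer takes it in exclusive mode and waits until all readers have released it.

```cpp
#include <iostream>
#include <shared_mutex>
#include <thread>
#include <vector>

std::shared_mutex rw_lock;
int data = 0;

void reader(int id) {
    std::shared_lock<std::shared_mutex> lock(rw_lock);   // shared acquisition
    std::cout << "reader " << id << " sees " << data << '\n';
}

void writer() {
    std::unique_lock<std::shared_mutex> lock(rw_lock);   // exclusive acquisition
    ++data;                                              // no reader can observe a half-written value
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 3; ++i) threads.emplace_back(reader, i);
    threads.emplace_back(writer);
    for (auto& t : threads) t.join();
}
```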


If you have any questions or ideas, please leave feedback in the comments; your feedback is the best reward! Since my ability is limited, if this post contains errors or shortcomings, please bear with me and offer your valuable advice!



======================== Programming Ideas series review ========================
Programming Ideas on Multi-threading and Multi-processing
Programming Ideas on Message Mechanisms
Programming Ideas on Logging
Programming Ideas on Exception Handling
Programming Ideas on Regular Expressions
Programming Ideas on Iterators
Programming Ideas on Recursion
Programming Ideas on Callbacks

Copyright notice: This article is the blogger's original work. It may not be used for any commercial purpose without the blogger's permission; please indicate the source when reprinting.

