Concurrent Thought Refinement (2): Lock-Free, Polling, and Thread Pools

Source: Internet
Author: User
Tags: CAS

8. Farewell to Locks

We keep saying that locks are troublesome and dangerous, so can we do without them entirely? There is, in fact, a lock-free approach.

First, a concept: the atomic variable. Operations on such a variable are atomic operations. An atomic operation either completes entirely or does not happen at all; partial completion is impossible. Like the atom of physics and chemistry, the word originally means "indivisible". Under this model, access to an atomic variable is effectively serialized: one atomic operation finishes before another can begin. Atomic variables are mostly of basic types such as int, double, and bool.

You can simply assume that as long as only atomic operations are used on the variable, access to it is guaranteed to be serialized.

The atomic operations of common languages are implemented internally with special CPU instructions (or, where those are unavailable, an internal lock). Commonly used atomic operations include ++, --, and compare-and-swap. In particular, CAS (compare and swap), combined with a loop, is the building block of lock-free methods. Look at the following Stack.push(...) pseudo code.

```cpp
atomic<node*> head;
node* new_node = new node(...);
new_node->next = head;                                     // (1)
while (!head.compare_and_swap(new_node->next, new_node));  // (2)
```

We have a stack and want to push a value; head is the top-of-stack pointer, declared as an atomic variable. (1) sets the new node's next to head. (2) updates head: if head == new_node->next, then head = new_node, the call returns true, and we exit the loop; otherwise new_node->next is refreshed to the current head, the call returns false, and the loop continues. Guess what? When could new_node->next not equal head? When head was modified by another thread after (1) completed. In that case the stale head value (new_node->next) must be updated to the current head before the next attempt. A stack push implemented this way embodies the lock-free idea. Doesn't it look much more concise than the locked version? Not a single lock operation appears in the code.

Lock-free code spins, which consumes CPU time, while a lock blocks the thread directly and frees up the CPU. Did you notice? This "spin lock" idea also appeared in the earlier discussion of try lock. You might even wonder: if all locks were replaced by spin locks, could deadlocks be avoided? The Stack.push above can also cap the number of CAS attempts: after a certain number of tries (or a time limit), report that the push failed and exit the spin loop. Also, there is no sense in which lock-free is simply better than locking; choose according to your actual workload.

Q: If my key entity is an object, can the whole object be an atomic variable? I'm afraid not in general (C++'s std::atomic can wrap small trivially copyable types, but not an arbitrary object), though you can decompose the object and use some of its primitive-type fields as atomic variables. Which basic types to pick requires a deep understanding of the business requirements.

9. Polling Operations

The observer design pattern needs no lengthy introduction here; its main idea is to turn active polling into passive notification. Nevertheless, in some concurrency components "polling" is more common than "notification", or notification is implemented by polling. Even so, I find polling more useful than notification on certain occasions, especially in order-sensitive scenarios. Such programs choose this architecture to avoid the risk of out-of-order execution, and a polling architecture can deliberately create a "program fence". You have probably heard of the thread barrier; it is one way to synchronize a program. This approach is used in some distributed programs (especially message-queue modules) and is very useful for debugging.

In a complete system both polling and notification matter; neither is simply better. Writing this reminds me: polling brought up the word "order", and as you know, sequential execution easily becomes a performance bottleneck, with ripple effects. Also, if polling a database (or similar) so aggressively that the system stops responding well, consider a trigger instead. Architecture is a mindset; decide according to the specific business needs.

I will share my understanding of "notification" when I get the chance. By the way, when polling, remember to yield; do not let the polling thread burn through its entire CPU time slice, or other concurrent operations will suffer.

10. Thread Pool Operations

A thread is, in a sense, a function, a notion with a functional-programming flavor. But more threads do not make a program faster; the outcome depends heavily on the actual number of processors and on the frequency of thread context switches. You can look up the details on Baidu or Google, so I won't elaborate; roughly, the best number of threads is about the number of idle processors.

One idea here is the thread pool. After a thread is created, it is not destroyed when its work finishes; instead it is returned to a pool, where it waits for a new task to wake it and start executing again. Thread creation consumes CPU resources, so reuse is naturally a good thing. High-level languages have corresponding implementations, such as C#'s Task, whose default scheduler embodies this idea; that is why high-level languages are easier to use and you don't have to build the wheel from scratch. A similar facility is available in the C++ Boost library. Why not use them when programming?

For example, 2 threads handle 5 tasks instead of 5 threads. After task 3, thread 1 idles or blocks instead of destroying itself, until task 4 appears and starts executing. However, if the 5 tasks really can run in parallel and there are that many idle processors, opening 5 threads is more efficient; whether a thread pool suits such a scenario still needs consideration.

Looking at it a bit more abstractly, the thread pool technique described above is really a task-scheduling operation, which I will discuss later when I get the chance...

