Lock-Free Data Structures


A week ago I wrote an article about latches and spin locks in SQL Server. These two synchronization primitives are used to protect shared data structures within SQL Server, such as pages in the buffer pool (protected by latches) and locks in the lock manager's hash table (protected by spin locks). Today you will learn about another synchronization approach: so-called lock-free data structures. They are one of the foundations of In-Memory OLTP in SQL Server 2014, so in today's article I'll give you a quick overview of lock-free data structures and what they provide.

What are lock-free data structures?

Lock-free algorithms protect shared data structures without blocking. In the previous articles about latches and spin locks you have seen that threads block when they cannot acquire a latch or a spin lock. When a thread waits for a latch, SQL Server puts it into the SUSPENDED state; when a thread waits for a spin lock, it spins actively on the CPU. Both approaches can lead to blocking scenarios, which is exactly what non-blocking algorithms avoid. Wikipedia has a pretty good explanation of non-blocking algorithms:

"A non-blocking algorithm ensures that threads competing for a shared resource do not have their execution indefinitely postponed by mutual exclusion. A non-blocking algorithm is lock-free if there is guaranteed system-wide progress regardless of scheduling."


The most important conclusion from this explanation is that one thread is never blocked by another thread. This is possible because no traditional locks are used to synchronize the threads. Let's look at a concrete example:
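The original article presents this example as a code figure that did not survive in this copy. The following is a minimal C++ sketch of what the surrounding text describes; the names compare_and_swap, lock, and Foo follow the discussion below, and everything else is my own assumption:

```cpp
#include <atomic>

// compare_and_swap mirrors the CMPXCHG instruction: if *target still holds
// the expected value, it is replaced by new_value. The whole comparison
// and exchange executes as one atomic unit on the CPU.
static bool compare_and_swap(std::atomic<int>* target, int expected, int new_value) {
    return target->compare_exchange_strong(expected, new_value);
}

static std::atomic<int> lock{0};   // 0 = unlocked, 1 = locked
static int shared_counter = 0;     // the data the spin lock protects

void Foo() {
    // Spin until we move the lock variable from 0 (unlocked) to 1 (locked).
    while (!compare_and_swap(&lock, 0, 1)) {
        // Busy-wait: the thread actively burns CPU cycles here.
    }
    shared_counter++;              // critical section
    lock.store(0);                 // release the spin lock
}
```

With several threads calling Foo concurrently, every increment of shared_counter is serialized by the spin lock.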

Let's step through this code. First, the compare_and_swap function is implemented through an atomic hardware instruction directly at the CPU level: CMPXCHG. The logic that CMPXCHG implements is the following: you compare a value against an expected value, and if they are equal, the old value is replaced by the new value. Because the entire logic of CMPXCHG is implemented as an atomic unit on the CPU, no other thread can interrupt the execution of that assembly instruction.

To store the state of the spin lock itself, a variable named lock is used. A thread spins in the while loop until the spin lock synchronization variable is unlocked. When that happens, the thread can lock the synchronization variable and finally enter the critical section in a thread-safe manner. This is, of course, only a simplified demonstration of a spin lock; in reality things are more sophisticated than that.

The biggest problem with this traditional approach is that a shared resource is involved in the thread synchronization: the spin lock synchronization variable lock. If one thread holds the spin lock and is suspended, every other thread that tries to acquire the spin lock gets stuck in the while loop. You can avoid this problem by using a lock-free coding technique.
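The lock-free version also appears only as a figure in the original article. This C++ sketch (again with names of my own choosing, apart from Foo) shows the idea the next paragraph describes: instead of acquiring a lock, the thread retries its atomic update until no other thread has modified the shared variable in between:

```cpp
#include <atomic>

static std::atomic<int> shared_value{0};   // no separate lock variable anymore

void Foo() {
    int observed = shared_value.load();
    // Retry the compare-and-swap until no other thread has changed
    // shared_value between our read and our write. No thread ever holds
    // a lock, so a suspended thread cannot block the others.
    while (!shared_value.compare_exchange_weak(observed, observed + 1)) {
        // On failure, observed is reloaded with the current value
        // by compare_exchange_weak itself, and we simply try again.
    }
}
```

Note that compare_exchange_weak may also fail spuriously on some architectures, which is why it always belongs inside a retry loop like this one.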

As you can see, the implementation of the Foo method has changed completely. Instead of trying to acquire a spin lock, the implementation just checks whether another thread has modified the shared variable (previously protected by the spin lock) before the atomic addition is performed. Therefore no shared synchronization resource is used, and the threads do not block each other. This is the whole idea of lock-free data structures and non-blocking algorithms.

In-Memory OLTP in SQL Server 2014 uses the same approach to install page changes in the mapping table of the Bw-tree. Therefore no locks, latches, or spin locks are involved. If In-Memory OLTP sees that a page address in the mapping table has changed, it means that another thread has already started a modification on that page but has not yet completed it (because some other thread was scheduled on the CPU). In In-Memory OLTP all threads work cooperatively with each other, so when a thread sees such a change in the mapping table, it can itself complete that "pending" operation, for example a page split.
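To make the mapping-table idea more concrete, here is a heavily simplified C++ sketch; every name and the structure itself are my own illustration, not SQL Server's actual implementation. A page change is published as a delta record that is installed in the mapping table with a single compare-and-swap, and a failing CAS tells the thread that someone else changed the page first:

```cpp
#include <atomic>
#include <cstddef>

// Hypothetical, heavily simplified Bw-tree page: real pages carry records,
// delta record types, split information, and more.
struct Page {
    int key_count;
    Page* next;    // previous page state this delta record points to
};

constexpr std::size_t kTableSize = 16;

// The mapping table translates a logical page id into a physical pointer.
static std::atomic<Page*> mapping_table[kTableSize];

// Publish a page change by prepending a delta record with one atomic CAS.
// Returns false if another thread changed the page first; the caller can
// then help complete that other operation instead of blocking on it.
bool InstallDelta(std::size_t page_id, Page* delta) {
    Page* current = mapping_table[page_id].load();
    delta->next = current;
    return mapping_table[page_id].compare_exchange_strong(current, delta);
}
```

Because only the single pointer in the mapping-table slot is swapped, readers always see either the old page state or the fully linked delta chain, never a half-finished update.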

A page split in In-Memory OLTP consists of multiple atomic operations. Therefore one thread can begin a page split and a different thread can finally finish it. In a future article I'll discuss these page splits, and the Bw-tree changes that make this sophisticated approach possible, in more detail.

Summary

In today's article I introduced you to the main ideas behind lock-free data structures. The central idea is that a thread attempts to perform an atomic operation itself and checks whether another thread has modified the shared data in the meantime. Therefore it is not necessary to protect critical sections with synchronization primitives like spin locks. Since SQL Server 2014, the ideas of lock-free data structures and non-blocking algorithms have been used by In-Memory OLTP.

Thanks for your attention!

