Lock-free programming


Recently I have been working on an online serving architecture. The biggest difference between an online architecture and an offline one is the quality of service (SLA, service level agreement; an SLA of 99.99% means that out of 10,000 requests, at most one may fail or time out). An offline architecture cares mainly about throughput, and its SLA is less strict, say 99.9%. Offline architectures generally apply flow control to limit the rate at which users send requests, so that requests beyond the server's processing capacity do not pile up in buffers or queues and time out in large numbers. An online architecture cannot throttle traffic: you cannot restrict user requests. It therefore has high requirements for elastic scaling, automatically expanding backend capacity when a surge of requests arrives. For example, if current requests occupy 70% of the cluster's resources, the system should scale out; conversely, if they occupy only 20%, some resources should be reclaimed, since data-center capacity is expensive.

Of course, the similarities and differences between online and offline architectures could fill an article of their own. This article focuses on the use of locks when handling highly concurrent requests. A few principles:

  1. Do not use a global lock. With a global lock, whenever one request holds the lock, all other threads must wait for it, and service capacity drops sharply.
  2. Pay attention to the scope of the lock and keep the critical section as small as possible. Never perform blocking operations, such as IO calls, inside the locked region.
  3. Where possible, change the architecture to avoid locking altogether.

Imagine a scenario where, for the sake of service quality, we send the same request to multiple backends:

  1. High availability. If a backend node crashes, one of the backup requests still succeeds. If the SLA of a single node is 99% (very low), sending the request to two nodes raises the SLA to 99.99%. If a single node's SLA is 99.9%, two requests reach 99.9999%, i.e. at most one failure in a million requests.
  2. Low latency. The first response to arrive is the one returned to the client, so a few slow nodes do not affect the overall latency of the system.

So how can we make sure that only the first response is used?

First, consider a rough method: use a set to record the IDs of requests that have not yet been answered. When a response arrives, check whether its ID is in the set; if so, delete the ID and respond to the client. When the second or third response for the same ID arrives, the ID is no longer in the set, so those responses are discarded.

This involves reads and writes on the set, which must be protected by a lock. If the set is visible to the whole process (that is, shared by all of its threads), the lock is process-wide, and many threads will end up waiting on it, greatly reducing performance.
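The rough method can be sketched as follows; the single mutex here is exactly the process-wide lock under discussion (class and method names are illustrative):

```cpp
#include <cstdint>
#include <mutex>
#include <unordered_set>

// Process-wide table of outstanding request IDs, guarded by one mutex.
// Every insert and every response check takes the same lock, so all
// threads contend on it.
class PendingSet {
public:
    // Record a request ID before sending the hedged requests.
    void add(uint64_t id) {
        std::lock_guard<std::mutex> g(mu_);
        pending_.insert(id);
    }
    // Returns true only for the first response carrying this ID;
    // later duplicates find the ID already erased and are dropped.
    bool take(uint64_t id) {
        std::lock_guard<std::mutex> g(mu_);
        return pending_.erase(id) > 0;
    }
private:
    std::mutex mu_;
    std::unordered_set<uint64_t> pending_;
};
```

Only the first `take(id)` for a given ID returns true; any subsequent call with the same ID returns false, so duplicate responses are silently discarded.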

This method is fine at hundreds of requests per second. But at 1,000 requests per second, the number of lock acquisitions reaches several thousand. For example, if each incoming request fans out to 3 backend requests, the set is locked once for the insert and once for each of the 3 responses, i.e. 4 lock acquisitions per real request, or 4,000 per second at 1,000 requests per second. The cost of the locking alone is far from small, not to mention that the set's inserts, lookups, and deletions carry their own non-negligible cost.

Can we use a thread-level lock instead? A thread-level lock reduces the impact on other threads. However, if the set is also thread-local, we must guarantee that the asynchronous callback for a request runs on the same thread that sent it. Otherwise a response would be picked up by another thread, where the logic above breaks down, because the thread-local set is invisible to other threads. And here is the key observation: if the architecture can guarantee that an asynchronous response is always processed on the thread that sent the request, then no lock is needed at all. A single thread executes sequentially, there is no contention for the set, so reading and writing it is safe without any locking.

So the question becomes: how does the architecture route an asynchronous callback back to the same thread?

One implementation is a thread pool in which a given request ID is always dispatched, by some fixed rule (for example, hashing the ID), to the same worker thread. When the asynchronous response arrives, it is dispatched by the same rule and is therefore handled by the thread that sent the request.

So how do we implement such a thread pool? One option is to build it on Boost, which supports scheduling a task onto a specific thread (for example, by giving each worker its own io_service and posting the work for a request ID to the corresponding thread). That solves the problem.
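The same idea can be sketched without Boost, using plain C++11 threads. This is a minimal illustration, not a production pool: each worker owns a task queue, and a request ID is always hashed to the same worker, so both the send and the asynchronous response for one ID run on one thread and that thread's pending-set needs no lock. (The queues themselves still use a mutex for handoff; the point is that the per-request bookkeeping does not.)

```cpp
#include <condition_variable>
#include <cstdint>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Minimal sharded thread pool: all work for a given request ID lands on
// the same worker thread, chosen by hashing the ID.
class ShardedPool {
public:
    explicit ShardedPool(size_t n) : queues_(n) {
        for (size_t i = 0; i < n; ++i)
            workers_.emplace_back([this, i] { run(i); });
    }
    ~ShardedPool() {
        for (auto& q : queues_) {
            std::lock_guard<std::mutex> g(q.mu);
            q.stop = true;
            q.cv.notify_one();
        }
        for (auto& t : workers_) t.join();
    }
    // Dispatch rule: request_id % n. Sends and their asynchronous
    // responses use the same ID, so they reach the same worker.
    void post(uint64_t request_id, std::function<void()> task) {
        auto& q = queues_[request_id % queues_.size()];
        std::lock_guard<std::mutex> g(q.mu);
        q.tasks.push(std::move(task));
        q.cv.notify_one();
    }
private:
    struct Queue {
        std::mutex mu;
        std::condition_variable cv;
        std::queue<std::function<void()>> tasks;
        bool stop = false;
    };
    void run(size_t i) {
        auto& q = queues_[i];
        for (;;) {
            std::unique_lock<std::mutex> lk(q.mu);
            q.cv.wait(lk, [&] { return q.stop || !q.tasks.empty(); });
            if (q.tasks.empty()) return;  // stop requested and drained
            auto task = std::move(q.tasks.front());
            q.tasks.pop();
            lk.unlock();
            task();
        }
    }
    std::vector<Queue> queues_;  // sized once; never reallocated
    std::vector<std::thread> workers_;
};
```

With a single worker, all tasks for all IDs are serialized on one thread, so thread-local state (such as the pending-ID set) can be touched without any lock.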



Of course, if by lock-free programming you mean CAS-based algorithms in the stricter sense, see "Concurrent programming step by step (3): using C++11 to implement a lock-free stack".
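As mentioned, a lock-free stack can be built with C++11 atomics and CAS. Below is a minimal Treiber-stack sketch: push and pop spin on `compare_exchange_weak` instead of taking a lock. It is deliberately incomplete: popped nodes are leaked, and the ABA problem is not handled, which a production version must address (e.g. with hazard pointers or counted pointers).

```cpp
#include <atomic>

// Minimal Treiber stack (sketch): lock-free push/pop via CAS loops.
// Leaks popped nodes and ignores ABA -- illustration only.
template <typename T>
class LockFreeStack {
public:
    void push(T value) {
        Node* n = new Node{value, head_.load(std::memory_order_relaxed)};
        // On failure, compare_exchange_weak reloads the current head
        // into n->next, so we simply retry until the swap succeeds.
        while (!head_.compare_exchange_weak(n->next, n,
                                            std::memory_order_release,
                                            std::memory_order_relaxed)) {}
    }
    bool pop(T& out) {
        Node* n = head_.load(std::memory_order_acquire);
        // Retry until we swing head_ from n to n->next, or the stack
        // becomes empty (n == nullptr).
        while (n && !head_.compare_exchange_weak(n, n->next,
                                                 std::memory_order_acquire,
                                                 std::memory_order_relaxed)) {}
        if (!n) return false;
        out = n->value;
        // Node intentionally leaked; safe reclamation needs hazard
        // pointers or similar.
        return true;
    }
private:
    struct Node { T value; Node* next; };
    std::atomic<Node*> head_{nullptr};
};
```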


