Double-buffered message queues: reducing lock contention

Source: Internet
Author: User
Tags: message queue


On the application-server side, logical processing and I/O processing are usually separated, both for performance and to avoid blocking:
I/O (network) threads handle I/O events: receiving and sending packets, establishing and maintaining connections, and so on.
Logic threads handle the business logic for the packets received.

Typically, network threads and logic threads exchange data through a packet queue, which is simply a producer-consumer model.
Because multiple threads share this queue, every access must take a lock. How can we reduce the amount of lock contention?
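As a baseline, the naive shared queue looks something like the following minimal C++ sketch (class and method names are illustrative, not from the original post). Every single push and pop takes the mutex, so the IO and logic threads contend on every packet:

```cpp
#include <mutex>
#include <optional>
#include <queue>
#include <string>
#include <utility>

// Naive shared packet queue: every access, read or write, takes the lock.
class LockedQueue {
public:
    void push(std::string pkt) {
        std::lock_guard<std::mutex> lk(mu_);
        q_.push(std::move(pkt));
    }
    // Returns std::nullopt when the queue is empty.
    std::optional<std::string> pop() {
        std::lock_guard<std::mutex> lk(mu_);
        if (q_.empty()) return std::nullopt;
        std::string pkt = std::move(q_.front());
        q_.pop();
        return pkt;
    }
private:
    std::mutex mu_;
    std::queue<std::string> q_;
};
```

With one lock acquisition per packet on each side, lock traffic grows linearly with packet rate; the schemes below batch that cost.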


Scheme 1: a double-buffered message queue.
There are two queues: one for the logic thread to read and one for the IO thread to write. After the logic thread has drained its queue, it swaps queues with the IO thread.
The IO thread locks on every write to its queue, and the logic thread locks when swapping the queues, but the logic thread needs no lock while reading its own queue.
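A minimal sketch of this scheme in C++ (names are illustrative): the IO thread appends to the write buffer under the lock, and the logic thread takes the lock once per batch to swap the buffers, then drains the swapped-out batch with no locking at all.

```cpp
#include <mutex>
#include <string>
#include <utility>
#include <vector>

// Double-buffered queue: one lock per write on the producer side,
// one lock per *batch* (the swap) on the consumer side.
class DoubleBufferQueue {
public:
    // Called by the IO thread for each incoming packet.
    void push(std::string pkt) {
        std::lock_guard<std::mutex> lk(mu_);
        write_.push_back(std::move(pkt));
    }
    // Called by the logic thread: swap out the whole write buffer
    // under the lock, then read the returned batch lock-free.
    std::vector<std::string> swap_and_take() {
        std::vector<std::string> batch;
        {
            std::lock_guard<std::mutex> lk(mu_);
            batch.swap(write_);  // O(1) exchange of the two buffers
        }
        return batch;
    }
private:
    std::mutex mu_;
    std::vector<std::string> write_;
};
```

Here the "read" buffer is simply the vector handed back to the caller; the point is that the logic thread pays for one lock acquisition per swap instead of one per packet.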

The size of the queue buffer should be tuned to the data volume: a small buffer lets data be processed sooner, but it lowers throughput and raises the probability of lock contention.
You can also set an upper limit on the buffer queue, and once that limit is exceeded, discard packets instead of inserting them.
In addition, double buffering can be implemented with different swap strategies:
One is read-first: as soon as a free (drained) buffer is available, swap immediately.
The second is write-full: the writer thread triggers a swap only when the current buffer is full.
The third is frame-driven: the upper-level logic swaps the two buffers once per frame, according to its frame rate, and processes whichever queue it takes.
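The capped-buffer idea above can be sketched as a small variant of the double-buffered queue (a hypothetical illustration; the cap value and names are assumptions, not from the original post):

```cpp
#include <cstddef>
#include <mutex>
#include <string>
#include <utility>
#include <vector>

// Double buffer with an upper limit: packets beyond the cap are dropped
// rather than inserted, bounding memory and swap latency.
class BoundedDoubleBuffer {
public:
    explicit BoundedDoubleBuffer(std::size_t cap) : cap_(cap) {}

    // Returns false when the packet was discarded because the cap was hit.
    bool push(std::string pkt) {
        std::lock_guard<std::mutex> lk(mu_);
        if (write_.size() >= cap_) return false;  // over the limit: drop
        write_.push_back(std::move(pkt));
        return true;
    }
    std::vector<std::string> swap_and_take() {
        std::vector<std::string> batch;
        std::lock_guard<std::mutex> lk(mu_);
        batch.swap(write_);
        return batch;
    }
private:
    const std::size_t cap_;
    std::mutex mu_;
    std::vector<std::string> write_;
};
```

After a swap the write buffer is empty again, so dropped producers can resume inserting on the next push.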

Scheme 2: a queue container.
Provide a queue container holding multiple queues, each of which stores up to a certain number of messages. When a network IO thread wants to post messages to the logic thread, it takes an empty queue from the container and writes into it
until the queue is full, then puts it back into the container and exchanges it for another empty queue. When the logic thread fetches messages, it takes a filled queue from the container, drains it, and then puts it back.
This way, a lock is needed only when operating on the container itself; the IO and logic threads need no lock to manipulate the queue each is currently holding, so the chance of lock contention is greatly reduced.
A maximum number of messages is set per queue, and the intent seems to be that a queue is returned to the container only once the IO thread has filled it. As a result, there are times when the IO thread has not yet filled its
queue while the logic thread has no data to handle, especially when traffic is low.
[This can be handled with a timeout: if the current time minus the time the first packet was placed in the queue exceeds some threshold in milliseconds, put the queue back into the container and take another one.]
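A minimal C++ sketch of the queue-container scheme (names are illustrative; the timeout-based flush described in the note above is omitted for brevity). Only taking a queue from the pool and returning it are locked; each thread works on its privately held queue lock-free:

```cpp
#include <cstddef>
#include <deque>
#include <memory>
#include <mutex>
#include <string>
#include <vector>

// Pool of message queues. The container lock is taken only when a queue
// is checked out of or returned to the pool, never per message.
class QueuePool {
public:
    using Queue = std::vector<std::string>;

    explicit QueuePool(std::size_t n_queues) {
        for (std::size_t i = 0; i < n_queues; ++i)
            empty_.push_back(std::make_unique<Queue>());
    }
    // IO thread: check out an empty queue to fill privately (no lock while filling).
    std::unique_ptr<Queue> take_empty() {
        std::lock_guard<std::mutex> lk(mu_);
        if (empty_.empty()) return nullptr;  // pool exhausted
        auto q = std::move(empty_.front());
        empty_.pop_front();
        return q;
    }
    // IO thread: hand a filled queue over to the logic side.
    void put_full(std::unique_ptr<Queue> q) {
        std::lock_guard<std::mutex> lk(mu_);
        full_.push_back(std::move(q));
    }
    // Logic thread: check out a filled queue to drain privately.
    std::unique_ptr<Queue> take_full() {
        std::lock_guard<std::mutex> lk(mu_);
        if (full_.empty()) return nullptr;  // nothing to process yet
        auto q = std::move(full_.front());
        full_.pop_front();
        return q;
    }
    // Logic thread: return the drained queue to the pool for reuse.
    void put_empty(std::unique_ptr<Queue> q) {
        q->clear();
        std::lock_guard<std::mutex> lk(mu_);
        empty_.push_back(std::move(q));
    }
private:
    std::mutex mu_;
    std::deque<std::unique_ptr<Queue>> empty_;
    std::deque<std::unique_ptr<Queue>> full_;
};
```

A production version would add the per-queue message cap and the first-packet-timestamp timeout so partially filled queues still reach the logic thread under light traffic.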


Our logic servers usually divide threads by scene: different threads run different scenes, and one thread can run multiple scenes. Because a player lives inside a scene, we hand the player's data, including its buffer pool, to that scene's thread to process.

Ref link: http://groups.google.com/group/dev4server/browse_thread/thread/4655f8ab1248347a?hl=zh-cn
