Revisit the read-write double buffering problem

Source: Internet
Author: User

It has been a long time since I last wrote a double buffer, so I am revisiting the topic in my spare time.

We often hear about double buffering, but we seldom use more than two buffers, at least in most cases. Why is multi-buffering usually unnecessary? Let's analyze that today. More buffers are not always better; it depends on the specific application scenario. To simplify the scenario, assume there is exactly one reader thread and one writer thread, and let the read time be RT and the write time be WT. There are three cases:

1. When RT == WT, that is, the read time equals the write time. How many buffers should be opened in this case? Two. Look at the following time chart (roughly drawn, but readable enough):

As the figure shows, write 1 starts first; when write 1 completes, read 1 begins and write 2 starts at the same time. When read 1 completes, write 2 has also completed. So in theory, a double buffer is sufficient in this case.

2. When RT < WT, that is, reading is faster than writing (the read time is less than the write time), how many buffers should be used? In theory, no more than two. Look at the following time chart:

The write takes longer than the read: write 1 starts; when write 1 completes, read 1 begins and write 2 starts at the same time. When read 1 completes, write 2 is not yet finished, so in this case having more buffers would not help (multi-threaded writing is not considered here); at most two buffers are enough. For higher performance it is best to write with multiple threads, provided, of course, that the CPU has multiple cores.

3. When RT > WT, that is, writing is faster than reading, two to three buffers are theoretically enough. See the picture:

No extra explanation is needed, since it is similar to the cases above: when reading is slow and writing is fast, extra buffers have little meaning (other than occupying space).
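The three cases above can be checked numerically. Below is a minimal discrete-time sketch of my own (not from the original post, so the function name and parameters are my invention): simulate() returns the finish time of the last of n reads, given k shared buffers, a fixed write time wt, and a fixed read time rt. Write i must wait for the single writer and for a free buffer (i.e., for read i-k to finish); read i must wait for write i and for the single reader.

```cpp
#include <algorithm>
#include <vector>

// Finish time of the last of n reads with k shared buffers,
// one writer (write time wt) and one reader (read time rt).
long simulate(int n, int k, long wt, long rt) {
    std::vector<long> wdone(n), rdone(n);
    for (int i = 0; i < n; ++i) {
        long ws = (i == 0) ? 0 : wdone[i - 1];        // single writer: previous write done
        if (i >= k) ws = std::max(ws, rdone[i - k]);  // need a free buffer: read i-k done
        wdone[i] = ws + wt;
        long rs = wdone[i];                           // data must be written first
        if (i > 0) rs = std::max(rs, rdone[i - 1]);   // single reader: previous read done
        rdone[i] = rs + rt;
    }
    return rdone[n - 1];
}
```

With wt == rt == 10 and n == 100, one buffer gives 2000 time units, while two and three buffers both give 1010: the third buffer adds nothing. The same holds when the times differ (for example, wt = 20, rt = 10 or wt = 10, rt = 20 both give 2010 with either two or three buffers), which matches the conclusion that two buffers are enough for a single reader and a single writer.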

There may be situations I have not considered; advice is welcome, thank you! Finally, the code: _read_list and _write_list are locked in the code simply to cover all three timing relationships. If you have determined which of the models applies, you can remove the lock and take a faster approach.

#include <atomic>
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <functional>
#include <iostream>
#include <list>
#include <mutex>
#include <thread>
#include <vector>

char buffer1[1024];
char buffer2[1024];
std::vector<char*> _buffer_list;
std::vector<int> _read_list;        // indexes of readable buffers
std::vector<int> _write_list;       // indexes of writable buffers
std::mutex _mutex;                  // sync lock
std::atomic<bool> _stopflag(false); // global stop flag

void thread_read(event* _er, event* _ew)
{
    while (!_stopflag)
    {
        // wait until something is readable
        if (_er->wait_for(std::chrono::milliseconds(100)))
        {
            while (true)
            {
                // take an index from the readable list
                int idx = -1;
                _mutex.lock();
                if (!_read_list.empty())
                {
                    idx = *_read_list.begin();
                    _read_list.erase(_read_list.begin());
                }
                _mutex.unlock();
                if (idx == -1)
                    break;

                // read
                char* pbuffer = _buffer_list[idx];
                std::cout << pbuffer << std::endl;
                // simulate a very slow read
                // Sleep(500);

                // return the buffer to the writable list, under the lock
                _mutex.lock();
                _write_list.push_back(idx);
                _mutex.unlock();
                // notify the writer
                _ew->notify_all();
            }
        }
        // do other work
    }
}

void thread_write(event* _er, event* _ew)
{
    int global = 0;
    while (!_stopflag)
    {
        // wait until something is writable
        if (_ew->wait_for(std::chrono::milliseconds(100)))
        {
            while (true)
            {
                // take an index from the writable list
                int idx = -1;
                _mutex.lock();
                if (!_write_list.empty())
                {
                    idx = *_write_list.begin();
                    _write_list.erase(_write_list.begin());
                }
                _mutex.unlock();
                if (idx == -1)
                    break;

                // write
                char* pbuffer = _buffer_list[idx];
                std::memset(pbuffer, 0, 1024);
                std::sprintf(pbuffer, "this is thread %zu writing buffer %d for the %dth time",
                             std::hash<std::thread::id>()(std::this_thread::get_id()),
                             idx, ++global);

                // hand the buffer to the readable list, under the lock
                _mutex.lock();
                _read_list.push_back(idx);
                _mutex.unlock();
                // notify the reader
                _er->notify_all();
            }
        }
        // do other work
    }
}

int main()
{
    _buffer_list.push_back(buffer1);
    _buffer_list.push_back(buffer2);

    event event_read, event_write;
    std::list<std::thread> _list_thr;
    // read thread
    _list_thr.push_back(std::thread(thread_read, &event_read, &event_write));
    // write thread
    _list_thr.push_back(std::thread(thread_write, &event_read, &event_write));

    system("pause");
    // at the beginning, every buffer is writable
    for (size_t i = 0; i < _buffer_list.size(); ++i)
        _write_list.push_back(i);
    // notify the writer
    event_write.notify_once();

    system("pause");
    _stopflag = true;
    for (auto& thr : _list_thr)
        thr.join();
    return 0;
}
