Revisiting read/write double buffering and multi-buffering



I haven't written double-buffering code for a long time, and now I have some time to study it again.

We often hear about double buffering, but we seldom see multi-buffering used, at least in most cases. Why don't we need more buffers? Let's analyze that today. More buffering is not automatically better; it depends on the application scenario. To simplify the analysis, assume there is exactly one read thread and one write thread, and let the read time be rt and the write time be wt. There are three situations:

1. When rt = wt, the read time equals the write time. In this case two buffers are enough. Take a look at the timing chart below.

From the chart we can see that write 1 starts first; the moment write 1 completes, read 1 begins and write 2 starts at the same time. When read 1 completes, write 2 has also completed. So in theory, double buffering is exactly sufficient in this case.

 

2. When rt < wt, reading is faster than writing, that is, the read time is less than the write time. How many buffers should be used now? In theory, still no more than two. See the timing chart below.

Since the write time is longer than the read time, write 1 starts first; when it completes, read 1 and write 2 start. When read 1 completes, write 2 is still not finished, so at that point even more buffers would be useless (multithreaded writing is not considered here). Two buffers are therefore enough. For higher performance, the best option is to write with multiple threads, given a multi-core CPU.

3. When rt > wt, writing is faster than reading. In theory, 2 to 3 buffers are enough. See the timing chart below.

No detailed walkthrough this time, because it mirrors the previous case: reading is now the bottleneck, so piling on buffers gains nothing (beyond occupying space). The simulation sketch below makes all three cases concrete.
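Here is a small simulation sketch (my own illustration, not from the original post). One writer and one reader pipeline K items through B buffers; item k goes into buffer k % B, the writer waits for its own previous write and for the buffer to be read back, and the reader consumes items in order. The names simulate, B, and K are illustrative.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    // Finish time of the last read when one writer and one reader pipeline
    // K items through B buffers, with write time wt and read time rt
    // (arbitrary time units). Item k is written into buffer k % B.
    long simulate(int B, int K, long wt, long rt)
    {
        long write_done = 0;           // finish time of the previous write
        long read_done = 0;            // finish time of the previous read
        std::vector<long> freed(B, 0); // when each buffer was last read back
        for (int k = 0; k < K; ++k) {
            // the writer waits for its previous write and for the buffer
            long start_w = std::max(write_done, freed[k % B]);
            write_done = start_w + wt;
            // the reader waits for its previous read and for this write
            long start_r = std::max(read_done, write_done);
            read_done = start_r + rt;
            freed[k % B] = read_done;
        }
        return read_done;
    }

    int main()
    {
        const int K = 1000;
        for (int B = 1; B <= 4; ++B)
            std::printf("rt == wt: B=%d total=%ld\n", B, simulate(B, K, 10, 10));
        for (int B = 1; B <= 4; ++B)
            std::printf("rt <  wt: B=%d total=%ld\n", B, simulate(B, K, 20, 10));
        for (int B = 1; B <= 4; ++B)
            std::printf("rt >  wt: B=%d total=%ld\n", B, simulate(B, K, 10, 20));
        return 0;
    }

Running it shows the pattern claimed above: with B = 1 the total time is K * (rt + wt); from B = 2 on it drops to roughly min(rt, wt) + K * max(rt, wt); and B = 3 or B = 4 changes nothing further. In this idealized model two buffers already reach full throughput even when writing is faster; a third buffer only helps absorb jitter in real workloads.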

If you see any problems, please point them out, and thank you! Finally, the code: it locks _read_list and _write_list so that a single implementation satisfies all three time relationships. Once the model is fixed, you can remove the lock and adopt a faster method, such as the lock-free ping-pong sketched after the listing.

    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <cstdlib>
    #include <cstring>
    #include <functional>
    #include <iostream>
    #include <list>
    #include <mutex>
    #include <thread>
    #include <vector>

    // Event is a small user-defined event wrapper; a sketch is given after the listing.

    char buffer1[1024];
    char buffer2[1024];
    std::vector<char*> _buffer_list;
    std::vector<int> _read_list;        // indexes of buffers that are readable
    std::vector<int> _write_list;       // indexes of buffers that are writable
    std::mutex _mutex;                  // synchronization lock
    std::atomic<bool> _stopflag(false); // stop signal for both threads

    void thread_read(Event* _er, Event* _ew)
    {
        while (!_stopflag) {
            // wait until some buffer becomes readable
            if (_er->wait_for(std::chrono::milliseconds(2000))) {
                while (true) {
                    // take one index out of the readable set
                    int idx = -1;
                    _mutex.lock();
                    if (!_read_list.empty()) {
                        idx = *_read_list.begin();
                        _read_list.erase(_read_list.begin());
                    }
                    _mutex.unlock();
                    if (idx == -1)
                        break;
                    // read (print) the buffer
                    char* pbuffer = _buffer_list[idx];
                    std::cout << pbuffer << std::endl;
                    // simulate a slow reader (Windows-only)
                    // Sleep(500);
                    // return the buffer to the writable set
                    _mutex.lock();
                    _write_list.push_back(idx);
                    _mutex.unlock();
                    // notify the writer
                    _ew->notify_all();
                }
            }
            // do other work
        }
    }

    void thread_write(Event* _er, Event* _ew)
    {
        int global = 0;
        while (!_stopflag) {
            // wait until some buffer becomes writable
            if (_ew->wait_for(std::chrono::milliseconds(2000))) {
                while (true) {
                    // take one index out of the writable set
                    int idx = -1;
                    _mutex.lock();
                    if (!_write_list.empty()) {
                        idx = *_write_list.begin();
                        _write_list.erase(_write_list.begin());
                    }
                    _mutex.unlock();
                    if (idx == -1)
                        break;
                    // fill the buffer
                    char* pbuffer = _buffer_list[idx];
                    memset(pbuffer, 0, 1024);
                    sprintf(pbuffer, "this is thread %zu writing buffer %d, %d times",
                            std::hash<std::thread::id>()(std::this_thread::get_id()),
                            idx, ++global);
                    // hand the buffer over to the readable set
                    _mutex.lock();
                    _read_list.push_back(idx);
                    _mutex.unlock();
                    // notify the reader
                    _er->notify_all();
                }
            }
            // do other work
        }
    }

    int main()
    {
        _buffer_list.push_back(buffer1);
        _buffer_list.push_back(buffer2);
        Event event_read, event_write;
        std::list<std::thread> _list_thr;
        // reader thread
        _list_thr.push_back(std::thread(thread_read, &event_read, &event_write));
        // writer thread
        _list_thr.push_back(std::thread(thread_write, &event_read, &event_write));
        system("pause");
        // at the start, every buffer is writable
        for (size_t i = 0; i < _buffer_list.size(); ++i)
            _write_list.push_back((int)i);
        // wake the writer
        event_write.notify_once();
        system("pause");
        _stopflag = true;
        for (auto& thr : _list_thr)
            thr.join();
        return 0;
    }
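The Event class used by the listing is not included in the post, and the original's policy_all / policy_once look like machine-translation artifacts of notify_all / notify_once, which is what the cleaned-up listing calls. A minimal Event sketch built on std::condition_variable, assuming wait_for should return true when the event is signaled before the timeout:

    #include <chrono>
    #include <condition_variable>
    #include <mutex>

    class Event {
    public:
        // returns true if the event was signaled before the timeout expired
        bool wait_for(std::chrono::milliseconds ms)
        {
            std::unique_lock<std::mutex> lk(_m);
            bool ok = _cv.wait_for(lk, ms, [this] { return _signaled; });
            _signaled = false; // auto-reset so the next wait blocks again
            return ok;
        }
        void notify_all()
        {
            { std::lock_guard<std::mutex> lk(_m); _signaled = true; }
            _cv.notify_all();
        }
        void notify_once()
        {
            { std::lock_guard<std::mutex> lk(_m); _signaled = true; }
            _cv.notify_one();
        }
    private:
        std::mutex _m;
        std::condition_variable _cv;
        bool _signaled = false;
    };

And for the "remove the lock" remark above: once the model is fixed to exactly one reader, one writer, and two buffers, the index sets and the mutex can be replaced by one atomic flag per buffer. The sketch below is my own illustration of that idea, not the author's code. The writer owns a buffer while its full flag is false, the reader owns it while the flag is true, and both sides visit the buffers in the same 0, 1, 0, 1, ... order, so no lock is needed:

    #include <atomic>
    #include <cstdio>
    #include <thread>

    char bufs[2][1024];
    std::atomic<bool> full[2] = {{false}, {false}}; // true: buffer holds unread data
    std::atomic<bool> done(false);

    void writer(int count)
    {
        int w = 0;
        for (int n = 1; n <= count; ++n) {
            // spin until the reader has handed this buffer back
            while (full[w].load(std::memory_order_acquire)) {}
            std::snprintf(bufs[w], sizeof bufs[w], "message %d", n);
            full[w].store(true, std::memory_order_release); // publish to the reader
            w ^= 1;
        }
        done.store(true, std::memory_order_release);
    }

    void reader()
    {
        int r = 0;
        for (;;) {
            if (full[r].load(std::memory_order_acquire)) {
                std::printf("%s\n", bufs[r]);
                full[r].store(false, std::memory_order_release); // hand it back
                r ^= 1;
            } else if (done.load(std::memory_order_acquire) &&
                       !full[r].load(std::memory_order_acquire)) {
                break; // writer finished and nothing is left to read
            }
        }
    }

    int main()
    {
        std::thread t1(writer, 10), t2(reader);
        t1.join();
        t2.join();
        return 0;
    }

In steady state this behaves exactly like the rt/wt analysis above: the slower side spins briefly on the flag, and throughput is governed by max(rt, wt). Production code would normally yield or back off instead of spinning hot.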

 
