Large Distributed C++ Framework, Part Four: NetIO's Buffer Manager


The weekly post is here again. This time the main topic is NetIO's buffer manager. Buffer management is an unavoidable problem for every network layer, and using buffers efficiently is a key issue, so here I'll mainly describe how our NetIO handles it. To be honest, of all the buffer managers I've seen, this is one of the more painful ones to read; it took me several days to work through it.

I recently watched some videos from QCon 2016. There are a lot of talks introducing distributed platforms, and they're quite good. Our distributed system feels a bit simpler than theirs. Interested readers can take a look:

http://daxue.qq.com/content/special/id/20

1.1 First, let's look at the case where a single recv receives the full packet.

1) The system call recv reads data from TCP into m_achRecvBuf[TPT_RECV_BUF_LEN]; this buffer is 128*1024 bytes.

2) Check the packet header: first verify that it starts with 0x5A5A, then parse the header to get the total length being sent over. If it exceeds 1024*1024, that is an error; 1024*1024 is the size given at initialization, because our largest request packet is capped at 1 MB.

3) If a single recv from TCP yields the full packet, NetIO does not use the buffer manager at all; it simply calls m_pSink->OnRecv to throw the packet to NetIO's app class, and waits for the app class to do the concrete processing on the packet. Once the network layer sees the packet has been processed, it jumps straight back to the while loop to wait for new events.
int CNetHandleMng::_RetrievePkgData(int nHandle, char* pRcvBuf, int nBufLen)
{
    ...
    // the current packet has been read completely
    m_pSink->OnRecv(nHandle, pRcvBuf + TPT_HEAD_LEN, dwPkgLen);
    return (dwPkgLen + TPT_HEAD_LEN);
}
int CNetHandleMng::OnRecv(int nHandle, char* pRcvBuf, int nBufLen)
{
    StConn* pConn = _GetConn(nHandle);
    if (NULL == pConn)
    {
        std::stringstream oss;
        oss << "reactor report recv data for connection handle " << nHandle
            << " but we can't find the connection data" << std::endl;
        m_pSink->ReportTptError(__FILE__, __LINE__, __func__, oss.str().c_str());
        return 0;
    }

    int nReadLen = 0;
    if (0 == pConn->m_pRcvBuf->m_nDataLen)
    {
        nReadLen = _RetrievePkgsData(nHandle, pRcvBuf, nBufLen);
        if (nReadLen < 0)
            return nReadLen;  // the reactor layer will automatically close the connection
        if (nReadLen >= nBufLen)
            return 0;         // the data has been processed
        .....
    }

We can see that when a single recv collects the entire packet, the platform does not use the buffer manager at all; the data is handed directly to NetIO's app class for processing.
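To make the header check in step 2 above concrete, here is a minimal sketch. The header layout (a 2-byte 0x5A5A magic followed by a 4-byte body length), the constant names, and the helper function are my assumptions for illustration, not NetIO's actual definitions; byte order is ignored for brevity.

#include <cstdint>
#include <cstring>

static const uint16_t TPT_MAGIC       = 0x5A5A;       // assumed magic value
static const int      TPT_HEAD_LEN    = 6;            // assumed header size
static const int      TPT_MAX_PKG_LEN = 1024 * 1024;  // the 1 MB cap from init

// Returns the body length on success, -1 on a bad or oversized header.
int ParsePkgHead(const char* pBuf, int nLen)
{
    if (nLen < TPT_HEAD_LEN)
        return -1;                       // not enough bytes for a header yet

    uint16_t wMagic = 0;
    uint32_t dwPkgLen = 0;
    std::memcpy(&wMagic, pBuf, sizeof(wMagic));
    std::memcpy(&dwPkgLen, pBuf + sizeof(wMagic), sizeof(dwPkgLen));

    if (wMagic != TPT_MAGIC)
        return -1;                       // wrong magic: not one of our packets
    if (dwPkgLen > (uint32_t)TPT_MAX_PKG_LEN)
        return -1;                       // exceeds the 1 MB request limit
    return (int)dwPkgLen;
}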

1.2 Now let's look at the case where a single recv cannot receive the full packet.

1.2.1 We first walk through a concrete example, then slowly sum up and summarize.

a) Why 256 pointers? At initialization, CNetioApp::CNetioApp():CNetMsgqSvr(4096*5,1024*1024,4*1024) specifies that the largest request packet is 1024*1024 bytes and that the smallest buffer size in the buffer manager is 4*1024 bytes. (1024*1024 + 4*1024 - 1) / (4*1024) = 256, so an array of 256 row pointers is allocated at startup. Note that only the pointer array is created at this point; no space has been allocated for the buffers the pointers will eventually point to.

b) When a client first connects, some per-connection information is initialized. When the connect request arrives, the buffer manager is asked for a buffer; by default this is a buffer from p[0]. When the buffer manager finds p[0] empty, it creates 10 buffers (the 10 is hard-coded). Since p[0] is the first row, each of these buffers is 4096 bytes. The 10 buffers form a linked list: index=0 was created first and index=9 last. Index=9 is taken out first. Even if no data has arrived yet, that doesn't matter: any buffer pointed to by a connection's info structure is considered in use, so p[0] now points to index=8. Note that the row pointer always points to a buffer that is not in use; this is important. If a row runs out of free buffers, more are created.

c) First recv of 16384 bytes. The 16384 bytes received are larger than the 4096-byte buffer, so the buffer manager allocates 10 buffer blocks in row p[3], each of 1024*4*4 = 16384 bytes, which exactly fits the 16384 bytes just received. Because the new buffer block is not from p[0], the p[0] buffer is returned first (p[0] moves from index=8 back to index=9), and the connection's info block is re-pointed at p[3]'s index=9. As said before, the row pointer must point to an unused buffer, so p[3] now points to index=8. The number of bytes saved so far is recorded in m_nDataLen. The packet isn't over yet, and we need to continue collecting data.

d) Second recv of 16384 bytes. Note that the second recv also yields 16384 bytes, so we now have 16384 + 16384 = 32768 bytes. The buffers in row p[3] cannot hold this, so new buffers must be created: this time 10 buffers are created in row p[7], each of 4*1024*8 = 32768 bytes, which exactly holds all the data. The p[3] buffer is then returned: p[3] points to index=9 again, and that buffer's m_nDataLen is set to 0, which means p[3]'s index=9 is released (in fact its contents still remain and are not cleared). Then the accumulated data is copied into p[7]'s index=9.

e) The remaining recvs all follow similar logic: return the old space, then apply for a new one. I saw a total of 131,158 bytes of content arrive over 6 recv calls, and the buffer manager swapped buffers 7 times (counting the one picked at initialization) before finding a buffer that fit the whole content: the row p[32] buffer, whose size is 4*1024*33 = 135168 bytes. A minimal sketch of this row-selection and free-list logic follows.
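To make the mechanism above concrete, here is a minimal sketch of the row-selection and free-list logic as I understand it from the description. The names (BufferMgr, BufBlock, m_pRows) and the exact layout are my reconstruction, not NetIO's actual declarations; the hard-coded batch of 10 and the row sizing follow the article.

#include <cstddef>
#include <cstring>

static const int MIN_BUF_SIZE = 4 * 1024;     // smallest buffer (row 0)
static const int MAX_PKG_SIZE = 1024 * 1024;  // largest request packet
static const int ROW_COUNT =
    (MAX_PKG_SIZE + MIN_BUF_SIZE - 1) / MIN_BUF_SIZE;  // = 256
static const int BATCH_COUNT = 10;            // hard-coded batch size

struct BufBlock
{
    BufBlock* pNext;     // free-list link within a row
    int       nDataLen;  // bytes stored so far; 0 means "released"
    char*     pData;     // (row + 1) * MIN_BUF_SIZE bytes
};

class BufferMgr
{
public:
    BufferMgr() { std::memset(m_pRows, 0, sizeof(m_pRows)); }

    // Hand out a buffer from the row whose size just fits nBytes.
    BufBlock* Acquire(int nBytes)
    {
        int row = (nBytes + MIN_BUF_SIZE - 1) / MIN_BUF_SIZE - 1;
        if (row < 0) row = 0;
        if (row >= ROW_COUNT) return NULL;  // over the 1 MB limit
        if (m_pRows[row] == NULL)
            _GrowRow(row);                  // create 10 more buffers
        BufBlock* p = m_pRows[row];         // head is always an unused one
        m_pRows[row] = p->pNext;            // row pointer moves past it
        return p;
    }

    // Return a buffer: push it back on the head of its row's free list.
    // Nothing is ever freed, matching the behavior described above.
    void Release(int row, BufBlock* p)
    {
        p->nDataLen = 0;                    // mark released; data not cleared
        p->pNext = m_pRows[row];
        m_pRows[row] = p;
    }

private:
    void _GrowRow(int row)
    {
        int nSize = (row + 1) * MIN_BUF_SIZE;
        for (int i = 0; i < BATCH_COUNT; ++i)  // index=0 first, index=9 last
        {
            BufBlock* p = new BufBlock;
            p->nDataLen = 0;
            p->pData = new char[nSize];
            p->pNext = m_pRows[row];           // newest batch member at head
            m_pRows[row] = p;
        }
    }

    BufBlock* m_pRows[ROW_COUNT];  // the 256 row pointers, filled lazily
};

Note how Release pushes onto the head of the free list; that LIFO order is exactly what section 1.2.4 below describes.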

1.2.2 Total buffer size under normal conditions

After NetIO has been handling packets for a while, if packets of every size have shown up, what does it end up with? Buffers will have been created under all 256 pointers. The buffer size of a row is 4*1024 times the row number: the first row's buffers are 4*1024*1 bytes and the last row's are 4*1024*256 bytes. And created buffers are never released. Let's work out how big the total gets: 4*1024*(1+2+3+...+256) = 134,742,016 bytes, and 134742016/(1024*1024) ≈ 128.5, so about 128 MB. But this is only the low-concurrency case. Let's see what happens when concurrent requests are high.
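That figure is easy to verify with a standalone snippet. Note it follows the article's formula, which counts one buffer per size class, even though rows are actually grown 10 buffers at a time:

#include <cstdio>

int main()
{
    // 4*1024 * (1 + 2 + ... + 256), one buffer per size class as in the
    // article's formula; with 10 buffers per row the footprint is larger.
    long long total = 0;
    for (int row = 1; row <= 256; ++row)
        total += 4LL * 1024 * row;
    std::printf("%lld bytes = %.1f MB\n",
                total, total / (1024.0 * 1024.0));
    // prints: 134742016 bytes = 128.5 MB
    return 0;
}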

1.2.3 Total buffer size in a high-concurrency scenario

Assume 20 requests arrive concurrently. To keep the analysis simple, assume each request's data fits within 4*1024 bytes. After the first 10 buffers of p[0] have been handed out, p[0] points to NULL. But requests are still coming in, so allocation continues: another 10 buffers are created at this point.

Once the 10 extra buffers have been allocated, the later requests are served from them in the same way. In code: each time the network layer requests a buffer from the buffer manager, the manager checks whether the row pointer is empty. If it is not empty, that space is handed out to hold the data; if it is empty, another 10 buffers are allocated. Seeing all this, it is clear that when concurrent requests are heavy, this buffer pool can balloon to a frightening size, and since created space is never deleted, the process will keep a very high memory footprint.

1.2.4 The release of buffers

Consider an example: requests 7, 4 and 8 return their space, in that order. (Requests 1 through 10 were handed index=9 down to index=0, so these three hold index=3, index=6 and index=2.) p[0] first points to index=3, then to index=6, and finally to index=2. So p[0] actually points to the head of a chain of unused buffers: p[0] -> index2 -> index6 -> index3. The next time a new request comes in, index2 is handed to it.

Summary:

1) NetIO's buffer manager is quite troublesome at first sight; it took me two or three days of reading before I understood it. The main implementation idea is somewhat involved, but having seen it, I personally find nothing particularly striking about it. The implementation feels a bit like Google's tcmalloc.

2) The benefit of never releasing what has been allocated is that it does not generate a lot of memory fragmentation.

3) But in a high-concurrency scenario memory explodes, and it never comes back down.

4) Also, for a large packet that needs multiple recvs, the buffer manager keeps swapping buffers to hold the accumulating data. It would be better to parse the header first, determine the packet size, pick one buffer that exactly fits, and then place each recv's data into that same buffer instead of constantly swapping. A sketch of that alternative follows.
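Here is a minimal sketch of the alternative suggested in point 4. It reuses the hypothetical BufferMgr and BufBlock from the earlier sketch, assumes the header has already been parsed into dwPkgLen, and loops on recv only for clarity; in the real reactor you would return to the event loop between reads. None of this is NetIO's actual code.

#include <sys/types.h>
#include <sys/socket.h>
#include <cstddef>

// Receive a whole packet into one exactly-sized buffer, chosen once,
// instead of migrating the accumulated data from row to row per recv.
bool RecvWholePkg(int fd, BufferMgr& mgr, int dwPkgLen)
{
    BufBlock* pBuf = mgr.Acquire(dwPkgLen);  // one buffer that fits it all
    if (pBuf == NULL)
        return false;                        // over the 1 MB limit

    while (pBuf->nDataLen < dwPkgLen)
    {
        ssize_t n = recv(fd, pBuf->pData + pBuf->nDataLen,
                         dwPkgLen - pBuf->nDataLen, 0);
        if (n <= 0)
            return false;                    // peer closed or recv error
        pBuf->nDataLen += (int)n;            // data accumulates in place
    }
    return true;                             // full packet, no buffer swaps
}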
