An excerpt from the chiway translation at http://www.csdn.net/Develop/Read_Article.asp?Id=15224
An application can disable the send buffer by setting SO_SNDBUF to 0 and then issue a blocking send() call. In that case the kernel locks the application's buffer and does not return from send() until the receiver has acknowledged the entire buffer. This looks like a simple way to know whether the other side has received all of your data, but it is not. Even after the remote TCP has acknowledged the data, there is no guarantee the data has actually been delivered to the client application; for example, the peer may be short of resources, leaving AFD.sys unable to copy the data up to the application. An even bigger problem is that each thread can have only one send in progress at a time, which is extremely inefficient.
Setting SO_RCVBUF to 0 and disabling AFD.sys's receive buffer does not improve performance either; it merely forces incoming data to be buffered at a lower level than Winsock. When you post a receive call, a buffer copy is still made, so the copy you were hoping to avoid happens anyway.
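For concreteness, here is a minimal sketch of the setsockopt calls under discussion; DisableSocketBuffers is a made-up helper name, and error handling is trimmed to the bare minimum.

    // Sketch only: set the per-socket send and receive buffers to zero on an
    // existing socket. Not a recommendation, just what the text describes.
    #include <winsock2.h>
    #pragma comment(lib, "ws2_32.lib")

    bool DisableSocketBuffers(SOCKET s)
    {
        int zero = 0;

        // No AFD.sys send buffering: data goes to TCP straight from the
        // (locked) application buffer.
        if (setsockopt(s, SOL_SOCKET, SO_SNDBUF,
                       (const char *)&zero, sizeof(zero)) == SOCKET_ERROR)
            return false;

        // No AFD.sys receive buffering: incoming data can be held only below
        // Winsock (by TCP) until a receive is posted.
        if (setsockopt(s, SOL_SOCKET, SO_RCVBUF,
                       (const char *)&zero, sizeof(zero)) == SOCKET_ERROR)
            return false;

        return true;
    }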
By now it should be clear that disabling the buffers is not a good idea for most applications. As long as the application keeps several overlapped WSARecv calls outstanding on each connection, there is usually no need to turn off the receive buffer: if AFD.sys always has an application-supplied buffer available, it has no reason to use its internal buffers.
A high-performance server application can disable the send buffer without losing performance, but it must be careful to always keep multiple overlapped send calls outstanding instead of waiting for one send to complete before posting the next. If sends are posted strictly one after another, the gap between the completion of one send and the posting of the next is wasted. In short, make sure that as soon as the transport driver finishes with one buffer it can immediately switch to another.
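As an illustration of that pattern, here is a hedged sketch that primes a connection with several overlapped WSASend calls. SEND_DEPTH, SEND_SIZE, and PER_SEND are invented names, and the completion handling (normally an I/O completion port loop that refills and reposts each buffer) is omitted.

    // Keep several overlapped sends in flight so the transport driver never
    // has to wait for the application to post the next buffer.
    #include <winsock2.h>
    #include <cstring>
    #pragma comment(lib, "ws2_32.lib")

    const int SEND_DEPTH = 4;       // sends kept in flight at all times
    const int SEND_SIZE  = 4096;

    struct PER_SEND {
        WSAOVERLAPPED ov;
        WSABUF        wsabuf;
        char          data[SEND_SIZE];   // filled with application data
    };

    bool PrimeSends(SOCKET s, PER_SEND sends[SEND_DEPTH])
    {
        for (int i = 0; i < SEND_DEPTH; ++i) {
            memset(&sends[i].ov, 0, sizeof(sends[i].ov));
            sends[i].wsabuf.buf = sends[i].data;
            sends[i].wsabuf.len = SEND_SIZE;

            DWORD bytesSent = 0;
            if (WSASend(s, &sends[i].wsabuf, 1, &bytesSent, 0,
                        &sends[i].ov, NULL) == SOCKET_ERROR &&
                WSAGetLastError() != WSA_IO_PENDING)
                return false;       // a real failure, not just "queued"
        }
        return true;
    }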
So it seems that setting the buffers to 0 is not much of an advantage.
----------------------------
Let's look at how the system handles a typical send call when the send buffer size is non-zero. When an application makes a send call, if there is sufficient buffer space, the data is copied into the socket's send buffers, the call completes immediately with success, and the completion is posted. On the other hand, if the socket's send buffer is full, then the application's send buffer is locked and the send call fails with WSA_IO_PENDING. After the data in the send buffer is processed (for example, handed down to TCP for processing), Winsock will process the locked buffer directly. That is, the data is handed to TCP directly from the application's buffer, and the socket's send buffer is bypassed completely.
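The two outcomes just described show up directly in the return value of an overlapped WSASend. The following sketch (PostOneSend is an invented helper name) shows the check:

    #include <winsock2.h>
    #include <cstdio>
    #pragma comment(lib, "ws2_32.lib")

    void PostOneSend(SOCKET s, WSABUF *buf, WSAOVERLAPPED *ov)
    {
        DWORD bytesSent = 0;
        int rc = WSASend(s, buf, 1, &bytesSent, 0, ov, NULL);

        if (rc == 0) {
            // Room in the socket's send buffer: data was copied, the call
            // completed at once, and a completion is still posted.
            printf("completed immediately, %lu bytes\n", bytesSent);
        } else if (WSAGetLastError() == WSA_IO_PENDING) {
            // Send buffer full: the application buffer is locked and will be
            // handed straight to TCP; wait for the completion notification.
            printf("pending: application buffer locked\n");
        } else {
            printf("WSASend failed: %d\n", WSAGetLastError());
        }
    }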
The opposite is true for receiving data. When an overlapped receive call is performed, if data has already been received on the connection, it will be buffered in the socket's receive buffer. This data will be copied directly into the application's buffer (as much as will fit), the receive call returns success, and a completion is posted. However, if the socket's receive buffer is empty when the overlapped receive call is made, the application's buffer is locked and the call fails with WSA_IO_PENDING. Once data arrives on the connection, it will be copied directly into the application's buffer, bypassing the socket's receive buffer altogether.
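The receive side mirrors the send side; this companion sketch (again with an invented helper name) distinguishes the data-already-buffered case from the pending case:

    #include <winsock2.h>
    #include <cstdio>
    #pragma comment(lib, "ws2_32.lib")

    void PostOneRecv(SOCKET s, WSABUF *buf, WSAOVERLAPPED *ov)
    {
        DWORD bytesRecvd = 0;
        DWORD flags = 0;
        int rc = WSARecv(s, buf, 1, &bytesRecvd, &flags, ov, NULL);

        if (rc == 0) {
            // Data was already in the socket's receive buffer and has been
            // copied into our buffer; a completion is posted as well.
            printf("completed immediately, %lu bytes\n", bytesRecvd);
        } else if (WSAGetLastError() == WSA_IO_PENDING) {
            // Receive buffer was empty: our buffer is locked, and incoming
            // data will be copied into it directly, bypassing the socket buffer.
            printf("pending: waiting for data to arrive\n");
        } else {
            printf("WSARecv failed: %d\n", WSAGetLastError());
        }
    }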
Setting the per-socket buffers to zero generally will not increase performance, because the extra memory copy can be avoided as long as there are always enough overlapped send and receive operations posted. Disabling the socket's send buffer has less of a performance impact than disabling the receive buffer, because the application's send buffer will always be locked until it can be passed down to TCP for processing. However, if the receive buffer is set to zero and there are no outstanding overlapped receives, any incoming data can be buffered only at the TCP level. The TCP driver will buffer only up to the receive window size, which is 17 KB; TCP will grow these buffers as needed up to this limit, and normally they are much smaller. These TCP buffers (one per connection) are allocated out of non-paged pool, which means that if the server has 1000 connections and no receives posted at all, 17 MB of non-paged pool can be consumed! Non-paged pool is a limited resource, and unless the server can guarantee that receives are always posted on every connection, the per-socket receive buffer should be left intact.
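A quick back-of-the-envelope check of that figure, under the stated assumption of a 17 KB window per connection:

    #include <cstdio>

    int main()
    {
        const unsigned long long connections = 1000;
        const unsigned long long windowBytes = 17 * 1024;   // 17 KB window
        unsigned long long worstCase = connections * windowBytes;

        // 1000 connections * 17 KB comes to roughly the 17 MB quoted above.
        printf("worst-case non-paged pool use: %llu bytes (~%.1f MB)\n",
               worstCase, worstCase / (1024.0 * 1024.0));
        return 0;
    }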
Only in a few specific cases does leaving the receive buffer intact lead to decreased performance. Consider a server that handles many thousands of connections and cannot keep a receive posted on each one (this can become very expensive, as you'll see in the next section), and whose clients send data only sporadically. Incoming data will be buffered in the per-socket receive buffer, so when the server finally does issue an overlapped receive it is performing unnecessary work: the overlapped operation issues an I/O request packet (IRP) that completes immediately, after which a notification is sent to the completion port. In this case the server cannot keep enough receives posted, so it is better off performing simple non-blocking receive calls.
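One way to realize that alternative, sketched under the assumption that the server learns some other way that a connection probably has data (the helper names are made up): put the socket into non-blocking mode and drain whatever the per-socket buffer currently holds with plain recv() calls.

    #include <winsock2.h>
    #pragma comment(lib, "ws2_32.lib")

    bool MakeNonBlocking(SOCKET s)
    {
        u_long enable = 1;                   // 1 = non-blocking mode
        return ioctlsocket(s, FIONBIO, &enable) == 0;
    }

    // Drain whatever is currently buffered for this connection without
    // posting an overlapped receive and without blocking a worker thread.
    int DrainSocket(SOCKET s, char *buf, int bufLen)
    {
        int total = 0;
        for (;;) {
            int n = recv(s, buf, bufLen, 0);
            if (n > 0) {
                total += n;                  // process or copy the data here
            } else if (n == 0) {
                return total;                // connection closed gracefully
            } else if (WSAGetLastError() == WSAEWOULDBLOCK) {
                return total;                // nothing more buffered right now
            } else {
                return -1;                   // hard error
            }
        }
    }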