Socket architecture for Windows NT and Windows 2000
When developing a Winsock application that must scale to a large number of connections, it helps to have a basic understanding of the socket architecture of Windows NT and Windows 2000.
Unlike many other operating systems, the transport protocols of Windows NT and Windows 2000 do not expose a socket-like interface that applications can talk to directly. Instead, the transports implement a much lower-level API called the Transport Driver Interface (TDI). Winsock's kernel-mode driver, AFD.sys, is responsible for connection and buffer management: it provides socket emulation to applications and communicates with the underlying transport drivers.
Who manages the buffers?
As mentioned above, applications communicate with the transport protocol driver through Winsock, and AFD.sys manages buffering on the application's behalf. When an application calls send() or WSASend() to transmit data, AFD.sys copies the data into its own internal buffer (sized according to the SO_SNDBUF value) and the call returns immediately; AFD.sys then sends the data in the background. If, however, the amount of data exceeds SO_SNDBUF, the send() or WSASend() call blocks until all of the data has been sent.
The same is true for receiving data from a remote client. As long as the incoming data does not exceed the value set by SO_RCVBUF, AFD.sys first copies it into its internal buffer. When the application calls recv() or WSARecv(), the data is copied from the internal buffer into the buffer the application supplies.
In most cases this architecture works well, especially for applications written in the traditional style with non-overlapped send() and recv() calls. But programmers should be careful: although the SO_SNDBUF and SO_RCVBUF options can be set to 0 through setsockopt() to turn off AFD.sys's internal buffering, they must clearly understand the consequences of doing so, and must avoid the buffer-copying pitfalls that can then arise when sending and receiving data.
For example, suppose an application disables the send buffer by setting SO_SNDBUF to 0 and then issues a blocking send() call. The kernel locks the application's buffer, and send() does not return until the receiver has acknowledged the entire buffer. This may look like a simple way to determine whether the other side has fully received your data, but it is not. Even when the remote TCP stack has acknowledged the data, that says nothing about whether the data was delivered to the client application; for example, the peer may be short of resources, leaving its AFD.sys unable to copy the data up to the application. An even more serious problem is that each thread can have only one send in progress at a time, which is extremely inefficient.
Setting SO_RCVBUF to 0 to disable AFD.sys's receive buffer does not improve performance either. It merely forces received data to be buffered at a lower level than Winsock, and when you post a receive call the data still has to be copied into your buffer, so you have not escaped the buffer copy at all.
It should now be clear that disabling the buffers is a bad idea for most applications. As long as the application keeps several overlapped WSARecv() calls posted on each connection at all times, there is usually no need to turn off the receive buffer: if AFD.sys always has an application-supplied buffer available, it has no reason to use its internal buffer.
High-performance server applications can disable the send buffer without losing performance. However, such an application must take great care to keep multiple overlapped send calls outstanding at once, rather than posting the next send only after the previous one completes. If sends are issued strictly one after another, the gap between the completion of one send and the posting of the next is wasted. In short, make sure that as soon as the transport driver finishes with one buffer, it can immediately switch to another.