A classic discussion of socket buffer caching

Source: Internet
Author: User

http://bbs.chinaunix.net/thread-3777707-1-1.html


Q: Looking at /proc/sys/net/core/rmem_max, the maximum UDP receive buffer on my embedded device is 32767 bytes, and I want to increase it to 65535.
Could anyone tell me: besides the memory impact, what other effects does modifying the default Linux UDP buffer size via /proc/sys/net/core/rmem_max have?

A:

rmem_max controls the maximum receive buffer size of a socket (it is not limited to the TCP/UDP protocols).
The right value depends on the maximum amount of data you want the socket to buffer.
For example:
how much data do you expect to read from the socket with recv()?

In fact, when the kernel receives data, it buffers it first.
How much the application then reads at a time depends on the size of the kernel buffer and on how much data has already been processed.


setsockopt is not invoked by the protocol stack; it is invoked by the application that created the socket. rmem_max is a kernel tuning parameter that can be adjusted through /proc/sys/net/core/rmem_max. It is the size of the kernel's receive data buffer, not the size of a single receive: how much the application reads at a time is specified in the arguments to recv(), but the kernel buffers at most rmem_max bytes. That is, the kernel caches incoming data up to rmem_max, and the application drains it by calling recv(); if recv() asks for more than rmem_max, at most rmem_max bytes of data are copied to the application.
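As an illustration of the above, here is a minimal Python sketch (assuming a Linux kernel, where a setsockopt(SO_RCVBUF) request is clamped to rmem_max and then doubled for bookkeeping overhead, as documented in socket(7)):

```python
import socket

# Request a 32K receive buffer, then ask the kernel what it actually set.
# On Linux the kernel clamps the request to /proc/sys/net/core/rmem_max and
# doubles it (the extra space holds per-packet bookkeeping, not payload).
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
requested = 32 * 1024
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, requested)
effective = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"requested={requested}, kernel reports={effective}")
s.close()
```

Note that getsockopt() reports a value different from the one requested; the extra space is used by the kernel itself, not by your payload.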


/proc/sys/net/core/rmem_max determines how much data the kernel will cache for a single socket.
For example, after your echo "65535" > /proc/sys/net/core/rmem_max:
suppose program A reads from a socket while program B sends packets from another server. If A processes slowly and B sends quickly, then once the socket's kernel cache exceeds 65535 bytes, the packets B sends will be discarded by A's kernel.
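The drop behaviour described above can be reproduced locally. The sketch below is illustrative only (the loopback address, the deliberately tiny 4096-byte buffer, and the datagram counts are my own choices): it fills a small UDP receive queue without draining it, so later datagrams are silently discarded by the kernel, just like B's packets when A falls behind.

```python
import socket

# Receiver with a deliberately tiny kernel buffer that is never drained
# while the sender is transmitting.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)
rx.bind(("127.0.0.1", 0))
rx.setblocking(False)

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = 100
for _ in range(sent):
    tx.sendto(b"x" * 1024, rx.getsockname())  # receiver never reads in between

# Now drain whatever actually survived in the kernel buffer.
received = 0
while True:
    try:
        rx.recv(2048)
        received += 1
    except BlockingIOError:
        break

print(f"sent={sent}, received={received}, dropped={sent - received}")
tx.close()
rx.close()
```

UDP gives the sender no error here: the datagrams that did not fit are simply gone, which is exactly the "discarded by the kernel" situation the answer describes.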


1: buffer size
"The maximum sizes for socket buffers declared via the SO_SNDBUF and SO_RCVBUF
mechanisms are limited by the values in the /proc/sys/net/core/rmem_max and
/proc/sys/net/core/wmem_max files. Note that TCP actually allocates twice the
size of the buffer requested in the setsockopt(2) call, and so a succeeding
getsockopt(2) call will not return the same size of buffer as requested in the
setsockopt(2) call. TCP uses the extra space for administrative purposes and
internal kernel structures, and the /proc file values reflect the larger sizes
compared to the actual TCP windows. On individual connections, the socket
buffer size must be set prior to the listen(2) or connect(2) calls in order to
have it take effect. See socket(7) for more information."
As you can see from the above, the socket buffer size depends not only on the value set via setsockopt, but also on /proc/sys/net/core/rmem_max and /proc/sys/net/core/wmem_max.
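A small sketch of the clamping behaviour described in the excerpt (assuming Linux, where the limit can be read from /proc): requesting a buffer far above rmem_max does not fail, the kernel just silently limits it.

```python
import socket

# Read the kernel's limit, then deliberately request far more than it allows.
with open("/proc/sys/net/core/rmem_max") as f:
    rmem_max = int(f.read())

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Cap the request at INT_MAX so setsockopt() itself cannot overflow.
request = min(16 * rmem_max, 2**31 - 1)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, request)
effective = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"rmem_max={rmem_max}, requested={request}, effective={effective}")
s.close()
```

The reported value never exceeds twice rmem_max, no matter how large the request: the clamp happens first, the doubling second.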


I looked at the protocol stack code: when setsockopt is called to set the buffer size, the value is indeed compared against /proc/sys/net/core/wmem_max. Now I understand.


Q: I also have a problem, comparing Linux (kernel version 2.6.2) with Windows XP:
1) Create a UDP socket with both the send buffer and the receive buffer set to 32K.
2) When Linux sends 32K of user data, Windows receives the full 32K; but when Windows sends 32K, Linux does not receive it.
Only when 27K or less is sent can Linux receive it (this is what my test data shows).
My question: the receive buffer set by setsockopt is evidently not fully available to the user. Why can't Linux hold that much data?
I am not doing any special processing on Linux.


Packet loss caused by buffer size in UDP:
see UNIX® Network Programming, Volume 1, Section 8.13, "Lack of Flow Control with UDP".
Note the following description:
"Why do we set the receive socket buffer size to 240 x 1,024 in Figure 8.22? The maximum size of a socket receive buffer in FreeBSD 5.1 defaults to 262,144 bytes (256 x 1,024), but due to the buffer allocation policy (described in Chapter 2 of TCPv2), the actual limit is 233,016 bytes. Many earlier systems based on 4.3BSD restricted the size of a socket buffer to around 52,000 bytes."
This answers the Linux question above: with the send and receive buffers set to 32K, only a send of about 27K succeeds, because the buffer holds header structures in addition to the data, so the space actually usable for data falls short of 32K.


The buffer size set on Linux is not entirely available to the user.
Linux sends and receives network data via sk_buff structures: every IP packet carries an sk_buff structure alongside the packet itself, which significantly reduces the buffer space actually available to the application. I noticed that in the 2.6.16 kernel, the default value of
/proc/sys/net/core/wmem_max already includes room for the sk_buff size; I am not sure whether that is right.
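A rough way to see the sk_buff overhead is the Python sketch below (the 32K buffer, the 100-byte datagrams, and the loopback setup are arbitrary illustration values): fill a receive buffer with small datagrams and compare the payload that actually fits against the buffer size the kernel reports. Each datagram is charged for its sk_buff metadata as well as its payload, so the usable payload comes out well below SO_RCVBUF.

```python
import socket

# Receiver with a 32K (nominal) buffer that is never drained while sending.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 32 * 1024)
rx.bind(("127.0.0.1", 0))
rx.setblocking(False)
effective = rx.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

# Flood it with small datagrams; anything past the buffer limit is dropped.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for _ in range(1000):
    tx.sendto(b"x" * 100, rx.getsockname())

# Drain and count the payload bytes that actually made it into the buffer.
payload = 0
while True:
    try:
        payload += len(rx.recv(2048))
    except BlockingIOError:
        break

print(f"reported buffer={effective}, payload that fit={payload}")
tx.close()
rx.close()
```

The gap between the reported buffer size and the payload that fit is the per-packet metadata overhead the answer is talking about; it is largest for small datagrams, where metadata dominates payload.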

