Talking about UDP (packet length, packet-receiving ability, packet loss and process structure selection)

UDP packet Length

The theoretical length of UDP packets

What is the theoretical maximum length of a UDP packet, and how large should a UDP packet actually be? As the UDP header format in Chapter 11 of TCP/IP Illustrated shows, the length field of a UDP packet is 16 bits, so the maximum total length is 2^16-1 = 65535 bytes. The UDP header takes 8 bytes, and the IP header added when the packet is encapsulated at the IP layer takes another 20 bytes, so the maximum theoretical length of the UDP payload is 2^16-1-8-20 = 65507 bytes.
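This arithmetic can be verified directly. A minimal Python sketch (the port number is arbitrary and nothing needs to be listening) shows that a 65507-byte datagram is accepted on loopback, while one byte more is rejected with EMSGSIZE:

```python
import socket

# Total IP packet limit (2^16 - 1) minus 20-byte IP header and 8-byte UDP header
MAX_UDP_PAYLOAD = 2**16 - 1 - 20 - 8  # 65507

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# A maximum-size datagram is accepted on loopback (MTU 65536, no fragmentation)
sock.sendto(b"x" * MAX_UDP_PAYLOAD, ("127.0.0.1", 9999))

# One byte more exceeds the theoretical limit: the kernel rejects it outright
try:
    sock.sendto(b"x" * (MAX_UDP_PAYLOAD + 1), ("127.0.0.1", 9999))
except OSError as e:
    print("rejected:", e.errno)  # EMSGSIZE: "Message too long"
sock.close()
```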

However, this is only the theoretical maximum. TCP/IP is generally described as a four-layer protocol suite: link layer, network layer, transport layer, and application layer. UDP belongs to the transport layer, and in transit the whole UDP packet travels as the data field of the lower-layer protocols, so its length is constrained by the IP layer and the data link layer below it.

MTU-related concepts

The data field of an Ethernet frame must be between 46 and 1500 bytes, a range determined by the physical characteristics of Ethernet. The 1500-byte upper bound is called the MTU (Maximum Transmission Unit) of the link layer. The Internet Protocol allows IP fragmentation, so that a packet can be divided into fragments small enough to pass through links whose MTU is smaller than the packet's original size. Fragmentation happens at the network layer, using the MTU of the interface that sends the packet onto the link. The MTU is the maximum packet size, in bytes, that a given layer of a communication protocol can carry, and it is usually a property of the communication interface (network interface card, serial port, and so on).

In the Internet Protocol, the "path MTU" of a transmission path is defined as the minimum of the MTUs of all IP hops on the path from the source address to the destination address.

It is important to note that the MTU of the loopback interface is not subject to the limits above. To view the loopback MTU:

# cat /sys/class/net/lo/mtu

65536

The effect of IP fragmentation on UDP packet length

As mentioned above, because of the network interface card, the MTU is limited to 1500 bytes, and this length refers to the data field of the link-layer frame. Packets larger than this must be fragmented or they cannot be sent at all. Meanwhile, packet-switched networks are unreliable and drop packets, and the sending IP layer does not retransmit. The receiver can reassemble the datagram and hand it to the upper-layer protocol only after all fragments have arrived; otherwise, from the application's point of view, the whole datagram has been discarded.

Assuming every fragment has the same probability of being dropped, a larger IP datagram is more likely to be discarded, because the loss of any single fragment means the whole datagram is never received. Packets that do not exceed the MTU avoid the fragmentation problem entirely.
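The effect can be put into numbers with a simple independence model (a back-of-the-envelope sketch, not from the original text): if each fragment is dropped with probability p, a datagram of n fragments is lost whenever any one fragment is.

```python
def datagram_loss_prob(p: float, n_fragments: int) -> float:
    """Loss probability of a datagram whose n fragments are each
    dropped independently with probability p."""
    return 1 - (1 - p) ** n_fragments

# A 1472-byte UDP payload fits in one 1500-byte frame, while a 65507-byte
# payload (65515 bytes of IP data) needs ceil(65515 / 1480) = 45 fragments.
p = 0.01
print(datagram_loss_prob(p, 1))    # 1% for an unfragmented packet
print(datagram_loss_prob(p, 45))   # roughly 36% for a maximum-size datagram
```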

The MTU value does not include the 18 bytes of link-layer header and trailer, so the 1500 bytes is the length limit on the network-layer IP datagram. Since the IP header is 20 bytes, the IP datagram can carry at most 1480 bytes of data, and those 1480 bytes hold the TCP segment or UDP datagram handed down by the transport layer. Because the UDP header is 8 bytes, the data field of a UDP datagram can be at most 1472 bytes. Those 1472 bytes are what we can actually use.

What happens when we send more than 1472 bytes of UDP data? The IP datagram then exceeds 1500 bytes, which is greater than the MTU, so the sender's IP layer must fragment it into pieces each smaller than the MTU, and the receiver's IP layer must reassemble them. Worse, because of the nature of UDP, if any one fragment is lost in transit, the receiver cannot reassemble the datagram and the entire UDP datagram is discarded. Therefore, on an ordinary LAN it is best to keep UDP data at or below 1472 bytes.

Programming for the Internet is different, because routers along the way may have their MTUs set to different values. If we send data assuming an MTU of 1500 and some network on the path has an MTU smaller than 1500 bytes, the system uses a series of mechanisms to adjust the packet size so that the datagram can still reach its destination. Since the standard MTU value on the Internet is 576 bytes, it is best to keep UDP data within 548 bytes (576-8-20) when programming UDP for the Internet.
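The payload budgets quoted above are just MTU minus headers; a trivial helper makes the arithmetic explicit:

```python
def max_udp_payload(mtu: int, ip_header: int = 20, udp_header: int = 8) -> int:
    """Largest UDP payload that fits in a single frame of the given MTU
    (assumes the minimal 20-byte IP header, i.e. no IP options)."""
    return mtu - ip_header - udp_header

print(max_udp_payload(1500))  # 1472: safe on a typical Ethernet LAN
print(max_udp_payload(576))   # 548: safe across the public Internet
```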

UDP packet Loss

UDP packet loss here refers to packets dropped by the Linux kernel's TCP/IP protocol stack while processing UDP packets. There are two main causes:

1. The UDP packet is malformed or fails the checksum check.

2. The application cannot process UDP packets fast enough.

Cause 1 is rare, since malformed UDP packets are seldom seen in practice and the application has no control over them, so this article does not discuss it.

First, a universal way to detect UDP packet loss: run the netstat command with the -su option.

# netstat -su
Udp:
    2495354 packets received
    2100876 packets to unknown port received
    3596307 packet receive errors
    14412863 packets sent
    RcvbufErrors: 3596307
    SndbufErrors: 0

In the output above there is a line containing "packet receive errors". If you run netstat -su at intervals and the number at the beginning of that line keeps growing, UDP packet loss is occurring.
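The counters netstat prints come from /proc/net/snmp, so the check can be scripted. A Linux-only sketch (field names are those of the kernel's Udp: lines):

```python
def read_udp_counters(path: str = "/proc/net/snmp") -> dict:
    """Parse the two 'Udp:' lines (header line + value line) in /proc/net/snmp."""
    with open(path) as f:
        udp_lines = [line.split() for line in f if line.startswith("Udp:")]
    header, values = udp_lines[0][1:], udp_lines[1][1:]
    return dict(zip(header, map(int, values)))

# Sample twice at an interval; growing InErrors/RcvbufErrors means receive drops
counters = read_udp_counters()
print(counters["InErrors"], counters["RcvbufErrors"])
```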

Here are some common reasons why an application fails to process UDP packets in time, causing drops:

1. The Linux kernel socket buffer is set too small

# cat /proc/sys/net/core/rmem_default
# cat /proc/sys/net/core/rmem_max

These two files show the default and maximum sizes of the socket receive buffer.

How large should rmem_default and rmem_max be? If the server is not under heavy load and there are no strict latency requirements, about 1 MB is fine. If the server is under heavy load, or latency requirements are strict, you must set rmem_default and rmem_max carefully: too small and packets will be dropped; too large and a backlog can build up in the buffer, causing a snowball effect on processing latency.
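Per socket, an application can also request a larger receive buffer with SO_RCVBUF; the kernel silently caps the request at rmem_max, and on Linux stores (and reports back) double the requested value to account for bookkeeping overhead. A minimal sketch:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

requested = 1 << 20  # ask for a 1 MB receive buffer
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, requested)

# Read back the effective size: on Linux it is min(requested, rmem_max) * 2,
# so a value far below 2 * requested means rmem_max clipped the request.
effective = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("effective receive buffer:", effective, "bytes")
sock.close()
```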

2. The server load is too high, consuming a lot of CPU, so UDP packets in the Linux kernel socket buffer are not processed in time, resulting in packet loss.

In general, server load is too high for one of two reasons: too many UDP packets are being received, or the server process has a performance bottleneck. If too many UDP packets are being received, consider scaling out. A performance bottleneck in the server process falls under performance optimization and is not discussed further here.

3. Disk IO is busy

The server performs so many IO operations that the process blocks: the CPU waits on disk IO, and UDP packets in the kernel socket buffer are not processed in time. If the business itself is IO-intensive, consider optimizing the architecture and using caching sensibly to reduce disk IO.

An easily overlooked case: many servers log to the local disk. An operational mistake that leaves the log level too high, or a sudden burst of errors, can generate a flood of IO requests writing logs to disk; disk IO becomes busy and UDP packets are dropped.

Against operational mistakes, strengthen management of the runtime environment to prevent errors. If the business genuinely needs to write a large volume of logs, use an in-memory log or a remote log.
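The in-memory log idea can be sketched with the standard library's MemoryHandler, which keeps records in RAM and writes to disk only when the buffer fills or a severe record arrives (the file path and thresholds below are illustrative):

```python
import logging
import logging.handlers

# One disk write per ~1000 records instead of one per log line
file_handler = logging.FileHandler("/tmp/app.log")
buffered = logging.handlers.MemoryHandler(
    capacity=1000,                # flush after 1000 buffered records...
    flushLevel=logging.ERROR,     # ...or immediately on ERROR and above
    target=file_handler,
)

logger = logging.getLogger("udp-server")
logger.setLevel(logging.INFO)
logger.addHandler(buffered)

logger.info("held in memory, not yet written to disk")
logger.error("an ERROR flushes the whole buffer to /tmp/app.log")
buffered.close()  # flushes any remaining records on shutdown
```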

4. Physical memory is insufficient and the system swaps

Swapping is essentially another form of busy disk IO, but because it is special and easily overlooked, it is listed separately.

This problem can be avoided by planning the use of physical memory and setting the system parameters appropriately.

5. The disk is full, so no IO is possible

If disk usage is not planned and monitoring is inadequate, the disk fills up and the server process, unable to perform IO, sits blocked. The fundamental fix is to plan disk usage, prevent business data or log files from filling the disk, and strengthen monitoring, for example with a common tool that raises an alarm once disk usage reaches 80%, leaving ample time to respond.
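A check like the one suggested can be sketched with shutil.disk_usage (the path and threshold are illustrative):

```python
import shutil

def disk_usage_alarm(path: str = "/", threshold: float = 0.80) -> bool:
    """Return True and print a warning when the filesystem holding
    `path` is more than `threshold` full."""
    usage = shutil.disk_usage(path)
    ratio = usage.used / usage.total
    if ratio >= threshold:
        print(f"ALARM: {path} is {ratio:.0%} full")
        return True
    return False

# Run from cron or a monitoring agent at some interval
disk_usage_alarm("/")
```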

UDP packet-receiving capability test

Test environment: Intel(R) Xeon(R) CPU X3440 @ 2.53 GHz, 4 cores, 8 hyper-threads; Gigabit Ethernet card; 8 GB memory

Model 1

Standalone, single-threaded asynchronous UDP service with no business logic: it only receives packets, each carrying one byte of data beyond the UDP header.

Test results

Number of processes                 1           2           4           8
Average processing speed (pkt/s)    791676.1    1016197     1395040     1491744
Network card traffic (MB/s)         514.361     713.786     714.375     714.036
CPU usage (%)                       100         200         325         370



Phenomena:

1. A single machine can process about 1.5 million UDP packets per second.

2. Processing capacity increases with the number of processes.

3. CPU resources are not exhausted at peak processing.

Conclusions:

1. UDP processing capacity is considerable.

2. From phenomena 2 and 3, the bottleneck is the network card, not the CPU: as more processes are added, the gain appears as a reduction in dropped packets (udp_error) rather than higher CPU-bound throughput.

Model 2

Other test conditions are the same as Model 1, except that each packet carries 100 bytes of data beyond the UDP header.

Test results

Number of processes                 1           2           4           8
Average processing speed (pkt/s)    571433.4    752319.9    731545.6    751922.5
Network card traffic (MB/s)         855.482     855.542     855.546     855.549
CPU usage (%)                       100         112.9       ——          ——

Phenomena:

1. A 100-byte packet size is closer to typical business traffic.

2. UDP processing capacity is still considerable, peaking at about 750,000 packets per second on a single machine.

3. With 4 and 8 processes, CPU consumption was not recorded (the network card was saturated), but the CPU was certainly not exhausted.

4. As the number of processes increases, processing capacity does not improve significantly, but the number of drops (udp_error) falls significantly.

Model 3

Standalone, single-process, multi-threaded asynchronous UDP service, with all threads sharing one fd, no business logic, and one byte of data beyond the UDP header.

Test results:

Number of threads                   1           2
Average processing speed (pkt/s)    791676      509868
Network card traffic (MB/s)         514.361     714.229
CPU usage (%)                       100         150

Phenomenon:

1. As the number of threads increases, processing capacity falls instead of rising.

Conclusions:

1. Multiple threads sharing one fd cause considerable lock contention.

2. When multiple threads share one fd, the arrival of a packet wakes all of them up, causing frequent context switches.

Final conclusions:

1. UDP processing capacity is impressive; under everyday business conditions, UDP generally does not become a performance bottleneck.

2. As the number of processes increases, processing capacity does not rise significantly, but the number of dropped packets falls significantly.

3. Throughout the tests the bottleneck was the network card, not the CPU.

4. Prefer a model in which multiple processes each listen on a different port, over multiple processes or threads listening on the same port.

Summary

UDP packet length

For local (loopback) transmission, the MTU can be set as needed, but remember that the UDP payload has a theoretical maximum of 65507 bytes.

For intranet transmission, it is best to stay within 1472 bytes (1500-8-20).

For transmission over the Internet, it is best to stay within 548 bytes (576-8-20).

UDP packet-receiving capability

UDP processing capacity is impressive; under everyday business conditions, UDP generally does not become a performance bottleneck.

As the number of processes increases, processing capacity does not rise significantly, but the number of dropped packets falls significantly.

Prefer a model in which multiple processes each listen on a different port, over multiple processes or threads listening on the same port.

