UDP: Packet Length, Packet Reception Capability, Packet Loss, and Process Structure Selection

UDP Packet Length

What is the theoretical maximum length of a UDP packet, and what is a sensible UDP payload size? From the UDP header format in Chapter 11 of TCP/IP Illustrated, Volume 1, the 16-bit length field limits a UDP packet to at most 2^16 - 1 = 65535 bytes. The UDP header occupies 8 bytes and the encapsulating IP header occupies 20 bytes, so the maximum theoretical length of the UDP payload is 65535 - 8 - 20 = 65507 bytes.
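
This limit is easy to check empirically. Below is a minimal Python sketch (the destination 127.0.0.1:9999 is an arbitrary placeholder; no listener is needed, since we only test whether the kernel accepts the size): a 65507-byte payload is accepted, while 65508 bytes fails with "Message too long".

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    for size in (65507, 65508):
        try:
            sock.sendto(b"x" * size, ("127.0.0.1", 9999))
            print(f"{size} bytes: accepted by the kernel")
        except OSError as e:
            print(f"{size} bytes: rejected ({e})")  # expect EMSGSIZE: Message too long
    sock.close()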

However, this is only the theoretical maximum. TCP/IP is generally regarded as a four-layer protocol suite: the link layer, network layer, transport layer, and application layer, with UDP at the transport layer. In transit, a UDP packet travels as the data field of the lower-layer protocols, so its length is constrained by the IP layer and the data link layer below it.

MTU Concepts

An Ethernet data frame must be between 46 and 1500 bytes long, a limit determined by the physical characteristics of Ethernet; the 1500-byte figure is the link layer's MTU (Maximum Transmission Unit). The Internet Protocol allows IP fragmentation, so a datagram can be divided into pieces small enough to cross a link whose MTU is smaller than the datagram's original size. Fragmentation happens at the network layer and uses the MTU of the network interface that puts the packet onto the link. The MTU is the largest packet size, in bytes, that a given layer of a communication protocol can pass onward, and it is usually a property of the communication interface (network interface card, serial port, and so on).

In the Internet protocol suite, the "path MTU" of an Internet transmission path is defined as the smallest of the MTUs of all the links on the path from the source address to the destination address.

Note that the loopback interface's MTU is not subject to this restriction. To check the loopback MTU:

[root@bogon ~]# cat /sys/class/net/lo/mtu

65536

Impact of UDP Packet Length on IP Fragmentation

As mentioned above, the MTU is limited to 1500 bytes by the network interface card, and that length refers to the data area of the link-layer frame. Packets larger than this must be fragmented or they cannot be sent at all, and packet loss is inherent in a packet-switched network. The IP layer on the sender does not retransmit: the receiver can reassemble the datagram and hand it up to the higher-layer protocol only after all of the fragments have arrived; otherwise the fragments that did arrive are discarded.

Assuming every fragment is equally likely to be lost, a larger IP datagram is inevitably more likely to be discarded, because the loss of any single fragment means the whole IP datagram cannot be reassembled. Packets that do not exceed the MTU have no fragmentation problem at all.

The MTU does not include the 18 bytes of link-layer framing at the head and tail of the frame, so the 1500 bytes is the length limit on the network-layer IP datagram. The IP header takes 20 bytes, leaving an IP data area of at most 1480 bytes to carry the TCP segment or UDP datagram. The UDP header takes another 8 bytes, so the UDP data area is at most 1472 bytes. Those 1472 bytes are what the application can actually use.

What happens when we send more than 1472 bytes of UDP data? The IP datagram then exceeds 1500 bytes, which is greater than the MTU, so the sender's IP layer must fragment it into pieces that each fit within the MTU, and the receiver's IP layer must reassemble them. Worse, because of UDP's nature, if any fragment is lost in transit the receiver cannot reassemble the datagram, and the entire UDP datagram is discarded. Therefore, in an ordinary LAN environment, it is best to keep UDP payloads at or below 1472 bytes. A demonstration follows.
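
To see the 1472-byte boundary directly, one can forbid fragmentation and watch the send call fail. The following is a hedged Python sketch for Linux: IP_MTU_DISCOVER/IP_PMTUDISC_DO set the Don't Fragment bit (numeric fallback values come from <linux/in.h>), the destination 192.0.2.1 is a documentation-range placeholder, and an outgoing interface with a 1500-byte MTU is assumed.

    import socket

    # Fall back to the <linux/in.h> values if the socket module does not expose them.
    IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)
    IP_PMTUDISC_DO = getattr(socket, "IP_PMTUDISC_DO", 2)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)  # set DF, never fragment

    for size in (1472, 1473):  # 1473 exceeds 1500 - 20 - 8
        try:
            sock.sendto(b"x" * size, ("192.0.2.1", 9999))
            print(f"{size} bytes: fits in one MTU-sized frame")
        except OSError as e:
            print(f"{size} bytes: would need fragmentation ({e})")  # expect EMSGSIZE
    sock.close()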

When programming for the Internet, routers along the path may be configured with different MTUs. If we send data assuming an MTU of 1500 but some network along the way has an MTU smaller than 1500 bytes, the system must rely on a series of mechanisms to adjust so that the datagram can still reach its destination. Because the standard minimum MTU on the Internet is 576 bytes, it is best to keep the UDP payload within 548 bytes (576 - 20 - 8) when doing UDP programming across the Internet, for example by chunking as sketched below.
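
A simple way to respect that budget is to split the data at the application layer before sending. A minimal sketch (the 548-byte constant follows the calculation above; the destination is a placeholder, and a real protocol would also need sequence numbers so the receiver can reorder and reassemble the chunks):

    import socket

    SAFE_PAYLOAD = 548  # 576-byte Internet minimum, minus 20 (IP) and 8 (UDP)

    def send_chunked(sock, data, addr):
        """Send data as a series of UDP datagrams that stay within SAFE_PAYLOAD."""
        for offset in range(0, len(data), SAFE_PAYLOAD):
            sock.sendto(data[offset:offset + SAFE_PAYLOAD], addr)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_chunked(sock, b"x" * 4096, ("192.0.2.1", 9999))
    sock.close()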

UDP Packet Loss

UDP packet loss here means packets dropped by the Linux kernel TCP/IP stack while processing UDP traffic after the NIC has received it. There are two main causes:

1. The UDP packet is malformed or fails its checksum.

2. The application cannot keep up with the incoming UDP packets.

For cause 1, UDP packet errors are rare and beyond the receiving application's control, so this article does not discuss them.

First, the general way to detect UDP packet loss: run the netstat command with the -su option.

# netstat -su
Udp:
    2495354 packets received
    2100876 packets to unknown port received.
    3596307 packet receive errors
    14412863 packets sent
    RcvbufErrors: 3596307
    SndbufErrors: 0

In the output above, one line contains "packet receive errors". If you run netstat -su at intervals and the number at the start of that line keeps growing, UDP packets are being dropped. This check is easy to automate, as sketched below.
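
The counters that netstat -su prints come from /proc/net/snmp, so a watcher is straightforward. A hedged Python sketch that samples the UDP InErrors counter and reports growth (the 5-second interval is arbitrary):

    import time

    def udp_in_errors():
        """Return the UDP InErrors counter from /proc/net/snmp."""
        with open("/proc/net/snmp") as f:
            lines = [line.split() for line in f if line.startswith("Udp:")]
        header, values = lines[0], lines[1]  # first Udp: line names the columns
        return int(values[header.index("InErrors")])

    last = udp_in_errors()
    while True:
        time.sleep(5)
        now = udp_in_errors()
        if now > last:
            print(f"UDP packet loss: InErrors grew by {now - last} in the last 5s")
        last = now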

The common reasons why an application fails to process UDP packets in time are as follows:

1. The Linux kernel socket buffer is too small

# cat /proc/sys/net/core/rmem_default

# cat /proc/sys/net/core/rmem_max

These show the default and maximum sizes of the socket receive buffer.

What are appropriate values for rmem_default and rmem_max? If the server is under light pressure and has no strict latency requirements, setting them to around 1 MB is enough. If the server is under heavy pressure or has strict latency requirements, set rmem_default and rmem_max with care: too small and packets are dropped; too large and packets pile up in the buffer faster than they are drained, so the backlog and the processing delay snowball.
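
Besides the system-wide sysctl values, an individual service can request a larger buffer for its own socket via SO_RCVBUF. A minimal sketch; the kernel caps the request at rmem_max (and Linux reports roughly double the requested size), so reading the value back shows what was actually granted:

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)  # request 1 MB
    granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
    print(f"receive buffer actually granted: {granted} bytes")
    sock.close()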

2. Server load is too high and consumes a large share of the CPU, so UDP packets in the Linux kernel socket buffer are not processed in time and are dropped.

Generally, high server load has two causes: too many UDP packets arriving, or a performance bottleneck in the server process itself. If too many packets are being received, consider scaling out. Process-level performance bottlenecks are a matter of performance optimization and are not discussed here.

3. Disk I/O is busy

Heavy I/O on the server blocks the process: the CPU waits on disk I/O and cannot drain UDP packets from the kernel socket buffer in time. If the business itself is I/O-intensive, consider optimizing the architecture and using caches sensibly to reduce disk I/O.

One easily overlooked case: many servers log to the local disk. An operational mistake that leaves the log level too verbose, or a sudden burst of errors, can generate a flood of log writes that keeps the disk busy and causes UDP packets to be dropped.

Tightening control of the operating environment prevents such mistakes. If the business genuinely needs to record large volumes of logs, use in-memory logging or remote logging, as sketched below.
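
For remote logging, the standard library already covers the common case. A hedged sketch using Python's SysLogHandler, which ships each record over the network (UDP by default) instead of touching the local disk; the server address is a placeholder:

    import logging
    import logging.handlers

    # Send log records to a remote syslog server instead of a local file.
    handler = logging.handlers.SysLogHandler(address=("192.0.2.10", 514))
    logger = logging.getLogger("app")
    logger.addHandler(handler)
    logger.setLevel(logging.WARNING)  # keep the level conservative in production

    logger.warning("example: error rate exceeded threshold")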

4. Insufficient physical memory leads to swapping

Swapping is, at bottom, just another form of busy disk I/O; it is listed separately because it is a special case that is easy to overlook.

Planning physical memory usage and setting the relevant system parameters sensibly is enough to avoid this problem. A quick way to detect it is sketched below.
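
Whether a host is actively swapping can be read from /proc/vmstat. A small sketch that samples the pswpin/pswpout counters; sustained growth means physical memory is under pressure (the interval is arbitrary):

    import time

    def swap_counters():
        """Return (pages swapped in, pages swapped out) from /proc/vmstat."""
        counters = {}
        with open("/proc/vmstat") as f:
            for line in f:
                key, value = line.split()
                counters[key] = int(value)
        return counters["pswpin"], counters["pswpout"]

    before = swap_counters()
    time.sleep(5)
    after = swap_counters()
    if after[0] > before[0] or after[1] > before[1]:
        print("swap activity detected: physical memory is under pressure")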

5. I/O failure because the disk is full

When disk usage is not planned and monitoring is inadequate, the disk eventually fills up, the server process can no longer perform I/O, and it blocks. The fundamental fix is to plan disk usage so that business data and log files cannot fill the disk, and to strengthen monitoring: for example, build a simple tool that raises a continuous alarm once disk usage reaches 80%, leaving plenty of time to respond.
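
A sketch of such a tool, assuming the 80% threshold mentioned above (in practice it would run from cron or a monitoring agent and page someone rather than print):

    import shutil

    def check_disk(path="/", threshold=0.80):
        """Alarm when usage of the filesystem holding `path` crosses the threshold."""
        usage = shutil.disk_usage(path)
        ratio = usage.used / usage.total
        if ratio >= threshold:
            print(f"ALERT: {path} is {ratio:.0%} full")

    check_disk("/")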

Test Environment for UDP Packet Reception Capability

Processor: Intel(R) Xeon(R) CPU X3440 @ 2.53 GHz, 4 cores, 8 hyper-threads; Gigabit Ethernet card; 8 GB of memory.

Model 1

A single host running single-threaded asynchronous UDP service processes with no business logic, performing receive operations only; each packet carries one byte of data besides the UDP header.

Test Results

    Process count                          1          2          4          8
    Average processing speed (packets/s)   791676.1   1016197    1395040    1491744
    NIC traffic (Mb/s)                     514.361    713.786    714.375    714.036
    CPU usage (%)                          100        200        325        370

Symptoms:

1. A single host can handle roughly 1.5 million UDP packets per second.

2. Processing capability grows as the number of processes increases.

3. At peak throughput, CPU resources are not exhausted.

Conclusions:

1. UDP processing capability is quite impressive.

2. Symptoms 2 and 3 show that the performance bottleneck is the NIC, not the CPU; the extra CPU consumed as processes are added translates into higher throughput mainly by reducing the packet-drop count (UDP_ERROR).

Model 2

Other test conditions are the same as Model 1, except that each packet carries one hundred bytes of data besides the UDP header.

Test Results

    Process count                          1          2          4          8
    Average processing speed (packets/s)   571433.4   752319.9   731545.6   751922.5
    NIC traffic (Mb/s)                     855.482    855.542    855.546    855.549
    CPU usage (%)                          100        112.9      --         --

Symptoms:

1. A packet size of 100 bytes is closer to common business scenarios.

2. UDP processing capability is still very impressive; a single host peaks at about 750,000 packets per second.

3. CPU usage was not recorded for 4 and 8 processes because the NIC traffic was saturated, but the CPU was certainly not exhausted.

4. As the number of processes grows, throughput does not improve noticeably, but the packet-drop count (UDP_ERROR) falls significantly.

Model 3

A single host running one process with multiple asynchronous UDP service threads sharing a single fd, with no business logic; each packet carries one byte of data besides the UDP header.

Test results:

    Number of threads                      1          2
    Average processing speed (packets/s)   791676     509868
    NIC traffic (Mb/s)                     514.361    714.229
    CPU usage (%)                          100        150

Symptom:

1. As the number of threads increases, processing capability falls instead of rising.

Conclusions:

1. Multiple threads sharing one fd causes considerable lock contention.

2. When multiple threads share one fd, every arriving packet wakes all the threads, causing frequent context switches.


Overall conclusions:

1. UDP processing capability is very impressive; in everyday business scenarios, UDP rarely becomes the performance bottleneck.

2. As the number of processes grows, throughput does not improve noticeably, but the packet-drop count falls significantly.

3. In these tests, the bottleneck was the NIC, not the CPU.

4. Use a multi-process model in which each process listens on its own port, rather than multiple processes or threads listening on the same port; see the sketch below.
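
A minimal sketch of that model, assuming Linux: each forked worker owns its own socket bound to its own port (the base port and worker count are illustrative), so no two workers ever contend on one fd. The receive loop mirrors the tests above: no business logic, just recvfrom.

    import os
    import socket

    BASE_PORT = 9000     # illustrative
    NUM_WORKERS = 4      # illustrative

    def worker(port):
        """One process, one socket, one port: no shared-fd contention."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", port))
        while True:
            data, addr = sock.recvfrom(2048)  # receive-only, as in the tests

    for i in range(NUM_WORKERS):
        if os.fork() == 0:         # child process
            worker(BASE_PORT + i)  # never returns

    for _ in range(NUM_WORKERS):
        os.wait()                  # parent waits for the children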

Summary

UDP Packet Length

For transmission on the local machine (loopback), set the MTU as needed, but remember that the theoretical maximum UDP payload is 65507 bytes.

For intranet (LAN) transmission, it is best to keep the payload within 1472 bytes (1500 - 20 - 8).

For transmission across the Internet, it is recommended to keep the payload within 548 bytes (576 - 20 - 8).

UDP Packet Reception Capability

UDP processing capability is very impressive; in everyday business scenarios, UDP rarely becomes the performance bottleneck.

As the number of processes grows, throughput does not improve noticeably, but the packet-drop count falls significantly.

Use a multi-process model in which each process listens on its own port, rather than multiple processes or threads listening on the same port.
