In Linux, how hard is it to write a program that receives 1 million UDP packets per second?

Source: Internet
Author: User
Tags: snmp, radar

1. UDP Concept

User Datagram Protocol (UDP) is a simple, datagram-oriented transport layer protocol, formally standardized in RFC 768.

In the TCP/IP model, UDP sits above the network layer and provides a simple interface to the application layer. UDP offers only unreliable data delivery: once an application hands data to the network layer, no copy is kept for retransmission (which is why UDP is sometimes called an unreliable datagram protocol). UDP adds only multiplexing (port numbers) and a data checksum on top of the IP datagram header.
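
To make the fire-and-forget semantics concrete, here is a minimal sketch of a UDP sender in Java; the host 127.0.0.1 and port 9000 are placeholders, and note that send() returns without any confirmation that the datagram was delivered.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class SimpleUdpSender {
    public static void main(String[] args) throws Exception {
        byte[] payload = "hello".getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            DatagramPacket packet = new DatagramPacket(
                    payload, payload.length,
                    InetAddress.getByName("127.0.0.1"), 9000); // placeholder host/port
            // send() hands the datagram to the network stack and returns;
            // UDP gives no delivery confirmation and performs no retransmission.
            socket.send(packet);
        }
    }
}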

2. Analysis of UDP Packet Loss

Because UDP provides only unreliable delivery, a datagram is effectively forgotten as soon as it is sent, and it may be lost anywhere along the network path. Even when a packet does reach the receiving host, that does not mean the application will see it: the packet still has to pass through several stages between the NIC and the application, and loss can occur at each stage.

The typical path a network packet takes before an application receives it is: NIC hardware buffer, then the operating system kernel buffer, then the application socket buffer.

First, the NIC receives and processes network packets. The NIC has its own hardware receive buffer (ring buffer). When incoming traffic exceeds what this hardware buffer can hold, newly arriving packets overwrite packets already in the buffer and are lost. Whether the NIC drops packets depends on the NIC's processing capability and the size of its hardware buffer.

Second, the NIC hands the packets it has processed to the operating system's buffers. Packet loss at the operating system level mainly depends on the following factors:

  • Operating System buffer size
  • System Performance
  • System Load
  • Network-related system load

Finally, the packet reaches the application's socket buffer. If the application does not drain packets from the socket buffer fast enough, the backlog eventually exceeds the socket buffer's capacity, the buffer overflows, and packets are dropped. Packet loss at the application stage mainly depends on the following factors:

  • Application buffer size
  • The application's ability to process packets, that is, how quickly it can drain packets from the socket buffer

3. Optimizing the UDP Packet Loss Problem at the System and Application Level

3.1 Diagnosis

NIC Buffer Overflow Diagnosis

In Linux, you can use the netstat -i --udp <NIC> command to check whether the NIC buffer is overflowing. The RX-DRP column shows the number of packets dropped by the NIC.

For example: netstat -i --udp eth1

[root@TENCENT64 /usr/local/games/udpserver]# netstat -i --udp eth1
Kernel Interface table
Iface       MTU Met    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
eth1       1500   0 1295218256      0      3      0  7598497      0      0      0 BMRU

The output shows that the NIC has dropped three packets (RX-DRP = 3).

Increasing the NIC buffer (ring) size is an effective way to reduce NIC buffer overflow.

Operating System Kernel Network Buffer Overflow Diagnosis

In Linux, you can run cat /proc/net/snmp | grep -w Udp to view the total number of UDP packets dropped when the operating system's UDP queue overflows.

[root@TENCENT64 /usr/local/games/udpserver]# cat /proc/net/snmp | grep -w Udp
Udp: InDatagrams NoPorts InErrors OutDatagrams RcvbufErrors SndbufErrors
Udp: 859428331 12609927 166563611 151449 166563611 0
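
If you prefer to watch these counters from code rather than the shell, here is a minimal Java sketch; it assumes the two-line "Udp:" header/value layout shown above, which is how Linux formats /proc/net/snmp.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class UdpSnmpReader {
    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Paths.get("/proc/net/snmp"));
        String[] header = null;
        for (String line : lines) {
            if (!line.startsWith("Udp:")) {
                continue;
            }
            String[] fields = line.split("\\s+");
            if (header == null) {
                header = fields;   // first "Udp:" line holds the column names
            } else {
                // second "Udp:" line holds the values; print name = value pairs
                for (int i = 1; i < fields.length && i < header.length; i++) {
                    System.out.println(header[i] + " = " + fields[i]);
                }
            }
        }
    }
}

Running this periodically and diffing InErrors and RcvbufErrors between runs tells you whether loss is happening at the kernel UDP queue or at the application socket buffer.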

There are several ways to effectively reduce operating system buffer overflow:

1) Increase the operating system kernel's network buffer sizes.

2) In the packet path described above, bypass the operating system kernel buffer entirely by using a user-space network stack or middleware that can bypass the kernel buffer (e.g. Solarflare's OpenOnload).

3) Disable unused network-related applications and services to minimize the operating system load.

4) Keep only the NICs that are actually needed in the system, and allocate NIC and system resources sensibly to maximize efficiency.

Application Socket Buffer Overflow Diagnosis

In Linux, you can also run cat /proc/net/snmp | grep -w Udp. The RcvbufErrors column shows the total number of UDP packets dropped because the application's socket buffer overflowed.

[root@TENCENT64 /usr/local/games/udpserver]# cat /proc/net/snmp | grep -w Udp
Udp: InDatagrams NoPorts InErrors OutDatagrams RcvbufErrors SndbufErrors
Udp: 859428331 12609927 166563611 151449 166563611 0

There are several ways to effectively reduce application socket buffer overflow:

1) Drain received packets from the socket buffer as quickly as possible (e.g. use asynchronous, non-blocking NIO to receive UDP packets, or raise the priority of the thread that receives them).

2) Increase the application's socket receive buffer size. Note that this is capped by the system-wide limit: if the application requests a socket buffer larger than the global maximum (net.core.rmem_max), the larger value will not take effect. See the Java sketch after this list.

3) Pin the application or its receiving thread to a dedicated CPU core.

4) Raise the application's I/O and scheduling priority (e.g. with the nice or ionice commands).

5) Disable all unused network-related applications and services to minimize the operating system load.
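
As a minimal sketch of item 2 (the port 9000 is a placeholder), the JDK lets you request a larger receive buffer and then read back the size the kernel actually granted, which is how you can detect that net.core.rmem_max is clamping the request:

import java.net.DatagramSocket;
import java.net.InetSocketAddress;

public class ReceiveBufferCheck {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket(null);  // create an unbound socket
        socket.setReceiveBufferSize(32 * 1024 * 1024);     // request a 32 MB receive buffer
        socket.bind(new InetSocketAddress(9000));          // placeholder port
        // If the printed value is far below 32 MB, the kernel clamped the request;
        // raise net.core.rmem_max and try again.
        System.out.println("Effective receive buffer: "
                + socket.getReceiveBufferSize() + " bytes");
        socket.close();
    }
}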

3.2 Tuning

NIC Buffer Tuning

In Linux, run the ethtool -g <NIC> command to query the NIC's ring buffer settings, as shown below:

[root@TENCENT64 /usr/local/games/udpserver]# ethtool -g eth1
Ring parameters for eth1:
Pre-set maximums:
RX:             4096
RX Mini:        0
RX Jumbo:       0
TX:             4096
Current hardware settings:
RX:             256
RX Mini:        0
RX Jumbo:       0
TX:             256

Run ethtool -G <NIC> rx NEW-BUFFER-SIZE to set the size of the RX ring (for example, ethtool -G eth1 rx 4096 raises the RX ring to the pre-set maximum shown above). The change takes effect immediately and requires neither a reboot nor a restart of the network stack; it acts directly on the NIC firmware parameters and does not touch the operating system kernel's network stack. A larger ring can absorb larger bursts of traffic without dropping packets, but because it enlarges the working set it may reduce NIC efficiency and hurt performance, so change NIC firmware parameters with caution.

Operating System Kernel Buffer Tuning

Run sysctl -A | grep net | grep 'mem\|backlog' | grep 'udp_mem\|rmem_max\|max_backlog' to view the current operating system buffer settings, as follows:

[root@TENCENT64 /usr/local/games]# sysctl -A | grep net | grep 'mem\|backlog' | grep 'udp_mem\|rmem_max\|max_backlog'
net.core.netdev_max_backlog = 1000
net.core.rmem_max = 212992
net.ipv4.udp_mem = 188169       250892  376338

Increase the maximum socket buffer size to 32 MB:

sysctl -w net.core.rmem_max=33554432

Increase the maximum amount of buffer space that UDP may allocate. The values are expressed in pages, each page being 4096 bytes (so the 262144 below corresponds to roughly 1 GB):

sysctl -w net.ipv4.udp_mem="262144 327680 393216"

Increase the size of the received packet queue:

sysctl -w net.core.netdev_max_backlog=2000

After making these changes, run sysctl -p to reload the settings; to persist them across reboots, add them to /etc/sysctl.conf.

Application Tuning

To reduce packet loss, the application must drain data from the socket buffer as quickly as possible. You can do this by appropriately increasing the socket receive buffer and by using asynchronous, non-blocking I/O to read from the buffer quickly; the example below builds an asynchronous UDP server with Java NIO.

import java.net.DatagramSocket;
import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;
import java.util.Set;

// Create the DatagramChannel
DatagramChannel dc = DatagramChannel.open();
dc.configureBlocking(false);
// Bind to the local port
SocketAddress address = new InetSocketAddress(port);
DatagramSocket ds = dc.socket();
ds.setReceiveBufferSize(1024 * 1024 * 32); // set the receive buffer size to 32 MB
ds.bind(address);
// Register with a Selector
Selector select = Selector.open();
dc.register(select, SelectionKey.OP_READ);
ByteBuffer buffer = ByteBuffer.allocateDirect(1024);
System.out.println("Listening on port " + port);
while (true) {
    int num = select.select();
    if (num == 0) {
        continue;
    }
    // Obtain the selected key set
    Set<SelectionKey> keys = select.selectedKeys();
    Iterator<SelectionKey> it = keys.iterator();
    while (it.hasNext()) {
        SelectionKey k = it.next();
        it.remove();
        if ((k.readyOps() & SelectionKey.OP_READ) == SelectionKey.OP_READ) {
            DatagramChannel cc = (DatagramChannel) k.channel();
            cc.configureBlocking(false); // non-blocking
            // the original snippet is truncated here; a typical next step is:
            buffer.clear();
            cc.receive(buffer);
        }
    }
}
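
One practical note on the pattern above: a single OP_READ wakeup can correspond to many queued datagrams, so a common refinement is to keep calling cc.receive(buffer) in a loop until it returns null (which on a non-blocking channel means no datagram is immediately available) before going back to select(); at high packet rates this noticeably reduces selector overhead.
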
4. Other Policies to Reduce Packet Loss

The UDP sender can add flow control to limit the number of packets sent per second, which helps avoid the overflow losses that occur when the receiver's buffer fills up because the sender is too fast. In our tests we used the RateLimiter class from Google's Guava toolkit for throttling. RateLimiter implements a token-bucket flow control algorithm: it adds tokens to the bucket at a fixed rate, and a thread must acquire a token before it may proceed. For example, if you want your application's QPS to stay below 1000, configure RateLimiter with a rate of 1000 and it will add 1000 tokens to the bucket every second.
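
As a sketch of how this throttling might look on the sending side (assuming Guava is on the classpath; the host 10.0.0.2, port 9000, and the 4w rate are placeholder values):

import com.google.common.util.concurrent.RateLimiter;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class ThrottledUdpSender {
    public static void main(String[] args) throws Exception {
        InetAddress target = InetAddress.getByName("10.0.0.2"); // placeholder host
        int port = 9000;                                        // placeholder port
        byte[] payload = new byte[512];                         // 512-byte packet, as in the tests

        // Token bucket: at most 40,000 permits (packets) per second.
        RateLimiter limiter = RateLimiter.create(40000.0);

        try (DatagramSocket socket = new DatagramSocket()) {
            while (true) {
                limiter.acquire(); // blocks until a token is available
                socket.send(new DatagramPacket(payload, payload.length, target, port));
            }
        }
    }
}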

With flow control in place, the sender emits a fixed number of packets per second, and the send rate fluctuates within each second. Without flow control, the sender pushes packets as fast as it can, hovering around its peak rate, and the high traffic is sustained; over time the sender's production rate will inevitably exceed the receiver's consumption rate, and packet loss is only a matter of time.

5. Real Test Data

Machine Type

Both the sender and the receiver are C1-type machines with the following configuration:

C1: Intel(R) Xeon(R) CPU X3440 @ 2.53 GHz (8 cores), 8 GB RAM, 1 x 500 GB 7200 RPM SATA disk, no RAID

The receiver's NIC information is as follows:

[root@TENCENT64 /usr/local/games]# ethtool eth1
Settings for eth1:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: Yes
        Speed: 1000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: on
        Supports Wake-on: pumbg
        Wake-on: g
        Current message level: 0x00000007 (7)
        Link detected: yes

[root@TENCENT64 /usr/local/games]# ethtool -g eth1
Ring parameters for eth1:
Pre-set maximums:
RX:             4096
RX Mini:        0
RX Jumbo:       0
TX:             4096
Current hardware settings:
RX:             256
RX Mini:        0
RX Jumbo:       0
TX:             256

Actual Tuning

The receiver's parameters after tuning are as follows:

[root@TENCENT64 /usr/local/games]# sysctl -A | grep net | grep 'mem\|backlog' | grep 'udp_mem\|rmem_max\|max_backlog'
net.core.rmem_max = 67108864
net.core.netdev_max_backlog = 20000
net.ipv4.udp_mem = 754848       1006464 1509696

The test scenarios differ in whether the sender throttles its sending rate.

Test Scenarios

Scenario 1: send more than 100 million packets, each 512 bytes and containing the current timestamp, at full speed with no throttling. The test results are as follows:

Test client:

Sending the 512-byte UDP packets took 4.625 (the unit is missing in the source), at about 21w QPS (roughly 210,000 packets per second).

Test Server:

The client sent 5 batches of 100 million packets each (512 bytes per packet). The server received about 90 million packets from the first batch; the received counts for the second through fifth batches were partly lost in the source, and in the fourth batch the server lost about 3w (30,000) packets.

The server's log records and the UDP counters recorded by the operating system were consistent with each other.

Scenario 2: the sender throttles its rate; each packet is 512 bytes and contains the current timestamp; sending lasts 2 hours. The test results are as follows:

1. UDP client with traffic control:

QPS: 4w (40,000 packets per second)

Packet size: 512 bytes, containing the send timestamp

Continuous sending Duration: 2 h

Cumulative number of packets: 287920000 (about 288 million)

Average CPU consumption: 16% (8 CPUs)

Average memory consumption: 0.3% (8 GB)

2. UDP server:

UDP counters recorded before the server received the packets:

[root@TENCENT64 ~]# cat /proc/net/snmp | grep -w Udp
Udp: InDatagrams NoPorts InErrors OutDatagrams RcvbufErrors SndbufErrors
Udp: 1156064488 753197150 918758960 1718431901 918758960 0

UDP counters recorded after the server had received all UDP packets:

[root@TENCENT64 ~]# cat /proc/net/snmp | grep -w Udp
Udp: InDatagrams NoPorts InErrors OutDatagrams RcvbufErrors SndbufErrors
Udp: 1443984568 753197150 918758960 1718432045 918758960 0

Analysis of changes before and after:

InDatagrams: 1443984568 - 1156064488 = 287920080

InErrors: 0 (UDP packets dropped at the operating system level, typically because the system's UDP queue is full)

RcvbufErrors: 0 (UDP packets dropped at the application level, typically because the application's socket buffer is full)

Server logs:

Total log files: 276, total size: 138 GB

Total number of log entries: 287920000 (consistent with the total number of packets sent by the UDP client; no packet loss)

Simple processing-rate calculation based on the log timestamps:

Time cost: (1445410477654 - 1445403277874) / 1000 = 7199.78 s

Processing speed: 287920000 / 7199.78 = 3.999w/s (about 39,990 packets per second)

 

CPU consumption: 46% average (8 CPUs); continuous asynchronous log writing and frequent I/O keep CPU usage high

Memory consumption: Average 4.7% (8 GB)

Scenario 3: the sender throttles its rate, but at a higher level than in scenario 2; each packet is 512 bytes and contains the current timestamp; sending lasts 2 hours; packet loss occurs. The test results are as follows:

1. UDP client with traffic control:

QPS: 6w (60,000 packets per second)

Packet size: 512 bytes, containing the send timestamp

Continuous sending Duration: 2 h

Cumulative number of packets: 432000000 (432 million)

Average CPU consumption: 70% (8 CPUs)

Average memory consumption: 0.3% (8 GB)

2. UDP server:

UDP counters recorded before the server received the packets:

[root@TENCENT64 ~]# cat /proc/net/snmp | grep -w Udp
Udp: InDatagrams NoPorts InErrors OutDatagrams RcvbufErrors SndbufErrors
Udp: 2235178200 753197150 918960131 1720242603 918960131 0

UDP counters recorded after the server had received all UDP packets:

[root@TENCENT64 ~]# cat /proc/net/snmp | grep -w Udp
Udp: InDatagrams NoPorts InErrors OutDatagrams RcvbufErrors SndbufErrors
Udp: 2667158153 753197150 918980378 1720242963 918980378 0

Analysis of changes before and after:

InDatagrams: 2667158153 - 2235178200 = 431979953

InErrors: 918980378 - 918960131 = 20247 (UDP packets dropped at the operating system level, typically because the system's UDP queue is full)

RcvbufErrors: 918980378 - 918960131 = 20247 (UDP packets dropped at the application level, typically because the application's socket buffer is full)

Server logs:

Total log files: 413, total size: 207 GB

Total number of log entries: 431979753 (consistent with the total number of UDP packets received; no loss occurred while writing the log files)

Packet loss summary:

Packets sent by the client: 432000000

Total UDP packets received by the server: 431979953

Log entries: 431979953

UDP packets dropped on receive: 20247

Packet loss rate: roughly 1/20000

Due to limited hardware resources on the test servers, the test lasted only two hours. The packet loss rate may increase as sending and receiving continue for longer.

Comparison charts: radar charts (omitted here) were generated for sending one million 512-byte packets without throttling and with throttling to 4w packets per second. The outer ring marks the millisecond at which packets were sent, and the radial axis is the number of packets sent in that millisecond. With throttling at 4w, the send rate fluctuates within each second and bursts of high traffic do not persist, which relieves pressure on the UDP receiver. Without throttling, the rate hovers near the peak and high traffic is sustained, putting heavy pressure on the receiver; if the receiver does not drain its buffer in time, or its consumption rate falls below the sender's production rate, packets are easily lost.

----------------------------------------------------------------------------------------------

Conclusion: without flow control, the sender quickly reaches a fairly stable peak rate and keeps sending at it. The receiver's NIC and operating system buffers are finite, so as sending continues they will fill up at some point, the sender's production rate will far exceed the receiver's consumption rate, and packet loss is inevitable. With flow control on the sender, the sending rate is kept in check, high traffic is not sustained, and the rate fluctuates within each second, which effectively relieves pressure on the receiver. With a reasonable sending rate and the system tuning described above, packet loss can essentially be avoided. However, because UDP is inherently unreliable, applications that need strong data integrity still have to extend the protocol themselves, for example by adding data integrity checks on top of UDP, to guarantee the integrity of business data.

[Note] Parts 1 and 2 of this article are translated from a foreign article. The original article is:

http://ref.onixs.biz/lost-multicast-packets-troubleshooting.html
