I had some time to visit the forum and found the discussion about reliable UDP transmission heating up. Some people think UDP is efficient, some think UDP's retransmission mechanism is easy to control, some friends ran limit tests, and of course some were just selling their own products. So here are a few of my personal opinions.
Reliable UDP transmission is actually very, very simple. My first contact with it was around 2005, while developing Ftpanywhere: for route mapping and gateway NAT handling, UDP has a natural advantage, so I began writing my own reliable UDP transport protocol. UDT already existed at the time, and I did download and read its source, but I soon gave up: its use of timers, plus all the cross-platform handling, made the code look chaotic to me, with no complete logic diagram. In principle, though, it is the same as basic TCP; there is nothing special about it. Later I wrote my first reliable UDP transmission class, QTUDP. It was the simplest kind: like TCP, it implements transmission between two endpoints, not the multi-point center-to-peer transmission I write now. My coding habits back then were not what they are today, with a large number of data definitions such as DWORD, including in the packet definitions. From today's point of view it was a failure, but at least it was practical, and it was integrated into the Ftpanywhere software. Through debugging and real operation I slowly gathered statistics and discovered which parts of UDP packet transmission affect performance the most, along with other details. Later the UT 1.x multipoint transmission protocol was developed on the basis of QTUDP, and the current UT 2.x and Phoenix 2.x represent several more leaps.
Please believe that reliable UDP transmission is by no means synonymous with efficient reliable transmission. The factor that most affects transmission efficiency is that each call to sendto can only deliver one packet of MTU length, and frequent system calls greatly limit peak performance. You may say that a UDP datagram can by default reach 64 KB, so you can deliver a large packet. Yes, you can, but because of MTU limits on devices along the network path, the datagram will be fragmented into small packets, and if you define a packet larger than the MTU, the loss of any one fragment forces the entire packet to be retransmitted. That overhead is very large, especially on the Internet. If instead you keep each packet within the MTU limit, losing one packet means retransmitting only that one, at much smaller cost.

The custom checksum is another limitation of reliable UDP. To prevent forged UDP packets, we need to add a custom checksum to our own reliable UDP packets, and the choice of checksum directly affects performance. The fastest is to apply CRC32 directly: current CPU instruction sets optimize this calculation, so it is just about the fastest option. The price is that cracking or forging your UDP packets also becomes very easy, because the algorithm is transparent. The TCP forgery vulnerabilities exposed in the past were the same story: the packet format is public, the only secret is the sequence number, and some systems' initial sequence numbers turned out to follow a pattern, which became a security issue.
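To make the packet-size point concrete, here is a minimal sketch, not the author's QTUDP code: the header layout, the MTU_PAYLOAD constant, and the bitwise CRC32 are illustrative assumptions. It splits a large buffer into MTU-sized packets, each carrying a sequence number and a CRC32 checksum:

```c
#include <stdint.h>
#include <stddef.h>

#define MTU_PAYLOAD 1380  /* payload per datagram; matches the MTU figure used later */

/* Bitwise CRC-32 (IEEE polynomial, reflected). Fine for a sketch; real code
   would use a table-driven or instruction-accelerated version. */
static uint32_t crc32_calc(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int)(crc & 1));
    }
    return ~crc;
}

/* Hypothetical per-packet header: sequence number plus checksum. */
struct pkt_hdr {
    uint32_t seq;
    uint32_t crc;
};

/* Split a large buffer into MTU-sized packets. Each packet would be handed
   to sendto() individually; here we just build headers and count packets. */
static int split_into_packets(const uint8_t *data, size_t len, uint32_t first_seq)
{
    int count = 0;
    for (size_t off = 0; off < len; off += MTU_PAYLOAD) {
        size_t chunk = len - off < MTU_PAYLOAD ? len - off : MTU_PAYLOAD;
        struct pkt_hdr h;
        h.seq = first_seq + (uint32_t)count;
        h.crc = crc32_calc(data + off, chunk);
        (void)h;                 /* a real stack would sendto() the header + chunk here */
        count++;
    }
    return count;
}
```

Every packet built here costs one sendto call, which is exactly where the per-call system overhead described above comes from.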
Finally, sending, receiving, and confirming are all done in the application layer (whether you use epoll, select, or overlapped I/O, the final execution is in the application layer). Because this send-and-confirm process does not run in kernel mode like the TCP/IP protocol stack, there can be extra delay. However, with today's CPU core counts and clock frequencies, the impact in industrial applications (the Internet and so on) is almost negligible; it only shows up in local limit tests. TCP is more efficient than reliable UDP because its send function can copy dozens of KB per call; you can even enlarge the buffer to hundreds of KB or a few MB, though the default of dozens of KB is enough. Where UDP's sendto must be called dozens of times, TCP needs only one call. Secondly, TCP's packet processing is done in kernel mode, and so is acknowledgment, which alone puts it well ahead of application-layer reliable UDP, to say nothing of hardware-layer optimizations. So you might ask: why do the books say UDP performs well? In fact, that refers to unreliable UDP transmission, and to intranet packets. For example:
char buffs[32 * 1024];
memset(buffs, 0, sizeof(buffs));
sendto(s, buffs, sizeof(buffs), 0, ...);  /* fire and forget: nobody cares whether a packet is dropped */
Sending like this, the throughput will exceed TCP, but the gap is mainly down to header size.
The real purpose of using reliable UDP is its flexibility in protocol design. Take remote video streaming (jrtlib, for example): frames are usually divided into key frames and ordinary frames, and losing a key frame makes the ordinary frames that depend on it useless. This is where you need reliable UDP plus unreliable transmission together: first deliver the key frame in reliable mode, and once the key frame is done, deliver the remaining frames for that interval in unreliable mode, repeating the process every specified period. TCP does not work well for this, because network bandwidth is limited and varies greatly; transmitting all key frames and ordinary frames reliably can lead to stuttering, network congestion, and so on.
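A minimal sketch of this mixed-mode idea; send_reliable and send_unreliable are hypothetical stand-ins for the two transport paths, not a real API, and here they only count calls:

```c
/* Hypothetical transport API: in a real stack these would queue the frame on
   the reliable (ack + retransmit) or unreliable (fire-and-forget) path. */
enum frame_type { FRAME_KEY, FRAME_NORMAL };

static int reliable_sent, unreliable_sent;   /* counters for the sketch */

static void send_reliable(const void *buf, int len)   { (void)buf; (void)len; reliable_sent++; }
static void send_unreliable(const void *buf, int len) { (void)buf; (void)len; unreliable_sent++; }

/* Key frames go over the reliable path; ordinary frames are fire-and-forget,
   so a lost one costs a little quality instead of stalling the stream. */
static void send_frame(enum frame_type t, const void *buf, int len)
{
    if (t == FRAME_KEY)
        send_reliable(buf, len);
    else
        send_unreliable(buf, len);
}
```

The dispatch is trivial; the value is in the policy it expresses, which TCP alone cannot offer.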
As for UDP's ability to penetrate NAT, the legendary hole punching: this is a pseudo-proposition. Some people still claim their tests punch through all four NAT models; do you really think that is possible? Others use port guessing to punch holes; seriously, is that usable in industry? UDP penetration in fact depends entirely on the router (gateway) configuration. Domestic home or SOHO routers are tuned for peer-to-peer, but try punching a hole through professional enterprise equipment such as Cisco: to protect users, administrators generally configure it for high security and non-transparency. Typically, even packets from the same internal computer and the same UDP port, when sent to different destination IPs, will be randomly reassigned a port by the router, so hole punching cannot succeed at all. So if you are using UDP for NAT traversal, I suggest you give it up and use TCP+UPnP directly.
About the performance of reliable UDP: some say that epoll, overlapped I/O, and the like are more efficient than the select model. From my years of development experience and test results I can tell you: with the same checksum mode, delivery mode, and data-processing logic, on a multi-core CPU platform the local efficiency gap is no more than 1%, and on the Internet the gap is essentially zero. This is the era of CPU surplus. With Linux and Windows scheduling, if there is no kernel-mode operation pending, a waiting thread will simply be picked. Do not assume that switching between kernel mode and user mode is cheap; the overhead is just as large, and frequent scheduling also hurts performance, even though it is not counted against your own code. Of course, if you also need to run a web service such as Apache on the same Linux system, the gap will become obvious, because too many processes and threads are competing to be scheduled first.
About buffer sizes in reliable UDP: to be honest, I was surprised the first time I saw someone open buffers of hundreds of KB for the simplest point-to-point reliable UDP. Buffering is a good thing; usually, the larger the buffer, the higher the efficiency, because more packets are delivered per I/O operation. But this is not advisable on the Internet, and I do not know whether such implementations have been through strict packet-loss and load testing. On a home router in low-bandwidth upload mode, if many people or multiple network programs share the bandwidth, the delay varies very frequently and the rate of unnecessary retransmissions becomes very high. The simplest test: find a copper-clad aluminum network cable crimped with a non-standard straight 1-to-1 pinout, plug one end into a computer and the other into the router, then test against a remote host on the Internet; the packet retransmission rate will probably be frighteningly high.
Is reliable UDP better off without TCP's slow start? I think not necessarily. Run the test on the cable I described above: with a fast start, the bandwidth wasted in a lossy network environment is outrageous. I have adjusted my own reliable UDP start-up method many times, and the more I do, the more reasonable TCP's slow start feels. Recovery is slow, yes, but the bandwidth lost to erroneous retransmissions is very small. Of course, if you care about performance, you can tune the ramp-up speed appropriately.
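For reference, the slow-start behavior praised above can be sketched like this. The constants are illustrative, TCP's actual rules are in RFC 5681, and this is not the author's own start-up code:

```c
/* TCP-style slow start for a UDP transport: double the congestion window
   each round trip until a threshold, then grow linearly; on loss, halve
   the threshold and restart from a small window. */
static unsigned cwnd = 1, ssthresh = 64;   /* window sizes in packets */

static void on_rtt_ok(void)    /* a round trip completed without loss */
{
    if (cwnd < ssthresh)
        cwnd *= 2;          /* exponential (slow start) phase */
    else
        cwnd += 1;          /* congestion avoidance phase */
}

static void on_loss(void)      /* a retransmission timeout fired */
{
    ssthresh = cwnd / 2 > 1 ? cwnd / 2 : 1;
    cwnd = 1;               /* restart slowly, wasting little bandwidth */
}
```

A "fast start" would simply initialize cwnd near the bandwidth estimate; on a lossy link that front-loads exactly the wasted retransmissions described above.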
About the file-transfer efficiency of reliable UDP: this is a pseudo-proposition, because the limit is the hardware itself. First, normal reliable UDP is certainly faster than the average hard disk; even using memory mapping to accelerate file access, the disk cannot keep up with UDP itself. Secondly, the file transfer protocol affects efficiency. What is the fastest mode? Sending the file directly from beginning to end as a stream (an FTP data connection, or HTTP) is the fastest; but generally, to guard against data errors, the file content is chunked and checksummed, which costs some efficiency. Is a bus faster running nonstop from the start to the terminus, or stopping at every station? Obviously the first, but it is divorced from practical reality.
Is there a relationship between reliable UDP and the CPU? There must be, but in general applications it has basically no impact; it only matters in pursuit of limits, for example testing on a 2 Gbps+ fiber network. And if your server really is attached to 2 Gbps+ fiber, what hardware configuration does it have? In general applications, CPU and memory overhead, along with bad-packet retransmission, come first. If a reliable UDP implementation required at least an Intel PIII 1 GHz just to fill a 100 Mbps network, then applications built on it would demand more hardware than today's PC games, which would be ridiculous. For ordinary point-to-point reliable UDP (many-to-many logic is more complex than that), here are some old test figures for QTUDP. Environment: Pentium M 1.7 GHz (single core), ATI Xpress motherboard, 768 MB DDR2-533 (single channel); simulated local transmission measured about 78 MB/s with MTU = 1380 and a buffer of * x MTU, at roughly 75% CPU; on a current dual core you could double that estimate. You may find this number low, but remember the hardware environment of the time; it was much better than UDT. For the most basic point-to-point reliable UDP with standard CRC32 checksums, on current hardware (quad-core i7, DDR3-1600) plus our special packet-processing techniques, the local reliable-transmission limit with 1400-byte UDP packets is about 270 MB/s. To push it further, you would probably have to do what MS did with IIS and write a network driver; but x86 processor and memory speeds keep improving, and perhaps tomorrow the clock frequency will simply double...
In fact, an important reason I turned to reliable UDP is that IPv6 cancelled the IP-layer data checksum, leaving the TCP layer to carry the entire data-validation task. Whether TCP data under IPv6 is still as reliable as under IPv4 deserves a question mark. The IP layer's own error rate is very low, but the direct risk of removing its checksum has never been practically assessed; the theoretical results come from IPv6 experimental networks, and perhaps one day a technique for forging IPv6 TCP will break through. With reliable UDP, on the other hand, UDP itself carries a checksum, plus our own designed checksum and sequence numbers; this reliability completely exceeds IPv4 TCP, let alone TCP under IPv6.
Reliable UDP is really very simple. As long as you have a TCP/IP foundation, the simplest point-to-point reliable UDP is easy to write: nothing more than packetize, checksum, send, confirm, repacketize, with an interface that can imitate a few of TCP's functions. One of the difficulties is deciding whether a retransmission is needed; this should be deduced for the current packet from the confirmation times of previous packets. If you do not want that complexity, you can use empirical numbers (not suitable for cross-ISP links such as Netcom to Telecom): a fixed timeout in milliseconds for the first retransmission, 1600 ms for the second, and 2000 ms each time after that. Unscientific, yes, but with these delay numbers you can keep enough transmission efficiency (even in the presence of low-probability drops) without generating a flood of retransmissions. Still, the best way is to derive the timeout from the delay of previous packets.
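Deriving the timeout from previous packets' delays is usually done with a smoothed-RTT estimator. Here is a sketch of the classic Jacobson/Karels scheme that TCP uses (RFC 6298), with illustrative integer-millisecond constants, not the author's exact formula:

```c
/* Smoothed-RTT retransmission timer: the timeout for the next packet is
   derived from the measured acknowledgment delays of previous packets. */
static int srtt_ms;      /* smoothed round-trip time, 0 = no sample yet */
static int rttvar_ms;    /* round-trip time variance                    */

/* Feed one measured RTT sample (ms); returns the retransmission timeout. */
static int update_rto(int sample_ms)
{
    if (srtt_ms == 0) {                          /* first measurement */
        srtt_ms = sample_ms;
        rttvar_ms = sample_ms / 2;
    } else {
        int err = sample_ms - srtt_ms;
        if (err < 0) err = -err;
        rttvar_ms += (err - rttvar_ms) / 4;      /* beta  = 1/4 */
        srtt_ms  += (sample_ms - srtt_ms) / 8;   /* alpha = 1/8 */
    }
    int rto = srtt_ms + 4 * rttvar_ms;
    return rto < 200 ? 200 : rto;                /* clamp to a 200 ms floor */
}
```

On a stable link the variance term shrinks and the timeout converges toward the real RTT, which is why this beats any fixed 1600/2000 ms schedule, especially across ISPs.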
In short, reliable UDP transmission is not so complex or mysterious; it is very simple, and it is not as efficient as you might imagine. That may disappoint you, but it is very lovable: you can shape your own packets at will and implement all kinds of extensions, which depends entirely on your ingenuity.
Those things about reliable UDP transmission