Performance limit of OpenVPN

Source: Internet
Author: User

The performance limit of OpenVPN is unsatisfactory because every data packet must pass through a series of "doors" in the protocol stack and in the OpenVPN application, and each door has its own width limit. Even ignoring the width of each door, merely passing in and out of so many doors is itself a source of performance loss. The typical doors are:

1. Encryption/decryption: every plaintext data block must pass through the encryption engine, and every ciphertext data block through the decryption engine.
2. OpenVPN socket communication: socket communication requires system calls, and the overhead of a system call is proportionally large when the amount of data it carries is small.
3. Packet fragmentation: any data exceeding the MTU of the network card must be fragmented. OpenVPN is subject to two MTU limits: the MTU of the virtual network card and the MTU of the physical network card used by the OpenVPN socket.

There is therefore hope of improving OpenVPN's performance along these lines. The general idea is to increase the throughput per pass through each door and to reduce the proportional cost of the door itself. In fact, it is enough to raise the MTU of the virtual network card: the encryption/decryption overhead and the socket system-call overhead both shrink, because those two factors are directly governed by the size of the data blocks coming off the virtual network card. The MTU limit of the physical network card is no longer a problem, because almost all gigabit network cards now support TSO. In short, the core principle is to let the CPU keep doing one thing for as long as possible.
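The two hardware-side points above, TSO on the physical card and the MTU of the virtual card, can be checked and adjusted with standard Linux tools. This is only a sketch: the interface names eth0 and tun0 are assumptions, and 50000 is the virtual-card MTU used in the tests below.

```shell
# Check whether the physical NIC offers TCP segmentation offload (TSO);
# "eth0" is a placeholder for the actual interface name.
ethtool -k eth0 | grep tcp-segmentation-offload

# Raise the MTU of the OpenVPN virtual network card ("tun0" assumed),
# so that larger blocks reach the crypto engine and each socket
# system call carries more data.
ip link set dev tun0 mtu 50000
```

Both commands require root and real interfaces, so treat them as a configuration sketch rather than something to paste blindly.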
It may be better to call this the "memory cache" effect. For example, writing the character "A" ten times in a row and then the character "B" ten times in a row is faster than writing "AB" ten times in a row, because not only does the data cache speed things up, but, more importantly, switching between tasks costs a certain number of computing cycles: the first approach switches only once, while writing "AB" ten times switches twenty times.

Test setup. OpenVPN configuration:
- IP addresses (physical / virtual): server 4.4.4.2 / 12.0.0.1; client 4.4.4.1 / 12.0.0.2
- MTU of the virtual network card: 50000
- Protocol: TCP (the Linux driver for the 82574L does not support UFO, so TCP is used to take advantage of TSO)
- Additional configuration: mssfix set to 0, so that OpenVPN hands blocks as large as the virtual network card's MTU directly to the encryption/decryption engine
- Network card: 82574L gigabit card, hosts directly connected

Baseline over the physical link:

    iperf -c 4.4.4.2
    ------------------------------------------------------------
    Client connecting to 4.4.4.2, TCP port 5001
    TCP window size: 27.9 KByte (default)
    ------------------------------------------------------------
    [ 4] local 4.4.4.1 port 16618 connected with 4.4.4.2 port 5001
    [ID] Interval        Transfer     Bandwidth
    [ 4]  0.0- 6.6 sec   784 MBytes   992 Mbits/sec

Test 1 (cipher: BF-CBC, auth: MD5):

    iperf -c 12.0.0.1 -F /lib/init/rw/big
    ------------------------------------------------------------
    Client connecting to 12.0.0.1, TCP port 5001
    TCP window size: 148 KByte (default)
    ------------------------------------------------------------
    [ 4] local 12.0.0.2 port 23921 connected with 12.0.0.1 port 5001
    [ID] Interval        Transfer     Bandwidth
    [ 4]  0.0-10.0 sec   677 MBytes   568 Mbits/sec

Test 2 (cipher: BF-CBC, auth: none):

    iperf -c 12.0.0.1 -F /lib/init/rw/big
    ------------------------------------------------------------
    Client connecting to 12.0.0.1, TCP port 5001
    TCP window size: 148 KByte (default)
    ------------------------------------------------------------
    [ 4] local 12.0.0.2 port 23923 connected with 12.0.0.1 port 5001
    [ID] Interval        Transfer     Bandwidth
    [ 4]  0.0- 9.6 sec   784 MBytes   687 Mbits/sec

Test 3 (cipher: none, auth: none):

    iperf -c 12.0.0.1
    ------------------------------------------------------------
    Client connecting to 12.0.0.1, TCP port 5001
    TCP window size: 148 KByte (default)
    ------------------------------------------------------------
    [ 4] local 12.0.0.2 port 23925 connected with 12.0.0.1 port 5001
    [ID] Interval        Transfer     Bandwidth
    [ 4]  0.0- 7.1 sec   784 MBytes   923 Mbits/sec

As the results show, with the BF-CBC cipher and no digest verification, performance in a gigabit environment reaches almost 70% of line rate; without encryption it essentially approaches the bare bandwidth, which is very impressive. It can be seen that OpenVPN's bottleneck is roughly 30% encryption/decryption, while data blocks made too small by the MTU account for nearly 70% of the loss. For now, the most we can do about encryption/decryption is to feed it as much data per pass as possible; it would be better still if encryption/decryption itself could be optimized, which requires something like an encryption accelerator card.

The tests above yield considerable performance numbers. However, when OpenVPN is used as a router, a virtual network card MTU of 50000 is actually useless, because almost no packets that large traverse the Internet; such an oversized MTU helps only when packets originating on the local machine pass through the virtual network card. The performance problem is then no longer OpenVPN's fault but a matter of physical link capacity. By raising the MTU of the physical NICs we can still reproduce the impressive figures above, but in an end-to-end sense the intermediate links are largely out of our control. Although this is not OpenVPN's fault, you need to know that OpenVPN's performance improves only after the MTU is raised, and expecting every link device along the path to cooperate in raising its MTU is unrealistic.
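Relating each measured figure to the 992 Mbit/s physical baseline makes the loss breakdown explicit. The following helper script is not part of the original test setup; it merely recomputes the percentages from the numbers reported above:

```shell
# Express each tunnel measurement as a share of the 992 Mbit/s baseline:
# 568 = BF-CBC + MD5, 687 = BF-CBC only, 923 = no cipher, no auth.
baseline=992
for bw in 568 687 923; do
  awk -v b="$baseline" -v x="$bw" \
    'BEGIN { printf "%4d Mbit/s = %2.0f%% of baseline\n", x, 100 * x / b }'
done
```

This prints roughly 57%, 69%, and 93%, matching the article's "almost 70% of line rate" for BF-CBC without digest verification.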
Therefore, the performance limit of OpenVPN demonstrated in this article has no practical significance yet. Until the MTU bottleneck of the physical link is solved, this article is just a discussion on paper; only once the physical link is no longer the problem does it make sense to tune the virtual network card's MTU. At present, the performance bottleneck of OpenVPN still lies in the efficiency of encryption/decryption: when you cannot send large data blocks in one pass, the only remedy is to improve the per-unit encryption/decryption efficiency.

Conclusions, for OpenVPN deployed as a routing node:
1. When the MTU of the OpenVPN virtual network card is very low, the bottleneck is the large number of system calls and the smallness of the encryption/decryption blocks.
2. When the MTU of the OpenVPN virtual network card is large, the bottleneck is the smallest MTU on the end-to-end link.
3. Since the MTU of the OpenVPN virtual network card can be neither too large nor too small, it is best not to set it manually at all.
4. If conditions permit, use an encryption card, or a CPU with hardware encryption support, to increase the per-unit encryption/decryption speed.
5. When the minimum MTU of the link is large, raise the MTU of the OpenVPN virtual network card accordingly. In short, the MTU of the virtual network card should stay consistent with the minimum MTU of the link.
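A practical way to discover the smallest MTU on the end-to-end path, before deciding what MTU the virtual network card should carry, is to probe with non-fragmentable pings. This is a sketch: the address 4.4.4.2 stands in for the remote OpenVPN endpoint, and 1472 assumes a 1500-byte MTU minus 20 bytes of IP header and 8 bytes of ICMP header.

```shell
# Send packets with the don't-fragment flag set (Linux ping's -M do).
# 1472 bytes of ICMP payload + 28 bytes of headers = a 1500-byte packet.
ping -c 3 -M do -s 1472 4.4.4.2

# If this fails with "message too long", lower -s until it succeeds;
# payload size + 28 is then the path MTU, and the virtual network
# card's MTU should be kept consistent with it.
```

This requires a live peer, so it is a field procedure rather than something a build script can verify.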
