OpenVPN Optimization: Jumbo Frames
Over the past few days, I have been busy with OpenVPN's performance problems. This is really an old problem; I have been patching and refining things for several years. The multi-threaded, multi-process implementation eventually solved the throughput problem in server mode, letting multiple CPU cores be fully utilized, but there was still no good solution for optimizing the client.
The results are good; the test numbers are satisfying. Here I record the ideas but not the technical details. This is deliberate: when I or anyone else reads this article later, there will be nothing to paste in directly, which forces you to retrace the reasoning rather than copy commands or code. Code or commands you wrote may already feel foreign a week later, and after half a year you will not understand them at all... but an idea is permanent. I still remember a short essay I wrote back in the fourth grade...
I haven't wanted to drink recently, because I don't have the time to spare. After drinking I go to bed early and get nothing done. In the evening, reading history books and writing blog posts beats drinking.
Ethernet carries a bit of a historical burden. Since its birth, Ethernet has maintained backward compatibility. Compatibility is one of the most important words of the computer era, comparable to IA32 and the Win32 API: it reassures investors and is convenient for consumers. For the technology itself, however, maintaining compatibility is almost a shackle.
To avoid IP fragmentation, you would always like the network adapter to send the smallest possible packets. To maximize packet-processing efficiency, you would always like it to send the largest possible packets. This is a conflict, and a trade-off has to be worked out. Remember that every packet in a packet-switched network carries metadata: because the protocol stack is layered, every packet must encapsulate protocol headers from several layers.
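A back-of-the-envelope calculation makes the header tax concrete. The numbers below assume TCP over Ethernet II (14-byte Ethernet header + 20-byte IP header + 20-byte TCP header, ignoring preamble and FCS):

```sh
# Per-packet overhead: 14 (Ethernet) + 20 (IP) + 20 (TCP) = 54 bytes.
# Payload per packet: MTU - 40 = 1460 bytes at MTU 1500, 8960 bytes at MTU 9000.
echo "scale=4; 1460 / (1460 + 54)" | bc   # ≈ .9643 payload efficiency at MTU 1500
echo "scale=4; 8960 / (8960 + 54)" | bc   # ≈ .9940 payload efficiency at MTU 9000
```

The efficiency gain looks modest, but the same data also needs roughly six times fewer packets at MTU 9000, which means six times fewer per-packet costs: interrupts, header parsing and, in OpenVPN's case, encryption calls.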
For compatibility purposes, 1500 is the default MTU of most Ethernet cards. However, manufacturers cannot turn a blind eye to the capabilities of high-end NICs: 1G/10G links, cat5e/cat6 twisted pair, optical fiber. So the configuration interface for the MTU is retained, and the user decides the MTU of the card, which in turn determines the size of the Ethernet frame. To distinguish them from standard Ethernet frames with a 1500-byte payload, frames larger than this are called jumbo frames. The name sounds intimidating; in fact there is nothing giant about them. You can set the MTU of your NIC above 1500, but how high can it go?
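On Linux the change is one command. A minimal sketch, assuming the interface is named eth0 and that the NIC, its driver and every switch on the path all support jumbo frames:

```sh
ip link set dev eth0 mtu 9000                 # request a 9000-byte MTU
ip link show dev eth0 | grep -o 'mtu [0-9]*'  # verify what the driver accepted
```

The upper bound is hardware-dependent; many NICs and switches top out somewhere around 9000-9216 bytes, and the driver simply rejects a value it cannot support.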
We have now covered many things that seem unrelated to OpenVPN optimization; OpenVPN as a medium is what ties them together. If you are not familiar with OpenVPN's data path, please refer to the structure diagram in my previous article. In that figure I treat OpenVPN as a medium: data sent from the tap NIC to the character device enters this special medium; after encryption it is sent to the peer, decrypted, and written to the character device there, where it is received by the peer's tap NIC and leaves the medium. The path is sketched below.
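For readers without that diagram at hand, the same path in one line (nothing here beyond what the text above already says):

```
tap NIC → character device → [OpenVPN: encrypt, send] → physical link
        → [peer OpenVPN: receive, decrypt] → character device → peer tap NIC
```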
In this optimization, I recommend setting the MTU of the tap NIC to a very large value. The premise is that packets that large actually arrive at the tap NIC. If the two OpenVPN endpoints are merely the edge nodes of two networks, you cannot expect the traffic reaching them to consist of jumbo frames: if the sending endpoint's MTU is only 1500, then no matter how the data travels to the tap NIC it will be at most 1500 bytes (and if a router along the way has an MTU of only 500, the data will be fragmented into even shorter pieces). So the recommendation is really that the data endpoints themselves send jumbo frames. A configuration sketch follows.
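On the OpenVPN side, the directives below are real OpenVPN options (tun-mtu, fragment), but the values are illustrative assumptions, and tun-mtu must match on both peers:

```
dev tap
# Let the tap device carry jumbo frames inside the tunnel (must match the peer)
tun-mtu 9000
# Optional, UDP mode only: have OpenVPN itself split oversized encrypted datagrams;
# without it, the kernel IP-fragments them at the physical NIC, as described later
fragment 1400
```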
On Windows, the following command changes the MTU of the TAP NIC to 9014 (store=persistent makes the setting survive a reboot):

```
netsh interface ipv4 set subinterface "TAP..." mtu=9014 store=persistent
```
netsh interface ipv4 show subinterfaces lets you check the result. Yet the TAP adapter's properties dialog still shows 1500! In any case, the card can now genuinely send a 9014-byte jumbo frame. I don't want to say any more about it: in short, Microsoft's interfaces are not easy to use, and they like to complicate simple problems. To change the MTU of the TAP adapter, you must not only modify its configuration, you must also "actually change it" with netsh!
IP fragmentation overhead and protocol header overhead compensate for each other; reducing one increases the other. One way out is to keep frames small and avoid fragmentation entirely; the other is to construct jumbo frames on the tap NIC: packets are first reassembled at the tap, then encrypted and encapsulated, then re-fragmented at the physical NIC. Which cost weighs heavier? The traffic numbers speak for themselves.
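A quick way to see which regime you are in, sketched for Linux ping and a hypothetical peer tap address of 10.8.0.2:

```sh
# 8972-byte payload + 8 (ICMP) + 20 (IP) = a 9000-byte packet.
ping -M do -s 8972 10.8.0.2    # DF set: succeeds only if the whole tap path takes 9000 bytes
ping -M dont -s 8972 10.8.0.2  # fragmentation allowed: succeeds, but sliced en route
```

If the first command fails with "message too long" while the second succeeds, the jumbo frame is being fragmented somewhere along the path.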