First of all, my apologies to every reader passing by: this article is roughly written, and I ask for your understanding. The code was written at a commercial company and ships inside closed-source commercial products, so it cannot be open-sourced; again, my apologies.

This article presents our efforts to optimize the Linux TCP/IP stack for networking in an OpenStack environment, as deployed at our industrial customers. Our primary goal was to provide a high-quality, highly performant TCP/IP stack. To achieve this, we had to identify the performance bottlenecks of the Linux TCP/IP stack under OpenStack networking. We performed a great deal of Linux TCP/IP stack performance tuning related to the NIC, the CPU cache hit rate, spin locks, memory allocation, and more. However, while measuring we learned that conntrack NAT uses too much CPU, for instance in the ipt_do_table function. Linux conntrack is very capable, but it is too heavy, and many of its features go unused in our workload. So instead we implemented Fast NAT in the Linux TCP/IP stack. Here we present our efforts to reduce these performance costs.

First, instead of one lock over the global connection table, Fast NAT takes a spin lock per table entry (bucket), which greatly reduces the time CPUs spend waiting on locks, and user policies are stored in a hash table rather than a list. Both the connection table and the user policy table are per-NUMA-node, which keeps CPUs from wasting time on cross-node QPI traffic and adding latency. (A minimal sketch of this table appears at the end of the article.)

Second, Fast NAT does not record TCP state; it records only a tuple with the connection information needed for NAT forwarding. This removes many of the checks otherwise performed on every forwarded packet. An entry in the connection table can be set to expire on either an absolute or a relative expiration-time basis; a relative expiration time is extended by each forwarded packet.

The per-NUMA connection tables are not synchronized with one another, in order to reduce lock usage. This means a single TCP stream could end up split across the per-NUMA connection tables. If we use an Intel ixgbe NIC with Flow Director in ATR mode, however, the incoming and outgoing directions of a stream are steered to the same queue index among the multiple queues, and the limitation above disappears. (The second sketch at the end of the article shows the per-NUMA table selection.)

Another limitation of Fast NAT is that only TCP and UDP are supported. Although some limitations exist, our work has paid off, resulting in a 15-20 percent improvement in pps.
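To make the table design concrete, here is a minimal userspace sketch of a bucket-locked, tuple-keyed connection table. It is not the actual product code (which, as noted, cannot be published): all fastnat_* names are made up for illustration, and pthread spin locks stand in for the kernel's spinlock_t.

```c
/*
 * Minimal userspace sketch of the Fast NAT connection table described
 * above: a hash table with one spin lock per bucket, tuple-only
 * entries (no TCP state machine), and absolute/relative expiration.
 * All fastnat_* names are hypothetical, pthread spin locks stand in
 * for the kernel's spinlock_t, and error handling is omitted.
 * Compiles with: gcc -O2 -c fastnat_sketch.c
 */
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>
#include <time.h>

#define FASTNAT_BUCKETS 65536   /* power of two: cheap index masking */
#define FASTNAT_TIMEOUT 30      /* relative expiration, in seconds   */

/* Only the 5-tuple needed for NAT forwarding is recorded. */
struct fastnat_tuple {
    uint32_t saddr, daddr;
    uint16_t sport, dport;
    uint8_t  proto;             /* TCP or UDP only */
};

struct fastnat_entry {
    struct fastnat_entry *next;
    struct fastnat_tuple  tuple;
    uint32_t nat_addr;          /* rewritten address        */
    uint16_t nat_port;          /* rewritten port           */
    int      absolute;          /* absolute vs relative TTL */
    time_t   expires;
};

/* One lock per bucket: packets contend only when their flows hash
 * to the same chain. pthread_spin_init() at setup time is omitted. */
struct fastnat_bucket {
    pthread_spinlock_t    lock;
    struct fastnat_entry *head;
};

/* One whole table is allocated per NUMA node (see the next sketch). */
struct fastnat_table {
    struct fastnat_bucket buckets[FASTNAT_BUCKETS];
};

static uint32_t fastnat_hash(const struct fastnat_tuple *t)
{
    /* Toy mix; an in-kernel version would use jhash or similar. */
    uint32_t h = t->saddr ^ t->daddr ^ t->proto;
    h ^= ((uint32_t)t->sport << 16) | t->dport;
    return h & (FASTNAT_BUCKETS - 1);
}

/* Record a new flow: only the tuple, the NAT rewrite, and a TTL. */
static void fastnat_insert(struct fastnat_table *tbl,
                           const struct fastnat_tuple *t,
                           uint32_t nat_addr, uint16_t nat_port)
{
    struct fastnat_bucket *b = &tbl->buckets[fastnat_hash(t)];
    struct fastnat_entry *e = calloc(1, sizeof(*e));

    e->tuple    = *t;
    e->nat_addr = nat_addr;
    e->nat_port = nat_port;
    e->expires  = time(NULL) + FASTNAT_TIMEOUT;

    pthread_spin_lock(&b->lock);
    e->next = b->head;
    b->head = e;
    pthread_spin_unlock(&b->lock);
}

/* Fast path: take only the bucket lock; a hit on a relative-TTL
 * entry pushes its expiration forward once per forwarded packet. */
static struct fastnat_entry *
fastnat_lookup(struct fastnat_table *tbl, const struct fastnat_tuple *t)
{
    struct fastnat_bucket *b = &tbl->buckets[fastnat_hash(t)];
    struct fastnat_entry *e;

    pthread_spin_lock(&b->lock);
    for (e = b->head; e; e = e->next) {
        if (e->tuple.saddr == t->saddr && e->tuple.daddr == t->daddr &&
            e->tuple.sport == t->sport && e->tuple.dport == t->dport &&
            e->tuple.proto == t->proto) {
            if (!e->absolute)
                e->expires = time(NULL) + FASTNAT_TIMEOUT;
            break;
        }
    }
    pthread_spin_unlock(&b->lock);
    return e;                   /* NULL on miss */
}
```

The point of per-bucket locking is that two packets contend only when their flows hash into the same chain, rather than serializing on one table-wide lock; a similar layout can also back the user policy table, which is how policies move from a list scan to a hash lookup.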
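The second sketch shows how the forwarding path could pick the connection table of the local NUMA node, which is why no cross-table synchronization is needed. The fastnat_tables array is again hypothetical, and the userspace libnuma call numa_node_of_cpu() merely stands in for the kernel's own NUMA helpers.

```c
/*
 * Per-NUMA table selection on the forwarding path, assuming the
 * fastnat_table type from the previous sketch. The per-node tables
 * are never synchronized with each other, so a flow has to keep
 * landing on the same node; ixgbe Flow Director in ATR mode provides
 * that queue affinity in practice.
 * Compiles with: gcc -D_GNU_SOURCE -c fastnat_numa.c  (link: -lnuma)
 */
#include <numa.h>                   /* numa_node_of_cpu() */
#include <sched.h>                  /* sched_getcpu()     */

struct fastnat_table;               /* defined in the previous sketch */
extern struct fastnat_table *fastnat_tables[];  /* one table per node */

static struct fastnat_table *fastnat_local_table(void)
{
    /* Use the table of the node whose CPU is handling this packet:
     * no cross-node (QPI) memory traffic, no cross-table locking. */
    int node = numa_node_of_cpu(sched_getcpu());
    return fastnat_tables[node];
}
```

Without that queue affinity, the two directions of one stream could be handled on different nodes and so create entries in two unsynchronized tables, which is exactly the limitation described above.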
Linux kernel protocol stack NAT performance optimization: Fast NAT