The previous post, Erlang C1500K Long Connection Push Service - Performance, mentioned that 1.5 million connections used 23GB of memory, about 15KB per connection, with roughly half of that taken by the kernel.
A rough breakdown:
1. Erlang node
12GB; because of fragmentation in the internal memory pools, actual usage is 9GB, i.e. each process + port occupies about 6KB. Thanks to the hibernate strategy (sketched below, after the slabtop output) there is no bloat.
2. Linux kernel
11GB; comparing cat /proc/meminfo before and after the run, the change in MemTotal - AnonPages is essentially the kernel's share.
But the actual Slab is 5,388,732 KB, only about 5GB. Where did the other 6GB go?
slabtop:

Active / Total Objects (% used)    : 9821361 / 9912211 (99.1%)
Active / Total Slabs (% used)      : 967448 / 967448 (100%)
Active / Total Caches (% used)     : 91 / 174 (52.3%)
Active / Total Size (% used)       : 4664151.73K / 4676348.50K (99.7%)
Minimum / Average / Maximum Object : 0.02K / 0.47K / 4096.00K

   OBJS   ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
1500136  1500097  99%    1.69K 375034        4   3000272K TCP
1500330  1500268  99%    0.69K 300066        5   1200264K sock_inode_cache
1808780  1808377  99%    0.19K  90439       20    361756K dentry
1501840  1501054  99%    0.19K  75092       20    300368K filp
1529370  1501237  98%    0.12K  50979       30    203916K eventpoll_epi
1529474  1501153  98%    0.07K  28858       53    115432K eventpoll_pwq
 116030   113558  97%    0.78K  23206        5     92824K ext3_inode_cache
  30560    30538  99%    1.00K   7640        4     30560K ext4_inode_cache
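A quick check on where the Slab goes, assuming the 1.5M connections: each connection costs about 1.69K (TCP) + 0.69K (sock_inode_cache) + 0.19K (dentry) + 0.19K (filp) + 0.12K (eventpoll_epi) + 0.07K (eventpoll_pwq) ≈ 3K of slab objects, and the CACHE SIZE column for these caches sums to roughly 5GB, matching the Slab figure above. The remaining ~6GB is presumably TCP send/receive buffer memory, which is the subject of the next section.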
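Back to point 1: the hibernate strategy is what keeps the Erlang side lean. A minimal sketch, not the service's actual code (module name is illustrative), of a connection process that hibernates after every message so its heap is compacted while idle:

    -module(conn_sketch).
    -behaviour(gen_server).
    -export([start_link/1]).
    -export([init/1, handle_call/3, handle_cast/2, handle_info/2,
             terminate/2]).

    start_link(Sock) ->
        gen_server:start_link(?MODULE, Sock, []).

    init(Sock) ->
        {ok, Sock, hibernate}.

    handle_call(_Req, _From, Sock) ->
        {reply, ok, Sock, hibernate}.

    handle_cast(_Msg, Sock) ->
        {noreply, Sock, hibernate}.

    %% returning 'hibernate' garbage-collects the process and shrinks its
    %% heap to a minimum while it waits for the next packet
    handle_info({tcp, _S, _Data}, Sock) ->
        {noreply, Sock, hibernate};
    handle_info({tcp_closed, _S}, Sock) ->
        {stop, normal, Sock}.

    terminate(_Reason, _Sock) ->
        ok.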
TCP Memory Usage
References:
1. High-Performance Network Programming 7: Memory Usage of TCP Connections
2. Memory Management in the TCP Layer of the Kernel Protocol Stack
Example configuration:
net.ipv4.tcp_mem = 1523712 2031616 3047424    total TCP memory limit, in pages (low, pressure, high)
net.ipv4.tcp_rmem = 8192 87380 8738000        receive buffer for a single connection, in bytes (min, default, max)
net.ipv4.tcp_wmem = 4096 65536 6553600        send buffer for a single connection, in bytes (min, default, max)
net.ipv4.tcp_adv_win_scale = 2                i.e. 1/4 of the buffer is reserved for application data
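A quick sanity check, assuming 4KB pages: the tcp_mem high limit of 3047424 pages is 3047424 × 4KB ≈ 11.6GB, in line with the ~11GB of kernel memory seen above, and a single connection's receive buffer may grow up to 8738000 bytes ≈ 8.3MB before the max kicks in.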
Setting buffer sizes
- By default, per-connection buffers are allocated dynamically, and max only caps how large a buffer may grow.
- The gateway does not set SO_SNDBUF / SO_RCVBUF limits: explicit settings cause the buffer to be reserved up front rather than allocated dynamically, wasting memory (see the helper below this list).
- Disable application-layer packet buffers: many IO libraries buffer at the application layer, and turning this off avoids unnecessary copies and memory waste.
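As a small illustrative helper (module and function names are mine, not from the post), the buffers actually in effect can be inspected from Erlang, confirming that nothing reserved them explicitly:

    -module(buf_sketch).
    -export([print_buffers/1]).

    %% Print the kernel socket buffers (SO_SNDBUF / SO_RCVBUF) and the
    %% inet driver's own user-level buffer for a connected socket.
    print_buffers(Sock) ->
        {ok, Opts} = inet:getopts(Sock, [sndbuf, recbuf, buffer]),
        io:format("socket buffers: ~p~n", [Opts]).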
Controlling memory
The test environment is well-behaved, but real production environments are complex and varied, especially with mobile clients: network congestion and attacks can easily drive the service's actual memory consumption far higher.
The machines therefore need to be provisioned with more memory, and memory use must be kept well under control:
1. Receiving
In general, as long as packets are not too small, an application can saturate a gigabit NIC on a single CPU core, so for a gateway application the CPU should not be the bottleneck.
In other words, the application can receive as fast as the clients can send.
Use gen_tcp with {active, true}.
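A minimal sketch of the receiving side under this assumption (module and function names are illustrative): each accepted connection gets its own process, and {active, true} makes the inet driver deliver packets straight to the process mailbox:

    -module(recv_sketch).
    -export([start/1]).

    start(Port) ->
        {ok, LSock} = gen_tcp:listen(Port, [binary, {active, false},
                                            {reuseaddr, true}]),
        accept_loop(LSock).

    accept_loop(LSock) ->
        {ok, Sock} = gen_tcp:accept(LSock),
        Pid = spawn(fun loop/0),
        ok = gen_tcp:controlling_process(Sock, Pid),
        %% switch to active mode only after the handoff, so no message
        %% is delivered to the acceptor by mistake
        ok = inet:setopts(Sock, [{active, true}]),
        accept_loop(LSock).

    loop() ->
        receive
            {tcp, _Sock, _Data} ->
                %% handle the packet here; with {active, true} packets
                %% arrive as fast as the client sends them
                loop();
            {tcp_closed, _Sock} ->
                ok
        end.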
2. Send
Because the downlink environment is volatile (mobile device network conditions vary widely), congestion is unavoidable, so send-buffer memory must be controlled.
- Kernel
Configure the three parameters above sensibly for the amount of system memory, at a minimum ensuring that the application and the system still have memory available when the kernel buffers are full.
- Application
Set the gen_tcp options {recbuf, 0}, {sndbuf, 0}, {send_timeout, 5000}: with the buffers set to 0, packets cannot pile up in the application-layer buffer.
When send_timeout fires, the connection is congested; if, after the failure, many more packets arrive for it and application memory is running high, some of them can be discarded.
The same applies on platforms like Windows IOCP, where send never blocks and a congested peer can otherwise drive the service to OOM: a callback must track the length of data not yet written into the TCP buffer and apply the discard strategy once it stays over a threshold for a certain time.
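Putting the send side together, a minimal sketch using the options above (module and function names are illustrative; the discard policy itself is left to the caller):

    -module(send_sketch).
    -export([set_send_opts/1, push/2]).

    %% zero the driver buffers and arm the 5s send timeout, as described
    set_send_opts(Sock) ->
        inet:setopts(Sock, [{recbuf, 0}, {sndbuf, 0},
                            {send_timeout, 5000}]).

    push(Sock, Packet) ->
        case gen_tcp:send(Sock, Packet) of
            ok ->
                ok;
            {error, timeout} ->
                %% the connection is congested: the caller can drop
                %% queued packets for this client instead of buffering
                {error, congested};
            {error, Reason} ->
                {error, Reason}
        end.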