I recently noticed I can never remember most of these kernel tuning parameters, so I'm writing them down here for easy reference later.
Edit /etc/sysctl.conf and add the following entries (with comments). The first block uses FreeBSD sysctl names (net.inet.*):
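Once the file is edited, the settings can be loaded without rebooting. A minimal sketch, assuming FreeBSD's sysctl(8); on recent releases `-f` loads name=value pairs from a file, and the rc script works everywhere:

```shell
# Reload everything in /etc/sysctl.conf (FreeBSD)
/etc/rc.d/sysctl reload
# Or, on reasonably recent FreeBSD, load the file directly:
sysctl -f /etc/sysctl.conf

# Inspect or change a single knob at runtime
sysctl net.inet.tcp.sendspace
sysctl net.inet.tcp.sendspace=65536
```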
# Maximum TCP send buffer space
net.inet.tcp.sendspace=65536
# Maximum TCP receive buffer space
net.inet.tcp.recvspace=65536
# Default UDP send buffer space
net.inet.udp.sendspace=65535
# Maximum UDP datagram size
net.inet.udp.maxdgram=65535
# Send buffer space for local (Unix-domain) stream sockets
net.local.stream.sendspace=65535
# RFC extensions that improve TCP performance
net.inet.tcp.rfc1323=1
net.inet.tcp.rfc1644=1
net.inet.tcp.rfc3042=1
net.inet.tcp.rfc3390=1
# Maximum socket buffer size
kern.ipc.maxsockbuf=2097152
# Maximum number of open files system-wide
kern.maxfiles=65536
# Maximum number of files a single process can open
kern.maxfilesperproc=32768
# When a peer opens a TCP connection, the system replies with an ACK packet.
# This option controls whether that ACK is delayed so it can ride along with a
# packet carrying data. On a fast network under light load this improves
# performance slightly, but over a poor connection the peer, not seeing the
# ACK, keeps retransmitting its request, which hurts performance.
net.inet.tcp.delayed_ack=0
# Ignore ICMP redirects
net.inet.icmp.drop_redirect=1
net.inet.icmp.log_redirect=1
net.inet.ip.redirect=0
net.inet6.ip6.redirect=0
# Guard against ICMP broadcast storms
net.inet.icmp.bmcastecho=0
net.inet.icmp.maskrepl=0
# Rate-limit the ICMP replies the system sends
net.inet.icmp.icmplim=100
# Security settings; drop_synfin also requires building the kernel with
# "options TCP_DROP_SYNFIN"
net.inet.icmp.icmplim_output=0
net.inet.tcp.drop_synfin=1
# Setting this to 1 helps the system clear TCP connections that were never
# closed properly. It costs a little extra bandwidth, but dead connections are
# eventually detected and removed. Dead TCP connections are a particular
# problem on systems serving dial-up users, who often drop the modem line
# without properly closing their active connections.
net.inet.tcp.always_keepalive=1
# If net.inet.ip.intr_queue_drops keeps increasing, raise
# net.inet.ip.intr_queue_maxlen; ideally the drop counter stays at 0
net.inet.ip.intr_queue_maxlen=1000
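To decide whether the queue length needs raising, the drop counter can simply be sampled over time; a small diagnostic sketch using the FreeBSD sysctl name from above:

```shell
# Sample the drop counter a few times; a steadily climbing value
# means intr_queue_maxlen is still too small
for i in 1 2 3; do
    sysctl net.inet.ip.intr_queue_drops
    sleep 5
done
```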
# Helps against DoS attacks; the default is 30000 (milliseconds)
net.inet.tcp.msl=7500
# Silently drop packets arriving at closed TCP ports instead of replying:
# 1 drops only SYN segments, 2 drops all segments
net.inet.tcp.blackhole=2
# Silently drop all UDP packets arriving at closed ports
net.inet.udp.blackhole=1
# Provide buffering for network connections (TCP inflight limiting)
net.inet.tcp.inflight.enable=1
# If enabled, the forwarding result for each destination is cached in the
# routing and ARP tables after the first successful forward. This saves
# routing lookups but requires a large amount of kernel memory for the
# routing table.
net.inet.ip.fastforwarding=0
# Requires building the kernel with "options DEVICE_POLLING"; useful under
# high load, not recommended under low load, and cannot be combined with SMP
#kern.polling.enable=1
# Maximum pending connections (listen queue); the default is 128, and values
# between 1024 and 4096 are recommended. The larger the value, the more
# memory it uses.
kern.ipc.somaxconn=32768
# Prevent users from seeing processes owned by other users
security.bsd.see_other_uids=0
# Kernel security level
kern.securelevel=0
# Log TCP connection attempts to ports with no listener
net.inet.tcp.log_in_vain=1
# Log UDP packets sent to ports with no listener
net.inet.udp.log_in_vain=1
# Verify UDP checksums to guard against malformed UDP packets
net.inet.udp.checksum=1
# Helps against DoS (SYN flood) attacks
net.inet.tcp.syncookies=1
# Back shared memory with physical (unpageable) memory; needs 256 MB or more
# of RAM
kern.ipc.shm_use_phys=1
# Maximum shared-memory segment size
kern.ipc.shmmax=67108864
# Maximum total amount of shared memory, in pages
kern.ipc.shmall=32768
# Do not write a core dump when a program crashes
kern.coredump=0
# Receive and datagram space for local (Unix-domain) sockets
net.local.stream.recvspace=65536
net.local.dgram.maxdgram=16384
net.local.dgram.recvspace=65536
# Default TCP segment size (MSS); use 1452 for ADSL (PPPoE)
net.inet.tcp.mssdflt=1460
# Minimum TCP segment size; use 1452 for ADSL
net.inet.tcp.minmss=1460
# Maximum raw-socket datagram size
net.inet.raw.maxdgram=65536
# Raw-socket receive buffer space
net.inet.raw.recvspace=65536
# Number of dynamic ipfw firewall rules; the default is 4096. Increasing it
# prevents viruses that open huge numbers of TCP connections from exhausting
# the table and blocking legitimate connections.
net.inet.ip.fw.dyn_max=65535
# Idle timeout for TCP connections tracked by the ipf firewall; the default
# is 8640000 (120 hours)
net.inet.ipf.fr_tcpidletimeout=864000
The parameters below are Linux settings, adjusted through /proc/sys or sysctl.

$/proc/sys/net/core/wmem_max
Maximum socket write buffer. Suggested value: 873200.
$/proc/sys/net/core/rmem_max
Maximum socket read buffer. Suggested value: 873200.
$/proc/sys/net/ipv4/tcp_wmem
TCP write buffer (min/default/max). Suggested values: 8192 436600 873200.
$/proc/sys/net/ipv4/tcp_rmem
TCP read buffer (min/default/max). Suggested values: 32768 436600 873200.
$/proc/sys/net/ipv4/tcp_mem
Also takes three values, meaning:
net.ipv4.tcp_mem[0]: below this value, TCP is under no memory pressure.
net.ipv4.tcp_mem[1]: above this value, TCP enters the memory-pressure phase.
net.ipv4.tcp_mem[2]: above this value, TCP refuses to allocate new sockets.
The units here are pages, not bytes. Suggested values: 786432 1048576 1572864.
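Because tcp_mem is counted in pages, the sizes are easy to misjudge; a quick sketch converting the suggested thresholds to bytes, assuming 4 KB pages:

```shell
# tcp_mem thresholds are in pages; multiply by the page size to get bytes
PAGE=4096
echo "no pressure below:   $(( 786432  * PAGE )) bytes"   # 3 GiB
echo "pressure above:      $(( 1048576 * PAGE )) bytes"   # 4 GiB
echo "allocations refused: $(( 1572864 * PAGE )) bytes"   # 6 GiB
```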
$/proc/sys/net/core/netdev_max_backlog
Maximum length of the device input queue for incoming packets. The default of 300 is too low for a heavily loaded server; raise it to 1000.
$/proc/sys/net/core/somaxconn
Upper limit on the backlog argument of listen(), i.e. the maximum number of pending connections. The default is 128; raising it on a busy server helps network performance. Adjustable to 256.
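The interaction with listen() is worth remembering: whatever backlog an application requests is silently capped at somaxconn. A sketch, assuming the standard procps sysctl on Linux (needs root to write):

```shell
# Raise the accept-backlog ceiling, then verify it took effect
sysctl -w net.core.somaxconn=256
cat /proc/sys/net/core/somaxconn
# An application calling listen(fd, 1024) now still gets a backlog of 256
```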
$/proc/sys/net/core/optmem_max
Maximum ancillary (option) buffer size per socket; the default is about 10 KB.
$/proc/sys/net/ipv4/tcp_max_syn_backlog
Maximum queue length for received SYN requests. The default is 1024; increasing it clearly benefits a heavily loaded server. Can be raised to 2048.
$/proc/sys/net/ipv4/tcp_retries2
Number of TCP retransmissions before giving up, i.e. the connection is abandoned after 15 failed retries by default. Can be reduced to 5 to release kernel resources sooner.
$/proc/sys/net/ipv4/tcp_keepalive_time
$/proc/sys/net/ipv4/tcp_keepalive_intvl
$/proc/sys/net/ipv4/tcp_keepalive_probes
These three parameters control TCP keepalive. The defaults are:
tcp_keepalive_time = 7200 seconds (2 hours)
tcp_keepalive_probes = 9
tcp_keepalive_intvl = 75 seconds
That is, once a TCP connection has been idle for 2 hours, the kernel starts sending probes; if all 9 probes (sent 75 seconds apart) fail, the kernel gives up and considers the connection dead. For a server these values are clearly too large. They can be adjusted to:
/proc/sys/net/ipv4/tcp_keepalive_time 1800
/proc/sys/net/ipv4/tcp_keepalive_intvl 30
/proc/sys/net/ipv4/tcp_keepalive_probes 3
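The worst-case time to declare a connection dead is tcp_keepalive_time + tcp_keepalive_intvl × tcp_keepalive_probes; a quick check of the default versus the tuned values:

```shell
# Dead-peer detection time = time + intvl * probes
default=$(( 7200 + 75 * 9 ))   # 7875 s, roughly 2 h 11 min
tuned=$(( 1800 + 30 * 3 ))     # 1890 s, about 31 min
echo "default: ${default}s  tuned: ${tuned}s"
```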
$/proc/sys/net/ipv4/ip_local_port_range
Range of local ports available for outgoing connections. The default, 32768 61000, is already fairly large.
net.ipv4.tcp_syncookies = 1
Enables SYN cookies: when the SYN wait queue overflows, cookies are used to handle the connections, defending against small-scale SYN flood attacks. The default is 0 (disabled).
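To make these net.ipv4.* settings survive a reboot on Linux, they go into /etc/sysctl.conf and are reloaded with `sysctl -p`; a sketch assuming the standard procps sysctl:

```shell
# Persist the setting and reload (Linux, as root)
echo "net.ipv4.tcp_syncookies = 1" >> /etc/sysctl.conf
sysctl -p   # re-reads /etc/sysctl.conf and applies every line
```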
net.ipv4.tcp_tw_reuse = 1
Enables reuse: sockets in TIME-WAIT may be reused for new TCP connections. The default is 0 (disabled).
net.ipv4.tcp_tw_recycle = 1
Enables fast recycling of TIME-WAIT sockets. The default is 0 (disabled). Note that this option is known to break clients behind NAT and was removed entirely in Linux 4.12.
net.ipv4.tcp_fin_timeout = 30
If the socket was closed by the local end, this determines how long it stays in the FIN-WAIT-2 state.
net.ipv4.tcp_keepalive_time = 1200
How often TCP sends keepalive probes when keepalive is enabled. The default is 2 hours; change it to 20 minutes.
net.ipv4.ip_local_port_range = 1024 65000
Range of ports used for outgoing connections. The default, 32768 to 61000, is small; change it to 1024 to 65000.
net.ipv4.tcp_max_syn_backlog = 8192
Length of the SYN queue. The default is 1024; raising it to 8192 accommodates more connections waiting to complete the handshake.
net.ipv4.tcp_max_tw_buckets = 5000
Maximum number of TIME_WAIT sockets the system keeps at once; beyond this, TIME_WAIT sockets are cleared immediately and a warning is printed. The default is 180000; change it to 5000. For servers such as Apache and Nginx, the parameters a few lines above already do a good job of reducing TIME_WAIT sockets, but for Squid they have little effect. This parameter caps the number of TIME_WAIT sockets and keeps a Squid server from being dragged down by a flood of them.
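Before tuning, it is worth checking whether TIME_WAIT sockets are actually piling up; a diagnostic sketch assuming iproute2's ss is available:

```shell
# Count sockets currently in TIME_WAIT (tail strips the header line)
ss -tan state time-wait | tail -n +2 | wc -l
```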