Detailed analysis of CentOS 5.x kernel optimization (sysctl.conf)


This article analyzes the entries in /etc/sysctl.conf in detail. The content was collected and organized from various online sources for convenience.

System Optimization items:

kernel.sysrq = 0

# The SysRq key combination lets you query the current running state of the system directly from the kernel. It is set to 0 here to disable it for security reasons.

kernel.core_uses_pid = 1

# Controls whether the PID is appended to the file name of a core dump file

kernel.msgmnb = 65536

# Maximum size of each message queue, in bytes

kernel.msgmni = 16

# Maximum number of message queues in the whole system; this value can be increased as needed

kernel.msgmax = 65536

# Maximum size of a single message, in bytes

kernel.shmmax = 68719476736

# Maximum size of a single shared memory segment, in bytes

kernel.shmall = 4294967296

# Total amount of shared memory available system-wide, in pages (1 page = 4 KB)
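As a rough sanity check (assuming the usual 4 KB page size on x86), shmmax in bytes can be converted into pages and compared against shmall, which caps the combined size of all segments:

getconf PAGE_SIZE                    # normally prints 4096
echo $((68719476736 / 4096))         # shmmax expressed in pages = 16777216
# so kernel.shmall = 4294967296 pages is far larger than a single
# maximum-size segment and limits the total shared memory of all segments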

kernel.shmmni = 4096

# Maximum number of shared memory segments system-wide; the value here is 4096

kernel.sem = 250 32000 100 128

Or kernel.sem = 5010 641280 5010 128

# The four fields are SEMMSL (maximum number of semaphores per semaphore set), SEMMNS (maximum number of semaphores system-wide), SEMOPM (maximum number of operations per semop call), and SEMMNI (maximum number of semaphore sets system-wide)
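To see which of the four values is actually in effect on a running system, and the current semaphore limits as reported by util-linux, something like the following can be used:

cat /proc/sys/kernel/sem             # prints SEMMSL SEMMNS SEMOPM SEMMNI
ipcs -ls                             # summary of semaphore limits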

fs.aio-max-nr = 65536 (or 1048576, or 3145728)

# Maximum number of asynchronous I/O requests allowed system-wide; use a larger value when the system performs heavy, sustained I/O

fs.aio-max-size = 131072

# Maximum size of a single asynchronous I/O request

fs.file-max = 65536

# Maximum number of file handles the kernel will allocate system-wide
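Before raising fs.file-max, it is worth checking how many handles are actually in use; /proc/sys/fs/file-nr reports the allocated handles, the allocated-but-unused handles, and the current maximum:

cat /proc/sys/fs/file-nr             # allocated, unused, maximum
sysctl fs.file-max                   # the same limit via the sysctl interface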

net.core.wmem_default = 8388608

# Default send buffer size reserved for each socket, in bytes

net.core.wmem_max = 16777216

# Maximum send buffer size a socket may use, in bytes

net.core.rmem_default = 8388608

# Default receive buffer size reserved for each socket, in bytes

net.core.rmem_max = 16777216

# Maximum receive buffer size a socket may use, in bytes

net.core.somaxconn = 262144

# Upper limit on the backlog argument of listen(), i.e. the maximum number of pending connection requests on a listening socket
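These net.core values can be inspected and adjusted at run time before being made persistent in /etc/sysctl.conf; a minimal example:

sysctl net.core.rmem_max net.core.wmem_max      # read the current buffer limits
sysctl -w net.core.somaxconn=262144             # apply a new listen() backlog cap immediately (lost on reboot unless added to sysctl.conf)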

Network Optimization items:

net.ipv4.ip_forward = 0

# Disable IP packet forwarding (this host does not route packets between interfaces)

net.ipv4.tcp_syncookies = 1

# Enable SYN cookies to protect against SYN flood attacks when the SYN backlog overflows

net.ipv4.conf.default.rp_filter = 1

# Enable reverse-path (source address) validation

net.ipv4.conf.default.accept_source_route = 0

# Reject all source-routed IP packets

net.ipv4.route.gc_timeout = 100

# How long, in seconds, entries for failed routes are kept before the route cache garbage-collects them; the default is 300

net.ipv4.ip_local_port_range = 1024 65000

# Range of local ports used for outgoing connections; the default range (32768 to 61000) is fairly narrow and is widened here to 1024 to 65000
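A rough way to check the configured range and how many local ports outgoing connections are currently consuming:

cat /proc/sys/net/ipv4/ip_local_port_range      # current ephemeral port range
netstat -ant | grep -c ESTABLISHED              # rough count of connections holding local ports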

net.ipv4.tcp_max_tw_buckets = 6000

# Maximum number of TIME_WAIT sockets the system keeps at any one time. If this number is exceeded, TIME_WAIT sockets are destroyed immediately and a warning is printed. The default is 180000.
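Whether 6000 is large enough depends on how many TIME_WAIT sockets the server actually accumulates; a quick way to check:

netstat -ant | grep -c TIME_WAIT                        # sockets currently in TIME_WAIT
netstat -ant | awk 'NR>2 {print $6}' | sort | uniq -c   # breakdown of all TCP connection states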

net.ipv4.tcp_sack = 1

# On high-latency connections, SACK (Selective Acknowledgment) is important for using all of the available bandwidth, because high latency means many packets are in flight awaiting acknowledgment at any given moment. Linux keeps those packets in the retransmission queue until they are acknowledged or no longer needed; the queue is ordered by sequence number but has no index, so whenever an incoming SACK option has to be processed, the TCP stack must search the retransmission queue for the acknowledged segment, and the longer the queue, the more expensive the search. SACK therefore helps performance noticeably on connections with a high bandwidth-delay product, but it can also be disabled without sacrificing interoperability; set this value to 0 to turn SACK off in the TCP stack.

net.core.netdev_max_backlog = 262144

# Maximum number of packets allowed to queue on the input side when a network interface receives packets faster than the kernel can process them

net.ipv4.tcp_window_scaling = 1

# Enable the TCP window scaling option. Set this to 1 when the maximum TCP window size needs to exceed 65535 bytes (64 KB). Window scaling is a relatively new option, so for compatibility between old and new implementations the following conventions apply: 1. only the initial SYN of an actively opened connection may announce a window scaling factor; 2. the passive side may send its own scaling factor only if the SYN it received carried one, and otherwise ignores the option; 3. if both sides support the option, the negotiated scaling factors are used for all subsequent data transfer. Because the negotiation is backward compatible, a peer that does not support window scaling simply never negotiates it, so this option normally does not need to be disabled; turn it off only if a broken implementation forces you to.

net.ipv4.tcp_rmem = 4096 87380 4194304

# TCP receive buffer: minimum, default, and maximum size, in bytes

net.ipv4.tcp_wmem = 4096 16384 4194304

# TCP send buffer: minimum, default, and maximum size, in bytes

net.ipv4.tcp_max_orphans = 3276800

# Maximum number of TCP sockets in the system that are not attached to any user file handle. If this number is exceeded, orphaned connections are reset immediately and a warning is printed. This limit exists only to prevent simple DoS attacks; do not rely on it or lower it artificially, and consider raising it if memory is added.

net.ipv4.tcp_max_syn_backlog = 262144

# Length of the SYN queue, i.e. connections that have received a SYN but are not yet fully established. The default is 1024; a larger value lets the system hold more connections that are waiting to complete the handshake.

net.ipv4.tcp_timestamps = 0

# TCP timestamps protect against sequence-number wraparound: on a 1 Gbit/s link, previously used sequence numbers can reappear within a short time, and timestamps let the kernel accept such seemingly "abnormal" packets. They are disabled here.

net.ipv4.tcp_synack_retries = 1

# To accept a connection from a peer, the kernel must send a SYN+ACK in reply to the peer's SYN, i.e. the second step of the three-way handshake. This setting determines how many SYN+ACK packets are sent before the kernel gives up on the connection.

net.ipv4.tcp_syn_retries = 1

# Number of SYN packets the kernel sends for a new outgoing connection before giving up. The value must not be greater than 255; the default is 5.

net.ipv4.tcp_tw_recycle = 1

# Enable fast recycling of TIME_WAIT sockets

net.ipv4.tcp_tw_reuse = 1

# Enable reuse: allow TIME_WAIT sockets to be reused for new TCP connections

net.ipv4.tcp_mem = 94500000 915000000 927000000

# Three thresholds, in memory pages: below the first, TCP is under no memory pressure; between the first and second, TCP enters memory-pressure mode; above the third, TCP refuses to allocate new sockets.
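Because tcp_mem is measured in pages, the thresholds above translate into very large byte values (assuming 4 KB pages), so TCP memory pressure is effectively never reached on typical hardware:

echo $((94500000 * 4096))            # first threshold in bytes, about 360 GiB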

net.ipv4.tcp_fin_timeout = 1

# If the socket is closed by the local end, this parameter determines how long it stays in the FIN-WAIT-2 state (the default is 60 seconds; here it is shortened to 1 second).

net.ipv4.tcp_keepalive_time = 60

# How often TCP sends keepalive messages when keepalive is enabled. The default is 2 hours; it is shortened here to 60 seconds.

net.ipv4.tcp_keepalive_probes = 1

net.ipv4.tcp_keepalive_intvl = 2

# Together these mean that once a connection has been idle for 60 seconds, the kernel sends a keepalive probe; if the single probe allowed (tcp_keepalive_probes = 1) gets no reply within 2 seconds (tcp_keepalive_intvl = 2), the kernel gives up and considers the connection dead.
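With these settings an idle connection is declared dead roughly keepalive_time + probes * intvl = 60 + 1 * 2 = 62 seconds after the last activity. The three values can be checked together at run time:

sysctl net.ipv4.tcp_keepalive_time net.ipv4.tcp_keepalive_probes net.ipv4.tcp_keepalive_intvl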

To make the configuration take effect immediately, run the following command:

# /sbin/sysctl -p
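A single parameter can also be changed on the fly with sysctl -w (the change is lost on reboot unless it is also added to /etc/sysctl.conf), for example:

/sbin/sysctl -w net.ipv4.tcp_syncookies=1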

When optimizing performance, first set a clear optimization target, then find the bottleneck and adjust parameters until the target is met. Finding the bottleneck is usually the hard part: the scope has to be narrowed down through use cases and testing, and many parameters have to be adjusted and re-tested iteratively, which takes patience and persistence.
