Restrictions on the maximum number of concurrent socket connections in CentOS

1. Modify the number of files a user process can open
On Linux, whether you are writing a client program or a server program, the maximum number of concurrent TCP connections during high-concurrency processing is limited by the number of files the system allows a single process to open (the system creates a socket handle for each TCP connection, and every socket handle is also a file handle). You can use the ulimit command to view the number of files the current user's processes are allowed to open:
[speng@as4 ~]$ ulimit -n
1024
This means each process of the current user may open at most 1024 files at the same time. From these 1024 files you must subtract standard input, standard output, standard error, the server's listening socket, and so on, leaving roughly 1024 - 10 = 1014 files available for client socket connections. That is, by default a Linux-based communication program allows at most about 1014 concurrent TCP connections.
For a communication program that must support more concurrent TCP connections, you have to modify both the soft limit and the hard limit that Linux places on the number of files the current user's processes may open simultaneously. The soft limit is the limit Linux enforces on the number of files the user currently has open; the hard limit is the maximum number of files the system can have open at once, calculated from the system's hardware resources (mainly system memory). The soft limit is always less than or equal to the hard limit.
The simplest way to modify these limits is with the ulimit command:
[speng@as4 ~]$ ulimit -n <file_num>
Here <file_num> is the maximum number of files a single process is allowed to open. If the system echoes something like "Operation not permitted", the modification failed because <file_num> exceeds the system's soft or hard limit on the number of files the user may open. You therefore need to change the soft and hard limits themselves.
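For illustration, the same limits can be inspected and adjusted from C via getrlimit()/setrlimit(); this is a minimal sketch, not code from the original article:
/*
 * Query and raise RLIMIT_NOFILE from C, the same limit that
 * "ulimit -n" manipulates. Raising the limit above the hard limit
 * fails with EPERM ("Operation not permitted") for ordinary users.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* Read the current soft and hard limits on open files. */
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("soft limit: %lu, hard limit: %lu\n",
           (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);

    /* Raise the soft limit up to the hard limit; an unprivileged
     * process may do this, but may not go beyond the hard limit. */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        fprintf(stderr, "setrlimit: %s\n", strerror(errno));
    return 0;
}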
Step 1: modify the /etc/security/limits.conf file and add the following lines to it:
speng soft nofile 10240
speng hard nofile 10240
Here speng is the user whose open-file limit is being modified; the '*' sign can be used instead to modify the limit for all users. "soft" or "hard" selects whether the soft limit or the hard limit is modified, and 10240 is the new limit value, that is, the maximum number of open files (note that the soft limit value must be less than or equal to the hard limit). Save the modified file.
Step 2: modify the /etc/pam.d/login file and add the following line to it:
session required /lib/security/pam_limits.so
This tells Linux that after a user logs in, the pam_limits.so module should be invoked to set the upper limits on the resources that user may consume (including the maximum number of files the user may open); the pam_limits.so module reads these limits from the /etc/security/limits.conf file. Save the modified file.
Step 3: check the Linux system-level maximum number of open files with the following command:
[speng@as4 ~]$ cat /proc/sys/fs/file-max
12158
This shows that this Linux system allows at most 12158 files to be open simultaneously (counting the open files of all users), which is a system-level hard limit; no user-level open-file limit should exceed this value. Normally this system-level hard limit is the optimal maximum computed from the system's hardware resources at boot, and it should not be changed without special need, unless you want to set a user-level limit that exceeds it. To modify this hard limit, edit the /etc/rc.local script and add the following line to it:
echo 22158 > /proc/sys/fs/file-max
This makes Linux force the system-level open-file count to 22158 after startup. Save the modified file.
After completing the steps above, restart the system. Normally this sets the maximum number of files that a single process of the given user may open simultaneously to the specified value. If ulimit -n still reports a limit lower than the value you set, the user login script /etc/profile may contain a ulimit -n command that caps the number of files the user may open. Because a later ulimit -n can only set a value less than or equal to the one set by a previous ulimit -n, you cannot raise the limit from the command line once that script has run. So if this problem occurs, open the /etc/profile script, check whether it uses ulimit -n to cap the maximum number of open files, and if so delete that line or change its value to something appropriate, save the file, then log out and log back in.
With the steps above, the open-file restrictions are lifted for communication programs that handle high-concurrency TCP connections.
2. Modify the network kernel's restrictions on TCP connections (see "Kernel parameter sysctl.conf optimization" below)
When writing client communication programs that support high-concurrency TCP connections on Linux, you may find that even though the limit on the number of simultaneously open files has been lifted, new TCP connections can no longer be established once the number of concurrent connections reaches a certain count. There are several possible reasons.
The first possible reason is that the Linux network kernel restricts the range of local port numbers. In this case, further analysis of why the TCP connection cannot be established shows that the connect() call fails with the system error message "Can't assign requested address". Meanwhile, monitoring the network with tcpdump shows no traffic at all for the SYN packet the client ought to send. Together these symptoms point to a restriction inside the local Linux kernel. The root cause is that the kernel's TCP/IP implementation limits the range of local port numbers available to client TCP connections (for example, restricting local ports to between 1024 and 32768). Since each TCP client connection occupies a unique local port number from this range, once existing client connections have used up all the local port numbers, no port can be allocated for a new client connection; connect() then fails and sets the error to "Can't assign requested address". You can see this control logic in the Linux kernel source, for example in the following function in the tcp_ipv4.c file:
static int tcp_v4_hash_connect(struct sock *sk)
Note the access control on the variable sysctl_local_port_range in this function. sysctl_local_port_range is initialized in the following function in the tcp.c file:
void __init tcp_init(void)
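To see how this failure surfaces in application code, here is a minimal C sketch (the server address 192.168.1.10:8080 is a hypothetical example, not from the original article): it opens client connections in a loop until the local port range is exhausted, at which point connect() fails with EADDRNOTAVAIL, the errno behind "Can't assign requested address".
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);                        /* hypothetical server port */
    inet_pton(AF_INET, "192.168.1.10", &addr.sin_addr); /* hypothetical server address */

    for (;;) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket"); /* EMFILE here means the per-process file limit was hit first */
            break;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            if (errno == EADDRNOTAVAIL)
                fprintf(stderr, "local port range exhausted: %s\n", strerror(errno));
            else
                perror("connect");
            close(fd);
            break;
        }
        /* Connections are deliberately left open so their local ports stay in use. */
    }
    return 0;
}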
The default local port range compiled into the kernel may be too small, so you need to relax this local port range restriction.
Step 1: modify the /etc/sysctl.conf file and add the following line to it:
net.ipv4.ip_local_port_range = 1024 65000
This sets the local port range to 1024 through 65000. Note that the minimum value of the local port range must be greater than or equal to 1024, and the maximum must be less than or equal to 65535. Save the modified file.
Step 2: run the sysctl command:
[speng@as4 ~]$ sysctl -p
If no error message is displayed, the new local port range was set successfully. With the port range above, a single process can establish up to about 60000 concurrent TCP client connections.
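As a quick check, the effective range can also be read programmatically; this is a minimal sketch, not from the original article:
#include <stdio.h>

int main(void)
{
    /* Read the configured local port range from /proc and report how
     * many local ports (roughly, concurrent outgoing client
     * connections) the range provides. */
    FILE *f = fopen("/proc/sys/net/ipv4/ip_local_port_range", "r");
    int low, high;
    if (f == NULL || fscanf(f, "%d %d", &low, &high) != 2) {
        fprintf(stderr, "could not read ip_local_port_range\n");
        return 1;
    }
    fclose(f);
    printf("local ports %d-%d: about %d usable\n", low, high, high - low + 1);
    return 0;
}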
The second possible reason is that the Linux network kernel's iptables firewall limits the maximum number of TCP connections it will track. In this case the program appears to hang in the connect() call, as if the machine had died, and monitoring the network with tcpdump again shows no traffic for the SYN packet the client ought to send. Because the iptables firewall tracks the state of every TCP connection in the kernel, the tracking information is kept in the conntrack database in kernel memory. This database has a limited size; when the system holds too many TCP connections, the database fills up, iptables cannot create tracking entries for new TCP connections, and the program blocks inside the connect() call. You must then modify the kernel's limit on the maximum number of tracked TCP connections, in a way similar to modifying the local port range:
Step 1: modify the /etc/sysctl.conf file and add the following line to it:
net.ipv4.ip_conntrack_max = 10240
This sets the maximum number of tracked TCP connections to 10240. Note that this value should be kept as small as possible to reduce kernel memory usage.
Step 2: run the sysctl command:
[speng@as4 ~]$ sysctl -p
If no error message is displayed, the new maximum number of tracked TCP connections was set successfully. With the parameter above, a single process can establish up to 10000 concurrent TCP client connections.
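To keep a client from hanging indefinitely in connect() when the kernel stalls new connections, a standard defensive technique (not from the original article) is a non-blocking connect() with a timeout; this minimal sketch assumes a hypothetical server at 192.168.1.10:8080:
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/select.h>
#include <sys/socket.h>

int connect_with_timeout(const char *ip, int port, int timeout_sec)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
        return fd;                       /* connected immediately */
    if (errno != EINPROGRESS) {
        close(fd);
        return -1;                       /* immediate failure */
    }

    /* Wait until the socket becomes writable or the timeout expires. */
    fd_set wfds;
    FD_ZERO(&wfds);
    FD_SET(fd, &wfds);
    struct timeval tv = { timeout_sec, 0 };
    if (select(fd + 1, NULL, &wfds, NULL, &tv) <= 0) {
        close(fd);
        return -1;                       /* timed out or select error */
    }

    /* Check whether the deferred connect actually succeeded. */
    int err = 0;
    socklen_t len = sizeof(err);
    getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);
    if (err != 0) {
        close(fd);
        errno = err;
        return -1;
    }
    return fd;
}

int main(void)
{
    int fd = connect_with_timeout("192.168.1.10", 8080, 5); /* hypothetical target */
    if (fd < 0)
        perror("connect_with_timeout");
    else
        close(fd);
    return 0;
}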
3. Use programming techniques that support high-concurrency network I/O
When writing highly concurrent TCP applications on Linux, you must use appropriate network I/O techniques and I/O event dispatching mechanisms.
The available I/O techniques are synchronous I/O, non-blocking synchronous I/O (also called reactive I/O), and asynchronous I/O. Under high TCP concurrency, synchronous I/O seriously blocks the program unless a thread is created for each TCP connection's I/O, but too many threads impose huge thread-scheduling overhead on the system. Synchronous I/O is therefore inadvisable under high TCP concurrency; consider non-blocking synchronous I/O or asynchronous I/O instead. Non-blocking synchronous I/O techniques include the select(), poll(), and epoll mechanisms; asynchronous I/O means using AIO.
From the perspective of the I/O event dispatching mechanism, select() is inappropriate because it supports only a limited number of concurrent connections (usually no more than 1024). If performance matters, poll() is also unsuitable: although it can support a large number of concurrent TCP connections, its "polling" mechanism becomes quite inefficient at high concurrency and may distribute I/O events unevenly, effectively "starving" some TCP connections. epoll and AIO do not have these problems (the AIO implementation in early Linux kernels created a kernel thread for each I/O request, which itself performed badly under high-concurrency TCP connections, but the AIO implementation has since been improved).
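For concreteness, here is a minimal sketch of an epoll-based TCP server loop (the port 8080, the buffer size, and the echo behavior are arbitrary illustration choices, not from the original article):
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/epoll.h>
#include <sys/socket.h>

#define MAX_EVENTS 64

int main(void)
{
    /* Create, bind, and listen on a TCP socket (port is arbitrary). */
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    if (bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }
    listen(listen_fd, SOMAXCONN);

    /* Register the listening socket with epoll. */
    int epfd = epoll_create(1024); /* size hint, ignored by modern kernels */
    struct epoll_event ev, events[MAX_EVENTS];
    ev.events = EPOLLIN;
    ev.data.fd = listen_fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    for (;;) {
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {
                /* New connection: accept it and watch it for input. */
                int conn = accept(listen_fd, NULL, NULL);
                ev.events = EPOLLIN;
                ev.data.fd = conn;
                epoll_ctl(epfd, EPOLL_CTL_ADD, conn, &ev);
            } else {
                /* Data or hangup on an existing connection. */
                char buf[4096];
                ssize_t r = read(fd, buf, sizeof(buf));
                if (r <= 0) {
                    epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
                    close(fd);
                } else {
                    write(fd, buf, r); /* echo back, as a placeholder action */
                }
            }
        }
    }
}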
To sum up, when developing a Linux application that must support high-concurrency TCP connections, use epoll or AIO wherever possible to handle I/O on the concurrent connections; this gives the program effective I/O support for high TCP concurrency.

Kernel parameter sysctl.conf optimization

/etc/sysctl.conf is the configuration file used to control Linux networking. It is very important for network-heavy programs such as web servers and cache servers. RHEL ships well-tuned defaults.

Recommended configuration (clear the original contents of /etc/sysctl.conf and copy in the following):
net.ipv4.ip_local_port_range = 1024 65536
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_fin_timeout = 10
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_window_scaling = 0
net.ipv4.tcp_sack = 0
net.core.netdev_max_backlog = 30000
net.ipv4.tcp_no_metrics_save = 1
net.core.somaxconn = 262144
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_orphans = 262144
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 2

This configuration is based on the recommended configuration for the varnish cache server together with recommendations for SunOne server system optimization.

The varnish tuning recommendations can be found at: http://varnish.projects.linpro.no/wiki/Performance

However, varnish's recommended configuration is flawed: the setting "net.ipv4.tcp_fin_timeout = 3" can cause frequent page failures. When users browsed the site with Internet Explorer 6, no web pages could be opened after a period of access, and things returned to normal only after restarting the browser. This may be workable on faster networks abroad, but given domestic network conditions we adjusted the value to "net.ipv4.tcp_fin_timeout = 10". At 10 seconds everything works normally (a conclusion from actual operation).

After modification, execute the following commands:
/sbin/sysctl -p /etc/sysctl.conf
/sbin/sysctl -w net.ipv4.route.flush=1

To be safe, you can also reboot the system.

Adjust the number of files:
After optimizing the Linux system, you must also increase the number of files the system is allowed to open in order to support high concurrency; the default of 1024 is far from enough.

Run the following commands:
echo "ulimit -HSn 65536" >> /etc/rc.local
echo "ulimit -HSn 65536" >> /root/.bash_profile
ulimit -HSn 65536

