Configuring Linux to Support High-Concurrency TCP Connections (Maximum Number of Socket Connections) and Optimizing Kernel Parameters
1. Modify the limit on the number of files a user process may open
When writing a client program or a server program on the Linux platform that handles high-concurrency TCP connections, the maximum number of concurrent connections is limited by the number of files the system allows a single process to open simultaneously (the system creates a socket handle for each TCP connection, and every socket handle is also a file handle). You can use the ulimit command to check how many files the current user's processes are allowed to open:
[speng@as4 ~]$ ulimit -n
1024
This indicates that each process of the current user may open at most 1024 files at the same time. From these 1024 files, each process must subtract standard input, standard output, standard error, the server listening socket, sockets used for inter-process communication, and so on, which leaves roughly 1024 - 10 = 1014 files available for client socket connections. In other words, by default a Linux-based communication program can support at most about 1014 concurrent TCP connections.
For a communication program that must support a larger number of concurrent TCP connections, you have to modify both the soft limit and the hard limit that Linux places on the number of files a process of the current user may open simultaneously. The soft limit is the limit Linux enforces on the number of files the current user may have open within the current system; the hard limit is the maximum number of files the system can have open at once, calculated from the system's hardware resources (mainly memory). The soft limit must always be less than or equal to the hard limit.
The simplest way to modify the preceding limits is to use the ulimit command:
[speng@as4 ~]$ ulimit -n <file_num>
In the command above, <file_num> is the maximum number of files a single process is allowed to open. If the system echoes something like "Operation not permitted", the modification failed, because the value given in <file_num> exceeds the soft or hard limit that Linux places on the number of files this user may open. You therefore need to change the system's soft and hard limits on the number of open files first.
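For illustration, the short C sketch below (not part of the original procedure; the file name rlimit_demo.c is arbitrary) shows how the same soft/hard limit pair looks from inside a program: getrlimit() reads RLIMIT_NOFILE, and setrlimit() can raise the soft limit up to, but not beyond, the hard limit for an unprivileged process. Attempting to go beyond the hard limit fails with EPERM, the same "Operation not permitted" error mentioned above.

/* rlimit_demo.c - illustrative sketch: query and raise RLIMIT_NOFILE.
 * Raising the soft limit up to the hard limit works for an ordinary user;
 * raising the hard limit normally requires root (CAP_SYS_RESOURCE),
 * otherwise setrlimit() fails with EPERM ("Operation not permitted"). */
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("soft limit: %lu, hard limit: %lu\n",
           (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);

    /* Try to raise the soft limit to the hard limit. */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        fprintf(stderr, "setrlimit: %s\n", strerror(errno));
    else
        printf("soft limit raised to %lu\n", (unsigned long)rl.rlim_cur);

    return 0;
}

Compiling this with gcc and running it before and after the changes described below is one way to confirm the limits a process actually sees.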
Step 1: Modify the /etc/security/limits.conf file and add the following lines to it:
speng soft nofile 10240
speng hard nofile 10240
Here speng specifies which user's open-file limit is being changed; the '*' wildcard can be used instead to change the limit for all users. soft or hard indicates whether the soft or the hard limit is being modified, and 10240 is the new limit value, that is, the maximum number of open files (remember that the soft limit must be less than or equal to the hard limit). Save the file after making the changes.
Step 2: Modify the /etc/pam.d/login file and add the following line to it:
session required /lib/security/pam_limits.so
This tells Linux that after a user completes the system login, the pam_limits.so module should be invoked to set the limits on the resources the user may use (including the maximum number of files the user may open). The pam_limits.so module reads these limits from the /etc/security/limits.conf configuration file. Save the file after making the change.
Step 3: Check the Linux system-level limit on the maximum number of open files with the following command:
[speng@as4 ~]$ cat /proc/sys/fs/file-max
12158
This indicates that this Linux system allows at most 12158 files to be open simultaneously (the total across all users), which is a Linux system-level hard limit; no user-level limit on the number of open files should exceed this value. This system-level hard limit is usually an optimal value that Linux computes at boot time from the system's hardware resources, and it should not be changed without a specific need, such as wanting to set a user-level open-file limit that exceeds it. To change this hard limit, edit the /etc/rc.local script and add the following line to it:
echo 22158 > /proc/sys/fs/file-max
This makes Linux forcibly set the system-level open-file limit to 22158 after boot. Save the file after making the change.
After completing the steps above, restart the system. Normally this allows a single process of the specified user to open up to the chosen number of files simultaneously. If ulimit -n afterwards still reports a limit lower than the value set above, the cause may be a ulimit -n command in the user's login script /etc/profile that restricts the number of files the user may open. Because a later ulimit -n can only set a value less than or equal to the one set by a previous ulimit -n, the command cannot be used to raise the limit at that point. So if this problem exists, open the /etc/profile script and check whether it uses ulimit -n to cap the maximum number of files the user may open; if it does, delete that line or change it to an appropriate value, save the file, and then log out and log back in.
Through the steps above, the operating-system limits on the number of open files are lifted for a communication program that handles high-concurrency TCP connections.
2. Modify the network kernel's limits on TCP connections (compare with the article "Optimizing kernel parameters" below)
When writing a client communication program on Linux that supports high-concurrency TCP connections, you may find that even though the system limit on the number of simultaneously open files has been lifted, it still becomes impossible to establish new TCP connections once the number of concurrent connections reaches a certain level. There are several possible reasons for this.
The first possible reason is that the Linux network kernel restricts the range of local port numbers. In this case, further analysis of why TCP connections cannot be established shows that the connect() call fails and the system error message is "Cannot assign requested address". Meanwhile, if the tcpdump tool is used to monitor the network, there is no network traffic at all from the client sending SYN packets when the connection is attempted. These symptoms indicate that the limitation lies in the local Linux kernel. The root cause is that the TCP/IP implementation in the Linux kernel restricts the range of local port numbers available to client TCP connections (for example, the kernel may limit local port numbers to the range 1024 to 32768). When too many TCP client connections exist at one time, each connection occupies a unique local port number from this range; once the existing connections have used up all available local ports, no local port can be allocated for a new client connection, so connect() fails and the error is set to "Cannot assign requested address". You can examine this control logic in the Linux kernel source code. Taking the Linux 2.6 kernel as an example, look at the following function in the file tcp_ipv4.c:
static int tcp_v4_hash_connect(struct sock *sk)
Note how this function controls access to the variable sysctl_local_port_range. The variable sysctl_local_port_range is initialized in the following function in the file tcp.c:
void __init tcp_init(void)
The default local port range compiled into the kernel may be too small, so you may need to enlarge this local port range limit.
Step 1: Modify the /etc/sysctl.conf file and add the following line to it:
net.ipv4.ip_local_port_range = 1024 65000
This sets the local port range to 1024 through 65000. Note that the minimum value of the local port range must be greater than or equal to 1024, and the maximum value should be less than or equal to 65535. Save the file after making the change.
Step 2: Run the sysctl command:
[speng@as4 ~]$ sysctl -p
If no error message is displayed, the new local port range has been set successfully. With the port range above, a single process can in theory establish up to about 60000 outgoing TCP client connections at the same time.
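As an illustration of how port exhaustion surfaces in client code: when the local port range is used up, connect() fails and errno is set to EADDRNOTAVAIL, which corresponds to the "Cannot assign requested address" message described above. The sketch below is not part of the original text; the server address 127.0.0.1:8000 is a placeholder chosen for the example.

/* connect_check.c - illustrative sketch: detect local port exhaustion.
 * When all local ports in net.ipv4.ip_local_port_range are in use,
 * connect() fails with errno == EADDRNOTAVAIL. */
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* Placeholder server address used for illustration only. */
#define SERVER_IP   "127.0.0.1"
#define SERVER_PORT 8000

int try_connect(void)
{
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");   /* may fail with EMFILE if RLIMIT_NOFILE is reached */
        return -1;
    }

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(SERVER_PORT);
    inet_pton(AF_INET, SERVER_IP, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        if (errno == EADDRNOTAVAIL)
            fprintf(stderr, "local port range exhausted: %s\n", strerror(errno));
        else
            perror("connect");
        close(fd);
        return -1;
    }
    return fd;   /* caller keeps the connected socket open */
}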
The second possible reason is that the iptables firewall in the Linux network kernel limits the maximum number of TCP connections it can track. In this case the program appears to hang inside the connect() call, as if the machine had frozen, and monitoring the network with tcpdump again shows no network traffic from the client sending SYN packets. Because iptables tracks the state of every TCP connection in the kernel, the tracking information is stored in the conntrack database in kernel memory. This database has a limited size; when too many TCP connections exist in the system, the database fills up, iptables cannot create tracking entries for new TCP connections, and the connect() call blocks. In this case you must modify the kernel's limit on the maximum number of tracked TCP connections. The method is similar to modifying the local port range limit:
Step 1: Modify the /etc/sysctl.conf file and add the following line to it:
net.ipv4.ip_conntrack_max = 10240
This sets the maximum number of tracked TCP connections to 10240. Note that this value should be kept as small as practical to reduce kernel memory usage.
Step 2: Run the sysctl command:
[speng@as4 ~]$ sysctl -p
If no error message is displayed, the new maximum number of tracked TCP connections has been set successfully. With the value above, a single process can in theory establish up to about 10000 TCP client connections at the same time.
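Because a full conntrack table leaves connect() blocked for a long time, a client program can also protect itself with a non-blocking connect() guarded by a timeout. This is a general defensive pattern, not something prescribed by the original text; the helper below assumes the caller has already filled in the sockaddr_in, and the timeout value is an arbitrary choice.

/* connect_timeout.c - illustrative sketch (not from the original text):
 * non-blocking connect() with a timeout, so the client does not hang
 * indefinitely if, for example, the firewall's conntrack table is full. */
#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Returns a connected fd, or -1 on error/timeout.
 * 'addr' must already be filled in by the caller; timeout is in milliseconds. */
int connect_with_timeout(const struct sockaddr_in *addr, int timeout_ms)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);   /* non-blocking mode */

    if (connect(fd, (const struct sockaddr *)addr, sizeof(*addr)) == 0)
        return fd;                       /* connected immediately (rare) */
    if (errno != EINPROGRESS) {
        close(fd);                       /* immediate failure, e.g. EADDRNOTAVAIL */
        return -1;
    }

    struct pollfd pfd = { .fd = fd, .events = POLLOUT };
    if (poll(&pfd, 1, timeout_ms) <= 0) {
        close(fd);                       /* timed out, or poll error */
        return -1;
    }

    int err = 0;
    socklen_t len = sizeof(err);
    getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);
    if (err != 0) {                      /* connection failed asynchronously */
        close(fd);
        return -1;
    }
    return fd;                           /* connected; socket is still non-blocking */
}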
3. Use programming techniques that support high-concurrency network I/O
When writing applications on Linux that handle high-concurrency TCP connections, you must use an appropriate network I/O technique and an appropriate I/O event dispatching mechanism.
The available I/O techniques are synchronous I/O, non-blocking synchronous I/O (also called reactive I/O), and asynchronous I/O. Under high TCP concurrency, plain synchronous I/O severely blocks the program unless a thread is created for the I/O of each TCP connection, and too many threads in turn impose enormous scheduling overhead on the system. Synchronous I/O is therefore not advisable under high TCP concurrency; instead, consider non-blocking synchronous I/O or asynchronous I/O. The non-blocking synchronous techniques include the select(), poll(), and epoll mechanisms; asynchronous I/O means using AIO.
From the perspective of the I/O event dispatching mechanism, select() is inappropriate because it supports only a limited number of concurrent connections (usually no more than 1024). If performance matters, poll() is also unsuitable: although it can handle a larger number of concurrent TCP connections, its polling mechanism makes it quite inefficient at high concurrency, and it may distribute I/O events unevenly, "starving" some connections. epoll and AIO do not have these problems (the AIO in early Linux kernels was implemented by creating a kernel thread for each I/O request, which itself had serious performance problems under high-concurrency TCP connections, but the AIO implementation has been improved in more recent kernels).
To sum up, when developing Linux applications that must support high-concurrency TCP connections, you should use epoll or AIO wherever possible to handle I/O on the concurrent connections; this provides an effective I/O guarantee for supporting large numbers of concurrent TCP connections.
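As a concrete, minimal illustration of the epoll approach recommended here, the sketch below shows a level-triggered epoll event loop for a simple TCP echo-style server. The listening port (8000), buffer size, and abbreviated error handling are assumptions made for brevity; a production server would also need to deal with partial writes, edge-triggered mode, and graceful shutdown.

/* epoll_echo.c - minimal sketch of an epoll-based TCP server loop.
 * Listening port and buffer size are arbitrary choices for illustration. */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <sys/epoll.h>
#include <sys/socket.h>

#define LISTEN_PORT 8000
#define MAX_EVENTS  1024

static void set_nonblocking(int fd)
{
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
}

int main(void)
{
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(LISTEN_PORT);

    int one = 1;
    setsockopt(listen_fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
    if (bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr)) != 0 ||
        listen(listen_fd, SOMAXCONN) != 0) {
        perror("bind/listen");
        return 1;
    }
    set_nonblocking(listen_fd);

    int epfd = epoll_create1(0);                 /* create the epoll instance */
    struct epoll_event ev = { 0 }, events[MAX_EVENTS];
    ev.events = EPOLLIN;
    ev.data.fd = listen_fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    for (;;) {
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {
                /* Accept as many pending connections as possible. */
                int conn;
                while ((conn = accept(listen_fd, NULL, NULL)) >= 0) {
                    set_nonblocking(conn);
                    ev.events = EPOLLIN;
                    ev.data.fd = conn;
                    epoll_ctl(epfd, EPOLL_CTL_ADD, conn, &ev);
                }
            } else {
                /* Echo back whatever the client sent; close on EOF or error. */
                char buf[4096];
                ssize_t len = read(fd, buf, sizeof(buf));
                if (len <= 0) {
                    epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
                    close(fd);
                } else {
                    write(fd, buf, (size_t)len);
                }
            }
        }
    }
}

With this design, a single process can multiplex I/O over tens of thousands of connections from one event loop, subject to the open-file and kernel limits configured in the earlier sections.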