1. Modify the number of files a user process can open

On the Linux platform, whether you are writing a client or a server program, the maximum number of concurrent TCP connections that can be handled is limited by the number of files a single process is allowed to open at the same time (the system creates a socket handle for every TCP connection, and every socket handle is also a file handle). You can check the limit for the current user's processes with the ulimit command:

    $ ulimit -n
    1024

This means each process of the current user may open at most 1024 files at the same time. From those 1024 you must subtract the standard input, standard output and standard error that every process opens, plus the server's listening socket and any Unix domain sockets used for inter-process communication, which leaves roughly 1024 - 10 = 1014 files for client socket connections. In other words, by default a Linux-based communication program can support at most about 1014 concurrent TCP connections.

To support more concurrent TCP connections, you must modify the soft limit and the hard limit that Linux places on the number of files the current user's processes may open simultaneously. The soft limit is the limit Linux currently enforces on the number of files a user may open; the hard limit is the maximum number of files that can be open at the same time, calculated from the system's hardware resources (mainly memory). The soft limit is always less than or equal to the hard limit.

The simplest way to change these limits is the ulimit command:

    $ ulimit -n <file_num>

where <file_num> is the new maximum number of files a single process may open. If the system echoes something like "Operation not permitted", the change failed because the value given in <file_num> exceeds the soft or hard limit Linux places on the user's open files. In that case you have to raise the soft and hard limits themselves.

Step 1: edit /etc/security/limits.conf and add the following lines:

    * soft nofile 10240
    * hard nofile 10240

The first field names the user whose limit is being changed; '*' applies the change to all users. The second field selects the soft or the hard limit, and 10240 is the new limit value, i.e. the maximum number of open files (note that the soft limit must be less than or equal to the hard limit). Save the file.

Step 2: edit /etc/pam.d/login and add the following line:

    session required /lib/security/pam_limits.so

This tells Linux that after a user completes login, the pam_limits.so module should be invoked to set the user's resource limits (including the maximum number of files the user may open); pam_limits.so reads those limits from /etc/security/limits.conf. Save the file.
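After logging in again so that the new limits take effect, a program can also verify them, and raise its own soft limit up to the hard limit, at run time. The following C fragment is not part of the original article; it is a minimal sketch of the standard getrlimit()/setrlimit() calls on RLIMIT_NOFILE.

    /* Minimal sketch: inspect this process's open-file limits and raise
     * the soft limit to the hard limit.  Raising the hard limit itself
     * would require root privileges. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("soft limit: %llu, hard limit: %llu\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);

        rl.rlim_cur = rl.rlim_max;   /* request the maximum currently allowed */
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        printf("soft limit raised to %llu\n",
               (unsigned long long)rl.rlim_cur);
        return 0;
    }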
Step 3: check the Linux system-wide limit on the number of open files. Run the following command:

    $ cat /proc/sys/fs/file-max
    12158

This means the system allows at most 12158 files to be open at the same time (the total across all users). It is a system-level hard limit, and the user-level limits on open files should not exceed it. This value is normally calculated when Linux boots, based on the system's hardware resources, as the optimal maximum number of simultaneously open files, and it should not be changed without a special reason, for example when the user-level limit you want to set exceeds it. To raise this hard limit, add the following line to the /etc/rc.local script:

    echo 22158 > /proc/sys/fs/file-max

This forces Linux to set the system-wide open-file limit to 22158 after boot. Save the file.

After completing the steps above, reboot the system. In general, this sets the maximum number of files that a single process of the specified user may open simultaneously to the chosen value. If ulimit -n still reports a value lower than the one configured above, the likely cause is that the login script /etc/profile contains a ulimit -n command that caps the number of files the user may open. Because a later ulimit -n can only set a value less than or equal to the one set by a previous ulimit -n, the command cannot be used to raise the limit again. So if this problem occurs, open /etc/profile, check whether it limits the maximum number of simultaneously open files with ulimit -n, and if so delete that line or change its value to something appropriate, then save the file and log out and back in.

With the steps above, the operating system's restrictions on the number of open files are lifted for a communication program that must handle high-concurrency TCP connections.

2. Modify the network kernel's restrictions on TCP connections

When writing a client program that supports high-concurrency TCP connections on Linux, you may find that even though the limit on simultaneously open files has been lifted, new TCP connections can no longer be established once the number of concurrent connections reaches a certain level. There are several possible reasons for this.

The first possible reason is that the Linux network kernel restricts the range of local port numbers. In this case, further analysis of why a new TCP connection cannot be established shows that the connect() call is failing, with the system error message "Can't assign requested address". Monitoring the network with tcpdump at the same time shows that the client sends no SYN packets for the new connection at all. These symptoms indicate that the restriction lies in the local Linux kernel; the sketch below shows how a client can recognize this condition from the errno value, and the root cause is explained after it.
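The following C fragment is not part of the original article; it is a minimal sketch of how a client might distinguish this failure: the message "Can't assign requested address" corresponds to errno == EADDRNOTAVAIL. The destination address 127.0.0.1 and port 9000 are arbitrary illustration values.

    /* Minimal sketch: detect local port exhaustion on connect(). */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Returns the connected socket, or -1 on failure. */
    static int try_connect(const char *ip, unsigned short port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        inet_pton(AF_INET, ip, &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            if (errno == EADDRNOTAVAIL)   /* "Can't assign requested address" */
                fprintf(stderr, "local port range exhausted: %s\n",
                        strerror(errno));
            close(fd);
            return -1;
        }
        return fd;
    }

    int main(void)
    {
        /* Keep opening connections (and keeping them open) until the
         * local port range runs out or the peer refuses the connection. */
        while (try_connect("127.0.0.1", 9000) >= 0)
            ;
        return 0;
    }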
In fact, the root cause of the problem is that the TCP/IP implementation in the Linux kernel restricts the range of local port numbers available to client TCP connections (for example, the kernel may restrict local ports to the range 1024 to 32768). When too many client TCP connections exist at the same time, each of them occupies a unique local port number from this range, so once the existing client connections have used up all available local ports, no local port can be allocated to a new client connection. connect() therefore fails, and the error message is set to "Can't assign requested address". You can see this control logic in the Linux kernel source code; for example, look at the function

    static int tcp_v4_hash_connect(struct sock *sk)

in the file tcp_ipv4.c and note how it checks the variable sysctl_local_port_range, which is initialized in the function

    void __init tcp_init(void)

in the file tcp.c. The default local port range chosen at kernel compile time may be too small, so it needs to be enlarged.

Step 1: edit /etc/sysctl.conf and add the following line:

    net.ipv4.ip_local_port_range = 1024 65000

This sets the local port range to 1024 through 65000. Note that the lower bound must be greater than or equal to 1024 and the upper bound must be less than or equal to 65535. Save the file.

Step 2: run the sysctl command:

    $ sysctl -p

If no error message is printed, the new local port range has taken effect. With this range, a single process can establish up to about 60000 client TCP connections at the same time (a small sketch at the end of this section shows how to read the configured range at run time).

The second possible reason is that the ip_table firewall in the Linux network kernel limits the maximum number of TCP connections it can track. In this case the program appears to hang in the connect() call, as if the machine had frozen, and tcpdump again shows no SYN packets being sent by the client. Because the ip_table firewall tracks the state of every TCP connection in the kernel, the tracking entries are stored in the conntrack database in kernel memory, and this database has a limited size. When there are too many TCP connections in the system, the database fills up, ip_table cannot create a tracking entry for the new connection, and the program blocks inside the connect() call. In this case the kernel's limit on the maximum number of tracked TCP connections must be raised; the method is similar to raising the local port range limit.

Step 1: edit /etc/sysctl.conf and add the following line:

    net.ipv4.ip_conntrack_max = 10240

This sets the maximum number of tracked TCP connections to 10240. Note that this value should be kept as small as possible to reduce kernel memory usage. Save the file.

Step 2: run the sysctl command:

    $ sysctl -p

If no error message is printed, the new limit on the number of tracked TCP connections has taken effect. With these parameters in place, a single process can establish at most about 10000 client TCP connections at the same time.
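As a small illustration (not part of the original article), the C sketch below reads the configured local port range from /proc/sys/net/ipv4/ip_local_port_range at run time; with the 1024-65000 range configured above it reports roughly 64000 usable ephemeral ports, which is where the figure of about 60000 concurrent client connections comes from.

    /* Minimal sketch: read the local port range and estimate the number
     * of ephemeral ports available for outgoing connections. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/net/ipv4/ip_local_port_range", "r");
        if (f == NULL) {
            perror("fopen");
            return 1;
        }

        unsigned int low = 0, high = 0;
        if (fscanf(f, "%u %u", &low, &high) != 2) {
            fprintf(stderr, "unexpected format\n");
            fclose(f);
            return 1;
        }
        fclose(f);

        printf("local port range: %u-%u (about %u ephemeral ports)\n",
               low, high, high - low + 1);
        return 0;
    }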
3. Use programming techniques that support high-concurrency network I/O

When writing a high-concurrency TCP connection application on Linux, you must use an appropriate network I/O technique and I/O event dispatch mechanism. The available I/O techniques are synchronous (blocking) I/O, non-blocking synchronous I/O (also called reactive I/O), and asynchronous I/O. Under high TCP concurrency, synchronous I/O seriously blocks the program unless a separate thread is created for each connection's I/O, but too many threads in turn impose an enormous scheduling overhead on the system. Synchronous I/O is therefore not advisable at high TCP concurrency; consider non-blocking synchronous I/O or asynchronous I/O instead. Non-blocking synchronous I/O uses mechanisms such as select(), poll() and epoll; asynchronous I/O means using AIO.

From the perspective of the I/O event dispatch mechanism, select() is unsuitable because it supports only a limited number of concurrent connections (usually at most 1024). If performance matters, poll() is also unsuitable: although it can handle a large number of concurrent TCP connections, it relies on polling, so at high concurrency its efficiency is quite low, and it may distribute I/O events unevenly, "starving" the I/O on some TCP connections. epoll and AIO do not have these problems (the AIO in early Linux kernels was implemented by creating a kernel thread for each I/O request, an implementation that itself had serious performance problems under high-concurrency TCP connections, but the AIO implementation has been improved in more recent kernels).

To sum up, when developing a Linux application that must support high-concurrency TCP connections, use epoll or AIO wherever possible to perform I/O on the concurrent connections; this gives the program an effective I/O guarantee for handling large numbers of concurrent TCP connections. A minimal epoll-based sketch follows.
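The sketch below is not from the original article; it is a minimal, level-triggered epoll event loop for a TCP server that echoes whatever it receives. The listening socket and every accepted connection are made non-blocking and registered with a single epoll instance, and epoll_wait() dispatches readiness events. The port 8080, the buffer size and the abbreviated error handling are simplifications for illustration.

    /* Minimal epoll event-loop sketch: non-blocking sockets, one epoll
     * instance, level-triggered events, echo back whatever is read. */
    #include <arpa/inet.h>
    #include <fcntl.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define MAX_EVENTS 1024

    static void set_nonblocking(int fd)
    {
        int flags = fcntl(fd, F_GETFL, 0);
        fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    }

    int main(void)
    {
        int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1;
        setsockopt(listen_fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);          /* arbitrary example port */

        if (bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            perror("bind");
            return 1;
        }
        listen(listen_fd, SOMAXCONN);
        set_nonblocking(listen_fd);

        int epfd = epoll_create1(0);
        struct epoll_event ev, events[MAX_EVENTS];
        ev.events = EPOLLIN;
        ev.data.fd = listen_fd;
        epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

        for (;;) {
            int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
            for (int i = 0; i < n; i++) {
                int fd = events[i].data.fd;
                if (fd == listen_fd) {
                    /* Accept every pending connection and register it. */
                    int conn;
                    while ((conn = accept(listen_fd, NULL, NULL)) >= 0) {
                        set_nonblocking(conn);
                        ev.events = EPOLLIN;
                        ev.data.fd = conn;
                        epoll_ctl(epfd, EPOLL_CTL_ADD, conn, &ev);
                    }
                } else {
                    /* Read what is available; echo it back or close. */
                    char buf[4096];
                    ssize_t r = read(fd, buf, sizeof(buf));
                    if (r <= 0) {
                        epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
                        close(fd);
                    } else {
                        write(fd, buf, (size_t)r);  /* partial writes ignored */
                    }
                }
            }
        }
    }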
Source: http://blog.csdn.net/gavinloo/article/details/12129475