The Various Limits on the Maximum Number of Concurrent Socket Connections under Linux (Detailed)


1. Modify the limit on the number of files a user process may open

On Linux, whether you are writing a client program or a server program, the maximum number of concurrent TCP connections a single process can handle is limited by the system's cap on the number of files that one user process may open simultaneously (the system creates a socket handle for each TCP connection, and every socket handle is also a file handle). You can use the ulimit command to view the number of files the system allows the current user process to open:

[speng@as4 ~]$ ulimit -n
1024

This means that each process of the current user is allowed to open at most 1024 files at the same time. From those 1024 files you must also subtract standard input, standard output, standard error, the server's listening socket, Unix domain sockets used for inter-process communication, and so on, which leaves only about 1024 - 10 = 1014 files available for client socket connections. In other words, by default a Linux-based communication program can maintain at most about 1014 concurrent TCP connections.

For a communication program that must support more concurrent TCP connections, you have to modify the soft limit and hard limit that Linux places on the number of files the current user's process may open simultaneously. The soft limit is a further restriction, within what the current system can bear, on the number of files a user may open at once; the hard limit is the maximum number of simultaneously open files calculated from the system's hardware resources (mainly system memory). The soft limit is always less than or equal to the hard limit.
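
These limits can also be inspected, and the soft limit raised, from inside a program rather than from the shell. The following is a minimal C sketch (an illustration, not part of the original article) using the standard getrlimit()/setrlimit() calls on RLIMIT_NOFILE; an unprivileged process may raise its soft limit only up to its hard limit.

#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("soft limit: %llu, hard limit: %llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);

    /* Try to raise the soft limit up to the hard limit; raising the
     * hard limit itself requires root (CAP_SYS_RESOURCE). */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        fprintf(stderr, "setrlimit: %s\n", strerror(errno));
    return 0;
}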

The simplest way to modify these limits is with the ulimit command:

[speng@as4 ~]$ ulimit -n <file-num>

In the above command, <file-num> is the maximum number of open files you want to allow for a single process. If the system echoes something like "Operation not permitted", the modification failed, because the value specified exceeds the soft or hard limit that Linux places on the number of files this user may open. You therefore need to change the system's soft and hard limits on the number of open files for this user.

First, modify the /etc/security/limits.conf file and add the following lines to it:

speng soft nofile 10240
speng hard nofile 10240

Here speng is the user whose open-file limit is being changed; the '*' character can be used instead to change the limits of all users. soft or hard selects whether the soft or the hard limit is modified, and 10240 is the new limit value, i.e. the maximum number of open files (note that the soft limit value must be less than or equal to the hard limit). Save the file when you have finished modifying it.

In the second step, modify the /etc/pam.d/login file and add the following line to it:

session required /lib/security/pam_limits.so

This tells Linux that after a user logs in to the system, the pam_limits.so module should be invoked to apply the limits on the resources that user may consume (including the maximum number of files the user may open), and the pam_limits.so module reads those limit values from the /etc/security/limits.conf file. Save this file when you have finished modifying it.

The third step is to check the Linux system-level limit on the maximum number of open files, using the following command:

[speng@as4 ~]$ cat /proc/sys/fs/file-max
12158

This shows that this Linux system allows at most 12,158 files to be open simultaneously (the sum of open files across all users); it is a Linux system-level hard limit, and no user-level open-file limit should exceed it. This system-level hard limit is normally the optimal maximum number of simultaneously open files that Linux computes at startup from the system's hardware resources, and it should not be changed unless you genuinely need to set a user-level open-file limit above it. The way to raise this hard limit is to modify the /etc/rc.local script and add the following line to it:

echo 22158 > /proc/sys/fs/file-max

This makes Linux forcibly set the system-level hard limit on the number of open files to 22158 after startup completes. Save this file when you have finished modifying it.

After completing the above steps and rebooting the system, you can generally raise the maximum number of files that Linux allows a single process of the given user to open simultaneously to the specified value. If ulimit -n still reports a number below the value set in the previous steps after a reboot, it may be because a ulimit -n command in the user login script /etc/profile caps the number of files the user may open simultaneously. Since a ulimit -n invocation can only set a value less than or equal to the one set by the previous ulimit -n, it is impossible to raise the limit with this command. So, if you hit this problem, open the /etc/profile script, check whether it uses ulimit -n to cap the maximum number of files the user may open simultaneously, and if so delete that line or raise its value appropriately, save the file, and then log out and log in to the system again.

With these steps, the system's restrictions on the number of open files are lifted for communication programs that must handle a large number of concurrent TCP connections.

2. Modify the kernel's network limits on TCP connections (see the section "Optimization of kernel parameters in sysctl.conf" below)

When writing client communication programs that support a high number of concurrent TCP connections on Linux, you may sometimes find that even though the limit on simultaneously open files has been lifted, new TCP connections can no longer be established once the number of concurrent connections reaches a certain level. There are several possible reasons for this.

The first reason may be that the Linux kernel restricts the range of local port numbers. In that case, if you analyze why the TCP connection cannot be established, you will find that the connect() call fails and the system error message is "Can't assign requested address". Also, if you monitor the network with the tcpdump tool at that moment, you will find that there is no network traffic at all of the client sending SYN packets. These symptoms show that the limitation lies in the local Linux kernel. The root cause is that the TCP/IP implementation in the Linux kernel restricts the range of local port numbers available to all client TCP connections in the system (for example, the kernel may limit local port numbers to the range 1024~32768). When the system holds too many TCP client connections at one time, each connection occupies one unique local port number (drawn from that restricted range), so once the existing TCP client connections have used up all the local port numbers, no local port number can be allocated to a new TCP client connection; connect() therefore fails and sets the error message to "Can't assign requested address". To see this control logic in the Linux kernel source, taking the Linux 2.6 kernel as an example, look at the following function in the file tcp_ipv4.c:

static int tcp_v4_hash_connect(struct sock *sk)

Note how this function controls access to the variable sysctl_local_port_range. The variable sysctl_local_port_range is initialized in the following function in the file tcp.c:

void __init tcp_init(void)

The default local port range set at kernel compile time may be too small, so you need to widen this local port range limit.
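
To make the symptom concrete, here is a small C sketch (an illustration, not from the original article) of a client that opens connections in a loop and reports when connect() fails with EADDRNOTAVAIL, the errno behind "Can't assign requested address"; the target address 127.0.0.1:8080 is only a placeholder.

#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);                      /* placeholder port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* placeholder host */

    for (int i = 0; ; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");             /* e.g. EMFILE: open-file limit */
            break;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            if (errno == EADDRNOTAVAIL)   /* "Can't assign requested address" */
                fprintf(stderr, "local ports exhausted after %d connections\n", i);
            else
                perror("connect");
            close(fd);
            break;
        }
        /* The fd is intentionally kept open, so every successful
         * connection pins one local port number. */
    }
    return 0;
}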

First, modify the /etc/sysctl.conf file and add the following line to it:

net.ipv4.ip_local_port_range = 1024 65000

This sets the system's local port range to 1024~65000. Note that the minimum of the local port range must be greater than or equal to 1024, and the maximum must be less than or equal to 65535. Save this file when you have finished modifying it.

Step two, execute the sysctl command:

[speng@as4 ~]$ sysctl -p

If the system gives no error message, the new local port range has been set successfully. With the port range above, a single process can theoretically establish up to about 60,000 TCP client connections at the same time.

The second reason a TCP connection may fail to be established is that the ip_tables firewall in the Linux kernel limits the maximum number of TCP connections it will track. In this case the program appears to hang, blocked in the connect() call, and if you monitor the network with the tcpdump tool you will again find no network traffic of the client sending SYN packets. Since the ip_tables firewall tracks the state of every TCP connection in the kernel, the tracking entries are stored in the conntrack database in kernel memory, whose size is limited. When the system holds too many TCP connections, the database fills up and ip_tables can no longer create a tracking entry for a new TCP connection, which shows up as blocking inside the connect() call. You must then raise the kernel's limit on the maximum number of tracked TCP connections, in a way similar to widening the local port range:

First, modify the /etc/sysctl.conf file and add the following line to it:

net.ipv4.ip_conntrack_max = 10240

This sets the limit on the maximum number of tracked TCP connections to 10240. Note that this value should be kept as small as practical, to save kernel memory.

Step two, execute the sysctl command:

[speng@as4 ~]$ sysctl -p

If the system gives no error message, the new limit on the maximum number of tracked TCP connections has been set successfully. With the value above, a single process can theoretically establish up to about 10,000 TCP client connections at the same time.
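
Because connect() can appear to hang in this situation, it is safer for a client not to block in it indefinitely. The following is a hedged C sketch (an illustration, not from the original article) of a connect-with-timeout helper built from non-blocking mode plus poll(); connect_with_timeout() is a hypothetical helper name.

#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <sys/socket.h>

/* Returns 0 on success, -1 on error or timeout. */
int connect_with_timeout(int fd, const struct sockaddr *addr,
                         socklen_t len, int timeout_ms)
{
    /* Switch the socket to non-blocking mode so connect() returns at once. */
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    if (connect(fd, addr, len) == 0)
        return 0;                          /* connected immediately */
    if (errno != EINPROGRESS)
        return -1;                         /* immediate failure */

    struct pollfd pfd = { .fd = fd, .events = POLLOUT };
    if (poll(&pfd, 1, timeout_ms) != 1)
        return -1;                         /* timed out (or poll error) */

    int err = 0;
    socklen_t elen = sizeof(err);
    /* SO_ERROR holds the deferred result of the connection attempt. */
    if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &elen) != 0 || err != 0)
        return -1;
    return 0;
}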

3. Use network I/O programming techniques that support high concurrency

When writing highly concurrent TCP connection applications on Linux, you must use an appropriate network I/O technique and I/O event dispatch mechanism.

The available I/O techniques are synchronous I/O, non-blocking synchronous I/O (also called reactive I/O), and asynchronous I/O. Under high TCP concurrency, synchronous I/O would severely block the program unless a thread is created to perform the I/O for each TCP connection, and that many threads would impose enormous scheduling overhead on the system. Synchronous I/O is therefore unsuitable under high TCP concurrency; instead, consider non-blocking synchronous I/O or asynchronous I/O. The non-blocking synchronous I/O techniques include using the select(), poll(), and epoll mechanisms; the asynchronous I/O technique is AIO.
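
The starting point for non-blocking synchronous I/O is putting each socket into non-blocking mode, after which read(), write(), and accept() return -1 with errno set to EAGAIN/EWOULDBLOCK instead of blocking. A minimal C sketch (illustrative; set_nonblocking() is a hypothetical helper name):

#include <fcntl.h>

static int set_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);  /* read the current file status flags */
    if (flags < 0)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}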

As for the I/O event dispatch mechanism, select() is inappropriate because it supports only a limited number of concurrent connections (usually no more than 1024). If performance matters, poll() is also inappropriate: although it can support a higher number of concurrent TCP connections, its "polling" mechanism makes it very inefficient when concurrency is high, and it may distribute I/O events unevenly, causing I/O "starvation" on some TCP connections. epoll and AIO do not have these problems (although the AIO implementation in early Linux kernels created a kernel thread for each I/O request, which itself had serious performance problems under high concurrent TCP connections; in recent Linux kernels the AIO implementation has been improved).

To sum up, when developing Linux applications that support a high number of concurrent TCP connections, you should prefer epoll or AIO to perform I/O on the concurrent TCP connections; this provides the effective I/O foundation a program needs to support high concurrent TCP connection counts.
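
As an illustration of the epoll approach, here is a skeletal C event loop (a sketch under stated assumptions, not code from this article): listen_fd is assumed to be an already bound, listening, non-blocking TCP socket (see set_nonblocking() above), and error handling is abbreviated for clarity.

#include <unistd.h>
#include <sys/epoll.h>
#include <sys/socket.h>

#define MAX_EVENTS 1024

void event_loop(int listen_fd)
{
    struct epoll_event ev, events[MAX_EVENTS];
    int epfd = epoll_create(1024);   /* size hint; ignored since Linux 2.6.8 */

    ev.events = EPOLLIN;
    ev.data.fd = listen_fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    for (;;) {
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {
                /* New connection: accept it and watch it for read events. */
                int conn = accept(listen_fd, NULL, NULL);
                if (conn < 0)
                    continue;
                ev.events = EPOLLIN;
                ev.data.fd = conn;
                epoll_ctl(epfd, EPOLL_CTL_ADD, conn, &ev);
            } else {
                /* Data (or EOF) on an established connection. */
                char buf[4096];
                ssize_t r = read(fd, buf, sizeof(buf));
                if (r <= 0) {    /* peer closed or error: drop the connection */
                    epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
                    close(fd);
                }
                /* else: process the r bytes in buf ... */
            }
        }
    }
}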

Optimization of kernel parameters in sysctl.conf

/etc/sysctl.conf is the configuration file that controls Linux networking. It is very important for programs that depend on the network, such as web servers and cache servers. RHEL provides the best adjustments by default.

Recommended configuration (clear the original contents of /etc/sysctl.conf and copy in the contents below):

net.ipv4.ip_local_port_range = 1024 65536
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_fin_timeout = 10
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_window_scaling = 0
net.ipv4.tcp_sack = 0
net.core.netdev_max_backlog = 30000
net.ipv4.tcp_no_metrics_save = 1
net.core.somaxconn = 262144
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_orphans = 262144
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 2

This configuration draws on the recommended configuration for the cache server Varnish and the recommended configuration for SunOne server system optimization.

Varnish's recommended configuration is published at: http://varnish.projects.linpro.no/wiki/Performance

However, Varnish's recommended configuration is problematic: actual operation showed that the setting "net.ipv4.tcp_fin_timeout = 3" often caused pages to fail to open, and for users on the IE6 browser, after visiting the site for a while all pages would fail to open until the browser was restarted. Perhaps overseas networks are simply faster than ours; to suit local conditions we adjusted it to "net.ipv4.tcp_fin_timeout = 10", and at 10 s everything was normal (a conclusion from actual operation).

After the modification is completed, execute:

/sbin/sysctl -p /etc/sysctl.conf
/sbin/sysctl -w net.ipv4.route.flush=1

so that the commands take effect. To be safe, you can also reboot the system.

Adjusting the number of open files:

To support high concurrency, network optimization on Linux must increase the number of files the system allows to be opened; the default of 1024 is far from enough.

Execute the following commands:

Shell code:
echo 'ulimit -HSn 65536' >> /etc/rc.local
echo 'ulimit -HSn 65536' >> /root/.bash_profile
ulimit -HSn 65536

The above is the full discussion of the various limits on the maximum number of concurrent socket connections under Linux; we hope it serves as a useful reference.
