Reference: Linux Network Programming
A socket connection is completely described by a five-tuple: {protocol, local address, local port, remote address, remote port}.
When we write a client program to stress-test our server, we often hit a connection-count ceiling: most setups stall at around 20,000 connections per machine. With a limited number of test machines, testing a million connections would in theory require some 50 of them, and in practice nobody is going to hand us 50 test machines. So we work with the five-tuple instead. The protocol is fixed (TCP), and the remote address and remote port are fixed too, since the server binds a known address and port. That leaves the local address and the local port: the number of possible combinations of local addresses and local ports is the number of connections a single client machine can establish.
The first step is to bind multiple IP addresses to the NIC. The client program binds each outgoing socket to one of these addresses and then connects; the kernel picks the local port, and ports can be reused across different local addresses, so in principle the connection count equals (number of local IP addresses × number of usable local ports). In practice the result is disappointing: the connection count still tops out early, because of the system's default local port range.

The reason is that the Linux kernel limits the range of local port numbers. Analyzing why new TCP connections cannot be established, you find that the connect() call returns a failure and the system error message is "Can't assign requested address". If you monitor the network with tcpdump at this point, you will see no SYN packets leaving the client at all; no connection attempt ever reaches the wire. These symptoms point to a limit in the local Linux kernel, not in the network. The root cause is that the kernel's TCP/IP stack restricts the range of local port numbers available to all client TCP connections on the system (for example, to 1024~32768). Each client TCP connection consumes a unique local port number from this range, so once existing connections have used up every port in the range, no local port can be assigned to a new connection; connect() then fails with the error "Can't assign requested address". The default range, set at kernel build time, may be too small, so this local range limit needs to be raised.
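The client-side technique above can be sketched in C. This is a minimal sketch, not the article's actual test program: it binds an outgoing socket to a chosen local IP (one of the addresses aliased onto the NIC) with port 0 so the kernel picks a free local port, then connects. The function name `connect_from` and all addresses/ports are illustrative assumptions.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Connect to server_ip:server_port using a specific local source IP.
 * Returns the connected fd, or -1 on error. When the local port range
 * for local_ip is exhausted, connect() fails with EADDRNOTAVAIL
 * ("Cannot assign requested address"), as described in the text. */
int connect_from(const char *local_ip, const char *server_ip, int server_port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in local;
    memset(&local, 0, sizeof(local));
    local.sin_family = AF_INET;
    local.sin_port = 0;                 /* let the kernel pick a free port */
    inet_pton(AF_INET, local_ip, &local.sin_addr);
    if (bind(fd, (struct sockaddr *)&local, sizeof(local)) < 0) {
        close(fd);
        return -1;
    }

    struct sockaddr_in srv;
    memset(&srv, 0, sizeof(srv));
    srv.sin_family = AF_INET;
    srv.sin_port = htons(server_port);
    inet_pton(AF_INET, server_ip, &srv.sin_addr);
    if (connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
        perror("connect");              /* "Cannot assign requested address"
                                           here means ports are exhausted */
        close(fd);
        return -1;
    }
    return fd;
}
```

A test driver would loop over each aliased local IP, calling `connect_from` repeatedly until it fails, then move to the next address; that is how the per-machine count multiplies by the number of local IPs.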
Modify the /etc/sysctl.conf file and add the following line:
net.ipv4.ip_local_port_range = 1024 65000
Then execute the sysctl command:
$ sysctl -p
If the system prints no error, the new local port range took effect. With the range above, a single process can in theory establish roughly 64,000 TCP client connections per local IP address at the same time.

If connections still cannot be established at this point, the culprit may be the iptables connection-tracking limit on the number of TCP connections the kernel will track. The symptom is different this time: the program appears to hang in the connect() call as if it had crashed, and tcpdump again shows no SYN packets leaving the client. Because iptables tracks the state of every TCP connection in the kernel, the tracking entries are stored in the conntrack database, which lives in kernel memory and has a fixed capacity. When there are too many TCP connections on the system, the database fills up, iptables cannot create a tracking entry for a new connection, and the program blocks in connect(). The simplest fixes are either to turn off the iptables firewall or to raise the kernel's limit on the maximum number of tracked TCP connections, much as we raised the local port range:
Modify the /etc/sysctl.conf file and add the following line:
net.ipv4.ip_conntrack_max = 10240
This sets the maximum number of tracked TCP connections to 10240. Note that this value should be kept as small as your workload allows, to conserve kernel memory.
Execute the sysctl command:
$ sysctl -p
If the system prints no error, the new connection-tracking limit took effect. With the setting above, a single process can in theory hold more than 10,000 tracked TCP client connections at the same time.
The second step. At this point you may still hit the error "Too many open files". Again quoting Linux Network Programming:
Unix hackers have a saying: "In a Unix system, everything is a file." It describes the fact that in a UNIX system, all I/O is done by reading from or writing to a file descriptor. A file descriptor is just a small integer value that refers to an open file (file in the broad sense: not only a disk file, but also a network connection, a FIFO queue, a terminal screen, and so on). Everything in a UNIX system is a file! So if you want to communicate with another program over the network, you do it through a file descriptor; a socket is simply a file descriptor.
Now the cause of the error is clear: you have run out of file descriptors. The kernel could create more connections, but the process has no descriptors left for the sockets that would hold them.
On Linux, whether you are writing a client or a server for highly concurrent TCP connection handling, the maximum concurrency is limited by the number of files a single user process may hold open at once, because the system creates a socket handle for each TCP connection, and every socket handle is also a file handle. You can check the current per-process open-file limit with the ulimit command:
$ ulimit -n
1024
This means each process of the current user may hold at most 1024 files open at once. From those 1024 you must subtract standard input, standard output, standard error, the server's listening socket, any UNIX-domain sockets used for inter-process communication, and so on, leaving roughly 1024 − 10 = 1014 descriptors for client socket connections. In other words, a Linux-based communication program allows at most about 1014 concurrent TCP connections by default.
For a program that must support more concurrent TCP connections, you have to raise both the soft limit and the hard limit that Linux places on the number of files the current user's processes may open at once. The soft limit is the limit Linux actually enforces, within what the current system can bear; the hard limit is the ceiling computed from the system's hardware resources (mainly system memory). The soft limit is always less than or equal to the hard limit.
The simplest way to change these limits is the ulimit command:
$ ulimit -n <file_num>
Here <file_num> is the maximum number of open files you want to allow for a single process. If the system replies with something like "Operation not permitted", the change failed, because the requested value exceeds the soft or hard limit the Linux system imposes on this user. You then need to raise the system's soft and hard open-file limits for the user.
Modify the /etc/security/limits.conf file and add the following lines:
user soft nofile 10240
user hard nofile 10240
Here user is the username whose open-file limit you are changing ('*' means all users); soft or hard selects whether to modify the soft or the hard limit; and 10240 is the new limit value, i.e. the maximum number of open files (note that the soft limit value must be less than or equal to the hard limit). The value just needs to stay at or below the Linux system-level hard limit, which is checked in the last step below. Save the file after editing.
Modify the /etc/pam.d/login file and add the following line:
session required /lib/security/pam_limits.so
This tells Linux that after a user logs in to the system, the pam_limits.so module should be invoked to apply the system's limits on the resources that user may consume, including the maximum number of files the user may open; the pam_limits.so module reads these limit values from the /etc/security/limits.conf file. Save this file when you are done.
To view the Linux system-level maximum number of open files, use the following command:
$ cat /proc/sys/fs/file-max
12158
This means the Linux system allows at most 12,158 open files in total (across all users). This is the Linux system-level hard limit, and no user-level open-file limit should exceed it. After saving the files, log out and log back in. The system's restriction on the number of open files is now lifted for programs that support highly concurrent TCP connection handling.
With all of the above in place, a stand-alone client test program can sustain 500,000 connections without problems.
High-concurrency sockets on Linux: 500,000 connections from a single machine.