Basic concepts
Taken literally, the "number of open files" is simply the number of files that have been opened.
I used to wonder what an "open file" actually is. Only after learning a little C did I understand that a program must open a file before it can access it. In C, a program opens a file with a function such as fopen(). For example, to write a log to the file /root/test.log, a program might call fopen("/root/test.log", "w"); the second argument restricts the program to writing to the file, and the file's contents are truncated first (if the file does not exist, it is created). Other modes work similarly, such as "r" (read-only) and "a+" (readable and writable). After the program has opened the file, it can read from it or write to it. When the program no longer needs the file, it closes it with fclose().
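To make the idea concrete, here is a minimal sketch of such a program (the path and the log text are only placeholders, not anything from the test environment described later):

#include <stdio.h>

int main(void)
{
    /* "w" opens the file for writing, truncating it, and creates it if it does not exist */
    FILE *fp = fopen("/root/test.log", "w");
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }
    fprintf(fp, "hello log\n");   /* write to the open file */
    fclose(fp);                   /* close the file when it is no longer needed */
    return 0;
}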
Therefore, each file a program opens counts as one open file: open several files and you have several open files. The operating system limits how many files a program may open. Why the limit? Because open files consume resources: the operating system has to track which files are opened by which programs, and the contents of some files may need to be read into memory. For this reason the operating system imposes a "max open files" limit on each program.
Given that opening files consumes system resources and that the operating system limits the maximum number of open files, why should we care? Generally speaking, for a web server, database server, and so on, if there is little traffic we certainly do not need to pay attention to the number of open files. Once traffic grows even a little, however, it becomes necessary, especially on older systems such as RHEL/CentOS 6.
View and modify the maximum number of open files
In the RHEL/CentOS family of operating systems, the maximum number of open files is governed by two values: a soft limit and a hard limit.
Generally speaking, the soft limit is the value that actually caps the number of open files. On RHEL/CentOS 6 it defaults to 1024, and a program cannot open more files than this. Use the ulimit -n command to view the current soft limit:
$ ulimit -n
1024
An ordinary user can also raise this value. Use ulimit -n <number> to temporarily raise or lower the soft limit; an ordinary user, however, can raise it only as far as the hard limit.
The hard limit is the ceiling up to which a user may adjust the soft limit. It defaults to 4096 on RHEL/CentOS 6, which means an ordinary user can raise the soft limit to at most 4096. The hard limit itself can be modified only by the root user. Use the ulimit -n -H command to view the current hard limit:
$ ulimit -n -H
4096
Use the ulimit -n -H <number> command to adjust the hard limit.
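For example, as an ordinary user on a default RHEL/CentOS 6 system (the exact wording of the error message may vary between shells; this is roughly what bash prints):

$ ulimit -n 4096      # allowed: does not exceed the hard limit of 4096
$ ulimit -n 5000      # exceeds the hard limit, so it fails for an ordinary user
bash: ulimit: open files: cannot modify limit: Operation not permitted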
Of course, it is best to set these limits in the /etc/security/limits.conf file so that they are permanent. There are plenty of articles online about how to configure this file, so I will not go into it here. Note, however, that even settings in this file do not take effect in every situation, as I will explain later. They only guarantee that after you log in again, or after the system reboots and you log back in, the settings you see are in effect.
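For reference, a typical pair of entries for the test user created below might look like this (the user name and the values are purely illustrative):

# /etc/security/limits.conf
tuser    soft    nofile    4096
tuser    hard    nofile    8192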
Testing
First create a user for the test and confirm that its current open files limit is 1024:
# useradd tuser
# su - tuser
$ ulimit -n
1024
Then exit back to root and check that this user's current number of open files is 0:
$ exit
logout
# lsof -u tuser | wc -l
0
Switch back to the user and write a simple C program to test the number of open files:
$ vim test_openfiles.c

#include <stdio.h>
#include <unistd.h>

#define OPEN_FILES 1025
#define LENGTH 20
#define SECOND 600

int main(void)
{
    int count;
    char array[OPEN_FILES][LENGTH];
    FILE *fp;

    for (count = 0; count < OPEN_FILES; count++) {
        sprintf(array[count], "tempdir/%d", count);
        fp = fopen(array[count], "w");
        if (fp != NULL) {
            printf("program has opened %d files.\n", count + 1);
        } else {
            printf("program failed when opening the %dth file.\n", count + 1);
        }
    }
    sleep(SECOND);    /* keep the files open so they can be observed with lsof */
    return 0;
}
The program tries to open the number of files given by OPEN_FILES, prints whether each open succeeded or failed, and then waits SECOND seconds before terminating.
Compile:
$ gcc test_openfiles.c
Then execute:
$ mkdir tempdir    # first create the directory tempdir to hold the opened files
$ ./a.out          # run the program
After making simple changes to the program above and testing repeatedly, the following conclusions can be drawn:
1. In the operating system, the maximum number of open files has both a soft limit and a hard limit, with soft <= hard. Testing shows that the soft limit is what actually caps the number of open files; the hard limit only caps how high the soft limit can be raised. The actual number of open files never exceeds the soft limit.
2. Although /etc/security/limits.conf lets us set a per-user maximum open files limit, and different users can be given different limits, testing shows that the limit restricts each individual process of the user, not the total across all of the user's processes. Taking the ordinary user's default limit of 1024 as an example: no single process belonging to that user may open more than 1024 files, but there is no limit on the sum of open files across all of the user's processes, which might well be 10000 or more.
3. Even when the same program opens the same file several times, each open adds to the number of open files; it is not counted as just one open file (a small sketch after this list demonstrates this).
4. The number of files a process has open can be viewed with the lsof command, for example (from another terminal) for process 8670:
# lsof -p 8670
Note that counting a process's open files directly with lsof -p <PID> | wc -l is not accurate; it only gives a rough value. In the lsof output itself, the fourth column (FD) effectively tells us which number each open file is. Since the numbering starts from 0, a descriptor number reaching 1023 shows that process 8670 currently has 1024 files open, which matches the current soft limit of 1024.
The meaning of the FD (file descriptor) column, the fourth column of the lsof output, is as follows:
The value in this column may be a file descriptor number or one of several special names (such as cwd, txt or mem). If it is a file descriptor number, it is followed by a mode character and possibly a lock character.
The mode character indicates the mode in which the file is open; according to the lsof man page it is one of the following five:
r    read access
w    write access
u    read and write access
(space)    unknown mode, no lock character follows
-    unknown mode, a lock character follows
The lock character indicates the type of lock applied to the file; the common values include:
r    read lock on part of the file
R    read lock on the entire file
w    write lock on part of the file
W    write lock on the entire file
u    read and write lock of any length
(space)    no lock
When a process is running, the most accurate way to see the ulimit limits actually applied to it (including the maximum number of open files) is to look at its /proc/<PID>/limits file. For example, to see the limits applied to process 8670, use the command:
# cat /proc/8670/limits
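The output contains one line per limit; the line of interest here looks roughly like the following (the values shown are only illustrative, the real ones are whatever applies to that process):

Limit                     Soft Limit           Hard Limit           Units
Max open files            1024                 4096                 files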
Of course, to see how many files the process has actually opened, use the lsof command as described earlier.
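As for conclusion 3 above, the following small sketch illustrates it (it reuses the tempdir directory created earlier; the file name is arbitrary). It opens the same file twice and prints two different descriptor numbers, each of which counts as a separate open file:

#include <stdio.h>

int main(void)
{
    FILE *a = fopen("tempdir/0", "w");   /* open a file ...           */
    FILE *b = fopen("tempdir/0", "w");   /* ... and open it once more */
    if (a == NULL || b == NULL) {
        perror("fopen");
        return 1;
    }
    /* two distinct descriptors, so lsof would show two open files */
    printf("first open:  fd %d\n", fileno(a));
    printf("second open: fd %d\n", fileno(b));
    fclose(a);
    fclose(b);
    return 0;
}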
Other questions
1. Do network connections count toward the number of open files?
Yes. A network connection in the listening or established state consumes one open file. So once a web application sees a bit more traffic, a single-process program can easily exceed 1024 open files purely because of the number of network connections, even if the application itself does not open many regular files. For an old system such as CentOS/RHEL 6, where the default is small, the limit therefore does need to be raised.
According to the man page, an open file may be a regular file, a directory, a block special file, a character special file, an executing text reference, a library, a stream, or a network file (an Internet socket, NFS file, or UNIX domain socket). So network connections are counted as well. My guess is that this is because, from a program's point of view, accessing these objects looks much like accessing a file: they all have to be opened first.
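A small sketch of that idea (nothing here is specific to the earlier test program, and the file path is arbitrary): the descriptor returned by socket() lives in the same per-process file descriptor table as regular files, so it counts toward the open files limit in exactly the same way:

#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    FILE *fp = fopen("/tmp/openfiles_demo.txt", "w");   /* a regular file */
    int sock = socket(AF_INET, SOCK_STREAM, 0);         /* a TCP socket   */
    if (fp == NULL || sock < 0) {
        perror("open");
        return 1;
    }
    /* both occupy entries in the same file descriptor table */
    printf("regular file fd: %d\n", fileno(fp));
    printf("socket fd:       %d\n", sock);
    close(sock);
    fclose(fp);
    return 0;
}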
2. After modifying the ulimit limits (including the number of open files) in /etc/security/limits.conf, do running programs need to be restarted?
Yes. The ulimit limits are bound to the shell you start a program from: if the program does not modify its own limits, it inherits the limits of that shell environment. So after you change the limits in limits.conf, you normally exit the current shell, log back in so that the new limits take effect, and then restart your program.
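For example, reusing the a.out test program compiled earlier (the grep pattern assumes the English field names used in /proc on CentOS 6, and the output line shown is only illustrative):

$ ulimit -n 2048                          # raise the soft limit in the current shell
$ ./a.out &                               # a program started from this shell inherits it
$ grep "Max open files" /proc/$!/limits
Max open files            2048                 4096                 files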
As mentioned earlier, to see the ulimit limits actually in effect for a running process, use the cat /proc/<PID>/limits command. If the program modifies its own ulimit limits, you will see that its actual limits differ from those of your current shell environment.
3. Does modifying the ulimit limits (including the number of open files) in /etc/security/limits.conf guarantee that they apply to all programs?
No. In fact, the values in limits.conf do not take effect for programs started by an init script. For example, the Nginx program has a startup script, /etc/init.d/nginx, and is configured to start on boot. So even if you modify limits.conf, when the server reboots and Nginx is started automatically, its ulimit limits will be the default values rather than the ones you set. Of course, if you then log into the system and restart Nginx through its init script, the limits of the Nginx processes will naturally become the values you set in limits.conf.
I have not found any authoritative explanation of the cause, but my guess is as follows. Taking CentOS 6 as an example, every process on the system is ultimately started by the first program, /sbin/init, when the system boots. The values in limits.conf do not apply to the /sbin/init program, so the ulimit limits of the /sbin/init process remain the defaults. As a result, all of the child processes it starts, which is to say every other program brought up at boot, inherit its ulimit limits, namely the default values.
I can think of two ways to work around this problem.
The first is to add a ulimit command to the program's startup script:
# vim /etc/init.d/mysql

#!/bin/sh
ulimit -n 65535
The second is that many programs support setting their own maximum number of open files in their configuration file, so you do not have to care what the ulimit limits of the shell environment are. For example, Nginx provides the worker_rlimit_nofile directive to set the maximum number of open files for its worker processes, and MySQL has similar support (its open_files_limit option).
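For example, in nginx.conf (the numbers are only illustrative):

# nginx.conf, main context
worker_rlimit_nofile 65535;

events {
    worker_connections 10240;
}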