The discussion here applies to Red Hat Linux. Linux imposes a limit on the number of open file handles, and the default is 1024. On a production server this limit is easily reached once applications are deployed, at which point they fail with the error "Too many open files", so under high load the value should be raised; it can be set as high as 65535. Some people say the default of 1024 is a system-wide limit, others say it is a per-user limit. It is in fact a per-user limit, or, strictly speaking, a limit on the processes the current user runs.
Use ulimit -a to view all the limits of the current shell, and ulimit -n to view the current maximum number of open files.
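As a quick check, the two commands look like this (a minimal sketch; exact output varies by distribution):

```shell
# Print all resource limits for the current shell (soft limits by default)
ulimit -a

# Print only the soft limit on open file descriptors
ulimit -n

# Print the hard limit on open file descriptors (the ceiling for ulimit -n)
ulimit -Hn
```

The soft limit is what actually triggers "Too many open files"; the hard limit is the ceiling an unprivileged user can raise the soft limit to.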
There are 3 ways to change the default maximum number of open files (1024):
1. Enter ulimit -SHn 65535 directly in the terminal. The change takes effect immediately, but it is lost after the machine is rebooted and the limit reverts to 1024;
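For reference, -S sets the soft limit, -H the hard limit, and -SHn sets both for open files. Raising the hard limit requires root, but any user may lower the soft limit, so a safe way to experiment is in a subshell, leaving the parent shell untouched (a minimal sketch):

```shell
# Lower the soft limit in a subshell only; the parent shell keeps its limits.
(
  ulimit -Sn 512   # applies to this subshell and its children
  ulimit -Sn       # prints 512
)
ulimit -Sn         # parent shell: still the original value
```

This same scoping is why method 1 does not survive a reboot: ulimit only affects the shell it runs in and that shell's descendants.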
2. Edit /etc/profile and add ulimit -SHn 65535 at the end of the file, then reboot the system; the change persists;
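Method 2 amounts to appending a single line as root. The sketch below works against a copy of the file so it is safe to run as-is; against the real system you would target /etc/profile directly:

```shell
# Append the limit line to a profile file. Shown against a throwaway copy;
# for the real change, use /etc/profile and run as root.
profile=/tmp/profile.demo            # stand-in for /etc/profile
cp /etc/profile "$profile" 2>/dev/null || : > "$profile"
echo 'ulimit -SHn 65535' >> "$profile"
tail -n 1 "$profile"                 # prints: ulimit -SHn 65535
```

Because /etc/profile is read by login shells, the new limit applies to sessions started after the change (a reboot forces this for everything).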
3. Edit /etc/security/limits.conf and add the following two lines at the end of the file:
* soft nofile 65535
* hard nofile 65535
Save and restart the system; the change takes effect.
Alternatively, add the following four lines at the end of /etc/security/limits.conf:
* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535
Save the file and restart the machine. Here * applies the rule to all users, nproc is the maximum number of processes, and nofile is the maximum number of open files.
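After the reboot (limits.conf is applied at login via pam_limits), the new values can be verified from any fresh shell; a minimal check, assuming a bash-style shell where -u reports the nproc limit:

```shell
# Verify the limits configured in /etc/security/limits.conf
ulimit -Sn   # soft nofile: should print 65535 after the change
ulimit -Hn   # hard nofile: should print 65535 after the change
ulimit -u    # nproc (max user processes)
```

If the numbers still read 1024 after a reboot, a common cause is that the session did not go through PAM (for example, some service managers set their own limits).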
Of the three methods above, Method 3 is the best. It is also sometimes claimed that you can modify /etc/rc.local by adding the line ulimit -SHn 65535 at the end; testing shows this does not work, at least not on Red Hat.
This article is from the "Personal Feelings" blog; reprinting is not permitted.