Seeing "too many open files", you may immediately think of the fs.file-max parameter, but the limit is actually also affected by the following parameters:
fs.inotify.max_queued_events: the maximum number of events that can be queued in an inotify instance created by inotify_init(); events beyond this value are discarded, but an IN_Q_OVERFLOW event is triggered.
fs.inotify.max_user_instances: the maximum number of inotify instances that can be created per real user ID; the default is 128.
fs.inotify.max_user_watches: the maximum number of watches the same user can add at the same time (a watch is generally placed on a directory, so this determines how many directories the same user can monitor).
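Before changing anything, it helps to look at the values currently in effect. A minimal check (procfs paths as above; output varies by system):
cat /proc/sys/fs/inotify/max_queued_events
cat /proc/sys/fs/inotify/max_user_instances
cat /proc/sys/fs/inotify/max_user_watches
or equivalently:
sysctl -a | grep fs.inotify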
It is recommended that you modify the system default parameters as follows (vi /etc/sysctl.conf):
fs.inotify.max_user_instances=8192
Note: max_queued_events is the maximum length of the queue managed by inotify; the more frequently the file system changes, the larger this value should be. If you see "Event Queue Overflow" in the log, it means max_queued_events is too small; after adjusting the parameter, inotify can be used normally again.
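If you do hit the overflow, a sketch of raising the queue at runtime (32768 is only an example value, not a recommendation; persist the same line in /etc/sysctl.conf as above):
sysctl -w fs.inotify.max_queued_events=32768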
The correct way to modify max_user_watches so that the inotify configuration is not restored to the default value of 8192 after a restart
The modification method commonly found online is to write the file directly:
/proc/sys/fs/inotify/max_user_watches
or to run:
sysctl -w fs.inotify.max_user_watches="99999999"
However, after either of these modifications, the max_user_watches setting is restored to the default value of 8192 when the Linux system restarts. Many newcomers are not clear on why; we will not elaborate here, but when you have time, read up on: sysctl
The correct way to modify max_user_watches so it is not restored to the default value of 8192 when the Linux system reboots is:
vim /etc/sysctl.conf
and add the following line:
fs.inotify.max_user_watches=99999999 (the value you want to set)
Done. Very simple.
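One more step worth knowing: to apply an /etc/sysctl.conf entry without rebooting, reload the file and read the value back, for example:
sysctl -p
sysctl fs.inotify.max_user_watches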
Solution for "Too many open files" in Linux
Answer one:
[root@lxadmin nginx]# cat /proc/sys/fs/file-max
8192
This is the file system's maximum number of open files.
[root@lxadmin nginx]# ulimit -n
1024
The program is restricted to opening only 1024 files.
Use [root@lxadmin nginx]# ulimit -n 8192 to adjust this,
or, to adjust the number of open files permanently, add the following at the end of the startup file /etc/rc.d/rc.local (fs.file-max=xxx at the end of /etc/sysctl.conf did not take effect):
ulimit -n 8192
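To judge whether a process is actually approaching its limit, a rough check (assuming lsof is installed; $PID stands for the process ID you care about):
lsof -p $PID | wc -l
Compare the count with the ulimit -n value in effect for that process.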
Answer two:
The Linux kernel sometimes reports "Too many open files" because the file-max default value (8096) is too small. To resolve this problem, you can execute the following commands as root (or add them to an init script under /etc/rcS.d/*):
# echo '65536' > /proc/sys/fs/file-max    # for 2.2 and 2.4 kernels
# echo '131072' > /proc/sys/fs/inode-max    # only for 2.2 kernels
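As with the inotify settings above, writes to /proc are lost on reboot, which is why the commands belong in an init script; an equivalent one-off command using sysctl (same example value) would be:
# sysctl -w fs.file-max=65536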
Answer three:
The solution is to modify the operating system's open file limit, using the following method:
1. Determine the system-wide maximum number of open files, and verify that it is set correctly by checking the /proc/sys/fs/file-max file:
# cat /proc/sys/fs/file-max
If the value is too small, modify the corresponding variable in the file /etc/sysctl.conf to an appropriate value; that change takes effect after each reboot. If the value is large enough, skip the next step.
# echo 2048 > /proc/sys/fs/file-max
Edit the file /etc/sysctl.conf and insert the following line.
fs.file-max = 8192
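To load the new entry immediately and confirm it (sysctl -p re-reads /etc/sysctl.conf), for example:
# sysctl -p
# cat /proc/sys/fs/file-max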
2. Set the maximum number of open files in the /etc/security/limits.conf file; the file contains a commented hint line:
#
Below it, add the following line.
*    -    nofile    8192
This line sets the default number of open files per user to 8192. Note that the "nofile" item has two possible limit types: hard and soft. For the modified maximum number of open files to take effect, both limits must be set. If you use the "-" character, the hard and soft limits are set at the same time.
The hard limit is the maximum value to which the soft limit can be set; the soft limit is the value in effect for the current system. A hard limit can be lowered by an ordinary user, but it cannot be raised, and the soft limit cannot be set higher than the hard limit. Only the root user can raise the hard limit.
When increasing the file limit, a simple rule of thumb is to double the current value. For example, to increase from the default of 1024, raise it to 2048; to increase further, set it to 4096.
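For reference, a hedged equivalent of the single "-" line above, with the soft and hard limits written out explicitly:
*    soft    nofile    8192
*    hard    nofile    8192
ulimit -Sn reports the soft limit and ulimit -Hn the hard limit in effect for the current shell.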
The other case is when creating an index, where there are also two possibilities. One is that the merge factor is too small, causing the number of files created to exceed the operating system limit; in that case you can either raise the merge factor or raise the operating system's open-file limit. The other is that the merge factor is constrained by virtual machine memory and cannot be raised any further, while the number of documents that needs to be indexed is very large; that can only be solved by raising the operating system's open-file limit.
On this basis, I also modified the following configuration file:
vi /etc/sysctl.conf
adding:
# Decrease the default value of tcp_fin_timeout
net.ipv4.tcp_fin_timeout = 30
# Decrease the default value of tcp_keepalive_time
net.ipv4.tcp_keepalive_time = 1800
# Turn off tcp_window_scaling
net.ipv4.tcp_window_scaling = 0
# Turn off tcp_sack
net.ipv4.tcp_sack = 0
# Turn off tcp_timestamps
net.ipv4.tcp_timestamps = 0
Then run service network restart. These settings optimize TCP sockets.
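To confirm the values took effect after the restart, read them back; either form works for any of the keys above:
sysctl net.ipv4.tcp_fin_timeout
cat /proc/sys/net/ipv4/tcp_keepalive_time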
In addition, the following need to be added to /etc/rc.d/rc.local so that they take effect again after a reboot:
echo "30" > /proc/sys/net/ipv4/tcp_fin_timeout
echo "1800" > /proc/sys/net/ipv4/tcp_keepalive_time
echo "0" > /proc/sys/net/ipv4/tcp_window_scaling
echo "0" > /proc/sys/net/ipv4/tcp_sack
echo "0" > /proc/sys/net/ipv4/tcp_timestamps
Because not all programs run as root, Linux distinguishes between hard and soft open-file limits. Ordinary users are bound by the hard limit: no matter how high they set the value with ulimit -n, it cannot exceed the nofile value in /etc/security/limits.conf.
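As an illustration of that ceiling, here is roughly what an ordinary user sees when trying to exceed the hard limit (the exact message varies by shell):
$ ulimit -n 1000000
bash: ulimit: open files: cannot modify limit: Operation not permitted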
After this optimization, lsof -p $java_pid | wc -l can reach more than 4,000 without throwing "too many open files".