Too many open files

Source: Internet
Author: User

The standard solution is to raise the file descriptor limit.

I.

[root@lxadmin nginx]# cat /proc/sys/fs/file-max
8192
This is the maximum number of file handles the system supports.
[root@lxadmin nginx]# ulimit -n
1024
A program can only open 1024 files.
Use
[root@lxadmin nginx]# ulimit -n 8192
to adjust the limit for the current session. To adjust the number of open files permanently, add the following at the end of the startup file /etc/rc.d/rc.local (adding fs.file-max = xxx at the end of /etc/sysctl.conf does not change this per-process limit):

ulimit -n 8192
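For reference, appending the line and confirming it is there might look like this (a minimal sketch, assuming the Red Hat-style rc.local path above):

[root@lxadmin nginx]# echo "ulimit -n 8192" >> /etc/rc.d/rc.local
[root@lxadmin nginx]# tail -1 /etc/rc.d/rc.local
ulimit -n 8192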

II.

The Linux kernel sometimes reports "too many open files" because the default value of file-max (8096) is too small. To solve this problem, run the following commands as the root user (or add them to an init script under /etc/rc.d/):

# echo "65536" > /proc/sys/fs/file-max    # applies to 2.2 and 2.4 kernels
# echo "131072" > /proc/sys/fs/inode-max  # applies only to 2.2 kernels

III.

The approach is to raise the operating system's limit on the number of open files. The method is as follows:

1. Check the /proc/sys/fs/file-max file to confirm that the system-wide maximum number of open files is set high enough.

# cat /proc/sys/fs/file-max
If the value is too small, set the fs.file-max variable in /etc/sysctl.conf to an appropriate value so the change takes effect after every restart. If the value is already large enough, skip the next step.
# echo 2048 > /proc/sys/fs/file-max
Edit the file /etc/sysctl.conf and insert the following line:
fs.file-max = 8192
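To apply the value from /etc/sysctl.conf without waiting for a reboot, sysctl can set or reload it directly (a usage sketch):

# sysctl -w fs.file-max=8192   # set the value immediately
# sysctl -p                    # or reload everything from /etc/sysctl.conf
# cat /proc/sys/fs/file-max    # verify
8192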

2. Set the maximum number of open files in the /etc/security/limits.conf file. Below the commented prompt lines (those beginning with #), add the following line:

* - nofile 8192

This line sets the default number of open files for every user to 8192. Note that "nofile" has two possible limit types under the item column: hard and soft.
For the new maximum to take effect, both limits must be set. Using the "-" character sets the hard and soft limits at the same time.
The hard limit is the ceiling up to which the soft limit may be raised; the soft limit is the value that actually applies to the current session. An ordinary user can lower the hard limit but cannot raise it, and the soft limit cannot be set higher than the hard limit. Only the root user can raise the hard limit.
When raising the file limit, a simple rule is to double the current value. For example, if the default is 1024, raise it to 2048; if you need to go further, set it to 4096, and so on.
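Equivalently, the two limit types can be written on separate lines and then checked per shell with ulimit (a sketch of the limits.conf syntax; the first field also accepts a user name or @group):

*    soft    nofile    8192
*    hard    nofile    8192

After logging in again:

$ ulimit -Sn   # soft limit in effect
8192
$ ulimit -Hn   # hard ceiling
8192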

Another case arises when creating an index, where there are two possibilities. One is that the merge factor is too small, so the number of files created exceeds the operating system limit; here you can either adjust the merge factor or raise the operating system's open-file limit. The other is that the merge factor is capped by the virtual machine's memory and cannot be raised any further, while the number of documents to index is very large; this can only be solved by raising the operating system's limit on open files.
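To see how close the indexing process is to its limit, the current descriptor count can be compared with the ceiling (a diagnostic sketch; $pid stands for the indexing process's ID):

$ lsof -p $pid | wc -l   # descriptors the process currently holds open
$ ulimit -n              # the per-process ceiling (run as the same user)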

On this basis, I also modified the following configuration file:
vi /etc/sysctl.conf
Add:
# Decrease the default timeout for tcp_fin_timeout connections
net.ipv4.tcp_fin_timeout = 30
# Decrease the default value of tcp_keepalive_time
net.ipv4.tcp_keepalive_time = 1800
# Turn off tcp_window_scaling
net.ipv4.tcp_window_scaling = 0
# Turn off tcp_sack
net.ipv4.tcp_sack = 0
# Turn off tcp_timestamps
net.ipv4.tcp_timestamps = 0
Then run service network restart so the TCP socket optimizations take effect.

In addition, add the following to /etc/rc.d/rc.local so the settings take effect again after a restart:
echo "30" > /proc/sys/net/ipv4/tcp_fin_timeout
echo "1800" > /proc/sys/net/ipv4/tcp_keepalive_time
echo "0" > /proc/sys/net/ipv4/tcp_window_scaling
echo "0" > /proc/sys/net/ipv4/tcp_sack
echo "0" > /proc/sys/net/ipv4/tcp_timestamps
Because not all programs run as root, and because of the hard/soft distinction for open files, an ordinary user cannot raise ulimit -n above the hard nofile value set in /etc/security/limits.conf, no matter what value is requested.
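For illustration, a non-root attempt to go past the hard limit fails (a sketch; the exact message varies by shell):

$ ulimit -Hn
8192
$ ulimit -n 16384
bash: ulimit: open files: cannot modify limit: Operation not permitted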

After this optimization, lsof -p $java_pid | wc -l can climb much higher than before without the program throwing "too many open files".

 

Also check whether the program has streams or connections that are not being closed. A large number of connections stuck in CLOSE_WAIT will likewise exhaust file descriptors; I ran into this with Tomcat + c3p0 + MySQL. For the solution, see my article on this problem: http://blog.csdn.net/yuanyuan110_l/archive/2010/02/01/5276758.aspx
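A quick way to count such connections (a sketch using netstat; on newer systems ss -t state close-wait gives the same view):

$ netstat -ant | grep CLOSE_WAIT | wc -l   # sockets stuck in CLOSE_WAIT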
