Linux server error "Too many open files": solutions

Source: Internet
Author: User
Tags: posix, cpu usage

1. The permanent solution: configure the following files, as the Oracle installation script does:

cp /etc/security/limits.conf /etc/security/limits.conf.bak

echo "oracle soft nproc 2047" >> /etc/security/limits.conf
echo "oracle hard nproc 16384" >> /etc/security/limits.conf
echo "oracle soft nofile 1024" >> /etc/security/limits.conf
echo "oracle hard nofile 65536" >> /etc/security/limits.conf

cp /etc/pam.d/login /etc/pam.d/login.bak

echo "session required /lib/security/pam_limits.so" >> /etc/pam.d/login
echo "session required pam_limits.so" >> /etc/pam.d/login

cp /etc/profile /etc/profile.bak

echo 'if [ $USER = "oracle" ]; then' >> /etc/profile
echo '  if [ $SHELL = "/bin/ksh" ]; then' >> /etc/profile
echo '    ulimit -p 16384' >> /etc/profile
echo '    ulimit -n 65536' >> /etc/profile
echo '  else' >> /etc/profile
echo '    ulimit -u 16384 -n 65536' >> /etc/profile
echo '  fi' >> /etc/profile
echo 'fi' >> /etc/profile

cp /etc/sysctl.conf /etc/sysctl.conf.bak

echo "FS.AIO-MAX-NR = 1048576" >>/etc/sysctl.conf

echo "Fs.file-max = 6815744" >>/etc/sysctl.conf

echo "Kernel.shmall = 2097152" >>/etc/sysctl.conf

echo "Kernel.shmmax = 4294967295" >>/etc/sysctl.conf

echo "Kernel.shmmni = 4096" >>/etc/sysctl.conf

echo "Kernel.sem = 32000" >>/etc/sysctl.conf

echo "Net.ipv4.ip_local_port_range = 9000 65500" >>/etc/sysctl.conf

echo "Net.core.rmem_default = 262144" >>/etc/sysctl.conf

echo "Net.core.rmem_max = 4194304" >>/etc/sysctl.conf

echo "Net.core.wmem_default = 262144" >>/etc/sysctl.conf

echo "Net.core.wmem_max = 1048586" >>/etc/sysctl.conf

echo "Net.ipv4.tcp_wmem = 262144 262144 262144" >>/etc/sysctl.conf

echo "Net.ipv4.tcp_rmem = 4194304 4194304 4194304" >>/etc/sysctl.conf

sysctl -p
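
As a quick check (this is only a sketch, not part of the original Oracle script): after reloading the kernel parameters and logging in again as the oracle user, the new limits should be visible:

ulimit -n              # expected 65536, set by the /etc/profile snippet above
ulimit -u              # expected 16384, set by the /etc/profile snippet above
sysctl fs.file-max     # expected fs.file-max = 6815744, from /etc/sysctl.conf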

2. Interim solution:

# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 46418
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 46418
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

# ulimit -n 4096

# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 46418
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 4096
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 46418
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
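
Note that a ulimit change made this way applies only to the current shell and the processes it starts; it is lost when the session ends, which is why it is only an interim fix. A minimal sketch to confirm that scope (assumed to be run as root in the same session):

ulimit -n              # 4096 in the shell where the limit was raised
bash -c 'ulimit -n'    # child processes inherit the new limit: 4096
# a brand-new login session still shows the old default (1024) until the
# permanent configuration from section 1 is in place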

3. Reposted from web documents

Part I: (mostly commands for viewing the open-file limits; this does not solve the underlying problem)

Under Linux, the ulimit -n command shows the maximum number of file handles a single process may open (sockets included). The system default is 1024.

For ordinary applications (Apache, system processes, and so on) 1024 is more than enough. But for a single process that handles a large number of requests, such as squid, MySQL, or Java, it can be too little. If the number of file handles opened by a single process exceeds the system-defined limit, the error message "Too many open files" appears. How do you find out how many file handles the current processes have open? The following small script can help:

lsof -n | awk '{print $2}' | sort | uniq -c | sort -nr | more

During peak access periods, run the above script as root; the output may look like this:

# lsof -n | awk '{print $2}' | sort | uniq -c | sort -nr | more
    131 24204
     57 24244
     57 24231
     56 24264

The first column is the number of open file handles and the second column is the process ID. Once we have the process ID, we can get the details of the process with the ps command.

# ps -aef | grep 24204
mysql 24204 24162 99 16:15 ?        00:24:25 /usr/sbin/mysqld

So it is the MySQL process that has opened the most file handles, but it currently has only 131 open, far below the system default of 1024.
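
As a cross-check, the number of descriptors a single process has open can also be read straight from /proc (PID 24204 is just the example PID from the listing above):

ls /proc/24204/fd | wc -l    # counts the file descriptors currently open by PID 24204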

However, on a very busy system, especially a squid server, the count can easily exceed 1024, and the system parameters then have to be adjusted to match the application. Linux has hard limits and soft limits; both can be set with ulimit. To do so, run the following command as root:

ulimit -HSn 4096

In the command above, H sets the hard limit, S sets the soft limit, and n indicates that the value being set is the maximum number of open file handles for a single process. Personally I think it is best not to exceed 4096: the more file handles are open, the slower the response time will be. A limit set this way reverts to the default after the system restarts. To make it permanent, add the command to ~/.bash_profile, or append it to the end of /etc/profile. (The method proposed by Findsun is more reasonable.)
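
For example, a minimal sketch of making the limit persistent for every login shell via /etc/profile, as mentioned above (4096 is just the example value; adjust to your needs):

cp /etc/profile /etc/profile.bak            # keep a backup first
echo "ulimit -HSn 4096" >> /etc/profile     # applied at every login
source /etc/profile                         # take effect in the current shell too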

=================================================================================

"Too many open files" often appears on Linux because a program does not close some of its resources properly; check whether I/O streams, socket connections, and the like are shut down correctly.
If the program checks out, it may be that the default open files value on Linux is too small for what the program needs, e.g. the size of a database connection pool or the number of Tomcat request connections.
To view the current system default for open files, run:


# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 128161
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 800000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 128161
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

==========================================================================

Part II: (the real solution)

Function description: ulimit controls the resources available to programs run by the shell.

Syntax: ulimit [-aHS][-c <core file size>][-d <data segment size>][-f <file size>][-m <memory size>][-n <number of files>][-p <buffer size>][-s <stack size>][-t <CPU time>][-u <number of processes>][-v <virtual memory size>]

Additional note: ulimit is a shell built-in command used to control the resources of programs the shell runs (a short usage sketch follows the parameter list below).

Parameters:
-a  display all current resource limits.
-c <core file size>  maximum size of core files, in blocks.
-d <data segment size>  maximum size of a program's data segment, in KB.
-f <file size>  largest file the shell may create, in blocks.
-H  set the hard limit of a resource, i.e. the ceiling set by the administrator.
-m <memory size>  maximum amount of memory that may be used, in KB.
-n <number of files>  maximum number of files that may be open at the same time.
-p <buffer size>  size of the pipe buffer, in 512-byte units.
-s <stack size>  maximum stack size, in KB.
-S  set the soft limit of a resource.
-t <CPU time>  maximum CPU time, in seconds.
-u <number of processes>  maximum number of processes the user may run.
-v <virtual memory size>  maximum amount of virtual memory that may be used, in KB.

ulimit -a displays all of the current user's process limits.
Linux limits the maximum number of processes each user may run. To improve performance, you can raise this per-user limit according to the hardware resources available; for example, to set a user's maximum number of processes to 10,000:
ulimit -u 10000
For Java applications that open many socket connections and keep them open, it is best to raise the number of files each process may open with ulimit -n xxx; the default is 1024.
ulimit -n 4096 raises the per-process open-file limit to 4096 (default 1024).
Other important settings that are recommended to be unrestricted (unlimited) are:
Data segment size: ulimit -d unlimited
Maximum memory size: ulimit -m unlimited
Stack size: ulimit -s unlimited
CPU time: ulimit -t unlimited
Virtual memory: ulimit -v unlimited
Our company's servers needed the ulimit stack size raised to unlimited. Running ulimit -s unlimited only takes effect in the current shell; a newly opened shell does not see it. So add ulimit -s unlimited to the end of /etc/profile and run source /etc/profile to make the change take effect.
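
A minimal sketch of that /etc/profile change (standard paths assumed; keep a backup as in section 1):

echo "ulimit -s unlimited" >> /etc/profile
source /etc/profile    # apply to the current shell
ulimit -s              # should now report: unlimited
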
PS: If you run into error messages like
ulimit: max user processes: cannot modify limit: Operation not permitted
ulimit: open files: cannot modify limit: Operation not permitted
why does the command work for root but fail for ordinary users?
Look at /etc/security/limits.conf and you will probably understand.
Linux applies default ulimit limits to users; this file configures each user's hard and soft limits, and the hard limit is the ceiling.
Raising a limit beyond that ceiling produces the "Operation not permitted" error.
Adding the following to limits.conf
* soft nproc 10240
* hard nproc 10240
* soft nofile 10240
* hard nofile 10240
limits the maximum number of processes (threads) and the maximum number of open files to 10240 for every user.
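
To confirm the limits.conf entries took effect, open a fresh login session for the user; on most distributions su - goes through PAM, which applies limits.conf at login ("someuser" below is just a placeholder):

su - someuser -c 'ulimit -n -u'    # both values should now report 10240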
