Understanding and modifying the maximum number of open files for Linux applications


A Java program running on a Linux system throws a "Too many open files" exception after it has been running for a period of time.

This situation is common in scenarios such as highly concurrent access to the file system and heavily multithreaded network connections. In Linux, frequently accessed files and sockets are all files: for each currently open file, the system must record its name, offset, access permissions, and other related information. Such a record is called a file entry. The open files table stores these file entries and manages them linearly, like an array. A file descriptor acts as a pointer into the open files table; it is the subscript index into that table, and it associates each process with the files it accesses.

Each process has a file descriptor table that manages all the files the process has opened or created; each file descriptor is associated with a file entry in the open files table. Without going into further detail, the key question is how many file entries the open files table can accommodate. Linux configures a limit for the open files table; once it is exceeded, the system rejects further file operations and the program throws the too many open files exception. This limit exists at both the system level and the user level.
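
You can observe a process's file descriptor table directly under /proc. As a quick sketch ($pid below stands for any process ID of interest):

ls -l /proc/$pid/fd    # each entry is one open descriptor: regular files, sockets, pipes, ...

Watching this listing grow over time is often the first hint that descriptors are being leaked.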

System level:

System-level settings are valid for all users. The system-wide maximum number of open files can be viewed in two ways:

1. cat /proc/sys/fs/file-max

2. sysctl -a, then look for the fs.file-max entry in the output

If you need to increase the limit, modify the /etc/sysctl.conf file and set the fs.file-max property; if the property does not exist, add it.

After the configuration is complete, run sysctl -p to notify the system to apply it.
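
Put together, a minimal session might look like this (run as root; the value 6553600 is purely illustrative, not a recommendation):

cat /proc/sys/fs/file-max                         # view the current system-wide limit
echo "fs.file-max = 6553600" >> /etc/sysctl.conf  # persist a higher limit
sysctl -p                                         # apply the change without rebooting
cat /proc/sys/fs/file-max                         # verify the new value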

User level:

Linux also limits the number of open files per logged-in user. The currently effective setting can be viewed with ulimit -n, and the same command is used to modify the value.

When increasing the file descriptor limit, a common recommendation is to step up by powers of 2: if the current limit is 1024, increase it to 2048; if that is still not enough, set it to 4096, and so on.
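
For example, a session following that rule might look like this (the numbers are illustrative; raising the limit above the hard limit requires root):

ulimit -n        # view the current limit, e.g. 1024
ulimit -n 2048   # double it for the current shell session
ulimit -n        # verify: now prints 2048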

After the too many open files problem appears, the root cause must be found first. The most likely cause is that open files or sockets are not being closed properly. To determine whether the problem is caused by a Java process, use the Java process ID to inspect the process's current file descriptor usage:

lsof -p $java_pid           # the specific properties of each file descriptor

lsof -p $java_pid | wc -l   # the total number of FDs in the Java process's file descriptor table

Analyze the output of these commands to determine whether the problem is caused by resources not being released properly.
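
One way to narrow the search (a sketch, with $java_pid again standing for the process ID) is to group the descriptors by type; a large and steadily growing count for a single type, such as sock or REG, usually points at the leak:

lsof -p $java_pid | awk 'NR > 1 {print $5}' | sort | uniq -c | sort -rn   # skip the header, count by TYPE column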

If we are just an ordinary user and only need to change the limit temporarily, we can modify it directly with a shell command (ulimit -n 1024000). But this setting only lasts for the current session: when we exit bash, the value reverts to the original.
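
The session-local nature is easy to demonstrate (a sketch, assuming a soft limit of 1024 and a hard limit of at least 4096):

ulimit -n                              # e.g. prints 1024
bash -c 'ulimit -n 4096; ulimit -n'    # prints 4096, but only inside the child shell
ulimit -n                              # the parent shell still prints 1024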

If you want to modify the ulimit permanently, you need to modify /etc/security/limits.conf.

vim /etc/security/limits.conf

# Add the following lines

* soft nofile 2048

* hard nofile 2048
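
Note that limits.conf is applied at login (by the pam_limits module), so an already open shell keeps its old values. After logging out and back in, you can verify the new limits (assuming the two lines above were added):

ulimit -Sn   # soft limit, should print 2048
ulimit -Hn   # hard limit, should print 2048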

The fields are described below:

* represents all users

nproc represents the maximum number of processes

nofile represents the maximum number of open files

Lines are added in the following format:

[username | @groupname] type resource limit

[username | @groupname]: the user name to be restricted. A group name is prefixed with @ to distinguish it from a user name. You can also use the wildcard character * to restrict all users.

type: one of soft, hard, or -. soft refers to the setting currently in effect on the system; hard indicates the maximum value that can be set. The soft limit cannot be higher than the hard limit. Using - sets both the soft and hard values at once.

resource:

core - maximum size of core dump files (KB)

data - maximum data segment size (KB)

fsize - maximum file size (KB)

memlock - maximum locked-in-memory address space (KB)

nofile - maximum number of open files

rss - maximum resident set size (KB)

stack - maximum stack size (KB)

cpu - maximum CPU time (minutes)

nproc - maximum number of processes

as - address space limit

maxlogins - maximum number of logins allowed for this user

Example:

username soft nofile 2048

username hard nofile 2048

@groupname soft nofile 2048

@groupname hard nofile 2048
