Maximum number of processes, maximum number of threads, per-process open-file limits, and the ulimit command for modifying resource limits under Linux

Source: Internet
Author: User

The ulimit command: viewing and changing system resource limits



ulimit sets resource limits on the shell and on the processes the shell starts.



Syntax format



ulimit [-HSacdflmnpstuv] [limit]



Persistent limits are defined in the /etc/security/limits.conf file.

Argument  Description / Example
-H  Set a hard resource limit; once set it cannot be raised. ulimit -Hs 64 limits the hard thread-stack size to 64 KB.
-S  Set a soft resource limit; it can be raised later, but never above the hard limit. ulimit -Sn 32 limits the soft open-file limit to 32 descriptors.
-a  Display all current limits. ulimit -a
-c  Maximum size of core files, in blocks. ulimit -c unlimited places no limit on core-file size.
-d  Maximum size of a process's data segment, in KB. ulimit -d unlimited places no limit on the data segment.
-f  Maximum size of files the process may create, in blocks. ulimit -f 2048 limits created files to 2048 blocks.
-l  Maximum amount of locked memory, in KB. ulimit -l 32 limits locked memory to 32 KB.
-m  Maximum resident memory size, in KB. ulimit -m unlimited places no limit on resident memory.
-n  Maximum number of open file descriptors. ulimit -n 128 limits the process to 128 file descriptors.
-p  Pipe buffer size, in 512-byte blocks. ulimit -p 512
-s  Thread stack size, in KB. ulimit -s 512 limits the thread stack to 512 KB.
-t  Maximum CPU time, in seconds. ulimit -t unlimited places no limit on CPU time.
-u  Maximum number of processes available to the user. ulimit -u 64 limits the user to 64 processes.
-v  Maximum virtual memory available to the process, in KB. ulimit -v 200000 limits virtual memory to 200000 KB.


We can use ulimit -a to view all of the current limits on our system.
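For example — the field names below are typical bash output, and the sample values are only illustrative and will differ from system to system:

```shell
# Show every limit for the current shell (soft limits by default).
ulimit -a
# Typical fields include, e.g.:
#   core file size          (blocks, -c) 0
#   open files                      (-n) 1024
#   stack size              (kbytes, -s) 8192
#   max user processes              (-u) 31418

# Query a single limit, e.g. the open-file soft limit:
ulimit -n
```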






Of course, as with most Linux command-line settings, ulimit changes are made on a temporary basis: the ulimit command only takes effect in the current terminal session.



If we need the change to be permanent, there are two ways:



One is to write the ulimit commands into the shell profile or ~/.bashrc, which is equivalent to applying the limits automatically at every login.
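For instance, the following one-liner appends such a call to the per-user startup file (the 4096 value is only illustrative):

```shell
# Apply a larger open-file soft limit automatically at every login
echo 'ulimit -Sn 4096' >> ~/.bashrc
```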



The other is to add entries to /etc/security/limits.conf (takes effect at the next login; it relies on the pam_limits module being referenced by session entries under /etc/pam.d/).


The limits.conf file


The format of restriction entries in /etc/security/limits.conf is as follows:



domain  type  item  value


Parameter descriptions:
domain  a user name, a group name prefixed with @, or * for all users
type    either hard or soft
item    the resource to restrict, such as cpu, core, nproc, or maxlogins
value   the corresponding limit value
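A sketch of what such entries look like; the user/group names and numbers below are purely illustrative, not values from this article:

```shell
# Illustrative entries for /etc/security/limits.conf
# (format: domain  type  item  value)
#
#   *        soft  nproc      4096      # all users: soft process limit
#   @admin   hard  nofile     65536     # members of group admin: hard FD limit
#   chanon   hard  maxlogins  4         # one user: at most 4 concurrent logins
```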



Maximum number of processes

Theoretical maximum: in Linux, each process occupies two entries in the global descriptor table (GDT).

Each process's local descriptor table (LDT) is itself a segment, and the GDT holds an entry pointing to that segment's starting address, recording its length and some other parameters. In addition, each process has a TSS (task state segment) with its own GDT entry. Therefore each process occupies two entries in the GDT. How large is the GDT?



The index field of a segment selector, used to locate an entry in the GDT, is 13 bits wide, so the GDT can hold 2^13 = 8192 descriptors.



After subtracting some system overhead (for example, entries 2 and 3 of the GDT serve the kernel's code and data segments, entries 4 and 5 always serve the current process's code and data segments, the first entry is always 0, and so on), 8180 entries remain available, so the theoretical maximum number of processes in the system is 8180 / 2 = 4090.



So 4090 is the theoretical maximum number of processes; the actual number of processes that can be created is governed by a configurable value.



The Linux kernel identifies each process by a process identification value (PID). The PID is a number of type pid_t, which is in fact an int.



For compatibility with older versions of Unix and Linux, the maximum PID value defaults to 32768 (2^15, the ceiling imposed by a 16-bit short int).


View

You can use cat /proc/sys/kernel/pid_max to view the actual ceiling on the number of processes that can be created on the system.

Modify

ulimit -u 65535


After this setup, although both the hard and soft limits on the number of user-created processes are 65535, we still cannot create 65535 processes.



We also need to raise the Linux kernel parameter kernel.pid_max; on my installation it defaults to 32768.



So even using the root account, without setting this kernel parameter the entire system can create at most 32768 processes. We therefore need to set the following:





sysctl -w kernel.pid_max=65535
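Putting the steps together, a minimal workflow might look like this (65535 is the example value from above; writing sysctl settings requires root):

```shell
cat /proc/sys/kernel/pid_max                        # current ceiling, typically 32768
sysctl -w kernel.pid_max=65535                      # raise it for the running kernel
echo 'kernel.pid_max = 65535' >> /etc/sysctl.conf   # persist across reboots
sysctl -p                                           # reload /etc/sysctl.conf
```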

Maximum number of threads

The number of threads in a single process on a Linux system has an upper limit, PTHREAD_THREADS_MAX.



This limit can be viewed in /usr/include/bits/local_lim.h.
For LinuxThreads this value is generally 1024; NPTL has no such hard limit and is bounded only by the system's resources.



The relevant system resource is mainly the memory occupied by thread stacks. ulimit -s shows the default thread-stack size; in general this value is 8 MB (8192 KB).
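The back-of-the-envelope ceiling for a 32-bit process follows directly from these numbers:

```shell
# 32-bit user address space: 3 GB = 3145728 KB; default stack: 8192 KB
ulimit -s                      # usually prints 8192 (KB)
echo $(( 3145728 / 8192 ))     # theoretical per-process thread ceiling: 384
```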






We can write a simple program to verify the maximum number of threads that can be created:





#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

/* Thread body: does nothing and exits immediately. */
void *func(void *arg)
{
    return NULL;
}

int main(void)
{
    int i = 0;
    pthread_t thread;

    while (1)
    {
        if (pthread_create(&thread, NULL, func, NULL) != 0)
        {
            /* pthread_create failed, usually with EAGAIN */
            return EXIT_FAILURE;
        }

        i++;
        printf("i = %d\n", i);
    }

    return EXIT_SUCCESS;
}





Experiments show that on our system (Ubuntu 14.04 LTS, 64-bit) with LinuxThreads, at most 381 threads can be created; after that, pthread_create returns EAGAIN.

The theoretical maximum number of threads a single process can create in Linux



On a 32-bit system you can create 381 threads, and this value agrees with theory: a 32-bit Linux process has 3 GB of user address space (3072 MB), and 3072 MB / 8 MB = 384. In practice, code and data segments occupy some of that space, which rounds the value down to 383; subtracting the main thread gives 382.



Why is there one thread fewer under LinuxThreads (381 instead of 382)? Because LinuxThreads also needs a manager thread.

To break the memory limit, there are two ways:

Use ulimit -s 1024 to reduce the default stack size

When calling pthread_create, set a smaller stack size with pthread_attr_setstacksize

It is important to note that even this cannot break LinuxThreads' hard limit of 1024 threads, unless you recompile the C library.

Maximum number of open files

file-max: the system-wide maximum number of open file descriptors



/proc/sys/fs/file-max specifies the number of file handles that can be opened by all processes combined (a system-wide, kernel-level limit).



The value in file-max denotes the maximum number of file handles that the Linux kernel will allocate.



This value should be increased when you receive error messages such as "Too many open files in system".



For 2.2 kernels you also need to consider inode-max, which is generally set to 4 times file-max. Kernels 2.4 and later no longer have inode-max.


View the actual value

You can use cat /proc/sys/fs/file-max to view the system-wide ceiling on open file descriptors, for example:
186405

Temporary setting

echo 1000000 > /proc/sys/fs/file-max

Permanent setting, in /etc/sysctl.conf:

fs.file-max = 1000000

nr_open: the maximum number of files a single process can allocate


This is the maximum number of file handles the kernel supports for one process, that is, the maximum number of file handles a single process may use.



The maximum number of files that can be opened by a process.





A process cannot use more than nr_open file descriptors.


nofile: the per-process maximum number of open file descriptors

View the actual value



ulimit -n






By default this shows the soft limit; if you want to see the hard limit on open file descriptors for a single process that your system supports, use ulimit -Hn.



Temporary setting



Use ulimit -Sn to set the soft limit on the maximum number of open file descriptors; note that the soft limit cannot be greater than the hard limit (ulimit -Hn shows the hard limit).



In addition, ulimit -n shows the soft limit by default, but ulimit -n 1800000 sets both the soft limit and the hard limit.
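A quick sketch of the soft/hard interplay (best run in a throwaway shell; 1024 is an illustrative value below any common hard limit):

```shell
ulimit -Hn        # show the hard limit
ulimit -Sn 1024   # lower only the soft limit (allowed, since it stays below hard)
ulimit -Sn        # the soft limit is now 1024; the hard limit is unchanged
```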



For non-root users, the hard limit can only be set smaller than its current value.

Permanent setting



The methods above are only temporary: they are lost on logout, they cannot raise the hard limit, and the soft limit can only be adjusted within the hard limit's range.



To make the modification permanent, you need to set it in /etc/security/limits.conf (requires root permission). Adding the following two lines sets user chanon's soft limit on open file descriptors to 102400 and the hard limit to 409600. The settings take effect after logging out and back in:





chanon    soft    nofile    102400
chanon    hard    nofile    409600


When setting the nofile hard limit there is one more caveat: the hard limit must not exceed /proc/sys/fs/nr_open. If the hard limit is set greater than nr_open, you will not be able to log in normally after logging out.






You can modify the value of nr_open:



echo 2000000 > /proc/sys/fs/nr_open

The relationship between file-max, nr_open, and nofile


For the limit on the number of files a user may open, the corresponding item in limits.conf is nofile. Whether in the man page or in the file itself, the only explanation given is a single phrase:



"Maximum number of open files"



It actually corresponds to the maximum number of files a single process can open. Often, for convenience, we simply want to remove the limit.



According to the man page, "values -1, unlimited or infinity indicating no limit": -1, unlimited, and infinity all indicate no restriction.



But when you actually set nofile to one of these values, you will find that you cannot log in to the system at all until it is changed back and the machine rebooted.



This shows that nofile has a ceiling. Testing with ulimit:





ulimit -n unlimited


bash: ulimit: open files: cannot modify limit: Operation not permitted






Write a simple for loop to find the ceiling:





for v in $(seq 100000 10000000); do ulimit -n $v || break; done


Then execute ulimit -n, and you can see that 1048576 is the maximum value of nofile. But why this value?



1024 * 1024 = 1048576, although that observation by itself explains nothing.



Later we will see that this value is in fact defined by the kernel parameter nr_open:


Now let's talk about nr_open and file-max. In online posts about raising the maximum number of files, people occasionally say to modify file-max; taken literally, file-max does look like the maximum number of files. The Linux kernel documentation explains the two as follows:

file-max:
The value in file-max denotes the maximum number of file-handles that the Linux kernel will allocate. When you get lots of error messages about running out of file handles, you might want to increase this limit.



Running grep MemTotal /proc/meminfo | awk '{printf("%d", $2/10)}' yields a value close to file-max (the kernel derives the default from available memory).



nr_open:
This denotes the maximum number of file-handles a process can allocate. Default value is 1024*1024 (1048576) which should be enough for most machines. Actual limit depends on RLIMIT_NOFILE resource limit.



The documentation speaks of file handles, while in Unix/Linux we more often deal with file descriptors (fd). File handle seems to be the Windows counterpart of a file descriptor, but since we are talking about Linux, a further search pins the difference between the two concepts down to C:



According to that discussion, a file handle is the higher-level object used through functions such as fopen and fread, while an fd is the lower-level object used through functions such as open and read.



From this we can draw a general conclusion: file-max is the maximum number of file handles the kernel will allocate system-wide, and nr_open is the maximum number a single process can allocate. So when setting limits with ulimit or limits.conf, if we want to exceed the default 1048576 we must first increase nr_open (sysctl -w fs.nr_open=100000000, or write it directly into the sysctl.conf file). Of course, a per-process file-handle ceiling in the millions should be more than enough.
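In command form, the order described above looks roughly like this (the 2000000 and 1800000 values echo this article's earlier examples; writing sysctl settings requires root):

```shell
sysctl -w fs.nr_open=2000000                      # raise the per-process ceiling first
echo 'fs.nr_open = 2000000' >> /etc/sysctl.conf   # make it permanent
# Only then can limits.conf safely carry a nofile hard limit above 1048576, e.g.:
#   chanon  hard  nofile  1800000
```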



All processes combined cannot open more file descriptors than /proc/sys/fs/file-max



A single process cannot open more file descriptors than its nofile soft limit



nofile's soft limit cannot exceed its hard limit



nofile's hard limit cannot exceed /proc/sys/fs/nr_open
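The four constraints above can be inspected side by side:

```shell
cat /proc/sys/fs/file-max   # system-wide ceiling
cat /proc/sys/fs/nr_open    # per-process kernel ceiling (default 1048576)
ulimit -Hn                  # nofile hard limit (must not exceed nr_open)
ulimit -Sn                  # nofile soft limit (must not exceed the hard limit)
```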

Other notes

The following content is reproduced from another source:

Maximum threads per process: the main differences between the 2.4 and 2.6 kernels



On typical 2.4-kernel systems (AS3/RH9), threads are implemented as lightweight processes, and each thread occupies a process ID. In a server program under a high hit rate this can overflow the process table, and the system pauses service intermittently while maintaining the overflowed table. The 2.6 kernel does not suffer process-table overflow from the creation and destruction of large numbers of threads.

A thread must release its stack when it ends

That is, a thread function must end by calling pthread_exit(), or its stack will not be released until the main function exits. This matters especially in a 2.6-kernel environment, where thread creation is so fast that memory can be eaten up before you notice; the 2.4 kernel fares better here, because it creates processes, making thread creation several orders of magnitude slower than under 2.6. On 64-bit CPUs the 2.6 kernel creates threads even faster; if creation is too fast, add usleep() to pause briefly.

Don't write threaded applications that require locks


Only programs that need no mutexes can reap the full benefit of threaded programming; otherwise they just run slower. The 2.6 kernel is preemptive, so sharing conflicts between threads occur far more readily than under 2.4. Pay particular attention to thread safety, or even a single-CPU machine can exhibit inexplicable memory desynchronization (CPU cache and main memory holding inconsistent contents). Intel's newer CPUs use a NUMA architecture for performance; programs must be written to avoid its weak points.

Maximum concurrent threads and memory for a single-process server


Interestingly, with default ulimit parameters and without modifying kernel header files:
AS3 with 512 MB of memory handles at most about 1000 concurrent persistent connections
CentOS 4.3 with 512 MB of memory handles at most about 300 concurrent persistent connections

CentOS appears to do worse than AS3 mainly because of ulimit configuration: the two systems' defaults differ greatly. For a single process to keep more threads receiving concurrent connections, minimize the ulimit -s parameter and install more memory; 2000 concurrent connections in a single-process server is not difficult. POSIX's default limit is 64 threads per process, but NPTL is not pure POSIX and ignores that limit; the real limit under the 2.6 kernel is the number of memory slots on the board (and perhaps the money to buy memory).

In recent days of programming I noticed that on the 32-bit x86 platform, the maximum number of threads a single 2.6-kernel process can create is the VIRT upper bound divided by the stack size, unrelated to the total amount of memory. The default VIRT ceiling on 32-bit x86 is 3 GB (a 3G+1G memory split), and the default stack size is 10240 KB, so the default per-process thread ceiling is about 3072 MB / 10240 KB ≈ 307; shrinking the stack to 1024 KB with ulimit -s raises the ceiling to about 3050. I had no 64-bit system at hand and did not know the 2.6 kernel's per-process thread ceiling on 64-bit (and was too lazy to install fc4_x86_64 on a colleague's machine).

A few days later I bought a cheap 64-bit x86 system (64-bit CPU + 915 motherboard), installed the x86_64 version of CentOS 4.3, and ran the small program below. The result: with ulimit -s 4096, the maximum number of threads in a single process exceeded 16,000, and VIRT in top peaked at 64 GB, i.e. 36 bits. cat /proc/cpuinfo reports: address sizes: 36 bits physical, ... bits virtual. Unlike the standard 64-bit system I had imagined; I had always assumed a 64-bit system's address space was also 64-bit.


Note 1
A BSD fan in our unit ran the small program on an AMD64 notebook to test thread-creation speed (each thread calls pthread_detach() immediately after creation, then pthread_exit(); one million threads in total). The same source ran fully 3 times faster on OpenBSD than on FreeBSD; OpenBSD has gone crazy too.

NOTE 2

C source code for testing the per-process thread ceiling (test.c)





#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>

void *thread_null(void *arg);

int main(int argc, char *argv[])
{
    unsigned int i;
    int          rc;
    pthread_t    pool_id[65536];    /* thread IDs */

    sleep(1);

    /* create threads until creation fails */
    for (i = 0; i < 65536; i++)
    {
        rc = pthread_create(pool_id + i, 0, thread_null, NULL);
        if (rc != 0)
        {
            fprintf(stderr, "pthread_create() failure\r\nmax pthread num is %d\r\n", i);
            exit(-1);
        }
    }

    fprintf(stdout, "max pthread num is 65536\r\nyour system is power_full\r\n");

    exit(0);
}

void *thread_null(void *arg)
{
    pthread_detach(pthread_self());
    sleep(60);    /* keep the thread alive while the others are created */
    pthread_exit(NULL);
}




