In-depth analysis of Linux ulimit restrictions

Generally, there are two ways to change resource limits: run the ulimit command, or edit /etc/security/limits.conf. Changes made with ulimit take effect immediately, but only in the current session. Settings in limits.conf are applied per user or group at the next login; they take effect through the pam_limits.so module, which is loaded, for example, from /etc/pam.d/sshd (so the limits are applied at ssh login) or from /etc/pam.d/login. The following sections analyze each limit in turn. A typical ulimit -a output looks like this:

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 20
file size               (blocks, -f) unlimited
pending signals                 (-i) 16382
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) unlimited
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

1) Limiting the size of files created by a process (file size)

ulimit takes the -H parameter for hard limits and the -S parameter for soft limits. ulimit -a shows the soft limits; ulimit -a -H shows the hard limits. If neither -H nor -S is given, ulimit changes both limits at the same time. The soft limit is what actually restricts resource usage by users/groups. Both the superuser and ordinary users can narrow a hard limit, but only the superuser can expand one; an ordinary user who lowers a hard limit cannot raise it back. Once the hard limit is set, the soft limit can only be less than or equal to it. The following tests illustrate the hard/soft semantics.

Test 1: the soft limit cannot exceed the hard limit. As the superuser, modify both limits at once so that the current session can only create a 100 KB file:

ulimit -f 100

Check the hard limit on created file size; it is now 100 KB:

ulimit -H -f
100

Trying to set the session's soft limit to 1000 KB now fails:

ulimit -S -f 1000
-bash: ulimit: file size: cannot modify limit: Invalid argument

Test 2: the hard limit cannot be set below the soft limit. As the superuser, check the current soft limit, which is unlimited:

ulimit -S -f
unlimited

Trying to change the hard limit on created file size to 1000 KB now fails, because the hard limit cannot be less than the soft limit:

ulimit -H -f 1000
-bash: ulimit: file size: cannot modify limit: Invalid argument

If we first lower the soft limit to 900 KB, the hard limit can then be set:

ulimit -S -f 900
ulimit -H -f 1000

Test 3: ordinary users can only narrow the hard limit; only the superuser can expand it. Switch to an ordinary user and check the hard limit on created file size:

su - test
ulimit -H -f
unlimited

The user can narrow the hard limit:

ulimit -H -f 1000

but trying to expand it again fails:

ulimit -H -f 10000

Test 4: the hard limit bounds the soft limit, and the soft limit is what actually restricts resource usage. Set the soft limit on created file size to 1000 KB and the hard limit to 2000 KB:

ulimit -S -f 1000
ulimit -H -f 2000

Then try to create a 3 MB file:

dd if=/dev/zero of=/tmp/test bs=3M count=1
File size limit exceeded

The size of /tmp/test is 1000 KB, which shows that the soft limit is the one that is enforced:

ls -lh /tmp/test
-rw-r--r-- 1 root root 1000K /tmp/test
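Under the hood, bash's ulimit builtin maps onto the getrlimit()/setrlimit() system calls. The following is a minimal sketch (ours, not part of the original tests) of the same file-size experiment done from C: it lowers the soft RLIMIT_FSIZE of the current process, mirroring ulimit -S -f 100 above. Note that setrlimit() expects bytes, while bash's -f counts 1024-byte blocks; once the limit is in place, a write that would exceed it delivers SIGXFSZ, which is where the "File size limit exceeded" message from dd comes from.

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* Read the current file-size limits; rlim_cur is the soft limit
     * (ulimit -S -f) and rlim_max the hard limit (ulimit -H -f). */
    if (getrlimit(RLIMIT_FSIZE, &rl) == -1) {
        perror("getrlimit");
        return 1;
    }
    printf("soft=%llu hard=%llu (bytes)\n",
           (unsigned long long)rl.rlim_cur, (unsigned long long)rl.rlim_max);

    /* Lower the soft limit to 100 KB, like "ulimit -S -f 100" in bash.
     * The soft limit must stay at or below the hard limit, or
     * setrlimit() fails with EINVAL. */
    rl.rlim_cur = 100 * 1024;
    if (rl.rlim_cur > rl.rlim_max)
        rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_FSIZE, &rl) == -1) {
        perror("setrlimit");
        return 1;
    }
    printf("soft file-size limit is now %llu bytes\n",
           (unsigned long long)rl.rlim_cur);
    return 0;
}

As with the shell tests, an unprivileged process may lower rlim_max here, but raising it again requires the superuser (CAP_SYS_RESOURCE).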
2) Restrictions on process priority (scheduling priority)

The priority here is the nice value. This limit works only for ordinary users, not for the superuser, because of CAP_SYS_NICE. The -e limit value maps to the lowest usable nice value as 20 minus the limit. For example, to let ordinary users use nice values from -10 to 20, with a hard limit of -15 to 20:

ulimit -H -e 35
ulimit -S -e 30

Use the nice command to run ls with a nice value of -10, which is within the soft limit:

nice -n -10 ls /tmp
ssh-BossiP2810  ssh-KITFTp2620  ssh-vIQDXV3333

Use the nice command to run ls with a nice value of -11; this exceeds the ulimit soft limit on nice and fails:

nice -n -11 ls /tmp
nice: cannot set niceness: Permission denied

3) Maximum locked memory (max locked memory)

This value works only for ordinary users; it does not apply to the superuser, because of CAP_IPC_LOCK. Linux manages memory in pages, which means data in physical memory can be swapped out to swap or disk and swapped back in when needed; locking data into physical memory avoids this swapping in and out. There are two common reasons to lock memory: first, the program needs the data resident in physical memory, as with Oracle and similar software; second, security: user names, passwords and the like could leak if swapped to disk, so they are kept locked in physical memory. The mlock() function locks memory; its prototype is:

int mlock(const void *addr, size_t len);

The test program is as follows (the array is 2048 bytes, matching the lengths printed below):

#include <stdio.h>
#include <sys/mman.h>

int main(int argc, char *argv[])
{
    char array[2048];

    if (mlock((const void *)array, sizeof(array)) == -1) {
        perror("mlock");
        return -1;
    }
    printf("success to lock stack mem at: %p, len = %zd\n", array, sizeof(array));

    if (munlock((const void *)array, sizeof(array)) == -1) {
        perror("munlock");
        return -1;
    }
    printf("success to unlock stack mem at: %p, len = %zd\n", array, sizeof(array));
    return 0;
}

Compile it:

gcc mlock_test.c -o mlock_test

The program locks 2 KB of data into physical memory. Lower the max locked memory limits of ulimit and run it:

ulimit -H -l 4
ulimit -S -l 1
./mlock_test
mlock: Cannot allocate memory

Enlarge the soft max locked memory limit to 4 KB and the program succeeds:

ulimit -S -l 4
./mlock_test
success to lock stack mem at: 0x7fff1f039500, len = 2048
success to unlock stack mem at: 0x7fff1f039500, len = 2048

Note: with the limit set to 3 KB the program still cannot run, because besides this code the process also locks memory belonging to the dynamic link libraries it uses.

4) Restrictions on files opened by a process (open files)

This value applies to all users and is the number of files a process may have open. For example, change open files to 3:

ulimit -n 3

Now opening /etc/passwd fails:

cat /etc/passwd
-bash: start_pipeline: pgrp pipe: Too many open files
-bash: /bin/cat: Too many open files

5) Pending signals (pending signals)

This value applies to all users and is the maximum number of signals that can be pending, that is, queued while blocked. We test it with the following program:

#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <signal.h>
#include <unistd.h>

volatile int done = 0;

void handler(int sig)
{
    const char *str = "handled...\n";
    write(1, str, strlen(str));
    done = 1;
}

void child(void)
{
    int i;
    for (i = 0; i < 3; i++) {
        kill(getppid(), SIGRTMIN);
        printf("child-BANG!\n");
    }
    exit(0);
}

int main(int argc, char *argv[])
{
    signal(SIGRTMIN, handler);
    sigset_t newset, oldset;
    sigfillset(&newset);
    sigprocmask(SIG_BLOCK, &newset, &oldset);

    pid_t pid = fork();
    if (pid == 0)
        child();

    printf("parent sleeping\n");
    int r = sleep(3);
    printf("woke up! r = %d\n", r);

    sigprocmask(SIG_SETMASK, &oldset, NULL);
    while (!done) {};
    printf("exiting\n");
    exit(0);
}

Compile the source program and run it. The child sends three SIGRTMIN signals while the parent has all signals blocked; after 3 seconds the parent unblocks them and handles all three:

gcc test.c -o test
./test
parent sleeping
child-BANG!
child-BANG!
child-BANG!
woke up! r = 0
handled...
handled...
handled...
exiting

Note: we use a real-time signal here, as in kill(getppid(), SIGRTMIN), because real-time signals are queued; a non-real-time signal would be received only once. If we change the pending signals value to 2, only two signals can be queued and the third is discarded:

ulimit -i 2
./test
parent sleeping
child-BANG!
child-BANG!
child-BANG!
woke up! r = 0
handled...
handled...
exiting

6) The maximum size of POSIX message queues, measured in bytes (POSIX message queues)

We use the following program to test the POSIX message queue limit:

#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <mqueue.h>
#include <sys/stat.h>
#include <sys/wait.h>

struct message {
    char mtext[128];
};

int send_msg(mqd_t qid, int pri, const char text[])
{
    int r = mq_send(qid, text, strlen(text) + 1, pri);
    if (r == -1)
        perror("mq_send");
    return r;
}

void producer(mqd_t qid)
{
    send_msg(qid, 1, "This is my first message.");
    send_msg(qid, 1, "This is my second message.");
    send_msg(qid, 3, "No more messages.");
}

void consumer(mqd_t qid)
{
    struct mq_attr mattr;
    do {
        unsigned int pri;
        struct message msg;
        ssize_t len;

        len = mq_receive(qid, (char *)&msg, sizeof(msg), &pri);
        if (len == -1) {
            perror("mq_receive");
            break;
        }
        printf("got pri %u '%s' len = %zd\n", pri, msg.mtext, len);

        if (mq_getattr(qid, &mattr) == -1) {
            perror("mq_getattr");
            break;
        }
    } while (mattr.mq_curmsgs);
}

int main(int argc, char *argv[])
{
    struct mq_attr mattr = {
        .mq_maxmsg = 10,
        .mq_msgsize = sizeof(struct message)
    };
    mqd_t mqid = mq_open("/myq", O_CREAT | O_RDWR, S_IREAD | S_IWRITE, &mattr);
    if (mqid == (mqd_t)-1) {
        perror("mq_open");
        exit(1);
    }

    pid_t pid = fork();
    if (pid == 0) {
        producer(mqid);
        mq_close(mqid);
        exit(0);
    } else {
        int status;
        wait(&status);
        consumer(mqid);
        mq_close(mqid);
    }
    mq_unlink("/myq");
    return 0;
}

Compile it (the POSIX message queue functions live in librt):

gcc test.c -o test -lrt

Limit the maximum POSIX message queue size to 1000 bytes and run the test program:

ulimit -q 1000
./test
mq_open: Cannot allocate memory

The program reports that memory cannot be allocated. Tracing the run with strace shows the failing call:

mq_open("myq", O_RDWR|O_CREAT, 0600, {mq_maxmsg = 10, mq_msgsize = 128}) = -1 ENOMEM (Cannot allocate memory)

With {mq_maxmsg = 10, mq_msgsize = 128} the queue needs 128 * 10 = 1280 bytes, which exceeds the 1000-byte limit. When we adjust the maximum POSIX message queue size to 1360 bytes, the program can run:

ulimit -q 1360
./test
got pri 3 'No more messages.' len = 18
got pri 1 'This is my first message.' len = 26
got pri 1 'This is my second message.' len = 27
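ulimit -q corresponds to the per-user RLIMIT_MSGQUEUE resource. As a quick cross-check, here is a small sketch (ours, with the queue attributes of the test program hard-coded) that reads the limit with getrlimit() and compares it with the queue's payload size. The kernel also charges per-message bookkeeping on top of the payload, roughly one pointer per message slot, which is consistent with 1280 bytes of payload not fitting until the limit was raised to 1360 (1280 + 10 * 8 on a 64-bit system).

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* RLIMIT_MSGQUEUE is the per-user byte budget for POSIX message
     * queues; it is the value reported by "ulimit -q". */
    if (getrlimit(RLIMIT_MSGQUEUE, &rl) == -1) {
        perror("getrlimit");
        return 1;
    }
    printf("RLIMIT_MSGQUEUE soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur, (unsigned long long)rl.rlim_max);

    /* Attributes used by the test program above. */
    unsigned long long mq_maxmsg = 10, mq_msgsize = 128;
    unsigned long long payload = mq_maxmsg * mq_msgsize;   /* 1280 bytes */

    printf("queue payload capacity: %llu bytes\n", payload);
    if (payload > (unsigned long long)rl.rlim_cur)
        printf("mq_open() would fail with ENOMEM: the payload alone exceeds the soft limit\n");
    return 0;
}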
7) CPU time a program may use, in seconds (cpu time)

We use the following code to test the CPU time limit:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double pi = M_PI;
    double pisqrt;

    while (1) {
        pisqrt = sqrt(pi);   /* burn CPU forever */
    }
    return 0;
}

Compile it:

gcc test.c -o test -lm

Run the program; it loops until interrupted with Ctrl+C:

./test
^C

Use ulimit to limit the program's CPU time to 2 seconds and run it again; this time the program is killed:

ulimit -t 2
./test
Killed

8) Restricting the range of real-time priority of programs, effective only for ordinary users (real-time priority)

We use the following code to test the real-time priority range:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int i;
    for (i = 0; i < 6; i++) {
        printf("%d\n", i);
        sleep(1);
    }
    return 0;
}

Compile it:

gcc test.c -o test

Switch to an ordinary user and run the test program with real-time priority 20; it fails:

su - ckhitler
chrt -f 20 ./test
chrt: failed to set pid 0's policy: Operation not permitted

As root, raise the real-time priority limit of ulimit to 20, then switch back to the ordinary user and run the program with real-time priority 20; now it works:

su - root
ulimit -r 20
su - ckhitler
chrt -r 20 ./test
0
1
2
3
4
5

Running the program with a real-time priority of 50 still reports an error, which shows the ulimit restriction is in effect:

chrt -r 50 ./test
chrt: failed to set pid 0's policy: Operation not permitted

9) Limiting the number of processes a user can create, effective only for ordinary users (max user processes)

We use the following code to test how many processes the program can fork:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    pid_t pid;
    int count = 0;

    while (count < 3) {
        pid = fork();
        count++;
        printf("count = %d\n", count);
    }
    return 0;
}

Compile and run it:

gcc test.c -o test
./test
count = 1
count = 2
count = 3
count = 2
count = 3
count = 1
count = 3
count = 2
count = 3
count = 3
count = 3
count = 2
count = 3
count = 3

The number of processes grows exponentially: each iteration doubles the process count, so three rounds of fork() yield 8 processes in all (the original plus 7 children), printing 14 lines. Now we limit the number of user processes to 12:

ulimit -u 12

Running the test again, fork() fails once the user's process count reaches the limit, so fewer lines are printed:

./test
count = 1
count = 2
count = 3
count = 1
count = 2
count = 3
count = 2
count = 3
count = 3
count = 2
count = 3
count = 3
count = 3

10) Limiting the core file size (core file size)

We use the following code to test the core file generated by a program:

#include <stdio.h>

static void sub(void);

int main(void)
{
    sub();
    return 0;
}

static void sub(void)
{
    int *p = NULL;
    printf("%d", *p);   /* NULL dereference: segmentation fault */
}

Compile it with debugging information and run it; the program crashes with a segmentation fault:

gcc -g test.c -o test
./test
Segmentation fault (core dumped)

If there is no core file in the current directory, adjust the ulimit core size: when the core file size is set to 0, no core file is generated at all. Set the core file size to 10 blocks (note: one block is 1024 bytes here):

ulimit -c 10

Run the program again:

./test
Segmentation fault (core dumped)

Check the size of the core file:

ls -lh core
-rw------- 1 root root 12K core

We set 10 blocks, which is 10 * 1024 bytes, or 10 KB; why is the file 12 KB? Because it grows in 4 KB increments. If we adjust the limit to 14 blocks, we get a core file of at most 16 KB.
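The core-size knob can also be flipped from inside a program via RLIMIT_CORE. A common real-world use, sketched below under our own assumptions rather than taken from the article, is a process handling secrets that disables core dumps entirely before doing anything else, the programmatic equivalent of running under ulimit -c 0.

#include <stdio.h>
#include <sys/resource.h>

/* Disable core dumps for this process by setting the soft RLIMIT_CORE
 * to 0; a crash will then never write memory contents to disk. */
static int disable_core_dumps(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_CORE, &rl) == -1) {
        perror("getrlimit");
        return -1;
    }
    rl.rlim_cur = 0;   /* soft limit 0: no core file is generated */
    if (setrlimit(RLIMIT_CORE, &rl) == -1) {
        perror("setrlimit");
        return -1;
    }
    return 0;
}

int main(void)
{
    if (disable_core_dumps() == 0)
        printf("core dumps disabled for this process\n");
    /* ... rest of the program: a crash now leaves no core file ... */
    return 0;
}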
11) Limiting the size of the data segment used by a process (data seg size)

Generally this restriction affects the memory that malloc requests from the kernel through brk (a system call) and sbrk (a library function). Restrict the data segment to 1 KB:

ulimit -d 1

Using nroff to open the /etc/passwd file now crashes:

nroff /etc/passwd
Segmentation fault

Tracing the run with strace prints the following, which proves that brk is called to request new memory when the program runs out; because of the ulimit restriction, the request fails:

strace nroff /etc/passwd
...
munmap(0x7fc2abf00000, 104420) = 0
rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0
open("/dev/tty", O_RDWR|O_NONBLOCK) = 3
close(3) = 0
brk(0) = 0xf5b000
brk(0xf5c000) = 0xf5b000
brk(0xf5c000) = 0xf5b000
brk(0xf5c000) = 0xf5b000
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
+++ killed by SIGSEGV +++
Segmentation fault

We use a test program to check the data segment restriction. The source program is as follows:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    long start, end;

    start = (long)sbrk(0);
    (void)malloc(32 * 1024);   /* try to grow the heap */
    end = (long)sbrk(0);
    printf("hello I used %ld vmemory\n", end - start);
    return 0;
}

Compile and run it; with the 1 KB data segment limit still in place, malloc cannot grow the heap with brk:

gcc test.c -o test
./test
hello I used 0 vmemory

After raising the data segment limit with ulimit -d, run the program again:

./test
hello I used 167936 vmemory

12) Limiting the size of the stack segment used by a process (stack size)

We use ulimit to set the stack size to 16, that is, 16 * 1024 bytes:

ulimit -s 16

Running ls -l /etc/ now fails:

ls -l /etc/
Segmentation fault (core dumped)

Tracing the command with strace shows that it calls getrlimit; the limit here is 16 * 1024 bytes, which is not enough for the stack the program needs at run time:

strace ls -l /etc/
...
getrlimit(RLIMIT_STACK, {rlim_cur=16*1024, rlim_max=16*1024}) = 0

Note: on a 2.6.32 system, ls -l /etc/ does not run out of stack. In that case you can use expect to trigger the problem, for example:

expect
Tcl_Init failed: out of stack space (infinite loop?)

13) Limiting the virtual memory size of a process (virtual memory)

We use ulimit to set the virtual memory to 8192 KB:

ulimit -v 8192

Running ls reports an error while loading the libc.so.6 dynamic library, prompting that memory is insufficient:

ls
ls: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory

Tracing ls with strace produces the following output, which shows that memory runs out when mmap maps the library:

mmap(NULL, 3680296, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = -1 ENOMEM (Cannot allocate memory)
close(3) = 0
writev(2, [{"ls", 2}, {": ", 2}, {"error while loading shared libra"..., 36}, {": ", 2}, {"libc.so.6", 9}, {": ", 2}, {"failed to map segment from share"..., 40}, {": ", 2}, {"Cannot allocate memory", 22}, {"\n", 1}], 10ls: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory

14) Description of the remaining three ulimit limits (file locks / max memory size / pipe size)

The file locks limit exists only before the 2.4 kernel. The max memory size (resident memory) limit has no effect on many systems. The pipe size cannot be changed: it is fixed at 8 * 512 bytes, that is, 4096 bytes.
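Finally, everything ulimit shows can also be read programmatically. The sketch below (ours, not from the article) prints a subset of the limits discussed above, a rough counterpart of ulimit -a; keep in mind that getrlimit() reports raw kernel units (bytes for most size limits), whereas the shell scales many values to 1024-byte blocks.

#include <stdio.h>
#include <sys/resource.h>

/* Print one resource limit, rendering RLIM_INFINITY as "unlimited". */
static void show(const char *name, int resource)
{
    struct rlimit rl;

    if (getrlimit(resource, &rl) == -1) {
        perror(name);
        return;
    }
    if (rl.rlim_cur == RLIM_INFINITY)
        printf("%-16s soft=unlimited", name);
    else
        printf("%-16s soft=%llu", name, (unsigned long long)rl.rlim_cur);
    if (rl.rlim_max == RLIM_INFINITY)
        printf(" hard=unlimited\n");
    else
        printf(" hard=%llu\n", (unsigned long long)rl.rlim_max);
}

int main(void)
{
    show("core (-c)", RLIMIT_CORE);       /* bytes; ulimit -c shows blocks */
    show("data (-d)", RLIMIT_DATA);       /* bytes; ulimit -d shows KB     */
    show("fsize (-f)", RLIMIT_FSIZE);     /* bytes; ulimit -f shows blocks */
    show("memlock (-l)", RLIMIT_MEMLOCK); /* bytes; ulimit -l shows KB     */
    show("nofile (-n)", RLIMIT_NOFILE);   /* count, same as ulimit -n      */
    show("stack (-s)", RLIMIT_STACK);     /* bytes; ulimit -s shows KB     */
    show("cpu (-t)", RLIMIT_CPU);         /* seconds, same as ulimit -t    */
    show("nproc (-u)", RLIMIT_NPROC);     /* count, same as ulimit -u      */
    return 0;
}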
 
