Resource limits can be set either with the ulimit command or by editing /etc/security/limits.conf.
ulimit is simple to use but only affects the current session; limits.conf applies limits per user or group, and they take effect at the user's next login.
The settings in limits.conf are applied by the pam_limits.so module: for example /etc/pam.d/sshd loads it, so the limits are applied on SSH login, and /etc/pam.d/login loads it for console logins.
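For reference, limits.conf entries take the form `<domain> <type> <item> <value>`. A minimal sketch (the user and group names here are only examples):

```
# /etc/security/limits.conf -- illustrative entries
# <domain>   <type>   <item>    <value>
oracle       soft     nofile    4096       # open files, soft limit
oracle       hard     nofile    65536      # open files, hard limit
@dba         hard     nproc     1000       # max processes for group dba
*            soft     core      0          # no core dumps by default
```

These entries take effect at the next login through pam_limits.so.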
The various limits are analyzed below. A typical ulimit -a output:
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 16382
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) unlimited
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
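The same per-process values can also be read from procfs, which is a quick way to check which limits a running process actually has (here we inspect the shell's own limits):

```shell
# Each line of /proc/<pid>/limits shows one limit's soft and hard value.
cat /proc/self/limits
# Sample lines (values vary by system):
#   Max open files    1024         4096         files
#   Max cpu time      unlimited    unlimited    seconds
```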
I) Limits on the size of files a process can create (file size)
First, ulimit distinguishes hard limits from soft limits.
A hard limit is set with the -H option and a soft limit with the -S option.
ulimit -a shows the soft limits; ulimit -a -H shows the hard limits.
If ulimit is invoked without -H or -S, it changes both limits at once.
The soft limit is what actually restricts a user's or group's resource usage; the hard limit acts as a ceiling on the soft limit.
The superuser can raise a hard limit, but an ordinary user can only lower it and cannot raise it back.
Once a hard limit is set, the soft limit can only be less than or equal to it.
The following tests apply to both hard and soft limits.
1) The soft limit cannot exceed the hard limit
As the superuser, set both the hard and soft limit so that the current session can only create 100KB files:
ulimit -f 100
Check the hard limit on created file size; it is now 100KB:
ulimit -H -f
100
Trying to raise the soft limit of the current session to 1000KB fails:
ulimit -S -f 1000
-bash: ulimit: file size: cannot modify limit: Invalid argument
2) The hard limit cannot be less than the soft limit
As the superuser, check the current soft limit, which is unlimited:
ulimit -S -f
unlimited
Trying to set a hard limit of 1000KB on created file size now fails, showing that the hard limit cannot be set below the soft limit:
ulimit -H -f 1000
-bash: ulimit: file size: cannot modify limit: Invalid argument
If we first lower the soft limit on created file size to 900KB, the hard limit can then be set:
ulimit -S -f 900
ulimit -H -f 1000
3) Ordinary users can only lower a hard limit; only the superuser can raise it
Log in as an ordinary user:
su - test
Check the hard limit on created file size:
ulimit -H -f
unlimited
The hard limit can be lowered:
ulimit -H -f 1000
But it cannot be raised again:
ulimit -H -f 10000
4) The hard limit caps the soft limit; the soft limit restricts the user's resource usage
Set a soft limit of 1000KB on created file size:
ulimit -S -f 1000
Set a hard limit of 2000KB:
ulimit -H -f 2000
Try to create a 3MB file:
dd if=/dev/zero of=/tmp/test bs=3M count=1
File size limit exceeded
/tmp/test is 1000KB, showing that it is the soft limit that actually controls resource usage:
ls -lh /tmp/test
-rw-r--r-- 1 root root 1000K 2010-10-15 23:04 /tmp/test
In bash, the unit of ulimit -f is KB (1024-byte blocks).
II) Limits on process scheduling priority (scheduling priority)
The priority here is the nice value.
This limit only applies to ordinary users; it does not restrict the superuser, because of the CAP_SYS_NICE capability.
The ulimit -e value maps to the lowest usable nice value as nice = 20 - value.
For example, to let ordinary users use nice values down to -15 (hard limit) and -10 (soft limit):
Set the hard limit to 35, allowing nice values down to -15:
ulimit -H -e 35
Set the soft limit to 30, allowing nice values down to -10:
ulimit -S -e 30
Run ls with a nice value of -10; this works:
nice -n -10 ls /tmp
ssh-bossip2810 ssh-kitftp2620 ssh-viqdxv3333
Running ls with a nice value of -11 exceeds the soft limit and fails:
nice -n -11 ls /tmp
nice: cannot set niceness: Permission denied
III) Limits on locked memory (max locked memory)
This limit only applies to ordinary users; it does not restrict the superuser, because of the CAP_IPC_LOCK capability.
Linux manages memory in pages: when physical memory is needed elsewhere, data is swapped out to the swap area or disk, and swapped back into physical memory when needed again. Locking data into physical memory prevents it from being swapped in and out.
There are two common reasons to lock memory:
1) The program requires it by design, e.g. software such as Oracle.
2) Security: data such as user names and passwords could leak if swapped to disk, so it is kept locked in physical memory.
Memory is locked with the mlock() function, whose prototype is:
int mlock(const void *addr, size_t len);
The test program is as follows:
#include <stdio.h>
#include <sys/mman.h>

int main(int argc, char *argv[])
{
    char array[2048];

    if (mlock((const void *)array, sizeof(array)) == -1) {
        perror("mlock");
        return -1;
    }
    printf("Success to lock stack mem at: %p, len=%zu\n",
           (void *)array, sizeof(array));

    if (munlock((const void *)array, sizeof(array)) == -1) {
        perror("munlock");
        return -1;
    }
    printf("Success to unlock stack mem at: %p, len=%zu\n",
           (void *)array, sizeof(array));
    return 0;
}
gcc mlock_test.c -o mlock_test
This program locks 2KB of data into physical memory; we then adjust the max locked memory ulimit:
ulimit -H -l 4
ulimit -S -l 1
./mlock_test
mlock: Cannot allocate memory
After raising the max locked memory soft limit to 4KB, the program runs:
ulimit -S -l 4
./mlock_test
Success to lock stack mem at: 0x7fff1f039500, len=2048
Success to unlock stack mem at: 0x7fff1f039500, len=2048
Note: with the limit at 3KB the program still fails, because besides this 2KB the process also locks memory belonging to the dynamic libraries it uses.
IV) Limits on files opened by a process (open files)
This limit applies to all users and is the number of files a process may have open.
For example, change open files to 3:
ulimit -n 3
Opening /etc/passwd now fails:
cat /etc/passwd
-bash: start_pipeline: pgrp pipe: Too many open files
-bash: /bin/cat: Too many open files
V) Maximum number of pending signals (pending signals)
This limit applies to all users and is the maximum number of signals that can be pending (sent while blocked and not yet delivered).
We test it with the following program:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <signal.h>
#include <unistd.h>

volatile int done = 0;

void handler(int sig)
{
    const char *str = "handled...\n";
    write(1, str, strlen(str));
    done = 1;
}

void child(void)
{
    int i;
    for (i = 0; i < 3; i++) {
        kill(getppid(), SIGRTMIN);
        printf("child - BANG!\n");
    }
    exit(0);
}

int main(int argc, char *argv[])
{
    signal(SIGRTMIN, handler);

    sigset_t newset, oldset;
    sigfillset(&newset);
    sigprocmask(SIG_BLOCK, &newset, &oldset);

    pid_t pid = fork();
    if (pid == 0)
        child();

    printf("parent sleeping\n");
    int r = sleep(3);
    printf("woke up! r=%d\n", r);

    sigprocmask(SIG_SETMASK, &oldset, NULL);
    while (!done) {
    }
    printf("exiting\n");
    exit(0);
}
Compile the source program:
gcc test.c -o test
Run test: the child sends three SIGRTMIN signals, and the parent receives and handles them after sleeping for 3 seconds:
./test
parent sleeping
child - BANG!
child - BANG!
child - BANG!
woke up! r=0
handled...
handled...
handled...
exiting
Note: the test sends real-time signals (SIGRTMIN), as in kill(getppid(), SIGRTMIN);
A non-real-time (standard) signal is not queued, so it would be received only once no matter how many times it was sent while blocked.
If we change the pending signals value to 2, only two signals can be pending, and the third is discarded:
ulimit -i 2
./test
parent sleeping
child - BANG!
child - BANG!
child - BANG!
woke up! r=0
handled...
handled...
exiting
VI) Maximum total size, in bytes, of POSIX message queues (POSIX message queues)
We test the POSIX message queue limit with the following program:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <mqueue.h>
#include <sys/stat.h>
#include <sys/wait.h>

struct message {
    char mtext[128];
};

int send_msg(int qid, int pri, const char text[])
{
    int r = mq_send(qid, text, strlen(text) + 1, pri);
    if (r == -1) {
        perror("mq_send");
    }
    return r;
}

void producer(mqd_t qid)
{
    send_msg(qid, 1, "This is my first message.");
    send_msg(qid, 1, "This is my second message.");
    send_msg(qid, 3, "No more messages.");
}

void consumer(mqd_t qid)
{
    struct mq_attr mattr;
    do {
        unsigned int pri;
        struct message msg;
        ssize_t len;

        len = mq_receive(qid, (char *)&msg, sizeof(msg), &pri);
        if (len == -1) {
            perror("mq_receive");
            break;
        }
        printf("got pri %u '%s' len=%zd\n", pri, msg.mtext, len);

        int r = mq_getattr(qid, &mattr);
        if (r == -1) {
            perror("mq_getattr");
            break;
        }
    } while (mattr.mq_curmsgs);
}

int main(int argc, char *argv[])
{
    struct mq_attr mattr = {
        .mq_maxmsg = 10,
        .mq_msgsize = sizeof(struct message)
    };
    mqd_t mqid = mq_open("/myq",
                         O_CREAT | O_RDWR,
                         S_IRUSR | S_IWUSR,
                         &mattr);
    if (mqid == (mqd_t)-1) {
        perror("mq_open");
        exit(1);
    }

    pid_t pid = fork();
    if (pid == 0) {
        producer(mqid);
        mq_close(mqid);
        exit(0);
    } else {
        int status;
        wait(&status);
        consumer(mqid);
        mq_close(mqid);
    }
    mq_unlink("/myq");
    return 0;
}
Compile (the POSIX message queue functions live in librt, hence -lrt):
gcc test.c -o test -lrt
Limit POSIX message queues to a maximum of 1000 bytes:
ulimit -q 1000
Run the test program:
./test
mq_open: Cannot allocate memory
The program reports that memory could not be allocated.
Tracing the run with strace shows the following call failing:
mq_open("myq", O_RDWR|O_CREAT, 0600, {mq_maxmsg=10, mq_msgsize=128}) = -1 ENOMEM (Cannot allocate memory)
{mq_maxmsg=10, mq_msgsize=128} amounts to 128*10 = 1280 bytes, which exceeds the 1000-byte POSIX message queue limit.
After raising the POSIX message queue limit to 1360 bytes, the program runs:
ulimit -q 1360
./test
got pri 3 'No more messages.' len=18
got pri 1 'This is my first message.' len=26
got pri 1 'This is my second message.' len=27
VII) Limits on the CPU time a program may consume, in seconds (cpu time)
We test the CPU time limit with the following program:
#include <stdio.h>
#include <math.h>

int main(void)
{
    double pi = M_PI;
    double pisqrt;

    while (1) {
        pisqrt = sqrt(pi);
    }
    return 0;
}
Compile:
gcc test.c -o test -lm
Run test: the program loops forever and can only be interrupted with Ctrl+C:
./test
^C
Now use ulimit to limit the program's CPU time to 2 seconds and run it again:
ulimit -t 2
./test
Killed
The program is killed once it exceeds the limit.
VIII) Limits on a program's real-time priority range, effective only for ordinary users (real-time priority)
We test the real-time priority range with the following program:
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int i;
    for (i = 0; i < 6; i++) {
        printf("%d\n", i);
        sleep(1);
    }
    return 0;
}
Compile:
gcc test.c -o test
Switch to an ordinary user for testing:
su - ckhitler
Run the test program with real-time priority 20; it fails:
chrt -f 20 ./test
chrt: failed to set pid 0's policy: Operation not permitted
As root, raise the real-time priority ulimit to 20, then test again:
su - root
ulimit -r 20
Switch back to the ordinary user and run the program with real-time priority 20; now it works:
su - ckhitler
chrt -r 20 ./test
0
1
2
3
4
5
Running the program with real-time priority 50 fails again, showing that the ulimit restriction takes effect:
chrt -r 50 ./test
chrt: failed to set pid 0's policy: Operation not permitted
IX) Limits on the number of processes a user can fork, effective only for ordinary users (max user processes)
We test process creation with the following program:
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    pid_t pid;
    int count = 0;

    while (count < 3) {
        pid = fork();
        count++;
        printf("count=%d\n", count);
    }
    return 0;
}
Compile and run:
gcc test.c -o test
./test
count=1
count=2
count=3
count=2
count=3
count=1
count=3
count=2
count=3
count=3
count=3
count=2
count=3
count=3
The number of processes doubles with each fork(): after three iterations there are 2^3 = 8 processes, the original plus 7 children, printing 14 lines in total (2 + 4 + 8).
Now limit the number of user processes to 12:
ulimit -u 12
Running the test again, one fork() fails and only 13 lines are printed:
./test
count=1
count=2
count=3
count=1
count=2
count=3
count=2
count=3
count=3
count=2
count=3
count=3
count=3
X) Limits on core file size (core file size)
We test the size of the core file a program generates with the following code:
Source:
#include <stdio.h>

static void sub(void);

int main(void)
{
    sub();
    return 0;
}

static void sub(void)
{
    int *p = NULL;
    printf("%d", *p);   /* NULL dereference: segmentation fault */
}
Compile:
gcc -g test.c -o test
Run test; it crashes with a segmentation fault:
./test
Segmentation fault (core dumped)
If no core file appears in the current directory, adjust the ulimit that restricts core file size; if the size is set to 0, no core file is generated at all.
Here we set the core file size to 10 blocks (one block here is 1024 bytes):
ulimit -c 10
Run the program again:
./test
Segmentation fault (core dumped)
Check the size of the core file:
ls -lh core
-rw------- 1 root root 12K 2011-03-08 13:54 core
We set 10 blocks, i.e. 10*1024 bytes = 10KB, so why is the file 12KB? Because core files grow in 4KB increments.
With the limit set to 14 blocks, the core file produced is at most 16KB.
XI) Limits on the size of a process's data segment (data seg size)
In general this limit affects brk (the system call) and sbrk (the library function).
When malloc finds its heap exhausted, it calls brk to request more memory from the kernel.
Limit the data segment to 1KB:
ulimit -d 1
Open /etc/passwd with nroff:
nroff /etc/passwd
Segmentation fault
Trace the program with strace:
strace nroff /etc/passwd
The output below shows that when the program ran short of memory it called brk to request more, and the request failed because of the ulimit restriction:
munmap(0x7fc2abf00000, 104420) = 0
rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0
open("/dev/tty", O_RDWR|O_NONBLOCK) = 3
close(3) = 0
brk(0) = 0xf5b000
brk(0xf5c000) = 0xf5b000
brk(0xf5c000) = 0xf5b000
brk(0xf5c000) = 0xf5b000
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
+++ killed by SIGSEGV +++
Segmentation fault
Here we test the data segment limit with a small program.
The source is as follows:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    char *start, *end;

    start = sbrk(0);
    malloc(32 * 1024);
    end = sbrk(0);
    printf("Hello I used %ld vmemory\n", (long)(end - start));
    return 0;
}
gcc test.c -o test
./test
Hello I used 0 vmemory
With the data segment limit still at 1KB, brk fails and the heap does not grow (glibc satisfies the request with mmap instead).
Raise the limit to 170KB:
ulimit -d 170
Run the program again:
./test
Hello I used 167936 vmemory
XII) Limits on the size of a process's stack segment (stack size)
Use ulimit to reduce the stack segment to 16, i.e. 16*1024 bytes:
ulimit -s 16
Run a command again:
ls -l /etc/
Segmentation fault (core dumped)
Trace the command with strace:
strace ls -l /etc/
It calls getrlimit; the limit of 16*1024 bytes is too small for the stack the program needs:
getrlimit(RLIMIT_STACK, {rlim_cur=16*1024, rlim_max=16*1024}) = 0
Note: on a 2.6.32 system, ls -l /etc/ does not run out of stack, but the problem can be triggered with expect, for example:
expect
Tcl_init failed: out of stack space (infinite loop?)
XIII) Limits on the virtual memory a process can use (virtual memory)
Use ulimit to limit virtual memory to 8192KB:
ulimit -v 8192
Run ls:
ls
ls: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory
ls fails while loading the libc.so.6 dynamic library, indicating insufficient memory.
Tracing ls with strace shows the following output: the mmap that maps the library fails with an out-of-memory error:
mmap(NULL, 3680296, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = -1 ENOMEM (Cannot allocate memory)
close(3) = 0
writev(2, [{"ls", 2}, {": ", 2}, {"error while loading shared libra"..., 36}, {": ", 2}, {"libc.so.6", 9}, {": ", 2}, {"failed to map segment from share"..., 40}, {": ", 2}, {"Cannot allocate memory", 22}, {"\n", 1}], 10ls: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory
XIV) Notes on the three remaining ulimit limits (file locks / max memory size / pipe size)
The file locks limit is only effective on kernels before 2.4.
The max (resident) memory size limit has no effect on many systems.
The pipe buffer size cannot be changed through ulimit: it is fixed at 8 units of 512 bytes, i.e. 4096 bytes.
Source: http://blog.sina.com.cn/s/blog_59b6af6901011ekd.html ("An in-depth analysis of Linux ulimit limits")