The project I am currently working on is porting an application from Solaris to Linux. The code frequently uses popen and reads its output into an 8192-byte array with no overflow check.
At first this looked like a simple memory out-of-bounds bug, but the more I investigated popen and pipes, the more doubts arose.
1) The problem arises
popen uses a pipe to record the output of the invoked command, so it would seem that the maximum number of bytes delivered through popen is necessarily bounded by the pipe size.
Use ulimit -a on Linux to view the system limits:
$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 16204
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1024
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
And the system limits on Solaris:
bash-3.00$ ulimit -a
core file size        (blocks, -c) unlimited
data seg size         (kbytes, -d) unlimited
file size             (blocks, -f) unlimited
open files                    (-n) 256
pipe size          (512 bytes, -p) 10
stack size            (kbytes, -s) 8192
cpu time             (seconds, -t) unlimited
max user processes            (-u) 29995
virtual memory        (kbytes, -v) unlimited
You can see that the pipe size on the Linux system is 512 bytes * 8 = 4096 bytes, while on the Solaris system it is 512 bytes * 10 = 5120 bytes.
Both 4096 and 5120 are far less than 8192. Therefore, it would seem there is no possibility of a memory out-of-bounds problem when reading popen output into an 8192-byte buf.
2) The problem deepens
ulimit seemed to give the right answer, but the actual test came as a surprise!
Test procedure:
test_popen.c:

#include <stdio.h>

int main()
{
    FILE *fp;
    char buf[128];

    fp = popen("./test_print", "r");
    if (fp == NULL) {
        printf("ng\n");
        return -1;
    }
    fgets(buf, sizeof(buf), fp);
    pclose(fp);
    return 0;
}
test_print.c:

#include <stdio.h>

int main()
{
    unsigned int i;

    for (i = 0; i < 0xffffffff; i++)
        printf("a");
    return 0;
}
Compile test_popen.c and test_print.c into test_popen and test_print respectively, then run test_popen: the program runs normally, even though test_print tries to output nearly 4 GB of characters!
3) Exploring the principle
Read up on pipes with man 7 pipe (my man-pages version is 1.6f):
PIPE_BUF
    POSIX.1-2001 says that write(2)s of less than PIPE_BUF bytes must be atomic: the output data is written to the pipe as a contiguous sequence. Writes of more than PIPE_BUF bytes may be nonatomic: the kernel may interleave the data with data written by other processes. POSIX.1-2001 requires PIPE_BUF to be at least 512 bytes. (On Linux, PIPE_BUF is 4096 bytes.) The precise semantics depend on whether the file descriptor is nonblocking (O_NONBLOCK), whether there are multiple writers to the pipe, and on n, the number of bytes to be written.
PIPE_BUF really is 4096, but PIPE_BUF is the maximum size of an atomic write; 4 KB is just one page size, not the pipe size.
Searching the kernel source on Google, the 3.10 kernel contains the following:
/* Differs from PIPE_BUF in that PIPE_SIZE is the length of the actual
   memory allocation, whereas PIPE_BUF makes atomicity guarantees.  */
#define PIPE_SIZE   PAGE_SIZE
This clearly illustrates the difference between PIPE_BUF and PIPE_SIZE.
Further investigation showed that before kernel 2.6.11 the pipe capacity was 4 KB; since then it has been 64 KB. But even so, why can nearly 4 GB of characters be written?
At this point I thought of the popen source code. Looking at the BSD implementation of popen, apart from calling pipe(), it is nothing more than ordinary system calls.
4) Conclusion
A program that relies on the pipe characteristics of Linux is not a good design; it can easily run into trouble. It is best to honestly do explicit bounds checking on the buffer and reduce coupling with the system.