I needed to read just a few lines from a large file in PHP, so I looked into how to do this efficiently and wrote the method below (comments added).
When a file is processed line by line, reading only the requested range of lines is naturally much faster than loading every line. PHP, however, is relatively weak in this area: even with SplFileObject the situation is not ideal, and memory pressure is still a concern.
The code is as follows:
$fp->seek($startLine - 1);
In testing, seeking to the last line of an 8 MB text file with this call used 49 KB of memory, which is acceptable. Skipping lines with fgets() on an fopen() handle used only 29 KB for the same file, so the fopen() approach wins on memory.
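Those memory figures came from my environment. A minimal sketch for reproducing the comparison yourself might look like this (the throwaway file, its size, and the variable names are made up for illustration):

```php
<?php
// Hypothetical benchmark: build a throwaway multi-line file, then measure
// how much extra memory each line-skipping strategy holds on to.
$file = tempnam(sys_get_temp_dir(), 'lines');
file_put_contents($file, str_repeat("some sample text\n", 100000));

// Strategy 1: SplFileObject::seek() straight to a late line.
$before = memory_get_usage();
$fp = new SplFileObject($file, 'rb');
$fp->seek(99999); // seek() counts lines from 0
$splBytes = memory_get_usage() - $before;
unset($fp);

// Strategy 2: fopen() plus an fgets() loop that skips lines one by one.
$before = memory_get_usage();
$fh = fopen($file, 'rb');
for ($i = 0; $i < 99999; ++$i) {
    fgets($fh);
}
$fgetsBytes = memory_get_usage() - $before;
fclose($fh);
unlink($file);

echo "SplFileObject: {$splBytes} bytes, fgets skip: {$fgetsBytes} bytes\n";
```

Exact numbers vary by PHP version and platform; only the relative comparison between the two strategies is meaningful.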
The code is as follows:
function getFileLines($filename, $startLine = 1, $endLine = 50, $method = 'rb') {
    $content = array();
    if (version_compare(PHP_VERSION, '5.1.0', '>=')) { // SplFileObject requires PHP >= 5.1.0
        $count = $endLine - $startLine;
        $fp = new SplFileObject($filename, $method);
        $fp->seek($startLine - 1); // seek() counts lines from 0, so this lands on line $startLine
        for ($i = 0; $i <= $count; ++$i) {
            $content[] = $fp->current(); // current() returns the current line
            $fp->next(); // advance to the next line
        }
    } else { // PHP < 5.1: fall back to fopen()/fgets()
        $fp = fopen($filename, $method);
        if (!$fp)
            return 'error: can not read file';
        for ($i = 1; $i < $startLine; ++$i) { // skip the first $startLine - 1 lines
            fgets($fp);
        }
        for (; $i <= $endLine; ++$i) {
            $content[] = fgets($fp); // collect the requested lines
        }
        fclose($fp);
    }
    return array_filter($content); // array_filter() drops false, null and '' entries
}
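To make the behavior concrete, here is a small usage sketch. The sample file and its contents are invented for illustration, and the function body is repeated from above so the snippet runs on its own:

```php
<?php
// getFileLines() as defined in the article, repeated so this snippet is standalone.
function getFileLines($filename, $startLine = 1, $endLine = 50, $method = 'rb') {
    $content = array();
    if (version_compare(PHP_VERSION, '5.1.0', '>=')) {
        $count = $endLine - $startLine;
        $fp = new SplFileObject($filename, $method);
        $fp->seek($startLine - 1);
        for ($i = 0; $i <= $count; ++$i) {
            $content[] = $fp->current();
            $fp->next();
        }
    } else {
        $fp = fopen($filename, $method);
        if (!$fp)
            return 'error: can not read file';
        for ($i = 1; $i < $startLine; ++$i) {
            fgets($fp);
        }
        for (; $i <= $endLine; ++$i) {
            $content[] = fgets($fp);
        }
        fclose($fp);
    }
    return array_filter($content);
}

// Hypothetical sample file: ten lines reading "line 1" .. "line 10".
$file = tempnam(sys_get_temp_dir(), 'demo');
$rows = array();
for ($n = 1; $n <= 10; ++$n) {
    $rows[] = "line {$n}";
}
file_put_contents($file, implode("\n", $rows) . "\n");

// Fetch lines 3 through 5 (inclusive).
$lines = array_values(getFileLines($file, 3, 5));
unlink($file);

echo implode('', $lines); // prints "line 3" through "line 5", one per line
```

Note that both branches keep each line's trailing newline, so the caller may want to rtrim() each entry.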
The results are good, and the SplFileObject class makes this kind of line-oriented access noticeably cleaner.