Python offers several ways to read a file line by line. Here are four, compared on speed and memory use.
1: readline()
file = open("sample.txt")
while 1:
    line = file.readline()
    if not line:
        break
    pass  # do something with the line
file.close()
Reading the file one line at a time like this is obviously slow, but it uses very little memory. In a test against a 10-million-line sample.txt, this method read about 32,000 lines per second.
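For reference, here is a minimal timing sketch for reproducing this kind of measurement. It assumes sample.txt has already been generated; the helper name benchmark_readline and the output format are hypothetical, not from the original test:

import time

def benchmark_readline(path="sample.txt"):
    # Time the readline() loop and report throughput in lines per second.
    f = open(path)
    count = 0
    start = time.time()
    while 1:
        line = f.readline()
        if not line:
            break
        count += 1
    f.close()
    elapsed = time.time() - start
    print("%d lines in %.2f s, %.0f lines/s" % (count, elapsed, count / elapsed))

benchmark_readline()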
2: fileinput
import fileinput

for line in fileinput.input("sample.txt"):
    pass  # do something with the line
This version is simpler to write, but in the same test it read only about 13,000 lines per second, less than half the speed of the previous method.
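fileinput's appeal is convenience rather than raw speed: the same loop can chain several files into one stream and track the current file name and line number. A small sketch, where the second file name sample2.txt is made up for illustration:

import fileinput

# Iterate over several files as one stream; fileinput tracks which
# file and which line the loop is currently on.
for line in fileinput.input(["sample.txt", "sample2.txt"]):
    if fileinput.isfirstline():
        print("now reading %s" % fileinput.filename())
    pass  # do something with the line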
3: readlines()
file = open("sample.txt")
while 1:
    lines = file.readlines(100000)
    if not lines:
        break
    for line in lines:
        pass  # do something with the line
file.close()
In the same test, this method read about 96,900 lines per second! That is three times the speed of the first method and seven times that of the second.
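The 100000 argument is a size hint: readlines() returns once it has collected roughly that many bytes of complete lines, so each batch stays small in memory while amortizing the per-line overhead. Repeating the chunked loop at every call site is awkward, so one way to package it, sketched here with the hypothetical helper name read_in_chunks, is a generator:

def read_in_chunks(path, sizehint=100000):
    # Yield lines one by one, but pull them from disk in large
    # readlines() batches so the per-line overhead stays low.
    f = open(path)
    while 1:
        lines = f.readlines(sizehint)
        if not lines:
            break
        for line in lines:
            yield line
    f.close()

for line in read_in_chunks("sample.txt"):
    pass  # do something with the line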
4: file iterator
Iterating over the file object itself also reads only one line at a time. For large files, the loop should look like this:
file = open("sample.txt")
for line in file:
    pass  # do something with the line
file.close()
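In modern Python the same iterator pattern is normally wrapped in a with block, which closes the file even if the loop body raises an exception. This is a restyling of the snippet above rather than a separate method:

with open("sample.txt") as f:
    for line in f:
        pass  # do something with the line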