Please follow my Douban note: http://www.douban.com/note/484517776/
When incrementally reading a large file, it is very wasteful to re-traverse every line you have already read, or to load the whole file into memory and look up the old portion by index. Many technical blogs online suggest a for loop over readline() with a counter to skip to the new part; this is a poor approach, because if the file is large the traversal takes far too long.
First we need to understand the basics of what a file handle is and how the file pointer behind it works.
The principle is this: in the Linux kernel, the open-file struct has an f_pos field that records the file's current read position; through a series of VFS mappings this resolves to the location on disk, so repositioning is direct and very fast.
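To make the kernel-offset idea concrete, here is a minimal sketch using the os module's low-level calls, which wrap the same system calls. The file name is hypothetical and only used for illustration; os.lseek() simply moves the per-descriptor offset (f_pos) without scanning any data.

```python
import os

path = "demo.txt"  # hypothetical temp file for illustration
with open(path, "w") as f:
    f.write("hello world\n")

fd = os.open(path, os.O_RDONLY)
os.lseek(fd, 6, os.SEEK_SET)   # kernel moves f_pos straight to byte 6
data = os.read(fd, 5)          # read 5 bytes from that offset
os.close(fd)
os.remove(path)
print(data)                    # b'world'
```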
Below is practical Python code. The core functions are tell() and seek(), which are backed by the lseek system call.
Three modes of seek():
(1) f.seek(p, 0): move to byte p from the start of the file (absolute position)
(2) f.seek(p, 1): move p bytes relative to the current position
(3) f.seek(p, 2): move p bytes relative to the end of the file
tell():
Returns the current read position in the file.
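A small sketch of the three whence modes together with tell(). Note that in Python 3, relative seeks (whence 1 and 2 with a nonzero offset) require the file to be opened in binary mode; the file name here is hypothetical.

```python
import os

path = "seek_demo.txt"  # hypothetical file name
with open(path, "wb") as f:
    f.write(b"0123456789")

f = open(path, "rb")
f.seek(3, 0)         # whence=0: absolute, byte 3 from the start
pos_abs = f.tell()   # 3
f.seek(2, 1)         # whence=1: 2 bytes past the current position
pos_rel = f.tell()   # 5
f.seek(-1, 2)        # whence=2: 1 byte before the end of the file
pos_end = f.tell()   # 9
last = f.read(1)     # b'9'
f.close()
os.remove(path)
```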
Code:
#!/usr/bin/python
fd = open("test.txt", "r")   # get a file handle
for i in range(3):           # read three lines
    fd.readline()
label = fd.tell()            # record the position reached
fd.close()                   # close the file

# read the file again
fd = open("test.txt", "r")
fd.seek(label, 0)            # move the file pointer back to the recorded position
fd.readline()                # continue reading from where we left off
fd.close()
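The same idea can be wrapped into a small reusable function. This is a sketch under my own naming; read_new_lines and the log file name are hypothetical, and binary mode is used so the returned offset is an exact byte position.

```python
import os

def read_new_lines(path, offset):
    """Return the lines appended since `offset`, plus the new offset."""
    with open(path, "rb") as f:
        f.seek(offset, 0)        # jump straight to the saved position
        lines = f.readlines()    # only the bytes after `offset` are read
        return lines, f.tell()   # remember where we stopped

# hypothetical usage: simulate a growing log file
log = "grow_demo.log"
with open(log, "wb") as f:
    f.write(b"line1\nline2\n")

first, pos = read_new_lines(log, 0)   # initial pass reads everything
with open(log, "ab") as f:
    f.write(b"line3\n")               # the file grows
new, pos = read_new_lines(log, pos)   # only the appended line is read
os.remove(log)
```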
Follow-up: someone asked me how to get the line number of the newly added part of such a large file. My idea is:
Method 1:
Count the '\n' characters in the bytes read.
Method 2:
Count lines once from the beginning with a for loop over fd.readline(); afterwards, for the changed (appended) part obtained with the seek/tell technique above, run fd.readline() in a for loop again and add that count to the running total.
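Method 1 can be sketched as follows: read the appended region in fixed-size chunks and count the b'\n' bytes, which avoids splitting the data into line objects. The function name, chunk size, and file name are my own assumptions for illustration.

```python
import os

def count_new_lines(path, start, chunk_size=1 << 16):
    """Count '\n' bytes from `start` to EOF, reading in chunks."""
    total = 0
    with open(path, "rb") as f:
        f.seek(start, 0)                 # skip straight past the old data
        while True:
            chunk = f.read(chunk_size)
            if not chunk:                # EOF reached
                break
            total += chunk.count(b"\n")  # count line endings in this chunk
    return total

# hypothetical usage
path = "count_demo.txt"
with open(path, "wb") as f:
    f.write(b"a\nb\nc\n")

n = count_new_lines(path, 2)  # count newlines from byte 2 onward
os.remove(path)
```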
Python incremental read for large files