Memory mapping for large files


I rarely need memory mapping for large files, but I happened to run into such a requirement, so I am recording the process here as a primer. Since the application is not complicated, there may be things I have not considered; comments and corrections are welcome.

For small files an ordinary file stream works well, but for large files, say 2 GB or more, a file stream is not enough, so we turn to the Windows API's memory-mapping functions. Even with memory mapping we cannot map the whole file at once, so we have to map it block by block and handle a small portion at a time.
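
To make the block-by-block idea concrete, here is a minimal sketch of the pattern (not the article's code). It assumes a read-only file-mapping handle created with the API calls introduced below, and a block size that is a multiple of the system allocation granularity; the Windows unit is required.

[Delphi]
procedure MapFileInBlocks(hMap: THandle; FileSize: Int64; BlockSize: Cardinal);
var
  Offset: Int64;
  ViewSize: Cardinal;
  P: Pointer;
begin
  Offset := 0;
  while Offset < FileSize do
  begin
    // Never map past the end of the file
    if FileSize - Offset < BlockSize then
      ViewSize := Cardinal(FileSize - Offset)
    else
      ViewSize := BlockSize;
    // The offset stays a multiple of the allocation granularity as long as
    // BlockSize is a multiple of it
    P := MapViewOfFile(hMap, FILE_MAP_READ,
      DWORD(Offset shr 32), DWORD(Offset and $FFFFFFFF), ViewSize);
    if P = nil then
      Break;
    try
      // ... process ViewSize bytes starting at P ...
    finally
      UnmapViewOfFile(P);
    end;
    Inc(Offset, ViewSize);
  end;
end;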

Let's look at a few functions (a minimal call sequence is sketched after the list):

CreateFile: open the file

GetFileSize: get the file size

CreateFileMapping: create a file-mapping object

MapViewOfFile: map a view of the file into memory
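
Putting these together, a minimal sketch of the basic call sequence for a file small enough to map in a single view might look like the following (the path and the procedure name are placeholders, not from the article):

[Delphi]
procedure MapWholeFile;
var
  hFile, hMap: THandle;
  FileSize: DWORD;
  P: Pointer;
begin
  // Open the file for reading; the path is a placeholder
  hFile := CreateFile(PChar('C:\data\big.dat'), GENERIC_READ, FILE_SHARE_READ, nil,
    OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
  if hFile = INVALID_HANDLE_VALUE then
    Exit;
  try
    // Low 32 bits of the size; files over 4 GB need the high DWORD as well
    FileSize := GetFileSize(hFile, nil);
    // Create a read-only mapping object covering the whole file
    hMap := CreateFileMapping(hFile, nil, PAGE_READONLY, 0, 0, nil);
    if hMap = 0 then
      Exit;
    try
      // Map the whole file at once; this only works when the file fits in the
      // process address space, which is why large files are mapped in blocks
      P := MapViewOfFile(hMap, FILE_MAP_READ, 0, 0, 0);
      if P <> nil then
      try
        // ... read FileSize bytes starting at P ...
      finally
        UnmapViewOfFile(P);
      end;
    finally
      CloseHandle(hMap);
    end;
  finally
    CloseHandle(hFile);
  end;
end;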

If you look at the help for MapViewOfFile, the file offset it maps from must be an integer multiple of the system's allocation granularity, which on a typical machine is 64 KB (65,536 bytes). In practice we rarely read at such tidy positions; we want arbitrary offsets and arbitrary lengths, so some extra handling is needed.
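
One way to handle an arbitrary offset (a sketch, not code from the article) is to round the offset down to the nearest multiple of the allocation granularity before calling MapViewOfFile and then advance the returned pointer by the difference. The helper name MapAlignedView and its parameters are illustrative:

[Delphi]
// Map a view that contains the byte range [Offset, Offset + Size).
// MapViewOfFile requires the file offset to be a multiple of the allocation
// granularity, so round it down and compensate afterwards.
// Requires the Windows unit; the names here are illustrative.
function MapAlignedView(hMap: THandle; Offset: Int64; Size: Cardinal;
  out View: Pointer; out Data: PChar): Boolean;
var
  SysInfo: TSystemInfo;
  Granularity: Cardinal;
  AlignedOffset: Int64;
  Delta: Cardinal;
begin
  GetSystemInfo(SysInfo);
  Granularity := SysInfo.dwAllocationGranularity;    // typically 64 KB
  AlignedOffset := (Offset div Granularity) * Granularity;
  Delta := Cardinal(Offset - AlignedOffset);         // distance back to the aligned start
  View := MapViewOfFile(hMap, FILE_MAP_READ,
    DWORD(AlignedOffset shr 32), DWORD(AlignedOffset and $FFFFFFFF),
    Size + Delta);                                   // map enough to cover the requested range
  Result := View <> nil;
  if Result then
    Data := PChar(View) + Delta                      // pointer to the byte at Offset
  else
    Data := nil;
end;

When finished, pass View (the value returned by MapViewOfFile) to UnmapViewOfFile, not the adjusted Data pointer.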

The task in this example is to read length values sequentially from a list (FInfoList), and for each one read that many bytes from a large file (FSourceFileName). With a small file this would be easy: read the whole file into a stream once and then walk through it sequentially. With a large file, we instead have to keep moving the mapped view to reach the data we want.

The example gets the allocation granularity with GetSystemInfo and uses 10 times that granularity as one mapped block. Inside the for loop it checks whether the amount already read from the current block (TotalLen) plus the length about to be read still fits within the current mapping (10 times the granularity). If it does, reading simply continues; if it is exceeded, the leftover data in the current view is saved, the next block is mapped, and the saved leftover is merged with the newly read data. It is a little convoluted (maybe my approach is too roundabout); the code is listed below.

[Delphi]
procedure TGetDataThread.DoGetData;
var
  FFile_Handle: THandle;
  FFile_Map: THandle;
  List: TStringList;
  ViewBase: Pointer;
  P: PChar;
  i, Interval: Integer;
begin
  // TotalLen, Offset, TStream, Stream, SysInfo, BlockSize, FileSize, ReadLen,
  // Len, FSourceFileName and FInfoList are assumed to be fields of TGetDataThread.
  FFile_Handle := INVALID_HANDLE_VALUE;
  FFile_Map := 0;
  ViewBase := nil;
  try
    TotalLen := 0;
    Offset := 0;
    TStream := TMemoryStream.Create;
    Stream := TMemoryStream.Create;
    List := TStringList.Create;
    // Get system information
    GetSystemInfo(SysInfo);
    // Allocation granularity of this machine
    BlockSize := SysInfo.dwAllocationGranularity;
    // Open the file
    FFile_Handle := CreateFile(PChar(FSourceFileName), GENERIC_READ, FILE_SHARE_READ,
      nil, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
    if FFile_Handle = INVALID_HANDLE_VALUE then Exit;
    // Get the file size
    FileSize := GetFileSize(FFile_Handle, nil);
    // Create the mapping
    FFile_Map := CreateFileMapping(FFile_Handle, nil, PAGE_READONLY, 0, 0, nil);
    if FFile_Map = 0 then Exit;
    // Use 10 * BlockSize as one mapped block; if the file is smaller than
    // 10 * BlockSize, map the whole file length directly
    if FileSize div BlockSize > 10 then
      ReadLen := 10 * BlockSize
    else
      ReadLen := FileSize;
    for i := 0 to FInfoList.Count - 1 do
    begin
      List.Delimiter := ':';
      List.DelimitedText := FInfoList.Strings[i];
      // Get the length; the entries are stored as A:B:C, so they are split on ':'
      Len := StrToInt(List.Strings[1]);
      Interval := StrToInt(List.Strings[2]);
      if (i = 0) or (TotalLen + Len >= ReadLen) then
      begin
        // First pass, or what has been read plus what is about to be read exceeds
        // the current block: keep the leftover of the current view so it can be
        // merged with the newly mapped content
        if i > 0 then
        begin
          Offset := Offset + ReadLen;
          // Write the remainder to the temporary stream
          TStream.Write(P^, ReadLen - TotalLen);
          TStream.Position := 0;
        end;
        // Release the previous view before mapping the next block
        if ViewBase <> nil then UnmapViewOfFile(ViewBase);
        // If the unread data is not even one allocation granularity long,
        // map only the remaining length
        if FileSize - Offset < BlockSize then
          ReadLen := FileSize - Offset;
        // Map the view; P points to the mapped region.
        // Note the third parameter (the high DWORD of the offset): it is always 0
        // here; set it according to your actual situation.
        ViewBase := MapViewOfFile(FFile_Map, FILE_MAP_READ, 0, Offset, ReadLen);
        P := PChar(ViewBase);
      end;
      // If there is data in the temporary stream, it has to be merged
      if TStream.Size > 0 then
      begin
        // Copy the temporary stream data...
        Stream.CopyFrom(TStream, TStream.Size);
        // ...then append the new data at the end to complete the merge
        Stream.Write(P^, Len - TStream.Size);
        TotalLen := Len - TStream.Size;
        // Move the pointer to the beginning of the next piece of data
        Inc(P, Len - TStream.Size);
        TStream.Clear;
      end
      else
      begin
        Stream.Write(P^, Len);
        TotalLen := TotalLen + Len;
        Inc(P, Len);
      end;
      Stream.Position := 0;
      // Save the stream to a file
      Stream.SaveToFile(IntToStr(i) + '.txt');
      Stream.Clear;
    end;
  finally
    if ViewBase <> nil then UnmapViewOfFile(ViewBase);
    List.Free;
    Stream.Free;
    TStream.Free;
    if FFile_Map <> 0 then CloseHandle(FFile_Map);
    if FFile_Handle <> INVALID_HANDLE_VALUE then CloseHandle(FFile_Handle);
  end;
end;

http://blog.csdn.net/bdmh/article/details/6369250
