Use of memory mapping for large files


Memory mapping of large files is something you seldom need, but recently such a requirement came up, so I am recording the process here as an introduction. The technique itself is not complex, but it may not occur to you when you need it.

For small files, an ordinary file stream works well, but for large files, say 2 GB or more, a file stream will not do, so you need the memory-mapping API instead. Even memory mapping cannot map the whole file at once, so you must map it in blocks and process a small part at a time.

Let's look at several functions (a minimal usage sketch follows the list):
CreateFile: opens the file
GetFileSize: gets the file size
CreateFileMapping: creates a file-mapping object
MapViewOfFile: maps a view of the file into memory
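
To make the call sequence concrete, here is a minimal sketch of my own (the procedure name MapWholeFile is made up; it assumes the Windows and SysUtils units and a file small enough to map in a single view):

[Delphi]
  // Minimal sketch: map an entire (small) file in one view.
  procedure MapWholeFile(const FileName: string);
  var
    hFile, hMap: THandle;
    Size: DWORD;
    P: Pointer;
  begin
    // Open the file for reading
    hFile := CreateFile(PChar(FileName), GENERIC_READ, FILE_SHARE_READ, nil,
      OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
    if hFile = INVALID_HANDLE_VALUE then Exit;
    try
      // Low 32 bits of the file size (enough for this sketch)
      Size := GetFileSize(hFile, nil);
      // Create a read-only file-mapping object covering the whole file
      hMap := CreateFileMapping(hFile, nil, PAGE_READONLY, 0, 0, nil);
      if hMap = 0 then Exit;
      try
        // Map a view of the whole file starting at offset 0
        P := MapViewOfFile(hMap, FILE_MAP_READ, 0, 0, Size);
        if P <> nil then
        try
          // ... read up to Size bytes through the pointer P here ...
        finally
          UnmapViewOfFile(P);
        end;
      finally
        CloseHandle(hMap);
      end;
    finally
      CloseHandle(hFile);
    end;
  end;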

Looking at the help for MapViewOfFile, the file offset it takes must be an integer multiple of the system allocation granularity, which on a typical machine is 64 KB (65,536 bytes). In practice you are rarely that lucky: the data you want can have any length at any position, so some extra processing is required.
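
For example, to read from an arbitrary position, one way (a sketch of my own; ReadAtArbitraryOffset is a made-up name and hMap stands for a handle returned by CreateFileMapping) is to round the offset down to a granularity boundary and skip the difference through the returned pointer:

[Delphi]
  // Sketch: read BytesWanted bytes starting at an arbitrary DesiredOffset.
  procedure ReadAtArbitraryOffset(hMap: THandle; DesiredOffset, BytesWanted: Cardinal);
  var
    SysInfo: TSystemInfo;
    Granularity, AlignedOffset, Delta: Cardinal;
    View: Pointer;
    P: PAnsiChar;
  begin
    GetSystemInfo(SysInfo);
    Granularity := SysInfo.dwAllocationGranularity;   // typically 65536 bytes
    // Round the offset down to an allocation-granularity boundary
    AlignedOffset := (DesiredOffset div Granularity) * Granularity;
    Delta := DesiredOffset - AlignedOffset;
    // Map enough extra bytes at the front to cover the rounding
    View := MapViewOfFile(hMap, FILE_MAP_READ, 0, AlignedOffset, BytesWanted + Delta);
    if View = nil then Exit;
    try
      P := PAnsiChar(View);
      Inc(P, Delta);   // P now points at the byte position we actually wanted
      // ... read BytesWanted bytes through P here ...
    finally
      UnmapViewOfFile(View);
    end;
  end;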
In this example, the task is to read length values one by one from a list (FInfoList) and then read data of each specified length, in sequence, from a large file (FSourceFileName). With a small file this is easy: read the file into a stream once and then read from it in sequence. With a large file, you have to keep changing the mapping position to get at the data you want.
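
For comparison, the small-file approach mentioned above might look roughly like this (a sketch of my own with made-up names, assuming the Classes and SysUtils units and the A:B:C record format described below):

[Delphi]
  // Sketch of the small-file case: load the whole file once, then read
  // each length from the list in sequence.
  procedure ReadSmallFile(const FileName: string; InfoList: TStringList);
  var
    Source: TMemoryStream;
    Parts: TStringList;
    Buf: array of Byte;
    I, Len: Integer;
  begin
    Source := TMemoryStream.Create;
    Parts := TStringList.Create;
    try
      Source.LoadFromFile(FileName);   // only feasible for a small file
      Parts.Delimiter := ':';
      for I := 0 to InfoList.Count - 1 do
      begin
        Parts.DelimitedText := InfoList.Strings[I];
        Len := StrToInt(Parts.Strings[1]);   // length is the second field of A:B:C
        if Len > 0 then
        begin
          SetLength(Buf, Len);
          Source.Read(Buf[0], Len);          // read the next Len bytes in sequence
          // ... process Buf here ...
        end;
      end;
    finally
      Parts.Free;
      Source.Free;
    end;
  end;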
In this example, GetSystemInfo is used to obtain the allocation granularity, and a block of ten times that granularity is mapped at a time. In the for loop, the length already read (TotalLen) plus the length about to be read is checked against the mapped range (ten times the granularity). If it would run past the end, the data left at the tail of the current view is saved, the next block is mapped, and the saved remainder is merged with the newly read data. This is a bit convoluted (maybe my thinking is just messy). The code is listed below.

[Delphi]
  // Note: TotalLen, Offset, TStream, Stream, SysInfo, BlockSize, FileSize,
  // ReadLen and Len are assumed to be fields of TGetDataThread, along with
  // FSourceFileName and FInfoList, since they are not declared locally here.
  procedure TGetDataThread.DoGetData;
  var
    FFile_Handle: THandle;
    FFile_Map: THandle;
    List: TStringList;
    ViewBase: Pointer;   // base address of the current view, needed for UnmapViewOfFile
    P: PChar;            // assumed to be a one-byte character pointer (pre-Unicode Delphi); use PAnsiChar in Delphi 2009+
    I, Interval: Integer;
  begin
    FFile_Map := 0;
    ViewBase := nil;
    try
      TotalLen := 0;
      Offset := 0;
      TStream := TMemoryStream.Create;
      Stream := TMemoryStream.Create;
      List := TStringList.Create;
      // Obtain system information
      GetSystemInfo(SysInfo);
      // Allocation granularity
      BlockSize := SysInfo.dwAllocationGranularity;
      // Open the file
      FFile_Handle := CreateFile(PChar(FSourceFileName), GENERIC_READ,
        FILE_SHARE_READ, nil, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
      if FFile_Handle = INVALID_HANDLE_VALUE then Exit;
      // Obtain the file size
      FileSize := GetFileSize(FFile_Handle, nil);
      // Create the file-mapping object
      FFile_Map := CreateFileMapping(FFile_Handle, nil, PAGE_READONLY, 0, 0, nil);
      if FFile_Map = 0 then Exit;
      // Use 10 * BlockSize as one data block for mapping. If the file is
      // smaller than that, map the entire file length directly.
      if FileSize div BlockSize > 10 then
        ReadLen := 10 * BlockSize
      else
        ReadLen := FileSize;
      for I := 0 to FInfoList.Count - 1 do
      begin
        List.Delimiter := ':';
        List.DelimitedText := FInfoList.Strings[I];
        // Get the length. The stored information has the form A:B:C, so it is split on ':'.
        Len := StrToInt(List.Strings[1]);
        Interval := StrToInt(List.Strings[2]);
        if (I = 0) or (TotalLen + Len >= ReadLen) then
        begin
          // If the length already read plus the length to be read goes past the
          // mapped block, keep the tail of the previous view so it can be merged
          // with the content of the new view.
          if I > 0 then
          begin
            Offset := Offset + ReadLen;
            // Write the leftover tail to a temporary stream
            TStream.Write(P^, ReadLen - TotalLen);
            TStream.Position := 0;
            // Release the previous view before mapping the next block
            UnmapViewOfFile(ViewBase);
          end;
          // If the unread remainder is smaller than the allocation granularity,
          // map only the remaining length.
          if FileSize - Offset < BlockSize then
            ReadLen := FileSize - Offset;
          // Map the view. P is a pointer into the mapped region.
          // Note: the third parameter (the high-order offset) is fixed at 0 here;
          // set it according to the actual situation.
          ViewBase := MapViewOfFile(FFile_Map, FILE_MAP_READ, 0, Offset, ReadLen);
          P := PChar(ViewBase);
        end;
        // If data exists in the temporary stream, merge it first.
        if TStream.Size > 0 then
        begin
          // Copy the temporary stream's data
          Stream.CopyFrom(TStream, TStream.Size);
          // Append the new data at the end to complete the merge
          Stream.Write(P^, Len - TStream.Size);
          TotalLen := Len - TStream.Size;
          // Move the pointer to the start of the next data
          Inc(P, Len - TStream.Size);
          TStream.Clear;
        end
        else
        begin
          Stream.Write(P^, Len);
          TotalLen := TotalLen + Len;
          Inc(P, Len);
        end;
        Stream.Position := 0;
        // Save the stream to a file
        Stream.SaveToFile(IntToStr(I) + '.txt');
        Stream.Clear;
      end;
    finally
      List.Free;
      Stream.Free;
      TStream.Free;
      if ViewBase <> nil then
        UnmapViewOfFile(ViewBase);
      if FFile_Map <> 0 then
        CloseHandle(FFile_Map);
      CloseHandle(FFile_Handle);
    end;
  end;
 
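For reference, the FInfoList entries used above are colon-delimited A:B:C strings; building a few by hand might look like this (the field values are made up; the loop above only reads the second field as the length and the third as the interval):

[Delphi]
  // Sketch: FInfoList entries have the form A:B:C; the code above reads
  // Strings[1] as the length and Strings[2] as the interval.
  procedure BuildInfoList(InfoList: TStringList);
  begin
    InfoList.Add('block1:1024:0');     // name:length:interval (values made up)
    InfoList.Add('block2:65536:10');
    InfoList.Add('block3:512:5');
  end;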

Reference: http://blog.csdn.net/bdmh/article/details/6369250
