Objective
Today is the first day of the May Day holiday. It should be a day for going out and having fun, but for a developer drifting around Beijing, getting away is hard: the fun places are far, and the nearby ones feel boring. So I spent some idle time writing this article, summarizing the program I wrote yesterday.
Yesterday afternoon a friend came to me with a requirement: read a TXT file several gigabytes in size and write its contents into a database. Reading it with ordinary IO would naturally cause an OutOfMemoryError, so I decided to use NIO. To improve speed, multithreading was also needed.
Below I introduce the implementation ideas and the related knowledge points.
Content
1. File partitioning
To make full use of multithreaded reading, the file must be divided into multiple regions, one per thread, so we need an algorithm to compute the start and end position of each thread's region. The average read length per thread is simply (file length / number of threads). However, since the file is plain text, it must be processed line by line: if a split point falls in the middle of a line, that line would be split across two threads and handled in two pieces, which is not acceptable. So the end position of each region must land on a line break. The first region starts at 0 and its tentative end is (file length / number of threads); if the byte at that position is not a newline, advance the end by 1 until it is. Once the end of the first region is fixed, the start of the second region follows naturally; apply the same rule to find its end, then the third region, the fourth, and so on.
In the algorithm above, fixing the end position of the first region gives us the start position of the second, fixing the end of the second gives us the start of the third, and so on. Given this pattern, it is natural to solve it with recursion (or an equivalent loop). (See the source code for details.)
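The partitioning idea above can be sketched as follows. This is a minimal illustration, not the author's actual code: the class and method names (`FilePartitioner`, `Segment`, `partition`) are my own, and I use an iterative loop rather than recursion, which computes the same boundaries.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.ArrayList;
import java.util.List;

// Sketch of the partitioning idea: each tentative segment end is pushed
// forward until it lands on a newline, so no line is split between threads.
public class FilePartitioner {

    // An inclusive [start, end] byte range for one thread to read.
    static final class Segment {
        final long start, end;
        Segment(long start, long end) { this.start = start; this.end = end; }
        @Override public String toString() { return "[" + start + ", " + end + "]"; }
    }

    static List<Segment> partition(RandomAccessFile file, int threads) throws IOException {
        long length = file.length();
        long chunk = length / threads;               // average bytes per thread
        List<Segment> segments = new ArrayList<>();
        long start = 0;
        for (int i = 0; i < threads && start < length; i++) {
            long end = (i == threads - 1) ? length - 1 : start + chunk - 1;
            // Advance the end position until it sits on a line break (or EOF).
            while (end < length - 1) {
                file.seek(end);
                if (file.read() == '\n') break;
                end++;
            }
            segments.add(new Segment(start, end));
            start = end + 1;                          // next region begins right after
        }
        return segments;
    }

    public static void main(String[] args) throws IOException {
        java.io.File tmp = java.io.File.createTempFile("partition", ".txt");
        tmp.deleteOnExit();
        try (java.io.FileWriter w = new java.io.FileWriter(tmp)) {
            for (int i = 0; i < 10; i++) w.write("line-" + i + "\n"); // 10 lines of 7 bytes
        }
        try (RandomAccessFile raf = new RandomAccessFile(tmp, "r")) {
            System.out.println(partition(raf, 3));
        }
    }
}
```

Note that every segment's end byte is a `\n` (or the end of the file), so each thread can safely treat its region as a whole number of lines.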
2. Memory file mapping
Briefly, what memory file mapping is:
Memory file mapping simply maps a file to an address range in memory. To understand it, first consider the ordinary way of reading a file. Memory is divided into kernel space and user space. When an application reads a file, it triggers a system call: the kernel first reads the data into kernel space, then copies it into the application's user space for the application to use. The ordinary path therefore involves a copy from kernel space to user space. With memory file mapping, the file is instead mapped to an address range (the data itself is not loaded at mapping time), and the address the application reads from is backed by that mapping. When the application reads an address whose data has not yet been loaded, the system loads the data from the file into physical memory on demand, and the application can then read it directly. This eliminates the copy from kernel space to user space, so reading is faster.
In my implementation of reading large files, I used Java's memory-mapping API, so content is loaded into memory only when the corresponding address is actually read; there is no need to load everything at once.
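As a sketch of what using Java's memory-mapping API looks like, the snippet below maps a small file read-only with `FileChannel.map` and reads it back. In the multithreaded scenario described above, each worker thread would map only its own [start, end] region; the file name and structure here are illustrative, not the author's actual code.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;

// Minimal sketch of NIO memory file mapping: map a byte range of a file
// into memory and read it. Pages are faulted in lazily by the OS, so
// nothing is copied through user space up front.
public class MappedRead {
    public static void main(String[] args) throws IOException {
        java.io.File tmp = java.io.File.createTempFile("mapped", ".txt");
        tmp.deleteOnExit();
        try (java.io.FileWriter w = new java.io.FileWriter(tmp)) {
            w.write("hello\nworld\n");
        }
        try (RandomAccessFile raf = new RandomAccessFile(tmp, "r");
             FileChannel channel = raf.getChannel()) {
            // Map bytes [0, length) read-only; the data is loaded on demand
            // when the buffer is first accessed, not at map() time.
            MappedByteBuffer buffer =
                channel.map(FileChannel.MapMode.READ_ONLY, 0, raf.length());
            byte[] bytes = new byte[buffer.remaining()];
            buffer.get(bytes);
            System.out.print(new String(bytes, StandardCharsets.UTF_8));
        }
    }
}
```

A per-thread variant would pass that thread's region start as the `position` argument and the region length as the `size` argument of `map()`.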
Summary
The above covers the main ideas and technical points I used. Some parts may not be expressed clearly; experts, please go easy on me ^_^
For the implementation, please refer to the code; the link is below (Java read large files).
Java Multithreading read large files