The C runtime maintains its own buffer for each FILE stream. By default the I/O buffer is 4 KB (the exact size is implementation-dependent), and you can change it with setbuf/setvbuf. Suppose you fread a single byte: the runtime actually issues a ReadFile for a whole 4 KB block, then copies the byte you asked for into your destination buffer. As long as later accesses do not cross that boundary, they are served from the I/O buffer without touching the OS. fwrite works the same way: data accumulates in the buffer, and WriteFile is only called once the buffer boundary is exceeded. You can call fflush to actively force the buffered data out via WriteFile, or let fclose flush passively (fclose may block while that final flush completes).

Below that, the hard disk has its own cache, managed by the disk controller. Just as you cannot bypass the processor cache, you cannot write data directly to the platters: the controller first writes into its cache and then moves the data to the disk, and it does no reordering in that step. The order in which the drive commits blocks does not matter to it, because the disk is a random-access device. Ordering optimization therefore happens one level up: after application-layer I/O is translated into low-level I/O requests, the kernel sorts the request queue. Suppose 10 requests are pending on an I/O queue. The kernel computes the physical location of each request in advance and sorts them so that, ideally, as many of the 10 as possible complete within a single revolution of the platter. In the best case, all 10 requests lie on one track and a single revolution finishes them all. In the worst case it takes 10 revolutions, because only one head can operate at a time and the 10 requests may, unluckily, be spread across 10 different platter surfaces (in which case the kernel's sorting cannot help). So it is best to keep your I/O in contiguous disk space as much as possible and avoid scattering it physically across the disk.
For that, though, you would need accurate hard-disk parameters and exact calculation. There is another catch: the benefit of caching is wiped out by sustained, high-intensity I/O, because the disk's write speed cannot keep up with the processor's stream of requests. The larger the cache, the longer it can absorb the burst. But once the cache fills up, the hardware deasserts its ready signal, the driver can no longer write to the disk, and the kernel can only let requests pile up on the I/O queue. The upper layers keep issuing requests to the kernel, which either appends them to the I/O request queue or blocks the process that initiated the I/O, until the cache drains, the hardware reasserts the ready signal, and the driver pulls more requests off the kernel's I/O request queue to refill the cache, and so on. In other words, the cache only gives you an advantage at the start, before it fills. That advantage is especially good for small bursts of I/O, which never fill the cache and so complete without being blocked or queued. In practice, what software can do about all this is quite limited, and also exhausting. Why... orz