Cache memory is a memory chip on the hard disk's controller board with an extremely fast access speed; it serves as the buffer between the disk's internal storage and its external interface. Because the internal transfer speed of the hard disk differs from the transfer speed of the external interface, the cache acts as a buffer between the two. The size and speed of the cache are important factors that directly affect the hard disk's transfer rate and can greatly improve its overall performance. When the hard disk accesses fragmented data, data must be exchanged constantly between the disk and memory; with a large cache, that fragmentary data can be held temporarily, which reduces the load on the external system and improves the data transfer speed.
Hard disk caching primarily serves three functions.
The first is read-ahead (prefetching). When the hard disk begins reading data under CPU command, the controller chip on the drive directs the head to also read the data in the cluster or clusters following the one currently being read into the cache (because data stored on a hard disk tends to be fairly contiguous, the read hit ratio is high). When the data in those following clusters is needed, the hard disk does not have to read the platter again; it can deliver the data directly from the cache. Since the cache is much faster than the head's read/write speed, this yields a significant performance improvement.
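To make the read-ahead idea concrete, here is a minimal sketch in Python that simulates a controller prefetching the next few clusters on every cache miss. The class names, the cluster layout, and the prefetch depth are illustrative assumptions for this example, not any vendor's firmware.

```python
# A minimal sketch of read-ahead (prefetch) caching, assuming data is laid
# out roughly contiguously on the platter.  All names here are illustrative.

PREFETCH_CLUSTERS = 4      # how many extra clusters to read per miss


class Disk:
    """Pretend platter: each read_run() is one slow seek plus a sequential read."""

    def __init__(self, clusters):
        self.clusters = clusters       # list of cluster payloads
        self.seeks = 0                 # count of physical accesses

    def read_run(self, start, end):
        self.seeks += 1
        return self.clusters[start:end + 1]


class ReadAheadCache:
    def __init__(self, disk):
        self.disk = disk
        self.cache = {}                # cluster number -> data

    def read(self, n):
        if n not in self.cache:        # miss: fetch this cluster and the next few
            last = min(n + PREFETCH_CLUSTERS, len(self.disk.clusters) - 1)
            for c, data in zip(range(n, last + 1), self.disk.read_run(n, last)):
                self.cache[c] = data
        return self.cache[n]           # served from the fast cache


if __name__ == "__main__":
    disk = Disk([f"cluster-{i}" for i in range(16)])
    cache = ReadAheadCache(disk)
    for i in range(8):                 # a sequential read of 8 clusters
        cache.read(i)
    print("physical accesses:", disk.seeks)   # 2 instead of 8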
The second is caching write operations. When the hard drive receives an instruction to write data, the data is not written to the platter immediately. Instead it is stored temporarily in the cache, and the drive sends a "data written" signal to the system; the system assumes the write is complete and moves on to its next task, while the drive writes the cached data to the platter during idle periods (when it is not reading or writing). Although this improves write performance, it inevitably introduces a safety risk: if power is lost while the data is still only in the cache, that data is lost. Hard drive vendors naturally have a solution for this: when power fails, the head uses the remaining momentum to write the cached data to a staging area away from track 0, and on the next start-up the data is written to its real destination.

The third is temporarily storing recently accessed data. Some data is needed frequently, and the hard disk's cache keeps the data that is read most often so that it can be served directly from the cache the next time it is requested.
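As a rough illustration of the second function, write caching, the following Python sketch acknowledges a write as soon as it lands in the cache and flushes it to the "platter" later. The WriteBackCache class and its flush() method are assumptions made for this example, not a drive's actual firmware interface.

```python
# A rough sketch of write-back caching: the drive acknowledges a write as
# soon as it reaches the cache and flushes to the platter later, during
# idle time.  Class and method names are assumptions for illustration only.


class Platter:
    def __init__(self):
        self.sectors = {}              # sector number -> data (the "real" disk)

    def write(self, sector, data):     # the slow, physical write
        self.sectors[sector] = data


class WriteBackCache:
    def __init__(self, platter):
        self.platter = platter
        self.dirty = {}                # writes not yet on the platter

    def write(self, sector, data):
        self.dirty[sector] = data
        return "data written"          # signalled immediately; system moves on

    def flush(self):
        """Called when the drive is idle (or before power-down)."""
        for sector, data in self.dirty.items():
            self.platter.write(sector, data)
        self.dirty.clear()


if __name__ == "__main__":
    platter = Platter()
    cache = WriteBackCache(platter)
    print(cache.write(7, b"hello"))    # returns at cache speed
    # If power were lost here, sector 7 would still be missing from the
    # platter -- the safety risk described above.
    cache.flush()                      # idle-time write to the platter
    print(platter.sectors[7])
```

The window between write() returning and flush() completing is exactly the power-loss risk described above.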
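The third function amounts to keeping recently or frequently read data around, much like a least-recently-used (LRU) cache. Below is a small sketch using Python's collections.OrderedDict; the capacity and the eviction policy are illustrative choices, not a drive's actual caching algorithm.

```python
# A small LRU-style cache for recently read sectors, standing in for the
# drive keeping frequently accessed data in its cache.
from collections import OrderedDict


class RecentDataCache:
    def __init__(self, capacity, read_from_platter):
        self.capacity = capacity
        self.read_from_platter = read_from_platter   # slow fallback
        self.entries = OrderedDict()                 # sector -> data, oldest first

    def read(self, sector):
        if sector in self.entries:
            self.entries.move_to_end(sector)         # mark as recently used
            return self.entries[sector]              # served from cache
        data = self.read_from_platter(sector)        # cache miss: slow path
        self.entries[sector] = data
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)         # evict least recently used
        return data


if __name__ == "__main__":
    cache = RecentDataCache(capacity=2,
                            read_from_platter=lambda s: f"sector-{s}")
    cache.read(1)          # miss, goes to platter
    cache.read(1)          # hit, served from cache
    cache.read(2)          # miss
    cache.read(3)          # miss, evicts sector 1
    print(list(cache.entries))   # [2, 3]
```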