Array card cache write mechanisms: the difference between Write-through and Write-back


Write-through and write-back are the two ways an array card can use its cache. When write-through is selected, the system's disk writes do not use the array card's cache at all; data goes directly to the disk. With write-back, the array cache sits between the system and the disk: the system writes to the cache first, and the data is then written from the cache to the disk.
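
As a rough illustration, the sketch below (Python, with hypothetical class and method names, not a real controller API) shows the two write paths: write-through hands every write straight to the disk, while write-back stores the data in the controller cache and writes it to disk later.

```python
# Minimal sketch of the two write paths (illustrative names, not a real controller API).

class Disk:
    def __init__(self):
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data          # the slow, persistent write


class WriteThroughController:
    """Every write goes straight to the disk; the cache is not used for writes."""
    def __init__(self, disk):
        self.disk = disk

    def write(self, lba, data):
        self.disk.write(lba, data)       # caller waits for the disk


class WriteBackController:
    """Writes land in the cache first; dirty data is flushed to disk later."""
    def __init__(self, disk):
        self.disk = disk
        self.cache = {}                  # dirty (not-yet-on-disk) data

    def write(self, lba, data):
        self.cache[lba] = data           # fast: caller does not wait for the disk

    def flush(self):
        for lba, data in self.cache.items():
            self.disk.write(lba, data)   # cache-to-disk write ("flushing")
        self.cache.clear()
```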

When you configure the array, it is fine to keep the default if you do not specify a mode explicitly; the system then defaults to the direct-to-disk (write-through) mode.

Write Caching or Write-through

Write-through means that write operations do not use the cache at all; data is always written directly to the disk. Turning off write caching frees the entire cache for read operations (the cache is otherwise shared by read and write operations).

Write caching can improve the performance of write operations. Data is not written directly to the disk but to the cache, and from the application's point of view this completes much faster than waiting for a disk write to finish, so write performance improves. The controller later writes the cached data that has not yet reached the disk out to the disk. On the surface, write caching offers better read and write performance than write-through, but the real benefit depends on the disk access pattern and the disk load.

Write-back (write caching) is usually faster when the disk load is light. When the load is heavy, every piece of data written to the cache must be written to the disk almost immediately to free cache space for the new data about to arrive, and the controller would actually run faster if the data were written directly to the disk. Under heavy load, therefore, writing data to the cache first reduces throughput.
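
To make the load argument concrete, here is a small back-of-the-envelope sketch (Python; the timing constants are assumptions chosen for illustration, not measurements) of the effective write latency in each mode.

```python
# Illustrative latency comparison; the microsecond figures are assumed, not measured.

CACHE_WRITE_US = 10      # assumed time to place data in the controller cache
DISK_WRITE_US = 5000     # assumed time for one disk write

def write_through_latency():
    """The caller always waits for the disk."""
    return DISK_WRITE_US

def write_back_latency(cache_has_free_space):
    """Light load: return after the cache write. Heavy load: wait for a flush first."""
    if cache_has_free_space:
        return CACHE_WRITE_US
    # Heavy load: a flush must free cache space before the new data can be cached,
    # so the caller effectively pays for a disk write plus the cache write.
    return DISK_WRITE_US + CACHE_WRITE_US

print(write_through_latency())       # 5000
print(write_back_latency(True))      # 10   -> write-back wins when the cache has room
print(write_back_latency(False))     # 5010 -> write-back loses when the cache is saturated
```

Under light load the write-back path returns in roughly the time of a cache write; under heavy load it pays for a flush plus the cache write, which is why direct-to-disk writes can come out ahead.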

Starting and stopping cache flushing levels

These two settings affect how the controller handles cached data that has not yet been written to disk, and they take effect only in write-back mode. Writing cached data to disk is called flushing. You can configure the start and stop cache flushing levels as percentages of the total cache size. When the amount of data in the cache that has not been written to disk reaches the start flushing level, the controller begins flushing (writing from the cache to the disk). Flushing stops when the amount of unwritten data in the cache falls below the stop flushing level. The controller always flushes the oldest cached data first, and any data that has sat in the cache unwritten for more than 20 seconds is flushed automatically.

The typical start flushing level is 80%, and the stop flushing level is usually also set to 80%. In other words, the controller does not allow more than 80% of the cache to be used for write-back caching, but it keeps the cache as full as it can up to that level. With this setting, more data can be held in cache memory, which helps write performance but comes at the expense of data protection. If you need stronger data protection, use lower start and stop values. By tuning these two parameters you can adjust the read and write performance of the cache. Testing shows that performance is better when the start and stop flushing levels are close to each other; if the stop level is far below the start level, flushing can cause disk congestion.
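
The following is a minimal sketch of the start/stop hysteresis described above, assuming an 80%/80% setting and the 20-second age rule; the class and function names are hypothetical, not controller firmware.

```python
# Sketch of start/stop flushing-level hysteresis (illustrative only).
import time

START_FLUSH_PCT = 80     # begin flushing when dirty data reaches 80% of the cache
STOP_FLUSH_PCT = 80      # stop flushing once dirty data drops below 80%
MAX_DIRTY_AGE_S = 20     # dirty data older than 20 seconds is flushed regardless

class WriteBackCache:
    def __init__(self, capacity_blocks, disk_write):
        self.capacity = capacity_blocks
        self.disk_write = disk_write     # callback that performs the real disk write
        self.dirty = {}                  # lba -> (data, time written)

    def dirty_pct(self):
        return 100.0 * len(self.dirty) / self.capacity

    def write(self, lba, data):
        self.dirty[lba] = (data, time.time())
        if self.dirty_pct() >= START_FLUSH_PCT:
            self.flush_down_to(STOP_FLUSH_PCT)

    def flush_down_to(self, stop_pct):
        # Flush the oldest dirty data first until below the stop level.
        for lba in sorted(self.dirty, key=lambda k: self.dirty[k][1]):
            if self.dirty_pct() < stop_pct:
                break
            data, _ = self.dirty.pop(lba)
            self.disk_write(lba, data)

    def flush_aged(self):
        # Periodic task: flush anything that has been dirty for over MAX_DIRTY_AGE_S.
        now = time.time()
        for lba in list(self.dirty):
            data, written_at = self.dirty[lba]
            if now - written_at > MAX_DIRTY_AGE_S:
                del self.dirty[lba]
                self.disk_write(lba, data)

# Example usage with a stand-in "disk":
disk = {}
def disk_write(lba, data):
    disk[lba] = data

cache = WriteBackCache(capacity_blocks=100, disk_write=disk_write)
for i in range(85):
    cache.write(i, b"x")    # crossing 80% dirty triggers flushing back below 80%
```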

Cache Block Size

This value is the size of the cache allocation unit, which can be 4 KB or 16 KB. Choosing the right value can significantly improve cache performance.

If the cache block size is set to 16 KB but the application's accesses are no larger than 8 KB, each access uses only part of a cache block: data of 8 KB or less still occupies a full 16 KB cache block, so only 50% of the cache capacity is used effectively, which degrades performance. 4 KB is therefore appropriate for random I/O and small data transfers. On the other hand, for sequential I/O with a large segment size, 16 KB is the better choice. A larger cache block size means fewer cache blocks to manage, which shortens cache-management latency; in addition, for the same amount of data, a larger cache block size requires fewer cache data transfers.
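
The utilization argument is simple arithmetic; the sketch below (assumed I/O and block sizes, purely illustrative) shows how much of each occupied cache block actually holds useful data.

```python
# Illustrative arithmetic for cache block utilization (assumed sizes, not vendor figures).
import math

def cache_utilization(io_size_kb, cache_block_kb):
    """Fraction of the occupied cache blocks that holds useful data."""
    blocks_needed = math.ceil(io_size_kb / cache_block_kb)
    return io_size_kb / (blocks_needed * cache_block_kb)

print(cache_utilization(8, 16))   # 0.5 -> an 8 KB access in 16 KB blocks wastes half the block
print(cache_utilization(8, 4))    # 1.0 -> the same access fills 4 KB blocks completely
print(cache_utilization(64, 16))  # 1.0 -> large sequential I/O suits the 16 KB block size
```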
