Detailed Description of Cache Parameter Settings in DS4000 (FAStT Series) Storage Manager

Source: Internet
Author: User
Tags: prefetch

Products: FAStT and DS4000
Host platforms: pSeries, iSeries, zSeries, xSeries, Sun, HP, ...
Operating systems: AIX, Sun Solaris, HP-UX, Linux

You can use the FAStT Storage Manager tool to set various cache parameters. These settings directly affect the performance and data availability of the FAStT storage server; each parameter is described below.
Cache memory is volatile memory (RAM) on the controller used to hold data temporarily, and it is much faster to access than disk. The cache serves both read and write operations, and effective use of the RAID controller's cache lets the FAStT storage server deliver better performance.
Using the FAStT Storage Manager tool, you can set the following cache parameters:
Read caching
Cache block size
Cache read-ahead multiplier
Write caching or write-through mode
Write cache mirroring (enabled or disabled)
Start and stop cache flushing levels
Unwritten cache age parameter
Storage Manager assigns default values for these parameters when a logical drive is created, and they can be adjusted afterwards.
These parameter settings directly affect the performance and data availability of the FAStT storage server. Performance and availability usually conflict: achieving the best performance generally means sacrificing some availability, and vice versa. By default, read caching and write caching are enabled for all logical drives, and all written data is mirrored between the caches of the two controllers. Write caching is available only when the controller battery is fully charged. Read-ahead is generally not enabled for logical drives.


Read caching

Enabling the read cache carries no risk of data loss. In rare cases you may want to disable it in order to free cache memory for other logical drives.


Read-ahead multiplier

This parameter affects read performance, and an incorrect setting can have a large negative impact. It controls how many additional consecutive data blocks are stored in the cache after a read request.
For random I/O workloads this value should clearly be zero; otherwise every read request would needlessly prefetch extra data blocks that are rarely used, hurting performance. For sequential I/O workloads, a value of 1 to 4 is usually appropriate, with the exact choice depending on the environment. With this setting, a read request triggers the prefetch of several consecutive data blocks into the cache, accelerating subsequent accesses: the same amount of data is delivered with fewer disk I/Os, which optimizes performance under sequential I/O workloads. If the value is set too high, the cache fills with unneeded prefetched data and overall performance declines. Use the Performance Monitor to observe the cache hit ratio and find an appropriate value.
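The effect of the read-ahead multiplier can be sketched with a toy simulation. This is a hypothetical model for illustration, not the actual DS4000 prefetch algorithm: on each cache miss, the controller is assumed to load the requested block plus `multiplier` consecutive blocks.

```python
import random

# Toy model of cache read-ahead: after each miss, prefetch the block
# plus `multiplier` consecutive blocks (hypothetical, for illustration).
def hit_rate(requests, multiplier):
    cache = set()
    hits = 0
    for block in requests:
        if block in cache:
            hits += 1
        else:
            # miss: read the block and prefetch `multiplier` blocks ahead
            cache.update(range(block, block + multiplier + 1))
    return hits / len(requests)

sequential = list(range(1000))                     # continuous I/O
random.seed(0)
rand = [random.randrange(10**6) for _ in range(1000)]  # random I/O

print(hit_rate(sequential, 4))  # most reads served from prefetched blocks
print(hit_rate(rand, 4))        # prefetched blocks are almost never reused
```

With a multiplier of 4, four of every five sequential reads hit prefetched data, while under random I/O the prefetched blocks only waste cache space, matching the guidance above.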


Write caching

Write caching allows the storage system to write data to the cache first instead of directly to disk. This can significantly improve performance, especially for random-write database applications. In sequential-write environments, the benefit varies with the size of the writes. If a logical drive is used only for read access, disabling its write cache will improve overall performance.


Write cache mirroring

FAStT write cache mirroring preserves the integrity of cached data if one RAID controller fails. This improves data availability but reduces performance: the mirrored data travels across the fibre loop between the controllers and competes with normal data traffic. We nevertheless recommend enabling this function to protect cached data against a controller failure.
By default, write cache mirroring copies cached write data to the other controller, so the data remains available even if ownership of the logical drive is moved to that controller. Without mirroring, if ownership of a logical drive is switched to the other controller while unwritten data is still in the failed controller's cache, that data is lost.
If write cache mirroring is disabled, data can be lost when a controller fails or a path in the fabric fails. The FAStT controller's cache battery protects against data loss on power failure. When the battery is not fully charged, for example just after startup, the controller automatically disables write caching; enabling write caching without battery protection risks data loss.
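The role of mirroring can be shown with a minimal model. This is an assumed sketch of the behavior described above, not firmware code: when mirroring is enabled, each unwritten cache entry is copied to the peer controller, so it survives an owner failure.

```python
# Minimal model (assumption, not firmware code): with mirroring enabled,
# each controller keeps a copy of its peer's unwritten cache entries.
class Controller:
    def __init__(self):
        self.dirty = {}  # block -> data not yet flushed to disk

def cached_write(owner, peer, block, data, mirroring=True):
    owner.dirty[block] = data
    if mirroring:
        peer.dirty[block] = data  # copy survives an owner failure

a, b = Controller(), Controller()
cached_write(a, b, 7, "payload")
a = None                        # controller A fails before flushing
print(b.dirty[7])               # data still recoverable from the peer
```

With `mirroring=False`, the peer holds no copy, which models the data-loss scenario when ownership fails over with unwritten data in cache.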


Write caching or write-through

Write-through means write operations do not use the cache at all: data is always written directly to disk. Disabling write caching frees the cache for read operations (the cache is shared between reads and writes).
Write caching can improve the performance of write operations. Data is written to the cache rather than directly to disk, and from the application's point of view this is much faster than waiting for the disk write to complete, so write performance improves. The controller later writes the unwritten cached data to disk. On the surface, the write-cache (write-back) mode offers better read and write performance than write-through, but the outcome also depends on the disk access pattern and disk load.
Write-back is usually faster when disk load is light. Under heavy load, every write into the cache must be followed almost immediately by a write to disk to free cache space for new incoming data, so the cache becomes an extra step rather than a buffer. In that case, writing data directly to disk is faster, and caching writes first reduces throughput.
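The trade-off can be sketched with a toy latency model. All numbers here are illustrative assumptions, not DS4000 measurements: write-back is modeled as cache-speed until the cache fills, after which each write pays both the flush and the cache insertion.

```python
# Toy latency model of write-through vs. write-back
# (assumed costs, for illustration only).
CACHE_WRITE_US = 50    # cost of placing data in controller cache
DISK_WRITE_US = 5000   # cost of committing data to disk

def write_through_latency(n_writes):
    # Every write waits for the disk before it is acknowledged.
    return n_writes * DISK_WRITE_US

def write_back_latency(n_writes, cache_slots):
    # Writes complete at cache speed until the cache fills; after that,
    # each new write waits for a flush AND the cache insertion.
    fast = min(n_writes, cache_slots)
    stalled = n_writes - fast
    return fast * CACHE_WRITE_US + stalled * (DISK_WRITE_US + CACHE_WRITE_US)

print(write_back_latency(100, 1000) < write_through_latency(100))   # light load
print(write_back_latency(200_000, 1000) > write_through_latency(200_000))
```

Under light load the cache absorbs every write; under sustained heavy load each stalled write costs more than a direct disk write, so write-back eventually falls behind write-through, as the text describes.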


Starting and stopping cache flushing levels

These two settings control how the controller handles cached data that has not yet been written to disk, and they take effect only in write-back (write cache) mode. Writing the cached data to disk is called flushing. You can configure the start and stop cache flushing levels as percentages of the cache capacity. When the amount of unwritten data in the cache reaches the start flushing level, the controller begins flushing (writing cached data to disk). When the amount of unwritten data drops below the stop flushing level, flushing stops. The controller always flushes the oldest cached data first, and cached writes are flushed automatically after remaining unwritten for more than 20 seconds.
A typical start flushing level is 80%, and the stop flushing level is usually also set to 80%. In other words, the controller never allows more than 80% of the cache to be used for write-back data, but tries to keep usage near that level. This setting keeps more unwritten data in cache memory, which improves write performance at the expense of data protection. For more data protection, use lower start and stop values. By tuning these two parameters you can balance the read and write performance of the cache. Tests show that keeping the start and stop flushing levels close together gives good performance; if the stop level is much lower than the start level, disk congestion occurs during flushing.
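The threshold behavior can be written out as a small sketch. This is a hypothetical model of the rule described above (percent thresholds against cache capacity), not controller code:

```python
# Sketch of start/stop flushing levels as percentages of cache capacity
# (hypothetical model of the behavior described in the text).
def flush_plan(dirty, capacity, start_pct=80, stop_pct=80):
    """Return how many dirty blocks a flush cycle would write out."""
    start = capacity * start_pct // 100
    stop = capacity * stop_pct // 100
    if dirty < start:
        return 0          # below the start level: no flushing triggered
    return dirty - stop   # flush back down to the stop level

print(flush_plan(700, 1000))          # below 80%: nothing flushed
print(flush_plan(900, 1000))          # flush down to the 80% stop level
print(flush_plan(900, 1000, 80, 20))  # low stop level: long flush burst
```

The last case shows why a stop level far below the start level causes disk congestion: a single flush cycle must write out most of the cache at once.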


Cache block size

This value is the cache allocation unit size, either 4 K or 16 K. Choosing an appropriate value can significantly improve cache performance.
If applications mostly access data smaller than 8 K while the cache block size is set to 16 K, each access occupies only part of a cache block: a 16 K block holding 8 K or less of data means only 50% of the cache capacity is used effectively, which reduces performance. 4 K is therefore suitable for random I/O and small block transfers. Conversely, for sequential I/O with a large segment size, 16 K is the better choice. A larger cache block size means fewer cache blocks to manage, which shortens cache housekeeping delays, and for the same amount of data it reduces the number of cache block transfers.
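The 50% figure above follows from simple arithmetic. The sketch below assumes, as the text does, that one I/O's data does not share a cache block with another's:

```python
# Effective cache utilization when each I/O occupies whole cache blocks
# (simplified assumption matching the discussion above).
def utilization(io_size_kb, cache_block_kb):
    """Fraction of each occupied cache block that holds useful data."""
    return min(io_size_kb, cache_block_kb) / cache_block_kb

print(utilization(8, 16))   # 8 K I/O in 16 K blocks: half the cache wasted
print(utilization(8, 4))    # 4 K blocks: two blocks, both fully used
print(utilization(32, 16))  # large sequential transfers fill 16 K blocks
```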
In general, before putting a DS4000 series storage solution into production, we recommend testing and monitoring performance for about a week and adjusting the relevant parameters accordingly.

