Can SSD be used as a redo disk?

1. SSD cells wear out under writes, and SSD write performance is not as strong as its read performance. SSD really shines for random reads, because there is no seek time; but the drive is comparatively easy to wear out.

2. Index tablespaces and temporary tablespaces are better candidates for SSD.

3. SSD erase/rewrite endurance is still not ideal at present.

4. SSD has a limited write life and suffers a severe write penalty; on a write-heavy system the SSD will not last long before it has to be scrapped. For redo, safety is the first consideration.

5. The write advantage is not that large anyway. On a storage array, a redo write is acknowledged as soon as it reaches the array cache, so the gain from SSD is not that high.

6. Oracle redo logs should not be placed on an SSD. The tips below will help you reduce 'log file sync' waits when writes are slow: tune LGWR to get good throughput to disk, e.g. do not put redo logs on RAID 5 and do not put redo logs on Solid State Disk (SSD). Although Solid State Disk write performance is generally good on average, the drives may endure write peaks which will greatly increase waits on 'log file sync'. (From MOS [ID 857576.1]; query sketches for checking these waits and for moving redo logs off such a device are at the end of this post.)

7. FYI, on SSD "taboos": when should SSD be used? SSD has established its own position in the data center; almost all mainstream vendors specify it as Tier 0 storage in their best-practice architectures. Server-side SSDs are used to improve server performance, while storage-side SSDs solve the boot-storm bottleneck. As with most technologies, it is important to know when to use it and when not to. Here is when not to use SSD.

Do not use SSD for applications that are not read-intensive. SSD significantly reduces read access time; compared with a traditional HDD, read efficiency can improve by a factor of ten or more. But there is no free lunch: SSD has no comparable advantage for writes. Write operations not only carry latency, they also consume the SSD's storage cells. Each cell has an average write life, and once that is exceeded the cell gradually fails (refer to the vendor's specifications for the particular device). As cells fail, the overall performance of the SSD declines, and eventually the drive must be replaced to keep its performance. SSDs are not cheap, and some vendors charge dearly for warranty service.

What is the ideal read/write ratio? There is no fixed number, but 90/10 is ideal; reaching it usually requires concessions from the application, which in turn requires IT managers to make sensible decisions. If the read/write ratio is below 50/50, a traditional HDD is clearly the better choice: from the application's point of view, the read-performance gain from SSD is offset by its poor write performance. If you need SSD to improve reads but writes become a problem, consider devices with wear leveling and write-amplification mitigation to reduce the impact. The size of the SSD is also a factor: a smaller, cheaper SSD carries a heavier load because repeated reads are less likely to be served from it.
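To put a rough number on where a given database sits relative to those ratios, one can look at the instance-wide physical I/O statistics. The sketch below is a minimal example against Oracle's v$sysstat, using the cumulative 'physical reads' and 'physical writes' counters since instance startup; the column aliases are illustrative, and block counts are only an approximation of the device-level read/write mix.

```sql
-- Rough read/write mix for the instance since startup (block counts, not bytes).
-- Interpreting the result against the 90/10 / 50/50 guideline above is approximate:
-- redo, archiving and array-cache effects are not captured here.
SELECT ROUND(100 * rd / NULLIF(rd + wr, 0), 1) AS read_pct,
       ROUND(100 * wr / NULLIF(rd + wr, 0), 1) AS write_pct
  FROM (SELECT MAX(CASE WHEN name = 'physical reads'  THEN value END) AS rd,
               MAX(CASE WHEN name = 'physical writes' THEN value END) AS wr
          FROM v$sysstat
         WHERE name IN ('physical reads', 'physical writes'));
```

By this measure, a workload that comes out well below the 50/50 mark is, per the article above, a weak candidate for SSD.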
Do not use SSD when data access is highly random. SSD is often treated as a "cache layer", and the name fits: essentially it is a cache used to reduce data retrieval from traditional hard disks. Applications with highly random access patterns will not benefit from it; most reads will be directed to the HDD by the array controller anyway, and the expensive SSD will play a very small role.

Do not use general-purpose SSDs in highly virtualized environments. This will invite some debate, because there are already SSD success stories in virtual machine environments, such as handling the boot storm. However, when many virtual machines access the same SSD, the result is a highly random command stream, at least from the storage point of view; when hundreds of virtual machines read and write the same storage, each machine constantly displaces the cached data of the others. That is why there are SSD solutions designed specifically for virtualized environments, and why "general-purpose" SSD was called out above.

Do not use SSD on the server to solve a storage I/O bottleneck. Server-side SSDs are essentially server caches used to address performance and network-bandwidth problems. Distributing SSDs to hundreds of physical servers, each with its own drive, may indeed help relieve the I/O bottleneck, but it is far less efficient than placing the SSD in the storage array.

Do not use Tier 0 to solve a network bottleneck. If data transfer is constrained by the network, optimizing the storage system behind that network obviously will not help much. Server-side SSD, by contrast, can reduce the need to reach the storage system at all, and thus reduce network traffic.

Do not deploy consumer-level SSD for enterprise-level applications. SSD comes in three grades: SLC, MLC, and eMLC. MLC is considered consumer-grade and is usually found in packaged consumer products; its per-cell write life is the lowest of the three. SLC is enterprise-grade, with a life of more than 100,000 write operations per cell. eMLC tries to strike a balance between price and endurance, providing about 30,000 write operations per cell at a price lower than SLC. Customers can decide which grade of SSD to purchase based on their budget.
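As a companion to the MOS advice in point 6, here is a minimal sketch for checking how much time the instance spends on redo-related waits. It only uses the standard v$system_event view; the 'avg_ms' column name is just an illustrative alias. A high average 'log file sync' together with a high 'log file parallel write' points at slow redo writes (the write-peak behaviour the note warns about), whereas a high 'log file sync' with a low 'log file parallel write' points elsewhere, e.g. commit frequency or CPU.

```sql
-- Cumulative redo-related wait statistics since instance startup.
SELECT event,
       total_waits,
       ROUND(time_waited_micro / 1e6, 1)                           AS total_sec,
       ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 2) AS avg_ms
  FROM v$system_event
 WHERE event IN ('log file sync', 'log file parallel write')
 ORDER BY event;
```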
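If redo logs do end up on an SSD (or on RAID 5) and the decision is to move them, the usual approach is to add new log groups on the target device and drop the old ones once they are inactive. The sketch below assumes hypothetical paths, group numbers and a 512M member size; check v$log for each old group's status before dropping it, since only an INACTIVE group can be dropped, and remove the old files at the OS level afterwards.

```sql
-- Add replacement redo log groups on the new (non-SSD) location.
-- Paths, group numbers and sizes are examples only.
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u02/oradata/ORCL/redo04.log') SIZE 512M;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u02/oradata/ORCL/redo05.log') SIZE 512M;
ALTER DATABASE ADD LOGFILE GROUP 6 ('/u02/oradata/ORCL/redo06.log') SIZE 512M;

-- Cycle the log and checkpoint until the old groups are no longer CURRENT/ACTIVE.
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;

-- Confirm the old groups are INACTIVE before dropping them.
SELECT group#, status FROM v$log ORDER BY group#;

ALTER DATABASE DROP LOGFILE GROUP 1;
ALTER DATABASE DROP LOGFILE GROUP 2;
ALTER DATABASE DROP LOGFILE GROUP 3;
```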