Q: I am often asked whether running DBCC CHECKDB is still recommended in SQL Server 2008. Since SQL Server 2005, SQL Server can calculate a checksum on each page to detect physical page corruption. However, I have never seen a clear explanation of whether this makes CHECKDB unnecessary. Do you have any suggestions?
A:
Indeed, the addition of page checksum auditing reduces the need for some regular CHECKDB runs. History also shows that storage engine quality has improved significantly since SQL Server 7.0, further reducing the need to use CHECKDB.
However, a checksum only verifies that the content written to disk is exactly the same as the content read back. It does not cover the many other consistency checks that CHECKDB was designed to perform. For example, CHECKDB verifies metadata, page linkage, index ordering, and the consistency of the system tables.
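To make the distinction concrete, here is a minimal T-SQL sketch of both mechanisms; `MyDatabase` is a placeholder name, not something from the original question:

```sql
-- Page checksums only detect that a page read back differs from what was written.
-- (CHECKSUM has been the default PAGE_VERIFY setting since SQL Server 2005.)
ALTER DATABASE MyDatabase SET PAGE_VERIFY CHECKSUM;

-- DBCC CHECKDB performs the much broader checks described above:
-- allocation, metadata, page linkage, index ordering, system tables.
DBCC CHECKDB (MyDatabase) WITH NO_INFOMSGS, ALL_ERRORMSGS;
```

The two are complementary: the checksum is a cheap per-I/O guard, while CHECKDB is a full logical and physical audit.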
Therefore, the answer to this question is: the need has been reduced since SQL Server 7.0, and the rest of the answer comes from real customer cases. A well-considered recovery plan includes both backups and CHECKDB, to ensure that every part of the recovery plan actually works. For example, you may have a clean backup of the database, yet be unable to restore it because of a failure in the backup media. The SQL Server 2008 backup option WITH CHECKSUM helps in this case. However, the database engine cannot control every factor, as the following situations show:
1. In one case, a tape drive developed an alignment drift. Backups succeeded, but the tape could not be read during restore. The engine had no way to detect the hardware problem before a restore was attempted. The customer's production database was damaged and could not be recovered; it cost US $100,000 just to have a tape-recovery company retrieve the data.
2. A major problem with memory is the ECC error rate: a single bit is flipped during a memory transfer. This is why SQL Server 2005 added constant page verification of the buffer pool. This technique re-checks the page checksum while the page is in memory, to ensure that a clean (unmodified) page has not been changed by such an error. However, this technique can still miss errors. In one case caused by this problem, the value 9 in a primary key was changed to 19, resulting in a primary key violation. DBCC CHECKDB was the only way to detect the problem before it was carried into the full or differential backups.
3. Similar to the ECC case, but occurring at a higher frequency, are bugs in XPROCs and COM components loaded into the SQL Server process. For example, code in an XPROC may incorrectly overwrite memory it does not own.
4. Over the past ten years, a primary storage concern has been stale reads (the hardware cache returns an old version of a page rather than the latest version of the data on disk). This usually does not cause a checksum check to fail. Consider the following scenario: my wife deposits $500 into our joint account, and a few minutes later I make a withdrawal. The $500 deposit has been committed and written to disk, but the hardware cache still holds the page state from before the deposit. When my withdrawal updates the account, the deposited $500 is not visible, and the account is updated incorrectly. SQL Server may not directly detect this error, and the logical transaction is lost. This is why SQL Server added stale-read checks. These checks help, but they cannot completely prevent this situation. Even if DBCC does not catch the problem, recovery from the transaction log will detect that a page was updated twice with the same LSN, which is not allowed.
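Several of the failures above (checksum errors found during I/O, restore, or buffer pool scanning) are recorded by SQL Server in a system table you can monitor. A simple check, using the real `msdb.dbo.suspect_pages` table available since SQL Server 2005:

```sql
-- Pages that failed a checksum or torn-page check during normal I/O,
-- backup, or restore are logged here; a non-empty result warrants
-- an immediate DBCC CHECKDB of the affected database.
SELECT database_id, file_id, page_id,
       event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages;
```

Monitoring this table does not replace CHECKDB; it only surfaces corruption that happened to be touched by I/O, whereas CHECKDB reads and validates every allocated page.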
To sum up, I would never say that DBCC CHECKDB is no longer needed. You should still run it regularly, and run it whenever you restore from a backup.
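As a closing sketch, the backup-side protections mentioned in the answer can be combined like this; the database and file names are placeholders:

```sql
-- Back up with checksums: page checksums are verified as pages are read,
-- and a checksum is computed over the backup stream itself (SQL Server 2005+).
BACKUP DATABASE MyDatabase
    TO DISK = N'D:\Backups\MyDatabase.bak'
    WITH CHECKSUM, INIT;

-- Verify that the backup media is readable and its checksums are valid,
-- without performing an actual restore. This catches some media failures
-- early, but not everything (see the tape-drive case above).
RESTORE VERIFYONLY
    FROM DISK = N'D:\Backups\MyDatabase.bak'
    WITH CHECKSUM;
```

Even with both options, only a periodic DBCC CHECKDB plus an actual test restore proves the whole recovery plan works end to end.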