SQL2000's three "recovery models"


I. SQL2000's three "recovery models"

On the Options page of the database properties dialog, the "Recovery model" field offers three choices: Simple, Full, and Bulk-Logged. The fundamental difference between them is how SQL2000 maintains the database's transaction log. Each is described below:
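For orientation, the same setting can also be read and changed from T-SQL instead of the properties dialog. A minimal sketch ('MyDB' is a placeholder database name):

    -- Check the current recovery model: returns FULL, SIMPLE, or BULK_LOGGED
    SELECT DATABASEPROPERTYEX('MyDB', 'Recovery')

    -- Switch models (ALTER DATABASE ... SET RECOVERY is new in SQL2000)
    ALTER DATABASE MyDB SET RECOVERY FULL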

1. The "Full" model

As you can imagine, to support a "point-in-time restore", SQL2000 must keep every transaction record without a single omission, so that they form an unbroken chain. In the log file, every transaction record is numbered (the number is called an "LSN", Log Sequence Number), and the numbering is sequential.

Under the "full" model, SQL2000 the most rigorous and thorough management of transaction logs. The central principle is that if you are not backing up the transaction log, the log file will never delete any historical transaction records on its own (the result will be unlimited file growth).

When you back up the log (that is, BACKUP LOG), SQL2000 notes which LSNs have been backed up, and those LSNs will not appear again in the next transaction log backup set. Also, each time you back up the log, unless you uncheck "Remove inactive entries from transaction log", SQL2000 will delete the no-longer-needed history whenever it judges the time to be right (this is called "truncating the transaction log").
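As a sketch (database name and path are placeholders), a routine log backup and its non-truncating counterpart look like this; WITH NO_TRUNCATE corresponds to leaving that checkbox unchecked:

    -- Routine log backup: writes the log to the backup set, then truncates
    -- the inactive portion
    BACKUP LOG MyDB TO DISK = 'D:\backup\MyDB_log.bak'

    -- Back up the log without truncating it (the unchecked
    -- "Remove inactive entries from transaction log" case)
    BACKUP LOG MyDB TO DISK = 'D:\backup\MyDB_log.bak' WITH NO_TRUNCATE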

Clearly, timely log backups are necessary, not only for the sake of data safety, but also to prevent the log file from growing without limit (which wastes disk space and degrades system performance).

If you first make a full backup of the database (that is, lay the "foundation") and then back up the log continuously, the resulting sequence of log backup sets constitutes a complete "log chain". With the "base backup set" plus this "log chain", we can restore the database to its state at any point in time along the chain. At this point a DBA can finally breathe a sigh of relief.
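A minimal sketch of that restore sequence (database name, paths, and the STOPAT time are placeholders):

    -- 1. Restore the base full backup, leaving the database able to accept log restores
    RESTORE DATABASE MyDB FROM DISK = 'D:\backup\MyDB_full.bak' WITH NORECOVERY

    -- 2. Replay the log chain in order; STOPAT selects the point in time
    RESTORE LOG MyDB FROM DISK = 'D:\backup\MyDB_log1.bak' WITH NORECOVERY
    RESTORE LOG MyDB FROM DISK = 'D:\backup\MyDB_log2.bak'
        WITH RECOVERY, STOPAT = '2003-06-01 12:00:00'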

To emphasize again:

① Under the "Full" model, only a transaction log backup truncates the log file; a "full database backup" or a "differential database backup" does not truncate the log.

② If you accidentally lose a log backup set from the middle of a log chain, the whole chain is ruined (or at least everything after the lost backup set is), and you have to lay a new "foundation" from scratch. It is therefore usual to keep appending log backups to the same media set instead of generating a new file for each backup, since scattered files are hard to manage (strictly speaking, this is a matter of personal preference); see the sketch below.
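A sketch of appending to one media set (WITH NOINIT appends to the existing backup file; WITH INIT would overwrite it):

    -- Append each log backup to the same backup file (media set)
    BACKUP LOG MyDB TO DISK = 'D:\backup\MyDB_logchain.bak' WITH NOINIT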

2. The "Simple" model

If you select the "Simple" recovery model for a database, SQL2000 truncates the inactive portion of the transaction log (making that space reusable) at any time it sees fit. Truncating the log does not change the size of the log file (unless you shrink the database). "At any time it sees fit" means SQL2000 decides on its own and does not need to notify you.

Under the "simple" model, inactive portions of the log are truncated at any time. The advantage of this model is that "even if you do not manage log files, it will not grow too large to take up too many hard drives", with the disadvantage of "unable to maintain log chain" (because there is no continuous retention of records in it).

Note: even if you set the recovery model to "Full", as long as the database has never had a full backup, SQL2000 still maintains its log as if under the "Simple" model. The reason: you have not laid the "foundation" backup yet, so keeping all those log records for you would be a waste.

3. The "Bulk-Logged" model (also rendered "batch log")

The Bulk-Logged model is a complement to the "Full" model. In some special situations, using the Bulk-Logged model can improve SQL2000's performance (that is, its speed), but you can work perfectly well without ever using it.

When inserting data into a table, SQL2000 usually inserts one row at a time, and always writes the log records before inserting. However, SQL2000 also provides statements that insert large numbers of rows into a table in one go (for example, SELECT INTO or BULK INSERT); these are called "bulk operations". How does SQL2000 write log records when a bulk operation runs? Under the "Full" model, it logs the insertion of every single row, and the result is a large volume of log. Under the Bulk-Logged model, it merely flags which data extents were affected by the bulk operation (in a structure called a "bitmap") and does not copy the data itself into the log file, so the log volume is greatly reduced.
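A sketch of two such bulk operations (table names and the file path are placeholders; minimal logging also requires conditions such as the TABLOCK hint on the target table):

    -- SELECT INTO: creates the destination table and bulk-inserts the rows
    SELECT * INTO OrdersCopy FROM Orders

    -- BULK INSERT: loads rows from a data file
    BULK INSERT Orders FROM 'D:\data\orders.txt'
        WITH (FIELDTERMINATOR = ',', TABLOCK)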

When backing up the log under the Bulk-Logged model, SQL2000 not only writes the contents of the log file into the backup set; guided by the "bitmap", it also copies part of the current database into the log backup set, namely every data extent that has been affected by a bulk operation since the last log backup. Otherwise, where would the data come from when you needed to restore it? (Think it through yourself.)

Having understood the nature of "Bulk-Logged" above, we can understand the following phenomenon. Under the Bulk-Logged model, suppose a bulk operation is in progress (inserting 10000 rows into a table), and halfway through (4800 rows already inserted) the disk fails and the data file is lost. At this point we cannot use the log file to recover to the "4800 rows inserted" state. The reason is simple: the inserted data is not in the log.

In view of this, how should we handle bulk operations? Option one: do not switch to the Bulk-Logged model at all; stay on the "Full" model and run slowly (since a large volume of log must be written), gritting your teeth until it is done. Option two: temporarily switch to the Bulk-Logged model, following these steps:

① Before starting the bulk operation (for example, a bulk import of base data), have all other users disconnect;

② Make a log backup (for convenience, call the resulting log backup set "log backup set A");

③ the "failure restore Model" to "batch log" model;

④ Begin the bulk operation;

⑤ If all goes well (the disk does not fail midway), switch back to the "Full" model as soon as the bulk operation completes, then make another log backup;

⑥ If in step ④ the bulk operation is half done when the disk fails, use log backup set A to restore the database to its state just before the bulk operation started, and then redo the bulk operation.

In any case, do not forget to switch back to the "Full" model after the bulk operation. A T-SQL sketch of steps ② through ⑤ follows.
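This is one possible rendering (database name, paths, and the bulk operation itself are placeholders):

    -- ② Log backup before the bulk operation ("log backup set A")
    BACKUP LOG MyDB TO DISK = 'D:\backup\MyDB_logA.bak' WITH INIT

    -- ③ Switch to the Bulk-Logged model
    ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED

    -- ④ The bulk operation itself, e.g. a bulk import of base data
    BULK INSERT Orders FROM 'D:\data\orders.txt' WITH (TABLOCK)

    -- ⑤ Switch back to Full, then immediately back up the log again
    ALTER DATABASE MyDB SET RECOVERY FULL
    BACKUP LOG MyDB TO DISK = 'D:\backup\MyDB_logB.bak' WITH INIT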

In fact, in my opinion, whether or not you plan to use the Bulk-Logged model temporarily, it is worth making a log backup before any "mass data import" operation; at the very least, it gives you a chance to turn back.

< from repost ^_^ >
