SQL Server Log configuration problems

Too many VLFs

The SQL Server database engine divides each physical log file internally into multiple virtual log files (VLFs) so that the log management system can easily track which portions of the log can be reused. Every time the transaction log file is created or grows, whether automatically or manually, the number of new VLFs added is determined by the following formula:

· Up to 1 MB: 2 VLFs, each roughly 1/2 of the total size
· 1 MB to 64 MB: 4 VLFs, each roughly 1/4 of the total size
· 64 MB to 1 GB: 8 VLFs, each roughly 1/8 of the total size
· More than 1 GB: 16 VLFs, each roughly 1/16 of the total size

For example, if you create an 8 GB transaction log file, you get 16 VLFs, each of about 512 MB. If the log then grows by 4 GB at a time, each growth adds another 16 VLFs of about 256 MB each, so after one growth the file has 32 VLFs in total.
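If you want to see the formula in action, a minimal sketch follows; the database name and file paths are hypothetical, and the exact VLF counts can vary between SQL Server versions.

    -- Create a database with an 8 GB transaction log, then count its VLFs.
    CREATE DATABASE LogDemo
    ON (NAME = LogDemo_data, FILENAME = 'C:\SQLData\LogDemo.mdf', SIZE = 100MB)
    LOG ON (NAME = LogDemo_log, FILENAME = 'C:\SQLData\LogDemo_log.ldf', SIZE = 8GB);
    GO
    -- Each row of output is one VLF; by the formula above, expect 16 rows of roughly 512 MB each.
    DBCC LOGINFO ('LogDemo');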

Generally, the best practice is to set the log auto-growth to a fixed size rather than the default 10%, so that you have better control over the pauses caused by zero-initializing the newly added space. A fixed growth size that is too small creates its own problem, though. For example, you might create a small transaction log, set it to auto-grow in 32 MB increments, and let it grow to a steady-state size of 16 GB. According to the formula above, that is roughly 512 growths of 4 VLFs each, leaving the transaction log with more than 2,000 VLFs.
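As a sketch of that best practice, the auto-growth increment can be changed from a percentage to a fixed size with ALTER DATABASE; the database name MyDB, the logical log file name MyDB_log, and the 512 MB increment below are only placeholders.

    -- Replace the default 10% auto-growth with a fixed increment.
    ALTER DATABASE MyDB
    MODIFY FILE (NAME = MyDB_log, FILEGROWTH = 512MB);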

Too many VLFs can cause performance problems for operations that have to process the transaction log (such as crash recovery, log clearing, log backups, transaction replication, and database restores). This situation is called VLF fragmentation. Generally, any number of VLFs above about a thousand is problematic and needs to be addressed (the most I have heard of is 1.54 million VLFs in a transaction log of more than 1 TB!).

To check the number of VLFs, you can use the undocumented (but perfectly safe) DBCC LOGINFO command: the number of rows in its output is the number of VLFs in the transaction log. If you decide there are too many, you can reduce them as follows (see the sketch after this list):

1. Clear the log (for example, by taking a log backup)

2. Manually shrink the log file

3. Repeat steps 1 and 2 until the log has shrunk to a small size (this can be tricky on a busy production system)

4. Manually grow the log back to the required size, in steps of no more than 8 GB at a time, so that no single VLF is larger than 0.5 GB
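A sketch of steps 1 to 4, assuming a database in the full recovery model named MyDB, a log file with the logical name MyDB_log, a target log size of 8 GB, and a hypothetical backup path; adjust all of these to your environment.

    -- Count the VLFs: one output row per VLF.
    DBCC LOGINFO ('MyDB');
    -- (On SQL Server 2016 SP2 and later, sys.dm_db_log_info(DB_ID('MyDB')) returns the same information.)

    -- Step 1: clear the log, here by taking a log backup.
    BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.trn';

    -- Step 2: manually shrink the log file (target size in MB).
    USE MyDB;
    DBCC SHRINKFILE (MyDB_log, 1);

    -- Step 3: repeat the backup and shrink until the log is small, then...

    -- Step 4: grow the log back in steps of 8 GB or less so that no VLF exceeds 0.5 GB.
    ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = 8GB);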

You can read more about VLF fragmentation and how to solve it:

· Microsoft KB article that advises reducing VLF numbers

· Can log file growth affect DML?

· 8 steps to better transaction log throughput

Tempdb

The tempdb log needs to be configured just like the log of any other database, and it can auto-grow just like any other database. However, tempdb has a behavior that can cause problems: when a SQL Server instance restarts, tempdb's data and log files revert to the size they were most recently set to, whereas other databases keep their current size.

This behavior means that once the tempdb log has grown to an appropriate size, you must use ALTER DATABASE to set that size as the log file's configured size. Otherwise, after every restart the log has to grow from its configured size back up to the size it needs. Whenever the log grows, the new space must be zero-initialized, and logging pauses while that happens, which hurts performance. So if you do not manage the size of the tempdb log file properly, you will pay a performance penalty after every instance restart.
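For example, once you know the size the tempdb log settles at, you can make that its configured size so it survives a restart; templog is the default logical name of the tempdb log file, and the 8 GB figure is just a placeholder.

    -- Persist the tempdb log size so it does not have to re-grow (and zero-initialize) after every restart.
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, SIZE = 8GB);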

Regular log shrinkage

I often hear of people regularly shrinking the transaction log after some recurring operation (such as a weekly data import) causes it to grow. I do not recommend doing this.

As I have explained above, whenever the transaction log grows, whether manually or through auto-growth, the new space has to be zero-initialized and logging pauses while that happens. If the transaction log repeatedly has to grow back up to size X, your workload suffers that performance hit every time the log grows back to X.

If your transaction log keeps growing back to size X, don't fight it! Proactively set it to size X and manage the VLFs as described above, because the database clearly needs that much log space for normal operation.
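To work out what size X actually is, you can check the current size and auto-growth setting of every log file on the instance; a sketch using sys.master_files follows (sizes are stored in 8 KB pages).

    -- Current size and auto-growth setting of every transaction log file.
    SELECT DB_NAME(database_id) AS database_name,
           name                 AS logical_name,
           size * 8 / 1024      AS size_mb,
           CASE WHEN is_percent_growth = 1
                THEN CAST(growth AS varchar(10)) + ' percent'
                ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
           END AS growth_setting
    FROM sys.master_files
    WHERE type_desc = 'LOG';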

Multiple transaction log files

Having multiple log files in a database does not improve performance. You may need to add a second log file if the existing one runs out of space and you cannot clear the log by switching the database to the simple recovery model and running a checkpoint (because that would break the log backup chain).

I am often asked whether such a second log file should be removed or left in place. The answer is to remove it as soon as possible.
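A sketch of removing the extra file, assuming its logical name is MyDB_log2. Note that ALTER DATABASE will refuse to remove a log file that still contains part of the active log; if that happens, take log backups (and/or wait for the active log to wrap back into the first file) and try again.

    USE MyDB;
    -- Shrink the second log file as far as possible.
    DBCC SHRINKFILE (MyDB_log2);
    -- Remove it once it no longer holds any active log.
    ALTER DATABASE MyDB REMOVE FILE MyDB_log2;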

Although a second log file does not cause performance problems while the database is running, it can hurt disaster recovery. If your database is destroyed for some reason, you will have to restore it from scratch. The first phase of any restore is to create the data and log files if they do not exist. Data file creation can skip zero-initialization if instant file initialization is enabled, but this does not apply to log files: any log file that a restore has to create (whether for a full backup restore or during a transaction log backup restore) must be zero-initialized. If you created a second log file and did not remove it, that extra zero-initialization adds to the total downtime. This is not a performance problem as such, but it does affect the availability of the database.

Reverting from a database snapshot

The last problem is actually caused by a bug in SQL Server. Using a database snapshot to quickly go back to a known point in time without restoring backups (known as reverting from a snapshot) can save a lot of time. However, there is a major drawback.

When a database is reverted from a database snapshot, the transaction log is recreated with just two small VLFs. This means you then have to manually grow the log file back to its optimal size (and manage the VLFs as described above); otherwise the log will auto-grow as needed, with all the zero-initialization and logging pauses discussed earlier. Clearly this is not what we want.
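A sketch of the whole cycle, with hypothetical names, paths, and sizes: create the snapshot, revert from it, and then manually grow the log back, because the revert leaves it with only two tiny VLFs.

    -- Create a snapshot of MyDB (one sparse file per data file of the source database).
    CREATE DATABASE MyDB_snapshot
    ON (NAME = MyDB_data, FILENAME = 'C:\SQLData\MyDB_snapshot.ss')
    AS SNAPSHOT OF MyDB;
    GO
    -- Revert the database to the point in time the snapshot was taken.
    RESTORE DATABASE MyDB FROM DATABASE_SNAPSHOT = 'MyDB_snapshot';
    GO
    -- Fix-up: grow the log back to its working size (in steps of 8 GB or less), since the revert shrank it.
    ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = 8GB);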

Summary:

From this article, you can see that there are many things that can hurt transaction log performance, and with it the performance of the whole database. You can use the methods described above to configure your logs and keep them healthy. Beyond that, you also need to monitor the transaction log, for example for auto-growth events and for excessive read and write I/O latencies; these topics will be covered in future articles.
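As a starting point for that monitoring, the sketch below reads cumulative log file I/O latencies from sys.dm_io_virtual_file_stats; the thresholds you alert on are up to you.

    -- Average read/write latency (ms) per transaction log file since the instance started.
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.name AS logical_name,
           vfs.num_of_reads,
           vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
           vfs.num_of_writes,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id
     AND mf.file_id = vfs.file_id
    WHERE mf.type_desc = 'LOG';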
