Too many VLFs
The SQL Server database engine internally divides each physical log file into multiple virtual log files (VLFs) so that the log management system can easily track which portions of the log can be reused. Each time a transaction log file is created or grows, whether automatically or manually, the number of new VLFs is determined by the amount of space added, according to the following formula:
| Space created or added | VLFs created |
| --- | --- |
| Up to 1MB | 2 VLFs, each roughly 1/2 of the new space |
| 1MB to 64MB | 4 VLFs, each roughly 1/4 of the new space |
| 64MB to 1GB | 8 VLFs, each roughly 1/8 of the new space |
| More than 1GB | 16 VLFs, each roughly 1/16 of the new space |
For example, if you create an 8GB transaction log file, you get 16 VLFs, each about 512MB. If the log then grows by 4GB in a single step, you get another 16 VLFs, each about 256MB, and the whole file has 32 VLFs.
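As a quick sanity check of the formula, here is a minimal sketch (the database name and file paths are hypothetical, and it assumes your SQL Server version follows the formula above) that creates a database with an 8GB log and then counts its VLFs using DBCC LOGINFO, which is covered in more detail below:

```sql
-- Hypothetical database used only to verify the VLF formula above.
CREATE DATABASE VlfDemo
ON PRIMARY (NAME = VlfDemo_data, FILENAME = 'C:\SQLData\VlfDemo.mdf', SIZE = 100MB)
LOG ON (NAME = VlfDemo_log, FILENAME = 'C:\SQLData\VlfDemo_log.ldf', SIZE = 8GB);
GO
USE VlfDemo;
GO
-- DBCC LOGINFO returns one row per VLF; per the formula above,
-- an 8GB log should show about 16 rows.
DBCC LOGINFO;
```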
Generally, the best practice is to set the log's autogrowth to a fixed size instead of the default 10%, so you can better control the pauses caused by zero-initializing the new log space. For example, suppose you create a 256MB transaction log, set autogrowth to 32MB, and the log then grows to a steady-state size of 16GB. According to the formula above, your transaction log will end up with more than 2,000 VLFs.
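A minimal sketch of setting a fixed autogrowth increment instead of the 10% default (the database and logical file names are hypothetical; you can look up the real logical name in sys.master_files):

```sql
-- Use a fixed autogrowth increment rather than the default 10%,
-- so each growth (and its zero-initialization pause) is predictable.
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_log, FILEGROWTH = 512MB);
```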
Too many VLFs can cause performance problems for operations that process the transaction log (such as crash recovery, clearing the log, log backups, transactional replication, and database restores). This condition is known as VLF fragmentation. In general, any count over 1,000 VLFs is problematic and needs to be addressed (the most I've heard of is 1.54 million VLFs in a transaction log over 1TB!).
You can query the number of VLFs with the undocumented (but completely safe) DBCC LOGINFO command: the number of rows returned is the number of VLFs in the transaction log. If you have too many VLFs, you can reduce them in the following way (see the sketch after this list):
1. Clear the log (for example, via a log backup, which allows the log to be truncated)
2. Manually shrink the log file
3. Repeat steps 1 and 2 until the log reaches a small size (which can be cumbersome in a busy production system)
4. Manually grow the log back to the required size, in chunks of no more than 8GB at a time, so that no single VLF is larger than 0.5GB
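Here is a minimal sketch of that sequence, assuming a hypothetical database SalesDB whose log's logical file name is SalesDB_log:

```sql
USE SalesDB;
GO
-- One row is returned per VLF, so the row count is the VLF count.
DBCC LOGINFO;
GO
-- Step 1: clear the log (a log backup, in the FULL recovery model).
BACKUP LOG SalesDB TO DISK = 'D:\Backups\SalesDB_log.trn';
GO
-- Step 2: shrink the log file as far as possible (target size in MB).
DBCC SHRINKFILE (SalesDB_log, 1);
GO
-- Steps 1 and 2 may need several repetitions. Then, step 4:
-- grow the log back in chunks of no more than 8GB at a time.
ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_log, SIZE = 8GB);
GO
ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_log, SIZE = 16GB);
```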
You can read more about VLF fragmentation issues and how to solve them:
· Microsoft KB article that advises reducing VLF numbers
· Can log file growth affect DML?
· 8 Steps to better transaction log throughput
Tempdb
The tempdb log should be configured the same way as any other database's log, and it autogrows just like any other database's. But there is a hidden factor that can cause problems: when the SQL Server instance restarts, tempdb's data and log files revert to their configured initial sizes, while all other databases keep their current sizes.
This behavior means that once the tempdb log has grown to an appropriate size, you must use ALTER DATABASE to make that the log file's configured size; otherwise, after every restart the log file has to autogrow from its initial size back up to its working size. Whenever the log grows, the new space must be zero-initialized, and logging pauses while that happens, which hurts performance. So if you don't manage the tempdb log file size properly, you'll pay a performance penalty after every instance restart.
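A minimal sketch, assuming you've decided tempdb's log should be 8GB (templog is the default logical name of tempdb's log file; the size is just an example):

```sql
-- Fix tempdb's configured log size at its working size so it doesn't
-- revert to a small initial size (and have to autogrow) after a restart.
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 8GB);
```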
Regularly shrinking the log
I often hear people say that they shrink the log regularly when they find it growing because of some routine operation, such as a weekly data import. I do not recommend this.
As I explained above, whenever the transaction log grows or autogrows, zero-initializing the new portion of the log file causes a pause. If the transaction log repeatedly has to grow back to size X, your workload suffers that performance impact every time the log grows back to X.
If your transaction log keeps growing back to size X, leave it at size X! Proactively set it to size X and manage the VLFs as described above, because the database evidently needs a log of that size to operate normally.
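For example, a minimal sketch assuming the weekly import needs a 16GB log (the names are hypothetical, and the log is grown in chunks of no more than 8GB, per the earlier advice on VLF sizes):

```sql
-- Size the log once for its peak requirement instead of shrinking it
-- after every import and paying for the regrowth (and zeroing) each week.
ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_log, SIZE = 8GB);
GO
ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_log, SIZE = 16GB);
```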
Multiple transaction log files
Creating multiple log files for a database does not improve performance. However, you may need to add a second log file when the existing log runs out of space; the alternative is to switch the database to the SIMPLE recovery model and perform a checkpoint to clear the log (which breaks the log backup chain).
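A minimal sketch of both options (the database name, file name, and path are hypothetical):

```sql
-- Option 1: add a second log file on another volume to get out of the emergency.
ALTER DATABASE SalesDB
ADD LOG FILE (NAME = SalesDB_log2, FILENAME = 'E:\SQLLogs\SalesDB_log2.ldf', SIZE = 4GB);
GO
-- Option 2: switch to SIMPLE recovery and checkpoint to clear the log.
-- WARNING: this breaks the log backup chain; after switching back to FULL,
-- take a full or differential backup to restart the chain.
ALTER DATABASE SalesDB SET RECOVERY SIMPLE;
GO
USE SalesDB;
GO
CHECKPOINT;
GO
ALTER DATABASE SalesDB SET RECOVERY FULL;
```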
I'm often asked whether to remove the second log file or leave it in place. The answer is to remove it as soon as possible.
Although the second log file doesn't cause performance problems, it does affect disaster recovery. If your database is destroyed for some reason, you will need to restore it from scratch. The first step of any restore is to create the data and log files, if they don't already exist. Data file creation can skip zero-initialization if you enable instant file initialization, but that does not apply to log files. This means a restore from a full backup must create all the log files (as must restoring transaction log backups) and zero-initialize them. If you created a second log file and didn't remove it, the zero-initialization takes correspondingly longer and increases total downtime. This isn't a performance problem, but it does affect the availability of the server.
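Once the emergency is over, removing the extra file is a one-liner, though it only succeeds when the file contains no active log (a log backup or checkpoint may be needed first); the names are hypothetical as above:

```sql
-- Remove the second log file; this fails if it still holds active VLFs.
ALTER DATABASE SalesDB
REMOVE FILE SalesDB_log2;
```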
Recovering from a database snapshot
The last problem is actually a bug in SQL Server. Using a database snapshot to quickly recover to a known point in time without restoring backups (known as reverting from a snapshot) can save a lot of time. However, it has one big drawback.
When a database is reverted from a database snapshot, the transaction log is recreated with two 0.25MB VLFs. This means you must manually grow the log file back to its optimal size (otherwise it will autogrow), or you'll pay for the zero-initialization and log pauses discussed earlier, which is obviously not what we want.
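A minimal sketch of reverting from a snapshot and immediately resizing the log (the snapshot name and target size are hypothetical):

```sql
-- Revert the database to the point in time the snapshot was created.
RESTORE DATABASE SalesDB FROM DATABASE_SNAPSHOT = 'SalesDB_Snapshot';
GO
-- The revert leaves the log with two 0.25MB VLFs, so grow it straight
-- back to its working size (in chunks of no more than 8GB).
ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_log, SIZE = 8GB);
```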
Summary
As you can see from this article, there are many things that can lead to transaction log performance problems, which in turn lead to performance problems for the whole database. You can use the techniques described above to set up your log so that it stays healthy. Beyond that, you need to monitor the transaction log for things like autogrowth events and excessive read and write I/O latency; I'll cover these in future articles.