Reasons why log files keep growing and related solutions

Source: Internet
Author: User
Tags: SQL Server, SQL Server Management Studio
Let us explore the reasons why transaction log files grow continuously, and the solutions. The transaction log file is the other important component of a SQL Server database (the first being the data files). Every database has a transaction log that records all transactions and the modifications each transaction makes to the database.

To improve overall performance, SQL Server reads data pages into the buffer cache during retrieval. Modifications are not performed directly on disk, but on the copies of the pages in the cache. These changes are written to disk only when the database performs a checkpoint, or when they must be written out so the buffer can accommodate new pages. Writing a modified page from the cache to disk is called flushing the page; pages that have been modified in the cache but not yet written to disk are called "dirty pages".

When a page is modified in the buffer, a log record is generated in the log cache. SQL Server contains logic that prevents a dirty page from being flushed before its associated log record has been written, and it ensures that the log is written to disk when, or before, the transaction commits. In other words, SQL Server commits a transaction as soon as the insert, update, and delete operations are complete in memory; the changes are not immediately synchronized to the data pages on disk.

SQL Server must guarantee transactional consistency. Even if the server terminates abnormally before the in-memory modifications reach disk, on the next restart the database must be restored to a transactionally consistent point, with every committed modification applied to the pages on disk. Achieving this relies entirely on the transaction log. In practice, however, the log file can keep growing for a variety of reasons.
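As an illustration of the dirty-page concept described above, the dynamic management view sys.dm_os_buffer_descriptors exposes an is_modified flag for every page currently in the buffer cache, so modified-but-not-yet-flushed pages can be counted. This query is a sketch for observation only and requires the VIEW SERVER STATE permission:

```sql
-- Illustrative only: count dirty (modified but not yet flushed) pages per database.
-- Requires the VIEW SERVER STATE permission.
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*)             AS dirty_pages
FROM sys.dm_os_buffer_descriptors
WHERE is_modified = 1
GROUP BY database_id;
```

Running CHECKPOINT in a database and then re-running this query should show its dirty-page count drop, matching the flushing behavior described above.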
When the log file has grown to its configured maximum size, or the disk it lives on is full, the database can no longer accept inserts, updates, or deletes. Below we discuss some common causes of this problem and their solutions.

1) Generate test data:

```sql
-- Create the test database mydb
-- (the original used OBJECT_ID here, which does not work for databases; DB_ID does)
IF DB_ID('mydb') IS NOT NULL
    DROP DATABASE mydb;
GO
CREATE DATABASE mydb;
GO
```

What is in the log file? Using the mydb database, create a table testlog containing a single INT column, then clear the log file and run DBCC LOG to look at the last records in the log:

```sql
USE mydb;
GO
CREATE TABLE testlog (a INT);
GO
CHECKPOINT;
GO
-- BACKUP LOG mydb WITH TRUNCATE_ONLY   -- this command is no longer supported in SQL Server 2008
BACKUP DATABASE [mydb] TO DISK = N'e:\mydb_bak\mydb_bak_20121113';
GO
-- Then back up the transaction log (earlier log records are cleared automatically
-- once the log backup completes)
BACKUP LOG [mydb] TO DISK = N'e:\mydb_bak\mydb_log';
GO
sp_helpdb          -- look up the dbid of mydb
DBCC LOG(dbid);    -- substitute the dbid found above
GO
/*
Current LSN             Operation       Context             Transaction ID  Tag Bits  Log Record Length  ...
----------------------  --------------  ------------------  --------------  --------  -----------------
...
00000015:000000a0:0003  LOP_MODIFY_ROW  LCX_BOOT_PAGE_CKPT  0000:00000000   0x0000    100

(9 rows affected)
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
*/
-- The row above is the last of the nine records returned by DBCC LOG.
```
Next, insert a row into the testlog table of mydb and examine the log again:

```sql
INSERT testlog SELECT 1;
GO
DBCC LOG(dbid);
GO
/*
00000015:000000a5:0013  LOP_INSERT_ROWS  LCX_HEAP  0000:00000210  0x0000  62  104
*/
-- Insert another row; the insert-related records appear in the result (30 records in total):
INSERT testlog SELECT 100;
GO
DBCC LOG(dbid);
GO
/*
00000015:000000a9:0001  LOP_BEGIN_XACT   LCX_NULL  0000:00000213  0x0000  64
00000015:000000a9:0002  LOP_INSERT_ROWS  LCX_HEAP  0000:00000213  0x0000  62
00000015:000000a9:0003  LOP_COMMIT_XACT  LCX_NULL  0000:00000213  0x0000  48
*/
-- From these records we can identify the INSERT transaction, with its begin and commit entries.
UPDATE testlog SET a = 2;
GO
DBCC LOG(dbid);
GO
/*
00000015:000000aa:0001  LOP_BEGIN_XACT   LCX_NULL  0000:00000214  0x0000  64  88
00000015:000000aa:0002  LOP_MODIFY_ROW   LCX_HEAP  0000:00000214  0x0000  62  100
00000015:000000aa:0003  LOP_MODIFY_ROW   LCX_HEAP  0000:00000214  0x0000  62  100
00000015:000000aa:0004  LOP_COMMIT_XACT  LCX_NULL  0000:00000214  0x0000  48  52
*/
-- This time four UPDATE-related records are generated.
```

Analyzing these transaction log records, we can observe:

1. The log records data changes, not the operation text the user submitted.
2. Each record has a unique LSN and carries its transaction ID.
3. The number of log records is related to the amount of data actually changed.
4. The log records the time each transaction occurred, but it does not record the name of the user who initiated the transaction, nor the name of the initiating program.
5. SQL Server can read the values of data before and after modification from the log file, but it is difficult for an administrator to understand the modification process directly from the log.

The log file grows because SQL Server logs every modification. SQL Server also has a mechanism that periodically truncates log records that are no longer needed; if more and more records in the log file cannot be truncated, the file grows continuously. The following are the common causes:
1. The database is not in the SIMPLE recovery model, but no log backups have been scheduled. For a non-simple database, log records are truncated only after a log backup completes; full and differential backups do not have this effect.
2. A transaction on the database has remained uncommitted for a long time.
3. A large transaction is running on the database.
4. Database replication or mirroring is in an abnormal state.

How, then, can we avoid these situations and keep the log file from growing without bound? For a database that will not need log backups, set it to the SIMPLE recovery model. If the database is designed to run in the FULL recovery model, you must schedule regular log backups. If a replication or mirroring task has a problem, resolve it promptly; if no solution can be found at the moment, we recommend temporarily removing the replication or mirroring to stop more log records from accumulating. In application design, avoid transactions that run too long; do not put too many operations into a single transaction. If the database performs maintenance at night or on weekends, such as purging historical data, importing and exporting data, or rebuilding indexes, reserve enough log space for those jobs and back up the log as soon as they finish.

Finally, let us look at how to locate the cause of log growth.

Step 1: check the current log usage and database state:

```sql
DBCC SQLPERF(LOGSPACE);
GO
/*
Database Name  Log Size (MB)  Log Space Used (%)  Status
mydb           0.8046875      76.57767            0
*/
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases;
/*
name  recovery_model_desc  log_reuse_wait_desc
mydb  FULL                 LOG_BACKUP
*/
```

The statements above show that the Log Space Used (%) is very high, which is why we need to identify the reason the log records cannot be cleared. If the database's log_reuse_wait_desc value is LOG_BACKUP, SQL Server is waiting for a log backup; in that case check the backup plan, and if the plan is inappropriate, change the backup policy in time.
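For cause 1, the remedy is a regularly scheduled log backup. The statement below is a minimal sketch of the kind of command you would put in a SQL Server Agent job; the backup directory e:\mydb_bak\ is an assumption carried over from the earlier examples.

```sql
-- Sketch of a periodic log backup (e.g. run from a SQL Server Agent job).
-- The backup directory e:\mydb_bak\ is assumed, as in the examples above.
DECLARE @file NVARCHAR(260) =
    N'e:\mydb_bak\mydb_log_'
    + CONVERT(NVARCHAR(8), GETDATE(), 112)   -- yyyymmdd date stamp
    + N'.trn';
BACKUP LOG [mydb] TO DISK = @file;
```

Once such a job runs on schedule, each log backup truncates the inactive portion of the log, so the file stops growing without bound.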
If you do not want to back up logs at all, simply change the recovery model to SIMPLE.

Step 2: check the oldest active transaction:

```sql
USE mydb;
GO
-- Now open another window and enter the following statements, leaving the
-- transaction uncommitted:
/*
USE mydb
GO
BEGIN TRAN
SELECT name, number INTO test FROM master..spt_values
*/
-- That transaction has clearly not been committed. Now find information about
-- the oldest active transaction (at this point we cannot yet be sure it is the
-- uncommitted one):
DBCC OPENTRAN;
GO
/*
Transaction information for database 'mydb'.

Oldest active transaction:
    SPID (server process ID): 53
    UID (user ID): -1
    Name: user_transaction
    LSN: (...)
    Start time: Nov 13 2012  3:55:18:230PM
    SID: 0x01
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
*/
-- Execute the following statement to see what each user session has been running:
SELECT st.text, t2.*
FROM sys.dm_exec_sessions AS t2,
     sys.dm_exec_connections AS t1
CROSS APPLY sys.dm_exec_sql_text(t1.most_recent_sql_handle) AS st
WHERE t1.session_id = t2.session_id
  AND t1.session_id > 50;
/*
begin tran select name, number into test from master..spt_values   53   2012-11-13 15:55:15.550
... Microsoft SQL Server Management Studio - Query ...
*/
```

From these results we can find the uncommitted transaction under SPID (server process ID) 53, and the returned text shows clearly which statements that session executed. In this case we only need to kill the session:

```sql
KILL 53;
DBCC OPENTRAN;
GO
/*
No active open transactions.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
*/
```
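If, as noted in step 1, the database genuinely does not need point-in-time recovery, the switch to the SIMPLE model is a one-line ALTER DATABASE. The sketch below also shows an optional shrink of the already-allocated log space; the logical log file name mydb_log is an assumption, so verify it first via sys.database_files.

```sql
-- Sketch: stop log growth for a database that does not need log backups.
ALTER DATABASE mydb SET RECOVERY SIMPLE;
GO
USE mydb;
GO
-- Optional: reclaim space already allocated to the log. The logical file name
-- 'mydb_log' is an assumption; check the actual name first:
SELECT name, type_desc FROM sys.database_files;
DBCC SHRINKFILE (mydb_log, 1);   -- target size in MB
```

Note that under SIMPLE recovery the log is truncated at each checkpoint, but point-in-time restore from log backups is no longer possible.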
