In SQL Server, some database operations under the BULK_LOGGED or SIMPLE recovery model are minimally logged, reducing the volume of transaction log written to disk and improving overall performance.
Here I'll briefly describe which operations are minimally logged, under what circumstances, and how to put minimal logging to use in a real-world production environment.
Concept: when the appropriate conditions are met, SQL Server can improve performance by minimizing the transaction log records it writes for certain operations, such as an index rebuild.
Note: a log interval that contains minimally logged operations cannot be restored to a point in time.
Minimal logging requires the SIMPLE or BULK_LOGGED recovery model.
Operations that can be minimally logged
CREATE INDEX, ALTER INDEX REBUILD
Bulk import operations (bcp, BULK INSERT)
SELECT INTO (a sketch follows this list)
BLOB data manipulation (using .WRITE, etc.)
INSERT ... SELECT (under specific conditions, SQL Server 2008 and later)
MERGE (under specific conditions)
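As a quick illustration of one operation from this list, here is a minimal SELECT INTO sketch; it reuses the T1 test table built in the examples below, and the target name T1_copy is illustrative:

---- SELECT INTO creates and loads the target in one operation; with the database in
---- BULK_LOGGED or SIMPLE recovery it is minimally logged
Select * into dbo.T1_copy from dbo.T1
---- inspect the log records generated for the new table
Select Operation, context, [Log Record Length], Allocunitname
from Fn_dblog (NULL, NULL)
where Allocunitname like '%T1_copy%'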
Application: in real work, INSERT ... SELECT accounts for the majority of these operations, so this section introduces it in detail.
Minimal logging for INSERT ... SELECT operations
Clustered tables
When the clustered table is empty, using the TABLOCK hint minimizes logging.
When the clustered table is not empty, logging is not minimized under any circumstances.
Heap tables
When the heap table is empty and the TABLOCK hint is used, both the row data and the nonclustered index data are minimally logged.
When the heap table is not empty, the TABLOCK hint is used, and the table has a nonclustered index, neither the row data nor the index data is minimally logged.
Note: minimal logging also requires that the table is not marked for replication (a quick catalog check follows below).
Some documents claim that for a non-empty heap with an index, the heap row data is still minimally logged; this is actually wrong. See figure B-2.
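A quick way to verify the replication condition in the note above, assuming only the standard catalog view sys.tables, is the is_replicated flag:

---- is_replicated = 1 means the table participates in replication
---- and is therefore excluded from minimal logging
Select name, is_replicated
from sys.tables
where name in ('tcls', 'tnoncls')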
Clustered table examples
Empty clustered table, minimally logged: figure A-1
Create Database Testbulk
Go
Use Master
ALTER DATABASE [Testbulk] SET RECOVERY bulk_logged with no_wait
Go
Use Testbulk
Go
Create Table T1 (ID int not NULL Identity(1,1), Dystr varchar(100), Fixstr Char(100))   ---- column sizes are illustrative
Go
Set Nocount on
Declare @i int
Set @i = 0
while (@i < 20000)
begin
    Insert into T1 (DYSTR, FIXSTR)
    Values ('AAA' + Str(RAND()*100000000), 'BBB' + Str(RAND()*100000000))
    Set @i = @i + 1
End
Create Table Tcls (ID int, Dystr varchar(100), Fixstr Char(100))
Go
CREATE UNIQUE CLUSTERED INDEX inx_id on dbo.tcls (ID)
Insert into Dbo.tcls with(Tablockx)
Select * from Dbo.t1   ---- clustered table empty
Select Operation, context, [Log Record Length], Allocunitname
from Fn_dblog (NULL, NULL)
where Allocunitname like '%tcls%'
Figure A-1: fn_dblog output for the insert into the empty clustered table (minimal log records).
Non-empty clustered table, not minimally logged: figure A-2
Truncate Table Tcls
DBCC Shrinkfile (N'Testbulk_log', 0, Truncateonly)
Insert into Dbo.tcls with(Tablockx)
Values (100000, 'AAA', 'BBB')   ---- make the clustered table non-empty
Go
Insert into Dbo.tcls with(Tablockx)
Select * from dbo.t1   ---- clustered table not empty
Select Operation, context, [Log Record Length], Allocunitname
from Fn_dblog (NULL, NULL)
where Allocunitname like '%tcls%'
Figure A-2: fn_dblog output for the insert into the non-empty clustered table (fully logged).
Heap table examples
Non-empty heap table with no index: figure B-1
Create Table Tnoncls (ID int, Dystr varchar(100), Fixstr Char(100))   ---- column sizes are illustrative
Go
Insert into Dbo.tnoncls with(Tablockx)
Values (100000, 'AAA', 'BBB')   ---- make the heap non-empty, no index
Go
Insert into Dbo.tnoncls with(Tablockx)
Select * from Dbo.t1 with(Tablockx)   ---- heap not empty, no index
Select Operation, context, [Log Record Length], Allocunitname
from Fn_dblog (NULL, NULL)
where Allocunitname like '%tnoncls%'
Figure B-1: fn_dblog output for the insert into the non-empty heap with no index.
Non-empty heap table with a nonclustered index: figure B-2
Truncate Table Tnoncls   ---- truncate the table
DBCC Shrinkfile (N'Testbulk_log', 0, Truncateonly)
CREATE UNIQUE nonclustered INDEX inx_id on DBO.TNONCLS (ID)   ---- add a nonclustered index
Insert into Dbo.tnoncls with(Tablockx)
Values (100000, 'AAA', 'BBB')   ---- make the heap non-empty, with index
Go
Insert into Dbo.tnoncls with(Tablockx)
Select * from Dbo.t1 with(Tablockx)   ---- heap not empty, with index
Select Operation, context, [Log Record Length], Allocunitname
from Fn_dblog (NULL, NULL)
where Allocunitname like '%tnoncls%'   ---- both data pages and index pages are fully logged
Figure B-2: fn_dblog output for the insert into the non-empty heap with an index (data and index pages fully logged).
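Rather than eyeballing individual records in the figures, the total log volume per allocation unit can also be compared after each test run; here is a small aggregation over the same fn_dblog calls used above:

---- a minimally logged insert shows far fewer records and bytes than a fully logged one
Select Allocunitname,
       Count(*) as LogRecords,
       Sum([Log Record Length]) as TotalBytes
from Fn_dblog (NULL, NULL)
where Allocunitname like '%tcls%' or Allocunitname like '%tnoncls%'
group by Allocunitname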
About trace Flag 610
TF610, newly introduced in SQL Server 2008, enables minimal logging even when inserting into a non-empty B-tree structure.
My personal advice is to use TF610 with caution, and only in special scenarios.
In general, when we import data into a non-empty table: for a heap, minimal logging requires locking the table itself, which affects online applications during the load; and for a clustered table, bulk-importing data while the table is receiving inserts is rare in practice.
TF610 itself is designed to reduce the amount of transaction log recorded, not to speed up the import.
Notes when using TF610 (a sketch follows these notes):
1: Enable it at the session level for the specific case: DBCC TRACEON (610).
2: When the bulk transaction commits, all of its data pages must be flushed to disk. If no checkpoint has run before the commit, the flush produces a large amount of random I/O and degrades performance, sometimes to worse than a fully logged insert.
3: Avoid overly large single transactions; large transactions cause other problems.
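A minimal sketch covering the three notes, reusing the tcls and t1 tables from the examples above (the shifted key values are illustrative, chosen only to avoid duplicate-key errors):

DBCC TRACEON (610);   ---- point 1: session level only; affects just this connection
Checkpoint;           ---- point 2: flush dirty pages first so the commit-time flush causes less random I/O
Insert into Dbo.tcls
Select top (5000) ID + 200000, Dystr, Fixstr
from Dbo.t1           ---- point 3: a moderate batch into the non-empty b-tree, not one huge transaction
DBCC TRACEOFF (610);

Whether a given insert is actually minimally logged still depends on the conditions described earlier; verify with fn_dblog as in the examples.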
Minimized log (Minimal log) Best practices
BULK_LOGGED mode: databases in real-world production environments generally run in SIMPLE or FULL recovery; BULK_LOGGED is rarely used as the normal, steady-state model. But when our workload includes many minimally logged operations (such as index rebuild maintenance), we can switch to BULK_LOGGED temporarily to improve efficiency.
Example: index maintenance (a T-SQL sketch follows the steps)
1: Choose the operation window: before the daily full backup.
2: After the full backup completes, manually take a log backup.
3: Switch the database recovery model from FULL to BULK_LOGGED.
4: Perform the bulk-logged operations (index maintenance).
5: Manually back up the log.
6: Switch the recovery model back to FULL.
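A T-SQL sketch of steps 1 through 6, reusing the Testbulk database from the examples above; the backup paths are illustrative:

---- steps 1-2: daily full backup, then a manual log backup
Backup Database Testbulk to disk = N'D:\backup\Testbulk_full.bak'
Backup Log Testbulk to disk = N'D:\backup\Testbulk_log1.trn'
---- step 3: switch the recovery model from FULL to BULK_LOGGED
Alter Database Testbulk set Recovery bulk_logged
---- step 4: the minimally logged operation (index maintenance)
Alter Index ALL on dbo.tcls Rebuild
---- step 5: back up the log covering the bulk-logged interval
Backup Log Testbulk to disk = N'D:\backup\Testbulk_log2.trn'
---- step 6: return to FULL recovery
Alter Database Testbulk set Recovery full

The log backup taken in step 5 contains the bulk-logged interval; it can be restored as a whole, but not to a point in time inside that interval.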
In BULK_LOGGED mode the log backup chain is not broken, and this procedure minimizes the time window during which point-in-time restore is unavailable.
Note: when the database participates in features that require the FULL recovery model, such as database mirroring, changing the recovery model inappropriately will break those features; on the other hand, an operation that generates a huge volume of log while staying in FULL recovery can lead to instance-level, global problems. Weigh the operation carefully.
For a database with audit requirements, pay attention to the specific audit requirement: whether restore to a point in time must be possible.