(Repost) Explain SQL Server transaction log records

Source: Internet
Author: User
Tags: bulk insert

This article is reproduced from the blog of Birch Tsai: http://www.cnblogs.com/lyhabc/archive/2013/07/16/3194220.html (Explain SQL Server transaction log records).

As you know, under the full recovery model SQL Server records the operations performed by each transaction in the transaction log, and some software, such as Log Explorer, reads these operation records from the transaction log to recover data.

So what does the transaction log record look like and what's in it?

The following SQL statement can be used to view the transaction log records of the current database:

USE [Gposdb]   -- the database whose transaction log records you want to view
GO
SELECT * FROM sys.fn_dblog(NULL, NULL)

There is a lot to see in the transaction log records; very detailed database activity information is recorded.

I will only cover some important fields that you need to know; as for the other fields, my ability is limited and I feel they are not very important.
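For convenience, here is a sketch of my own (not from the original article) that selects only the fields discussed below; the column names are written as fn_dblog reports them and may differ slightly between SQL Server versions:

USE [Gposdb]
GO
SELECT [Current LSN], [Operation], [Context], [Transaction ID], [Transaction Name],
       [Previous LSN], [AllocUnitName], [Page ID], [Slot ID], [PartitionId],
       [SPID], [Begin Time], [End Time], [Description]
FROM sys.fn_dblog(NULL, NULL)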

Current LSN: the current LSN. Each record in the transaction log is identified by a unique log sequence number (LSN). LSNs are ordered so that if LSN2 is greater than LSN1, the change described by the log record identified by LSN2 occurred after the change described by the log record identified by LSN1.

MSDN explanation: http://msdn.microsoft.com/zh-cn/library/ms190411(v=sql.90).aspx
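As a small illustration of my own (not from the original article): every segment of the LSN is fixed-width hexadecimal, so the [Current LSN] strings returned by fn_dblog compare and sort in the same order as the LSNs themselves.

-- Sketch: list the log records of one transaction in the order in which the changes happened.
-- The transaction ID shown here is a hypothetical value; take a real one from the fn_dblog output.
SELECT [Current LSN], [Operation], [Context]
FROM sys.fn_dblog(NULL, NULL)
WHERE [Transaction ID] = '0000:000004a2'   -- hypothetical value
ORDER BY [Current LSN]                     -- string order matches LSN order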

Operation: the operation performed by the log record at the current LSN

Context: the context of the operation

Transaction ID: the transaction ID

Log Record Fixed Length: the fixed length occupied by the log record in the virtual log file

Previous LSN: the previous LSN

--------------------------------------------------------------------------------------------------------------

AllocUnitId: the ID of the allocation unit to which the modified data belongs

AllocUnitName: the name of the table to which the modified data belongs

Page ID: for example 0001:00000121; the page number converted to decimal is 289, so to view that page use DBCC PAGE ([pratice], 1, 289, 3)

Slot ID: which record of the data page the row occupies

PartitionId: the partition ID of the data page where the data resides

For example, the modified data belongs to the table Insert_test, the Page ID is 0001:00000121 (289 in decimal), and the Slot ID is 6 (that is, the 6th record on the data page).
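A quick way to do the hexadecimal-to-decimal conversion in T-SQL (my own sketch, not from the original article):

-- The page-number part of 0001:00000121 is hexadecimal 0x00000121, which is 289 in decimal
SELECT CONVERT(INT, 0x00000121) AS page_number   -- returns 289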

You can view the data on the page with the following SQL statement

USE [pratice]
GO
DBCC TRACEON (3604, -1)
GO
DBCC PAGE ([pratice], 1, 289, 3)
GO
Slot 6 Offset 0x552 Length 211

Record Type = PRIMARY_RECORD          Record Attributes = NULL_BITMAP
Memory Dump @0x0A2AC552

00000000:   1000d000 3f080000 61616120 20202020 †....?...aaa
00000010:   20202020 20202020 20202020 20202020 †
...         (the remaining rows are the space padding of the char column)
000000d0:   0200fc                               †...

Slot 6 Column 0 Offset 0x4 Length 4
id = 2111
Slot 6 Column 1 Offset 0x8 Length 200
name = aaa


This table has only two fields, so let's take a look at the table data

--------------------------------------------------------------------------------------------------------

Checkpoint Begin: checkpoint start time

Checkpoint Begin DB Version: the current database version (611 for SQL Server 2005, 706 for SQL Server 2012)

Checkpoint End: checkpoint end time; this record always appears in a transaction log record after the corresponding Checkpoint Begin record

Minimum LSN: the log sequence number (LSN) of the first log record required for recovery, called the minimum recovery LSN (MinLSN)

Dirty Pages: dirty data pages

Oldest Replicated Begin LSN: the oldest replication begin LSN, if replication is configured for the database

Next Replicated End LSN: the next replication end LSN

Last Distributed End LSN: the latest distribution end LSN

SPID: the process ID that performed the current operation

Begin Log Status: the status of the transaction log when logging begins; it indicates that the transaction log is currently being written normally

Begin Time: transaction start time

Transaction Name: transaction name

End Time: transaction end time

Transaction Begin: records the Current LSN of this transaction's BEGIN TRANSACTION record

Master DBID: the DBID of the current master database

Preplog Begin LSN: the previous transaction log LSN before the database was started

Prepare Time: the time at which the database prepared to start

New Split Page: which data page was produced by a page split

Rows Deleted: how many rows were deleted from the data page

Description: describes what this transaction did; the transaction name is not necessarily the name of the operation.

For example, it may happen that the transaction name and the description are both CREATE TABLE, but if you name the transaction yourself, then you can only tell what the transaction did by looking at the Description column.
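A sketch of my own (not from the original article) that lists the start record of every transaction together with its name and description, so you can compare the two columns yourself:

SELECT [Current LSN], [Transaction ID], [Transaction Name], [Begin Time], [Description]
FROM sys.fn_dblog(NULL, NULL)
WHERE [Operation] = 'LOP_BEGIN_XACT'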

------------------------------------------------- Gorgeous split-line -------------------------------------------------

Now let me explain some common Operation and Context values; I do not know some of the less common ones, hehe o(∩_∩)o

Operation: the operation performed by the log record at the current LSN

Context: the context of the operation

DCM Page Info: http://www.cnblogs.com/lyhabc/archive/2013/01/21/2870392.html

Operation / Context: Explanation

LOP_SET_BITS / LCX_DIFF_MAP: set a bit in the differential bitmap. Differential backup: backs up only the portions modified since the last full backup. The backup unit is the extent, which means that even if only one page in an extent has changed, the extent is included in the differential backup. Differential backups depend on a bitmap for maintenance: one bit corresponds to one extent, every extent modified since the last full backup has its bit set to 1, and the extents whose bits are set in the bitmap are the ones backed up by the differential backup. After the next full backup, all the bits in the bitmap are reset to 0.

This bitmap is on the 7th page of the database: the DCM (Differential Changed Map) page. It tracks which extents of a file have been modified since the most recent full database backup, so that a differential backup only needs to back up the extents whose data has changed.

LOP_BEGIN_XACT: a transaction starts

LOP_MODIFY_ROW / LCX_HEAP: modify a row in a heap table

LOP_PREP_XACT: prepare to start the database

LOP_COMMIT_XACT: commit a transaction

LOP_MODIFY_ROW / LCX_BOOT_PAGE: modify the database boot page

LOP_MODIFY_HEADER / LCX_PFS: modify the page header of a PFS page

LOP_INSERT_ROWS / LCX_CLUSTERED: insert data into an index page of a clustered index

LOP_INSERT_ROWS / LCX_INDEX_LEAF: insert data into a leaf-level index page, that is, a data page

LOP_FORMAT_PAGE / LCX_CLUSTERED: reorganize (reformat) a clustered index page

LOP_DELETE_SPLIT / LCX_CLUSTERED: deleting a row from a clustered index table caused a page split

LOP_MODIFY_HEADER / LCX_HEAP: modify the page header of a heap table page

LOP_BEGIN_CKPT / LCX_NULL: checkpoint start

LOP_END_CKPT / LCX_NULL: checkpoint end

LOP_SET_FREE_SPACE / LCX_PFS: modify a PFS page to mark a data page as free

LOP_ROOT_CHANGE / LCX_CLUSTERED: the root node of a clustered index changed

LOP_INSERT_ROWS / LCX_HEAP: insert data into a heap table

LOP_FORMAT_PAGE / LCX_HEAP: format a data page in a heap

Operation: LOP_LOCK_XACT
Lock Information: HoBt 0: ACQUIRE_LOCK_SCH_M METADATA: database_id = STATS (object_id = 7, stats_id = 11)
Explanation: acquire a lock within a transaction
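To see which Operation/Context combinations actually occur in your own database's log, you can run a sketch like this one (my own example, not from the original article):

SELECT [Operation], [Context], COUNT(*) AS record_count
FROM sys.fn_dblog(NULL, NULL)
GROUP BY [Operation], [Context]
ORDER BY record_count DESC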

------------------------------------------------- Gorgeous split-line -------------------------------------------------

Under the bulk-logged recovery model, when you use bulk operation statements such as bcp, BULK INSERT, and SELECT INTO, you will not see the data page operations in the transaction log records; the data modifications and inserts cannot be found there.

That is why, under the bulk-logged recovery model, the LDF file stays so small and inserts are so fast.
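A rough way to observe this yourself (my own sketch, not from the original article; the database name [pratice] and the table dbo.SomeTable are taken from examples elsewhere in this article and are assumptions about your test environment):

ALTER DATABASE [pratice] SET RECOVERY BULK_LOGGED
GO
USE [pratice]
GO
SELECT * INTO dbo.SomeTable_copy FROM dbo.SomeTable   -- minimally logged under BULK_LOGGED
GO
-- Mostly allocation/bitmap records (LCX_PFS, LCX_IAM, LCX_GAM) show up for the new table,
-- rather than one LOP_INSERT_ROWS record per inserted row
SELECT [Operation], [Context], COUNT(*) AS record_count
FROM sys.fn_dblog(NULL, NULL)
WHERE [AllocUnitName] LIKE 'dbo.SomeTable_copy%'
GROUP BY [Operation], [Context]
GO
ALTER DATABASE [pratice] SET RECOVERY FULL   -- switch back when finished
GO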

The following references MSDN:

http://msdn.microsoft.com/zh-cn/library/ms190925.aspx
Operations that can be minimally logged
"Minimal logging" means logging only the information required to recover the transaction, without supporting point-in-time recovery. This topic describes the operations that are minimally logged under the bulk-logged recovery model (as well as under the simple recovery model, except when a backup is running).
Note
Under the full recovery model, all bulk operations are fully logged. However, you can minimize the logging of a set of bulk operations by temporarily switching the database to the bulk-logged recovery model for the bulk operations. Minimal logging is more efficient than full logging and reduces the likelihood of a large-scale bulk operation filling up the available transaction log space during the bulk transaction. However, if the database is damaged or lost while minimal logging is in effect, you cannot recover the database to the point of failure.
The following operations, which are fully logged under the full recovery model, are minimally logged under the simple and bulk-logged recovery models:
Bulk-import operations (bcp, BULK INSERT, and INSERT ... SELECT). For more information about when a bulk import into a table is minimally logged, see Prerequisites for Minimal Logging in Bulk Import.
Note
When transactional replication is enabled, BULK INSERT operations are fully logged, even under the bulk-logged recovery model.
SELECT INTO operations.
Note
When transactional replication is enabled, SELECT INTO operations are fully logged, even under the bulk-logged recovery model.
Partial updates to large value data types, using the .WRITE clause of the UPDATE statement when inserting or appending new data. Note that minimal logging is not used when existing values are updated. For more information about large value data types, see Data Types (Transact-SQL).
WRITETEXT and UPDATETEXT statements when inserting or appending new data into text, ntext, and image data type columns. Note that minimal logging is not used when existing values are updated.
Note
The WRITETEXT and UPDATETEXT statements are deprecated, so avoid using them in new applications.
If the database is set to the simple or bulk-logged recovery model, some index DDL operations are minimally logged whether the operation is performed offline or online. The minimally logged index operations are as follows:
CREATE INDEX operations (including on indexed views).
ALTER INDEX REBUILD or DBCC DBREINDEX operations.
Note
The DBCC DBREINDEX statement is deprecated, so avoid using it in new applications.
DROP INDEX new heap rebuild (if applicable).
Note
Page deallocation during a DROP INDEX operation is always fully logged.

You can also take a look at this post about the bulk-logged recovery model

http://social.msdn.microsoft.com/Forums/zh-CN/958febc2-5eaf-46e4-b658-4bea087c0b0f

In addition, I did the following test.

In bulk-logged mode, only the operations on allocation pages (such as IAM, PFS, and GAM) are recorded in the log; no data is recorded:

Operation          Context     AllocUnitName
LOP_MODIFY_ROW     LCX_PFS     dbo.SomeTable
LOP_SET_BITS       LCX_IAM     dbo.SomeTable
LOP_SET_BITS       LCX_GAM     dbo.SomeTable
LOP_MODIFY_ROW     LCX_PFS     dbo.SomeTable
LOP_SET_BITS       LCX_IAM     dbo.SomeTable
LOP_SET_BITS       LCX_GAM     dbo.SomeTable
LOP_MODIFY_ROW     LCX_PFS     dbo.SomeTable
LOP_SET_BITS       LCX_IAM     dbo.SomeTable

But for a database in full mode, there is in addition a section that records the data itself.

So in bulk-logged mode, when the transaction log is backed up, how are the data changes captured?

I ran the experiment. In bulk-logged mode, after performing a transaction log backup, the backup file can restore the result of the SELECT * INTO ... FROM ... operation, which should be a minimally logged operation.

The question in the post is: if the data is not recorded in the transaction log, how can the database be restored? The answer is that data page changes are actually tracked through one of the database's system pages, the BCM page.

Reference: DCM, BCM, SQL Server 2008 storage structure

BCM page Structure
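The dump below can presumably be reproduced with something like the following (my own sketch, not from the original article; substitute the database you are inspecting — page (1:7) of a data file is the BCM/ML map page):

DBCC TRACEON (3604, -1)
GO
DBCC PAGE ([pratice], 1, 7, 3)   -- file 1, page 7: the BCM (ML map) page
GO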

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

PAGE: (1:7)

BUFFER:

BUF @0x03EDB9F4

bpage = 0x170E4000          bhash = 0x00000000          bpageno = (1:7)
bdbid = 5                   breferences = 1             bUse1 = 14098
bstat = 0xc00009            blog = 0x32159              bnext = 0x00000000

PAGE HEADER:

Page @0x170E4000

m_pageId = (1:7)            m_headerVersion = 1         m_type = 17
m_typeFlagBits = 0x0        m_level = 0                 m_flagBits = 0x0
m_objId (AllocUnitId.idObj) = 99                        m_indexId (AllocUnitId.idInd) = 0
Metadata: AllocUnitId = 6488064                         Metadata: PartitionId = 0
Metadata: IndexId = 0                                   Metadata: ObjectId = 99
m_prevPage = (0:0)          m_nextPage = (0:0)          pminlen = 90
m_slotCnt = 2               m_freeCnt = 6               m_freeData = 8182
m_reservedCnt = 0           m_lsn = (0:0:1)             m_xactReserved = 0
m_xdesId = (0:0)            m_ghostRecCnt = 0           m_tornBits = 0

Allocation Status

GAM (1:2) = ALLOCATED       SGAM (1:3) = NOT ALLOCATED
PFS (1:1) = 0x44 ALLOCATED 100_PCT_FULL                 DIFF (1:6) = CHANGED
ML (1:7) = NOT MIN_LOGGED

ML_MAP: Header @0x09C2C064 Slot 0, Offset 96 - status = 0x0

ML_MAP: Extent Alloc Status @0x09C2C0C2

(1:0) - (1:43256) = NOT MIN_LOGGED

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

------------------------------------------------- Gorgeous split-line -------------------------------------------------

If you are interested, have a look at this article I wrote earlier to learn more about the transaction log:

Queries on SQL Server database transaction logs

http://www.cnblogs.com/lyhabc/archive/2013/06/10/3130856.html

Based on the fields in the transaction log described above, the reason the Log Explorer software is able to recover data and generate INSERT scripts is presumably that it reads the data page ID from the transaction log records of the deleted data and then saves the data held in those data pages. But if you use the bulk-logged recovery model or a TRUNCATE TABLE statement, Log Explorer probably cannot save you.

A DELETE statement, whether it deletes one record or the whole table, should generate one transaction log record per deleted row, that is, one LSN per row. Under the bulk-logged recovery model, or when TRUNCATE TABLE is used, the LSN of each row of data is not logged; possibly only the delete action itself is recorded.

AllocUnitId: the ID of the allocation unit to which the modified data belongs

AllocUnitName: the name of the table to which the modified data belongs

Page ID: the data page ID

Slot ID: which record of the data page the row occupies

PartitionId: the partition ID of the data page where the data resides
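To convince yourself of the difference, a sketch along these lines can be used (my own example, not from the original article; dbo.SomeTable and its id column are hypothetical):

-- DELETE generates one LOP_DELETE_ROWS log record per deleted row
DELETE FROM dbo.SomeTable WHERE id <= 10
SELECT COUNT(*) AS delete_records
FROM sys.fn_dblog(NULL, NULL)
WHERE [Operation] = 'LOP_DELETE_ROWS'
  AND [AllocUnitName] LIKE 'dbo.SomeTable%'
GO
-- TRUNCATE TABLE mainly generates page/extent deallocation records, not one record per row
-- (the AllocUnitName filter is approximate; some deallocation records may not carry the table name)
TRUNCATE TABLE dbo.SomeTable
GO
SELECT [Operation], [Context], COUNT(*) AS record_count
FROM sys.fn_dblog(NULL, NULL)
WHERE [AllocUnitName] LIKE 'dbo.SomeTable%'
GROUP BY [Operation], [Context]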

------------------------------------------------- Gorgeous split-line -------------------------------------------------

After you back up the transaction log and the log is truncated, even if you then do nothing to the database, you will still see new transaction log records being generated.

As for the reason, for reasons of length, you can go to my other post; the question at the bottom of it has the answer:

Why do log records increase after a full backup, and why do they sometimes increase and sometimes decrease after the backup finishes???

Because some system database pages must be modified when the database is started, some transaction log records are generated.
