Transaction log records in in-memory OLTP (Hekaton)


In today's article, I want to discuss in detail how transactions against In-Memory OLTP tables are written to the transaction log. As you may know, In-Memory OLTP provides two durability options for your memory-optimized tables:

    • SCHEMA_ONLY
    • SCHEMA_AND_DATA

Today I don't want to spend much time on SCHEMA_ONLY, because with this option nothing is written to the transaction log (your data is lost after a SQL Server restart). Today we will concentrate on the durability of the SCHEMA_AND_DATA option.
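For reference, the durability option is chosen per table in the WITH clause of CREATE TABLE. The following is only an illustrative sketch (the table, column, and index names are made up for this example) showing where the option goes; a SCHEMA_ONLY table like this one would generate no log records at all:

```sql
-- Illustrative only: a non-durable memory-optimized table.
-- SCHEMA_ONLY means the table definition survives a restart, the data does not,
-- and no transaction log records are generated for DML against it.
CREATE TABLE SessionState_SchemaOnly
(
    SessionId INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
    Payload VARCHAR(100) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
GO
```

Swapping SCHEMA_ONLY for SCHEMA_AND_DATA is all it takes to make such a table fully durable, which is the case we examine for the rest of the article.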

SCHEMA_AND_DATA

With the SCHEMA_AND_DATA durability option, SQL Server must write your transactions to the transaction log, because every In-Memory OLTP transaction must be durable, just as with traditional disk-based tables. However, transaction logging in In-Memory OLTP is more optimized than for traditional tables. In-Memory OLTP supports multiple concurrent log streams (not yet implemented in SQL Server 2014), and it logs only the logical transactions that occur.

What does a logical transaction mean? Suppose you have a table with a clustered index and 5 nonclustered indexes defined on it. If you insert 1 record into that table, SQL Server must log the insert into the clustered index, plus 5 additional inserts into the nonclustered indexes. The more nonclustered indexes you define on your table, the more log records SQL Server has to write, and SQL Server can only be as fast as the transaction log.

With In-Memory OLTP things change. In-Memory OLTP logs only the logical modifications that occur in your transaction. SQL Server does not log changes to your hash or range indexes. Therefore one log record describes only the logical INSERT/UPDATE/DELETE statement that occurred. As a result, In-Memory OLTP writes less data to your transaction log, so your transactions can commit faster.

Let's verify it!

I want to use a simple example to show you how much data SQL Server writes to your transaction log when you first insert 10,000 records into a traditional disk-based table, and then into a memory-optimized table. The following code creates a simple table and inserts 10,000 records in a WHILE loop. Then I look at the transaction log with the undocumented system function sys.fn_dblog.

-- Create a disk-based table
CREATE TABLE TestTable_DiskBased
(
    Col1 INT NOT NULL PRIMARY KEY,
    Col2 VARCHAR(100) NOT NULL INDEX idx_Col2 NONCLUSTERED,
    Col3 VARCHAR(100) NOT NULL
)
GO

-- Insert 10000 records into the table
DECLARE @i INT = 0

BEGIN TRANSACTION
WHILE (@i < 10000)
BEGIN
    INSERT INTO TestTable_DiskBased VALUES (@i, @i, @i)

    SET @i += 1
END
COMMIT
GO

-- SQL Server logged more than 20000 log records, because we have 2 indexes
-- defined on the table (Clustered Index, Non-Clustered Index)
SELECT * FROM sys.fn_dblog(NULL, NULL)
WHERE PartitionId IN
(
    SELECT partition_id FROM sys.partitions
    WHERE object_id = OBJECT_ID('TestTable_DiskBased')
)
GO

From the output of the system function you can see that you get a little more than 20,000 log records. This makes sense, because we have 2 indexes defined on the table (a clustered index and a nonclustered index).
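If you prefer a summary over scrolling through the raw output, you can aggregate the log records per operation. This sketch reuses the same undocumented sys.fn_dblog function (being undocumented, its column names can change between versions):

```sql
-- Count the log records per operation for the disk-based table;
-- the total should come out a little above 20,000
SELECT Operation, COUNT(*) AS LogRecordCount
FROM sys.fn_dblog(NULL, NULL)
WHERE PartitionId IN
(
    SELECT partition_id FROM sys.partitions
    WHERE object_id = OBJECT_ID('TestTable_DiskBased')
)
GROUP BY Operation
ORDER BY LogRecordCount DESC
GO
```

You should see roughly one LOP_INSERT_ROWS record per index per inserted row, plus a smaller number of page-allocation and page-split records.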

Now let's take a look at how the log records change with a memory-optimized table. The following code shows the necessary preparation for an In-Memory OLTP database: we simply add a new memory-optimized filegroup and add a container to it:

-- Add MEMORY_OPTIMIZED_DATA filegroup to the database
ALTER DATABASE InMemoryOLTP
ADD FILEGROUP InMemoryOLTPFileGroup CONTAINS MEMORY_OPTIMIZED_DATA
GO

USE InMemoryOLTP
GO

-- Add a new file to the previously created filegroup
ALTER DATABASE InMemoryOLTP ADD FILE
(
    NAME = N'InMemoryOLTPContainer',
    FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\InMemoryOLTPContainer'
)
TO FILEGROUP [InMemoryOLTPFileGroup]
GO

Next I create a new memory-optimized table. Here I have chosen a bucket count of 16384 for the hash index to reduce the possibility of hash collisions. In addition, I create 2 range indexes on the Col2 and Col3 columns.

-- Create a Memory-Optimized table
CREATE TABLE TestTable_MemoryOptimized
(
    Col1 INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 16384),
    Col2 VARCHAR(100) COLLATE Latin1_General_100_BIN2 NOT NULL INDEX idx_Col2,
    Col3 VARCHAR(100) COLLATE Latin1_General_100_BIN2 NOT NULL INDEX idx_Col3
) WITH
(
    MEMORY_OPTIMIZED = ON,
    DURABILITY = SCHEMA_AND_DATA
)
GO

In total there are 3 indexes on that table (1 hash index and 2 range indexes). If you inserted 10,000 records into a comparable traditional table, it would generate nearly 30,000 log records, one for each insert in each index.

-- Copy out the highest "Current LSN"
SELECT * FROM sys.fn_dblog(NULL, NULL)
GO

-- Insert 10000 records into the table
DECLARE @i INT = 0

BEGIN TRANSACTION
WHILE (@i < 10000)
BEGIN
    INSERT INTO TestTable_MemoryOptimized VALUES (@i, @i, @i)

    SET @i += 1
END
COMMIT
GO

-- Just a few log records!
SELECT * FROM sys.fn_dblog(NULL, NULL)
WHERE [Current LSN] > '0000002f:000001c9:0032' -- highest LSN from above
GO

But now when you look at the transaction log, you will see that the 10,000 inserts generated only a handful of log records; in my case, 17!
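Again you can aggregate instead of eyeballing the output. This sketch reuses the LSN captured in the run above; on your own system that value will be different, so substitute the highest "Current LSN" you copied out before the insert:

```sql
-- Summarize the log records written after the saved LSN;
-- the grand total should be a handful, dominated by LOP_HK records
SELECT Operation, COUNT(*) AS LogRecordCount
FROM sys.fn_dblog(NULL, NULL)
WHERE [Current LSN] > '0000002f:000001c9:0032' -- highest LSN from above, run-specific
GROUP BY Operation
ORDER BY LogRecordCount DESC
GO
```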

The magic happens in the LOP_HK log record. In this particular log record, In-Memory OLTP bundles multiple modifications to your memory-optimized table. You can also decompose the LOP_HK log records by using the system function sys.fn_dblog_xtp:

-- Let's look into a LOP_HK log record
SELECT * FROM sys.fn_dblog_xtp(NULL, NULL)
WHERE [Current LSN] > '0000002f:000001c9:0032' AND Operation = 'LOP_HK'
GO

As you can see, In-Memory OLTP generated almost 10,000 LOP_HK log records, one record for each logical insert that occurred on the table.
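You can also group the decomposed records by their In-Memory OLTP operation description. Note that sys.fn_dblog_xtp is just as undocumented as sys.fn_dblog, so the operation_desc column name and its values (such as HK_LOP_BEGIN_TX, HK_LOP_INSERT_ROW, and HK_LOP_COMMIT_TX) are an assumption that may vary between builds:

```sql
-- Break down the LOP_HK payload by In-Memory OLTP operation;
-- expect roughly one HK_LOP_INSERT_ROW per logical insert,
-- plus a few transaction begin/commit markers
SELECT operation_desc, COUNT(*) AS RecordCount
FROM sys.fn_dblog_xtp(NULL, NULL)
WHERE [Current LSN] > '0000002f:000001c9:0032' -- highest LSN from above, run-specific
  AND Operation = 'LOP_HK'
GROUP BY operation_desc
GO
```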

Summary

In-Memory OLTP provides a stunning performance boost because it is based on new principles such as MVCC and lock-free data structures. In addition, it generates only a small number of transaction log records, because only logical changes are recorded to the transaction log. I hope this article has given you a better understanding of how In-Memory OLTP can improve your transaction log throughput.

Thanks for your attention!

