Overview of the internal mechanisms of SQL Server in-memory OLTP (iv)


-------------------------------------------------------------

This article is translated from the Microsoft white paper SQL Server In-Memory OLTP Internals Overview: http://technet.microsoft.com/en-us/library/dn720242.aspx

The translation may contain errors; corrections are welcome.

-------------------------------------------------------------

Native compilation of tables and stored procedures

In-memory OLTP introduces the concept of native compilation into SQL Server 2014. SQL Server can natively compile a stored procedure that accesses memory-optimized tables, and in fact natively compiles the memory-optimized table itself. Native compilation provides faster data access and more efficient query execution than traditional interpreted Transact-SQL.

What is native compilation?

Native compilation is the process of converting programming constructs into native code, consisting of processor instructions that the CPU can execute directly, without the need for further compilation or interpretation.

The Transact-SQL language is made up of high-level constructs such as CREATE TABLE and SELECT ... FROM. The in-memory OLTP compiler parses these constructs and compiles them into native code for fast runtime data access and query execution. The in-memory OLTP compiler in SQL Server 2014 takes the definitions of tables and stored procedures as input. It generates C code and uses the Visual C compiler to produce the native code.

Tables and stored procedures are compiled into DLL files that are loaded into memory and linked into the SQL Server process.

When SQL Server creates a memory-optimized table or a natively compiled stored procedure, it compiles it into a DLL file. In addition, the DLL files for tables and stored procedures are recompiled after a database or server restart. The information required to rebuild the DLLs is stored in the database metadata; the DLL files themselves are not part of the database, and therefore are not part of a database backup.

Maintenance of DLL files

The DLL files for memory-optimized tables and natively compiled stored procedures, along with other generated files kept for troubleshooting and support purposes, are stored in the file system.

The following query shows the DLL files for all the tables and stored procedures that are loaded into memory on the current server:

SELECT name, description FROM sys.dm_os_loaded_modules WHERE description = 'XTP Native DLL'

Database administrators do not need to maintain the files generated by native compilation. SQL Server automatically deletes generated files that are no longer needed, for example when tables and stored procedures are dropped or when the database is dropped, and also upon server or database restart.

Native compilation of tables

When you create a memory-optimized table with the CREATE TABLE statement, the table information is written to the database metadata, the table and index structures are created in memory, and the table is compiled into a DLL file.

Please refer to the following sample script, which creates a database and a memory-optimized table:

USE master
GO
CREATE DATABASE db1
GO
ALTER DATABASE db1 ADD FILEGROUP db1_mod CONTAINS MEMORY_OPTIMIZED_DATA
GO
-- adapt filename as needed
ALTER DATABASE db1 ADD FILE (name='db1_mod', filename='c:\data\db1_mod')
    TO FILEGROUP db1_mod
GO

USE db1
GO
CREATE TABLE dbo.t1
(c1 int NOT NULL PRIMARY KEY NONCLUSTERED,
 c2 int)
WITH (MEMORY_OPTIMIZED=ON)
GO
-- retrieve the path of the DLL for table t1
SELECT name, description FROM sys.dm_os_loaded_modules
WHERE name LIKE '%xtp_t_' + cast(db_id() AS varchar(10))
    + '_' + cast(object_id('dbo.t1') AS varchar(10)) + '.dll'
GO

The creation of the table compiles the DLL file for the table and loads the DLL file into memory. The query immediately after the CREATE TABLE statement retrieves the path to the DLL file for the table.

The DLL for table t1 contains the functions needed to interpret the table's index structure and row format. SQL Server uses this DLL to traverse indexes and retrieve rows, as well as the contents of the rows.

Native compilation of stored procedures

Stored procedures marked with the NATIVE_COMPILATION option are natively compiled. This means that the Transact-SQL statements in the procedure are compiled into native code, for efficient execution of performance-critical business logic.

Refer to the following sample stored procedure, which inserts one million rows into the table dbo.t1 from the previous example:

CREATE PROCEDURE dbo.p1
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL=SNAPSHOT, LANGUAGE=N'us_english')
    DECLARE @i int = 1000000
    WHILE @i > 0
    BEGIN
        INSERT dbo.t1 VALUES (@i, @i+1)
        SET @i -= 1
    END
END
GO

EXEC dbo.p1
GO

-- reset
DELETE FROM dbo.t1
GO

The DLL for procedure p1 interacts directly with the DLL for table t1 and with the in-memory OLTP storage engine to insert the rows as fast as possible.

The in-memory OLTP compiler leverages the query optimizer to create an efficient execution plan for each query in the stored procedure. Note that for natively compiled stored procedures, the query execution plan is compiled into the DLL. Because SQL Server 2014 does not support automatic recompilation of natively compiled stored procedures, changes to the table data may require you to drop and recreate certain procedures so that new query plans are incorporated into their DLLs. Note also that natively compiled stored procedures are recompiled at first execution after a server restart, as well as after failover to an AlwaysOn secondary; this means the query optimizer creates new query plans that are then compiled into the procedure DLLs.

Compilation and query processing

Figure 15 shows the compilation process for a natively compiled stored procedure:

Figure 15 Native compilation of stored procedures

1. The user issues a CREATE PROCEDURE statement to SQL Server

2. The parser and algebrizer create the processing flow for the procedure, as well as query trees for the Transact-SQL queries in the procedure

3. The optimizer creates optimized query execution plans for all the queries in the stored procedure

4. The in-memory OLTP compiler takes over, receives the optimized query plans, and generates a DLL containing the machine code to execute the stored procedure

5. The generated DLL is loaded into memory and linked into the SQL Server process

A call to a natively compiled stored procedure is converted to a call to a function in the DLL, as shown in Figure 16:

Figure 16 Execution of a natively compiled stored procedure

1. The user issues an 'EXEC myproc' statement

2. The parser extracts the name and the stored procedure parameters

3. The in-memory OLTP runtime locates the entry point in the DLL for the stored procedure

4. The DLL executes the procedure logic, and the results are returned to the client

Parameter sniffing

Interpreted Transact-SQL stored procedures are compiled into physical execution plans at first execution (invocation) time, while natively compiled stored procedures are compiled at creation time. When interpreted stored procedures are compiled at invocation, the optimizer uses the parameter values supplied for that invocation to generate the execution plan. This use of parameters during compilation is called "parameter sniffing."

Parameter sniffing does not apply to compiling a natively compiled stored procedure. All parameters of such stored procedures are treated as having UNKNOWN values.
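Because the plan is compiled at creation time with parameters treated as UNKNOWN, one way to steer the plan is to supply a representative value with the OPTIMIZE FOR query hint. The sketch below is illustrative only: the procedure name and the value 1000 are hypothetical, and it assumes the dbo.t1 table from the earlier example.

```sql
-- Hedged sketch: compile the plan as if @c1 were 1000
CREATE PROCEDURE dbo.p_lookup @c1 int
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL=SNAPSHOT, LANGUAGE=N'us_english')
    SELECT c2 FROM dbo.t1 WHERE c1 = @c1
    OPTION (OPTIMIZE FOR (@c1 = 1000))
END
GO
```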

SQL Server feature support

Many SQL Server features support in-memory OLTP and databases with memory-optimized tables, but not all. For example, AlwaysOn components, log shipping, and database backup and restore fully support in-memory OLTP. Transactional replication is partially supported: memory-optimized tables can be used as subscribers, but not as publication articles. Database mirroring, however, is not supported. You can use SQL Server Management Studio to work with memory-optimized tables, and SSIS is also supported.

For a complete list of supported and unsupported features, see the documentation for SQL Server in-memory OLTP.

Management experience

In-memory OLTP is fully integrated into the SQL Server management experience. As mentioned above, SQL Server Management Studio can manage memory-optimized tables, filegroups, and natively compiled stored procedures. You can also use SQL Server Management Objects (SMO) and PowerShell to manage your memory-optimized objects.

Metadata

Some existing metadata objects have been enhanced to provide information about memory-optimized tables and stored procedures, and new objects have been added.

One function has been enhanced:

    • OBJECTPROPERTY - now includes the property TableIsMemoryOptimized

Catalog views

The following system views have been enhanced:

    • sys.tables has three new columns:
      • durability (0 or 1)
      • durability_desc (SCHEMA_AND_DATA or SCHEMA_ONLY)
      • is_memory_optimized (0 or 1)
    • sys.table_types now has an is_memory_optimized column
    • sys.indexes now has a possible type value of 7 with a corresponding type_desc value of NONCLUSTERED HASH. (A memory-optimized nonclustered index has a type value of 2 and a type_desc of NONCLUSTERED, just like a disk-based nonclustered B-tree index.)
    • sys.index_columns' is_descending_key column now has different semantics: for hash indexes, the value is meaningless and can be ignored
    • sys.data_spaces now has a possible type value of FX with a corresponding type_desc value of MEMORY_OPTIMIZED_DATA_FILEGROUP
    • sys.sql_modules and sys.all_sql_modules now contain a uses_native_compilation column
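For instance, the new sys.tables columns can be queried to see which tables in a database are memory-optimized and what durability setting each one has (a simple illustrative query):

```sql
-- List memory-optimized tables with their durability setting
SELECT name, durability, durability_desc, is_memory_optimized
FROM sys.tables
WHERE is_memory_optimized = 1;
```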

In addition, there are several new metadata objects that specialize in providing memory-optimized table information.

A new catalog view, sys.hash_indexes, has been added to support hash indexes. This view is based on sys.indexes, so it has the same columns plus one additional column: bucket_count shows the number of hash buckets specified for the index, a value that cannot change unless the index is dropped and rebuilt.
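For example, a query along these lines (illustrative) shows the declared bucket count for each hash index:

```sql
-- Show the declared hash bucket count for every hash index
SELECT object_name(object_id) AS table_name,
       name AS index_name,
       bucket_count
FROM sys.hash_indexes;
```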

Dynamic management Objects

The following SQL Server dynamic management views are new for in-memory OLTP. (The xtp identifier stands for "eXtreme Transaction Processing".) The DMVs beginning with sys.dm_db_xtp_* provide information about individual databases that enable in-memory OLTP, while the DMVs beginning with sys.dm_xtp_* provide instance-level information. You can read the details about these objects in the documentation. Some of these DMVs were mentioned in the relevant sections earlier in this article.

For more information about dynamic management views that support memory-optimized tables, see dynamic management views for memory-optimized tables.

    • sys.dm_db_xtp_checkpoint
    • sys.dm_db_xtp_checkpoint_files
    • sys.dm_db_xtp_gc_cycles_stats
    • sys.dm_xtp_gc_stats
    • sys.dm_xtp_system_memory_consumers
    • sys.dm_xtp_threads
    • sys.dm_xtp_transaction_stats
    • sys.dm_db_xtp_index_stats
    • sys.dm_db_xtp_memory_consumers
    • sys.dm_db_xtp_object_stats
    • sys.dm_db_xtp_transactions
    • sys.dm_db_xtp_table_memory_stats

XEvents

The in-memory OLTP engine provides xevents to help with monitoring and troubleshooting. You can run the following query to view the currently available xevents:

SELECT p.name, o.name, o.description
FROM sys.dm_xe_objects o JOIN sys.dm_xe_packages p ON o.package_guid = p.guid
WHERE p.name = 'XtpEngine';
GO

Performance counters

The in-memory OLTP engine provides performance counters to help monitor and troubleshoot. You can run the following query to view the currently available performance counters:

SELECT object_name, counter_name
FROM sys.dm_os_performance_counters
WHERE object_name LIKE 'XTP%';
GO

A database counter object called XTP Memory Used is also provided, to track memory usage by memory-optimized tables at the database level.

Memory Usage Report

To get a real-time report of the memory currently used by memory-optimized tables and indexes, you can run a report available in SQL Server Management Studio. In Object Explorer, right-click the name of a database containing memory-optimized tables and select Reports | Standard Reports | Memory Usage By Memory Optimized Objects. You will see a report similar to Figure 17.

Figure 17 Memory usage Report for memory-optimized objects

This report shows the space used by the tables' rows and indexes, as well as a small amount of space used by the system. Keep in mind that hash indexes have memory allocated for the declared number of hash buckets as soon as they are created, so the report shows memory usage for those indexes before any rows are inserted. For nonclustered indexes, no memory is allocated until rows are added, and the memory requirement depends on the size of the index keys and the number of rows.

Memory requirements

When running in-memory OLTP, SQL Server needs enough memory configured to hold all the memory-optimized tables. Failure to allocate enough memory causes transactions to fail at run time during any operation that requires additional memory. Typically this happens during INSERT or UPDATE operations, but it can also happen for DELETE operations on a table with a memory-optimized nonclustered index. As described in the previous section, a delete can cause a page merge, and because index pages are never updated, the merge operation allocates new pages. The in-memory OLTP memory manager is fully integrated with the SQL Server memory manager and, where possible, reacts to memory pressure by cleaning up old row versions more aggressively.

When estimating the amount of memory needed for memory-optimized tables, a rule of thumb is to have twice the amount of memory that the data will take up. Beyond that, the total memory requirement depends on the workload: if there are many data modifications due to OLTP operations, more memory is needed for row versions; if the workload is mostly reads of existing data, less memory may be required.

Planning the space required for hash indexes is simple: each bucket requires 8 bytes, so the memory needed is just the number of buckets times 8 bytes. The size of a memory-optimized nonclustered index depends on both the size of the index key and the number of rows in the table. You can assume each index row is 8 bytes plus the size of the index key (say K bytes), so the maximum number of rows that fit on a page is 8176/(K+8). Dividing the expected number of rows by that result gives an initial estimate of the number of pages. Keep in mind that not all index pages are 8 KB and not all pages are full, and because pages need to be split and merged, newly created pages require space until the garbage collection process removes them.
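The arithmetic above can be sketched as a quick back-of-envelope calculation. The bucket count, key size, and row count below are made-up illustrative values, not figures from this paper:

```sql
DECLARE @bucket_count bigint = 1000000;  -- declared hash buckets (illustrative)
DECLARE @key_size int = 8;               -- nonclustered index key size K in bytes (illustrative)
DECLARE @row_count bigint = 5000000;     -- expected number of rows (illustrative)

SELECT @bucket_count * 8                      AS hash_index_bytes,      -- 8 bytes per bucket
       8176 / (@key_size + 8)                 AS max_rows_per_page,     -- 8176/(K+8)
       @row_count / (8176 / (@key_size + 8))  AS estimated_index_pages; -- initial page estimate
```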

Managing memory with Resource governor

SQL Server Resource Governor is a tool that lets you manage memory proactively. Starting with the CTP2 release, a database can be bound to a resource pool, and you can assign a certain amount of memory to that pool. The memory-optimized tables in that database cannot use more memory than that, and there is a fixed limit of 80% of the pool's allocated memory, to ensure the system remains stable under memory pressure. In fact, all memory consumed by memory-optimized tables and their indexes is managed through Resource Governor; no other class of memory is managed this way. If a database is not explicitly mapped to a resource pool, it is implicitly mapped to the default pool.

For more information about SQL Server Resource Governor, see the whitepaper written when Resource Governor was introduced in SQL Server 2008: http://view.officeapps.live.com/op/view.aspx?src=http%3a%2f%2fdownload.microsoft.com%2fdownload%2fd%2fb%2fd%2fdbde7972-1eb9-470a-ba18-58849db3eb3b%2fresourcegov.docx

Changes to Resource Governor in SQL Server 2012 can be read about here: http://msdn.microsoft.com/en-us/library/jj573256.aspx

The first step in creating a memory pool for an in-memory OLTP database is to specify a MAX_MEMORY_PERCENT value. This specifies the percentage of SQL Server memory that may be allocated to memory-optimized tables in databases associated with the pool.

For example:

-- the percentage value here is illustrative
CREATE RESOURCE POOL HkPool WITH (MAX_MEMORY_PERCENT=50);
ALTER RESOURCE GOVERNOR RECONFIGURE;

After you create a resource pool, you need to use the stored procedure sp_xtp_bind_db_resource_pool to bind the databases you want to manage to their respective pools. Note that one pool can contain multiple databases, but a database can be associated with only one pool at any given time.

The following is an example:

EXEC sp_xtp_bind_db_resource_pool 'HkDB', 'HkPool';

Because memory is associated with a resource pool at allocation time, simply associating a database with a new resource pool does not transfer memory that has already been allocated. To transfer it, you need to take the database offline and bring it back online; as the data is read into the memory-optimized tables, the memory is allocated from the new resource pool.

For example:

ALTER DATABASE [HkDb] SET OFFLINE; ALTER DATABASE [HkDb] SET ONLINE;

If you want to remove the binding between a database and a resource pool, use the stored procedure sp_xtp_unbind_db_resource_pool. For example, you may want to move the database to a different pool, or remove the pool entirely to replace it with another.

EXEC sp_xtp_unbind_db_resource_pool 'HkDB';

The Analyze, Migrate and Report (AMR) toolset to help with migration

After you install SQL Server 2014 (provided you selected the complete set of management tools), the Analyze, Migrate and Report (AMR) toolset can provide recommendations about which tables and stored procedures you might want to consider migrating to in-memory OLTP.

The analyze and report aspects of the toolset work with a new set of data collectors for the Management Data Warehouse to capture workload bottlenecks and performance metrics. The collected data can be used to generate reports, available by right-clicking the Management Data Warehouse database and choosing Reports | Management Data Warehouse, and then selecting the Transaction Performance Analysis Overview option.

One of the reports suggests which tables might provide the greatest performance gain if converted to memory-optimized tables, and at the same time explains how expensive the conversion would be, based on how many unsupported features the table uses. Another report suggests which stored procedures that use memory-optimized tables might benefit from conversion to natively compiled stored procedures.

Once the critical tables have been identified, the migrate aspect of the toolset, the Memory Optimization Advisor, can help you migrate them. Right-click a disk-based table and choose Memory Optimization Advisor from the context menu. The advisor identifies incompatibilities in the object, generates reports outlining them, and guides the actual migration for a limited set of tables.

Because memory-optimized tables can be accessed both by interpreted Transact-SQL and by natively compiled stored procedures, and because memory-optimized tables can be used in the same queries as disk-based tables, migration to in-memory OLTP can be done gradually and incrementally. Based on the recommendations in the Management Data Warehouse reports, you can convert tables to memory-optimized tables one at a time, starting with the ones that would benefit most from the in-memory structures. As you start seeing the benefits of the conversion, you can continue converting more and more tables, while still accessing them through the regular Transact-SQL interface, so the application requires only minimal changes.

Once your tables have been converted, you can begin planning a rewrite of performance-critical stored procedures as natively compiled procedures, again starting with the ones the Management Data Warehouse reports indicate would provide the greatest benefit. The Native Compilation Advisor (another tool, reached by right-clicking a stored procedure in Management Studio) will identify the Transact-SQL incompatibilities and help you migrate these objects.

Summary

The in-memory OLTP feature of SQL Server provides the ability to create and work with memory-optimized tables that are managed extremely efficiently, providing a performance optimization for OLTP workloads. Memory-optimized tables are accessed with true multi-version optimistic concurrency control, which requires no locks or latches during processing. All in-memory OLTP memory-optimized tables must have at least one index, and all access is via indexes. Memory-optimized tables can be referenced in the same transactions as disk-based tables, with only a few restrictions. Natively compiled stored procedures are the fastest way to access memory-optimized tables and execute business logic efficiently.

---------------------------end of full text-------------------------------
