SQL Server's memory management is a huge topic that spans many concepts and mechanisms, such as the plan cache, the buffer pool, memory clerks, and so on. This article is only a glimpse that walks through the most common memory management concepts.
Before diving into memory management, the sys.dm_os_memory_clerks view can be used to query the memory clerks, which are the components that consume memory.
SELECT [type], SUM(pages_kb) AS total_pages_kb
FROM sys.dm_os_memory_clerks
WHERE pages_kb <> 0
GROUP BY [type]
ORDER BY total_pages_kb DESC;
The most important memory component in SQL Server is the buffer pool (BPool). The buffer pool is a collection of 8 KB pages; any block of memory larger than 8 KB requires separate allocation management, for example COM objects, CLR code, extended stored procedures, large cached plans, and so on.
The Buffer Manager is responsible for reading data pages and index pages from the data files on disk and placing them in the buffer pool as the data cache. A buffer is a page in memory that is the same size as a data or index page.
SQL Server uses a read-ahead mechanism to bring data pages and index pages from disk into the buffer pool ahead of time to improve performance. Read-ahead is entirely controlled by the system and requires no configuration or tuning.
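As a rough way to observe read-ahead and physical read activity, the Buffer Manager performance counters can be queried. This is a minimal sketch; the exact object_name prefix depends on the instance name, hence the LIKE filter.

SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Page reads/sec', 'Readahead pages/sec', 'Page life expectancy');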
Data cache
The data cache uses hashing to organize page data. Each page is identified by a database ID-file number-page number triple, and this identifier is mapped into a hash table by a hash function. Using the fast lookup provided by the hash table, the database engine can determine whether a target page already exists in the cache; if it does not, the page is read from disk into the cache.
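The pages currently held in the data cache can be inspected through sys.dm_os_buffer_descriptors; for example, counting cached pages per database (an illustrative query, not part of the original article):

SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) AS cached_pages,
       COUNT(*) * 8 AS cached_kb   -- each buffer page is 8 KB
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_pages DESC;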
SQL Server marks pages that have not been referenced for some time as free pages and manages them through a free page list. When a new buffer page is needed, a page is taken from the head of this list.
Each data cache page contains a header area that records information about the last two references to the page as well as status information, such as whether the page is dirty. A dirty page is a page that has been read from disk and then modified. The reference information is used to implement the page replacement policy for data cache pages, which uses the LRU-K algorithm. LRU-K keeps frequently referenced pages in the active buffer pool, while pages with a low reference frequency are gradually released back to the free buffer list.
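The is_modified flag exposed by sys.dm_os_buffer_descriptors shows which cached pages are currently dirty; a simple illustrative breakdown per database:

SELECT DB_NAME(database_id) AS database_name,
       SUM(CASE WHEN is_modified = 1 THEN 1 ELSE 0 END) AS dirty_pages,
       SUM(CASE WHEN is_modified = 0 THEN 1 ELSE 0 END) AS clean_pages
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id;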
LazyWriter Threads
SQL Server creates a lazywriter thread for each NUMA node to scan the buffers associated with that node. The lazywriter thread sleeps and wakes periodically; when it wakes, it checks the length of the free buffer list, and if the length falls below a certain threshold, it scans the whole buffer pool. During the scan, pages with a low reference rate are examined: if such a page is dirty, it is written to disk, and the page is then released back to the free buffer list.
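Lazywriter activity is also exposed through the Buffer Manager performance counters; a rough sketch (the object_name again depends on the instance name):

SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Lazy writes/sec', 'Free list stalls/sec');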
Because SQL Server uses memory dynamically, it must continuously monitor the amount of available memory and track memory changes to decide when to grow or shrink its total memory usage. Internal notifications are generated whenever SQL Server's memory usage increases or decreases by 1 MB, or by 5% of server memory.
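These memory state notifications can be observed in the resource monitor ring buffer; a minimal sketch (the record column contains XML text that would need further parsing to be useful):

SELECT TOP (10) timestamp, record
FROM sys.dm_os_ring_buffers
WHERE ring_buffer_type = 'RING_BUFFER_RESOURCE_MONITOR'
ORDER BY timestamp DESC;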
Checkpoint
The checkpoint thread also periodically scans the buffer pool and writes dirty data pages to disk. The difference between the checkpoint and the lazywriter is that the checkpoint does not add buffers back to the free buffer list. The sole purpose of the checkpoint is to make sure that all pages dirtied before a certain point in time have been written to disk, so that the number of dirty pages in memory is always kept small.
SQL Server records each checkpoint in the transaction log. Because all data modified before the checkpoint is known to be on disk, this reduces the recovery time needed when SQL Server restarts after a failure.
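As a hedged illustration, these checkpoint log records can be seen with the undocumented fn_dblog function (for exploration only, not intended for production use):

SELECT [Current LSN], Operation
FROM fn_dblog(NULL, NULL)
WHERE Operation IN (N'LOP_BEGIN_CKPT', N'LOP_END_CKPT');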
The conditions that trigger the checkpoint are:
- The CHECKPOINT command is issued manually to perform a checkpoint on the specified database (see the example after this list).
- The log is filling up, exceeding 70% of its capacity. Triggering a checkpoint allows the log to be truncated and log space to be freed.
- The expected recovery time is longer than the value set for the "recovery interval" option. If "recovery interval" is set to 1, a checkpoint occurs roughly once per minute. The default value is 0, which means SQL Server chooses an appropriate value, usually about 1 minute.
- A graceful shutdown of SQL Server is requested without the NOWAIT option.
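A minimal sketch of the manual trigger and of the "recovery interval" option mentioned above (recovery interval is a server-wide advanced option):

-- Manually trigger a checkpoint in the current database;
-- an optional target duration in seconds can be requested.
CHECKPOINT;
CHECKPOINT 10;

-- Inspect or change the recovery interval (advanced option).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'recovery interval', 1;  -- target roughly 1 minute
RECONFIGURE;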
The checkpoint thread scans the buffer pool in a non-sequential manner. When it finds a dirty page, it checks whether the neighboring pages on disk are also dirty, so that contiguous dirty pages can be combined into a single gather-write of a larger chunk of data, improving write performance.
Plan Cache
In addition to the data cache, one of the largest consumers of the buffer pool is the cache of procedure and query plans, commonly called the plan cache.
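The contents of the plan cache can be examined through sys.dm_exec_cached_plans; for example, listing the most reused plans together with their query text (an illustrative query):

SELECT TOP (20) cp.usecounts, cp.cacheobjtype, cp.objtype,
       cp.size_in_bytes, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.usecounts DESC;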
SQL Server provides a common caching framework for all caching mechanisms other than the data cache; it consists of the stores themselves and a resource monitor. There are three types of stores:
- Cache store: the plan cache and the rowset cache (rowset clerk) are common cache stores.
- User store: the metadata cache (metadata clerk) is a user store.
- Object store (memory pool): the SNI network buffer pool is an object store.
The cache store and the user store use an LRU mechanism to allocate and free space, implemented with the clock page replacement algorithm. The object store is just a pool of memory chunks and does not need an LRU mechanism.
The cache store uses hash tables to speed up lookups, while the user store and the object store do not.
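The three kinds of stores can be recognized in sys.dm_os_memory_clerks by their type prefixes; a rough illustrative query (pages_kb is available in SQL Server 2012 and later):

SELECT [type], SUM(pages_kb) AS total_pages_kb
FROM sys.dm_os_memory_clerks
WHERE [type] LIKE 'CACHESTORE_%'
   OR [type] LIKE 'USERSTORE_%'
   OR [type] LIKE 'OBJECTSTORE_%'
GROUP BY [type]
ORDER BY total_pages_kb DESC;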
Memory pressure on these stores can be observed through the sys.dm_os_memory_cache_clock_hands view, in particular the removed_last_round_count column: if this value is increasing sharply, it is a strong indication of memory pressure.
SELECT cache_address, [name], [type], clock_hand, clock_status, removed_last_round_count
FROM sys.dm_os_memory_cache_clock_hands;
Memory Broker
Because SQL Server contains a large number of components that require memory, it uses the Memory Broker to analyze memory consumption behavior and to improve dynamic memory distribution, ensuring that every component uses memory efficiently.
The Memory Broker coordinates memory distribution among the components that use caches, such as the buffer pool, the query executor, and the query optimizer. By monitoring memory demand and consumption, and by applying a dynamic allocation algorithm with a feedback mechanism, it steers the components toward an optimal memory distribution.
SELECT * FROM sys.dm_os_ring_buffers WHERE ring_buffer_type = 'RING_BUFFER_MEMORY_BROKER';
Observing memory
- sys.dm_os_memory_clerks: describes the memory clerks, that is, the components currently consuming memory in a SQL Server instance.
- sys.dm_os_memory_objects: describes the memory objects currently allocated by SQL Server.
- sys.dm_os_memory_nodes: describes memory allocation information for each memory node (NUMA node).
- sys.dm_os_memory_pools: displays information about the object stores.
- sys.dm_os_memory_cache_counters: provides a runtime snapshot of the cache stores and user stores (see the example after this list).
- sys.dm_os_memory_cache_hash_tables: describes the hash tables of each active cache.
- sys.dm_os_memory_cache_clock_hands: describes the clock hands used by the clock page replacement algorithm.
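For example, the largest cache and user stores can be listed with a query like the following (pages_kb assumes SQL Server 2012 or later):

SELECT TOP (10) [name], [type], pages_kb, entries_count
FROM sys.dm_os_memory_cache_counters
ORDER BY pages_kb DESC;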
This series, "Everyone is a DBA", is written by Dennis Gao and published on his Cnblogs personal technology blog; reproduction in any form, whether by automated crawlers or manual copying, is prohibited without the author's consent.