SQL Server 2008 Memory and I/O performance monitoring

Source: it168, 2011-04-12

The following applies to 32-bit Windows environments; 64-bit systems are not covered here.

  User Mode and Kernel Mode

To prevent user programs from accessing or tampering with critical parts of the operating system, Windows uses two processor access modes: user mode and kernel mode. As the names imply, kernel mode is reserved for the operating system's core code and low-level drivers, while user mode is for user applications. In kernel mode, code can access all memory and hardware and use every processor instruction. Because operating system code runs at a higher privilege level than user programs, system designers can ensure that user programs cannot accidentally destabilize the system.

  Physical Memory (RAM)

This is the amount of memory actually installed, i.e. the capacity of the memory modules. The CPU's address lines determine how much memory can be addressed directly: on a 32-bit platform the CPU's address space is at most 4GB, so it can support at most 4GB of physical memory. On a 32-bit operating system, even if you buy 64GB of RAM, you still have only 4GB of usable physical memory space.

  Virtual Memory

If the computer lacks enough RAM to run a program or operation, Windows uses virtual memory to compensate. Virtual memory combines the computer's RAM with temporary space on the hard disk: when RAM runs low, data is moved from RAM to a space called the paging file, and moving data into and out of the paging file frees up RAM so the work can be completed.

  Virtual Address Space (VAS)

In Windows, every process is given its own virtual address space, which covers a fairly large range; on a 32-bit system it spans 0x00000000~0xFFFFFFFF (4GB). Windows uses a paging mechanism that divides the 4GB address space into fixed-size pages and maps each page of the virtual address space onto physical memory.

By default, the lower 2GB of the virtual address space (0x00000000~0x7FFFFFFF) is the user address space, and the upper 2GB (0x80000000~0xFFFFFFFF) is reserved for kernel mode. In practice, a user process therefore has only 2GB of virtual address space.

  Virtual Memory Manager

The Virtual Memory Manager is responsible for mapping the virtual address space onto physical memory. When physical memory is insufficient, it uses the page file to hold data temporarily (this is what is commonly called virtual memory), and it uses page table entries (PTEs) to track each address mapping.

At this point, 60GB of a 64GB server's memory cannot be addressed. Isn't that a waste? What can be done about it?

/3GB and IncreaseUserVa

The /3GB switch increases the address space available to the SQL Server process by shrinking the kernel-mode portion. By default, user mode and kernel mode each occupy 2GB of the 4GB address space; with /3GB, SQL Server gains an extra 1GB of user-mode virtual address space (3GB user / 1GB kernel).

/3GB Switch Usage:

Modify the [operating systems] section of the boot.ini file, for example:
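The ARC path and description string below are placeholders that will differ on your system; the relevant change is simply appending /3GB to the existing switches:

    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Enterprise" /fastdetect /3GB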

Alternatively, use the bootcfg command:
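A sketch, assuming the switch is being added to boot entry 1 (check your entry IDs with bootcfg /query first):

    bootcfg /raw "/3GB" /a /id 1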

On Windows Server 2008, you can make the equivalent change with the bcdedit command:
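For example, to give user mode 3GB (3072MB) of address space; a reboot is required for the change to take effect:

    bcdedit /set IncreaseUserVa 3072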

  Physical Address Extension (PAE)

Physical Address Extension (PAE) is an extension of 32-bit Intel processors that allows a 32-bit system to support up to 64GB of physical memory. In other words, with more than 4GB of physical memory installed, additional physical memory can be mapped into applications' virtual address spaces.

To enable it, modify the entry in the boot.ini file as follows:
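Again the ARC path and description are placeholders; the relevant change is appending /PAE:

    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Enterprise" /fastdetect /PAE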

On Windows Server 2008, the equivalent can be done with the following command:
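A reboot is required after running it:

    bcdedit /set PAE ForceEnable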

  /PAE and /3GB

The two switches serve different purposes but are configured in the same place, so it is natural to wonder whether they can be combined. If the computer needs to use more than 16GB of physical memory, make sure the boot.ini file does not contain the /3GB switch: with that much RAM the kernel needs the full 2GB of kernel-mode address space to manage the page table entries.

  Address Windowing Extensions (AWE)

AWE is a set of extensions to Windows memory management that lets applications use more memory than the 2~3GB available through standard 32-bit addressing. AWE allows an application to acquire physical memory and then map views of that nonpaged memory into its 32-bit address space. Although the 32-bit address space is limited to 4GB, the nonpaged memory behind it can be far larger than 4GB.

In SQL Server 2008, you can enable this in SQL Server Management Studio: right-click the appropriate instance, choose Properties, open the Memory page, and under Server memory options check "Use AWE to allocate memory".
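Alternatively, AWE can be turned on with sp_configure. A minimal T-SQL sketch; note that the SQL Server service account also needs the "Lock pages in memory" Windows privilege, and the instance must be restarted before AWE memory is used:

    -- Enable advanced options, then turn on AWE (32-bit SQL Server 2005/2008 only)
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'awe enabled', 1;
    RECONFIGURE;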

  Memory management for SQL Server

The primary memory component of SQL Server is the buffer pool. The data cache loads data pages from disk into memory so they can be read and written quickly. The procedure cache stores execution plans so that statements do not have to be recompiled, and it is the other main cache store. User stores hold component-specific cached data. Object stores are simple pools of memory blocks that need no LRU or costing logic; for example, the SQL Server Network Interface (SNI) uses an object store as its network buffer pool.

The SQL Server memory manager is a three-layer hierarchy. The bottom layer consists of memory nodes. The next layer consists of memory clerks, memory caches, and memory pools. The top layer consists of memory objects, which are what components typically use to allocate memory within an instance of SQL Server.

A memory node (sys.dm_os_memory_nodes) provides the interface to, and implementation of, the low-level allocators. On NUMA hardware, memory nodes correspond to CPU nodes. Within SQL Server, only memory clerks have access to memory nodes.

Memory clerks (sys.dm_os_memory_clerks) use the memory node interfaces to allocate memory; the memory node also tracks the memory allocated by each clerk for diagnostic purposes. Every component that allocates a significant amount of memory must create its own memory clerk and allocate all of its memory through the clerk interfaces. Components create their clerks when SQL Server starts.

Memory objects (sys.dm_os_memory_objects) are heaps. They provide allocations of finer granularity than memory clerks, and SQL Server components use memory objects rather than memory clerks directly. Memory objects use the clerk's page allocator interface to allocate pages; they do not use the virtual memory or shared memory interfaces. Depending on the allocation pattern, a component can create different types of memory objects to allocate regions of arbitrary size.

The SQL Server buffer pool hands out memory only in 8KB pages. Blocks larger than 8KB are managed separately (the multi-page allocator) and are generally obtained directly from the operating system, outside the buffer pool. Only data cache pages can use AWE memory, and those are tracked separately.

  System views for SQL Server memory

sys.dm_os_memory_cache_clock_hands returns the status of each clock hand for a specific cache clock: whether the hand is moving, how many rounds it has made, how many entries it has removed, and so on. This view is useful for examining the current state of the clock hands and their movement history.

sys.dm_os_memory_cache_counters returns a snapshot of cache health. It provides summary information for each cache store and user store: the amount of memory allocated, the amount in use, the number of entries, and the number of entries in use. Use this view to find a cache's memory usage and entry counts.
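A quick way to see how much memory each cache store holds (column names as in SQL Server 2008; later versions consolidate the two page columns into pages_kb):

    -- Memory used and entry counts per cache store (SQL Server 2008 column names)
    SELECT name,
           type,
           single_pages_kb,
           multi_pages_kb,
           entries_count,
           entries_in_use_count
    FROM sys.dm_os_memory_cache_counters
    ORDER BY single_pages_kb + multi_pages_kb DESC;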

sys.dm_os_memory_cache_hash_tables returns a row for each active cache in the instance, describing each cache store's hash table: maximum, minimum, and average bucket lengths, and so on. This view is useful for examining how entries are distributed across the hash buckets of each cache.

sys.dm_os_memory_cache_entries returns information about all entries in the caches. Use this view to trace cache entries back to their associated objects, and to get statistics about the cache entries.

sys.dm_os_sys_info returns a set of useful miscellaneous information about the computer and about the resources available to, and consumed by, SQL Server.

sys.dm_os_sys_memory returns memory information from the operating system. SQL Server is constrained by, and responds to, external memory conditions at the operating system level and the physical limits of the underlying hardware, so determining the state of the whole system is an important part of evaluating SQL Server memory usage.

sys.dm_os_virtual_address_dump returns information about ranges of pages in the virtual address space of the calling process.

The DBCC MEMORYSTATUS command provides a snapshot of SQL Server's current memory state and is an important starting point for analyzing memory bottlenecks.

  Memory pressure

Monitoring of memory resources in SQL Server focuses mainly on paging throughput, page faults, and available memory, with particular attention to paging file utilization. The relevant objects, counters, thresholds, and descriptions are covered below.

SQL Server exposes the sys.dm_os_performance_counters view, which can be used to monitor Buffer Manager and Memory Manager counters such as page life expectancy, checkpoint pages/sec, lazy writes/sec, and the buffer cache hit ratio.
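For example, the Buffer Manager counters mentioned above can be read directly from the view (the object_name prefix varies with the instance name, so a LIKE filter is used):

    -- Key Buffer Manager counters from sys.dm_os_performance_counters
    SELECT RTRIM(object_name)  AS object_name,
           RTRIM(counter_name) AS counter_name,
           cntr_value
    FROM sys.dm_os_performance_counters
    WHERE object_name LIKE '%Buffer Manager%'
      AND counter_name IN ('Page life expectancy',
                           'Lazy writes/sec',
                           'Checkpoint pages/sec',
                           'Buffer cache hit ratio');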

The following query shows how the pages in the buffer pool are distributed across databases.
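A sketch of such a query, based on sys.dm_os_buffer_descriptors (each buffered page is 8KB):

    -- Buffer pool usage per database, in pages and MB
    SELECT CASE WHEN database_id = 32767 THEN 'ResourceDb'
                ELSE DB_NAME(database_id) END AS database_name,
           COUNT(*)            AS cached_pages,
           COUNT(*) * 8 / 1024 AS cached_mb
    FROM sys.dm_os_buffer_descriptors
    GROUP BY database_id
    ORDER BY cached_pages DESC;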

The following query counts the cached pages for each object in the current database; with minor modifications it can also show how data pages and index pages are distributed in the data cache.
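One common form of this query joins the buffer descriptors to the allocation units and partitions of the current database:

    -- Cached pages per object/index in the current database
    SELECT OBJECT_NAME(p.object_id) AS object_name,
           p.index_id,
           COUNT(*) AS cached_pages
    FROM sys.dm_os_buffer_descriptors AS bd
    JOIN sys.allocation_units AS au
         ON bd.allocation_unit_id = au.allocation_unit_id
    JOIN sys.partitions AS p
         ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)
         OR (au.type = 2      AND au.container_id = p.partition_id)
    WHERE bd.database_id = DB_ID()
    GROUP BY p.object_id, p.index_id
    ORDER BY cached_pages DESC;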

The following query lists the ten memory components that consume the most memory in the buffer pool.
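One way to approximate this with sys.dm_os_memory_clerks (SQL Server 2008 column names; single_pages_kb is memory taken from inside the buffer pool, multi_pages_kb from outside it):

    -- Top ten memory clerk types by total pages allocated
    SELECT TOP (10)
           type,
           SUM(single_pages_kb) AS single_pages_kb,
           SUM(multi_pages_kb)  AS multi_pages_kb,
           SUM(single_pages_kb + multi_pages_kb) AS total_kb
    FROM sys.dm_os_memory_clerks
    GROUP BY type
    ORDER BY total_kb DESC;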

We need to focus on the following memory components:

sys.dm_exec_cached_plans returns one row for each query plan cached by SQL Server to speed up query execution. You can use this dynamic management view to find cached query plans and their query text, the amount of memory each plan consumes, and how many times each plan has been reused. Combined with sys.dm_exec_sql_text, it can also return the largest entries in the plan cache, as shown below.
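For example, the ten largest cached plans and their query text:

    -- Ten largest cached plans with their SQL text
    SELECT TOP (10)
           cp.objtype,
           cp.cacheobjtype,
           cp.usecounts,
           cp.size_in_bytes / 1024 AS size_kb,
           st.text
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    ORDER BY cp.size_in_bytes DESC;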

CACHESTORE_SQLCP - SQL plans (ad hoc cached plans, auto-parameterized plans, and prepared plans)

CACHESTORE_OBJCP - object plans (execution plans for stored procedures, functions, triggers, and so on)

CACHESTORE_PHDR - bound trees, the structures produced when SQL Server algebrizes views, constraints, and defaults

CACHESTORE_XPROC - predefined system extended stored procedures, each entry holding only the function name and the name of the DLL that implements it

The following SQL checks for memory pressure outside the buffer pool by identifying components that allocate memory through the multi-page allocator.
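A sketch of such a check; multi_pages_kb is the SQL Server 2008 column that tracks memory taken from outside the buffer pool:

    -- Components allocating memory outside the buffer pool (multi-page allocator)
    SELECT type,
           name,
           SUM(multi_pages_kb) AS multi_pages_kb
    FROM sys.dm_os_memory_clerks
    WHERE multi_pages_kb > 0
    GROUP BY type, name
    ORDER BY SUM(multi_pages_kb) DESC;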


  I/O performance diagnostics

SQL Server performance depends heavily on the I/O subsystem. Unless the database fits entirely in physical memory, SQL Server is constantly moving database pages into and out of the buffer pool, which generates real I/O traffic. Likewise, log records must be written to disk before a transaction is reported as committed. SQL Server also uses tempdb for various purposes, such as storing intermediate results, sorting, and keeping row versions. A good I/O subsystem is therefore critical to SQL Server performance.

I/O performance depends mainly on the following factors:

Disk type: IDE, SATA, SCSI, SAS, Fibre Channel drives, and so on, with IDE and SATA commonly used in personal computers.

At the same time, to balance data safety, performance, and capacity, RAID was developed. RAID combines a number of independent disks into a disk group in various ways, providing higher storage performance than a single disk together with data redundancy. RAID levels range from RAID 0 to RAID 7; the commonly used levels are RAID 0, RAID 1, RAID 5, and RAID 10.

In addition, storage can be classified by how it is attached: Direct Attached Storage (DAS) and Storage Area Networks (SAN), the latter including Fibre Channel SANs and iSCSI SANs.

  Throughput and IOPS Metrics

Throughput depends primarily on the array architecture, the Fibre Channel bandwidth, and the number of hard disks. Array architectures differ, but each has an internal bandwidth; in general the internal bandwidth is well designed and is not the bottleneck. Next is the effect of Fibre Channel on data traffic: to sustain 1GB/s of data traffic you need 8Gb/s of Fibre Channel bandwidth, that is, eight 1Gb HBAs or four 2Gb HBAs. Finally, for the number of hard disks, the calculation below estimates how many disks are required to meet the 1GB/s throughput requirement.
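As a rough sketch, assuming about 50MB/s of sustained throughput per disk (an assumed figure; real numbers depend on the drive and the workload):

    disks required ≈ 1GB/s ÷ 50MB/s per disk ≈ 20 disks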

IOPS (Input/Output Operations Per Second) is the number of read/write (I/O) operations per second. It is used for databases and similar applications to measure random-access performance.

IOPS is determined mainly by the array's algorithms, the cache hit ratio, and the number of disks. The cache hit ratio in turn depends on how the data is distributed, the cache size, the data access patterns, and the caching algorithm.

Each disk can handle only a limited number of IOPS, and the maximum IOPS per disk is essentially fixed; for example, IDE and SATA drives deliver roughly 100 IOPS (you can measure this with the HD Tune tool). IOPS test results also depend heavily on how the test is run (random versus sequential access, the read/write ratio, the transfer block size, the number of disks). Even so, per-disk IOPS figures are a useful guide when assessing disk pressure and whether the storage can meet the system's performance requirements.

Suppose the business requires 10,000 IOPS and there are 120 SCSI disks. At different RAID levels, cache hit rates, and read/write ratios, how many IOPS must each disk deliver?

RAID 0 - IOPS per disk = (reads + writes) / number of disks

RAID 1 - IOPS per disk = (reads + 2 × writes) / 2

RAID 5 - IOPS per disk = (reads + 4 × writes) / number of disks

RAID 10 - IOPS per disk = (reads + 2 × writes) / number of disks

In addition, I/O bottlenecks tend to appear once disk utilization exceeds 85%, so the per-disk IOPS requirement is calculated as:

(10000 × (1 - cache hit rate) × read ratio + 10000 × write ratio × RAID write penalty) / number of disks / 0.85

In other words, each disk needs to deliver roughly 200 IOPS to meet the requirement under RAID 0, RAID 5, or RAID 10.
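As a worked example, assume a 30% cache hit rate and a 60/40 read/write split (assumed values for illustration). For RAID 5, with a write penalty of 4, the formula gives:

    (10000 × (1 - 0.30) × 0.60 + 10000 × 0.40 × 4) / 120 / 0.85
    = (4200 + 16000) / 120 / 0.85
    ≈ 198 IOPS per disk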

In addition, general planning recommendations for SQL Server storage deployments are as follows:

Install the operating system and SQL Server on a RAID 1 mirror. For both speed and safety, place log files on their own RAID 1/RAID 10 volume. tempdb files are best placed on RAID 0 (trading redundancy for speed), while data files, balancing safety, performance, capacity, and cost, are generally placed on RAID 5.

Microsoft TechNet also publishes a set of storage best practices (Storage Top 10 Best Practices):

1. Understand the I/O characteristics of SQL Server and the I/O requirements of your application.

2. Use more, and faster, disk drives for good performance.

3. Do not over-optimize storage; a simple design usually provides good performance and flexibility.

4. Validate the configuration before deployment, for example by running simulated tests with tools such as SQLIO.

5. Always place log files on RAID 10 or RAID 1.

6. Isolate log files from data files at the physical disk level.

7. Consider the configuration of tempdb carefully.

8. Align the number of data files with the number of CPUs.

9. Do not neglect SQL Server basics.

10. Do not neglect storage configuration basics.

Monitoring SQL Server's consumption of I/O resources focuses mainly on disk response time, queue length, and disk read/write and transfer rates. The relevant objects, counters, thresholds, and descriptions are covered below.

sys.dm_io_virtual_file_stats returns I/O statistics for data and log files, giving an intuitive view of the throughput and I/O wait (stall) times of each file, disk, and database.
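For example, per-file I/O volumes and stall times, joined to sys.master_files for the physical file names:

    -- Per-file I/O statistics and stall (wait) times
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.physical_name,
           vfs.num_of_reads,
           vfs.num_of_writes,
           vfs.num_of_bytes_read,
           vfs.num_of_bytes_written,
           vfs.io_stall_read_ms,
           vfs.io_stall_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
         ON vfs.database_id = mf.database_id
        AND vfs.file_id = mf.file_id
    ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;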

sys.dm_io_pending_io_requests returns a row for each pending I/O request in SQL Server. Joining it with sys.dm_io_virtual_file_stats shows whether any I/O is currently waiting and lets you locate and identify the file involved.
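A sketch of that join; pending requests are matched to files through the io_handle/file_handle columns:

    -- Currently pending I/O requests and the files they are waiting on
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.physical_name,
           pir.io_type,
           pir.io_pending,
           pir.io_pending_ms_ticks
    FROM sys.dm_io_pending_io_requests AS pir
    JOIN sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
         ON pir.io_handle = vfs.file_handle
    JOIN sys.master_files AS mf
         ON vfs.database_id = mf.database_id
        AND vfs.file_id = mf.file_id
    ORDER BY pir.io_pending_ms_ticks DESC;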
