SQL Server memory


Background

Recently a client came to me reporting that all the memory on a SQL Server machine was exhausted, and showed me the server's Task Manager.

It is worth stating up front that Task Manager does not tell the whole story about real memory or CPU usage; it gives only approximate information, so it can raise false alarms.

To reassure the customer, I checked the server and reviewed a number of performance metrics. CPU and disk usage were very low and only memory usage was high, which is exactly the state we expect from a SQL Server machine: SQL Server uses as much memory as possible to improve performance by caching as much of the data on disk as it can. And if the operating system needs memory, SQL Server releases it back immediately.

SQL Server is "greedy" for memory, and it holds all the memory allocated to it, whether or not it is used. And that's what we want it to do. Because it stores the data and executes the plan in the cache, then when the memory is exhausted, it does not release the memory and caches it in memory unless the cached data memory is released in two cases: 1) SQL Server restarts or less than 2) the operating system requires memory
The default memory setting is to use all memory (set at installation), which also frees up memory when the operating system requires memory. Then wait until there is memory in the re-large hold. But this is not a best practice, it is best to set a maximum memory limit, so that the operating system will ensure that a certain amount of memory is always used for SQL Server.

In Resource Monitor, available memory consists of two parts: Standby, which the system has already filled with cached data, and Free, which has not been used at all; together they are called available memory. So there was no need to worry too much about this customer at first. Of course, we still need to check other performance counters to see whether memory could be hurting performance. The following indicators are worth watching (a combined query sketch follows the list):

    • Page Life Expectancy
    • Available MBytes
    • Buffer Cache Hit Ratio
    • Target & Total Server Memory
    • Memory Grants Pending
    • Pages/sec (hard page faults)
    • Batch Requests/sec & Compilations/sec
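
Most of the SQL Server-side counters above can be read in one pass from the sys.dm_os_performance_counters DMV. As a sketch (Available MBytes and Pages/sec are operating-system counters and do not appear in this view):

-- one row per counter; note the /sec counters are cumulative since
-- server startup, so sample twice and take the difference for a rate
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Page life expectancy',
                       'Buffer cache hit ratio',
                       'Total Server Memory (KB)',
                       'Target Server Memory (KB)',
                       'Memory Grants Pending',
                       'Batch Requests/sec',
                       'SQL Compilations/sec');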

These performance parameters are described below:

Page Life Expectancy (PLE)

This performance counter records the average time, in seconds, that a data page stays in the buffer pool without being referenced. The value may dip during production peaks, but it should generally stay above 300 seconds; the longer data stays in the buffer, the fewer I/O operations SQL Server has to perform.

If the value stays below 300 seconds for long periods, consider adding memory. As servers carry ever more memory this fixed number becomes less meaningful, but for small and medium-sized systems it still serves as a useful threshold.

You can also use the following statement to query the counter:

SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name = 'Page life expectancy';

Available MBytes

This counter monitors how much memory is available and whether the operating system is under memory pressure. We generally investigate when this counter stays below 500 MB, which indicates available memory is too low; if it persists below 500 MB, add more memory.

This exact counter cannot be queried through sys.dm_os_performance_counters; it is normally observed in Performance Monitor.
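
That said, a rough equivalent can be seen from inside SQL Server through the sys.dm_os_sys_memory DMV, which reports the operating system's view of memory. A minimal sketch:

-- OS-level memory as SQL Server sees it
SELECT total_physical_memory_kb / 1024 AS total_physical_mb,
       available_physical_memory_kb / 1024 AS available_mb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;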

Buffer Cache Hit Ratio

Buffer cache hit ratio: this counter records how often requested data is served from the buffer pool rather than read from disk. For an OLTP database we typically expect 90% to 95% or higher; the higher the ratio, the fewer the I/O operations and the better the performance. If the counter stays below 90%, consider adding memory.

You can query by using the following T-SQL statement:

SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name = 'Buffer cache hit ratio';
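
This counter is stored as a raw fraction with a companion "Buffer cache hit ratio base" counter; to get the actual percentage, divide one by the other. A minimal sketch of the calculation:

-- divide the ratio counter by its base to get a percentage
SELECT ROUND(100.0 * r.cntr_value / b.cntr_value, 2) AS [BufferCacheHitRatioPct]
FROM sys.dm_os_performance_counters AS r
JOIN sys.dm_os_performance_counters AS b
  ON r.object_name = b.object_name
WHERE r.object_name LIKE '%Buffer Manager%'
  AND r.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base';
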
Target & Total Server Memory

These counters report the server's current total memory (the buffer pool) and its target memory. Total memory sits slightly below target memory while the buffer pool is still warming up and acquiring memory, and the ratio gradually approaches 1. If total memory stops growing well below target memory, it points to one of two situations:

1) More memory has been allocated than SQL Server needs. If the database is smaller than the machine's memory, SQL Server can cache the entire database without exhausting memory, and total memory will simply stay below target memory.

2) SQL Server cannot grow the buffer pool, for example because of memory pressure in the system. In that case, raise the max server memory setting or add physical memory to improve performance.

SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)');

Memory Grants Pending

This counter measures the number of processes waiting for a memory grant from SQL Server. The recommended threshold is 1 or less; anything higher indicates that memory is short and queries must wait for memory to be released before they can proceed.

In day-to-day work these waits are usually caused by bad queries, missing indexes, sorts, or hashes. To find the cause, query the DMV sys.dm_exec_query_memory_grants, which shows which queries need a memory grant to execute.

If the memory waits are not caused by the reasons above, you need more memory to resolve the problem, and this is a legitimate case for a hardware upgrade. The T-SQL for the counter is as follows:

SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name = 'Memory Grants Pending';
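
To see which queries are involved, sys.dm_exec_query_memory_grants can be joined to the query text. As a sketch, sessions still waiting for their grant have a NULL grant_time:

-- queries currently waiting for a memory grant, with their SQL text
SELECT mg.session_id,
       mg.requested_memory_kb,
       mg.granted_memory_kb,
       mg.wait_time_ms,
       t.text AS query_text
FROM sys.dm_exec_query_memory_grants AS mg
CROSS APPLY sys.dm_exec_sql_text(mg.sql_handle) AS t
WHERE mg.grant_time IS NULL;
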
Pages/sec (hard page faults)

This is an operating-system-level counter: when a page that needs to be read or written is not in memory, it must be fetched from disk, producing a hard page fault. The counter records the sum of pages read from and written to disk per second. When physical memory is insufficient, the operating system must swap to the page file on disk, and performance suffers severely because disk reads and writes are far slower than memory.

For this counter the recommended threshold is below 50; sustained values above that may signal a performance problem. Note that backups, restores, data imports and exports, and memory-mapped files can also push the counter over the threshold without indicating a real issue.

Batch Requests & Compilations

This check involves two counters:

    • SQL Server: SQL Statistics - Batch Requests/sec. The number of incoming queries (batches).
    • SQL Server: SQL Statistics - SQL Compilations/sec. The number of newly created execution plans.

If Compilations/sec runs at 25% or more of Batch Requests/sec, execution plans are being placed in the cache but never reused: precious memory is wasted on plans instead of caching data. This is a bad pattern, and we should put a stop to it.

If Compilations/sec is high, say 100, a large number of ad hoc queries are running. You can enable the "optimize for ad hoc workloads" option so that only a small plan stub is cached on a query's first execution and the full plan is cached only when the query runs a second time.
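
Enabling it is a one-time, server-level change; it is an advanced option, so "show advanced options" must be on first:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- cache only a small plan stub on first execution; the full plan is cached on reuse
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;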

The corresponding indicators can be obtained using T-SQL as follows:

SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%SQL Statistics%'
  AND counter_name IN ('Batch Requests/sec', 'SQL Compilations/sec');

The ratio between the two can be obtained the same way:

SELECT ROUND(100.0 *
       (SELECT cntr_value FROM sys.dm_os_performance_counters
        WHERE object_name LIKE '%SQL Statistics%' AND counter_name = 'SQL Compilations/sec')
       /
       (SELECT cntr_value FROM sys.dm_os_performance_counters
        WHERE object_name LIKE '%SQL Statistics%' AND counter_name = 'Batch Requests/sec'),
       2) AS [Ratio];

About how to set the maximum amount of memory available to SQL Server:

Recommended thresholds: generally speaking, I leave 10% of memory to the operating system and give the other 90% to SQL Server. If the machine has a great deal of memory, the operating system's share can be smaller than that 1/9; on machines with little memory, I usually reserve around 4-6 GB for the operating system.
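
As a sketch of that 90% rule on a hypothetical machine with 64 GB of RAM (the setting is specified in MB):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- hypothetical example: about 90% of 64 GB, leaving roughly 6 GB for the OS
EXEC sp_configure 'max server memory (MB)', 58982;
RECONFIGURE;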

Let's look at an actual example:

Looking at these counters in Performance Monitor, we can see that this server is in a healthy state: 11 GB of free memory, no hard page faults (I/O is served from the cache, nothing is swapped to disk), a buffer cache hit ratio of 100%, a PLE above 20,000 seconds, no memory grants pending, sufficient total memory, and a low compilation ratio (compilations per query).

These measurements are easy to interpret, far more useful than Task Manager, and can be relied on to determine whether a SQL Server machine has enough memory.

Summary

If we judge only by Task Manager, we are prone to wrong decisions: no matter how much memory the system has, SQL Server will use as much of it as possible, and that is not a bug. Data cached in memory means the server is healthy and is delivering better performance to users. In real database environments, sudden performance problems are mostly caused by T-SQL statements, the bad queries mentioned earlier (missing indexes, sorts, hashes, and so on), and tuning those statements usually resolves the unexpected problem. If instead the server consistently shows the memory-related counter problems described in this article, write up a report requesting a memory increase.
