A question that comes up often: why does MSSQL use so much memory, and why does it keep growing? Or the opposite: I have configured a memory limit, yet SQL Server never uses that much. What is going on?
First, let's look at how MSSQL uses memory.
Most of the memory goes to the data cache. If there is enough memory, SQL Server keeps both the data you are currently using and the data you have recently used in memory, and only starts evicting low-hit pages once memory runs out. That is why, under normal conditions, turning on STATISTICS IO shows physical reads of 0.
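You can observe this caching effect yourself. A minimal sketch, assuming a hypothetical table `dbo.Orders` (any table in your database will do):

```sql
SET STATISTICS IO ON;

-- First run after a restart: "physical reads" > 0, because pages
-- must be fetched from disk into the buffer cache.
SELECT COUNT(*) FROM dbo.Orders;

-- Run it again: "physical reads 0" -- the pages are now cached,
-- and only "logical reads" (served from memory) remain.
SELECT COUNT(*) FROM dbo.Orders;

SET STATISTICS IO OFF;
```

The read counts appear in the Messages tab of the query window, not in the result set.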
The second consumer is query execution. Generally speaking, a hash join carries a relatively large memory overhead, while merge joins and nested loop joins cost comparatively little; sorts, intermediate (work) tables, and cursors also consume a relatively large amount of memory.
Therefore, the columns used for joins and sorting should generally be indexed.
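As a sketch of that advice, assuming a hypothetical schema where `dbo.Orders` joins to `dbo.Customers` on `CustomerID` and results are ordered by `OrderDate`:

```sql
-- Index the join column and the sort column.
CREATE INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID);
CREATE INDEX IX_Orders_OrderDate  ON dbo.Orders (OrderDate);

-- With the join column indexed, the optimizer can choose a nested loop
-- or merge join instead of a memory-hungry hash join, and an index on
-- OrderDate lets it satisfy ORDER BY without a separate sort step.
SELECT c.Name, o.OrderDate
FROM dbo.Customers AS c
JOIN dbo.Orders AS o ON o.CustomerID = c.CustomerID
ORDER BY o.OrderDate;
```

Whether the optimizer actually picks those operators depends on row counts and statistics; check the execution plan to confirm.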
Next come execution plans and system data, both of which are comparatively small.
Now let's look at how the data cache affects performance. If no other application on the server is competing for memory, then the bigger the data cache, the better; sometimes we even force certain data to be pinned in the cache. But if other applications are present, then even though MSSQL releases memory when they need it, the resulting thread switching and I/O waits take time, so performance drops. In that case we should cap MSSQL's memory usage. You can set the maximum in the SQL Server properties dialog (Memory tab), or with sp_configure. If there are no other applications, do not limit MSSQL's memory at all.
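A minimal sketch of setting the cap with sp_configure (the 2048 MB value here is just an illustration; pick a limit that leaves enough for your other applications):

```sql
-- 'max server memory' is an advanced option, so advanced options
-- must be visible first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap SQL Server's memory at 2 GB; the value is in megabytes.
EXEC sp_configure 'max server memory', 2048;
RECONFIGURE;
```

The change takes effect without a restart, though already-cached pages are released gradually rather than all at once.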
As for query memory overhead, lower is obviously better: we get no benefit from it, and the more memory a query uses, the slower it runs. So we generally avoid intermediate tables and cursors, and index the columns that are frequently joined and sorted on.
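To illustrate the cursor-avoidance point, here is a sketch with a hypothetical `dbo.Orders` table and `Status` column. Both versions do the same work, but the first pays cursor overhead row by row:

```sql
-- Cursor version: processes one row at a time and keeps cursor
-- work structures in memory for the whole loop.
DECLARE @id int;
DECLARE c CURSOR FOR
    SELECT OrderID FROM dbo.Orders WHERE Status = 'new';
OPEN c;
FETCH NEXT FROM c INTO @id;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Orders SET Status = 'queued' WHERE OrderID = @id;
    FETCH NEXT FROM c INTO @id;
END;
CLOSE c;
DEALLOCATE c;

-- Set-based equivalent: a single statement, no cursor overhead.
UPDATE dbo.Orders SET Status = 'queued' WHERE Status = 'new';
```

Whenever a cursor loop only reads and writes rows one at a time like this, a single set-based statement is usually both cheaper in memory and faster.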