Dynamic vs. static SQL Server memory
There are two basic methods of managing SQL Server memory: dynamic allocation and static allocation.
Both control the amount of memory SQL Server is allowed to use. Dynamic allocation lets an administrator declare a range: SQL Server claims as much memory as it needs, up to the configured maximum, and (in theory) releases memory it is no longer using. Static allocation gives SQL Server a fixed amount of memory to work with, no more and no less.
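Both modes are set through the same pair of server options, `min server memory` and `max server memory`. As a sketch (the memory sizes here are illustrative, not recommendations), dynamic allocation declares a range, while static allocation pins the minimum and maximum to the same value:

```sql
-- Both options are advanced options, so enable those first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Dynamic allocation: let SQL Server float between 0 MB and 2048 MB.
EXEC sp_configure 'min server memory (MB)', 0;
EXEC sp_configure 'max server memory (MB)', 2048;
RECONFIGURE;

-- Static allocation: fix min and max to the same value.
-- EXEC sp_configure 'min server memory (MB)', 1024;
-- EXEC sp_configure 'max server memory (MB)', 1024;
-- RECONFIGURE;
```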
By default, SQL Server is set to dynamically allocate up to all of the available physical memory on the computer it runs on. Many administrators notice SQL Server's memory use climbing steadily over time and assume a bug or a memory leak, but the program is designed to behave this way. SQL Server is meant to run as fast as possible, and it uses all the memory it can get to achieve its best performance. If SQL Server is running on a dedicated machine, let it allocate and release memory as it sees fit.
On a Small Business Server machine, SQL Server may run alongside other applications, such as IIS, and an administrator may be tempted to assign SQL Server a fixed amount of memory to ensure it does not encroach on memory the other applications need. That is not necessarily wise. Set the maximum too low and SQL Server will not have enough memory to cache things such as transaction logs or compiled query plans; the only way it can then obtain memory for an operation is to swap out other pages, which is a slow process.
There are several ways to work out the best memory allocation. If your user load is predictable, provision for the maximum number of users you expect; Microsoft recommends allowing at least 4 MB per user, which has become a workable rule of thumb. If your user load varies a great deal--for instance, when your database backs a public Internet site fronted by IIS--real statistics will help more than guesswork. At peak times, collect performance counters such as SQL Server's buffer cache hit ratio and the system's page faults per second. If this data indicates that SQL Server is doing a large amount of swapping, raise the maximum memory setting until the swapping subsides. Swapping one or more times per second is bad.
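On SQL Server 2005 and later, the buffer cache hit ratio can also be read from inside the server through the `sys.dm_os_performance_counters` view (on older versions, use System Monitor; page faults per second come from the OS-level `Memory\Pages/sec` counter either way). A sketch:

```sql
-- The hit ratio is stored as a raw counter plus a matching "base"
-- counter; divide the first by the second to get the percentage.
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Buffer cache hit ratio',
                       'Buffer cache hit ratio base');
```

A ratio that stays near 100% under load suggests the cache is large enough; a falling ratio together with sustained paging is the signal to raise `max server memory`.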
An alternative is to enable the "Reserve physical memory for SQL Server" option, which prevents the operating system from swapping out memory that has been allocated to SQL Server, even when other applications could use it. This is a double-edged sword: it can improve performance considerably, but it can also do considerable damage. It is worth trying on systems with plenty of RAM to spare (1 GB or more), but it should not be used when other critical processes may suddenly require large amounts of memory (without the reservation, SQL Server can be forced to give up some of its own memory when needed). If SQL Server is running on a dedicated machine, this optimization is worth trying.
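In SQL Server 2000 and earlier, the Enterprise Manager checkbox described above corresponds to the `set working set size` server option, which only makes sense when a static amount of memory has been configured (min and max server memory set to the same value). A sketch, assuming that static configuration is already in place:

```sql
-- 'set working set size' is an advanced option.
EXEC sp_configure 'show advanced options', 1;
RECONFigure;
EXEC sp_configure 'set working set size', 1;
RECONFIGURE;
-- The change takes effect only after the SQL Server service restarts.
```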