Myth #12: tempdb should always have one data file per processor core
False.
Unfortunately this myth persists because it was once Microsoft's "official" advice, and there are still plenty of posts repeating it.
The confusing part is that the SQL CAT team's recommendation really is 1:1, but it comes from their experience with extreme, high-end workloads rather than being a general rule. Given the scale of the customer servers and I/O subsystems they work with, most people will never encounter workloads like that.
Each instance has only one tempdb, and tempdb is used by many features, so it can easily become a performance bottleneck. I think everyone knows that; what most people don't know is when extra tempdb data files are actually needed.
When you see PAGELATCH waits, you are hitting contention on allocation bitmap pages in memory. When you see PAGEIOLATCH waits, the contention is at the I/O subsystem level. You can think of a latch as a lightweight, short-lived lock used only internally by the storage engine.
MVP Glenn Berry has a blog post with queries against sys.dm_os_wait_stats that show what your server is waiting on most. If you find PAGELATCH waits, you can use his script to see whether the contention is on PFS, GAM, or SGAM pages.
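As a minimal sketch of the kind of check such a script performs (the filter values are standard: tempdb is always database_id 2, and in every data file page 1 and every 8088th page thereafter is a PFS page, pages 2 and 3 are the first GAM and SGAM pages):

```sql
-- Sketch: find sessions currently latch-waiting on tempdb pages.
-- resource_description is formatted as dbid:fileid:pageid; decoding the
-- page number tells you whether it is a PFS, GAM, or SGAM page.
SELECT session_id,
       wait_type,
       wait_duration_ms,
       resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH_%'
  AND resource_description LIKE '2:%';   -- database_id 2 = tempdb
```

If the page numbers you decode keep landing on allocation bitmap pages, you are looking at the contention this article describes.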
If you are hitting that latch contention, you can alleviate it with trace flag 1118 or by adding more tempdb data files (the background is in KB 328551). I've also written a long blog post about why trace flag 1118 is still needed: Misconceptions around TF 1118.
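Both mitigations are one-liners; a sketch, with the logical name, path, and sizes as placeholders you must adapt to your environment:

```sql
-- Enable trace flag 1118 instance-wide: allocate full extents instead of
-- mixed extents, which reduces SGAM contention.
DBCC TRACEON (1118, -1);

-- Add another tempdb data file. Keep SIZE and FILEGROWTH identical to the
-- existing files so proportional fill stays even.
ALTER DATABASE tempdb
ADD FILE (
    NAME = tempdev2,              -- hypothetical logical name
    FILENAME = 'T:\tempdb2.ndf',  -- hypothetical path
    SIZE = 4GB,
    FILEGROWTH = 512MB
);
```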
In the SQL Server 2000 era the guidance was one tempdb data file per CPU core, and that advice carried over to SQL Server 2005 and 2008. But thanks to optimizations in SQL Server 2005+ (see my blog for details), you no longer need a strict 1:1 ratio of files to cores; a ratio of 1:2 or 1:4 is usually enough.
[Digression: at SQL PASS 2011 my good friend Bob Ward, one of the top people in SQL CSS, gave a new formula: with 8 or fewer CPU cores, keep the ratio at 1:1; with more than 8 cores, start with 8 files, and if you still see latch contention, add 4 files at a time.]
But even that is not a one-size-fits-all rule. Just last week I ran into a case where a client's tempdb workload was heavy enough that a 32-core server needed 64 tempdb data files to relieve the latch contention. Does that make 64 files a best practice? Of course not.
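The starting point of that formula can be expressed directly against the server's own core count; a sketch only, since the "add 4 more" step depends on observing contention, which no query can decide for you:

```sql
-- Sketch of Bob Ward's starting-point formula (a baseline, not a hard rule):
-- 8 or fewer cores -> one file per core; more than 8 -> start with 8 files,
-- then add 4 at a time only if latch contention is still observed.
DECLARE @cores int = (SELECT cpu_count FROM sys.dm_os_sys_info);

SELECT @cores AS cpu_cores,
       CASE WHEN @cores <= 8 THEN @cores ELSE 8 END AS suggested_tempdb_files;
```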
You may be wondering why a blanket 1:1 ratio is bad: too many tempdb files can cause a different performance problem. If a query operation (such as a sort) needs more memory than is available, it has to spill to tempdb. With multiple tempdb files, the round-robin allocation across them can drag performance down for these spills, and the same applies to large temporary tables.
So why does round-robin allocation cause performance problems when tempdb has a large number of files? There are several possibilities:
The round-robin allocation algorithm works per filegroup, and tempdb has only one filegroup. When that filegroup contains 16 or 32 files, the allocation algorithm needs additional synchronization across all of those files, and that extra work costs performance.
If the tempdb files are sized inconsistently, a single file can grow automatically and become an I/O hotspot.
When the buffer pool needs to free space via the lazywriter (tempdb checkpoints do not write data pages back), having many tempdb files can turn those writes into random I/O at the storage level, causing I/O performance problems.
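The second point above is easy to check. A sketch that lists the tempdb data files so you can confirm they are equally sized with identical growth settings (so proportional fill doesn't favor one file):

```sql
-- Sketch: verify tempdb data files have matching sizes and growth settings.
SELECT name,
       size * 8 / 1024 AS size_mb,   -- size is stored in 8KB pages
       growth,                        -- in 8KB pages, or percent if flagged
       is_percent_growth
FROM tempdb.sys.database_files
WHERE type_desc = 'ROWS';             -- data files only, not the log
```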
So it's a dilemma either way. How many tempdb files are appropriate? I can't give you a specific number, but I can give you a guideline based on years of consulting and conference questions: when creating multiple tempdb files to relieve latch contention, add files only when you have to. In other words, strike a balance between scalability and performance.
I hope the above guidelines will help you.
PS: To respond to some comments: tempdb's files do not necessarily need to be spread across multiple storage volumes. If you are seeing PAGELATCH waits, spreading the files out will not improve performance; if you are seeing PAGEIOLATCH waits, you may need more I/O capacity, but not necessarily more files. It may be that you need to move the entire tempdb to a different I/O subsystem rather than just adding a file. Careful analysis is needed before you decide.