In our OLAP implementation, the SQL is extremely complex and makes heavy use of temporary tables. When SQL Server was installed, tempdb was left on the default local disk path of the installation, with no disk array behind it.
While learning PostgreSQL, we found that many experts suggest placing the temporary tablespace on SSD or on RAID 0+1 to improve write speed and overall performance.
Therefore, a complicated SQL statement was selected for testing, and it turned out that the storage location of tempdb has a great impact on performance.
Test description: each temporary table contains 55 million rows, and eight such temporary tables are generated. The eight temporary tables are then joined and aggregated with a SELECT ... GROUP BY query.
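The exact statement is not reproduced here; a minimal sketch of the pattern, assuming hypothetical temporary tables #t1 through #t8 that share a join key (all names and columns below are illustrative, not the real schema), might look like:

-- Sketch only: eight large temp tables joined and aggregated
SELECT t1.dim_key, SUM(t1.amount) AS total_amount
FROM #t1 AS t1
JOIN #t2 AS t2 ON t2.dim_key = t1.dim_key
JOIN #t3 AS t3 ON t3.dim_key = t1.dim_key
JOIN #t4 AS t4 ON t4.dim_key = t1.dim_key
JOIN #t5 AS t5 ON t5.dim_key = t1.dim_key
JOIN #t6 AS t6 ON t6.dim_key = t1.dim_key
JOIN #t7 AS t7 ON t7.dim_key = t1.dim_key
JOIN #t8 AS t8 ON t8.dim_key = t1.dim_key
GROUP BY t1.dim_key;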
The test results are as follows:
1. Local disk: 2 x 136 GB 10K RPM SAS disks, RAID 1
2. EVA4400 array: 36 x 300 GB 15K RPM SAS disks, RAID 0+1
3. EVA4400 array: 36 x 300 GB 15K RPM SAS disks, RAID 5
We can see that putting tempdb on the disk array more than doubles performance. The strange thing is that RAID 0+1 performs no better than RAID 5. Is the EVA optimized for writes?
After the change was put into the production environment, the average tempdb blocking time dropped from more than 300 milliseconds to 9 milliseconds, and report performance improved greatly; initial measurements show response times cut to about 50% of what they were.
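Relocating tempdb onto the array is done with ALTER DATABASE followed by a service restart. A minimal sketch, assuming the array is mounted as drive E: and tempdb still uses the default logical file names tempdev and templog (paths are illustrative):

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'E:\tempdb\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'E:\tempdb\templog.ldf');
-- The new file locations take effect only after the SQL Server service is restarted.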
Another important optimization is to split tempdb into multiple data files. The number of data files should match the number of CPUs on the database server (note: CPUs, not cores).
In addition, set an appropriate initial file size and growth increment based on how large tempdb actually gets.
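As a sketch, assuming a server with four CPUs and the E: drive paths used above (the sizes and growth increments below are illustrative assumptions, not recommendations):

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev,  SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'E:\tempdb\tempdb2.ndf', SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'E:\tempdb\tempdb3.ndf', SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = 'E:\tempdb\tempdb4.ndf', SIZE = 4096MB, FILEGROWTH = 512MB);

Keeping all data files at the same size and growth increment lets SQL Server's proportional-fill algorithm spread allocations evenly across them.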
The SQL used to check for disk bottlenecks is as follows:
SELECT
    DB_NAME(fs.database_id) AS [Database Name]
  , mf.physical_name
  , io_stall_read_ms
  , num_of_reads
  , CAST(io_stall_read_ms / (1.0 + num_of_reads) AS NUMERIC(10, 1)) AS [avg_read_stall_ms]
  , io_stall_write_ms
  , num_of_writes
  , CAST(io_stall_write_ms / (1.0 + num_of_writes) AS NUMERIC(10, 1)) AS [avg_write_stall_ms]
  , io_stall_read_ms + io_stall_write_ms AS [io_stalls]
  , num_of_reads + num_of_writes AS [total_io]
  , CAST((io_stall_read_ms + io_stall_write_ms) / (1.0 + num_of_reads + num_of_writes) AS NUMERIC(10, 1)) AS [avg_io_stall_ms]
FROM
    sys.dm_io_virtual_file_stats(NULL, NULL) AS fs
INNER JOIN
    sys.master_files AS mf
    ON fs.database_id = mf.database_id
    AND fs.[file_id] = mf.[file_id]
ORDER BY
    avg_io_stall_ms DESC
OPTION (RECOMPILE);
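If only the tempdb files are of interest, the same DMV output can be narrowed by adding WHERE DB_NAME(fs.database_id) = N'tempdb' before the ORDER BY clause (this filter is our addition, not part of the original script).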