1. Data and log files are stored separately on different disks
Reading and writing data files and log files both generate a large amount of I/O. Where possible, the log file should be stored on a hard disk separate from the one holding the data file (where the data and indexes live) in order to distribute the I/O; this also makes disaster recovery for the database easier.
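As an illustration, the file placement can be specified when the database is created. The database name, drive letters, and paths below are hypothetical and only show the data file and log file landing on different disks:

    -- Hypothetical layout: data file on drive D:, log file on drive E:
    CREATE DATABASE Sales
    ON PRIMARY
        (NAME = Sales_data, FILENAME = 'D:\SQLData\Sales_data.mdf', SIZE = 500MB, FILEGROWTH = 100MB)
    LOG ON
        (NAME = Sales_log,  FILENAME = 'E:\SQLLogs\Sales_log.ldf',  SIZE = 200MB, FILEGROWTH = 50MB);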
2. The tempdb database is stored separately on a different disk
The tempdb database is a temporary database that is likely to be used by all other databases. When you use SELECT INTO, or when ORDER BY is executed on columns that are not indexed, temporary tables are created in tempdb to hold intermediate results. Because creating and populating these temporary tables can severely degrade system performance, you should index the columns you sort on whenever possible.
The tempdb database is also shared by all users and applications, so if one user consumes all of the space in tempdb, the other databases become unusable. Where possible, place the tempdb database on a separate, faster hard drive or RAID array so that its I/O is separated from that of the other databases and performance improves. The tempdb database should have enough capacity to meet the needs of its users, and its space should be allowed to grow automatically; if automatic growth is not allowed, a query that builds a temporary table exceeding the remaining tempdb capacity cannot complete.
Set the growth increment of the tempdb database appropriately: an increment that is too small causes more frequent growth, more external fragmentation, and more resource consumption.
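A minimal sketch of both recommendations, moving the tempdb files to a faster disk and giving them a larger initial size and growth increment. The logical file names tempdev and templog are the SQL Server defaults, while the path and sizes are assumptions; the new file location takes effect after the instance is restarted:

    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = 'F:\SQLTempDB\tempdb.mdf', SIZE = 1024MB, FILEGROWTH = 256MB);

    ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = 'F:\SQLTempDB\templog.ldf', SIZE = 512MB, FILEGROWTH = 128MB);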
3. Avoid hot-spot data
Before SQL Server 7.0, for tables without a clustered index (heap tables), newly inserted rows were always placed at the physical end of the table on disk. With many concurrent users inserting or updating data in the same table, the end of a busy table can become a data hot spot: concurrent I/O is concentrated on a few pages, and database performance drops.
In SQL Server, the allocation of physical storage space for new data rows is done through PFS (Page Free Space) pages. The PFS management algorithm spreads insert operations across pages so as to avoid creating data hot spots as much as possible.
When designing application systems and databases, avoid creating the primary key on a monotonically increasing column (for example, an identity column), because all new rows are then inserted at the same end of the table or index, which can produce hot-spot data.
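As an illustration only (hypothetical table and column names): the first table below follows the pattern the text warns about, a clustered primary key on an ever-increasing identity column, while the second clusters on a non-sequential uniqueidentifier so that inserts are spread across pages, at the cost of more fragmentation:

    -- Inserts always land at the end of the clustered index: a potential hot spot
    CREATE TABLE dbo.OrdersHot (
        OrderID   int IDENTITY(1,1) PRIMARY KEY CLUSTERED,
        OrderDate datetime NOT NULL
    );

    -- A non-sequential key (NEWID) scatters inserts across pages
    CREATE TABLE dbo.OrdersSpread (
        OrderID   uniqueidentifier DEFAULT NEWID() PRIMARY KEY CLUSTERED,
        OrderDate datetime NOT NULL
    );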
4. Use smaller data types
When designing a table, use the smallest data types that can hold the data. More rows then fit on each data page, fewer data pages are needed, and less I/O is required to retrieve them, so efficiency is higher.
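For example (a hypothetical table, not from the original text), choosing compact types lets more rows fit on each 8 KB data page:

    CREATE TABLE dbo.Employee (
        EmployeeID   int          NOT NULL,   -- int instead of bigint when values fit in 4 bytes
        DepartmentID smallint     NOT NULL,   -- smallint (2 bytes) is enough for a few thousand departments
        HireDate     smalldatetime NOT NULL,  -- smalldatetime (4 bytes) instead of datetime (8 bytes)
        LastName     varchar(50)  NOT NULL    -- variable-length, sized to the data, instead of char(255)
    );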
5. Monitoring and defragmenting space
Automatic growth of file space improves manageability, but it can lead to fragmentation: the physical space on disk is no longer contiguous with the logical order of the data. Regular monitoring and defragmentation help improve I/O performance.
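A minimal sketch of such monitoring on newer versions of SQL Server, using the sys.dm_db_index_physical_stats DMV to find fragmented indexes and then rebuilding them; the 30% threshold and the table name dbo.Orders are assumptions:

    -- Report fragmentation for every index in the current database
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name                     AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 30;

    -- Rebuild the heavily fragmented indexes found above (hypothetical table)
    ALTER INDEX ALL ON dbo.Orders REBUILD;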
6. Use primary and secondary data files
Each database has one primary data file, which belongs to the primary filegroup. For a database of around 1 GB, a single data file is sufficient. When there are secondary data files, the primary data file contains the pointers used to manage them.
When multiple data files are used, it is recommended to store system objects and tables in the primary data file and to store user data and indexes in secondary data files. Where possible, placing the primary and secondary data files on separate disks disperses the I/O and helps improve I/O performance.
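A sketch of this layout, assuming hypothetical drive letters and names: the primary file (which holds the system tables) stays small on one disk, a secondary file in a separate filegroup on another disk receives the user data and indexes, and the log goes to a third disk; making the secondary filegroup the default sends new user tables there automatically:

    CREATE DATABASE AppDB
    ON PRIMARY
        (NAME = AppDB_sys,  FILENAME = 'D:\SQLData\AppDB_sys.mdf',  SIZE = 100MB,  FILEGROWTH = 50MB),
    FILEGROUP UserData
        (NAME = AppDB_data, FILENAME = 'E:\SQLData\AppDB_data.ndf', SIZE = 1000MB, FILEGROWTH = 200MB)
    LOG ON
        (NAME = AppDB_log,  FILENAME = 'F:\SQLLogs\AppDB_log.ldf',  SIZE = 300MB,  FILEGROWTH = 100MB);

    -- New user tables and indexes are created in the UserData filegroup by default
    ALTER DATABASE AppDB MODIFY FILEGROUP UserData DEFAULT;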