Continuing from the previous article, which introduced the commonly used tools for SQL operations, this article lists several points of my personal database optimization experience. These methods improve query and maintenance performance for large data volumes to a certain extent.
1. Review the indexes, stored procedures, and SQL statements of large data tables
This method is fundamental. It relies on logical analysis of the data tables, performance tools, and the execution plan to check whether indexes are missing and whether SQL statements are written in ways that waste resources. For I/O bottlenecks, you can try paging stored procedures and similar techniques to optimize data access at the underlying layer. Many articles compare the performance of common SQL statement patterns; rewrite your statements along those lines wherever possible.
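As one hedged starting point (this query is not from the original article), SQL Server's missing-index DMVs can surface indexes the optimizer wished it had during past queries; the improvement_measure formula is only a common community ranking heuristic, not an official metric:

-- Illustrative sketch: rank candidate missing indexes reported by the optimizer.
SELECT TOP 10
    d.statement AS table_name,
    d.equality_columns,
    d.inequality_columns,
    d.included_columns,
    s.user_seeks,
    s.avg_user_impact,
    s.avg_total_user_cost * s.avg_user_impact * (s.user_seeks + s.user_scans)
        AS improvement_measure   -- heuristic weighting only
FROM sys.dm_db_missing_index_details d
JOIN sys.dm_db_missing_index_groups g ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats s ON s.group_handle = g.index_group_handle
WHERE d.database_id = DB_ID()
ORDER BY improvement_measure DESC;

Treat the output as suggestions to evaluate against the execution plan, not as indexes to create blindly.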
2. Shrink the database log file
A database consists of data files and log files. In a large database, the log file can grow enormous; the one I work with reached 18 GB. Log files play a key role in analyzing database operations and recovering data after exceptions, but the best practice is to back them up regularly and then clear them. To clear the logs without affecting normal use of the database, follow these steps:
A. Run the following statement: DUMP TRANSACTION dbname WITH NO_LOG
B. Right-click the database name, choose Tasks > Shrink > Files, set the file type to Log, select the release option under Shrink action, enter 0, and click OK. The log file is now cleared.
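Note that DUMP TRANSACTION ... WITH NO_LOG only exists on older SQL Server versions and has been removed from newer releases. As a hedged sketch for those newer versions, a common equivalent is to switch to the SIMPLE recovery model and shrink the log file (MyDb and the logical log file name MyDb_log are assumed placeholders):

-- Sketch for newer SQL Server versions, where DUMP TRANSACTION ... WITH NO_LOG is gone.
-- 'MyDb' and the logical log file name 'MyDb_log' are assumed examples.
ALTER DATABASE MyDb SET RECOVERY SIMPLE;   -- allow the log to be truncated
USE MyDb;
DBCC SHRINKFILE (MyDb_log, 1);             -- shrink the log file to about 1 MB
ALTER DATABASE MyDb SET RECOVERY FULL;     -- switch back if you rely on full recovery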
If the log file grows too large, it occupies a great deal of disk space, and frequent log reads and writes also degrade retrieval speed and disk performance to a certain extent.
Note: if you need to keep the logs, back them up before clearing them.
3. View the database metadata, analyze index fragmentation, and defragment the indexes
An index is like a dictionary's table of contents: it stores fast access paths to records. Because the data changes dynamically through constant inserts, updates, and deletes, the index changes with it, and fragmentation accumulates over time. Fragmentation forces the system to perform extra operations to locate a given index entry when executing queries and retrievals, whereas ideally an entry should be located in a single pass. It is therefore critical to identify index fragmentation and reorganize the indexes regularly. With the code below, the system automatically processes indexes whose fragmentation exceeds 12% in the current database and recreates the corresponding indexes.
First, let's look at how to view index fragmentation:
DBCC SHOWCONTIG
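On SQL Server 2005 and later, a hedged alternative is the sys.dm_db_index_physical_stats DMV, which the maintenance script below also relies on. A minimal sketch to rank fragmented indexes:

-- Minimal sketch: list indexes in the current database by fragmentation percentage.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') s
JOIN sys.indexes i
    ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.avg_fragmentation_in_percent > 12   -- same 12% threshold as the script below
ORDER BY s.avg_fragmentation_in_percent DESC;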
The following script then automatically finds indexes whose fragmentation exceeds 12% and recreates them; the 12% threshold can be customized.

USE dbname
-- reorganize or rebuild the indexes of every table in the specified database,
-- using cursors to walk the tables and remove index fragmentation
SET NOCOUNT ON
-- r_t cursor: retrieve all tables in the current database
DECLARE r_t CURSOR FOR SELECT name FROM sys.tables
DECLARE @t varchar(50)
OPEN r_t
FETCH NEXT FROM r_t INTO @t
WHILE @@FETCH_STATUS = 0
BEGIN
    -- r_index cursor: judge the fragmentation of each index on the table and optimize it
    DECLARE r_index CURSOR FOR
        SELECT t.name, i.name, s.avg_fragmentation_in_percent
        FROM sys.tables t
        JOIN sys.indexes i ON i.object_id = t.object_id
        JOIN sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(@t), NULL, NULL, 'LIMITED') s
            ON s.object_id = i.object_id AND s.index_id = i.index_id
        WHERE i.name IS NOT NULL   -- skip heaps, which have no index name
    DECLARE @tname varchar(50), @iname varchar(100), @avg int, @str varchar(500)
    OPEN r_index
    FETCH NEXT FROM r_index INTO @tname, @iname, @avg
    WHILE @@FETCH_STATUS = 0
    BEGIN
        IF @avg >= 12   -- fragmentation of 12% or more: rebuild the index
            SET @str = 'ALTER INDEX ' + RTRIM(@iname) + ' ON dbo.' + RTRIM(@tname) + ' REBUILD'
        ELSE            -- fragmentation below 12%: reorganize the index
            SET @str = 'ALTER INDEX ' + RTRIM(@iname) + ' ON dbo.' + RTRIM(@tname) + ' REORGANIZE'
        PRINT @str
        EXEC (@str)     -- execute the generated statement
        FETCH NEXT FROM r_index INTO @tname, @iname, @avg
    END   -- end of the r_index cursor
    CLOSE r_index
    DEALLOCATE r_index
    FETCH NEXT FROM r_t INTO @t
END   -- end of the r_t cursor
CLOSE r_t
DEALLOCATE r_t
SET NOCOUNT OFF
The batch below also rebuilds indexes and updates statistics as appropriate; if a statistic is still valid, the update automatically skips it:
USE DBName
GO
EXEC sp_MSforeachtable @command1="print '?' DBCC DBREINDEX ('?', ' ', 90)"
GO
EXEC sp_updatestats
GO
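Note that DBCC DBREINDEX is deprecated in newer SQL Server versions; a hedged modern equivalent for a single table (dbo.YourTable is an assumed placeholder) is:

-- Modern replacement for DBCC DBREINDEX ('dbo.YourTable', ' ', 90):
-- rebuild all indexes on the table with a 90% fill factor.
ALTER INDEX ALL ON dbo.YourTable REBUILD WITH (FILLFACTOR = 90);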
Note: replace dbname/DBName in the scripts above with the name of your own database before running them.
After completing the above operations, restart the database service and check the results. These optimizations can improve performance to a certain extent.
4. Create partitioned tables and partitioned database files
Of the methods listed here, this one has the most noticeable effect. I used to think that creating a partitioned table required many complicated operations and deep familiarity with the application's services; in fact, SQL Server already provides a complete solution.
A partitioned table is a horizontal partitioning of a table: the data of one table is distributed across multiple physical data files (the .mdf and .ndf files). For performance, it is best to place each data file on a different physical disk. The procedure is as follows:
A. Create a partition function, using CREATE PARTITION FUNCTION xxx (parms).
B. Check whether the partition function is successfully created.
SELECT * FROM sys.partition_functions
C. Create a partition scheme, using CREATE PARTITION SCHEME schemaforpartition.
D. Check whether the partition scheme was created successfully:
SELECT * FROM sys.partition_schemes
E. Define the partition scheme and function on the data table to associate the table with them.
F. Check the partition information:
SELECT * FROM sys.partitions WHERE object_id = OBJECT_ID('yourname')
The statements above list the main functions and views involved in partitioned tables. Look them up in the help documentation and study this module systematically; that yields much better results than simply copying code. After creating a partitioned table, you will find that queries filtered on the partitioning condition become significantly faster. The key question is what to partition on, and that depends on the specific business: by day, by region, and so on. To make these steps concrete, a minimal end-to-end sketch follows.
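The sketch below is illustrative only: every name in it (pfOrderDate, psOrderDate, filegroups fg1 through fg3, and the table dbo.Orders) is an assumed example, and the filegroups are assumed to already exist in the database.

-- A. Partition function: RANGE RIGHT splits rows by date boundaries
CREATE PARTITION FUNCTION pfOrderDate (datetime)
AS RANGE RIGHT FOR VALUES ('2013-01-01', '2014-01-01');

-- C. Partition scheme: map each partition to a filegroup (ideally on separate disks)
CREATE PARTITION SCHEME psOrderDate
AS PARTITION pfOrderDate TO (fg1, fg2, fg3);

-- E. Associate a table with the scheme via the partitioning column
CREATE TABLE dbo.Orders
(
    OrderID   int IDENTITY(1,1) NOT NULL,
    OrderDate datetime NOT NULL
) ON psOrderDate (OrderDate);

-- F. Verify: one row per partition, with row counts
SELECT partition_number, rows
FROM sys.partitions
WHERE object_id = OBJECT_ID('dbo.Orders');

With RANGE RIGHT, each boundary value belongs to the partition on its right, so in this sketch rows from 2013 land in the second partition. Well, the above are my personal opinions on database optimization; if you have any good ideas, I welcome further discussion.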