1. View execution time and CPU usage

SET STATISTICS TIME ON
SELECT * FROM Bus_DevHistoryData
SET STATISTICS TIME OFF

The timings appear in the Messages pane after execution.

2. View the I/O activity of a query

SET STATISTICS IO ON
SELECT * FROM Bus_DevHistoryData
SET STATISTICS IO OFF

The output after execution includes:

Scan count: the number of times the index or table was accessed
Logical reads: the number of pages read from the data cache
Physical reads: the number of pages read from disk
Read-ahead reads: the number of pages placed into the cache from disk during the query
LOB logical reads: the number of image, text, ntext, or other large-object pages read from the data cache
LOB physical reads: the number of image, text, ntext, or other large-object pages read from disk
LOB read-ahead reads: the number of image, text, ntext, or other large-object pages placed into the cache from disk during the query

If physical reads and read-ahead reads are high, consider optimizing with an index. If you would rather not type these SET statements, you can turn both outputs on under Tools > Options.

3. View the execution plan

Select the query statement and click the execution-plan button on the toolbar.

I. Database design optimization

1. Do not use cursors.
Cursors not only occupy memory but also lock tables in surprising ways, to the point that every performance optimization a DBA can make is undone. Each FETCH in a cursor costs about as much as executing a SELECT.

2. Create appropriate indexes.
Whenever you add an index to a table, SELECTs get faster while INSERTs and DELETEs get noticeably slower, because maintaining the index takes extra work. Also note:
(1) a column wrapped in a function cannot take advantage of its index;
(2) a condition that combines columns from multiple tables in one expression cannot use an index.

3. Use transactions.
For some time-consuming batches of operations, wrapping the work in a single transaction can achieve a good optimization effect, because each statement otherwise commits on its own.

4. Beware of deadlocks.
Always access your tables in a consistent order.
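Point (1) under "Create appropriate indexes" can be illustrated with a short sketch; the Orders table, its OrderDate index, and the date range are hypothetical:

```sql
-- Assume an index exists on Orders(OrderDate).

-- Inefficient: wrapping the indexed column in a function hides it
-- from the index, forcing a full table scan.
SELECT OrderID FROM Orders WHERE YEAR(OrderDate) = 2023;

-- Efficient: a range on the bare column lets the optimizer seek on the index.
SELECT OrderID FROM Orders
WHERE OrderDate >= '20230101' AND OrderDate < '20240101';
```

Both queries return the same rows; only the second is written so the index can be used.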
If you lock table A and then table B, lock them in that order in every stored procedure. A stored procedure that locks table B first and then table A may cause a deadlock.

5. Do not open large datasets.

6. Do not use server-side cursors.
Compared with server-side cursors, client-side cursors reduce server and network overhead and also shorten the time locks are held.

7. Do not ignore simultaneous modification of the same record.
Sometimes two users modify the same record at the same time; the later writer overwrites the earlier writer's changes, and some updates are lost. To handle this situation, add a timestamp column and check it before writing: if the check passes, apply the modification; if there is a conflict, merge where possible and prompt the user.

8. Avoid the text data type.
Do not use text unless you really are handling very large data. It is hard to query, slow, and wastes a lot of space when used badly; varchar usually handles the data better.

9. Avoid calculations on indexed columns.
In the WHERE clause, if an indexed column is part of an expression, the optimizer performs a full table scan instead of using the index. For example:
(inefficient) SELECT ... FROM [dept] WHERE [sal]*12 > 25000;
(efficient) SELECT ... FROM [dept] WHERE [sal] > 25000/12;

10. Different index types perform differently; put the higher-performance comparison first.
Index lookups on numeric columns are more efficient than on string columns, and indexes on fixed-length strings (char, nchar) are more efficient than on variable-length strings (varchar, nvarchar).
(inefficient) SELECT ... FROM TableName WHERE UserName = 'Zhang San' AND Age >= 21
(efficient) SELECT ... FROM TableName WHERE Age >= 21 AND UserName = 'Zhang San'

II. SQL statement optimization

1.
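The timestamp check in point 7 can be sketched in T-SQL with a rowversion column; the Customers table and the parameters are hypothetical:

```sql
-- rowversion (the successor of the old timestamp type) changes
-- automatically on every write to the row.
ALTER TABLE Customers ADD RowVer rowversion;

-- Read the row and remember its version on the client side.
SELECT Name, Phone, RowVer FROM Customers WHERE CustomerID = @ID;

-- Later, write back only if nobody changed the row in the meantime
-- (@OldVer is the version remembered from the read).
UPDATE Customers
SET Phone = @NewPhone
WHERE CustomerID = @ID AND RowVer = @OldVer;

IF @@ROWCOUNT = 0
    PRINT 'Conflict: the record was modified by another user.';
```

If the UPDATE touches zero rows, the application knows a concurrent modification happened and can merge or re-prompt instead of silently losing the earlier change.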
Do not use SELECT *.
The benefits of naming the required columns in a SELECT are:
(1) reduced memory cost and network bandwidth;
(2) better security;
(3) the query optimizer gets a chance to read all the required columns from an index.

2. Use parameterized queries.
The main point is to prevent SQL injection and improve security.

3. Replace IN and NOT IN with EXISTS and NOT EXISTS.
(efficient) SELECT * FROM [emp] WHERE [empno] > 0 AND EXISTS (SELECT 'X' FROM [dept] WHERE [dept].[deptno] = [emp].[deptno] AND [loc] = 'Melb');
(inefficient) SELECT * FROM [emp] WHERE [empno] > 0 AND [deptno] IN (SELECT [deptno] FROM [dept] WHERE [loc] = 'Melb');

4. IS NULL and IS NOT NULL.
Testing whether a column is empty generally cannot use an index, because indexes do not index NULL values. A NULL cannot serve as an index key, and any row with NULL in the column is excluded from the index, so even indexing the column does not improve such a query: the optimizer will not use an index for any statement whose WHERE clause contains IS NULL or IS NOT NULL.
Recommendation: replace the test with an equivalent operation, for example rewrite "a IS NOT NULL" as "a > 0" or "a > ''".

5. The < and > operators.
Greater-than and less-than comparisons generally need no adjustment, since the index will be used for the lookup, but some cases can still be optimized. If a table has 1,000,000 records, "> 2" and ">= 3" perform quite differently: with > 2 the index first locates the entries equal to 2 and steps past them, while with >= 3 it jumps straight to the first qualifying entry.
(inefficient) SELECT * FROM [emp] WHERE [deptno] > 2;
(efficient) SELECT * FROM [emp] WHERE [deptno] >= 3;

6. LIKE.
LIKE supports wildcard queries, and wildcard combinations can express almost any search, but used badly it creates performance problems: LIKE '%5400%' cannot use an index, while LIKE 'X5400%' can use a range seek on the index.
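Point 2 above (parameterized queries) can be sketched with sp_executesql; the Users table and the @name parameter are hypothetical:

```sql
-- The user-supplied value travels as a typed parameter, never as
-- concatenated SQL text, so it cannot inject additional statements.
DECLARE @sql nvarchar(200) = N'SELECT * FROM Users WHERE UserName = @name';
EXEC sp_executesql @sql, N'@name nvarchar(50)', @name = N'Zhang San';
```

Besides blocking injection, a parameterized statement lets the server reuse one cached plan for every value of @name.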
7. The order of conditions after WHERE.
The order of the conditions after the WHERE clause has a direct effect on queries over large tables. For example:
SELECT * FROM zl_yhjbqk WHERE dy_dj = '1KV below' AND xh_bz = 1;
SELECT * FROM zl_yhjbqk WHERE xh_bz = 1 AND dy_dj = '1KV below';
Neither column is indexed, so both queries execute as full table scans. The condition dy_dj = '1KV below' matches 99% of the record set, while xh_bz = 1 matches only 0.5%. With the first SQL, 99% of the records are compared against both dy_dj and xh_bz; with the second, only 0.5% of the records are compared against both, so the second SQL's CPU utilization is clearly lower than the first's.

8. Replace OR with UNION (for indexed columns).
In general, replacing OR in the WHERE clause with UNION works well: using OR on indexed columns causes a full table scan. Note that this rule is valid only when all the columns involved are indexed; if one of the columns is not indexed, query efficiency may actually drop compared with using OR. The following example has indexes on both loc_id and region:
(inefficient) SELECT loc_id, loc_desc, region FROM location WHERE loc_id = 10 OR region = 'MELBOURNE';
(efficient) SELECT loc_id, loc_desc, region FROM location WHERE loc_id = 10
UNION
SELECT loc_id, loc_desc, region FROM location WHERE region = 'MELBOURNE';

9. Optimize GROUP BY.
The efficiency of a GROUP BY statement can be improved by filtering out unwanted records before grouping:
(inefficient) SELECT [job], AVG([sal]) FROM [emp] GROUP BY [job] HAVING [job] = 'PRESIDENT' OR [job] = 'MANAGER';
(efficient) SELECT [job], AVG([sal]) FROM [emp] WHERE [job] = 'PRESIDENT' OR [job] = 'MANAGER' GROUP BY [job];

10.
Use stored procedures.
Consider using stored procedures to encapsulate complex SQL statements or business logic. The benefits:
(1) the execution plan of a stored procedure can stay cached in memory for a long time, reducing recompilation time;
(2) stored procedures reduce complex interaction between client and server;
(3) if changes are needed after the program is released, you can modify the stored procedure directly instead of the program, avoiding a reinstall and redeployment.

11. Use sp_configure 'query governor cost limit' or SET QUERY_GOVERNOR_COST_LIMIT to limit the resources a query may consume. When the estimated resource consumption of a query exceeds the limit, the server automatically cancels the query before it runs. Use SET LOCK_TIMEOUT to limit how long a statement waits for a lock.

12. Use SELECT TOP or SET ROWCOUNT to restrict the rows an operation touches.

13. If a query using IN, OR, and so on does not use the index, specify the index explicitly:
SELECT * FROM PersonMember (INDEX = IX_Title) WHERE ProcessID IN ('male', 'female');

14. When inserting a large binary value into an image column, use a stored procedure rather than an inline INSERT (whether this applies to Java is unknown to the author). With an inline INSERT, the application first converts the binary value into a character string (twice its size), and the server then converts the characters back into a binary value; a stored procedure avoids these steps. Method:
CREATE PROCEDURE p_insert @image image AS INSERT INTO table1 (FImage) VALUES (@image)
Call this stored procedure from the client, passing the binary value as a parameter; processing speed improves significantly.

15.
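Points 11 and 12 can be combined into a short session-level sketch; the cost limit, the timeout, and the row count are arbitrary example values:

```sql
-- Cancel any query whose estimated cost exceeds 300 (this session only).
SET QUERY_GOVERNOR_COST_LIMIT 300;

-- Wait at most 5 seconds (5000 ms) for a lock instead of blocking forever.
SET LOCK_TIMEOUT 5000;

-- Restrict the rows the statement touches.
SELECT TOP 100 * FROM Bus_DevHistoryData;
```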
Analyze implicit conversions.
In SELECT emp_name FROM employee WHERE salary > 3000, if salary is of type float, the optimizer converts the constant with CONVERT(float, 3000), since 3000 is an integer. We should write 3000.0 in the program instead of leaving the DBMS to do the conversion at run time. The same applies to conversions between character and integer data.

III. Speeding up queries over tables with millions of rows

1. Avoid the != and <> operators in the WHERE clause; otherwise the engine abandons the index in favor of a full table scan.
2. Consider building indexes on the columns used in WHERE and ORDER BY.
3. Avoid NULL tests on columns in the WHERE clause; otherwise the result is a full table scan.
4. Avoid using OR to join conditions in the WHERE clause; otherwise the result is a full table scan. Rewrite
SELECT ID FROM t WHERE num = 10 OR num = 20
as
SELECT ID FROM t WHERE num = 10 UNION ALL SELECT ID FROM t WHERE num = 20
5. Avoid a leading percent sign in LIKE patterns:
SELECT ID FROM t WHERE name LIKE '%abc%'
6. Use IN and NOT IN with caution; they can often be replaced with EXISTS and NOT EXISTS, and otherwise cause a full table scan.
7. Using a parameter in the WHERE clause can also cause a full table scan:
SELECT ID FROM t WHERE num = @num
You can force the query to use an index instead:
SELECT ID FROM t WITH (INDEX(index_name)) WHERE num = @num
8. Avoid expressions on a column in the WHERE clause; otherwise the result is a full table scan.
SELECT ID FROM t WHERE num/2 = 100
should read
SELECT ID FROM t WHERE num = 100*2
9. Avoid function calls on a column in the WHERE clause; otherwise the result is a full table scan.
SELECT ID FROM t WHERE SUBSTRING(name, 1, 3) = 'abc'
should read
SELECT ID FROM t WHERE name LIKE 'abc%'
10.
Not every index is effective for every query. SQL query optimization is based on the data in the table, and when the indexed column contains a large amount of duplicated data, the query may not use the index at all.

11. More indexes are not always better. Indexes improve SELECT efficiency but reduce the efficiency of INSERT and UPDATE; the number of indexes on one table is best kept below six.

12. Prefer numeric columns. If a column holds only numeric values, do not design it as a character type: that reduces query and join performance and increases storage overhead, because the engine compares strings one character at a time when processing queries and joins, while a single comparison suffices for a numeric type.

13. Prefer varchar/nvarchar to char/nchar where possible: first, variable-length columns take less storage space; second, for queries, searching within a smaller column is clearly more efficient.

14. Never use SELECT * anywhere; use a specific column list instead, and do not return columns you will not use.

15. Avoid cursors; they are inefficient. If a cursor operates on more than 10,000 rows of data, consider rewriting it.

16. Avoid large transactions, to improve the system's concurrency.

17. Use SET ROWCOUNT for high-performance paging:

DECLARE @ID int
DECLARE @MoveRecords int
-- @PageIndex and @PageSize are incoming parameters
SET @MoveRecords = @PageIndex * @PageSize + 1
-- The next two lines scroll quickly to the first row of the page we want
-- and record its ID
SET ROWCOUNT @MoveRecords
SELECT @ID = ID FROM Table1 ORDER BY ID
SET ROWCOUNT @PageSize
SELECT * FROM Table1 WHERE ID >= @ID ORDER BY ID
SET ROWCOUNT 0

IV. Choosing a database primary key

The common choices for a primary key are: an auto-increment column, a uniqueidentifier (GUID), and the "comb" (combined) type.

1.
Auto-increment column.
Advantages: (1) simple and highly efficient.
Disadvantages: (1) an auto-increment column is usually an int, so there is a limit on the number of rows; (2) merging data between databases is troublesome.

2. GUID.
Advantages: (1) safe; uniqueness is guaranteed. (2) It does not suffer the data-merging problems that auto-increment columns do.
Disadvantages: (1) at 16 bytes, it consumes a lot of storage space. (2) The values have no ordering, so building an index over them is time-consuming and less efficient than with an auto-increment column.

3. Comb.
Given the pros and cons of the two primary-key types above, the comb type strikes a balance between them. The design idea is this: since the GUID type's lack of ordering makes its index inefficient and hurts system performance, keep the first 10 bytes of the GUID and use the last 6 bytes to represent the time at which the GUID was generated. This preserves uniqueness while adding ordering, which improves index efficiency. The 6 time bytes give a precision of about 1/300 of a second, so two comb values can be identical only if two of the roughly 300 GUIDs generated within the same 1/300 second happen to share the same first 10 bytes, which is practically impossible.

(1) Implementing this idea with SQL commands in SQL Server:

DECLARE @aGuid uniqueidentifier
SET @aGuid = CAST(CAST(NEWID() AS binary(10)) + CAST(GETDATE() AS binary(6)) AS uniqueidentifier)

(2) Implementing comb data in C#:

/// <summary>
/// Returns a GUID for database operations, with a time code appended
/// to improve retrieval efficiency.
/// </summary>
/// <returns>A comb (GUID-and-time mixed) value.</returns>
public static Guid NewComb()
{
    byte[] guidArray = System.Guid.NewGuid().ToByteArray();
    DateTime baseDate = new DateTime(1900, 1, 1);
    DateTime now = DateTime.Now;

    // Get the days and milliseconds which will be used to build the byte string.
    TimeSpan days = new TimeSpan(now.Ticks - baseDate.Ticks);
    TimeSpan msecs = new TimeSpan(now.Ticks - new DateTime(now.Year, now.Month, now.Day).Ticks);

    // Convert to byte arrays. Note that SQL Server is accurate to 1/300 of a
    // second, so we divide by 3.333333.
    byte[] daysArray = BitConverter.GetBytes(days.Days);
    byte[] msecsArray = BitConverter.GetBytes((long)(msecs.TotalMilliseconds / 3.333333));

    // Reverse the bytes to match SQL Server's ordering.
    Array.Reverse(daysArray);
    Array.Reverse(msecsArray);

    // Copy the bytes into the GUID.
    Array.Copy(daysArray, daysArray.Length - 2, guidArray, guidArray.Length - 6, 2);
    Array.Copy(msecsArray, msecsArray.Length - 4, guidArray, guidArray.Length - 4, 4);

    return new System.Guid(guidArray);
}

/// <summary>
/// Recovers the time information from a comb GUID returned by SQL Server.
/// </summary>
/// <param name="guid">A comb value carrying time information.</param>
/// <returns>The embedded time.</returns>
public static DateTime GetDateFromComb(System.Guid guid)
{
    DateTime baseDate = new DateTime(1900, 1, 1);
    byte[] daysArray = new byte[4];
    byte[] msecsArray = new byte[4];
    byte[] guidArray = guid.ToByteArray();

    // Copy the date parts of the GUID to the respective byte arrays.
    Array.Copy(guidArray, guidArray.Length - 6, daysArray, 2, 2);
    Array.Copy(guidArray, guidArray.Length - 4, msecsArray, 0, 4);

    // Reverse the arrays to put them into the appropriate order.
    Array.Reverse(daysArray);
    Array.Reverse(msecsArray);

    // Convert the bytes to ints.
    int days = BitConverter.ToInt32(daysArray, 0);
    int msecs = BitConverter.ToInt32(msecsArray, 0);

    DateTime date = baseDate.AddDays(days);
    date = date.AddMilliseconds(msecs * 3.333333);
    return date;
}
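The T-SQL expression from point (1) can also be used as a column default, so comb values are generated on the server; the Orders table in this sketch is hypothetical:

```sql
CREATE TABLE Orders (
    OrderID uniqueidentifier NOT NULL
        DEFAULT CAST(CAST(NEWID() AS binary(10)) + CAST(GETDATE() AS binary(6))
                     AS uniqueidentifier)
        PRIMARY KEY,
    Amount money NULL
);
```

Every INSERT that omits OrderID then receives a key whose trailing bytes follow the insertion time, keeping the clustered index in roughly sequential order.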