Common causes of slow queries:
Low I/O throughput, creating a bottleneck.
No computed column was created, so the query cannot be optimized.
Not enough memory.
Slow network speed.
The amount of data queried is too large (you can reduce it with multiple queries and other techniques).
Locking or deadlock (the most common cause of slow queries, and a flaw in the application code).
Contention for read/write resources; check the active users and locks with sp_lock and sp_who.
Unnecessary rows and columns are returned.
The query statement itself is poorly written and unoptimized.
You can speed up queries in the following ways:
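As a quick diagnostic sketch for the lock-contention cause above, you can inspect current locks and sessions in T-SQL (sp_who2 is an undocumented but widely used variant):

```sql
-- One row per held lock: spid, object, lock type, and status (WAIT = blocked).
EXEC sp_lock
-- Active sessions; a nonzero blk column shows which spid is blocking this one.
EXEC sp_who
-- Undocumented variant that adds CPU time, disk I/O, and last batch time.
EXEC sp_who2
```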
1. Put the data files, log files, and indexes on different I/O devices to increase read speed. tempdb used to be placed on RAID 0, which SQL 2000 no longer supports. The larger the amount of data, the more important I/O becomes.
2. Partition tables vertically and horizontally to reduce table size (see sp_spaceused).
3. Upgrade the hardware.
4. Build indexes according to your query conditions, optimize the indexes and access paths, and limit the size of the result set. Use an appropriate fill factor (the default value of 0 is usually best). Keep indexes as small as possible; build them on columns with small byte sizes (see CREATE INDEX), and do not build a standalone index on a column with few distinct values, such as a gender field.
5. Improve network speed.
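A minimal sketch of the indexing advice in point 4 (the table and index names below are invented for illustration):

```sql
-- A narrow, selective column is a good index candidate.
-- FILLFACTOR = 90 leaves 10% free page space for tables with frequent inserts;
-- omit the clause to keep the default of 0.
CREATE INDEX IX_Member_RefId
    ON PersonMember (ReferenceId)
    WITH FILLFACTOR = 90

-- By contrast, a standalone index on a two-valued column such as Sex is almost
-- useless: roughly half the table matches either value, so the optimizer
-- will choose a scan anyway.
```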
6. Expand the server's memory; Windows 2000 and SQL Server 2000 can support 4-8 GB of memory. Configure virtual memory: its size should be based on the services running concurrently on the computer. When running Microsoft SQL Server 2000, consider setting the virtual memory size to 1.5 times the physical memory installed on the computer. If you have also installed the Full-Text Search feature and intend to run the Microsoft Search service for full-text indexing and querying, consider setting virtual memory to at least 3 times the installed physical memory, and set the SQL Server max server memory configuration option to 1.5 times physical memory (half of the virtual-memory setting).
7. Increase the number of server CPUs. Understand, however, that parallel processing needs more resources, such as memory, than serial processing. Whether to use a parallel or a serial plan is evaluated automatically by MSSQL. A single task can be broken into multiple subtasks so they can run on the processors in parallel. For example, for sorting, joining, scanning, and GROUP BY, SQL Server determines the optimal degree of parallelism based on the system's load; complex queries that consume a lot of CPU are the best candidates for parallel processing. However, UPDATE, INSERT, and DELETE operations cannot be processed in parallel.
8. If you query with LIKE, a plain index alone may not help; a full-text index helps but consumes space. LIKE 'a%' uses the index; LIKE '%a' does not; and for LIKE '%a%' the query time is proportional to the total length of the field values, so do not use the char type there, use varchar. Build a full-text index on columns with long values.
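The difference shows up directly in the query plan; a sketch against a hypothetical Employees table:

```sql
-- Can use an index seek: the prefix 'a' anchors the search in the B-tree.
SELECT * FROM Employees WHERE LastName LIKE 'a%'

-- Cannot use the index: a leading wildcard forces a scan of every value,
-- so cost grows with the total length of the column's values.
SELECT * FROM Employees WHERE LastName LIKE '%a'
SELECT * FROM Employees WHERE LastName LIKE '%a%'
```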
9. Separate the DB server from the application server; separate OLTP from OLAP.
10. Distributed partitioned views can be used to implement a federation of database servers. A federation is a group of separately managed servers that cooperate to share the system's processing load. This mechanism of forming a federation of database servers through partitioned data lets you scale out a group of servers to support the processing needs of a large, multi-tier web site. For more information, see "Designing Federated Database Servers" ("partitioned view" in the SQL Server help file).
Before you can implement a partitioned view, you must first partition the table horizontally.
After the member tables are created, a distributed partitioned view with the same name is defined on each member server, so that queries referencing the view name can run on any member server. The system behaves as if each member server held a copy of the original table, but in fact each server has only one member table and one distributed partitioned view. The location of the data is transparent to the application.
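A minimal sketch of a partitioned view on one server (table names and ranges are invented; in the distributed case each SELECT would reference a member table on a different linked server):

```sql
-- Each member table holds one horizontal partition, enforced by a CHECK
-- constraint on the partitioning key.
CREATE TABLE Customers_A (CustomerID int PRIMARY KEY
                              CHECK (CustomerID BETWEEN 1 AND 32999),
                          CompanyName nvarchar(40))
CREATE TABLE Customers_B (CustomerID int PRIMARY KEY
                              CHECK (CustomerID BETWEEN 33000 AND 65999),
                          CompanyName nvarchar(40))

-- The view unions the partitions; queries against it are routed to the
-- right member table using the CHECK constraints.
CREATE VIEW Customers AS
    SELECT CustomerID, CompanyName FROM Customers_A
    UNION ALL
    SELECT CustomerID, CompanyName FROM Customers_B
```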
11. Rebuild indexes (DBCC DBREINDEX, DBCC INDEXDEFRAG) and shrink the data and log files (DBCC SHRINKDATABASE, DBCC SHRINKFILE). Set the log to shrink automatically. For large databases, do not enable automatic growth, as it can degrade server performance. T-SQL deserves a lot of attention; here are some common points. First, the DBMS processes a query plan like this:
Lexical and syntax checking of the query statement.
The statement is submitted to the DBMS query optimizer.
The optimizer performs algebraic optimization and access-path optimization.
The precompilation module generates the query plan.
The plan is submitted to the system for execution at the appropriate time.
Finally, the execution results are returned to the user. Second, consider SQL Server's data storage structure: a page is 8 KB (8060 usable bytes), 8 pages make up an extent, and data is stored as a B-tree.
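The maintenance commands mentioned in point 11, written out (the database, file, and table names below are placeholders):

```sql
DBCC DBREINDEX ('dbo.MyTable')             -- rebuild all indexes on the table
DBCC INDEXDEFRAG (MyDb, 'dbo.MyTable', 1)  -- defragment index id 1, online
DBCC SHRINKDATABASE (MyDb, 10)             -- shrink files, leaving 10% free space
DBCC SHRINKFILE (MyDb_Log, 100)            -- shrink the log file to 100 MB
```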
12. The difference between COMMIT and ROLLBACK. ROLLBACK rolls back the whole transaction; COMMIT commits the current transaction. There is no need to write a transaction inside dynamic SQL; if you need one, put it outside, e.g. BEGIN TRAN EXEC(@s) COMMIT TRAN, or turn the dynamic SQL into a function or stored procedure.
13. Use a WHERE clause in SELECT statements to limit the number of rows returned and avoid table scans. Returning unnecessary data wastes the server's I/O resources, burdens the network, and lowers performance. If the table is large, a table scan locks the table and blocks other connections from accessing it, with serious consequences.
14. SQL comments have no effect on execution.
15. Avoid cursors as far as possible; they consume a lot of resources. If row-by-row processing is needed, try non-cursor techniques first: loop on the client, use temporary tables, table variables, subqueries, CASE expressions, and so on. Cursors can be classified by the fetch options they support. Forward-only: rows must be fetched in order from the first row to the last; FETCH NEXT is the only fetch operation allowed, and the default. Scrollable: any row can be fetched at random anywhere in the cursor. Cursor techniques became more powerful under SQL 2000; their purpose is to support loops. There are four concurrency options. READ_ONLY: positioned updates through the cursor are not allowed, and no locks are held on the rows that make up the result set. OPTIMISTIC WITH VALUES: optimistic concurrency control is a standard part of transaction-control theory, used when there is only a small chance that a second user will update a row in the interval between opening the cursor and updating the row. When a cursor is opened with this option, no locks control its rows, which helps maximize throughput. If the user tries to modify a row, the row's current values are compared with the values obtained when the row was last fetched; if any value has changed, the server knows that someone else has updated the row and returns an error; if the values are the same, the server performs the modification. OPTIMISTIC WITH ROW VERSIONING: this optimistic concurrency option is based on row versioning. With row versioning, the table must have a version identifier the server can use to determine whether the row has changed since it was read into the cursor.
In SQL Server this capability is provided by the timestamp data type, a binary number indicating the relative order of changes in the database. Each database has a global current timestamp value, @@DBTS. Each time a row containing a timestamp column is changed in any way, SQL Server stores the current @@DBTS value in the timestamp column and then increments @@DBTS. If a table has a timestamp column, timestamps are recorded at the row level; the server can compare a row's current timestamp with the timestamp stored at the last fetch to determine whether the row has been updated. The server does not need to compare the values of all columns, only the timestamp column. If an application wants optimistic concurrency based on row versioning on a table that has no timestamp column, the cursor falls back to optimistic concurrency control based on values.
SCROLL LOCKS implements pessimistic concurrency control. With pessimistic concurrency control, the application tries to lock database rows as they are read into the cursor result set. With a server cursor, an update lock is placed on a row when it is read into the cursor. If the cursor is opened within a transaction, the transaction's update locks are held until the transaction is committed or rolled back, and the scroll lock is dropped when the next row is fetched. If the cursor is opened outside a transaction, the lock is dropped when the next row is fetched. Therefore, whenever full pessimistic concurrency control is needed, the cursor should be opened inside a transaction. An update lock prevents any other task from acquiring an update or exclusive lock, and so prevents other tasks from updating the row; it does not block shared locks, however, so it does not prevent other tasks from reading the row unless the second task also requests a read with an update lock. Scroll locks: depending on the lock hints specified in the SELECT statement that defines the cursor, these cursor concurrency options can generate scroll locks. A scroll lock is acquired on each row as it is fetched and held until the next fetch or until the cursor is closed, whichever comes first; on the next fetch, the server acquires scroll locks on the newly fetched rows and releases the scroll locks on the previously fetched rows. Scroll locks are independent of transaction locks and can persist across a commit or rollback. If the option to close cursors at commit is off, a COMMIT does not close any open cursors, and scroll locks are held across the commit to maintain isolation of the fetched data. The type of scroll lock acquired depends on the cursor's concurrency option and the lock hints in the cursor's SELECT statement.
Lock hint      READ_ONLY   OPTIMISTIC (values)   OPTIMISTIC (row versioning)   SCROLL LOCKS
No hint        Unlocked    Unlocked              Unlocked                      Update
NOLOCK         Unlocked    Unlocked              Unlocked                      Unlocked
HOLDLOCK       Shared      Shared                Shared                        Update
UPDLOCK        Error       Update                Update                        Update
TABLOCKX       Error       Unlocked              Unlocked                      Update
Other hints    Unlocked    Unlocked              Unlocked                      Update
* Specifying the NOLOCK hint makes the table it is specified on read-only within the cursor.
16. Use Profiler to trace queries, get the time each query needs, and find the problem SQL; use the Index Tuning Wizard to optimize indexes.
17. Note the difference between UNION and UNION ALL: UNION ALL is faster.
18. Be careful with DISTINCT: do not use it when it is not needed. Like UNION, it slows the query down. Duplicate records are usually not a problem for the query.
19. Do not return rows or columns the query does not need.
20. Use sp_configure 'query governor cost limit' or SET QUERY_GOVERNOR_COST_LIMIT to limit the resources a query may consume. When the estimated cost of a query exceeds the limit, the server cancels it automatically, killing it before it runs. SET LOCK_TIMEOUT sets the lock wait time.
21. Use SELECT TOP 100 / TOP 10 PERCENT to limit the number of rows returned, or SET ROWCOUNT to limit the rows an operation affects.
22. Before SQL 2000, generally avoid the following: IS NULL, <>, !=, !>, !<, NOT, NOT EXISTS, NOT IN, NOT LIKE, and LIKE '%500', because they do not use an index and force full table scans. Also, do not apply a function to a column name in the WHERE clause, such as CONVERT or SUBSTRING; if a function must be used, create a computed column and index that instead. A workaround is sometimes possible: change WHERE SUBSTRING(firstname,1,1) = 'm' to WHERE firstname LIKE 'm%' (an index scan); always keep functions away from the column name. And do not build too many or too large indexes. NOT IN scans the table multiple times; replace it with EXISTS, NOT EXISTS, IN, or LEFT OUTER JOIN, especially the left join. EXISTS is faster than IN, and NOT is the slowest operation. If a column can contain NULL, its index formerly did not work, but the 2000 optimizer can now handle it: IS NULL, NOT, NOT EXISTS, and NOT IN can now be optimized, while <> still cannot use an index.
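Two of the rewrites from point 22, side by side (the tables and columns are hypothetical, borrowed from the article's own examples):

```sql
-- Non-sargable: the function hides the column from the index, forcing a scan.
SELECT * FROM PersonMember WHERE SUBSTRING(FirstName, 1, 1) = 'm'
-- Sargable rewrite: the prefix LIKE can use an index on FirstName.
SELECT * FROM PersonMember WHERE FirstName LIKE 'm%'

-- NOT IN scans the table repeatedly; NOT EXISTS is usually much cheaper.
SELECT * FROM PersonMember p
WHERE NOT EXISTS (SELECT 1 FROM BlackList b WHERE b.MemberId = p.MemberId)
```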
23. Use Query Analyzer to view the query plans of your SQL statements and evaluate whether they are well optimized. On average, 20% of the code occupies 80% of the resources; those slow spots are where optimization should focus.
24. If you find that a query using IN, OR, and the like does not use an index, specify the index with an explicit hint: SELECT * FROM personmember (INDEX = ix_title) WHERE processid IN ('male', 'female').
25. Pre-compute the results you will need to query and store them in a table, then SELECT from it at query time. This was the most important technique before SQL 7.0, for example for calculating hospital inpatient charges.
26. MIN() and MAX() can use an appropriate index.
27. A database principle is that the closer the code is to the data, the better, so prefer Default first, then Rules, Triggers, Constraints (constraints such as primary key, foreign key, CHECK, UNIQUE, and the maximum length of a data type are all constraints), then Procedure. This keeps maintenance small, produces higher-quality programs, and executes fast.
28. If you want to insert a large binary value into an image column, use a stored procedure; do not use an inline INSERT (I do not know whether this applies to Java). The application first converts the binary value to a string (double the size), and the server converts it back to binary after receiving the characters. A stored procedure has none of these steps. Method: CREATE PROCEDURE p_insert AS INSERT INTO table (fimage) VALUES (@image); call this stored procedure from the front end and pass the binary parameter, which improves processing speed significantly.
29. BETWEEN is sometimes faster than IN; BETWEEN can locate a range in the index faster. The query optimizer shows the difference. SELECT * FROM chineseresume WHERE title IN ('male', 'female') and SELECT * FROM chineseresume WHERE title BETWEEN 'male' AND 'female' return the same result, but because IN may be evaluated more than once, it is sometimes slower.
30. When necessary, create indexes on global or local temporary tables; this can sometimes improve speed, but not always, because an index also consumes a lot of resources. It is created exactly like one on a real table.
31. Do not open transactions that do no useful work, for example while generating reports; that wastes resources. Open a transaction only when necessary.
32. A query with OR clauses can be decomposed into multiple queries connected with UNION. Their speed depends only on whether indexes are used; if the query needs a composite index, UNION ALL executes more efficiently. Multiple OR clauses may not use the index, whereas the UNION form can be written to match the index. The key question is whether the index is used.
33. Use views sparingly; they are inefficient. Operating on a view is slower than operating on the table directly; consider replacing it with a stored procedure. Above all, do not nest views; nested views make it harder to find the underlying data. Look at the nature of a view: it is optimized SQL stored on the server with a query plan already generated. When retrieving data from a single table, do not use a view that spans multiple tables; retrieve directly from the table, or use a view that contains only that table, otherwise you add unnecessary overhead and interfere with the query. To speed up queries against views, MSSQL added indexed views.
34. When DISTINCT and ORDER BY are not required, move those operations to the client. They add extra overhead, for the same reason as UNION versus UNION ALL.
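A sketch of the OR-to-UNION rewrite from point 32, assuming separate indexes exist on Title and City (names are illustrative):

```sql
-- May scan: one predicate per index, combined with OR, often defeats both indexes.
SELECT * FROM ChineseResume WHERE Title = 'engineer' OR City = 'Shanghai'

-- Rewritten: each branch can seek on its own index.
SELECT * FROM ChineseResume WHERE Title = 'engineer'
UNION
SELECT * FROM ChineseResume WHERE City = 'Shanghai'
-- Use UNION ALL instead if duplicates are acceptable;
-- it skips the sort/deduplicate step and is faster.
```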
SELECT TOP ad.companyname, comid, position, ad.referenceid, worklocation,
       CONVERT(varchar, ad.postdate, 120) AS postdate1, workyear, degreedescription
FROM jobcn_query.dbo.COMPANYAD_query ad
WHERE referenceid IN ('JCNAD00329667', 'JCNAD132168', 'JCNAD00337748', 'JCNAD00338345',
    'JCNAD00333138', 'JCNAD00303570', 'JCNAD00303569',
    'JCNAD00303568', 'JCNAD00306698', 'JCNAD00231935', 'JCNAD00231933',
    'JCNAD00254567', 'JCNAD00254585', 'JCNAD00254608',
    'JCNAD00254607', 'JCNAD00258524', 'JCNAD00332133', 'JCNAD00268618',
    'JCNAD00279196', 'JCNAD00268613')
ORDER BY postdate DESC
35. In an IN list of values, put the most frequent values first and the least frequent last, to reduce the number of comparisons.
36. SELECT INTO locks the system tables (sysobjects, sysindexes, etc.), blocking access by other connections. When creating a temporary table, use an explicit declaration statement instead of SELECT INTO. Example: DROP TABLE t_lxh; BEGIN TRAN; SELECT * INTO t_lxh FROM chineseresume WHERE name = 'XYZ'; -- commit. In another connection, run SELECT * FROM sysobjects: you can see that SELECT INTO locks the system tables, and CREATE TABLE locks them too (whether for a temporary or an ordinary table). So never use SELECT INTO inside a transaction! If it is a temporary table you use often, use a real table or a table variable instead.
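The safer pattern from point 36, sketched with a table variable (column names are illustrative):

```sql
-- Avoid inside a transaction: SELECT INTO holds locks on sysobjects/sysindexes
-- for the duration, blocking other connections.
-- SELECT * INTO #t FROM ChineseResume WHERE Name = 'XYZ'

-- Preferred: declare the structure explicitly, then fill it.
DECLARE @t TABLE (ReferenceId varchar(20), Name varchar(50))
INSERT INTO @t (ReferenceId, Name)
SELECT ReferenceId, Name FROM ChineseResume WHERE Name = 'XYZ'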
37. A HAVING clause after GROUP BY can eliminate surplus rows, but try not to use GROUP BY and HAVING to do the work of row elimination. Their optimal execution order is: the WHERE clause selects all the appropriate rows, GROUP BY groups them for statistics, and HAVING removes unwanted groups. So GROUP BY with a light HAVING is cheap and the query is fast; grouping large row sets and applying HAVING is very resource-intensive. If the purpose of GROUP BY does not include computation, but only grouping, then DISTINCT is faster.
38. Updating many records in one batch is faster than updating them one at a time; batch processing is good.
39. Use temporary tables sparingly; replace them with result sets and table-type variables where possible. Table variables are better than temporary tables.
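The ordering rule from point 37, in code (the Employees table with DeptId and Salary columns is assumed):

```sql
-- Poor: HAVING filters only after every group has already been built.
SELECT DeptId, AVG(Salary) AS AvgSalary
FROM Employees GROUP BY DeptId HAVING DeptId <> 0

-- Better: WHERE removes the rows first, so fewer groups are computed.
SELECT DeptId, AVG(Salary) AS AvgSalary
FROM Employees WHERE DeptId <> 0 GROUP BY DeptId

-- If no aggregate is needed, DISTINCT beats GROUP BY for plain deduplication.
SELECT DISTINCT DeptId FROM Employees
```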
40. Under SQL 2000, computed columns can be indexed if the following conditions are met:
The computed column's expression is deterministic.
The text, ntext, and image data types are not used.
The required options are set: ANSI_NULLS = ON, ANSI_PADDING = ON, and so on.
41. Do data-processing work on the server as far as possible, to reduce network overhead, for example by using stored procedures. A stored procedure is SQL that has been compiled, optimized, organized into an execution plan, and stored in the database; as a collection of control-flow statements, it is naturally fast. Dynamic SQL that is executed repeatedly can use a temporary stored procedure, which (like a temporary table) is placed in tempdb. Because SQL Server previously did not support complex math calculations, that work had to be done in another layer, increasing network overhead. SQL 2000 supports UDFs and now supports complex mathematical calculations, but a function's return value should not be too large, as that is very expensive. User-defined functions consume large amounts of resources, like cursors; if large results must be returned, use a stored procedure instead.
42. Do not call the same function repeatedly in one statement; that wastes resources. Put the result in a variable and reference the variable, which is faster.
43. SELECT COUNT(*) is inefficient; rework such queries where possible, and EXISTS is faster. Note also the difference: SELECT COUNT(nullable_field) FROM table and SELECT COUNT(non_null_field) FROM table return different values!
44. When the server has enough memory, set the number of threads to the maximum number of connections + 5 for best efficiency; otherwise set the number of threads below the maximum number of connections so that SQL Server's thread pool takes over. If the value is still set to maximum connections + 5 without enough memory, server performance is severely damaged.
45. Access your tables in a consistent order. If you lock table A first and then table B, lock them in that order in all stored procedures. If a stored procedure (inadvertently) locks table B first and then table A, this can cause a deadlock. Deadlocks are hard to detect if the locking order is not carefully designed in advance.
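The existence check from point 43, written both ways (ChineseResume and its Title column come from the article's own examples):

```sql
-- Slow: counts every matching row even though we only need yes/no.
IF (SELECT COUNT(*) FROM ChineseResume WHERE Title = 'engineer') > 0
    PRINT 'found'

-- Fast: EXISTS stops at the first matching row.
IF EXISTS (SELECT 1 FROM ChineseResume WHERE Title = 'engineer')
    PRINT 'found'
```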
46. Monitor the corresponding hardware load with SQL Server Performance Monitor. Memory: Page Faults/sec counter: if this value is occasionally high, it indicates threads were competing for memory at that moment; if it stays high, memory may be the bottleneck.
Processor:
% DPC Time: the percentage of time the processor spent receiving and servicing deferred procedure calls (DPCs) during the sample interval. (DPCs run at a lower priority than standard interrupts.) Because DPCs execute in privileged mode, the DPC time percentage is part of the privileged time percentage. These times are measured separately and are not part of the interrupt total. This total shows average busy time as a percentage of instance time.
% Processor Time counter: if this value stays above 95%, the bottleneck is the CPU. Consider adding a processor or switching to a faster one.
% Privileged Time: the percentage of non-idle processor time spent in privileged mode. (Privileged mode is a processing mode designed for operating-system components and hardware drivers; it allows direct access to hardware and all memory. The other mode is user mode, a restricted processing mode designed for applications, environment subsystems, and the integral subsystem. The operating system switches application threads into privileged mode to access operating-system services.) % Privileged Time includes time spent servicing interrupts and DPCs. A high privileged-time percentage may be caused by a large number of interrupts from a failing device. This counter displays average busy time as a portion of sample time.
% User Time: represents CPU-consuming database operations, such as sorting and executing aggregate functions. If this value is high, consider adding indexes, using simple table joins, and horizontally partitioning large tables to reduce it. Physical Disk: Current Disk Queue Length counter: this value should be no more than 1.5-2 times the number of disks; to improve performance, add disks. SQLServer: Cache Hit Ratio counter: the higher, the better; if it stays below 80%, consider adding memory. Note that this value is cumulative since SQL Server started, so after some running time it no longer reflects the system's current state.
47. Analyze SELECT emp_name FROM employee WHERE salary > 3000. If salary is a float in this statement, the optimizer converts the constant: convert(float, 3000), because 3000 is an integer. We should write 3000.0 when programming instead of leaving the conversion to the DBMS at run time. The same applies to conversions between character and integer data.
48. How a query's join conditions relate to the order in which they are written:
SELECT a.personmemberid, * FROM chineseresume a, personmember b
    WHERE personmemberid = b.referenceid AND a.personmemberid = 'JCNPRH39681'
    (A = B, B = 'constant')
SELECT a.personmemberid, * FROM chineseresume a, personmember b
    WHERE a.personmemberid = b.referenceid AND a.personmemberid = 'JCNPRH39681'
    AND b.referenceid = 'JCNPRH39681'
    (A = B, B = 'constant', A = 'constant')
SELECT a.personmemberid, * FROM chineseresume a, personmember b
    WHERE b.referenceid = 'JCNPRH39681' AND a.personmemberid = 'JCNPRH39681'
    (B = 'constant', A = 'constant')
49.
(1) IF no owner code is entered THEN code1 = 0, code2 = 9999, ELSE code1 = code2 = owner code, END IF. Then execute: SELECT owner_name FROM P2000 WHERE owner_code >= :code1 AND owner_code <= :code2.
(2) IF no owner code is entered THEN SELECT owner_name FROM P2000, ELSE code = owner code; SELECT owner_name FROM P2000 WHERE owner_code = :code, END IF. The first method uses only one SQL statement; the second uses two. When no owner code is entered, the second method is clearly more efficient than the first, because it has no restrictive condition at all; when an owner code is entered, the second method is still more efficient, not only because it has one fewer restrictive condition, but also because the equality comparison is the fastest query operation. When we write programs, we should not be afraid of a little extra trouble.
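Point 49's two-statement approach, sketched as a stored procedure (the procedure name, parameter, and columns are illustrative; P2000 is the article's own table):

```sql
CREATE PROCEDURE p_GetOwners @code int = NULL AS
IF @code IS NULL
    -- No restriction entered: no WHERE clause at all, so no wasted comparison.
    SELECT owner_name FROM P2000
ELSE
    -- Restriction entered: a single equality predicate, the fastest query operation.
    SELECT owner_name FROM P2000 WHERE owner_code = @code
```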
50. About the new paging query methods now used on JOBCN (below): use the Performance Optimizer to analyze the performance bottleneck. If the bottleneck is I/O or the network, the following method is effective; if it is CPU or memory, the current method is better. Distinguish the following methods, which show that the smaller the index, the better.
BEGIN
    DECLARE @local_variable TABLE (FID int IDENTITY(1,1), ReferenceID varchar(20))
    INSERT INTO @local_variable (ReferenceID)
    SELECT TOP 100000 ReferenceID FROM chineseresume ORDER BY ReferenceID
    SELECT * FROM @local_variable WHERE FID > AND FID <= 60
END
versus:
BEGIN
    DECLARE @local_variable TABLE (FID int IDENTITY(1,1), ReferenceID varchar(20))
    INSERT INTO @local_variable (ReferenceID)
    SELECT TOP 100000 ReferenceID FROM chineseresume ORDER BY updatedate
    SELECT * FROM @local_variable WHERE FID > AND FID <= 60
END
both of which differ from:
BEGIN
    CREATE TABLE #temp (FID int IDENTITY(1,1), ReferenceID varchar(20))
    INSERT INTO #temp (ReferenceID)
    SELECT TOP 100000 ReferenceID FROM chineseresume ORDER BY updatedate
    SELECT * FROM #temp WHERE FID > AND FID <=
    DROP TABLE #temp
END
Appendix: experience writing and optimizing stored procedures
1. Intended readers: database development programmers, developers on projects with large data volumes that involve SP (stored procedure) optimization, and anyone with a strong interest in databases.
2. Introduction: in database development you often encounter complex business logic and database operations, and SPs are used to encapsulate the database work. If a project has many SPs written without any conventions, future system maintenance becomes difficult and the logic of large SPs hard to understand. Moreover, if the database has a large data volume or the performance requirements on an SP are high, you will run into optimization problems; otherwise, speed may be very slow. In my personal experience, an optimized SP can be hundreds of times more efficient than a poorly written one.
3. Content:
1. If a developer uses a table or view from another database, a view should be created in the current database to achieve the cross-database operation. It is best not to use "database.dbo.table_name" directly, because sp_depends cannot show the cross-database tables or views an SP uses, which makes verification inconvenient.
2. Before submitting an SP, the developer must already have analyzed its query plan with SET SHOWPLAN ON and done their own query-optimization check.
3. For high program efficiency and optimized applications, note the following points when writing SPs:
a) SQL usage conventions:
Avoid large transactions; use the HOLDLOCK clause carefully; improve the system's ability to handle concurrency.
Avoid repeated access to the same table or a few tables, especially large tables; consider extracting the needed data into a temporary table first by condition, then joining against it.
Avoid cursors, because they are inefficient; if a cursor would operate on more than 10,000 rows of data, it should be rewritten. If you must use a cursor, avoid table-join operations inside the cursor loop.
Pay attention to the WHERE clause: the order of its conditions matters. Order the conditional clauses according to the index order and range size; as far as possible, make the field order consistent with the index order, ranging from large to small.
Do not perform functions, arithmetic, or other expression operations on the left side of "=" in the WHERE clause, or the system may not be able to use the index correctly.
Use EXISTS instead of SELECT COUNT(1) to determine whether a record exists; the COUNT function is only for counting all the rows of a table, and COUNT(1) is more efficient than COUNT(*).
Use ">=" rather than ">" where possible.
Note the substitution between some OR clauses and UNION clauses.
Note the data types of joined columns; avoid joins between columns of different data types.
Note the relationship between stored-procedure parameters and their data types.
Note the amount of data in INSERT and UPDATE operations, to prevent conflicts with other applications. If the data exceeds 200 data pages (400 KB), the system escalates the lock from page level to table level.
b) Index usage conventions:
Index creation should be considered together with the application; it is recommended that a large OLTP table have no more than 6 indexes.
Use indexed fields as query conditions wherever possible, especially the clustered index; if necessary, force a specific index with index=index_name.
Avoid table scans when querying large tables; consider creating new indexes when necessary.
When using an indexed field as a condition, if the index is a composite index, the first field of the index must appear in the condition to guarantee that the system uses the index; otherwise the index will not be used.
Pay attention to index maintenance: periodically rebuild indexes and recompile stored procedures.
c) tempdb usage conventions:
Avoid DISTINCT, ORDER BY, GROUP BY, HAVING, JOIN, and COMPUTE where possible, because these statements aggravate the burden on tempdb.
Avoid frequently creating and dropping temporary tables; reduce the consumption of system-table resources.
When creating a temporary table, if the data volume is large, use SELECT INTO instead of CREATE TABLE to avoid logging and improve speed; if the data volume is small, to ease the pressure on the system tables, it is recommended to CREATE TABLE first and then INSERT.
If a temporary table has a large amount of data and needs an index, put the creation of the temporary table and its index into a separate child stored procedure, so that the system can make good use of the temporary table's index.
If temporary tables are used, be sure to drop all of them explicitly at the end of the stored procedure: first TRUNCATE TABLE, then DROP TABLE. This avoids long locks on the system tables.
Be careful with joins between large temporary tables and other large tables for queries and modifications; reduce the burden on the system tables, because such an operation uses tempdb's system tables multiple times in one statement.
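The tempdb conventions in section (c) combine into a pattern like this (table and column names are invented for illustration):

```sql
-- Large intermediate result: SELECT INTO creates and fills #big with minimal logging.
SELECT MemberId, Title
INTO #big
FROM BigTable
WHERE PostDate > '2004-01-01'

-- Index it (in a separate child procedure, per the convention above)
-- if the joins that follow need one.
CREATE INDEX IX_big_member ON #big (MemberId)

-- ... joins and updates against #big go here ...

-- Explicit cleanup: truncate first, then drop, to shorten system-table locks.
TRUNCATE TABLE #big
DROP TABLE #big
```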
d) Reasonable use of algorithms:
Based on the SQL optimizations described above and the SQL optimization content in the ASE tuning manual, combine multiple algorithms and compare them against each other in order to obtain the method that is most efficient and consumes the fewest resources. The specific ASE tuning commands are: SET STATISTICS IO ON, SET STATISTICS TIME ON, SET SHOWPLAN ON, and so on.
You can also read the following article: http://www.jb51.net/article/19024.htm