SQL Server Optimization: 50 Methods


There are many reasons for slow queries; the most common are:
1. No index, or the index is not used (this is the most common cause of slow queries and a defect in program design).
2. Low I/O throughput, which creates a bottleneck.
3. No computed columns were created, so the query is not optimized.
4. Insufficient memory.
5. Slow network speed.
6. The query returns too much data (split it into multiple queries or reduce the amount of data in other ways).
7. Locks or deadlocks (also a very common cause of slow queries and a defect in program design).
8. sp_lock and sp_who show active users waiting, caused by contention for read/write resources.
9. Unnecessary rows and columns are returned.
10. The query statement itself is badly written and not optimized.

You can improve query performance in the following ways:
1. Put data files, log files, and indexes on different I/O devices to increase read speed. (tempdb used to be placed on RAID 0, which SQL 2000 no longer supports.) The larger the data volume, the more important it is to improve I/O.
2. Split tables vertically and horizontally to reduce table size (see sp_spaceused).
3. Upgrade the hardware.
4. Build and optimize indexes according to the query conditions, optimize the access paths, and limit the size of the result set. Keep the fill factor appropriate (the default value of 0 is usually best). Keep indexes as small as possible: it is best to build them on columns with a small number of bytes (see the notes on creating indexes), and do not build a single-column index on a column with only a few distinct values, such as a gender column. A minimal sketch follows this item.
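A minimal sketch of this advice, assuming a hypothetical Employee table with a LastName column (the table and column names are illustrative, not from the original article):
-- Index on a narrow, selective column; a small key keeps the index small.
CREATE NONCLUSTERED INDEX IX_Employee_LastName ON dbo.Employee (LastName);
-- Leaving FILLFACTOR unspecified uses the server default of 0, as recommended above.
-- By contrast, a single-column index on a low-selectivity column such as Gender
-- is rarely chosen by the optimizer and is usually not worth creating:
-- CREATE NONCLUSTERED INDEX IX_Employee_Gender ON dbo.Employee (Gender);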
5. Improve network speed.
6. Expand the server's memory. Windows 2000 and SQL Server 2000 can support 4-8 GB of memory. Configure virtual memory: the virtual memory size should be based on the services running concurrently on the computer. When running Microsoft SQL Server 2000, consider setting virtual memory to 1.5 times the physical memory installed on the computer. If you have also installed the full-text search feature and plan to run the Microsoft Search service for full-text indexing and querying, consider configuring virtual memory to at least 3 times the physical memory. Configure the SQL Server max server memory option to 1.5 times the physical memory (half of the virtual memory setting). A sketch follows.
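A minimal sketch of adjusting max server memory with sp_configure; the 12288 MB value is only an illustration, pick whatever fits the 1.5x guideline for your own hardware:
-- 'max server memory' is an advanced option, so expose advanced options first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory', 12288;   -- value in MB, illustrative only
RECONFIGURE;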
7. Increase the number of server CPUs, but understand that parallel processing needs more resources than serial processing, memory in particular. Whether to use a parallel or a serial plan is evaluated automatically by MSSQL. A single task is decomposed into multiple tasks that can run on different processors; for example sorts, joins, scans, and GROUP BY operations can be executed in parallel, and SQL Server determines the optimal degree of parallelism based on the system load. Complex queries that consume large amounts of CPU benefit most from parallel processing, but UPDATE, INSERT, and DELETE operations cannot be processed in parallel.
8. If you query with LIKE, a plain index does not always help, and a full-text index consumes space. LIKE 'a%' uses the index; LIKE '%a' does not. A LIKE '%a%' query takes time proportional to the total length of the field values, so do not use char for such columns; use varchar, and build a full-text index on long text fields. See the sketch below.
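A minimal sketch of the LIKE behaviour, assuming a hypothetical Candidate table with an indexed Title column (names are illustrative):
-- No leading wildcard: the index on Title can be used to seek.
SELECT Title FROM dbo.Candidate WHERE Title LIKE 'a%';
-- Leading wildcard: the index cannot be seeked, the column values are scanned
-- and the cost grows with the total text length.
SELECT Title FROM dbo.Candidate WHERE Title LIKE '%a';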
9. Separate the DB server from the application server; separate OLTP from OLAP.
10. Distributed partitioned views can be used to implement a federation of database servers. A federation is a group of separately managed servers that cooperate to share the processing load of the system. This mechanism of partitioning the data to form a federation of database servers lets a group of servers scale out to support the processing needs of large, multi-tiered web sites. For more information, see Designing Federated Database Servers (and the 'partitioned view' topic in the SQL help file).
a. Before implementing a partitioned view, you must first horizontally partition the table.
b. After the member tables are created, a distributed partitioned view with the same name is defined on each member server, so queries that reference the view name can run on any member server. The system behaves as if each member server held a copy of the original table, although each server actually holds only one member table and one distributed partitioned view. The location of the data is transparent to the application. A minimal sketch follows.
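A minimal sketch of a (local) partitioned view, using assumed member tables split on a hypothetical Region key; a distributed version would reference the member tables on other servers through linked-server names:
-- Member tables, each holding one horizontal slice enforced by a CHECK constraint.
CREATE TABLE dbo.Customers_East (
    CustomerID int NOT NULL,
    Region char(4) NOT NULL CHECK (Region = 'EAST'),
    Name varchar(100) NOT NULL,
    CONSTRAINT PK_Customers_East PRIMARY KEY (CustomerID, Region)
);
CREATE TABLE dbo.Customers_West (
    CustomerID int NOT NULL,
    Region char(4) NOT NULL CHECK (Region = 'WEST'),
    Name varchar(100) NOT NULL,
    CONSTRAINT PK_Customers_West PRIMARY KEY (CustomerID, Region)
);
GO
-- The partitioned view unions the slices; the optimizer uses the CHECK
-- constraints to touch only the member table that can hold the requested rows.
CREATE VIEW dbo.Customers AS
SELECT CustomerID, Region, Name FROM dbo.Customers_East
UNION ALL
SELECT CustomerID, Region, Name FROM dbo.Customers_West;
GO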
11. Rebuild indexes with DBCC DBREINDEX or DBCC INDEXDEFRAG, and shrink data and log files with DBCC SHRINKDATABASE or DBCC SHRINKFILE; set auto-shrink for the log where appropriate (a maintenance sketch appears after the list of steps below). For large databases, do not rely on database autogrow, as it reduces server performance. A great deal depends on how T-SQL is written; the common points are listed here. First, the steps the DBMS takes to process a query plan:
1. Lexical and syntactic checking of the query statement.
2. Submission of the statement to the DBMS query optimizer.
3. Algebraic optimization and access-path optimization by the optimizer.
4. Generation of the query plan by the precompiled module.
5. Submission to the system for execution at the appropriate time.
6. Finally, return of the execution result to the user.
Second, a look at the SQL Server data storage structure: a page is 8 KB (8060 bytes of usable space), 8 pages make up an extent, and data is stored in B-trees.
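A minimal sketch of the maintenance commands from point 11; the database name MyDB and the index name are assumptions for illustration, while ChineseResume is a table used elsewhere in this article:
-- Rebuild all indexes on a table (SQL 2000 syntax).
DBCC DBREINDEX ('dbo.ChineseResume');
-- Defragment one index online, which is less intrusive than a full rebuild.
DBCC INDEXDEFRAG (MyDB, 'dbo.ChineseResume', IX_ChineseResume_Title);
-- Shrink the data files of the database and one specific log file.
DBCC SHRINKDATABASE (MyDB);
DBCC SHRINKFILE (MyDB_Log);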
12. Know the difference between COMMIT and ROLLBACK. ROLLBACK rolls back the whole transaction; COMMIT commits the current transaction. There is no need to open a transaction inside dynamic SQL; if you must have one, open it on the outside, for example: BEGIN TRAN EXEC(@s) COMMIT TRAN, or better, write the dynamic SQL as a function or stored procedure.
13. Use a WHERE clause in SELECT statements to limit the number of rows returned and avoid table scans. Returning unnecessary data wastes the server's I/O resources, adds to the network load, and reduces performance. If the table is large, a table scan locks the table for its duration and blocks other connections from accessing it, with serious consequences.
14. SQL comments have no effect on execution.
15. Avoid cursors as far as possible; they consume a large amount of resources. If row-by-row processing is required, try non-cursor techniques first: loops on the client, temporary tables, table variables, subqueries, CASE expressions, and so on (a set-based sketch follows this item). Cursors can be classified by the fetch options they support:
Forward-only: rows must be fetched in order from the first row to the last. FETCH NEXT is the only fetch operation allowed, and it is the default.
Scrollable: any row can be fetched at random anywhere in the cursor.
Cursor techniques became much more powerful in SQL 2000; their purpose is to support looping. There are four concurrency options:
READ_ONLY: positioned updates through the cursor are not allowed, and no locks are held on the rows that make up the result set.
OPTIMISTIC WITH VALUES: optimistic concurrency control is a standard part of transaction-control theory. It is used when there is only a small chance that a second user will update a row in the interval between opening the cursor and updating the row. When a cursor is opened with this option, no locks control its rows, which helps maximize throughput. If the user attempts to modify a row, the row's current values are compared with the values obtained when the row was last fetched. If any value has changed, the server knows that someone else has updated the row and returns an error. If the values are the same, the server performs the modification.
OPTIMISTIC WITH ROW VERSIONING: this optimistic concurrency option is based on row versioning. With row versioning, the table must have some version identifier that the server can use to determine whether the row has changed since it was read into the cursor. In SQL Server this capability is provided by the timestamp data type, a binary number that represents the relative order of changes in the database. Each database has a global current timestamp value, @@DBTS. Each time a row with a timestamp column is changed in any way, SQL Server stores the current @@DBTS value in that column and then increments @@DBTS. Because the timestamp is recorded at row level, the server only needs to compare a row's current timestamp with the timestamp saved at the last fetch to know whether the row has been updated; it does not have to compare the values of every column. If an application requires optimistic concurrency for a table that has no timestamp column, consider optimistic concurrency based on values instead.
SCROLL LOCKS: this option implements pessimistic concurrency control, in which the application attempts to lock each database row as it is read into the cursor result set. With a server cursor, an update lock is placed on the row when it is read into the cursor. If the cursor is opened within a transaction, the transaction's update lock is held until the transaction is committed or rolled back, and the scroll lock is released when the next row is fetched. If the cursor is opened outside a transaction, the lock is released when the next row is fetched. Therefore, whenever full pessimistic concurrency control is needed, the cursor should be opened inside a transaction. An update lock prevents any other task from acquiring an update or exclusive lock, which stops other tasks from updating the row; it does not block shared locks, however, so other tasks can still read the row unless the second task also requests a read with an update lock. These cursor concurrency options can generate scroll locks based on the lock hints specified in the SELECT statement that defines the cursor. A scroll lock is acquired on each row at fetch time and held until the next fetch or until the cursor closes, whichever comes first; at the next fetch, the server acquires scroll locks for the newly fetched rows and releases the scroll locks on the previously fetched rows. Scroll locks are independent of transaction locks and can persist past a commit or rollback. If the option to close cursors at commit is off, a COMMIT does not close open cursors, and the scroll locks persist past the commit to maintain isolation of the fetched data. The type of scroll lock acquired depends on the cursor concurrency option and the lock hints in the cursor's SELECT statement.
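A minimal sketch contrasting a cursor with a set-based alternative, using the ChineseResume table referenced later in this article; the Title update itself is purely illustrative:
-- Cursor version: one row at a time, holding resources for the whole loop.
DECLARE @id varchar(20);
DECLARE cur CURSOR FAST_FORWARD FOR
    SELECT ReferenceID FROM dbo.ChineseResume;
OPEN cur;
FETCH NEXT FROM cur INTO @id;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.ChineseResume SET Title = UPPER(Title) WHERE ReferenceID = @id;
    FETCH NEXT FROM cur INTO @id;
END;
CLOSE cur;
DEALLOCATE cur;
-- Set-based version: one statement, usually far cheaper.
UPDATE dbo.ChineseResume SET Title = UPPER(Title);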
16. Use SQL Profiler to trace queries and obtain the time each one needs, so you can find the problem SQL; use the Index Tuning Wizard to optimize the indexes.
17. Note the difference between UNION and UNION ALL: UNION ALL is faster because it does not remove duplicates.
18. Be careful with DISTINCT; do not use it when it is unnecessary, because like UNION it slows the query down. Duplicate records are usually not a problem in queries.
19. Do not return unneeded rows or columns when querying.
20. Use sp_configure 'query governor cost limit' or SET QUERY_GOVERNOR_COST_LIMIT to restrict the resources a query may consume. When the estimated resource use of a query exceeds the limit, the server automatically cancels it, killing it before it runs. SET LOCK_TIMEOUT sets the lock wait time. A sketch follows.
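A minimal sketch of the two settings in point 20; the numeric limits are illustrative only:
-- Server-wide: cancel queries whose estimated cost exceeds 300 cost units.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'query governor cost limit', 300;
RECONFIGURE;
-- Per-connection alternative, plus a lock wait limit of 5 seconds.
SET QUERY_GOVERNOR_COST_LIMIT 300;
SET LOCK_TIMEOUT 5000;   -- milliseconds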
21. Use SELECT TOP 100 / TOP 10 PERCENT to limit the number of rows returned to the user, or use SET ROWCOUNT to limit the rows affected by an operation.
22. Before SQL 2000, try not to use the following in a WHERE clause: "IS NULL", "<>", "!=", "!>", "!<", "NOT", "NOT EXISTS", "NOT IN", "NOT LIKE", and "LIKE '%500'", because they do not use an index and force a full table scan. Also do not apply functions such as CONVERT or SUBSTRING to a column name in the WHERE clause; if a function must be used, create a computed column and index it instead, or rework the predicate: WHERE SUBSTRING(FirstName,1,1) = 'm' can be changed to WHERE FirstName LIKE 'm%' so the index can be used. Always keep the function off the column side of the comparison. Also, do not build too many or overly large indexes. NOT IN scans the table multiple times; use EXISTS, NOT EXISTS, IN, or LEFT OUTER JOIN instead (see the sketch after this item), especially LEFT JOIN; EXISTS is faster than IN, and NOT IN is the slowest. If a column allows NULLs, earlier optimizers could not use its index, but the SQL 2000 optimizer can handle this: IS NULL, NOT, NOT EXISTS, and NOT IN can now be optimized, while "<>" and similar operators still cannot and will not use the index.
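A minimal sketch of the NOT IN rewrite from point 22, using the ChineseResume and PersonMember tables referenced elsewhere in this article; the join column is assumed:
-- Slow form: NOT IN may scan the table repeatedly.
SELECT a.ReferenceID
FROM dbo.ChineseResume a
WHERE a.ReferenceID NOT IN (SELECT b.ReferenceID FROM dbo.PersonMember b);
-- Usually faster: NOT EXISTS ...
SELECT a.ReferenceID
FROM dbo.ChineseResume a
WHERE NOT EXISTS (SELECT 1 FROM dbo.PersonMember b WHERE b.ReferenceID = a.ReferenceID);
-- ... or a LEFT OUTER JOIN filtered on NULL.
SELECT a.ReferenceID
FROM dbo.ChineseResume a
LEFT OUTER JOIN dbo.PersonMember b ON b.ReferenceID = a.ReferenceID
WHERE b.ReferenceID IS NULL;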
23. Use Query Analyzer to review the query plan of your SQL statements and evaluate whether they are well optimized. Typically 20% of the code consumes 80% of the resources, and those slow spots are where optimization effort should go.
24. If a query using IN or OR does not use an index, specify the index with an explicit hint, for example: SELECT * FROM PersonMember (INDEX = IX_Title) WHERE ProcessID IN ('male', 'female').
25. Pre-compute results that will be queried and store them in a table, then SELECT from that table when querying. This was the most important technique before SQL 7.0; hospital inpatient fee calculation is a typical example.
26. MIN() and MAX() can make use of an appropriate index.
27. One database principle is that the closer the code is to the data, the better, so prefer, in that order: Default, Rules, Triggers, Constraint (constraints such as foreign keys, primary keys, uniqueness checks, and the maximum length of a data type), then Procedure. This keeps maintenance work small, produces higher-quality programs, and executes faster.
28. If you need to insert a large binary value into an image column, use a stored procedure; never use an inline INSERT (whether this applies to Java is unclear). With an inline INSERT the application first converts the binary value into a character string (doubling its size), and the server converts the received characters back into a binary value; a stored procedure avoids these steps. For example: CREATE PROCEDURE p_insert @image image AS INSERT INTO table(fimage) VALUES (@image); call this stored procedure from the front end and pass the binary parameter, and processing speed improves significantly.
29. BETWEEN is sometimes faster than IN, because BETWEEN can locate a range by index more quickly. The difference is visible in the query optimizer. SELECT * FROM ChineseResume WHERE title IN ('male', 'female') and SELECT * FROM ChineseResume WHERE title BETWEEN 'male' AND 'female' return the same result, but because IN performs several comparisons it is sometimes slower.
30. Create indexes on global or local temporary tables when necessary; this can sometimes improve speed, but not always, because the index also consumes resources. They are created the same way as on real tables.
31. Do not build transactions that serve no purpose, for example while generating a report; it wastes resources. Open a transaction only when transactional behaviour is actually needed.
32. A statement with OR can be broken into multiple queries joined with UNION, and its speed depends only on whether indexes are used; if the query needs the indexes, UNION ALL executes more efficiently. A statement with several OR conditions does not use the indexes, so rewrite it in UNION form and try to match the indexes (see the sketch below). The key question is whether indexes are used.
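A minimal sketch of the OR-to-UNION ALL rewrite, borrowing the ChineseResume table and assuming separate indexes on the Title and Name columns:
-- OR across two differently indexed columns often defeats both indexes.
SELECT ReferenceID FROM dbo.ChineseResume
WHERE Title = 'Male' OR Name = 'XYZ';
-- Rewritten as UNION ALL, each branch can seek its own index.
-- Note: UNION ALL returns a row twice if it matches both branches; use UNION if that matters.
SELECT ReferenceID FROM dbo.ChineseResume WHERE Title = 'Male'
UNION ALL
SELECT ReferenceID FROM dbo.ChineseResume WHERE Name = 'XYZ';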
33. Minimize the use of views; they are relatively inefficient. Operating on a view is slower than operating directly on the tables, so consider replacing it with a stored procedure. In particular, do not nest views: nesting makes it harder to find the source of the data. Consider the nature of a view: it is SQL stored on the server that has already been optimized and has a query plan generated. When retrieving data from a single table, do not go through a view that references several tables; read directly from the table, or through a view that contains only that table, otherwise you add unnecessary overhead and disturb the query. To speed up view queries, MSSQL added the indexed view feature.
34. Do not use DISTINCT and ORDER BY when they are not necessary; they add extra overhead and can be performed on the client instead. The same applies to UNION versus UNION ALL. For example:
SELECT TOP ... ad.CompanyName, COMID, Position, ad.ReferenceID, WorkLocation,
    CONVERT(varchar(10), ad.PostDate, 120) AS PostDate1, WorkYear, DegreeDescription
FROM Jobcn_query.dbo.COMPANYAD_query ad
WHERE ReferenceID IN ('JCNAD00329667', 'JCNAD132168', 'JCNAD00337748', 'JCNAD00338345',
    'JCNAD00333138', 'JCNAD00303570', 'JCNAD00303569', 'JCNAD00303568', 'JCNAD00306698',
    'JCNAD00231935', 'JCNAD00231933', 'JCNAD00254567', 'JCNAD00254585', 'JCNAD00254608',
    'JCNAD00254607', 'JCNAD00258524', 'JCNAD00332133', 'JCNAD00268618', 'JCNAD00279196',
    'JCNAD00268613')
ORDER BY PostDate DESC
35. In the list after IN, put the most frequently occurring values first and the least frequent ones last, to reduce the number of comparisons.
36. SELECT INTO locks system tables (sysobjects, sysindexes, and so on) and blocks access from other connections, so use an explicit CREATE TABLE statement when creating a temporary table instead of SELECT INTO. For example, run drop table t_lxh begin tran select * into t_lxh from chineseresume where name = 'XYZ' --commit, and in another connection run SELECT * FROM sysobjects: you can see that SELECT INTO locks the system tables. CREATE TABLE also locks the system tables (whether for a temporary table or a system table), so never do this inside a transaction. If it is a temporary table you use frequently, use a real table or a table variable instead.
37. Redundant rows can generally be removed before GROUP BY and HAVING run, so try not to use them for filtering. The ideal order of work is: the WHERE clause of the SELECT picks out all the appropriate rows, GROUP BY groups the rows for aggregation, and the HAVING clause removes unwanted groups. That way GROUP BY and HAVING have little left to do and the query is fast; grouping and HAVING over large numbers of rows is very resource-intensive. If the purpose of the GROUP BY is only to group, with no aggregate calculation, DISTINCT is faster. A sketch follows.
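A minimal sketch of the WHERE-before-HAVING point, using the ChineseResume table and its Title column:
-- Less efficient: every row is grouped first, then whole groups are discarded.
SELECT Title, COUNT(*) AS cnt
FROM dbo.ChineseResume
GROUP BY Title
HAVING Title = 'Male';
-- Better: WHERE removes rows before grouping; reserve HAVING for aggregate conditions.
SELECT Title, COUNT(*) AS cnt
FROM dbo.ChineseResume
WHERE Title = 'Male'
GROUP BY Title;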
38. Updating multiple records in one statement is faster than updating them one at a time; in other words, batch processing is better.
39. Use temporary tables sparingly; replace them with result sets and table-type variables where possible. A table variable is better than a temporary table.
40. In SQL 2000, computed columns can be indexed if the following conditions are met (a minimal sketch follows):
a. The expression of the computed column is deterministic.
b. The text, ntext, and image data types are not used.
c. The required options are set: ANSI_NULLS = ON, ANSI_PADDING = ON, and so on.
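A minimal sketch of an indexed computed column under the required SET options; the order-line table and its columns are assumptions for illustration:
SET ANSI_NULLS ON;
SET ANSI_PADDING ON;
SET QUOTED_IDENTIFIER ON;
SET ARITHABORT ON;
GO
CREATE TABLE dbo.OrderLine (
    OrderID int NOT NULL,
    UnitPrice money NOT NULL,
    Qty int NOT NULL,
    LineTotal AS (UnitPrice * Qty)   -- deterministic, precise expression: indexable
);
GO
CREATE INDEX IX_OrderLine_LineTotal ON dbo.OrderLine (LineTotal);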
41. Put as much of the data processing as possible on the server to reduce network overhead, for example by using stored procedures. A stored procedure is SQL that has been compiled, optimized, organized into an execution plan, and stored in the database; it is a collection of control-flow language and is naturally fast. Dynamic SQL that is executed repeatedly can use temporary stored procedures, which are placed in tempdb (like temporary tables). In the past, because SQL Server did not support complex mathematics, this work had to be pushed to other tiers, increasing network overhead. SQL 2000 supports UDFs and can now handle complex mathematical calculations, but the return value of a function should not be too large, because that is expensive. User-defined functions consume resources the way cursors do; if a large result has to be returned, use a stored procedure instead.
42. Do not call the same function repeatedly in one statement; it wastes resources. Put the result in a variable and then reference the variable; it is faster. A sketch follows.
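A minimal illustration of point 42, assuming a hypothetical scalar UDF dbo.fn_QuarterStart() (the function is not from the original article; the ChineseResume table is):
-- The same scalar function referenced twice may be evaluated repeatedly.
SELECT ReferenceID
FROM dbo.ChineseResume
WHERE UpdateDate >= dbo.fn_QuarterStart()
  AND UpdateDate < DATEADD(month, 3, dbo.fn_QuarterStart());
-- Evaluate once, keep the result in a variable, then reuse it.
DECLARE @qstart datetime;
SET @qstart = dbo.fn_QuarterStart();
SELECT ReferenceID
FROM dbo.ChineseResume
WHERE UpdateDate >= @qstart
  AND UpdateDate < DATEADD(month, 3, @qstart);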
43. SELECT COUNT(*) is inefficient; try to rework such logic, and EXISTS is fast (see the sketch below). Also note the difference: SELECT COUNT(nullable_field) FROM table and SELECT COUNT(not_null_field) FROM table can return different values, because COUNT(column) ignores NULLs.
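A minimal sketch of the existence check from point 43, using the ChineseResume table:
-- Counts every matching row even though only existence is needed.
IF (SELECT COUNT(*) FROM dbo.ChineseResume WHERE Title = 'Male') > 0
    PRINT 'found';
-- EXISTS can stop at the first matching row.
IF EXISTS (SELECT 1 FROM dbo.ChineseResume WHERE Title = 'Male')
    PRINT 'found';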
44. When the server has plenty of memory, setting the number of threads equal to the maximum number of connections + 5 gives maximum efficiency; otherwise keep the number of threads below the maximum number of connections and let SQL Server's thread pooling handle it, because with insufficient memory a thread count of maximum connections + 5 severely damages server performance.
45. Access your tables in a consistent order. If you lock table A and then table B, follow that order in every stored procedure. If a stored procedure (inadvertently) locks table B first and then table A, it can cause a deadlock. Deadlocks are hard to track down if the locking order is not designed in advance.
46. Monitor the load on the corresponding hardware through SQL Server Performance Monitor.
Memory: Page Faults/sec. If this counter is occasionally high, threads are competing for memory at that moment; if it stays high, memory may be the bottleneck.
Process:
1. % DPC Time is the percentage of the sample interval the processor spent receiving and servicing deferred procedure calls (DPCs). (DPCs run at a lower priority than standard interrupts.) Because DPCs execute in privileged mode, the DPC time percentage is included in the privileged time percentage; these times are counted separately and are not part of the interrupt total. This total shows average busy time as a percentage of instance time.
2. % Processor Time: if this counter stays above 95%, the bottleneck is the CPU; consider adding a processor or replacing it with a faster one.
3. % Privileged Time is the percentage of non-idle processor time spent in privileged mode. (Privileged mode is a processing mode designed for operating system components and hardware drivers; it allows direct access to hardware and to all memory. The other mode is user mode, a restricted mode designed for applications, environment subsystems, and integral subsystems. The operating system switches application threads into privileged mode to access operating system services.) % Privileged Time includes the time spent servicing interrupts and DPCs. A high privileged-time ratio may be caused by a large number of interrupts from a failing device. This counter displays average busy time as a percentage of sample time.
4. % User Time reflects CPU-intensive database operations such as sorting and executing aggregate functions. If this value is high, consider adding indexes, using simpler joins, and horizontally splitting large tables to bring it down.
Physical Disk: the Current Disk Queue Length counter should not exceed 1.5 to 2 times the number of disks; to improve performance, add disks.
SQLServer: Cache Hit Ratio: the higher the better; if it stays below 80%, consider adding memory. Note that this counter accumulates from the time SQL Server starts, so after a while it no longer reflects the current state of the system.
47. Analyze SELECT emp_name FROM employee WHERE salary > 3000. If salary is of float type in this statement, the optimizer converts the literal with CONVERT(float, 3000), because 3000 is an integer. Write 3000.0 in the program instead of making the DBMS convert at run time. The same applies to conversions between character and integer data. A sketch follows.
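A minimal before/after for the literal-typing point; the employee table and salary column come from the article's own example:
-- Integer literal against a float column: the optimizer adds CONVERT(float, 3000).
SELECT emp_name FROM dbo.employee WHERE salary > 3000;
-- Typed literal: no run-time conversion of the predicate is needed.
SELECT emp_name FROM dbo.employee WHERE salary > 3000.0;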
48. The order in which the join conditions are written matters; the annotations show which predicates each version gives the optimizer:
SELECT a.PersonMemberID, * FROM ChineseResume a, PersonMember b WHERE PersonMemberID = b.ReferenceID AND a.PersonMemberID = 'JCNPRH39681' (A = B, B = 'number')
SELECT a.PersonMemberID, * FROM ChineseResume a, PersonMember b WHERE a.PersonMemberID = b.ReferenceID AND a.PersonMemberID = 'JCNPRH39681' AND b.ReferenceID = 'JCNPRH39681' (A = B, B = 'number', A = 'number')
SELECT a.PersonMemberID, * FROM ChineseResume a, PersonMember b WHERE b.ReferenceID = 'JCNPRH39681' AND a.PersonMemberID = 'JCNPRH39681' (B = 'number', A = 'number')
49. Compare the following two approaches:
(1) IF no owner code was entered THEN code1 = 0, code2 = 9999 ELSE code1 = code2 = owner code END IF; then execute: SELECT owner_name FROM P2000 WHERE owner_code >= :code1 AND owner_code <= :code2
(2) IF no owner code was entered THEN SELECT owner_name FROM P2000 ELSE code = owner code; SELECT owner_name FROM P2000 WHERE owner_code = :code END IF
The first method uses only one SQL statement, the second uses two. When no owner code is entered, the second method is clearly more efficient than the first because it has no restriction; when an owner code is entered, the second is still more efficient, not only because it drops one restriction but also because the equality comparison is the fastest query operation. Do not be afraid of a little extra work when writing programs.
50. About JOBCN's new paged-query method (below): use the performance optimizer to analyze where the bottleneck is. If it is I/O or network speed, the following method is effective; if it is CPU or memory, the current method is better. The differences between the following variants show that the smaller the index, the better.
begin
declare @local_variable table (FID int identity(1,1), ReferenceID varchar(20))
insert into @local_variable (ReferenceID)
select top 100000 ReferenceID from ChineseResume order by ReferenceID
select * from @local_variable where FID > 40 and FID <= 60
end
versus
begin
declare @local_variable table (FID int identity(1,1), ReferenceID varchar(20))
insert into @local_variable (ReferenceID)
select top 100000 ReferenceID from ChineseResume order by UpdateDate
select * from @local_variable where FID > 40 and FID <= 60
end
versus
begin
create table #temp (FID int identity(1,1), ReferenceID varchar(20))
insert into #temp (ReferenceID)
select top 100000 ReferenceID from ChineseResume order by UpdateDate
select * from #temp where FID > 40 and FID <= 60
drop table #temp
end

Source: http://www.cnblogs.com/jyshis/archive/2011/09/16/2178309.html
