Summary of reasons for slow Oracle query speed
There are many reasons for slow queries; the following are common:
1. No index, or the index is not used (this is the most common cause of slow queries and a defect of program design).
2. Low I/O throughput, creating a bottleneck.
3. No computed columns were created, so the query cannot be optimized.
4. Insufficient memory.
5. Slow network speed.
6. The amount of data queried is too large (use multiple queries or other methods to reduce the amount of data).
7. Locks or deadlocks (this is also a very common cause of slow queries and a defect of program design).
8. Contention for read and write resources (use sp_lock and sp_who to view the active users and locks).
9. Returning unnecessary rows and columns.
10. The query statement is poorly written and not optimized.
You can improve query performance in the following ways:
1. Put the data, log, and index files on different I/O devices to increase read speed. (Previously tempdb could be placed on RAID 0; SQL 2000 no longer supports this.) The larger the amount of data, the more important it is to improve I/O.
2. Split tables vertically and horizontally to reduce the size of each table (sp_spaceused).
3. Upgrade hardware.
4. Based on the query criteria, create indexes, optimize the indexes and access paths, and limit the amount of data in the result set. Keep the fill factor appropriate (preferably the default value of 0). Indexes should be as small as possible; it is better to build them on columns with a small number of bytes (refer to index creation), and do not build a single index on a column that has only a few distinct values.
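For illustration, a minimal sketch of such a narrow index, assuming a personmember table with a referenceid varchar(20) column that is frequently used in WHERE clauses (hypothetical names):
-- a small key keeps the index compact; the fill factor is left at the server default (0)
CREATE NONCLUSTERED INDEX ix_personmember_referenceid
ON personmember (referenceid)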
5. Improve network speed.
6. Expand server memory. Windows 2000 and SQL Server 2000 can support 4-8 GB of memory. Configure virtual memory: the virtual memory size should be based on the services running concurrently on the computer. When running Microsoft SQL Server 2000, consider setting the virtual memory size to 1.5 times the physical memory installed on the computer. If the full-text search feature is also installed and you plan to run the Microsoft Search service to perform full-text indexing and queries, consider configuring the virtual memory size to at least 3 times the physical memory. Configure the SQL Server max server memory option to 1.5 times the physical memory (half of the virtual memory size setting).
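A minimal sketch of setting the max server memory option with sp_configure; the 6144 MB value is only an illustration of the 1.5x guideline above for a hypothetical server with 4 GB of physical memory:
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory', 6144   -- value in MB; adjust for your own server
RECONFIGURE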
7. Increase the number of server CPUs; however, understand that parallel processing needs more resources, such as memory, than serial processing. Whether to use parallel or serial execution is evaluated automatically by MSSQL. A single task is decomposed into multiple subtasks that can run on different processors; for example, the sorting, joins, scans, and GROUP BY of a query can be executed at the same time. SQL Server determines the optimal degree of parallelism based on the system load, and complex queries that consume large amounts of CPU are best suited to parallel processing. However, UPDATE, INSERT, and DELETE operations cannot be processed in parallel.
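If parallelism needs to be capped, a minimal sketch (the values 4 and 1 are illustrative only, and the table name is taken from this article's examples):
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max degree of parallelism', 4   -- server-wide cap on parallel workers per query
RECONFIGURE
-- or per query, with a hint:
SELECT referenceid FROM chineseresume ORDER BY referenceid OPTION (MAXDOP 1)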
8. If you query with LIKE, a plain index often does not help; a full-text index can, but it consumes space. LIKE 'a%' uses the index; LIKE '%a' does not; and when querying with LIKE '%a%', the query time is proportional to the total length of the field values, so do not use the char type for such columns, use varchar. Build a full-text index for columns whose values are long.
9. Separate the DB server from the application server; separate OLTP from OLAP.
10. A distributed partitioned view can be used to implement a federation of database servers. A federation is a group of separately managed servers that cooperate to share the processing load of the system. This mechanism of forming a federation of database servers through data partitioning lets a group of servers be scaled out to support the processing needs of large, multi-tiered Web sites. For more information, see Designing Federated Database Servers. (Refer to the SQL help topic on partitioned views.)
a. You must horizontally partition the table before implementing the partitioned view.
b. After the member tables are created, a distributed partitioned view with the same name is defined on each member server. This way, a query that references the distributed partitioned view name can run on any member server. The system behaves as if each member server had a copy of the original table, but in fact each server has only one member table and one distributed partitioned view. The location of the data is transparent to the application.
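A minimal sketch of the partitioned-view pattern, using hypothetical orders tables partitioned by order_id range; for the distributed form, each member table would live on a different member server and be referenced through a linked server, with the same view defined on every server:
CREATE TABLE orders_1 (order_id int PRIMARY KEY CHECK (order_id BETWEEN 1 AND 99999), order_date datetime)
CREATE TABLE orders_2 (order_id int PRIMARY KEY CHECK (order_id BETWEEN 100000 AND 199999), order_date datetime)
GO
-- the CHECK constraints tell the optimizer which member table holds which range
CREATE VIEW orders_all AS
SELECT order_id, order_date FROM orders_1
UNION ALL
SELECT order_id, order_date FROM orders_2
GO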
11. Rebuild indexes with DBCC DBREINDEX or DBCC INDEXDEFRAG; shrink data and log files with DBCC SHRINKDATABASE and DBCC SHRINKFILE. Set the log to auto-shrink. For large databases, do not set the database to auto-grow, as it degrades server performance. There is a great deal to pay attention to in how T-SQL is written; here are some common points. First, the steps by which the DBMS processes a query plan:
1. Lexical and syntax checking of the query statement.
2. The statement is submitted to the DBMS query optimizer.
3. The optimizer performs algebraic optimization and access-path optimization.
4. The precompilation module generates the query plan.
5. The plan is submitted to the system for execution at the appropriate time.
6. Finally, the execution result is returned to the user.
Next, look at the SQL Server data storage structure: a page is 8 KB (8060 bytes of usable space), 8 pages make up an extent, and storage is organized as B-trees.
12. The difference between COMMIT and ROLLBACK. ROLLBACK: rolls back the whole transaction. COMMIT: commits the current transaction. There is no need to open transactions inside dynamic SQL; if you must, open them outside it, for example: BEGIN TRAN EXEC(@s) COMMIT TRAN, or write the dynamic SQL as a function or stored procedure.
13. In SELECT statements, use a WHERE clause to limit the number of rows returned and avoid a table scan. Returning unneeded data wastes the server's I/O resources and adds to the network load, which reduces performance. If the table is large, it is locked during a table scan, preventing other connections from accessing it, with serious consequences.
14. SQL comment statements have no effect on execution.
15. Avoid cursors as much as possible; they consume a lot of resources. If row-by-row processing is required, try to use non-cursor techniques instead, such as looping on the client, temporary tables, table variables, subqueries, or CASE statements. Cursors can be categorized by the fetch options they support: forward-only cursors must fetch rows in order from the first row to the last, and FETCH NEXT is the only allowed fetch operation (this is the default); scrollable cursors allow any row to be fetched anywhere in the cursor. Cursor technology became more powerful in SQL 2000, and its purpose is to support loops.
There are four concurrency options:
READ_ONLY: positioned updates through the cursor are not allowed, and no locks are held on the rows that make up the result set.
OPTIMISTIC WITH VALUES: optimistic concurrency control is a standard part of transaction control theory. It is used when there is only a small chance that a second user will update a row in the interval between opening the cursor and updating the row. When a cursor is opened with this option, no locks control the rows, which helps maximize throughput. If a user attempts to modify a row, the current values of the row are compared with the values obtained when the row was last fetched. If any value has changed, the server knows that someone else has already updated the row and returns an error. If the values are the same, the server performs the modification.
OPTIMISTIC WITH ROW VERSIONING: this optimistic concurrency control option is based on row versioning. With row versioning, the table must have some version identifier that the server can use to determine whether the row has changed since it was read into the cursor. In SQL Server this capability is provided by the timestamp data type, a binary number that indicates the relative order of changes in the database. Each database has a global current timestamp value, @@DBTS. Each time a row with a timestamp column is changed in any way, SQL Server stores the current @@DBTS value in the timestamp column and then increments @@DBTS. If a table has a timestamp column, the timestamps are recorded at the row level. The server can then compare a row's current timestamp value with the timestamp value stored at the last fetch to determine whether the row has been updated; it does not have to compare the values of all columns, only the timestamp column. If an application requires optimistic concurrency based on row versioning for a table that has no timestamp column, consider optimistic concurrency control based on values instead.
SCROLL LOCKS: this option implements pessimistic concurrency control, in which the application attempts to lock the database rows as they are read into the cursor result set. When a server cursor is used, an update lock is placed on the row when it is read into the cursor. If the cursor is opened within a transaction, the transaction's update lock is held until the transaction is committed or rolled back, and the cursor lock is dropped when the next row is fetched. If the cursor is opened outside a transaction, the lock is dropped when the next row is fetched. Whenever full pessimistic concurrency control is needed, the cursor should therefore be opened inside a transaction.
An update lock prevents any other task from acquiring an update or exclusive lock, and so prevents other tasks from updating the row. An update lock does not, however, block a shared lock, so it does not prevent other tasks from reading the row, unless the second task also requests a read with an update lock.
Scroll locks: these cursor concurrency options can generate scroll locks, depending on the lock hints specified in the SELECT statement that defines the cursor. A scroll lock is acquired on each row as it is fetched and is held until the next fetch or until the cursor is closed, whichever comes first. At the next fetch, the server acquires scroll locks for the newly fetched rows and releases the scroll locks on the rows of the previous fetch. Scroll locks are independent of transaction locks and can persist after a commit or rollback operation. If the option to close cursors at commit is off, a COMMIT statement does not close any open cursors, and scroll locks are preserved across the commit to maintain the isolation of the fetched data.
The type of scroll lock acquired depends on the cursor concurrency option and the lock hint in the cursor's SELECT statement:
Lock hint     Read-only    Optimistic with values    Optimistic with row versioning    Scroll locks
None          Unlocked     Unlocked                  Unlocked                          Update
NOLOCK        Unlocked     Unlocked                  Unlocked                          Unlocked
HOLDLOCK      Shared       Shared                    Shared                            Update
UPDLOCK       Error        Update                    Update                            Update
TABLOCKX      Error        Unlocked                  Unlocked                          Update
Other         Unlocked     Unlocked                  Unlocked                          Update
* Specifying the NOLOCK hint makes the table to which it applies read-only within the cursor.
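For illustration, a minimal sketch of declaring the cheapest kind of cursor when no positioned updates are needed, using the chineseresume table from this article's examples:
-- forward-only, read-only: no scroll locks, no positioned updates
DECLARE c_resume CURSOR LOCAL FORWARD_ONLY READ_ONLY FOR
    SELECT referenceid FROM chineseresume
OPEN c_resume
FETCH NEXT FROM c_resume
-- ... process rows in a loop ...
CLOSE c_resume
DEALLOCATE c_resume
-- if positioned updates were needed, OPTIMISTIC or SCROLL_LOCKS would be chosen here instead of READ_ONLY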
16. Use Profiler to trace queries, obtain the time each query needs, and find the problem SQL; use the Index Tuning Wizard to optimize indexes.
17. Note the difference between UNION and UNION ALL. UNION ALL is better because it does not remove duplicates.
18. Be careful with DISTINCT; do not use it when it is not necessary, because like UNION it slows the query down. Duplicate records are usually not a problem in a query.
19. Do not return rows or columns that are not needed.
20. Use sp_configure 'query governor cost limit' or SET QUERY_GOVERNOR_COST_LIMIT to limit the resources a query may consume. When the estimated cost of a query exceeds the limit, the server automatically cancels it, killing the query before it runs. SET LOCK_TIMEOUT sets how long to wait for a lock.
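A minimal sketch of both settings; the limit values 300 and 5000 are chosen only for illustration:
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'query governor cost limit', 300   -- cancel queries whose estimated cost exceeds 300
RECONFIGURE
SET QUERY_GOVERNOR_COST_LIMIT 300   -- same limit, but for the current connection only
SET LOCK_TIMEOUT 5000               -- wait at most 5000 ms for a lock in the current session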
21. Use SELECT TOP 100 / TOP 10 PERCENT to limit the number of rows returned to the user, or SET ROWCOUNT to limit the number of rows an operation touches.
22. Before SQL 2000, it was generally best not to use the following: "IS NULL", "<>", "!=", "!>", "!<", "NOT", "NOT EXISTS", "NOT IN", "NOT LIKE", and "LIKE '%500'", because they do not use the index and cause a full table scan. Also do not apply functions such as CONVERT or SUBSTRING to columns in the WHERE clause; if you must use a function, create a computed column and index it instead. You can also work around it: WHERE SUBSTRING(firstname,1,1) = 'm' can be changed to WHERE firstname LIKE 'm%' (an index scan); always keep functions away from column names. And do not make indexes too numerous or too large. NOT IN scans the table multiple times; replace it with EXISTS, NOT EXISTS, IN, or LEFT OUTER JOIN, in particular the left outer join, and EXISTS is faster than IN; the slowest is the NOT operation. If a column's value can be null, its index used not to work; the SQL 2000 optimizer can now handle this. IS NULL, NOT, NOT EXISTS, and NOT IN can now be optimized, but "<>" and the like still cannot and will not use the index.
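For illustration, minimal sketches of the rewrites described above, using the personmember and chineseresume tables from this article's examples (the firstname column is hypothetical):
-- non-sargable: the function hides the column from the index
SELECT * FROM personmember WHERE SUBSTRING(firstname, 1, 1) = 'm'
-- sargable: a LIKE without a leading wildcard can use an index on firstname
SELECT * FROM personmember WHERE firstname LIKE 'm%'
-- NOT IN forces repeated scans; NOT EXISTS is usually cheaper
SELECT a.personmemberid
FROM personmember a
WHERE NOT EXISTS (SELECT 1 FROM chineseresume b WHERE b.referenceid = a.personmemberid)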
23. Use Query Analyzer to view the query plan of the SQL statement and evaluate whether it is well optimized. In general, 20% of the code consumes 80% of the resources; those slow spots are the focus of our optimization.
24. If you find that a query using IN or OR does not use the index, specify the index with an explicit hint: SELECT * FROM personmember (INDEX = IX_Title) WHERE processid IN ('male', 'female')
25. Pre-calculate results that queries will need and store them in a table; at query time just SELECT them. This was the most important technique before SQL 7.0, for example in hospital fee calculation.
26. MIN() and MAX() can use an appropriate index.
27. One database principle is that the closer the code is to the data, the better, so prefer Default, then Rules, Triggers, Constraints (constraints such as foreign key, primary key, CHECK, and UNIQUE, plus the maximum length of a data type, are all constraints), then Procedures. This not only keeps maintenance work small, it also produces high-quality programs that execute faster.
28. If you want to insert a large binary value into an image column, use a stored procedure; do not use an inline INSERT (I do not know whether this applies to Java). The application first has to convert the binary value into a character string (twice its size), and the server then converts the received characters back into a binary value. A stored procedure avoids these steps. Method: CREATE PROCEDURE p_insert @image image AS INSERT INTO tablename (fimage) VALUES (@image). Call this stored procedure from the front end and pass in the binary parameter; processing speed improves noticeably.
29. BETWEEN is sometimes faster than IN; BETWEEN can locate a range more quickly using the index. The difference can be seen with the query optimizer. SELECT * FROM chineseresume WHERE title IN ('male', 'female') and SELECT * FROM chineseresume WHERE title BETWEEN 'male' AND 'female' return the same rows, but because IN may compare several times, it is sometimes slower.
30. Creating an index on a global or local temporary table can sometimes improve speed, but not always, because an index also consumes a lot of resources. It is created in the same way as on a real table.
31. Do not create transactions that serve no purpose, for example when generating reports; it wastes resources. Use transactions only when you need them.
32. A statement with OR can be decomposed into multiple queries joined with UNION. Their speed depends only on whether indexes are used; if the query needs a composite index, UNION ALL executes more efficiently. A statement with several ORs will not use the index; rewrite it in UNION form and then try to match it to the indexes. The key question is whether indexes are used.
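A minimal sketch of the OR-to-UNION rewrite, assuming two separately indexed columns on the personmember table (firstname is hypothetical):
-- the OR across two columns usually prevents an index seek
SELECT * FROM personmember WHERE firstname = 'mary' OR referenceid = 'JCNPRH39681'
-- rewritten so that each branch can use its own index
SELECT * FROM personmember WHERE firstname = 'mary'
UNION ALL
SELECT * FROM personmember WHERE referenceid = 'JCNPRH39681'
Note that UNION ALL can return a row twice if it satisfies both branches; use UNION if duplicates must be removed.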
33. Minimize the use of views; they are relatively inefficient. Operating on a view is slower than operating on the table directly; you can use a stored procedure instead. In particular, do not nest views; nesting makes it harder to reach the original data. Consider what a view is: optimized SQL stored on the server for which a query plan has already been generated. When retrieving data from a single table, do not use a view that references multiple tables; read directly from the table, or use a view that contains only that table, otherwise you add unnecessary overhead and interfere with the query. To speed up view queries, MSSQL adds the ability to index views.
34. Do not use DISTINCT and ORDER BY unless necessary; these operations can instead be performed on the client. They add extra overhead. This is the same principle as UNION versus UNION ALL.
SELECT TOP ad.companyname, comid, position, ad.referenceid, worklocation,
       CONVERT(varchar(10), ad.postdate, 120) AS postdate1, workyear, degreedescription
FROM jobcn_query.dbo.COMPANYAD_query ad
WHERE referenceid IN ('JCNAD00329667', 'JCNAD132168', 'JCNAD00337748', 'JCNAD00338345',
      'JCNAD00333138', 'JCNAD00303570', 'JCNAD00303569', 'JCNAD00303568', 'JCNAD00306698',
      'JCNAD00231935', 'JCNAD00231933', 'JCNAD00254567', 'JCNAD00254585', 'JCNAD00254608',
      'JCNAD00254607', 'JCNAD00258524', 'JCNAD00332133', 'JCNAD00268618', 'JCNAD00279196',
      'JCNAD00268613')
ORDER BY postdate DESC
35. In the list of values for IN, put the most frequently occurring values at the front and the least frequent at the end, to reduce the number of comparisons.
36. When you use SELECT INTO, it locks system tables (sysobjects, sysindexes, and so on), blocking access from other connections. When creating a temporary table, use an explicit CREATE TABLE statement rather than SELECT INTO. For example: DROP TABLE t_lxh BEGIN TRAN SELECT * INTO t_lxh FROM chineseresume WHERE ... -- do not commit yet; in another connection run SELECT * FROM sysobjects and you can see that SELECT INTO locks the system tables. CREATE TABLE also locks the system tables (whether for a temporary table or a system table). So do not use it inside transactions! If it is a temporary table that you need frequently, use a real table or a table variable instead.
37. Rows that can be eliminated by WHERE before GROUP BY should not be left for HAVING to remove. The ideal execution order is: the WHERE clause selects all the appropriate rows, GROUP BY groups the rows for aggregation, and the HAVING clause removes redundant groups. Then GROUP BY and HAVING have little overhead and the query is fast. Grouping and applying HAVING over large numbers of rows is very resource-intensive. If the purpose of the GROUP BY does not include calculations and you only want distinct groups, DISTINCT is faster.
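A minimal sketch of the point above, using the chineseresume table from this article's examples:
-- inefficient: all groups are built first, then most are thrown away in HAVING
SELECT title, COUNT(*) FROM chineseresume GROUP BY title HAVING title = 'male'
-- better: WHERE removes the rows before grouping
SELECT title, COUNT(*) FROM chineseresume WHERE title = 'male' GROUP BY title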
38. Updating multiple records in one statement is faster than updating them one at a time; in other words, batch processing is good.
39. Use temporary tables sparingly; replace them with result sets and table-type variables where possible. Table variables are better than temporary tables.
40. Under SQL 2000, computed columns can be indexed if the following conditions are met:
a. The expression of the computed column is deterministic.
b. The text, ntext, and image data types cannot be used.
c. The following options must be set: ANSI_NULLS = ON, ANSI_PADDING = ON, and so on.
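A minimal sketch of an indexed computed column under the conditions above (table and column names are hypothetical):
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
SET ARITHABORT ON
SET CONCAT_NULL_YIELDS_NULL ON
SET QUOTED_IDENTIFIER ON
SET NUMERIC_ROUNDABORT OFF
-- total is deterministic and precise, so it can be indexed
CREATE TABLE order_demo (qty int, unit_price money, total AS (qty * unit_price))
CREATE INDEX ix_order_demo_total ON order_demo (total)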
41. Try to put data-processing work on the server to reduce network overhead, for example by using stored procedures. A stored procedure is SQL that has been compiled, optimized, organized into an execution plan, and stored in the database; it is a collection of control-flow language and is naturally fast. For dynamic SQL that is executed repeatedly, you can use a temporary stored procedure, which is placed in tempdb. Previously, because SQL Server did not support complex math calculations, this work had to be pushed onto other tiers, increasing network overhead. SQL 2000 supports UDFs and now supports complex math calculations; do not make the return value of a function too large, as that is very expensive. If a user-defined function consumes a large amount of resources, for example by using cursors, or returns a large result, use a stored procedure instead.
42. Do not call the same function repeatedly within one statement; it wastes resources. Put the result in a variable and then use the variable, which is faster.
43. SELECT COUNT(*) is relatively inefficient; try to rework such queries, while EXISTS is fast. Also note the difference: SELECT COUNT(nullable_column) FROM table and SELECT COUNT(non_nullable_column) FROM table return different values, because COUNT(column) ignores NULLs.
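A minimal sketch of an existence check written both ways, using the chineseresume table from this article's examples:
-- fast: stops at the first matching row
IF EXISTS (SELECT 1 FROM chineseresume WHERE title = 'male')
    PRINT 'found'
-- slower equivalent: counts every matching row before comparing
IF (SELECT COUNT(*) FROM chineseresume WHERE title = 'male') > 0
    PRINT 'found'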
44. When the server has enough memory, configuring the number of threads = maximum number of connections + 5 gives the best performance; otherwise, configure the number of threads < maximum number of connections, which enables SQL Server's thread pooling. If the number still equals maximum connections + 5, server performance is severely harmed.
45. Access your tables in a fixed order. If you lock table A first and then table B, lock them in that order in every stored procedure. If a stored procedure (inadvertently) locks table B first and then table A, this can lead to a deadlock. Deadlocks are hard to find if the locking order has not been designed in advance.
46. Monitor the load on the relevant hardware with SQL Server Performance Monitor. Memory: Page Faults/sec counter -- if this value occasionally rises, it indicates that threads were competing for memory at that moment. If it stays high, memory may be the bottleneck.
Process:
1. % DPC Time is the percentage of processor time spent receiving and servicing deferred procedure calls (DPCs) during the sample interval. (DPCs run at a lower priority than standard interrupts.) Because DPCs execute in privileged mode, the DPC time percentage is included in the privileged time percentage. These times are counted separately and are not part of the interrupt-count total. This total shows the average busy time as a percentage of instance time.
2. % Processor Time counter: if this value stays above 95%, the bottleneck is the CPU. Consider adding a processor or switching to a faster one.
3. % Privileged Time is the percentage of non-idle processor time spent in privileged mode. (Privileged mode is a processing mode designed for operating-system components and hardware drivers. It allows direct access to hardware and to all memory. The other mode is user mode, a restricted processing mode designed for applications, environment subsystems, and integral subsystems. The operating system switches application threads to privileged mode to access operating-system services.) % Privileged Time includes time spent servicing interrupts and DPCs. A high privileged-time ratio can be caused by a large number of interrupts generated by a failing device. This counter displays average busy time as a portion of the sample time.
4. % User Time reflects CPU-intensive database operations such as sorting and executing aggregate functions. If this value is high, consider adding indexes, using simpler table joins, and horizontally splitting large tables to bring it down.
Physical Disk: Current Disk Queue Length counter -- this value should not exceed 1.5 to 2 times the number of disks. To improve performance, add disks.
SQLServer: Cache Hit Ratio counter -- the higher the better. If it stays below 80%, consider adding memory. Note that this value accumulates from the moment SQL Server starts, so after the server has been running for a while it no longer reflects the current state of the system.
47. Consider the statement SELECT emp_name FROM employee WHERE salary > 3000. If salary is of type float, the optimizer converts the constant, effectively CONVERT(float, 3000), because 3000 is an integer. We should write 3000.0 in the program rather than leaving the conversion to the DBMS at run time. The same applies to conversions between character and integer data.
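A minimal sketch of the rewrite, assuming the employee table above with a float salary column:
-- forces a run-time conversion because 3000 is an integer literal
SELECT emp_name FROM employee WHERE salary > 3000
-- matches the column type, so no conversion is needed
SELECT emp_name FROM employee WHERE salary > 3000.0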
48. How a query's joins are evaluated is related to the order in which the conditions are written:
SELECT a.personmemberid, * FROM chineseresume a, personmember b WHERE personmemberid = b.referenceid AND a.personmemberid = 'JCNPRH39681' (A = B, B = 'number')
SELECT a.personmemberid, * FROM chineseresume a, personmember b WHERE a.personmemberid = b.referenceid AND a.personmemberid = 'JCNPRH39681' AND b.referenceid = 'JCNPRH39681' (A = B, B = 'number', A = 'number')
SELECT a.personmemberid, * FROM chineseresume a, personmember b WHERE b.referenceid = 'JCNPRH39681' AND a.personmemberid = 'JCNPRH39681' (B = 'number', A = 'number')
49. Compare two ways of handling an optional owner-code condition:
(1) IF no owner code is entered THEN code1 = 0, code2 = 9999 ELSE code1 = code2 = owner code END IF. The SQL statement executed is: SELECT owner name FROM P2000 WHERE owner code >= :code1 AND owner code <= :code2
(2) IF no owner code is entered THEN SELECT owner name FROM P2000 ELSE code = owner code; SELECT owner name FROM P2000 WHERE owner code = :code END IF
The first method uses only one SQL statement; the second uses two. The second method is clearly more efficient than the first: when no owner code is entered, it has no restrictive condition at all; when an owner code is entered, it is still more efficient, not only because it has one fewer restriction but also because an equality comparison is the fastest query operation. When writing programs, we should not be afraid of a little extra trouble.
50. Regarding the new query-paging method now used on JOBCN (below): use the Performance Optimizer to analyze the performance bottleneck. If the bottleneck is in I/O or network speed, the method below is an effective optimization; if it is in the CPU or memory, the current method is better. Compare the following methods; they show that the smaller the index, the better.
begin
declare @local_variable table (fid int identity(1,1), referenceid varchar(20))
insert into @local_variable (referenceid)
select top 100000 referenceid from chineseresume order by referenceid
select * from @local_variable where fid > 40 and fid <= 60
end
and
begin
declare @local_variable table (fid int identity(1,1), referenceid varchar(20))
insert into @local_variable (referenceid)
select top 100000 referenceid from chineseresume order by updatedate
select * from @local_variable where fid > 40 and fid <= 60
end
differ from
begin
create table #temp (fid int identity(1,1), referenceid varchar(20))
insert into #temp (referenceid)
select top 100000 referenceid from chineseresume order by updatedate
select * from #temp where fid > 40 and fid <= 60
drop table #temp
end