Summary of reasons for slow Oracle queries

Source: Internet
Author: User

Http://www.diybl.com/course/7_databases/oracle/oraclejs/20100713/436400.html

There are many reasons why queries run slowly; the most common are:

1. No index, or the index is not used (this is the most common cause of slow queries, and a defect of program design).

2. Low I/O throughput, creating a bottleneck.

3. No computed columns were created, so the query cannot be optimized.

4. Insufficient memory.

5. Slow network speed.

6. The result set is too large (split the work into multiple queries, or use other methods to reduce the amount of data).

7. Locking or deadlocks (also a very common cause of slow queries, and a defect of program design).

8. Contention between readers and writers for resources (use sp_lock and sp_who to view locks and active users).

9. Unnecessary rows and columns are returned.

10. The query statement itself is poorly written and not optimized.

You can speed up queries in the following ways:

1. Put the data, log, and indexes on different I/O devices to increase read speed. (Previously tempdb could be placed on RAID 0; SQL 2000 no longer supports this.) The larger the amount of data, the more important I/O becomes.

2. Partition tables vertically or horizontally to reduce table size (see sp_spaceused).

3. Upgrade Hardware

4. Build and optimize indexes according to the query conditions, optimize the access paths, and limit the size of the result set. Note that the fill factor should be appropriate (preferably the default value of 0). An index should be as small as possible: a column with few bytes is a better index key (see the documentation on creating indexes), and do not build a single index on a low-selectivity column such as a gender field.
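As a sketch of the advice above (table and column names are hypothetical):

```sql
-- A narrow index on a selective column that appears in WHERE clauses:
CREATE INDEX IX_PersonMember_Title ON PersonMember (Title)

-- By contrast, a single index on a low-selectivity column such as gender
-- rarely pays for itself, because either value matches about half the table:
-- CREATE INDEX IX_PersonMember_Gender ON PersonMember (Gender)   -- usually not worth building
```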

5. Improve network speed.

6. Expand the server's memory. Windows 2000 and SQL Server 2000 can support 4-8 GB of memory. Configure virtual memory: the virtual memory size should be based on the services running concurrently on the computer. When running Microsoft SQL Server 2000, consider setting virtual memory to 1.5 times the physical memory installed on the computer. If you have installed the full-text search feature and intend to run the Microsoft Search service for full-text indexing and querying, consider configuring virtual memory to at least 3 times the physical memory. Configure the SQL Server max server memory option to 1.5 times the physical memory (half of the virtual memory setting).
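A minimal sketch of setting the max server memory option with sp_configure (the 6144 MB figure is only an example, standing in for "1.5 times physical memory" on a hypothetical 4 GB machine):

```sql
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
-- max server memory is specified in MB
EXEC sp_configure 'max server memory', 6144
RECONFIGURE
```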

7. Increase the number of server CPUs. Understand, however, that parallel processing needs more resources, such as memory, than serial processing. Whether to use a parallel or a serial plan is evaluated automatically by MSSQL. A single task can be broken into multiple tasks and run across processors: for example, sorts, joins, scans, and GROUP BY are executed in parallel, and SQL Server determines the optimal degree of parallelism based on the load of the system. Complex queries that consume a large amount of CPU are best suited for parallel processing. However, the update operations UPDATE, INSERT, and DELETE cannot be processed in parallel.

8. If you query with LIKE, a plain index often does not help; a full-text index helps, at the cost of space. LIKE 'a%' uses the index; LIKE '%a' does not; and for LIKE '%a%', the query time is proportional to the total length of the field values, so use varchar rather than char. Build a full-text index on fields with long values.
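The three LIKE patterns behave very differently against an index; a sketch (table and column names are hypothetical):

```sql
SELECT * FROM PersonMember WHERE FirstName LIKE 'a%'    -- leading prefix: the index can be used
SELECT * FROM PersonMember WHERE FirstName LIKE '%a'    -- leading wildcard: the index cannot be used
SELECT * FROM PersonMember WHERE FirstName LIKE '%a%'   -- scan; time grows with the total length of the values
```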

9. Separate the DB server from the application server; separate OLTP from OLAP.

10. Distributed partitioned views can be used to implement a federation of database servers. A federation is a set of separately managed servers that cooperate to share the system's processing load. This mechanism of forming a federation of database servers through partitioned data can scale out a set of servers to support the processing needs of large, multi-tiered web sites. For more information, see "Designing Federated Database Servers" (see the SQL Server help topic "partitioned view").

A. Before implementing a partitioned view, you must first partition the table horizontally.

B. After the member tables are created, define a distributed partitioned view on each member server, each view with the same name. Queries that reference the distributed partitioned view name can then run on any member server. The system behaves as if a copy of the original table existed on each member server, but in fact each server has only one member table and one distributed partitioned view. The location of the data is transparent to the application.
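A minimal sketch of such a view, with all server, database, and table names hypothetical; each member table carries a CHECK constraint so the optimizer knows which slice holds which rows:

```sql
-- On each member server, one horizontal slice of the data:
--   Server1: CREATE TABLE Customers_1 (CustomerID int PRIMARY KEY
--              CHECK (CustomerID BETWEEN 1 AND 32999), ...)
--   Server2: CREATE TABLE Customers_2 (CustomerID int PRIMARY KEY
--              CHECK (CustomerID BETWEEN 33000 AND 65999), ...)
-- On every member server, the same view over all the slices:
CREATE VIEW Customers AS
    SELECT * FROM Server1.CompanyDB.dbo.Customers_1
    UNION ALL
    SELECT * FROM Server2.CompanyDB.dbo.Customers_2
```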

11. Rebuild indexes with DBCC DBREINDEX or DBCC INDEXDEFRAG; shrink data and log files with DBCC SHRINKDATABASE or DBCC SHRINKFILE. Set up automatic log shrinking. For large databases, do not set the database to grow automatically, as this degrades server performance. There is a lot to be said about writing T-SQL; the common points are listed below. First, the process by which the DBMS produces a query plan is:

1. Lexical and syntactic checking of the query statement.

2. The statement is submitted to the DBMS's query optimizer.

3. The optimizer performs algebraic optimization and access-path optimization.

4. The precompilation module generates the query plan.

5. The plan is then submitted to the system for execution at the appropriate time.

6. Finally, the results are returned to the user. Second, look at the structure of SQL Server data storage: a page is 8 KB (8060 bytes usable), 8 pages make up one extent, and data is stored as a B-tree.

12. The difference between COMMIT and ROLLBACK. ROLLBACK rolls back the entire transaction; COMMIT commits the current transaction. There is no need to write transactions inside dynamic SQL; if you must, write them outside it, for example: BEGIN TRAN EXEC(@s) COMMIT TRAN, or write the dynamic SQL as a function or stored procedure.
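A sketch of keeping the transaction outside the dynamic SQL (the statement and table names are hypothetical):

```sql
DECLARE @s nvarchar(400)
SET @s = N'update ChineseResume set Title = ''engineer'' where ReferenceID = ''JCNPRH39681'''
BEGIN TRAN
    EXEC (@s)     -- the dynamic SQL itself contains no BEGIN TRAN / COMMIT
COMMIT TRAN
```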

13. In SELECT statements, use a WHERE clause to limit the number of rows returned and avoid a table scan. Returning unnecessary data wastes the server's I/O resources, burdens the network, and reduces performance. If the table is large, the table is locked during the scan, blocking other connections from accessing it, with serious consequences.

14. SQL comments have no effect on execution.

15. Avoid cursors wherever possible; they consume a large amount of resources. If you need row-by-row processing, try cursor-less techniques instead: loop on the client, or use temporary tables, table variables, subqueries, CASE expressions, and so on. Cursors can be categorized by the fetch options they support: forward-only cursors must fetch rows in order from first to last, with FETCH NEXT the only fetch operation allowed and the default; scrollable cursors can fetch any row at any position. Cursor techniques became more powerful under SQL 2000, but their purpose is still to support loops.

There are four concurrency options:

READ_ONLY: positioned updates through the cursor are not allowed, and no locks are held on the rows that make up the result set.

OPTIMISTIC WITH VALUES: optimistic concurrency control is a standard part of transaction-control theory. It is used when there is only a small chance that a second user will update a row in the interval between opening the cursor and updating the row. When a cursor is opened with this option, no locks control its rows, which helps maximize throughput. If the user attempts to modify a row, the row's current values are compared with the values obtained when the row was last fetched. If any value has changed, the server knows that someone else has updated the row and returns an error. If the values are the same, the server performs the modification.

OPTIMISTIC WITH ROW VERSIONING: this optimistic concurrency option is based on row versioning. With row versioning, the table must have a version identifier that the server can use to determine whether the row has changed since it was read into the cursor. In SQL Server this capability is provided by the timestamp data type, a binary number that represents the relative order of changes in the database. Each database has a global current timestamp value, @@DBTS. Whenever a row with a timestamp column is changed in any way, SQL Server stores the current @@DBTS value in the timestamp column and then increments @@DBTS. If a table has a timestamp column, the timestamp is recorded at the row level; the server can then compare a row's current timestamp value with the timestamp value stored at the last fetch to determine whether the row has been updated, without comparing the values of all columns. If an application requires optimistic concurrency based on row versioning for a table that has no timestamp column, the cursor defaults to optimistic concurrency control based on values.

SCROLL LOCKS: this option implements pessimistic concurrency control, in which the application attempts to lock database rows as they are read into the cursor result set. When a server cursor is used, an update lock is placed on each row as it is read into the cursor. If the cursor is opened within a transaction, the transaction's update lock is held until the transaction is committed or rolled back, while the scroll lock is dropped when the next row is fetched. If the cursor is opened outside a transaction, the lock is dropped when the next row is fetched. Therefore, whenever full pessimistic concurrency control is needed, the cursor should be opened inside a transaction. An update lock prevents any other task from acquiring an update or exclusive lock, and thus prevents other tasks from updating the row; however, it does not block shared locks, so it does not prevent other tasks from reading the row unless the second task also requests a read with an update lock.

Scroll locks are generated according to the lock hints specified in the SELECT statement that defines the cursor. A scroll lock is acquired on each row during a fetch and held until the next fetch or until the cursor is closed, whichever comes first. At the next fetch, the server acquires scroll locks for the newly fetched rows and releases the scroll locks from the previously fetched rows. Scroll locks are independent of transaction locks and can persist across a commit or rollback operation. If the option to close cursors at commit is off, a COMMIT statement does not close any open cursors, and the scroll locks persist past the commit to maintain the isolation of the fetched data. The type of scroll lock acquired depends on the cursor's concurrency option and the lock hint in the cursor's SELECT statement:

Lock hint     Read-only   Optimistic with values   Optimistic row versioning   Scroll locks
(none)        Unlocked    Unlocked                 Unlocked                    Update
NOLOCK*       Unlocked    Unlocked                 Unlocked                    Unlocked
HOLDLOCK      Shared      Shared                   Shared                      Update
UPDLOCK       Error       Update                   Update                      Update
TABLOCKX      Error       Unlocked                 Unlocked                    Update
Other         Unlocked    Unlocked                 Unlocked                    Update

* Specifying the NOLOCK hint makes the table it names read-only within the cursor.

16. Use Profiler to trace queries and obtain the time they require, to find the problem SQL; use the Index Tuning Wizard to optimize indexes.

17. Note the difference between UNION and UNION ALL. UNION ALL is better, since it skips the duplicate-elimination step.

18. Be careful with DISTINCT; do not use it when it is not needed. Like UNION, it makes the query slower. Duplicate records are not a problem in such queries.

19. Do not return rows or columns the query does not need.

20. Use sp_configure 'query governor cost limit' or SET QUERY_GOVERNOR_COST_LIMIT to limit the resources a query can consume. When the estimated cost of a query exceeds the limit, the server automatically cancels it, killing it before it runs. SET LOCK_TIMEOUT sets the lock wait time.
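For example (the limit values are illustrative only):

```sql
SET QUERY_GOVERNOR_COST_LIMIT 10   -- refuse queries whose estimated run time exceeds 10 seconds
SET LOCK_TIMEOUT 5000              -- stop waiting for a lock after 5000 ms instead of blocking indefinitely
```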

21. Use SELECT TOP 100 or TOP 10 PERCENT to limit the number of rows returned to the user, or SET ROWCOUNT to limit the rows an operation affects.
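For example (the table name is hypothetical):

```sql
SELECT TOP 100 * FROM ChineseResume ORDER BY PostDate DESC   -- only the first 100 rows
SET ROWCOUNT 60                                              -- later statements stop after affecting 60 rows
SELECT * FROM ChineseResume
SET ROWCOUNT 0                                               -- switch the limit off again
```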

22. Before SQL 2000, it was generally best to avoid the following in WHERE clauses: IS NULL, <>, !=, !>, !<, NOT, NOT EXISTS, NOT IN, NOT LIKE, and LIKE '%500', because they do not use the index and force a full table scan. Also, do not apply functions such as CONVERT or SUBSTRING to a column name in the WHERE clause; if a function is necessary, create a computed column and index it instead. Sometimes you can work around it: WHERE SUBSTRING(FirstName, 1, 1) = 'm' can be rewritten as WHERE FirstName LIKE 'm%' (an index seek); always keep the function away from the column name. Indexes also must not be too numerous or too large. NOT IN scans the table multiple times; replace it with EXISTS, NOT EXISTS, IN, or a LEFT OUTER JOIN, in particular the left join. EXISTS is faster than IN, and NOT is the slowest. If a column allows NULLs, its index previously could not be used, but the SQL 2000 optimizer can now handle this: IS NULL, NOT, NOT EXISTS, and NOT IN can now be optimized, while <> still cannot use an index.
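A sketch of the NOT IN replacement described above (table and column names hypothetical):

```sql
-- Slow form: NOT IN can force repeated scans of the inner table.
SELECT a.PersonMemberID
FROM PersonMember a
WHERE a.PersonMemberID NOT IN (SELECT ReferenceID FROM ChineseResume)

-- Usually faster: NOT EXISTS with a correlated subquery,
SELECT a.PersonMemberID
FROM PersonMember a
WHERE NOT EXISTS (SELECT 1 FROM ChineseResume c WHERE c.ReferenceID = a.PersonMemberID)

-- or a left outer join keeping only the unmatched rows.
SELECT a.PersonMemberID
FROM PersonMember a
LEFT OUTER JOIN ChineseResume c ON c.ReferenceID = a.PersonMemberID
WHERE c.ReferenceID IS NULL
```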

23. Use Query Analyzer to view the query plans of your SQL statements and evaluate whether they are well optimized. On average, 20% of the code consumes 80% of the resources; those slow spots are the focus of optimization.

24. If a query using IN, OR, and so on does not use an index, specify the index with an explicit hint: SELECT * FROM PersonMember (INDEX = IX_Title) WHERE ProcessID IN ('male', 'female')

25. Pre-compute results that will be queried and store them in a table; at query time, just SELECT them. This was the most important technique before SQL 7.0, for example for calculating hospital inpatient charges.

26. MIN() and MAX() can use an appropriate index.

27. A database principle is that the closer the code is to the data, the better. So prefer Default first, then Rules, Triggers, Constraints (constraints such as foreign keys, primary keys, CHECK, UNIQUE, and data-type maximum length are all constraints), then Procedures. This not only keeps maintenance work small and program quality high, but also executes quickly.

28. If you insert a large binary value into an image column, use a stored procedure; do not use an inline INSERT (whether Java behaves this way is unknown). The application first converts the binary value to a character string (doubling its size), and the server converts it back to binary after receiving the characters. A stored procedure has none of these steps. Method: CREATE PROCEDURE p_insert AS INSERT INTO table (fImage) VALUES (@image); call this stored procedure from the front end and pass the binary parameter directly, and processing speed improves noticeably.
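A runnable sketch of the stored-procedure method just described (table and column names hypothetical):

```sql
CREATE PROCEDURE p_insert
    @image image            -- the binary value is passed as a parameter,
AS                          -- so it is never round-tripped through a character string
    INSERT INTO PhotoTable (fImage) VALUES (@image)
```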

29. BETWEEN is sometimes faster than IN; BETWEEN can locate a range in the index more quickly. The query optimizer shows the difference. SELECT * FROM ChineseResume WHERE title IN ('male', 'female') and SELECT * FROM ChineseResume WHERE title BETWEEN 'male' AND 'female' return the same rows, but because IN may be evaluated more than once, it is sometimes slower.

30. When necessary, creating an index on a global or local temporary table can sometimes improve speed, but not always, because building the index also consumes a lot of resources. It is created exactly like an index on a real table.

31. Do not create transactions that accomplish nothing, for example while generating reports; that wastes resources. Start a transaction only when it is necessary.

32. A query with OR clauses can be decomposed into multiple queries connected by UNION. Their speed depends only on whether indexes are used; if the query needs a composite index, UNION ALL executes more efficiently. Multiple OR clauses may not use an index, so rewrite them in UNION form and then try to match indexes. The key question is whether indexes are used.
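An illustrative decomposition (names hypothetical); the extra predicate in the second branch keeps UNION ALL from returning duplicates:

```sql
-- Original form; with only single-column indexes this may fall back to a scan:
SELECT * FROM ChineseResume
WHERE Title = 'male' OR ReferenceID = 'JCNPRH39681'

-- Decomposed so each branch can seek on its own index:
SELECT * FROM ChineseResume WHERE Title = 'male'
UNION ALL
SELECT * FROM ChineseResume WHERE ReferenceID = 'JCNPRH39681' AND Title <> 'male'
```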

33. Use views sparingly; they are inefficient. Operating on a view is slower than operating directly on the table; replace the view with a stored procedure instead. In particular, do not nest views; nested views make it harder to find the underlying data. Look at the nature of a view: it is optimized SQL stored on the server that already has a query plan generated. When retrieving data from a single table, do not use a view that points to multiple tables; retrieve directly from the table, or use a view that contains only that table, otherwise you add unnecessary overhead and interfere with the query. To speed up view queries, MSSQL added indexed views.

34. When DISTINCT and ORDER BY are not required, move those operations to the client; they add extra overhead. This is the same reasoning as UNION versus UNION ALL.

Select top ad.CompanyName, ComID, Position, ad.ReferenceID, WorkLocation, convert(varchar, ad.PostDate, 120) as PostDate1, WorkYear, DegreeDescription
from jobcn_query.dbo.COMPANYAD_query ad
where ReferenceID in ('JCNAD00329667', 'JCNAD132168', 'JCNAD00337748', 'JCNAD00338345', 'JCNAD00333138', 'JCNAD00303570', 'JCNAD00303569', 'JCNAD00303568', 'JCNAD00306698', 'JCNAD00231935', 'JCNAD00231933', 'JCNAD00254567', 'JCNAD00254585', 'JCNAD00254608', 'JCNAD00254607', 'JCNAD00258524', 'JCNAD00332133', 'JCNAD00268618', 'JCNAD00279196', 'JCNAD00268613')
order by PostDate desc

35. In the list of values after IN, put the most frequent values first and the least frequent last, to reduce the number of comparisons.

36. SELECT INTO locks the system tables (sysobjects, sysindexes, etc.), blocking access from other connections. When creating a temporary table, use an explicit CREATE TABLE statement instead of SELECT INTO. For example, run DROP TABLE t_lxh; BEGIN TRAN; SELECT * INTO t_lxh FROM ChineseResume WHERE ... (without the COMMIT), then run SELECT * FROM sysobjects from another connection, and you can see that SELECT INTO locks the system tables; CREATE TABLE also locks the system tables (whether the table is temporary or permanent). So never use SELECT INTO inside a transaction. If a temporary table is used frequently, use a real table or a table variable instead.

37. A HAVING clause after GROUP BY can only eliminate surplus rows after grouping, so try not to use it to do the row-elimination work. The optimal order of execution: the WHERE clause selects all the appropriate rows, GROUP BY groups them for aggregation, and HAVING removes unwanted groups. That way the GROUP BY ... HAVING step is cheap and the query is fast. Grouping large row sets and filtering them with HAVING is very resource-consuming. If the purpose of the GROUP BY does not include aggregation but only grouping, DISTINCT is faster.
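For example (names hypothetical):

```sql
-- Slower: every row is grouped first, then whole groups are discarded.
SELECT Title, COUNT(*) FROM ChineseResume GROUP BY Title HAVING Title = 'male'

-- Faster: WHERE removes the rows before any grouping work is done.
SELECT Title, COUNT(*) FROM ChineseResume WHERE Title = 'male' GROUP BY Title
```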

38. Updating many records in one UPDATE statement is faster than updating them one at a time; batch processing is better.

39. Use temporary tables sparingly; replace them with result sets and table variables wherever possible. A table variable is better than a temporary table.

40. Under SQL 2000, computed fields can be indexed when the following conditions are met:

A. The expression of the computed field is deterministic.

B. The text, ntext, and image data types cannot be used.

C. The following options must be configured: ANSI_NULLS = ON, ANSI_PADDING = ON, ....
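A sketch of an indexed computed column (table and column names hypothetical; the SET options shown are the ones the feature requires):

```sql
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
SET ARITHABORT ON
SET CONCAT_NULL_YIELDS_NULL ON
SET QUOTED_IDENTIFIER ON
SET NUMERIC_ROUNDABORT OFF

CREATE TABLE Orders (
    Qty   int,
    Price money,
    Total AS (Qty * Price)               -- deterministic expression, no text/ntext/image
)
CREATE INDEX IX_Orders_Total ON Orders (Total)
```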

41. Do as much data-processing work as possible on the server to reduce network overhead, for example by using stored procedures. A stored procedure is a SQL statement that has been compiled, optimized, organized into an execution plan, and stored in the database; it is a collection of control-flow language, so of course it is fast. Dynamic SQL that is executed repeatedly can use a temporary stored procedure, which (like a temporary table) is placed in tempdb. Because SQL Server previously did not support complex math calculations, that work had to be pushed to other layers, increasing network overhead. SQL 2000 supports UDFs and now supports complex mathematical calculations, but a function's return value should not be too large, as that carries a large overhead. User-defined functions, like cursors, consume a large amount of resources; if a large result is returned, use a stored procedure instead.

42. Do not call the same function repeatedly in one statement; that wastes resources. Put the result in a variable and then reference the variable; that is faster.

43. SELECT COUNT(*) is inefficient; try to adapt how you write it, since EXISTS is fast. Also note the difference: SELECT COUNT(nullable_field) FROM table and SELECT COUNT(non_null_field) FROM table return different values, because COUNT(column) skips NULLs.
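A sketch of the EXISTS point (names hypothetical):

```sql
-- Slow: counts every matching row just to learn whether any exist.
IF (SELECT COUNT(*) FROM ChineseResume WHERE Title = 'male') > 0
    PRINT 'found'

-- Fast: EXISTS can stop at the first matching row.
IF EXISTS (SELECT 1 FROM ChineseResume WHERE Title = 'male')
    PRINT 'found'
```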

44. When the server has enough memory, configure the number of threads = maximum number of connections + 5 for maximum efficiency. Otherwise, set the number of threads < the maximum number of connections, so that SQL Server's thread pooling takes over; if the number is still = maximum connections + 5, server performance is severely damaged.

45. Access your tables in a fixed order. If you lock table A first and then table B, lock them in that order in all stored procedures. If a stored procedure (inadvertently) locks table B first and then table A, this can cause a deadlock. Deadlocks are hard to find if the locking order is not designed in detail in advance.

46. Monitor the load on the corresponding hardware through SQL Server Performance Monitor. Memory: Page Faults/sec counter: if this value is occasionally high, it indicates that threads are competing for memory. If it stays high, memory may be a bottleneck.

Processor:

1. % DPC Time is the percentage of time the processor spent receiving and servicing deferred procedure calls (DPCs) during the sample interval. (DPCs run at a lower priority than standard interrupts.) Because DPCs execute in privileged mode, the DPC time percentage is part of the privileged time percentage. These times are counted separately and are not part of the total interrupt count. This total shows average busy time as a percentage of instance time.

2. % Processor Time counter: if this value stays above 95%, the bottleneck is the CPU. Consider adding a processor or switching to a faster one.

3. % Privileged Time is the percentage of non-idle processor time spent in privileged mode. (Privileged mode is a processing mode designed for operating-system components and hardware drivers. It allows direct access to hardware and all memory. The other mode is user mode, a restricted processing mode designed for applications, environment subsystems, and the integral subsystem. The operating system switches application threads into privileged mode to access operating-system services.) % Privileged Time includes time spent servicing interrupts and DPCs. A high privileged-time ratio may be caused by a large number of interrupts from a failing device. This counter displays average busy time as part of sample time.

4. % User Time represents CPU-consuming database operations such as sorting and executing aggregate functions. If this value is high, consider adding indexes, using simpler table joins, and horizontally partitioning large tables to reduce it. Physical Disk: Current Disk Queue Length counter: this value should not exceed 1.5-2 times the number of disks. To improve performance, add disks. SQLServer: Cache Hit Ratio counter: the higher the value the better. If it stays below 80%, consider adding memory. Note that this value is cumulative since SQL Server started, so after running for a while it no longer reflects the system's current state.

47. Analyze: SELECT emp_name FROM employee WHERE salary > 3000. If salary is a float type in this statement, the optimizer converts the constant: convert(float, 3000). Since 3000 is an integer, we should write 3000.0 in the program rather than letting the DBMS convert at run time. The same applies to conversions between character and integer data.

48. The order in which a query's join conditions are written:

Select a.PersonMemberID, * from ChineseResume a, PersonMember b where PersonMemberID = b.ReferenceID and a.PersonMemberID = 'JCNPRH39681' (A = B, B = 'number')

Select a.PersonMemberID, * from ChineseResume a, PersonMember b where a.PersonMemberID = b.ReferenceID and a.PersonMemberID = 'JCNPRH39681' and b.ReferenceID = 'JCNPRH39681' (A = B, B = 'number', A = 'number')

Select a.PersonMemberID, * from ChineseResume a, PersonMember b where b.ReferenceID = 'JCNPRH39681' and a.PersonMemberID = 'JCNPRH39681' (B = 'number', A = 'number')

49. Two ways of handling an optional filter condition:

(1) IF no owner code is entered THEN code1 = 0, code2 = 9999 ELSE code1 = code2 = owner code END IF. Execute the SQL statement as: SELECT owner_name FROM P2000 WHERE owner_code >= :code1 AND owner_code <= :code2

(2) IF no owner code is entered THEN SELECT owner_name FROM P2000 ELSE code = owner code; SELECT owner_name FROM P2000 WHERE owner_code = :code END IF. The first method uses only one SQL statement; the second uses two. When no owner code is entered, the second method is clearly more efficient than the first, because it has no filter condition; when an owner code is entered, the second is still more efficient: it not only has one fewer condition but also uses an equality comparison, the fastest query operation. When writing programs, do not be afraid of a little extra trouble.

50. On JOBCN's current new paged-query methods (below): use the performance analyzer to find the performance bottleneck. If the bottleneck is in I/O or network speed, the first method below is more effective; if it is in CPU or memory, the current method is better. Compare the following methods and note that the smaller the index, the better.

Begin
    DECLARE @local_variable table (FID int identity(1,1), ReferenceID varchar(20))
    Insert into @local_variable (ReferenceID)
    Select top 100000 ReferenceID from ChineseResume order by ReferenceID
    -- the paging bounds were garbled in the source; 40 and 60 are shown as one example 20-row page
    Select * from @local_variable where FID > 40 and FID <= 60
End

and

Begin
    DECLARE @local_variable table (FID int identity(1,1), ReferenceID varchar(20))
    Insert into @local_variable (ReferenceID)
    Select top 100000 ReferenceID from ChineseResume order by UpdateDate
    Select * from @local_variable where FID > 40 and FID <= 60
End

differ from

Begin
    CREATE TABLE #temp (FID int identity(1,1), ReferenceID varchar(20))
    Insert into #temp (ReferenceID)
    Select top 100000 ReferenceID from ChineseResume order by UpdateDate
    Select * from #temp where FID > 40 and FID <= 60
    drop table #temp
End
