SQL Statement Execution Efficiency and Analysis (Notes)

Source: Internet
Author: User
Tags: getdate

1. On SQL query efficiency: 1,000,000 rows of data, queried in about 1 second. Sharing the test:

Machine configuration:
CPU: P4 2.4 GHz
Memory: 1 GB
OS: Windows 2003
Database: MS SQL Server 2000
Objective: run performance tests and compare the performance of two query styles

SQL query efficiency step by step

--Step 1.
--Build a table
CREATE TABLE T_userinfo
(
UserID int identity(1,1) primary key nonclustered,
Nick varchar(50) NOT NULL default '',
ClassID int NOT NULL default 0,
Writetime datetime NOT NULL default GETDATE()
)
GO

--Build index
CREATE CLUSTERED INDEX Ix_userinfo_classid ON T_userinfo (ClassID)
GO

--Step 2.

DECLARE @i int
DECLARE @k int
DECLARE @nick varchar(10)
SET @i = 1
WHILE @i < 1000000
BEGIN
    SET @k = @i % 10
    SET @nick = CONVERT(varchar(10), @i)
    INSERT INTO T_userinfo (Nick, ClassID, Writetime) VALUES (@nick, @k, GETDATE())
    SET @i = @i + 1
END
--Takes about 08:27 to run; be patient.

--Step 3.
SELECT TOP 20 UserID, Nick, ClassID, Writetime FROM T_userinfo
WHERE UserID NOT IN
(
    SELECT TOP 900000 UserID FROM T_userinfo ORDER BY UserID ASC
)

--Takes 8 seconds: far too slow.

--Step 4.
SELECT a.UserID, b.Nick, b.ClassID, b.Writetime FROM
(
    SELECT TOP 20 a.UserID FROM
    (
        SELECT TOP 900020 UserID FROM T_userinfo ORDER BY UserID ASC
    ) a ORDER BY a.UserID DESC
) a INNER JOIN T_userinfo b ON a.UserID = b.UserID
ORDER BY a.UserID ASC

--Takes about 1 second: remarkably fast.

--Step 5: query with a WHERE clause
SELECT TOP 20 UserID, Nick, ClassID, Writetime FROM T_userinfo
WHERE ClassID = 1 AND UserID NOT IN
(
    SELECT TOP 90000 UserID FROM T_userinfo
    WHERE ClassID = 1
    ORDER BY UserID ASC
)
--Takes 2 seconds

--Step 6: query with a WHERE clause
SELECT a.UserID, b.Nick, b.ClassID, b.Writetime FROM
(
    SELECT TOP 20 a.UserID FROM
    (
        SELECT TOP 90000 UserID FROM T_userinfo
        WHERE ClassID = 1
        ORDER BY UserID ASC
    ) a ORDER BY a.UserID DESC
) a INNER JOIN T_userinfo b ON a.UserID = b.UserID
ORDER BY a.UserID ASC

--Query Analyzer shows less than 1 second.


Query efficiency analysis:
Because a subquery must eliminate duplicate values, a nested query may have to be re-evaluated for each row of the outer query. In that case, consider replacing it with a join.
If you must use a subquery, use EXISTS instead of IN, and NOT EXISTS instead of NOT IN. A subquery introduced by EXISTS only tests for the existence of rows that meet the subquery's criteria, so it is more efficient. In either case, NOT IN is the least efficient, because it performs a full traversal of the table in the subquery.
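As a sketch only, the NOT IN paging query from Step 3 above could be rewritten with NOT EXISTS along the lines suggested here (table and column names are taken from the test table built earlier; this rewrite is an illustration, not part of the original benchmark):

```sql
-- Sketch: Step 3's NOT IN expressed as NOT EXISTS.
SELECT TOP 20 UserID, Nick, ClassID, Writetime
FROM T_userinfo a
WHERE NOT EXISTS
(
    SELECT 1
    FROM (SELECT TOP 900000 UserID FROM T_userinfo ORDER BY UserID ASC) b
    WHERE b.UserID = a.UserID
)
```

On SQL Server 2000 the optimizer may or may not produce a better plan for this form; the join rewrite in Step 4 remains the fastest variant measured in the article.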

Build sensible indexes, avoid scanning redundant data, and avoid table scans!
With millions of rows, the query completes in tens of milliseconds.
2. SQL: improving query efficiency (2008-05-12 21:20)
1. To optimize queries, avoid full-table scans as far as possible; first consider creating indexes on the columns used in WHERE and ORDER BY.

2. Try to avoid testing fields for NULL in the WHERE clause; otherwise the engine will abandon the index and perform a full-table scan. For example:
Select ID from t where num is null
You can set a default value of 0 on num, ensure the num column in the table contains no NULL values, and then query:
Select ID from t where num=0

3. Try to avoid using the != or <> operator in the WHERE clause; otherwise the engine will abandon the index and perform a full table scan.

4. Try to avoid using OR to join conditions in the WHERE clause; otherwise the engine will abandon the index and perform a full table scan. For example:
Select ID from t where num=10 or num=20
You can query this:
Select ID from t where num=10
UNION ALL
Select ID from t where num=20

5. IN and NOT IN should also be used with caution; otherwise they can result in full table scans. For example:
Select ID from t where num in (1,2,3)
For consecutive values, you can use between instead of in:
Select ID from t where num between 1 and 3

6. The following query will also cause a full table scan:
Select ID from t where name like '%abc%'
To be more efficient, consider full-text indexing.
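As an illustration of that suggestion, a full-text query in SQL Server might look like the following. This is a sketch: it assumes a full-text catalog has already been created and populated for table t, which is not shown in the article.

```sql
-- Sketch: assumes a full-text index exists on the name column of t.
-- CONTAINS can use the full-text index; LIKE '%abc%' cannot use a B-tree index.
SELECT ID FROM t WHERE CONTAINS(name, 'abc')
```

Full-text indexes are maintained separately from regular indexes and are populated asynchronously, so results may lag slightly behind recent inserts.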

7. Using a local variable in the WHERE clause can also cause a full table scan. Because SQL resolves local variables only at run time, the optimizer cannot defer the choice of access plan to run time; it must choose at compile time. But when the access plan is built at compile time, the variable's value is still unknown, so it cannot serve as an input for index selection. The following statement performs a full table scan:
Select ID from t where num=@num
You can force the query to use the index instead:
Select ID from t with (index(index_name)) where num=@num

8. Try to avoid performing expression operations on fields in the WHERE clause; otherwise the engine will abandon the index and perform a full table scan. For example:
Select ID from t where num/2=100
should read:
Select ID from t where num=100*2

9. Try to avoid applying functions to fields in the WHERE clause; otherwise the engine will abandon the index and perform a full table scan. For example:
Select ID from t where substring(name,1,3) = 'abc'   --IDs whose name starts with 'abc'
Select ID from t where datediff(day, createdate, '2005-11-30') = 0   --IDs generated on '2005-11-30'
should read:
Select ID from t where name like 'abc%'
Select ID from t where createdate >= '2005-11-30' and createdate < '2005-12-1'

10. Do not perform functions, arithmetic operations, or other expression operations on the left side of "=" in the WHERE clause, or the index may not be used correctly by the system.

11. When using an indexed field as a condition, if the index is a composite index, you must use the index's first field in the condition to guarantee that the index is used; otherwise the index will not be used. The field order should match the index order whenever possible.
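A minimal sketch of the leading-column rule, using a hypothetical composite index on the example table t (the index name and columns are illustrative):

```sql
-- Sketch: a hypothetical composite index on (num, name).
CREATE INDEX ix_t_num_name ON t (num, name)

SELECT ID FROM t WHERE num = 10 AND name = 'abc'  -- can seek on ix_t_num_name
SELECT ID FROM t WHERE num = 10                   -- can still seek (leading column used)
SELECT ID FROM t WHERE name = 'abc'               -- cannot seek: leading column missing
```

The last query may still scan the index, but it cannot perform an index seek, which is where the performance benefit lies.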

12. Do not write meaningless queries, such as the need to generate an empty table structure:
Select Col1,col2 into #t from T where 1=0
This type of code does not return any result sets, but consumes system resources and should be changed to this:
CREATE TABLE #t (...)

13. It is a good choice to replace in with exists in many cases:
Select num from a where num in (select num from B)
Replace with the following statement:
Select num from a where exists (select 1 from b where num=a.num)

14. Not all indexes are effective for all queries. SQL optimizes queries based on the data in the table; when an indexed column contains a large number of duplicate values, the query may not use the index. For example, if a table has a sex field that is roughly half "male" and half "female", then even an index on sex does nothing for query efficiency.

15. More indexes are not always better. An index can improve the efficiency of the corresponding SELECT, but it also reduces the efficiency of INSERT and UPDATE, since those operations may have to rebuild the index. So how to build indexes needs careful, case-by-case consideration. A table should preferably have no more than 6 indexes; if there are more, consider whether the rarely used ones are really necessary.

16. Avoid updating clustered index data columns as much as possible, because the order of the clustered index columns is the physical storage order of the table's records; once a column value changes, the records of the whole table must be reordered, which consumes considerable resources. If an application needs to update clustered index columns frequently, reconsider whether the index should be clustered at all.

17. Use numeric fields where possible. A field that contains only numeric information should not be designed as a character type; that reduces query and join performance and increases storage overhead. The engine compares a string character by character when processing queries and joins, while a numeric type needs only a single comparison.

18. Use varchar/nvarchar instead of char/nchar where possible. First, variable-length fields take less storage space; second, for queries, searching within a smaller field is clearly more efficient.

19. Do not use SELECT * FROM t anywhere; replace "*" with a specific field list, and do not return fields that are not needed.

20. Try to use table variables instead of temporary tables. Note that if the table variable will contain a large amount of data, its indexing options are very limited (only the primary key index).
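A brief sketch of a table variable; the names are illustrative. The only index available is the one created implicitly by the inline PRIMARY KEY constraint, which is the limitation point 20 refers to:

```sql
-- Sketch: a table variable with its only possible index (the PK constraint).
DECLARE @t TABLE
(
    ID  int PRIMARY KEY,   -- implicit unique index via the constraint
    num int NOT NULL
)

INSERT INTO @t (ID, num) VALUES (1, 10)
SELECT ID, num FROM @t WHERE ID = 1   -- can use the PK index
```

Unlike temporary tables, table variables cannot receive additional CREATE INDEX statements, so they suit small working sets best.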

21. Avoid frequent creation and deletion of temporary tables to reduce the consumption of system table resources.

22. Temporary tables are not unusable; using them appropriately can make certain routines more efficient, for example when you need to repeatedly reference a dataset from a large table or a commonly used table. For one-off operations, however, an export table is preferable.

23. When creating a temporary table, if a large amount of data is inserted at once, use SELECT INTO instead of CREATE TABLE, to avoid generating a large amount of log and to improve speed; if the amount of data is small, use CREATE TABLE followed by INSERT, to reduce pressure on the system tables.

24. If temporary tables are used, be sure to explicitly delete them all at the end of the stored procedure: TRUNCATE TABLE first, then DROP TABLE. This avoids holding locks on the system tables for long periods.
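Points 23 and 24 together can be sketched as the following temp-table lifecycle (column names and the filter are illustrative):

```sql
-- Sketch: SELECT INTO for bulk creation, then TRUNCATE before DROP.
SELECT Col1, Col2
INTO #t
FROM T
WHERE ClassID = 1     -- illustrative filter

-- ... work with #t here ...

TRUNCATE TABLE #t     -- release the data pages first
DROP TABLE #t         -- then drop, shortening locks on system tables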

25. Avoid cursors as much as possible, because cursors are inefficient; if a cursor operates on more than 10,000 rows of data, consider rewriting it.

26. Before using a cursor-based method or temporal table method, you should first look for a set-based solution to solve the problem, and the set-based approach is generally more efficient.

27. As with temporary tables, cursors are not unusable. Using FAST_FORWARD cursors on small datasets is often preferable to other row-by-row processing methods, especially when several tables must be referenced to obtain the required data. Routines that compute "totals" in the result set are usually faster than using cursors. If development time permits, try both the cursor-based approach and the set-based approach and see which works better.

28. SET NOCOUNT ON at the beginning of all stored procedures and triggers, and SET NOCOUNT OFF at the end. There is no need to send a DONE_IN_PROC message to the client after each statement of a stored procedure or trigger.
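A sketch of the stored-procedure skeleton point 28 describes; the procedure name and query are illustrative, reusing the T_userinfo table from section 1:

```sql
-- Sketch: suppressing per-statement DONE_IN_PROC messages.
CREATE PROCEDURE usp_GetUser @UserID int
AS
SET NOCOUNT ON    -- stop sending row-count messages to the client

SELECT Nick, ClassID FROM T_userinfo WHERE UserID = @UserID

SET NOCOUNT OFF
GO
```

For chatty procedures with many statements, eliminating these small network messages can add up to a measurable saving.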

29. Try to avoid large transactions, to improve the system's concurrency.

30. Try to avoid returning large amounts of data to the client; if the data volume is too large, consider whether the requirement itself is reasonable.

A further checklist:
1. Avoid setting fields to "allow NULL"
2. Normalize the design of data tables
3. Analyze in depth what operations the data manipulation requires of the database
4. Try not to use temporary tables
5. Make more use of transactions
6. Try not to use cursors
7. Avoid deadlocks
8. Pay attention to the use of read and write locks
9. Do not open large datasets
10. Do not use server-side cursors
11. When coding, allow for the database growing to a large data volume
12. Do not create an index on a "sex" column
13. Pay attention to timeout problems
14. Do not use SELECT *
15. When inserting records into a detail table, do not run SELECT MAX(ID) on the master table
16. Try not to use the TEXT data type
17. Use parameterized queries
18. Do not use INSERT to import large batches of data
19. Learn to analyze queries
20. Use referential integrity
21. Replace WHERE joins with INNER JOIN and LEFT JOIN
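Item 21 can be sketched as follows, reusing the example tables a and b from point 13 above (the column is illustrative):

```sql
-- Sketch: an implicit join in the WHERE clause...
SELECT a.num FROM a, b WHERE a.num = b.num

-- ...rewritten as an explicit INNER JOIN, as the checklist recommends:
SELECT a.num FROM a INNER JOIN b ON a.num = b.num
```

The two forms usually produce the same plan, but the explicit JOIN keeps join conditions separate from filters, which is easier to read and harder to get wrong as queries grow.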
///////////////////////////////////////////////////////////////////////////////////////////
Http://blog.sina.com.cn/s/blog_4b3d79a9010006gv.html
Improve SQL query efficiency (tips and tricks):
Tip One:
Problem type: when an Access database field contains Japanese katakana or other unusual characters, the query reports a memory overflow.
Workaround: Modify the query statement
Sql= "SELECT * FROM tablename where column like '%" &word& "% '"
Switch
Sql= "SELECT * FROM tablename"
Rs.filter = "column like '%" &word& "% '"
===========================================================
Tip Two:
Problem type: how to implement, in a simple way, a multi-keyword query similar to Baidu's (keywords separated by spaces or other symbols).
Workaround:
'//Split the query string on spaces
Ck = Split(Word, " ")
'//Get the number of parts
Sck = UBound(Ck)
Sql = "SELECT * FROM TableName where "

'//Query against one field
For i = 0 To Sck
    Sql = Sql & TempJoinWord & "(" & _
          "column like '" & Ck(i) & "%')"
    TempJoinWord = " and "
Next

'//Query against two fields at the same time
For i = 0 To Sck
    Sql = Sql & TempJoinWord & "(" & _
          "column like '" & Ck(i) & "%' or " & _
          "column1 like '" & Ck(i) & "%')"
    TempJoinWord = " and "
Next
===========================================================
Technique three: several techniques that greatly improve query efficiency

1. Try not to use OR. Using OR causes a full table scan, which greatly reduces query efficiency.
2. In practice, CHARINDEX() does not improve query efficiency over LIKE with a leading %, and CHARINDEX() makes the index ineffective (this refers to SQL Server databases).
3. column like '%"&word&"%' stops the index from working;
column like '"&word&"%' lets the index work (the leading % symbol is removed)
(this refers to SQL Server databases).
4. The difference between '%"&word&"%' and '"&word&"%' at query time:
For example, suppose the field content is "a woman who is easily hurt".
'%"&word&"%': matches the string anywhere, so searching for either "hurt" or "a" will return the row.
'"&word&"%': matches only from the start of the string, so searching for "hurt" returns nothing, while searching for "a" returns the row.
5. When selecting fields, follow the principle of "take only as much as you need", avoid "select *", and use "select field1, field2, field3 ...". Practice shows that every field you omit speeds up data retrieval; the speedup depends on the size of the discarded fields.
6. ORDER BY is most efficient when sorting by a clustered index column. A SQL Server table can have only one clustered index, which is usually on the default ID column, but it can be moved to another field.
7. Build appropriate indexes on your tables; indexing can speed up your queries by a factor of hundreds. (This refers to SQL Server databases.)
Below is a query-efficiency analysis of indexed versus non-indexed SQL Server queries.
Table: News
Fields:
ID: auto-number
Title: article title
Author: author
Content: content
Star: priority
Addtime: time
Rows: 1,000,000
Test machine: P4 2.8 GHz / 1 GB RAM / IDE hard disk
=======================================================
Scenario 1:
Primary key ID, default to clustered index, no other nonclustered index
SELECT * from News where Title like '%"&word&"%' or Author like '%"&word&"%' ORDER BY Id DESC
Fuzzy retrieval from field title and author, sorted by ID
Query time: 50 seconds
=======================================================
Scenario 2:
Primary key ID, default is clustered index
Creating nonclustered indexes on title, Author, and star
SELECT * from News where Title like '"&word&"%' or Author like '"&word&"%' ORDER BY Id DESC
Fuzzy retrieval from field title and author, sorted by ID
Query time: 2-2.5 seconds
=======================================================
Scenario 3:
Primary key ID, default is clustered index
Creating nonclustered indexes on title, Author, and star
SELECT * from News where Title like '"&word&"%' or Author like '"&word&"%' ORDER BY Star DESC
Fuzzy retrieval from field title and author, sorted by star
Query time: 2 seconds
=======================================================
Scenario 4:
Primary key ID, default is clustered index
Creating nonclustered indexes on title, Author, and star
SELECT * from News where Title like '"&word&"%' or Author like '"&word&"%'
Fuzzy retrieval from field title and author, not sorted
Query time: 1.8-2 seconds
=======================================================
Scenario 5:
Primary key ID, default is clustered index
Creating nonclustered indexes on title, Author, and star
SELECT * from News where Title like '"&word&"%'
or
SELECT * from News where Author like '"&word&"%'
Retrieved from the field title or author, not sorted
Query time: 1 seconds
How to improve the query efficiency of the SQL language?
Q: How can I improve the efficiency of SQL queries?
A: This has to start from the basics:
Because SQL is a result-oriented rather than process-oriented query language, large relational databases that support SQL generally use a cost-based query optimizer to provide an optimal execution strategy for ad hoc queries. For the optimizer, the input is a query statement and the output is an execution strategy.
A SQL query can have many execution strategies; the optimizer estimates and chooses the least expensive, lowest-time method among them. All optimization is based on the WHERE clause of the query; for the WHERE clause, the optimizer mainly applies optimizations to search arguments (Search Arguments).
The core idea of a search argument is that the database can use an index on a table field to locate data, instead of scanning the records directly.
Conditional statements with operators such as =, <, <=, >, >= can use the index directly. The following are search arguments:
emp_id = "10001" or salary > 3000 or a = 1 and c = 7
The following are not search arguments:
salary = emp_salary or dep_id != 10 or salary * 12 >= 3000 or a = 1 or c = 7
You should provide as many redundant search arguments as possible, giving the optimizer more room to choose. Consider the following three methods:
The first method:
Select employee.emp_name, department.dep_name from department, employee where (employee.dep_id = department.dep_id) and (department.dep_code = "01") and (employee.dep_code = "01");
The results of its search analysis are as follows:
Estimate 2 I/O operations
Scan Department using primary key
For rows where Dep_code equals "01"
Estimate getting here 1 times
Scan Employee sequentially
Estimate getting here 5 times
The second method:
Select employee.emp_name, department.dep_name from department, employee where (employee.dep_id = department.dep_id) and (department.dep_code = "01");
The results of its search analysis are as follows:
Estimate 2 I/O operations
Scan Department using primary key
For rows where Dep_code equals "01"
Estimate getting here 1 times
Scan Employee sequentially
Estimate getting here 5 times
Its analysis result is identical to the first method's, but the first method is best, because it provides the optimizer with more choices.
The third method:
Select employee.emp_name, department.dep_name from department, employee where (employee.dep_id = department.dep_id) and (employee.dep_code = "01");
This method is the worst, because it cannot use the index; that is, it cannot be optimized.
The following points should be noted when using SQL statements:
1. Avoid using incompatible data types. For example, float and integer, char and varchar, binary and long binary are incompatible with each other. Incompatible data types may prevent the optimizer from performing optimizations it could otherwise do. For example:
Select emp_name from employee where salary > 3000;
In this statement, if salary is a float type, it is difficult for the optimizer to optimize, because 3000 is an integer. We should use 3000.0 when programming, rather than leave the DBMS to convert at run time.
2. Try not to use expressions, because they cannot be evaluated at compile time, so SQL can only use the column's average density to estimate the number of records that will be hit.
3. Avoid applying other mathematical operators to search arguments. For example:
Select emp_name from employee where salary * 12 > 3000;
should read:
Select emp_name from employee where salary > 250;
4. Avoid operators such as != or <>, because they prevent the system from using the index, forcing it to search the table data directly.
Application in Oracle
A table with 16 million rows: the SMS uplink table Tbl_sms_mo.
Structure:
CREATE TABLE Tbl_sms_mo
(
SMS_ID number,
mo_id VARCHAR2 (50),
MOBILE VARCHAR2 (11),
Spnumber VARCHAR2 (20),
MESSAGE VARCHAR2 (150),
Trade_code VARCHAR2 (20),
link_id VARCHAR2 (50),
GATEWAY_ID number,
Gateway_port number,
Mo_time DATE DEFAULT sysdate
);
CREATE INDEX idx_mo_date ON Tbl_sms_mo (mo_time)
  PCTFREE 10
  INITRANS 2
  MAXTRANS 255
  STORAGE
  (
    INITIAL 1M
    NEXT 1M
    MINEXTENTS 1
    MAXEXTENTS UNLIMITED
    PCTINCREASE 0
  );
CREATE INDEX idx_mo_mobile ON Tbl_sms_mo (MOBILE)
  PCTFREE 10
  INITRANS 2
  MAXTRANS 255
  STORAGE
  (
    INITIAL 64K
    NEXT 1M
    MINEXTENTS 1
    MAXEXTENTS UNLIMITED
    PCTINCREASE 0
  );
Problem: query the table for the short messages sent from a given phone within a given time period. The SQL statement is as follows:
SELECT mobile, message, trade_code, mo_time
FROM Tbl_sms_mo
WHERE mobile = '130XXXXXXXX'
AND mo_time BETWEEN to_date('2006-04-01', 'yyyy-mm-dd hh24:mi:ss') AND to_date('2006-04-07', 'yyyy-mm-dd hh24:mi:ss')
ORDER BY mo_time DESC
It takes about 10 minutes for the results to return, which is simply unbearable for a web query.
Analysis:
In PL/SQL Developer, click the "Explain Plan" button (or press F5). Analysis shows that the default index used is idx_mo_date. The problem may lie here: compared with the total of 16 million rows, the rows for a single mobile number are very few, so using idx_mo_mobile makes it much easier to pinpoint the data.
Optimize as follows:
SELECT /*+ index(tbl_sms_mo idx_mo_mobile) */ mobile, message, trade_code, mo_time
FROM Tbl_sms_mo
WHERE mobile = '130XXXXXXXX'
AND mo_time BETWEEN to_date('2006-04-01', 'yyyy-mm-dd hh24:mi:ss') AND to_date('2006-04-07', 'yyyy-mm-dd hh24:mi:ss')
ORDER BY mo_time DESC
Test:
Press F8 to run the SQL: 2.360 seconds. That is the difference an index makes.
Improving SQL Server performance with indexes
Special Instructions
The efficient use of indexes in Microsoft's SQL Server system can improve the query performance of the database, but the performance increase depends on the implementation of the database. This article will show you how to implement indexes and effectively improve the performance of your database.
  
Using indexes in a relational database can obviously improve performance: the faster you want data out of the database, the more you rely on indexes. Note, however, that the more indexes there are, the longer it takes to insert new data into the system. In this article you will learn about the different types of indexes Microsoft SQL Server supports, how to implement indexes in different ways, and how you can gain far better read performance at some cost to the database's overall performance.
  
Definition of an index
An index is a database tool. By using an index, data can be obtained from the database without scanning all the records, which improves the system's data-retrieval performance. An index changes the way data is organized so that retrieval can be achieved easily. Indexes are created on columns, so the database can find the appropriate rows based on the values in the indexed columns.
  
Type of index
Microsoft SQL Server supports two types of indexes: clustered and nonclustered. A clustered index stores the table's data in physical order. Since a table has only one physical order, each table can have only one clustered index. For range lookups the clustered index is very efficient, because the data is already stored sorted in physical order.
  
A nonclustered index does not affect the underlying physical storage; it is made up of pointers to data rows. If a clustered index already exists, the pointers in the nonclustered index refer to locations in the clustered index. These indexes are more compact than the data itself, and scanning them is much faster than scanning the actual table.
  
How to implement an index
The database can create some indexes automatically. For example, Microsoft SQL Server enforces a unique constraint by automatically creating a unique index, which ensures duplicate data cannot be inserted into the database. You can also create additional indexes with the CREATE INDEX statement or with SQL Server Enterprise Manager, which also provides an index-creation wizard to guide you through the process.
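The CREATE INDEX forms mentioned above can be sketched as follows; the index, table, and column names are illustrative, not from the article:

```sql
-- Sketch: explicit index creation.
CREATE UNIQUE INDEX ix_t_num ON t (num)          -- also enforces uniqueness on num
CREATE NONCLUSTERED INDEX ix_t_name ON t (name)  -- plain nonclustered index
```

A unique index created this way behaves like the one SQL Server builds automatically for a UNIQUE constraint; declaring the constraint instead is usually clearer about intent.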
  
Get better performance
While indexes bring performance benefits, they also incur costs. Although SQL Server allows up to 256 nonclustered indexes per table, it is not recommended to use anywhere near that many, because indexes require additional space in memory and on physical disk. During execution of an INSERT statement, performance may drop somewhat, because the data must be inserted in index order rather than simply at the first available location; the more indexes exist, the longer inserts and updates take.
  
When creating an index using the SQL Server system, it is recommended to follow the creation guidelines below:
  
Correct selection of data types
Some data types are more efficient in an index than others. Types such as int, bigint, smallint, and tinyint are well suited to indexes because they are all fixed-size and easy to compare. Other types such as char and varchar are less efficient, because they are unsuitable for mathematical operations and comparing them takes longer than for the numeric types above.
  
Ensure that index values can actually be used
When you execute a query, you may use only some of the indexed columns; this matters especially when you transform the data. If you call functions with indexed columns as arguments, those functions can nullify the index's sorting advantage. For example, if a date value is indexed, but to compare it you convert it to a string, the date index cannot be used during the query.
  
When creating multi-column indexes, mind the column order
The database sorts records by the values of the first index column, then further by the second, and so on until the last indexed column. The column with fewer unique values should be the first index column; this ensures the data can be further cross-sorted by the remaining columns.
  
Limit the number of columns in the clustered index
The more columns the clustered index has, the more clustered-index reference data is carried inside every nonclustered index, and the more must be stored. This increases the size of tables that carry indexes and increases index-based search time.
  
Avoid frequent updates to clustered index data columns
Because nonclustered indexes depend on the clustered index, if the columns that make up the clustered index are updated frequently, the row locators stored in the nonclustered indexes must also be updated frequently. For all queries touching these columns, this can add performance cost whenever records are locked.
  
Separate operation (if possible)
If a table needs frequent inserts and updates as well as heavy reads, consider splitting it where possible: perform all inserts and updates on a table with no indexes, then copy the data to another table that carries the many indexes optimizing read access.
  
Proper rebuilding of indexes
A nonclustered index contains pointers into the clustered index, making it subordinate to the clustered index. When rebuilding the clustered index, you can either drop the original index and then run CREATE INDEX, or use the DROP_EXISTING clause of the CREATE INDEX statement to perform the rebuild as a single step. Splitting the drop and the create into separate steps causes every nonclustered index to be rebuilt multiple times, whereas DROP_EXISTING rebuilds the nonclustered indexes only once.
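As a sketch, the single-step rebuild looks like the following, reusing the index name from section 1 (the SQL Server 2000-era syntax; newer versions also accept WITH (DROP_EXISTING = ON)):

```sql
-- Sketch: rebuild the clustered index in one step, so dependent
-- nonclustered indexes are rebuilt only once.
CREATE CLUSTERED INDEX Ix_userinfo_classid
ON T_userinfo (ClassID)
WITH DROP_EXISTING
```

Dropping and recreating the same index in two statements would force each nonclustered index to be rebuilt twice: once when the clustered index disappears, and again when it returns.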
  
Use the fill factor wisely
Data is stored in fixed-size pages of contiguous space. As new rows are added, a data page gradually fills up, and the system must eventually split the page, moving part of the data to a new page. Page splits burden the system and fragment the stored data. The fill factor maintains free space between rows; it is usually set when the index is created, and it reduces the page splits caused by inserting data. Because the free space is maintained only when the index is created, not when data is added or updated, making full use of the fill factor requires rebuilding the index periodically. Note also that the free space created by the fill factor reduces read performance, because as the database grows, more and more disk accesses are needed to read the same data. So when reads outnumber writes, consider carefully whether a fill factor or the default is appropriate.
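A brief sketch of setting and reapplying a fill factor; the index name and the value 80 are illustrative, and DBCC DBREINDEX is the SQL Server 2000-era rebuild command:

```sql
-- Sketch: reserve 20% free space per leaf page at creation time.
CREATE INDEX ix_t_num ON t (num) WITH FILLFACTOR = 80

-- Periodic rebuild, reapplying the fill factor so the reserved
-- free space (consumed by later inserts) is restored.
DBCC DBREINDEX ('t', 'ix_t_num', 80)
```

Without the periodic rebuild, the reserved space fills up and page splits resume, which is exactly the situation the fill factor was meant to prevent.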
  
Management decisions
Effective use of indexes enables good query performance in Microsoft SQL Server, but the efficiency gained depends on several implementation decisions. Striking the right balance with indexes means choosing between read performance and update cost. In specific situations, the recommendations in this article can help you make the right decision.
