SQL Server Performance Optimization Overview

Source: Internet
Author: User
Tags: server, memory

Based on my recent work, I hope to make a comprehensive summary of the precautions for SQL Server database performance optimization. I searched the internet and found many articles; some list hundreds of entries, but they contain a lot of seemingly incorrect or outdated information (perhaps applicable to versions earlier than SQL Server 6.5, or to Oracle), so I had to summarize my own previous experience and test results.

I have always believed that improving a system's performance is not only a task for the trial run or maintenance phase, but also a task for the development phase; it deserves attention throughout the entire software life cycle. Therefore, I will summarize the precautions for database performance optimization according to the stages of the software life cycle.

I. Analysis Stage

In general, there are many things to pay attention to in the system analysis phase; the system's functional, usability, reliability, and security requirements attract most of our attention. However, we must remember that performance is an important non-functional requirement. Based on the characteristics of the system, you must determine its real-time requirements, response-time requirements, and hardware configuration, preferably with quantitative indicators for each requirement.

On the other hand, in the analysis phase the type of system should be identified from its requirements. Broadly, the types are OLTP (online transaction processing) and OLAP (online analytical processing).

II. Design Phase

The design stage can be said to be the key stage for the system's future performance. At this stage there is a process that relates to almost all later performance tuning: database design.

After the database design is complete, a preliminary index design can be carried out. A good index design guides the coding stage toward highly efficient code and lays a good foundation for the performance of the entire system.

Pay attention to the following performance requirements during the design phase:

1. Normalization of the database logical design

The normalization of the database logical design is what we usually call the normal forms, which can be understood simply as follows:

First normal form (1NF): no repeating groups or multi-valued columns. This is the minimum requirement of database design.

Second normal form (2NF): every non-key field must depend on the entire primary key and must not depend on only part of a composite primary key; that is, eliminate partial dependencies. In most cases, a database design should reach second normal form.

Third normal form (3NF): no non-key field may depend on another non-key field; that is, eliminate transitive dependencies. Third normal form should be required of most tables in the system, except for a few tables with special functions.

Higher normal forms are not introduced here. Personally, I think that if all tables reach second normal form and most reach third, the system will have fewer columns and more tables, which reduces data redundancy and helps performance.
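As a purely hypothetical illustration (all table and column names below are invented), a design that violates second normal form, and its decomposition:

-- Violates 2NF: customer_name depends only on order_id,
-- which is just part of the composite key (order_id, item_no).
CREATE TABLE order_detail (
    order_id int NOT NULL,
    item_no varchar(30) NOT NULL,
    customer_name varchar(100),
    qty int,
    PRIMARY KEY (order_id, item_no)
)

Decomposed, the partial dependency moves to its own table:

CREATE TABLE orders (
    order_id int NOT NULL PRIMARY KEY,
    customer_name varchar(100)
)

CREATE TABLE order_item (
    order_id int NOT NULL,
    item_no varchar(30) NOT NULL,
    qty int,
    PRIMARY KEY (order_id, item_no)
)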

2. Reasonable redundancy

It is almost impossible to design a system strictly according to normalization unless the system is very small. After the normalized design, it is necessary to add redundancy in a planned manner.

Redundancy can be a redundant database, a redundant table, or a redundant field; redundancy at different granularities plays different roles.

Redundancy may be added for programming convenience or for performance. From a performance perspective, redundant databases can spread database load, redundant tables can spread the concurrent pressure on tables with a large amount of data and speed up special queries, and redundant fields can effectively reduce table joins and improve efficiency.
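For example (a hedged sketch, names invented), a redundant customer_name field in the orders table saves a join to the customers table in list queries, at the cost of keeping the copy in sync:

ALTER TABLE orders ADD customer_name varchar(100) NULL  -- redundant copy of customers.customer_name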

3. Primary Key Design

A primary key is necessary. In SQL Server the primary key is simultaneously a unique index, and in practice we often choose the smallest key combination as the primary key, so the primary key is usually well suited to being the table's clustered index. Clustered indexes have a great impact on queries; more on this in the index notes below.

In a table with several candidate keys, the choice of primary key is also important. In general, a key with a small total length is faster, and a small key also reduces the number of levels in the primary key's B-tree.

When choosing a composite primary key, pay attention to the order of its fields: with the same fields, different field orders can perform very differently. In general, put fields with a low repetition rate, and fields likely to be queried alone or in combination, at the front.
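A hypothetical sketch (names invented): both orders below define the same composite key, but only the first can serve a query that filters on item_no alone with an index seek:

-- item_no is frequently queried alone, so it leads
ALTER TABLE stock_move ADD CONSTRAINT pk_stock_move
    PRIMARY KEY (item_no, move_date)
-- compare: PRIMARY KEY (move_date, item_no) could not seek on item_no alone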

4. Foreign key design

Many people find foreign keys, as database objects, troublesome. In fact, foreign keys are useful in most cases, for the following reasons:

Foreign keys are the most efficient way to maintain consistency. A database's consistency requirements can be enforced, in turn, by foreign keys, CHECK constraints, rules, triggers, and client programs; generally, the closer the enforcing method is to the data, the more efficient it is.

Exercise caution with cascading deletes and cascading updates. They were new features of SQL Server 2000 and are retained in SQL Server 2005, so they should be safe to rely on. I am cautious here because cascading deletes and updates break through the traditional definition of foreign keys and behave a little too powerfully: before using them, you must be sure you have mastered their scope of effect, or they may modify or delete your data without you noticing. In terms of performance, though, cascading deletes and updates are more efficient than the other methods.
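A minimal sketch of a foreign key (names invented), with the cascading behavior chosen deliberately:

ALTER TABLE orders
ADD CONSTRAINT fk_orders_customers FOREIGN KEY (customer_id)
    REFERENCES customers (customer_id)
    ON UPDATE CASCADE   -- renumbering a customer propagates to orders
    ON DELETE NO ACTION -- deleting a referenced customer is rejected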

5. Field Design

Fields are the most basic unit of the database, and their design has a great impact on performance. Note the following:

A. Prefer numeric data types; numeric types are processed much faster than character types.

B. Data types should be as small as possible; "small" here means the smallest type that satisfies foreseeable future needs.

C. Do not allow NULL unless necessary; use NOT NULL with a DEFAULT instead.

D. Use text and image sparingly. Reading and writing these binary fields is slow, and there are few ways to read them; in most cases it is best to avoid them.

E. Use auto-increment (identity) fields with caution; they are not conducive to data migration.
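Pulling rules A through E together, a hypothetical field design might look like this:

CREATE TABLE product (
    product_id int NOT NULL,                  -- numeric, not a character code (A)
    price smallmoney NOT NULL DEFAULT 0,      -- smallest type that fits (B)
    remark varchar(500) NOT NULL DEFAULT '',  -- NOT NULL + DEFAULT instead of NULL (C)
    CONSTRAINT pk_product PRIMARY KEY (product_id)
)
-- no text/image columns (D), and no identity column here (E)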

6. Design of physical storage and environment

During the design stage, you can plan the database's physical storage, operating-system environment, and network environment, so that the future system can cope with many concurrent users and large volumes of data.

Pay attention to the role of filegroups. Using filegroups appropriately can spread I/O operations across different physical disks and improve concurrency.
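A hedged sketch (the database name, file path, and sizes are examples only) of adding a filegroup on another physical disk and placing a heavily used table on it:

ALTER DATABASE mydb ADD FILEGROUP fg_data2

ALTER DATABASE mydb ADD FILE (
    NAME = mydb_data2,
    FILENAME = 'E:\data\mydb_data2.ndf',
    SIZE = 100MB
) TO FILEGROUP fg_data2

CREATE TABLE big_table (id int NOT NULL) ON fg_data2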

7. System Design

The design of the entire system, especially the architecture, has a great impact on performance. For an ordinary OLTP system, a two-tier or three-tier client/server architecture can be chosen, and different architectures perform differently.

In the system design phase, some business logic should be implemented with database programming: stored procedures, triggers, and functions. The advantage of implementing business logic in the database is that it reduces network traffic and makes full use of the database's precompilation and caching.
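For instance, a hypothetical sketch of such database programming (names invented): an order-total calculation kept in a stored procedure, so the aggregation runs next to the data and only one row crosses the network:

CREATE PROCEDURE usp_order_total
    @order_id int
AS
    SELECT SUM(qty) AS total_qty   -- aggregation happens on the server
    FROM order_item
    WHERE order_id = @order_id
GO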

8. Index Design

In the design phase, a preliminary index design can be derived from the functional and performance requirements. Here indexes are designed against the expected data volume and queries, which may differ from actual use later.

Note the following when selecting indexes:

A. Decide which tables need indexes from their data volume; tables with little data need only a primary key.

B. Decide which fields need indexes from usage frequency: choose fields frequently used as join conditions, filter conditions, aggregation fields, and sort fields as index candidates.

C. Combine frequently used fields into composite indexes. As with primary keys, the field order of a composite index matters: put the most commonly used fields in front, and put fields with a low repetition rate in front (see the sketch after item D).

D. Do not add too many indexes to one table, because indexes slow down inserts and updates.
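Following rules B and C, a hypothetical composite index might look like this (names invented):

-- customer_id is frequently used alone as a filter, so it leads;
-- order_date is a frequent range-filter and sort field, second
CREATE INDEX ix_orders_customer_date ON orders (customer_id, order_date)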

III. Coding Stage

The coding stage is the focus of this article, because the quality of the code almost determines the quality of the entire system.

In the coding stage, every programmer needs to be performance-conscious, that is, to consider performance while implementing functionality. The database is a tool for set operations, and we should use it that way. A set operation is really a batch operation: minimize looping over large data sets on the client, and use SQL statements or stored procedures instead. Ideas and awareness like this are hard to spell out; they must be absorbed through programming practice.

The following are some precautions for programming:

1. Return only the required data

Returning data to the client costs at least extraction from the database, network transmission, client reception, and client processing. Returning unneeded data adds useless work for the server, the network, and the client, and the harm is obvious. To avoid it, note the following:

A. Horizontally, do not write SELECT *; select only the fields you need.

B. Vertically, write the WHERE clause properly; do not write SQL statements without a WHERE.

C. Pay attention to the WHERE clause of SELECT INTO. SELECT INTO inserts data into a temporary table, and the process locks some system tables; if the WHERE clause returns too much data or runs too slowly, the system tables stay locked for a long time and other processes are blocked.

D. For aggregate queries, the HAVING clause can further restrict the returned rows; an example follows.
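Combining rules A, B, and D, a hedged example (table and column names invented):

SELECT customer_id, SUM(amount) AS total_amount   -- named columns, not * (A)
FROM orders
WHERE order_date >= '2007-01-01'                  -- WHERE limits the rows read (B)
GROUP BY customer_id
HAVING SUM(amount) > 10000                        -- HAVING limits the rows returned (D)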

2. Do as little repeated work as possible

The aim is the same as above, to minimize useless work, but here the focus is on the client program. Note the following:

A. Avoid executing the same statement repeatedly, especially repeated executions over the same basic data.

B. Reduce the number of data conversions. Whether conversion is needed may be a design issue, but reducing how often it happens is something programmers can do.

C. Eliminate unnecessary subqueries and joined tables. A subquery is generally interpreted as an outer join in the execution plan, and a redundant joined table brings extra cost.

D. Merge multiple UPDATEs to the same table under the same condition, for example:

UPDATE employee SET fname = 'haiwer' WHERE emp_id = 'VPA30890F'

UPDATE employee SET lname = 'yang' WHERE emp_id = 'VPA30890F'

These two statements should be merged into the following single statement:

UPDATE employee SET fname = 'haiwer', lname = 'yang'
WHERE emp_id = 'VPA30890F'

E. Do not split an UPDATE into a DELETE plus an INSERT. The result is the same, but the performance difference is large.

F. Do not write meaningless queries, such as:

SELECT * FROM employee WHERE 1 = 2

3. Pay attention to transactions and locks

Transactions are an important tool in database applications. They have four properties: atomicity, consistency, isolation, and durability. Many operations need transactions to guarantee data correctness, and when using them we must avoid deadlocks and minimize blocking. Pay special attention to the following:

A. Keep the body of a transaction as small as possible, and split transactions that can be split.

B. No user interaction should occur inside a transaction, because while the transaction waits for interaction it does not end, and it may hold locks on many resources.

C. Access objects in the same order during transaction operations.

D. Improve the efficiency of each statement in the transaction. Using indexes and other methods to improve the efficiency of each statement can effectively reduce the execution time of the entire transaction.

E. Try not to specify lock types or indexes. SQL Server allows statements to specify the lock type and index to use, but in general the lock type and index chosen by the SQL Server optimizer are optimal for the current data volume and query conditions. What we specify suits only the current situation, and the data volume and distribution will change in the future.

F. Queries can run at a lower isolation level; report queries in particular can use the lowest isolation level, read uncommitted, as sketched below.
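A sketch of rule F, assuming a reporting query (names invented) that can tolerate dirty reads:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

SELECT customer_id, SUM(amount) AS total_amount
FROM orders
GROUP BY customer_id

-- the per-table equivalent is the WITH (NOLOCK) hint on orders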

4. Pay attention to the usage of temporary tables and table Variables

In complex systems, temporary tables and table variables are hard to avoid. Note the following about their use:

A. If a statement is complex and joins too many tables, use temporary tables or table variables to execute it step by step.

B. If the same subset of a large table's data is needed several times, stage that subset in a temporary table or table variable.

C. If you need to combine the data of multiple tables to form a result, you can use temporary tables and table variables to summarize the data of these tables step by step.

D. In other cases, use of temporary tables and table variables should be controlled.

E. On choosing between temporary tables and table variables: a common claim is that table variables live in memory and are therefore fast and should be preferred. The choice should mainly consider the amount of data to be stored: with a large amount of data, the temporary table is faster.

F. We tested SELECT INTO against CREATE TABLE + INSERT INTO for temporary tables. SELECT INTO is generally much faster, but it locks tempdb's system tables sysobjects, sysindexes, and syscolumns, which in a multi-user concurrent environment easily blocks other processes. So in a concurrent system, use CREATE TABLE + INSERT INTO; reserve SELECT INTO for single statements over large data volumes.

G. Pay attention to collations. If a temporary table is created with CREATE TABLE and no collation is specified for its character fields, the fields take tempdb's default collation rather than the current database's. If the two differ, a collation conflict error occurs when the tables are joined. The usual fix is to specify COLLATE database_default for character fields when creating temporary tables with CREATE TABLE, as sketched below.
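A sketch of rule G, pinning the collation of a character field in a temporary table:

CREATE TABLE #t (
    -- follows the current database's collation instead of tempdb's,
    -- so later joins do not raise collation-conflict errors
    name varchar(100) COLLATE database_default
)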

5. Subquery usage

A subquery is a SELECT query nested inside a SELECT, INSERT, UPDATE, or DELETE statement, or inside another subquery. Subqueries can be used wherever an expression is allowed.

Subqueries allow for flexible programming and can be used to implement some special functions. However, in terms of performance, an inappropriate subquery may cause a performance bottleneck.

If a subquery's conditions use fields of the outer table, it is called a correlated subquery. Correlated subqueries can be introduced with IN, NOT IN, EXISTS, and NOT EXISTS.

Note the following about correlated subqueries:

A. Correlated subqueries with NOT IN and NOT EXISTS can be replaced with LEFT JOIN. For example:

SELECT pub_name
FROM publishers
WHERE pub_id NOT IN
    (SELECT pub_id
     FROM titles
     WHERE type = 'business')

It can be rewritten as:

SELECT a.pub_name
FROM publishers a LEFT JOIN titles b
    ON b.type = 'business' AND a.pub_id = b.pub_id
WHERE b.pub_id IS NULL

SELECT title
FROM titles
WHERE NOT EXISTS
    (SELECT title_id
     FROM sales
     WHERE title_id = titles.title_id)

It can be rewritten as:

SELECT title
FROM titles LEFT JOIN sales
    ON sales.title_id = titles.title_id
WHERE sales.title_id IS NULL

B. If the subquery is guaranteed to contain no duplicates, correlated subqueries with IN and EXISTS can be replaced with INNER JOIN. For example:

SELECT pub_name
FROM publishers
WHERE pub_id IN
    (SELECT pub_id
     FROM titles
     WHERE type = 'business')

It can be rewritten as:

SELECT DISTINCT a.pub_name
FROM publishers a INNER JOIN titles b
    ON b.type = 'business' AND a.pub_id = b.pub_id

C. Use EXISTS instead of IN in correlated subqueries, for example:

SELECT pub_name
FROM publishers
WHERE pub_id IN
    (SELECT pub_id
     FROM titles
     WHERE type = 'business')

You can use the following statement instead:

SELECT pub_name
FROM publishers
WHERE EXISTS
    (SELECT 1
     FROM titles
     WHERE type = 'business' AND
           pub_id = publishers.pub_id)

D. Do not use a COUNT(*) subquery to test whether records exist; use LEFT JOIN or EXISTS instead. For example, someone writes:

SELECT job_desc FROM jobs
WHERE (SELECT COUNT(*) FROM employee WHERE job_id = jobs.job_id) = 0

It should be changed to:

SELECT jobs.job_desc FROM jobs LEFT JOIN employee
    ON employee.job_id = jobs.job_id
WHERE employee.emp_id IS NULL

SELECT job_desc FROM jobs
WHERE (SELECT COUNT(*) FROM employee WHERE job_id = jobs.job_id) <> 0

It should be changed to:

SELECT job_desc FROM jobs
WHERE EXISTS (SELECT 1 FROM employee WHERE job_id = jobs.job_id)

6. Use cursors with caution

Generally, database operations are set operations: operations on the result set determined by the WHERE clause and the selected columns. A cursor is a non-set way of operating. What a cursor implements is usually equivalent to a loop implemented on the client, so in most cases we should move that looping functionality to the client.

A cursor loads the result set into server memory and processes it one record at a time in a loop, which consumes a great deal of database resources, especially memory and locks. Therefore, cursors should be used only when there is no other way.

In addition, certain SQL Server features can replace a cursor and gain speed.

A. Example: string concatenation

This is a common forum example: concatenating a string field of a table's qualifying records into one variable. For example, to join the fname of every employee with job_id = 10, separated by commas, the first idea is usually a cursor:

DECLARE @name varchar(20)
DECLARE @names varchar(1000)

DECLARE name_cursor CURSOR FOR
    SELECT fname FROM employee WHERE job_id = 10 ORDER BY emp_id

OPEN name_cursor

FETCH NEXT FROM name_cursor INTO @name

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @names = ISNULL(@names + ',', '') + @name
    FETCH NEXT FROM name_cursor INTO @name
END

CLOSE name_cursor
DEALLOCATE name_cursor

It can be replaced as follows, with the same result:

DECLARE @names varchar(1000)

SELECT @names = ISNULL(@names + ',', '') + fname
FROM employee WHERE job_id = 10 ORDER BY emp_id

B. Example: using CASE WHEN for conversion

Cursors are often used because different records need different processing. In such cases, a CASE WHEN expression can usually do the necessary branching, and CASE WHEN can be nested. For example:

Table Structure:

CREATE TABLE parts (
    item_no varchar(30),
    name varchar(100),
    main_unit varchar(20),
    unit1 varchar(20),
    unit1_factor numeric(18, 4),
    unit2 varchar(20),
    unit2_factor numeric(18, 4)
)

GO

CREATE TABLE receiving (
    receive_date datetime,
    item_no varchar(30),
    unit int,
    received_qty numeric(18, 4),
    damaged_qty numeric(18, 4)
)

GO

The unit field can be 0, 1, or 2, representing the main unit, unit 1, and unit 2 respectively. Many calculations need quantities in a unified unit, which can be implemented with a cursor:

DECLARE @item_no varchar(30),
        @unit int,
        @factor numeric(18, 4)

DECLARE cur CURSOR FOR
    SELECT item_no, unit FROM receiving WHERE unit <> 0

OPEN cur

FETCH NEXT FROM cur INTO @item_no, @unit

WHILE @@FETCH_STATUS <> -1
BEGIN
    IF @unit = 1
    BEGIN
        SET @factor = (SELECT unit1_factor FROM parts WHERE item_no = @item_no)
        UPDATE receiving SET received_qty = received_qty * @factor,
            damaged_qty = damaged_qty * @factor, unit = 0  -- converted to the main unit
        WHERE CURRENT OF cur
    END
    IF @unit = 2
    BEGIN
        SET @factor = (SELECT unit2_factor FROM parts WHERE item_no = @item_no)
        UPDATE receiving SET received_qty = received_qty * @factor,
            damaged_qty = damaged_qty * @factor, unit = 0
        WHERE CURRENT OF cur
    END
    FETCH NEXT FROM cur INTO @item_no, @unit
END

CLOSE cur
DEALLOCATE cur

It can be rewritten as:

UPDATE a SET
    received_qty = CASE a.unit WHEN 1 THEN a.received_qty * b.unit1_factor
                               WHEN 2 THEN a.received_qty * b.unit2_factor
                               ELSE a.received_qty
                   END,
    damaged_qty = CASE a.unit WHEN 1 THEN a.damaged_qty * b.unit1_factor
                              WHEN 2 THEN a.damaged_qty * b.unit2_factor
                              ELSE a.damaged_qty
                  END,
    unit = 0
FROM receiving a, parts b
WHERE a.unit <> 0 AND
      a.item_no = b.item_no

C. Example: UPDATE statements with variables

SQL Server statements are flexible; an UPDATE statement with variables can achieve the same functionality as a cursor. For example, in:

SELECT a, b, c, CAST(NULL AS int) AS seq
INTO #t
FROM some_table
ORDER BY a, NEWID()

After this temporary table is generated, it is sorted by field a, but rows with the same value of a are in random order. If the seq field needs to become each record's sequence number within its group of a, only a cursor or an UPDATE statement with variables can achieve it. The UPDATE with variables looks like this:

DECLARE @a int
DECLARE @seq int

UPDATE #t SET
    @seq = CASE WHEN a = @a THEN @seq + 1 ELSE 1 END,
    @a = a,
    seq = @seq

D. If a cursor must be used, choose the cursor type appropriately: if you only loop through the data, use a forward-only cursor (the FAST_FORWARD option); in general, a static cursor (STATIC) is sufficient. A sketch follows item E.

E. Beware of the uncertainty of dynamic cursors. If the record set queried by a dynamic cursor is modified, the cursor refreshes automatically, which makes dynamic cursors unpredictable: another process, or the cursor's own loop, may change a record and thereby refresh the cursor's record set.
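The sketch promised under item D: a read-only, forward-only cursor declared with FAST_FORWARD:

DECLARE cur CURSOR FAST_FORWARD FOR
    SELECT fname FROM employee WHERE job_id = 10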

7. Try to use Indexes

After indexes are created, not every query uses them, and when they are used their efficiency varies greatly. As long as no index is forced in the query statement, the optimizer chooses indexes automatically based on the query's conditions and the statistics of the tables involved. So when writing SQL statements, we should try to write them so that the optimizer can use the indexes.

To enable the optimizer to efficiently use indexes, note the following when writing statements:

A. Do not perform calculations on indexed fields; transform the expression instead where possible, for example:

SELECT id FROM t WHERE num / 2 = 100

should be changed to:

SELECT id FROM t WHERE num = 100 * 2

Similarly,

SELECT id FROM t WHERE num / 2 = num1

should, if num has an index, be changed to:

SELECT id FROM t WHERE num = num1 * 2

but if only num1 has an index, it should not be changed.

Statements like the following have also been seen:

SELECT year, month, amount FROM balance
WHERE 100 * year + month = 2007 * 100 + 10

It should be changed to:

SELECT year, month, amount FROM balance
WHERE year = 2007 AND
      month = 10

B. Do not perform format conversions on indexed fields.

Example with a date field:

WHERE CONVERT(varchar(10), date_field, 120) = '2017-08-15'

should be changed to:

WHERE date_field >= '2017-08-15' AND date_field < '2017-08-16'

Examples with ISNULL conversions:

WHERE ISNULL(field, '') <> '' should be changed to: WHERE field <> ''

WHERE ISNULL(field, '') = '' should not be changed.

WHERE ISNULL(field, 'F') = 'T' should be changed to: WHERE field = 'T'

WHERE ISNULL(field, 'F') <> 'T' should not be changed.

C. Do not use functions on indexed fields.

WHERE LEFT(name, 3) = 'abc' or WHERE SUBSTRING(name, 1, 3) = 'abc'

should be changed to:

WHERE name LIKE 'abc%'

Examples with date queries:

WHERE DATEDIFF(day, date_field, '2017-11-30') = 0 should be changed to: WHERE date_field >= '2017-11-30' AND date_field < '2017-12-01'

WHERE DATEDIFF(day, date_field, '2017-11-30') > 0 should be changed to: WHERE date_field < '2017-11-30'

WHERE DATEDIFF(day, date_field, '2017-11-30') >= 0 should be changed to: WHERE date_field < '2017-12-01'

WHERE DATEDIFF(day, date_field, '2017-11-30') < 0 should be changed to: WHERE date_field >= '2017-12-01'

WHERE DATEDIFF(day, date_field, '2017-11-30') <= 0 should be changed to: WHERE date_field >= '2017-11-30'

D. Do not concatenate multiple indexed fields in a condition.

For example:

WHERE fname + '.' + lname = 'haiwei.yang'

should be changed to:

WHERE fname = 'haiwei' AND lname = 'yang'

8. Pay attention to how join conditions are written

The join conditions of a multi-table join matter greatly to index selection, so write them with special care.

A. When joining multiple tables, write the join conditions in full; it is better to repeat one than to omit one.

B. Use clustered indexes in join conditions whenever possible.

C. Note the difference between ON conditions and WHERE conditions.

9. Other notes

Experience shows that the earlier a problem is found, the cheaper it is to fix. Many performance problems can be discovered in the coding stage. To detect them early, note the following:

A. Programmers should know and care about the data volume of each table.

B. Do coding and unit testing against a database with a large data volume; it is best to test with realistic data.

C. Keep every SQL statement as simple as possible.

D. Do not frequently update data in tables that have triggers.

E. Be aware of the limitations of database functions and their performance.

10. Learn to tell good SQL statements from bad ones

Being able to judge the quality of SQL statements yourself is very important; only then can you write efficient statements.

A. View the execution plan of the SQL statement. In Query Analyzer, Ctrl + L displays the graphical execution plan. In general, pay attention to the operators with the largest cost percentage; hovering the mouse over an operator displays its properties. Note the estimated cost data and the operator names, which are commonly Clustered Index Seek, Index Seek, Clustered Index Scan, Index Scan, and Table Scan; a scan means the statement has room for optimization. You can also use the statements:

SET SHOWPLAN_ALL ON

-- statement to be analyzed

SET SHOWPLAN_ALL OFF

to view the details of the execution plan in text form.

B. Use SQL Profiler (the event probe) to trace the running system, watch for suspicious statements, and record their duration, CPU usage, and I/O, so that statement efficiency can be analyzed.

C. Use the Windows performance monitor to watch CPU and I/O counters.
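Alongside the execution plan, per-statement I/O and timing figures can also be switched on in Query Analyzer; a small sketch:

SET STATISTICS IO ON
SET STATISTICS TIME ON

-- statement to be measured

SET STATISTICS IO OFF
SET STATISTICS TIME OFF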

IV. Testing, Trial Run, and Maintenance

The main task of testing is to find problems and fix the system, and performance is an important aspect of this. The focus should be on finding performance problems and doing the necessary optimization, mainly of statements and indexes.

In the trial run and maintenance phase, the system runs in its real environment, so the problems found are broader and may involve the operating system, the network, and multi-user concurrency; optimization likewise extends to the operating system, the network, and the physical storage of the database.

The optimization methods of this phase are not expanded here; only index maintenance methods are described:

A. Use the DBCC DBREINDEX statement, or an SQL Server maintenance plan, to rebuild indexes regularly. The purpose of rebuilding is to restore index efficiency.

B. Use the UPDATE STATISTICS statement, or an SQL Server maintenance plan, to update index statistics regularly. The purpose is to make the statistics reflect the actual data more closely, so the optimizer can choose more appropriate indexes.

C. Use the DBCC CHECKDB or DBCC CHECKTABLE statements to check database tables and indexes for problems; these two statements can also repair common problems.

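A hedged sketch of items A through C (the table name is a placeholder):

DBCC DBREINDEX ('orders')    -- rebuild all indexes on the table
UPDATE STATISTICS orders     -- refresh the table's statistics
DBCC CHECKTABLE ('orders')   -- check the table and its indexes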

V. Personal Opinions on Some Online Claims

1. "Try to avoid null value determination on the field in the WHERE clause. Otherwise, the engine will discard the index and perform full table scanning, for example:

Select ID from t where num is null

You can set the default value 0 on num to make sure that the num column in the table does not have a null value, and then query it like this:

Select ID from t where num = 0"

Personal opinion: in my tests, IS NULL can also be resolved with an index seek. Besides, 0 and NULL are different concepts: the two queries above differ both in meaning and in the records they return.

2. "Try to avoid using it in the WHERE clause! = Or <> operator. Otherwise, the engine will discard the index for full table scanning ."

Personal Opinion: after testing, <> you can also use index seek to search.

3. "Try to avoid using or in the WHERE clause to connect to the condition. Otherwise, the engine will discard the index and perform full table scanning, for example:

Select ID from t where num = 10 or num = 20

You can query it as follows:

Select ID from t where num = 10

Union all

Select ID from t where num = 20"

Personal Opinion: I disagree with the full table scan statement.

4. "In and not in should also be used with caution. Otherwise, full table scanning will occur, for example:

Select ID from t where num in (1, 2, 3)

For continuous values, you can use between instead of in:

Select ID from t where num between 1 and 3"

Personal Opinion: I disagree with the full table scan statement.

5. "If a parameter is used in the WHERE clause, a full table scan is also performed. Because SQL parses local variables only at runtime, the optimizer cannot postpone the selection of the access plan to runtime; it must be selected at compilation. However, if an access plan is created during compilation, the value of the variable is still unknown and thus cannot be used as an input for index selection. The following statement performs a full table scan:

Select ID from t where num = @ num

You can change it to force query to use the index:

Select ID from T with (index name) where num = @ num"

My opinion: the explanation of local variables is strange. If a parameter is used to affect the performance, the stored procedure should be exclusive to the school. I insist on my opinion on forced indexing.

6. "Use varchar/nvarchar as much as possible to replace Char/nchar, because the storage space of the variable-length field is small, which can save storage space. Secondly, for queries, searching in a relatively small field is obviously more efficient."

Personal Opinion: "searching in a relatively small field is obviously more efficient", but the length of the field does not seem to be determined by the variable length, it is determined by the business itself. In sqlserver6.5 or earlier versions, the comparison speed of variable-length string fields is much slower than that of fixed-length string fields. Therefore, for those versions, we recommend using fixed-length fields to store some key fields. In version 2000, the comparison method of variable-length string fields is modified, which is slightly different from the comparison speed of fixed-length fields. For convenience, we use a large number of variable-length fields.

7. As for claims about the order of joined tables or the order of conditions: in my tests, these orders do not affect performance in SQL Server. Such claims may apply to Oracle.

Address: http://www.cnblogs.com/eng308/archive/2010/06/04/1751735.html
