Optimization of SQL Server Database Performance


Editor's note: Database performance optimization is closely tied to the database management system, and the specific techniques differ considerably from one DBMS to another. Following "Sybase Database Performance Tuning" and "Oracle Server Performance Tuning Strategy" in issues 48 and 49 of this newspaper in 2003, which covered the Sybase and Oracle database management systems respectively, this issue introduces performance optimization methods for the SQL Server database.

The database is the core of an enterprise information system, and how well it is applied directly affects the level of enterprise management. Choosing a high-performance database product does not by itself produce a good database application system: if the database system is poorly designed, it not only makes the client and server programs harder to write and maintain, it also hurts the actual performance of the system. Generally speaking, during the analysis, design, testing, and commissioning phases of a management information system the data volume is small, so designers and testers tend to notice only whether the functions work and rarely notice performance shortcomings. Only after the system has been in production for some time is the performance found to be degrading, and by then improving it costs far more human and material resources, with the end result amounting to patching the whole system. The design phase is therefore where optimization effort should be focused. Taking the SQL Server database as an example, this paper discusses how to optimize database system performance from the following aspects.

Database Design

To optimize a SQL Server database, you should first have a good database design. In practice, many SQL Server systems perform poorly precisely because they were poorly designed. A good database design must consider the following issues:

1. Logical database normalization

In general, the logical database design should satisfy the first three normal forms:

First normal form: no column contains repeating groups or multiple values;

Second normal form: every non-key field must depend on the entire primary key and must not depend on only part of a composite primary key;

Third normal form: a non-key field must not depend on another non-key field.

A database design that adheres to these rules produces more tables with fewer columns each, thereby reducing data redundancy and the number of pages needed to store the data.
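As a hedged illustration (the table and column names below are hypothetical, not from the article), an order table that repeats the customer's name and city in every row violates these rules; splitting it into a Customer table and an Orders table removes the redundancy:

    -- Hypothetical denormalized table: customer attributes repeat in every order row.
    -- CREATE TABLE OrdersFlat (
    --     OrderID      int         NOT NULL PRIMARY KEY,
    --     CustomerName varchar(40) NOT NULL,
    --     CustomerCity varchar(30) NOT NULL,
    --     OrderDate    datetime    NOT NULL
    -- )

    -- Normalized form: customer attributes depend only on the customer key.
    CREATE TABLE Customer (
        CustomerID   int         NOT NULL PRIMARY KEY,
        CustomerName varchar(40) NOT NULL,
        CustomerCity varchar(30) NOT NULL
    )

    CREATE TABLE Orders (
        OrderID    int      NOT NULL PRIMARY KEY,
        CustomerID int      NOT NULL REFERENCES Customer (CustomerID),
        OrderDate  datetime NOT NULL
    )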

2. Generating the physical database

To choose the right physical implementation strategy, you must understand and take advantage of how the database will be accessed and what hardware resources are available, especially memory and disk subsystem I/O. Here are some common tips:

The data type of each table column should reflect the minimum storage space required for the data, especially for indexed columns. For example, using the smallint type instead of the integer type lets indexed fields be read faster and lets more rows of data fit on each data page, thereby reducing I/O operations.
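For example (a sketch with hypothetical names), if a department code never exceeds a few thousand distinct values, declaring it as smallint rather than int keeps the indexed column, and therefore the index pages, small:

    -- smallint (2 bytes) instead of int (4 bytes) for the indexed column:
    -- more rows fit on each data and index page, so fewer I/O operations are needed.
    CREATE TABLE Employee (
        EmployeeID int      NOT NULL PRIMARY KEY,
        DeptCode   smallint NOT NULL,
        HireDate   datetime NOT NULL
    )

    CREATE INDEX IX_Employee_DeptCode ON Employee (DeptCode)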

You can improve performance by using SQL Server segments to place a table on one physical device and its nonclustered indexes on a different physical device. The benefit is even more pronounced when the system uses several intelligent disk controllers and data-striping technology.

You can also improve performance by using SQL Server segments to split a frequently used large table and place it on database devices attached to separate intelligent disk controllers. Spreading the data improves performance as well, because multiple disk heads can seek in parallel.

Using SQL Server segments to store the data of text or image columns on a separate physical device also improves performance; a dedicated intelligent controller improves it further.
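The article uses the older "segment" terminology; on SQL Server 7.0 and 2000 the same kind of separation is normally done with filegroups. A minimal sketch under that assumption, with a hypothetical database name and purely illustrative file paths on different physical drives:

    -- Assumption: filegroups on separate physical drives stand in for the segments described above.
    ALTER DATABASE SalesDB ADD FILEGROUP FG_DATA
    ALTER DATABASE SalesDB ADD FILEGROUP FG_INDEX

    ALTER DATABASE SalesDB ADD FILE
        (NAME = SalesData1, FILENAME = 'E:\mssql\data\SalesData1.ndf', SIZE = 500MB)
        TO FILEGROUP FG_DATA

    ALTER DATABASE SalesDB ADD FILE
        (NAME = SalesIndex1, FILENAME = 'F:\mssql\data\SalesIndex1.ndf', SIZE = 500MB)
        TO FILEGROUP FG_INDEX

    -- The table lives on one device, its nonclustered index on another.
    CREATE TABLE BigOrders (
        OrderID    int      NOT NULL PRIMARY KEY,
        CustomerID int      NOT NULL,
        OrderDate  datetime NOT NULL
    ) ON FG_DATA

    CREATE NONCLUSTERED INDEX IX_BigOrders_CustomerID
        ON BigOrders (CustomerID) ON FG_INDEX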

Application System Design

In the design of the application system, the following points deserve particular attention:

1. Rational use of indexes

An index is an important data structure in a database, and its fundamental purpose is to improve query efficiency. Indexes should be used judiciously, following these principles:

Build indexes on columns that are frequently joined but are not specified as foreign keys (for fields that are not frequently joined, the optimizer generates indexes automatically). Build indexes on columns that are frequently sorted or grouped (that is, used in GROUP BY or ORDER BY). Build indexes on columns with many distinct values that often appear in conditional expressions, and do not index columns with few distinct values. For example, the "Sex" column of an employee table has only two values, "male" and "female", so there is no need to index it: the index would not improve query efficiency, and it could significantly slow down updates. If several columns need to be sorted together, a composite index can be created on those columns.
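As an illustration of these principles (the schema is hypothetical), indexes go on the frequently joined column and on the columns sorted together, while the two-valued Sex column is left unindexed:

    -- Index the frequently joined column.
    CREATE INDEX IX_Orders_CustomerID ON Orders (CustomerID)

    -- Composite index for queries that sort on both columns (ORDER BY DeptCode, HireDate).
    CREATE INDEX IX_Employee_Dept_Hire ON Employee (DeptCode, HireDate)

    -- No index on Employee (Sex): only two distinct values, so it would not help
    -- queries and would only slow down updates.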

2. Avoid or simplify sorting

You should try to simplify or avoid repeatedly sorting large tables. When the optimizer can use an index to produce output in the required order, it skips the sort step. To avoid unnecessary sorting, build indexes correctly and consolidate database tables sensibly (this may occasionally compromise table normalization, but the gain in efficiency makes it worthwhile). If sorting is unavoidable, try to simplify it, for example by narrowing the range of columns being sorted.
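For instance (a sketch with hypothetical names), if reports regularly order recent rows by OrderDate, an index on that column lets the optimizer return rows in index order instead of sorting them:

    -- With this index, the ORDER BY can be satisfied from the index order
    -- and the explicit sort step disappears from the plan.
    CREATE INDEX IX_Orders_OrderDate ON Orders (OrderDate)

    SELECT OrderID, OrderDate
    FROM   Orders
    WHERE  OrderDate >= '20040101'
    ORDER  BY OrderDate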

3. Eliminate sequential access to large table rows

In nested queries, sequential access to a table can be fatal to query efficiency. Sometimes it can be avoided by rewriting the query as a union. Note also that even when indexes exist on all the columns being checked, certain forms of the WHERE clause force the optimizer to fall back to sequential access.
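A hedged example of the union rewrite (hypothetical names): an OR across two different columns often forces a scan even when both columns are indexed, whereas a UNION of two single-column conditions lets each branch use its own index:

    -- An OR across two indexed columns may still force a sequential scan:
    -- SELECT OrderID FROM Orders WHERE CustomerID = 1001 OR SalesRepID = 7

    -- Rewritten as a union, each branch can use its own index:
    SELECT OrderID FROM Orders WHERE CustomerID = 1001
    UNION
    SELECT OrderID FROM Orders WHERE SalesRepID = 7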

4. Avoid correlated subqueries

If a column appears in both the main query and the WHERE clause of a subquery, the subquery will most likely have to be re-executed every time the column's value in the main query changes. The deeper the queries are nested, the lower the efficiency, so subqueries should be avoided whenever possible. If a subquery is unavoidable, filter out as many rows as possible inside the subquery.
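A sketch of one such rewrite (hypothetical names): the correlated subquery is re-evaluated for each outer row, while the equivalent join leaves the optimizer free to choose a better plan:

    -- Correlated subquery: evaluated once per row of the outer query.
    -- SELECT c.CustomerName
    -- FROM   Customer c
    -- WHERE  EXISTS (SELECT * FROM Orders o
    --                WHERE o.CustomerID = c.CustomerID
    --                  AND o.OrderDate >= '20040101')

    -- Equivalent join formulation:
    SELECT DISTINCT c.CustomerName
    FROM   Customer c
    JOIN   Orders o ON o.CustomerID = c.CustomerID
    WHERE  o.OrderDate >= '20040101'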

5. Avoid difficult regular expressions

The MATCHES and LIKE keywords support wildcard matching, but such matching is particularly time-consuming. For example, in SELECT * FROM customer WHERE zipcode LIKE "98___", a sequential scan is used even if an index has been created on the zipcode field. If the statement is changed to SELECT * FROM customer WHERE zipcode > "98000", the index is used when the query executes, which obviously speeds it up significantly.

6. Using temporary tables to speed up queries

Sorting a subset of a table into a temporary table can sometimes speed up queries, because it avoids repeated sort operations and in other ways simplifies the optimizer's work. The temporary table has fewer rows than the main table and its physical order is the required order, which reduces disk I/O and can drastically cut the query workload. Note, however, that a temporary table does not reflect changes made to the main table after it is created; when data in the main table is modified frequently, be careful not to lose data.
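A minimal sketch of the technique (hypothetical names): a subset of the main table is copied into a temporary table, put into the required order, and then queried repeatedly:

    -- Copy the subset of interest into a temporary table...
    SELECT CustomerID, OrderID, OrderDate
    INTO   #RecentOrders
    FROM   Orders
    WHERE  OrderDate >= '20040101'

    -- ...give it the physical order the later queries need...
    CREATE CLUSTERED INDEX IX_RecentOrders ON #RecentOrders (CustomerID, OrderDate)

    -- ...and run the repeated queries against the much smaller temporary table.
    SELECT   CustomerID, COUNT(*) AS OrderCount
    FROM     #RecentOrders
    GROUP BY CustomerID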

Operating System-Related Optimizations

The performance of the operating system directly affects the performance of the database. If the operating system has problems such as CPU overload, excessive memory swapping, or a disk I/O bottleneck, then tuning inside the database alone will not improve overall system performance. The Windows NT System Monitor can be used to monitor the various devices and locate performance bottlenecks.

A common CPU performance problem is a lack of processing power. The processing power of a system is determined by the number, type, and speed of its CPUs; if the system does not have enough CPU capacity, it cannot process transactions fast enough to meet demand. System Monitor can be used to determine CPU usage: if utilization runs at 75% or higher for long periods, you may have a CPU bottleneck and should consider upgrading the CPU. Before upgrading, however, monitor the other characteristics of the system; if the real cause is inefficient SQL statements, optimizing those statements will bring CPU utilization down. Only when you have decided that more processing power is genuinely needed should you add a CPU or replace it with a faster one.

The amount of memory SQL Server can use is one of the most critical factors in SQL Server performance, and its relationship to the I/O subsystem is also very important. For example, on a system with frequent I/O operations, the more memory SQL Server has available to cache data, the less physical I/O must be performed, because data is read from the data cache rather than from disk. Conversely, a lack of memory causes obvious disk read/write bottlenecks, because an insufficient system cache forces more physical disk I/O.

You can use System Monitor to check SQL Server's Buffer Cache Hit Ratio counter; if the hit ratio is frequently below 90%, you should add more memory.
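The same counter can also be read from inside SQL Server. A sketch, assuming SQL Server 2000, where master..sysperfinfo exposes the counters that System Monitor reads:

    -- Buffer cache hit ratio = ratio counter / base counter * 100.
    SELECT 100.0 * r.cntr_value / b.cntr_value AS [Buffer cache hit ratio %]
    FROM   master..sysperfinfo r
    JOIN   master..sysperfinfo b ON b.object_name = r.object_name
    WHERE  r.counter_name = 'Buffer cache hit ratio'
      AND  b.counter_name = 'Buffer cache hit ratio base'
      AND  r.object_name LIKE '%Buffer Manager%'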

An I/O subsystem bottleneck is the most common hardware-related problem a database system encounters. A poorly configured I/O subsystem causes performance problems second only to those caused by poorly written SQL statements. The underlying issue is that a disk drive can perform only a limited number of I/O operations; an ordinary disk drive can typically handle about 85 I/O operations per second. If the disk drives are overloaded, I/O requests to them are queued and SQL Server's I/O latency becomes very long. Locks may then be held longer, or threads may sit idle waiting for resources, and the result is that the performance of the whole system suffers.

Solving I/O subsystem problems is probably the easiest of the three: in most cases, adding disk drives resolves the performance problem.

Of course, many factors affect performance, and every application is different, so it is very difficult to prescribe a general optimization scheme; adjustments must be made continually according to the specific circumstances encountered during system development, maintenance, and operation.

(Computer World Newspaper, Issue No. 200402, C12)


