Designing an application does not seem difficult, but achieving optimal system performance is not easy. There are many choices to make in development tools, database design, program structure, query design, interface selection, and so on, depending on the specific application requirements and the skills of the development team. Taking SQL Server as an example, this article discusses application performance optimization techniques from the perspective of the back-end database and offers some helpful suggestions.
1 Database Design
To achieve optimal performance from a SQL Server application, a good database design is critical. In practice, many SQL Server applications perform poorly because their databases are poorly designed. A good database design must therefore consider the following issues.
1.1 Normalization of the logical database
In general, the logical database design should satisfy the first three normal forms:
1. First normal form: columns contain no repeating groups or multiple values.
2. Second normal form: every non-key field must depend on the entire primary key, not on only part of a composite primary key.
3. Third normal form: a non-key field must not depend on another non-key field.
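As an illustration of these rules, a table that stores an order together with a repeating group of item columns violates the first normal form; a normalized sketch splits it into two tables (all names here are hypothetical):

```sql
-- Unnormalized: a repeating group of item columns violates 1NF.
-- CREATE TABLE OrderBad (OrderNo int, Item1 varchar(20), Item2 varchar(20), ...)

-- Normalized: one row per order item; every non-key column
-- depends on the whole key and on nothing but the key.
CREATE TABLE Orders (
    OrderNo int NOT NULL PRIMARY KEY,
    CustNo  int NOT NULL
)
CREATE TABLE OrderItem (
    OrderNo int NOT NULL,
    ItemNo  int NOT NULL,
    Qty     int NOT NULL,
    PRIMARY KEY (OrderNo, ItemNo)
)
```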
A design that follows these rules produces fewer columns per table and more tables, thereby reducing data redundancy and the number of pages needed to store the data. However, table relationships may then have to be resolved through complex joins, which can degrade system performance. To some extent, denormalization can improve performance. Denormalization can be done in many different ways depending on performance considerations, but the following methods have often been shown to improve performance.
1. If a normalized design produces many four-way or wider join relationships, consider adding redundant attributes (columns) to the database entities (tables).
2. Commonly used computed fields, such as totals and maximums, may be stored in the database entity.
For example, the plan management system of a project has a schedule whose fields include project number, annual plan, second-phase plan, adjustment plan, supplementary plan, and so on. The plan total (annual plan + second-phase plan + adjustment plan + supplementary plan) is frequently used in queries and reports. When the table contains a large number of records, it is worthwhile to add the plan total to the table as a separate field. A trigger can be used on the server to maintain data consistency.
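A minimal sketch of such a trigger, assuming hypothetical table and column names (Schedule, AnnualPlan, and so on):

```sql
-- Hypothetical schedule table; the last column is the denormalized total.
CREATE TABLE Schedule (
    ProjectNo  int   NOT NULL PRIMARY KEY,
    AnnualPlan money NOT NULL DEFAULT 0,
    SecondPlan money NOT NULL DEFAULT 0,
    AdjustPlan money NOT NULL DEFAULT 0,
    SupplPlan  money NOT NULL DEFAULT 0,
    PlanTotal  money NOT NULL DEFAULT 0
)
GO

-- Recompute PlanTotal whenever the component columns change,
-- so queries and reports can read it without summing each time.
CREATE TRIGGER trg_Schedule_Total
ON Schedule
FOR INSERT, UPDATE
AS
UPDATE s
SET PlanTotal = s.AnnualPlan + s.SecondPlan + s.AdjustPlan + s.SupplPlan
FROM Schedule s
JOIN inserted i ON s.ProjectNo = i.ProjectNo
GO
```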
3. Redefine entities to reduce the overhead of external attribute data or row data. The corresponding denormalization types are:
(1) Splitting one entity (table) into two tables (dividing all the attributes into two groups). This separates frequently accessed data from less frequently accessed data. This method requires the primary key to be duplicated in each table. The resulting design facilitates parallel processing and produces tables with fewer columns.
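A sketch of such a vertical split, with hypothetical names; note that the primary key appears in both tables:

```sql
-- Frequently accessed columns stay in a narrow table.
CREATE TABLE EmployeeHot (
    EmpNo  int         NOT NULL PRIMARY KEY,
    Name   varchar(30) NOT NULL,
    DeptNo smallint    NOT NULL
)
-- Rarely accessed, bulky columns move to a second table
-- keyed by the same primary key.
CREATE TABLE EmployeeCold (
    EmpNo  int   NOT NULL PRIMARY KEY,
    Resume text  NULL,
    Photo  image NULL
)
```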
(2) Splitting one entity (table) into two tables (dividing all the rows into two groups). This approach applies to entities (tables) that will contain large amounts of data. Applications often need to keep history, but the history is rarely used, so frequently accessed current data can be separated from rarely accessed historical data. This approach is also beneficial if the rows are accessed as subsets by logical workgroups (department, sales region, geographic area, and so on).
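A sketch of the horizontal split, separating current rows from rarely used history (table names, columns, and the cutoff date are illustrative):

```sql
-- Current, frequently accessed rows.
CREATE TABLE SalesCurrent (
    SaleNo   int      NOT NULL PRIMARY KEY,
    SaleDate datetime NOT NULL,
    Amount   money    NOT NULL
)
-- Historical rows with the same structure.
CREATE TABLE SalesHistory (
    SaleNo   int      NOT NULL PRIMARY KEY,
    SaleDate datetime NOT NULL,
    Amount   money    NOT NULL
)
-- Periodically move old rows into the history table,
-- e.g. everything before the current year.
INSERT SalesHistory SELECT * FROM SalesCurrent WHERE SaleDate < '19980101'
DELETE SalesCurrent WHERE SaleDate < '19980101'
```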
1.2 Generating the physical database
To choose the basic physical implementation strategy correctly, you must understand the operational characteristics of the database access patterns and of the hardware resources, mainly memory and disk subsystem I/O. This is a broad topic, but the following guidelines may help.
1. The data type of each table column should reflect the minimum storage space required for the data; this is especially important for indexed columns. For example, using a smallint instead of an int lets indexed fields be read more quickly and lets more rows of data fit on one data page, which also reduces I/O operations.
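For instance, if a department number never exceeds the smallint range, a 2-byte smallint halves the key width of a 4-byte int, packing more index entries per page (table and column names here are hypothetical):

```sql
CREATE TABLE Employee (
    EmpNo  int         NOT NULL PRIMARY KEY,
    DeptNo smallint    NOT NULL,  -- smallest type that fits the data
    Name   varchar(30) NOT NULL
)
-- The narrower the indexed column, the more keys per index page.
CREATE INDEX idx_Employee_Dept ON Employee (DeptNo)
```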
2. Placing a table on one physical device and its nonclustered indexes on a different physical device through SQL Server segments can improve performance. The benefit is especially obvious when the system uses several intelligent disk controllers and data striping technology.
3. Splitting a frequently used large table with SQL Server segments and placing it on database devices attached to two separate intelligent disk controllers can also improve performance. Data striping improves performance as well, because multiple disk heads are seeking at the same time.
4. Using SQL Server segments to store text or image column data on a separate physical device can improve performance. A dedicated intelligent controller can improve performance further.
2 Hardware Systems Related to SQL Server
The hardware design associated with SQL Server includes the system processor, memory, disk subsystem, and network; these four parts basically form the hardware platform on which Windows NT and SQL Server run.
2.1 System Processor (CPU)
Determining the CPU configuration according to your specific needs is a matter of estimating the CPU workload on the hardware platform. From past experience, the CPU configuration should be at least one 80586/100 processor. This is sufficient if you have only two or three users, but if you plan to support more users and critical applications, a Pentium Pro or PII class CPU is recommended.
2.2 Memory (RAM)
Determining the appropriate memory configuration for a SQL Server installation is critical to achieving good performance. SQL Server uses memory for the procedure cache, the data and index page caches, static server overhead, and configuration overhead. SQL Server can use up to 2GB of virtual memory, which is also the maximum configurable value. Another thing to consider is that Windows NT and all of its related services also occupy memory.
Windows NT provides each Win32 application with a 4GB virtual address space. This virtual address space is mapped to physical memory by the Windows NT Virtual Memory Manager (VMM); on some hardware platforms, physical memory can reach 4GB. SQL Server applications see only virtual addresses, so they cannot access physical memory directly; that access is controlled by the VMM. Windows NT allows the virtual address space to exceed the available physical memory, so SQL Server performance is reduced when more virtual memory is allocated to SQL Server than there is physical memory available.
These address spaces are set for the SQL Server system, so if other software runs on the same hardware platform (such as file and print sharing, or application services), you should take into account that it also occupies part of the memory. In general, the hardware platform should be configured with at least 32MB of memory, of which Windows NT takes at least 16MB. A simple rule of thumb is to add 100KB of RAM for each concurrent user. For example, with 100 concurrent users you need at least 32MB + 100 users × 100KB = 42MB of memory; the actual amount should be adjusted according to real operating conditions. It may be said that adding memory is the most economical way to improve system performance.
2.3 Disk Subsystem
Designing a good disk I/O system is an important aspect of implementing a good SQL Server solution. The disk subsystem discussed here has at least one disk controller and one or more hard disk units, and also involves disk setup and file system considerations. An intelligent SCSI-2 disk controller or disk array controller is a good choice; its characteristics are as follows:
(1) Controller cache.
(2) An on-board processor, which reduces interrupts to the system CPU.
(3) Asynchronous read and write support.
(4) 32-bit RAID support.
(5) Fast SCSI-2 drives.
(6) Read-ahead caching (at least one track).
3 Search Strategy
After the hardware platform has been carefully selected, a good database design achieved, and knowledge of user needs and the application acquired, it is time to design the queries and indexes. Two aspects are important for getting good query and index performance from SQL Server: the first is to create queries and indexes based on knowledge of the SQL Server optimizer; the second is to use SQL Server's performance features to enhance data access operations.
3.1 SQL Server Optimizer
The Microsoft SQL Server database kernel automatically optimizes data manipulation queries submitted to SQL Server with a cost-based query optimizer. Data manipulation queries are queries that support the SQL keywords WHERE or HAVING, such as SELECT, DELETE, and UPDATE. The cost-based query optimizer generates cost estimates for clauses based on statistical information.
An easy way to understand how the optimizer processes a query is to examine the output of the SHOWPLAN command. If you use a character-based tool (such as isql), you can type SET SHOWPLAN ON to get the SHOWPLAN output. If you use a graphical query tool, such as the query tool in SQL Enterprise Manager or ISQL/W, you can set configuration options to provide this information.
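A sketch of a character-based session using this legacy syntax; the Orders table and its columns are hypothetical, and SET NOEXEC ON is added so the plan is displayed without actually running the query:

```sql
-- Show the optimizer's chosen plan (index used, join order)
-- without executing the statement.
SET SHOWPLAN ON
SET NOEXEC ON
GO
SELECT OrderNo, Amount
FROM Orders
WHERE CustNo = 1001
GO
```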
SQL Server optimization is done in three phases: query analysis, index selection, and join selection.
1. Query analysis
During the query analysis phase, the SQL Server optimizer looks at each clause represented by the canonical query tree and determines whether it can be optimized. SQL Server generally tries to optimize the clauses that restrict a scan, for example search clauses and/or join clauses. But not all legitimate SQL syntax can be turned into optimizable clauses, for example a clause that contains the SQL inequality operator "<>". Because "<>" is an exclusive operator rather than an inclusive one, the selectivity of such a clause cannot be determined without scanning the entire table. When a relational query contains a non-optimizable clause, the execution plan uses a table scan to access that part of the query. For an optimizable clause in the query tree, the optimizer performs index selection.
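As a sketch of the idea, a "<>" clause such as the first query below gives the optimizer no usable range to estimate, while an equivalent pair of inclusive range clauses can be evaluated against an index (the Orders table is hypothetical, and the rewrite assumes Status is a numeric code):

```sql
-- Non-optimizable: "<>" excludes one value, so the optimizer
-- cannot bound the selectivity and falls back to a table scan.
SELECT OrderNo FROM Orders WHERE Status <> 0

-- Candidate rewrite: two inclusive range clauses that an index
-- on Status can evaluate.
SELECT OrderNo FROM Orders WHERE Status < 0 OR Status > 0
```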