I have recently been doing SQL performance optimization case analysis, and while sorting through past cases I found one that is rather interesting, so I am sharing it here.
I joined this project at the start of Phase 2. Phase 1 was an internal financial information portal; one of its functions was to collect the earnings reports of various listed companies and then perform various analyses and chart displays of the data. The number of users was small, only about a hundred.
Phase 2 was intended for external users. The initial expectation was no more than 50 concurrent online users, so we ran a performance test at 50 user requests per second. The result: the chart-type data displays from Phase 1 took roughly 5 minutes to respond, which made them effectively unusable. For context, our server configuration was 2 web servers handling front-end user access behind F5 load balancing (roughly 25 users/second of traffic per server), and 2 database servers running AlwaysOn. Breakpoint tracing through the code quickly identified the database response layer as the performance bottleneck.
Having identified the database as the problem area, we started monitoring the database server itself. We chose the usual performance metrics (memory, CPU, network, disk I/O) to track while re-running our performance tests.
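As a side note, a minimal sketch of how a few related numbers can also be sampled from inside SQL Server; this is illustrative only, not the actual monitoring script we used, and the bulk of the tracking was done with Windows Performance Monitor:

    -- Sample a few server-level counters from the performance-counter DMV.
    -- Counter names are standard SQL Server counters chosen for illustration.
    SELECT object_name, counter_name, instance_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name IN (N'Batch Requests/sec',
                           N'SQL Compilations/sec',
                           N'Page life expectancy');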
The results showed that CPU usage was basically pinned at 100% throughout the test, memory usage reached 98%, and the database disk I/O response time exceeded 2 minutes, while the network indicators were normal. From this we drew a few hypotheses:
1. The database is constantly recompiling T-SQL statements, which under high concurrency drives CPU usage up.
2. There are queries that need index optimization.
3. There may be deadlocks on the data.
4. Some tables hold very large amounts of data.
Sure enough, when we reviewed the application code, the programmers were not using parameterized queries (database variables) correctly; instead they were dynamically concatenating T-SQL statements, so under high concurrency the statements were being compiled again and again. This confirmed our first hypothesis. A sketch of the difference follows.
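As a hedged illustration of the issue (the table and column names here are made up, not from the actual project), compare a dynamically concatenated statement with the parameterized form using sp_executesql:

    -- Concatenating values into the statement text produces a different
    -- string for every value, so SQL Server compiles a new plan on each call:
    DECLARE @companyId INT = 1001;
    DECLARE @sql NVARCHAR(MAX) =
        N'SELECT * FROM dbo.Earnings WHERE CompanyId = '
        + CAST(@companyId AS NVARCHAR(20));
    EXEC (@sql);   -- recompiled for every distinct @companyId

    -- Parameterized with sp_executesql, the statement text stays constant,
    -- so the cached plan can be reused across calls:
    EXEC sp_executesql
        N'SELECT * FROM dbo.Earnings WHERE CompanyId = @companyId',
        N'@companyId INT',
        @companyId = @companyId;

With the parameterized form, plan reuse goes up and the compilation rate (and the CPU spent on it) should drop under load.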
Next, we used SQL Server Profiler to capture the statements with the longest response times, analyzed them, reviewed their actual execution plans, and then refined the query conditions and added the missing indexes. The sketch below shows the typical shape of such an index change.
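For example, when the actual plan shows a scan on a large table for a filter-and-sort query, a covering nonclustered index is the usual fix; all names below are illustrative only:

    -- A query shaped like:
    --   SELECT ReportDate, Revenue FROM dbo.Earnings
    --   WHERE CompanyId = @companyId AND ReportDate >= @from
    --   ORDER BY ReportDate;
    -- can often be turned from a scan into a seek with a covering index:
    CREATE NONCLUSTERED INDEX IX_Earnings_CompanyId_ReportDate
        ON dbo.Earnings (CompanyId, ReportDate)
        INCLUDE (Revenue);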
To our surprise, at certain times even a simple single-table query could take up to 10 minutes to execute. Running sp_lock showed that one of the applications was bulk-loading data into that table with INSERT INTO ... SELECT, which locked the table and blocked the reads. Switching the load to BULK INSERT solved the problem; a sketch follows.
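A sketch of the change, with a hypothetical file path and table name:

    -- The long-running set-based insert that held locks on the target table:
    --   INSERT INTO dbo.Earnings (CompanyId, ReportDate, Revenue)
    --   SELECT CompanyId, ReportDate, Revenue FROM staging.EarningsImport;
    -- was replaced with a bulk load from a flat file:
    BULK INSERT dbo.Earnings
    FROM 'D:\import\earnings_201601.csv'
    WITH (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR   = '\n',
        BATCHSIZE       = 10000  -- each batch commits separately, so locks are released between batches
    );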
Phase 1 had also been running for a year, and some tables had genuinely grown to hundreds of millions of rows. Since the chart displays are based on a single month of data, we partitioned these tables by month, storing each month's data in a separate data file, which also produced good results. A sketch of the partitioning follows.
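A minimal sketch of month-based partitioning under these assumptions (one filegroup and data file per month; all names and boundary dates are illustrative and the filegroups are assumed to exist already):

    CREATE PARTITION FUNCTION pf_MonthlyByReportDate (DATE)
        AS RANGE RIGHT FOR VALUES ('2016-01-01', '2016-02-01', '2016-03-01');

    CREATE PARTITION SCHEME ps_MonthlyByReportDate
        AS PARTITION pf_MonthlyByReportDate
        TO (FG_201512, FG_201601, FG_201602, FG_201603);  -- one filegroup (data file) per month

    -- Tables are then created (or rebuilt) on the partition scheme, e.g.:
    -- CREATE TABLE dbo.Earnings (..., ReportDate DATE NOT NULL, ...)
    --     ON ps_MonthlyByReportDate (ReportDate);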
In the final performance test, the response time dropped from the original 5 minutes to 2 seconds, which met our requirements.