MySQL benchmarking (1): reasons, strategies, and ideas

Reasons for benchmarking
- Verify assumptions: confirm suspected issues with benchmark results and simulated data rather than guesswork.
- Reproduce and diagnose anomalies seen in the production system.
- Evaluate how the system is currently behaving by comparing against historical benchmark results.
- Simulate loads higher than production to find the system's "ceiling" and the bottlenecks that impose it.
- Plan for future business growth: size hardware, network, and other resources based on benchmark assessments so the production environment stays stable.
- Test how the application behaves under high concurrency (peak load). This applies not only to the database but to the whole stack: network, storage, application servers such as Tomcat, and so on.
- Compare different hardware and operating systems.
- Find the right configuration parameters for newly purchased equipment.
Benchmarking strategies
- Test the entire application environment: web server, application code, network, database, and so on. Benchmark the whole application, not just the database, because users care about page response time and the perceived speed of the site, not the database itself; most users do not even know a database is involved.
- MySQL is not the only possible bottleneck: when the database's load, slow-query log, and parameters all look fine, review the whole architecture. Working down from the application layer usually pays off more than optimizing the database in isolation.
For the database:
- Compare the performance of different schemas or queries, across different hardware, operating systems, database versions, and MySQL parameter configurations (see the sketch after this list).
- Benchmark specifically around performance issues surfaced by an application problem.
- For a particular scenario (e.g., peak traffic), run a short benchmark, then repeat it periodically (every other day, every three days, or weekly) over different time windows so trends can be analyzed.
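Below is a minimal sketch of such a comparison, assuming the PyMySQL driver is installed; the connection details, the `orders` table, and the two query variants are hypothetical placeholders for whatever schemas or queries you actually want to compare.

```python
# Minimal sketch: time two query variants against the same MySQL instance.
# Assumes PyMySQL is installed; connection details and queries are placeholders.
import time
import pymysql

conn = pymysql.connect(host="127.0.0.1", user="bench", password="secret",
                       database="bench_db", autocommit=True)

QUERIES = {
    # Two hypothetical variants of the same lookup; replace with the
    # schemas/queries you actually want to compare.
    "by_indexed_column":   "SELECT id FROM orders WHERE customer_id = 42",
    "by_unindexed_column": "SELECT id FROM orders WHERE note = 'x'",
}

RUNS = 100

for name, sql in QUERIES.items():
    with conn.cursor() as cur:
        cur.execute(sql)          # warm-up run, not timed
        cur.fetchall()
    start = time.perf_counter()
    for _ in range(RUNS):
        with conn.cursor() as cur:
            cur.execute(sql)
            cur.fetchall()
    elapsed = time.perf_counter() - start
    print(f"{name}: {RUNS / elapsed:.1f} queries/s, "
          f"avg {elapsed / RUNS * 1000:.2f} ms")

conn.close()
```

Running the same script on different hardware, MySQL versions, or parameter configurations gives directly comparable numbers for the same workload.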
Metrics to analyze in a benchmark
- Throughput
Throughput is the number of transactions completed per unit of time. Common units are transactions per second (TPS) and transactions per minute (TPM).
- Response time
The time required to complete a unit of work, for example the response time of a particular page in the application. An average is usually replaced by a percentile response time: if 95% of requests for a page complete within 20 ms, the page's 95th-percentile response time is 20 ms (see the sketch after this list).
- Concurrency
Concurrency is an easily confused metric. "How many users are on the site at the same time" is not the same as "how many concurrent requests there are": HTTP is stateless and many users are only viewing static pages, so user count does not equal concurrent web-server requests, and web-server requests in turn do not equal concurrent database requests.
As a result, the number of concurrent database requests is much smaller. In a well-designed application, a site with 100,000 simultaneous users may generate only 30-50 concurrent database requests.
- Scalability
When business pressure grows, the system must be able to scale. Web applications such as those running on Tomcat can scale out behind a load balancer; the database can scale by spreading read requests across replicas. When throughput can no longer be improved by scaling vertically, you have to scale out to increase throughput and sustain the performance of the overall architecture.
When should you scale out, and where are the bottlenecks? Answering that requires collecting status indicators from the production environment, for both the web applications and the database, and analyzing them to find the bottlenecks.
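As a concrete illustration of the throughput and response-time metrics above, here is a minimal client-side sketch, again assuming PyMySQL; the connection details and the sample `counters` table transaction are placeholders for a representative workload.

```python
# Minimal sketch: measure throughput (TPS) and 95th-percentile response time
# from the client side. PyMySQL, the connection details, and the sample
# transaction are assumptions; substitute your own workload.
import time
import pymysql

conn = pymysql.connect(host="127.0.0.1", user="bench", password="secret",
                       database="bench_db")

latencies = []
DURATION = 30  # seconds to run the benchmark

end_at = time.perf_counter() + DURATION
while time.perf_counter() < end_at:
    t0 = time.perf_counter()
    with conn.cursor() as cur:
        # A stand-in transaction; replace with a representative one.
        cur.execute("UPDATE counters SET value = value + 1 WHERE id = 1")
    conn.commit()
    latencies.append(time.perf_counter() - t0)

conn.close()

tps = len(latencies) / DURATION                      # approximate throughput
p95 = sorted(latencies)[int(len(latencies) * 0.95)]  # approximate 95th percentile

print(f"transactions: {len(latencies)}")
print(f"throughput:   {tps:.1f} TPS")
print(f"p95 latency:  {p95 * 1000:.2f} ms")
```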
The benchmarking approach
Collect, analyze, decide, optimize:
- Collect: gather as much state data from the production environment as possible (see the collector sketch after this list). Optimizing based on objective facts is more reliable than relying on experience alone; decisions made purely from experience can be dangerous.
- Analyze: whatever tool collected the benchmark data, a person still has to examine the numbers and decide, based on the objective data, what needs to be optimized.
- Decide: act on the analysis. For example: database parameters need tuning, SQL or indexes need optimizing, the schema needs reworking, a cache layer should be added, or the hardware, network, or storage needs upgrading; the same applies at the application or code level. These conclusions need to be communicated so the team gets the largest performance gain at the lowest cost; it takes a team discussion, not a one-sided decision.
- Optimize: aim for the 40% performance gain that costs 10% of the effort; there is always a trade-off. For example, suppose a database performance improvement could come either from changing the table structure or adding an index, or from tuning parameters that turn out to be suboptimal. If the site is small or medium-sized and a short maintenance window is acceptable, the analysis may conclude that changing the parameter configuration and restarting the database, with only about 10 seconds of downtime, achieves good enough performance without touching the existing table structure. That is a balanced adjustment.
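As one possible form of the "collect" step above, the following sketch periodically samples a few standard MySQL global status counters (Threads_running, Threads_connected, Com_commit, Com_select, Questions) and appends them to a CSV file for later analysis. PyMySQL, the connection details, and the output file name are assumptions.

```python
# Minimal sketch: periodically sample MySQL global status counters and
# append them to a CSV file for later analysis. Connection details are
# placeholders; the counters listed are standard MySQL status variables.
import csv
import time
import pymysql

COUNTERS = ("Threads_running", "Threads_connected",
            "Com_commit", "Com_select", "Questions")
INTERVAL = 10      # seconds between samples
SAMPLES = 360      # one hour of data at a 10 s interval

conn = pymysql.connect(host="127.0.0.1", user="monitor", password="secret")

with open("mysql_status.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(("timestamp",) + COUNTERS)
    for _ in range(SAMPLES):
        with conn.cursor() as cur:
            cur.execute("SHOW GLOBAL STATUS")
            status = dict(cur.fetchall())   # Variable_name -> Value
        writer.writerow([int(time.time())] + [status[c] for c in COUNTERS])
        f.flush()
        time.sleep(INTERVAL)

conn.close()
```

Graphing the deltas of Com_commit or Questions over time gives a throughput baseline, while Threads_running shows the real concurrency the database sees, which is useful when deciding whether and where to scale out.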