Objective:
Keep an empty-cup (beginner's) mindset and use performance profiling, focusing on measuring where the server spends its time. Think about: 1. how to confirm that the server has reached its best possible performance state; 2. why a given statement is not fast enough; and how to diagnose the intermittent problems users describe as "stalls", "pile-ups", or "freezes".
This chapter introduces tools and techniques for optimizing whole-machine performance and the execution speed of single statements, and for diagnosing problems that are hard to observe; it shows how to measure the system, generate profiling reports, and analyze the system's stack.
3.1 Introduction
Performance: measured as the time required to complete a task; in other words, performance is response time.
Throughput: the number of queries completed per unit of time (the reciprocal of the response-time definition of performance).
The first step: figure out where the time goes, i.e. what it is actually spent on.
If measurement does not produce the answer, the measurement method is wrong or incomplete; measure only the activities that need to be optimized.
Do not start or stop measuring at the wrong time, and do not measure aggregated information instead of the target activity itself; locate and optimize subtasks.
Principle: what cannot be measured cannot be effectively optimized.
3.1.1 Optimizing through profiling
Profiling: The main way to measure and analyze where time is spent
Two steps: 1. measure the time spent on each task; 2. aggregate the results and sort them so the most important tasks come first.
Similar tasks can be grouped together to get the results you need from a profiling report. The report lists all tasks, one per row: task name, number of executions, total time consumed, average execution time, and percentage of total time, sorted in descending order of time consumed.
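For illustration, a hypothetical profiling report might look like the following (task names and numbers are invented, not taken from any real measurement):

Task             Count   Total time (s)   Avg (s)    % of total
SELECT queries    1000   8.0              0.008      53.3
INSERT queries     500   4.5              0.009      30.0
Other              200   2.5              0.0125     16.7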
Types of profiling:
Execution-time-based profiling: which tasks take the longest time to execute.
Wait-based profiling: where tasks are blocked for the longest time.
3.1.2 Understanding Performance Profiling
Important information that a profile misses:
1. Whether a query is worth optimizing
Queries that account for only a small share of total response time are not worth optimizing; when the cost of optimizing exceeds the benefit, stop optimizing.
2. Outliers
Some tasks need optimizing even when they do not rank at the top of the report, for example tasks that execute only a few times but are particularly slow each time.
3. Unknown unknowns
Lost time: the difference between the total task time and the sum of the time actually measured. For example, if a query takes 10 s in total but the measured subtasks add up to 9.7 s, 0.3 s is lost time. Even if the tool does not show it, stay alert to the possibility that such problems exist.
4. Hidden details
A profile cannot show the distribution of response times; averages hide information, and more detail requires histograms, percentiles, the standard deviation, and the index of dispersion.
5. Inability to analyze interactively at higher layers of the stack.
3.2 Profiling an application: top-down
Common causes of application performance bottlenecks:
1. External resources, such as calls to external web services or search engines.
2. The application needs to process large amounts of data, such as parsing a very large XML file.
3. Expensive operations performed in a loop, such as misusing regular expressions.
4. Inefficient algorithms, such as brute-force search.
Recommendation: new projects should consider including profiling code from the start.
3.2.1 Measuring PHP applications: (none)
3.3 Profiling MySQL queries
3.3.1 Profiling server workload
Two ways to capture MySQL queries in a log file:
1. Slow query log: low overhead and high precision, but it can consume a lot of disk space; if it stays enabled long term, deploy a log-rotation tool, or enable it only while collecting load samples. Since MySQL 5.1 the threshold supports microsecond granularity.
2. General log: queries are logged when the request reaches the server, so it includes neither response time nor the execution plan.
Analyzing query logs
Work top-down: first generate a profile report (pt-query-digest), then examine the parts that need special attention.
3.3.2 Profiling a single query
Think about why the query takes so long and how to optimize it.
Using SHOW PROFILE (available after MySQL 5.1)
Check support: SHOW VARIABLES LIKE '%pro%';
Disabled by default; enable it for the session with SET profiling = 1; then execute statements on the server (disable with SET profiling = 0;).
Syntax:
SHOW PROFILE [type [, type] ...] [FOR QUERY n] [LIMIT row_count [OFFSET offset]]
type:
ALL -- show all overhead information
BLOCK IO -- block I/O related overhead
CONTEXT SWITCHES -- context-switch related overhead
CPU -- CPU-related overhead
IPC -- message send and receive overhead
MEMORY -- memory-related overhead
PAGE FAULTS -- page-fault related overhead
SOURCE -- overhead together with source_function, source_file, source_line
SWAPS -- swap-count related overhead
In essence, this overhead information is recorded in the INFORMATION_SCHEMA.PROFILING table.
SHOW PROFILES; -- list recent queries with their IDs
SHOW PROFILE FOR QUERY 2; -- detailed cost of the specified query (here, the second one)
SHOW PROFILE CPU FOR QUERY 2; -- cost of a specific section, here the CPU part
SHOW PROFILE BLOCK IO, CPU FOR QUERY 2; -- several resource costs at the same time
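A minimal end-to-end sketch (sakila.film is an illustrative table, not one mentioned above):

SET profiling = 1;                        -- enable profiling for this session
SELECT COUNT(*) FROM sakila.film;         -- the statement to be profiled (illustrative)
SHOW PROFILES;                            -- note the Query_ID assigned to it
SHOW PROFILE CPU, BLOCK IO FOR QUERY 1;   -- CPU and block I/O columns for query 1
-- the same data, aggregated by state and sorted by total time:
SELECT STATE, SUM(DURATION) AS total_s
  FROM INFORMATION_SCHEMA.PROFILING
 WHERE QUERY_ID = 1
 GROUP BY STATE
 ORDER BY total_s DESC;
SET profiling = 0;                        -- disable when done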
Using SHOW STATUS: counters
SHOW GLOBAL STATUS is server-wide; without GLOBAL it is scoped to the current connection (session); be aware of the scope.
Counters show how often activities occur; commonly used ones are the handler counters and the temporary-file and temporary-table counters.
For example, a query may create a temporary table and then use handler operations (think references or pointers) to access it, which changes the corresponding counters in the SHOW STATUS result.
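A minimal sketch of reading the counters around a single statement (sakila.film_actor is an illustrative table, not one mentioned above):

FLUSH STATUS;                                -- reset this session's counters
SELECT * FROM sakila.film_actor;             -- the statement being examined (illustrative)
SHOW STATUS WHERE Variable_name LIKE 'Handler%'
   OR Variable_name LIKE 'Created%';         -- handler and temporary table/file counters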
Using the slow query log
Statements whose response time exceeds the long_query_time threshold are written to the slow query log (the log can go to a file or to a database table; if performance matters, writing to a file is recommended). The default threshold is 10 seconds, and the log must be enabled manually.
Relevant variables:
(1) slow_query_log: ON enables the slow query log, OFF disables it.
(2) slow_query_log_file: the file the slow query log is written to (note: the default name is host_name-slow.log). For the log to actually go to this file, the output destination must be set to FILE; check it with SHOW VARIABLES LIKE '%log_output%';.
(3) long_query_time: the slow-query threshold; a statement is logged as slow if its execution time exceeds this value. The default is 10 seconds.
(4) log_queries_not_using_indexes: if set to ON, all queries that do not use an index are logged (note: setting only log_queries_not_using_indexes to ON while slow_query_log is OFF has no effect; slow_query_log must also be ON). It is normally enabled during performance tuning; when it is on, SQL that uses a full index scan is also written to the slow query log.
The commands above are only valid for the running server and are lost when MySQL restarts; to make them permanent, configure my.cnf.
Check the output destination (file or table): SHOW VARIABLES LIKE '%log_output%';
Enable the general query log: SET GLOBAL general_log = ON;  disable it: SET GLOBAL general_log = OFF;
Send general-log output to a table: SET GLOBAL log_output = 'TABLE';  to a file: SET GLOBAL log_output = 'FILE';  to both: SET GLOBAL log_output = 'FILE,TABLE';
Count slow query statements: SHOW GLOBAL STATUS LIKE '%slow%';
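A minimal runtime sketch for a sampling session (the 1-second threshold and the file path are illustrative choices; for permanent settings put the equivalents in my.cnf):

SET GLOBAL slow_query_log = ON;                                    -- enable the slow query log
SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow-query.log';  -- illustrative path
SET GLOBAL log_output = 'FILE';                                    -- write to a file rather than a table
SET GLOBAL long_query_time = 1;                                    -- illustrative threshold; applies to new connections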
Fields recorded in the log:
the statement that caused the slow query (sql_text), its query time (Query_time), lock time (Lock_time), the number of rows examined (Rows_examined), and so on.
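For illustration, an entry in the slow query log looks roughly like this (all values are invented; the exact header format varies between MySQL versions):

# Time: 2023-01-01T12:00:00.000000Z
# User@Host: app[app] @ localhost []
# Query_time: 12.345678  Lock_time: 0.000123  Rows_sent: 1  Rows_examined: 1000000
SET timestamp=1672574400;
SELECT COUNT(*) FROM orders WHERE status = 'open';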
Using the bundled slow-query-log analysis tool: mysqldumpslow
perl mysqldumpslow -s c -t N slow-query.log   (N = number of top entries to return)
-s specifies the sort order: c, t, l, r sort by record count, query time, lock time, and rows returned respectively; ac, at, al, ar sort by the corresponding averages. -t means "top", followed by the number of entries to return. -g can be followed by a regular expression to match, case-insensitively.
Using the Performance Schema
It monitors the MySQL server and collects performance parameters; it is implemented by the PERFORMANCE_SCHEMA storage engine and its tables, with low overhead.
It is local to one server; its tables are in-memory tables whose contents are repopulated at server startup and discarded at shutdown, and changes to them are neither replicated nor written to the binary log.
Characteristics:
The Performance Schema configuration can be changed dynamically by executing SQL statements, and the changes take effect immediately (see the sketch after this list).
It monitors server events: an event is anything the server does that takes time and has been instrumented so that timing information can be collected.
It provides a way to inspect the internal runtime state of the database server, focused on performance data.
It is specific to one server instance; its tables belong to that instance, and modifications to them are not replicated or written to the binary log.
The storage engine collects event data through instrumentation points ("perceptual points") and stores it in the performance_schema database, which can be queried with SELECT statements.
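A minimal sketch of configuring and querying it with SQL (assumes MySQL 5.6 or later, where statement digests are available, and sufficient privileges on performance_schema):

-- dynamically enable statement instrumentation; takes effect immediately
UPDATE performance_schema.setup_instruments
   SET ENABLED = 'YES', TIMED = 'YES'
 WHERE NAME LIKE 'statement/sql/%';
UPDATE performance_schema.setup_consumers
   SET ENABLED = 'YES'
 WHERE NAME LIKE 'events_statements%';
-- then ask where the time is going, grouped by statement digest
SELECT DIGEST_TEXT, COUNT_STAR, SUM_TIMER_WAIT
  FROM performance_schema.events_statements_summary_by_digest
 ORDER BY SUM_TIMER_WAIT DESC
 LIMIT 10;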
Supplementary note: the three basic databases present after a fresh MySQL installation
mysql
Contains privilege configuration, events, storage-engine status, master-slave (replication) information, logs, time-zone information, user rights, and so on.
information_schema
An abstraction over database metadata that lets the runtime state of the database be queried with SQL; each query against INFORMATION_SCHEMA takes mutually exclusive access to the metadata, which can affect access performance for other databases.
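A minimal sketch of such a metadata query ('sakila' is an illustrative schema name):

SELECT TABLE_NAME, ENGINE, TABLE_ROWS
  FROM INFORMATION_SCHEMA.TABLES
 WHERE TABLE_SCHEMA = 'sakila';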
performance_schema
An in-memory database that uses the PERFORMANCE_SCHEMA storage engine to capture, via an event mechanism, the runtime state of the MySQL server and store it in the performance_schema database. Note the naming: with an underscore, performance_schema is the database; written as two separate words, Performance Schema refers to the database performance feature as a whole and also names the storage engine.
Related articles:
"MySQL Database" chapter III Interpretation: Server performance profiling (bottom)
"MySQL Database" chapter II interpretation: MySQL benchmark test