Sharing Some Experience with MySQL Database Performance Optimization

Source: Internet
Author: User
Tags: commit, flush, mysql client, mysql manual, mysql query, sleep, mysql database, mysql slow query log

I. Performance Detection and Bottleneck Analysis

1. Common Commands for Performance Detection

SHOW STATUS

Displays server status information. Reference: "MySQL SHOW STATUS Command Detailed".

SHOW PROCESSLIST

Shows the SQL statements currently executing, including their execution state, whether a table is locked, and so on. Reference: "MySQL SHOW PROCESSLIST Command Detailed".

SHOW VARIABLES

Displays system variables. Reference: "MySQL SHOW VARIABLES Command Detailed".

2. Common Commands for Bottleneck Analysis

Count the total number of processes running under the mysql user:

ps -ef | awk '{print $1}' | grep mysql | grep -v grep | wc -l

Host performance status:

# uptime
13:05:52 up … min,  1 user,  load average: 0.00, 0.00, 0.00

CPU usage:

# top    or    # vmstat

Disk I/O volume:

# vmstat    or    # iostat

Swap usage (memory):

# free -m
Database performance status:

QPS (queries per second)

QPS = Questions (or Queries) / seconds

mysql> SHOW GLOBAL STATUS LIKE 'Questions';

TPS (transactions per second)

TPS = (Com_commit + Com_rollback) / seconds

mysql> SHOW STATUS LIKE 'Com_commit';
mysql> SHOW STATUS LIKE 'Com_rollback';
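Since Questions, Com_commit, and Com_rollback are cumulative counters, the two formulas above are evaluated by sampling the counters twice and dividing the deltas by the interval. A minimal Python sketch; the counter values are invented sample numbers, not from a real server:

```python
def qps_tps(before, after, seconds):
    """Compute QPS and TPS from two SHOW GLOBAL STATUS samples taken
    `seconds` apart. `before` and `after` map counter name -> value;
    the counters are cumulative, so we use their deltas."""
    qps = (after["Questions"] - before["Questions"]) / seconds
    tps = ((after["Com_commit"] - before["Com_commit"])
           + (after["Com_rollback"] - before["Com_rollback"])) / seconds
    return qps, tps

# Hypothetical counter samples taken 10 seconds apart:
t0 = {"Questions": 1_000_000, "Com_commit": 50_000, "Com_rollback": 500}
t1 = {"Questions": 1_012_500, "Com_commit": 50_900, "Com_rollback": 520}
print(qps_tps(t0, t1, 10))  # -> (1250.0, 92.0)
```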
Key buffer hit ratio

key_buffer_read_hits  = (1 - Key_reads / Key_read_requests) * 100%
key_buffer_write_hits = (1 - Key_writes / Key_write_requests) * 100%

mysql> SHOW STATUS LIKE 'Key%';
InnoDB buffer hit ratio

innodb_buffer_read_hits = (1 - Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests) * 100%

mysql> SHOW STATUS LIKE 'Innodb_buffer_pool_read%';
Query cache hit rate

query_cache_hits = (Qcache_hits / (Qcache_hits + Qcache_inserts)) * 100%

mysql> SHOW STATUS LIKE 'Qcache%';
Table cache status

mysql> SHOW STATUS LIKE 'Open%';
Thread cache hit rate

thread_cache_hits = (1 - Threads_created / Connections) * 100%

mysql> SHOW STATUS LIKE 'Thread%';
mysql> SHOW STATUS LIKE 'Connections';
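The hit-ratio formulas above all follow the same pattern and can be computed together from one snapshot of SHOW STATUS counters. A Python sketch with invented sample counters:

```python
def hit_ratios(s):
    """Compute the cache hit ratios described above from a dict of
    SHOW STATUS counter values. Returns percentages."""
    return {
        "key_buffer_read_hits":
            (1 - s["Key_reads"] / s["Key_read_requests"]) * 100,
        "innodb_buffer_read_hits":
            (1 - s["Innodb_buffer_pool_reads"]
                 / s["Innodb_buffer_pool_read_requests"]) * 100,
        "query_cache_hits":
            s["Qcache_hits"] / (s["Qcache_hits"] + s["Qcache_inserts"]) * 100,
        "thread_cache_hits":
            (1 - s["Threads_created"] / s["Connections"]) * 100,
    }

# Invented sample counters:
status = {
    "Key_reads": 10, "Key_read_requests": 10_000,
    "Innodb_buffer_pool_reads": 50, "Innodb_buffer_pool_read_requests": 100_000,
    "Qcache_hits": 9_000, "Qcache_inserts": 1_000,
    "Threads_created": 20, "Connections": 2_000,
}
for name, pct in hit_ratios(status).items():
    print(f"{name}: {pct:.2f}%")
```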

Lock status

mysql> SHOW STATUS LIKE '%lock%';

Replication delay

mysql> SHOW SLAVE STATUS;
Temporary table status

mysql> SHOW STATUS LIKE 'Created_tmp%';

Binlog cache usage

mysql> SHOW STATUS LIKE 'Binlog_cache%';

Innodb_log_waits count

mysql> SHOW STATUS LIKE 'Innodb_log_waits';


II. Slow Query Configuration and Analysis

1. Slow Log Configuration

Viewing the slow log configuration

Check whether the slow query log is enabled:

mysql> SHOW VARIABLES LIKE 'log_slow_queries';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| log_slow_queries | ON    |
+------------------+-------+
1 row in set (0.00 sec)
Check the slow query time threshold:

mysql> SHOW VARIABLES LIKE 'long_query_time';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| long_query_time | 1     |
+-----------------+-------+
1 row in set (0.00 sec)
Enabling the slow query log

Method one

Open my.ini (on Linux the file is my.cnf), find the [mysqld] section, and add the following configuration:

[mysqld]
slow_query_log=1
# Enables the slow query log; slow_query_log=0 disables it, slow_query_log=1 enables it.
log="C:/temp/mysql.log"
# Directory where the general log file is stored.
log_slow_queries="C:/temp/mysql_slow.log"
# Slow query log location (versions before 5.5).
slow_query_log_file="C:/temp/mysql_slow.log"
# Slow query log location (version 5.5 and later).
long_query_time=1
# Slow queries that run longer than 1s will be recorded.
long_query_time=0.1
# 5.1.21 and later support sub-second precision; this sets the threshold to 100ms.

Restart MySQL for the changes to take effect.

Method two

Enable the slow query log:

mysql> SET GLOBAL slow_query_log = ON;

Set the slow query time threshold:

mysql> SET long_query_time = 2;

This is a temporary configuration and is lost after MySQL restarts.

Testing the slow query configuration

Enter the MySQL console and execute the following SQL statement:

SELECT SLEEP(2);

Then open the slow query log to verify that the statement was recorded.

2. Slow Query Analysis Tool: mysqldumpslow

By opening the log file we can see which SQL statements executed inefficiently:

[root@localhost mysql]# more slow.log

# Time: 141028 17:00:29
# User@Host: root[root] @ localhost []
# Query_time: 2.000330  Lock_time: 0.000000  Rows_sent: 1  Rows_examined: 0
SET timestamp=1414486829;
SELECT SLEEP(2);

The log records SQL statements whose query time exceeded 2 seconds; queries that took less than 2 seconds do not appear here.

If the slow query log contains many records, you can use the mysqldumpslow tool (installed with the MySQL client) to aggregate them. mysqldumpslow groups similar queries in the log file and displays summarized results.

Enter the log directory and run:

[root@localhost mysql]# ./bin/mysqldumpslow slow.log

Reading mysql slow query log from slow.log
Count: 1  Time=2.00s (2s)  Lock=0.00s (0s)  Rows=1.0 (1), root[root]@mysql
  select sleep(N)
mysqldumpslow options

/path/mysqldumpslow -s c -t 10 /database/mysql/slow-query.log

This outputs the 10 SQL statements with the highest occurrence count.

-s specifies the sort order: c, t, l, r sort by record count, query time, lock time, and rows returned respectively; ac, at, al, ar mean the corresponding reverse order;

-t means "top N", i.e. return only the first N entries;

-g takes a regular expression to match against; matching is case-insensitive.

For example:

/path/mysqldumpslow -s r -t 10 /database/mysql/slow-log

Gets the 10 queries that return the most rows.

/path/mysqldumpslow -s t -t 10 -g "left join" /database/mysql/slow-log

Gets the first 10 queries, sorted by time, that contain a LEFT JOIN.

The mysqldumpslow command makes it easy to pick out exactly the query statements we care about; monitoring, analyzing, and optimizing MySQL query statements is a very important step in MySQL optimization. Enabling the slow query log incurs logging overhead and will, to some extent, consume CPU resources and affect MySQL performance, but it can be enabled in stages to locate performance bottlenecks.
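The core of what mysqldumpslow does is abstract literals (numbers become N, quoted strings become 'S') so that similar queries group together, then tally counts and times. A rough Python sketch of that grouping idea, with hypothetical log entries; this is an illustration, not mysqldumpslow's actual implementation:

```python
import re
from collections import defaultdict

def normalize(sql):
    """Abstract literals the way mysqldumpslow does: strings -> 'S', numbers -> N."""
    sql = re.sub(r"'[^']*'", "'S'", sql)
    sql = re.sub(r"\b\d+\b", "N", sql)
    return re.sub(r"\s+", " ", sql.strip().lower())

def aggregate(entries):
    """entries: (sql, query_time) pairs parsed from the slow log.
    Returns {normalized_sql: (count, total_time)} -- a summary in the
    spirit of mysqldumpslow's output."""
    summary = defaultdict(lambda: [0, 0.0])
    for sql, t in entries:
        key = normalize(sql)
        summary[key][0] += 1
        summary[key][1] += t
    return {k: tuple(v) for k, v in summary.items()}

# Hypothetical slow-log entries:
log = [
    ("SELECT SLEEP (2)", 2.0),
    ("SELECT SLEEP (3)", 3.0),
    ("SELECT * FROM tb WHERE name = 'abc'", 1.2),
]
print(aggregate(log))
```

The two SLEEP queries collapse into a single `select sleep (n)` entry with count 2, just as mysqldumpslow reports `select sleep (N)` in the sample output above.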

3. Analyzing Queries with EXPLAIN

Use the EXPLAIN keyword to simulate how the optimizer executes a SQL query, which tells you how MySQL processes your SQL statement. This helps you analyze performance bottlenecks in your queries or table structure. Through the EXPLAIN command you can learn:

The reading order of the tables

The type of each data-read operation

Which indexes could be used

Which indexes are actually used

The references between tables

How many rows of each table the optimizer examines




mysql> DESCRIBE test;
+-------+------------+------+-----+---------+----------------+
| Field | Type       | Null | Key | Default | Extra          |
+-------+------------+------+-----+---------+----------------+
| id    | bigint(20) | NO   | PRI | NULL    | auto_increment |
| name  | char(10)   | YES  | MUL | NULL    |                |
+-------+------------+------+-----+---------+----------------+
2 rows in set (0.01 sec)

mysql> EXPLAIN SELECT name FROM test WHERE id = 5;
+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------+
| id | select_type | table | type  | possible_keys | key     | key_len | ref   | rows | Extra |
+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------+
|  1 | SIMPLE      | test  | const | PRIMARY       | PRIMARY | 8       | const |    1 |       |
+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------+
1 row in set (0.00 sec)


EXPLAIN fields

table: shows which table this row of output is about.

possible_keys: the indexes that might apply to this table. If empty, there is no applicable index; you may be able to add an appropriate index on a relevant column from the WHERE clause.

key: the index actually used. If NULL, no index was used. When MySQL chooses a suboptimal index, you can use USE INDEX (index) to force an index or IGNORE INDEX (index) to force it to be ignored.

key_len: the length of the index used. Without loss of accuracy, shorter is better.

ref: shows which column of the index is used, or a constant where possible.

rows: the number of rows MySQL believes it must examine to return the requested data.

type: one of the most important fields; it shows what access type the query uses. The join types from best to worst are: system, const, eq_ref, ref, range, index, and ALL.

system, const: part of the query can be converted to a constant, e.g. WHERE id = 1 where id is the primary key or a unique key.

eq_ref: accesses the index and returns a single matching row (usually occurs in joins, where the index used by the query is a primary key or unique key).

ref: accesses the index and returns the rows matching a single value (may return multiple rows); usually occurs with =.

range: this join type uses the index to return rows in a range, e.g. when using > or < to search on an indexed column (note: not necessarily better than index).

index: a full scan in index order; the advantage is that no sort is needed, the disadvantage is that the whole index is scanned.

ALL: a full table scan, which should be avoided whenever possible.

Extra: additional information on how MySQL resolves the query. Common values:

Using index: only the index is used, avoiding access to the table.

Using where: a WHERE clause is used to filter the data. Not every WHERE clause shows Using where, e.g. when the index is accessed with =.

Using temporary: a temporary table is used.

Using filesort: an extra sort pass is used (e.g. an ORDER BY v1 where v1 has no index).

Range checked for each record (index map: N): no good index was found.




mysql> SELECT * FROM test;
+----+------+
| id | name |
+----+------+
|  1 | aa   |
|  2 | abc  |
|  3 | aab  |
|  4 | acf  |
|  5 | aac  |
|  6 | acd  |
|  7 | cvf  |
+----+------+
7 rows in set (0.00 sec)

mysql> EXPLAIN SELECT * FROM test WHERE name > 'a';
+----+-------------+-------+------+---------------+------+---------+------+------+-------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra       |
+----+-------------+-------+------+---------------+------+---------+------+------+-------------+
|  1 | SIMPLE      | test  | ALL  | NULL          | NULL | NULL    | NULL |    7 | Using where |
+----+-------------+-------+------+---------------+------+---------+------+------+-------------+
1 row in set (0.00 sec)

mysql> EXPLAIN SELECT * FROM test WHERE name > 'a' LIMIT 3;
+----+-------------+-------+-------+---------------+------+---------+------+------+--------------------------+
| id | select_type | table | type  | possible_keys | key  | key_len | ref  | rows | Extra                    |
+----+-------------+-------+-------+---------------+------+---------+------+------+--------------------------+
|  1 | SIMPLE      | test  | range | test          | test | 31      | NULL |    3 | Using where; Using index |
+----+-------------+-------+-------+---------------+------+---------+------+------+--------------------------+
1 row in set (0.00 sec)
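The best-to-worst ordering of access types described above lends itself to automated checks, e.g. flagging EXPLAIN rows that fall below a threshold in a scripted audit. A hypothetical Python helper (the function name and threshold are illustrative, not part of MySQL):

```python
# Join/access types from best to worst, as listed above.
TYPE_RANK = ["system", "const", "eq_ref", "ref", "range", "index", "ALL"]

def worse_than(access_type, threshold="range"):
    """Return True if an EXPLAIN `type` value is worse than `threshold`
    in the best-to-worst ordering above."""
    return TYPE_RANK.index(access_type) > TYPE_RANK.index(threshold)

print(worse_than("ALL"))    # full table scan -> True
print(worse_than("const"))  # -> False
```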


4. Analyzing Queries with PROFILING

Slow query logging tells you which SQL statements are inefficient, and EXPLAIN shows the specifics of how a SQL statement executes and uses indexes; you can also combine these with SHOW commands to inspect execution state.

If the EXPLAIN information is not detailed enough, the profiling commands can give you more precise information about the system resources a SQL statement consumes as it executes.

SHOW PROFILE command format:

SHOW PROFILE [type [, type] ... ]
    [FOR QUERY n]
    [LIMIT row_count [OFFSET offset]]

type:
    ALL
  | BLOCK IO
  | CONTEXT SWITCHES
  | CPU
  | IPC
  | MEMORY
  | PAGE FAULTS
  | SOURCE
  | SWAPS

Profiling is off by default. The following statement checks its state:


mysql> SELECT @@profiling;
+-------------+
| @@profiling |
+-------------+
|           0 |
+-------------+
1 row in set (0.00 sec)

Enable it with mysql> SET profiling=1; then execute the SQL statements you want to test:




mysql> SET profiling=1;
Query OK, 0 rows affected (0.02 sec)

mysql> SELECT * FROM test;
+----+------+
| id | name |
+----+------+
|  1 | aa   |
|  3 | aab  |
|  5 | aac  |
|  2 | abc  |
|  6 | acd  |
|  4 | acf  |
|  7 | cvf  |
+----+------+
7 rows in set (0.39 sec)

mysql> SHOW PROFILES\G
*************************** 1. row ***************************
Query_ID: 1
Duration: 0.39353075
   Query: SELECT * FROM test
1 row in set (0.00 sec)

ERROR:
No query specified

mysql> SHOW PROFILE FOR QUERY 1;
+--------------------+----------+
| Status             | Duration |
+--------------------+----------+
| starting           | 0.024071 |
| Opening tables     | 0.305523 |
| System lock        | 0.000320 |
| Table lock         | 0.014546 |
| init               | 0.000050 |
| optimizing         | 0.000446 |
| statistics         | 0.000359 |
| preparing          | 0.000452 |
| executing          | 0.000006 |
| Sending data       | 0.047042 |
| end                | 0.000021 |
| query end          | 0.000005 |
| freeing items      | 0.000663 |
| logging slow query | 0.000025 |
| cleaning up        | 0.000004 |
+--------------------+----------+
15 rows in set (0.05 sec)

mysql> SHOW PROFILES\G lists the ID and execution time of each executed SQL statement.

mysql> SHOW PROFILE FOR QUERY 1; gets detailed execution information for the corresponding SQL statement.

The 15 rows above are the resource information for a very simple SELECT statement; more complex SQL statements produce more rows and states, such as converting HEAP to MyISAM, Copying to tmp table, and so on. Those states do not appear here because the statement above involves no complex table operations. With the resource-consumption information from profiling, we can take targeted optimization measures.

After testing is complete, turn the parameter off:

mysql> SET profiling = 0;
Query OK, 0 rows affected (0.00 sec)

III. Index and Query Optimization

1. Index Optimization

Types of indexes

Normal index: the most basic index type, with no restrictions such as uniqueness.

Unique index: essentially the same as a normal index, except that all indexed column values must be unique.

Primary key: a unique index that must be declared as "PRIMARY KEY".

Full-text index: MySQL has supported full-text indexing and full-text search since 3.23.23. In MySQL the index type of a full-text index is FULLTEXT. Full-text indexes can be created on VARCHAR or TEXT columns.

Most MySQL indexes (PRIMARY KEY, UNIQUE, INDEX, and FULLTEXT) are stored in B-trees. Indexes on spatial column types use R-trees, and MEMORY tables support hash indexes.

Single-column and multiple-column (composite) indexes

An index can be a single-column or a multiple-column index. Using indexes on the relevant columns is one of the best ways to improve the performance of SELECT operations.

Multiple-column indexes

MySQL can create indexes over multiple columns; an index can include up to 16 columns. For some column types, the left prefix of a column can be indexed, and the order of the columns matters.

A multiple-column index can be thought of as a sorted array whose entries are built by concatenating the values of the indexed columns. In general, even the most restrictive single-column index is far less selective than a multiple-column index.

Leftmost prefix

Multiple-column indexes have one key property, the leftmost prefix (leftmost prefixing). If you have a multiple-column index key(firstname, lastname, age), MySQL uses it when the search condition involves the following column combinations, in order:

firstname, lastname, age
firstname, lastname
firstname

In other words, it is as if key(firstname, lastname) and key(firstname) were also established.
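The leftmost-prefix rule can be expressed as a small predicate: walk the index columns left to right and stop at the first one the WHERE clause does not constrain. A hypothetical Python sketch (simplified to equality conditions only):

```python
def usable_prefix(index_cols, where_cols):
    """Return the longest leftmost prefix of `index_cols` fully covered by
    conditions on `where_cols` -- the part of a composite index MySQL can
    use under the leftmost-prefix rule (equality conditions assumed)."""
    used = []
    for col in index_cols:
        if col not in where_cols:
            break  # a gap breaks the prefix; later index columns are unusable
        used.append(col)
    return used

idx = ["firstname", "lastname", "age"]
print(usable_prefix(idx, {"firstname", "lastname", "age"}))  # all three columns
print(usable_prefix(idx, {"firstname", "lastname"}))         # first two columns
print(usable_prefix(idx, {"lastname", "age"}))               # [] -- no leftmost column
```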

Indexes are mainly used for the following operations:

Quickly finding the rows that match a WHERE clause.

Eliminating rows from consideration; retrieving rows from other tables when performing joins.

Finding the MAX() or MIN() value for a specific indexed column key_col. This is optimized by a preprocessor that checks whether WHERE key_part_# = constant is used for all key parts that occur before key_col in the index. In this case, MySQL does a single key lookup for each MIN() or MAX() expression and replaces it with a constant. If all expressions are replaced with constants, the query returns at once. For example:

SELECT MIN(key2), MAX(key2) FROM tb WHERE key1=10;

Sorting or grouping a table when the sort or group uses a leftmost prefix of a usable key (for example, ORDER BY key_part_1, key_part_2). If DESC follows all key parts, the key is read in reverse order.

In some cases a query can be optimized to retrieve values without touching the data rows at all. If the query uses only columns from a table that are numeric and form a leftmost prefix of some key, the values can be retrieved from the index tree for greater speed:

SELECT key_part3 FROM tb WHERE key_part1=1;

Sometimes MySQL does not use an index even when one is available. One such case is when the optimizer estimates that using the index would require MySQL to access most of the rows in the table (in which case a table scan is probably faster). However, if such a query uses LIMIT to retrieve only some of the rows, MySQL uses the index anyway, because it can find the few rows faster and return them in the result.

Reasonable suggestions for building indexes

Smaller data types are usually better: smaller data types typically need less space on disk, in memory, and in the CPU cache, and are faster to process.

Simple data types are better: integer data is cheaper to process than character data, because string comparison is more complex. In MySQL you should use the built-in date and time data types rather than strings to store times, and an integer data type to store IP addresses.

Try to avoid NULL: a column should be declared NOT NULL unless you actually want to store NULL. In MySQL, columns that allow NULL are harder to optimize because they make indexes, index statistics, and comparison operations more complex. Use 0, a special value, or an empty string instead of NULL.

Some small suggestions and points to note when indexing and writing SQL statements

Use LIMIT 1 when the result set can only have one row.

Avoid SELECT *; always specify the columns you need. The more data you read from a table, the slower the query becomes: it increases the time spent on disk operations, and when the database server is separate from the web server it also adds very long network delays, simply because data that is not needed is transferred between servers.

Use joins instead of subqueries where possible: a join is more efficient because MySQL does not need to create a temporary table in memory to complete the query in two logical steps.

Use ENUM or CHAR instead of VARCHAR where appropriate, and use reasonable field lengths.

Use NOT NULL whenever possible.

Fixed-length tables can be faster.

Split very large DELETE or INSERT statements.

The smaller the columns in the query, the faster it is.

2. Query Optimization

In a query, the WHERE conditions are also a relatively important factor. Use as few, and as reasonable, WHERE conditions as possible, and put the condition that filters out the most data first, so that the later conditions have less data to search.

WHERE conditions that invalidate indexes

If the WHERE clause contains !=, MySQL cannot use the index.

The index does not work when the WHERE clause applies a MySQL function to the column, for example: SELECT * FROM tb WHERE LEFT(name, 4) = 'xxx'.

When you use LIKE for pattern matching, the index is effective for a prefix pattern but not a leading wildcard: in SELECT * FROM tbl1 WHERE name LIKE 'xxx%' the index is used, while '%xxx%' cannot use it.
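The LIKE rule can be checked mechanically: a pattern can use the index only when it begins with a literal (non-wildcard) prefix. A tiny hypothetical helper illustrating the rule:

```python
def like_can_use_index(pattern):
    """Per the rule above, a LIKE pattern can use an index only when it has
    a literal prefix, i.e. it does not begin with the wildcards % or _."""
    return not pattern.startswith(("%", "_"))

print(like_can_use_index("xxx%"))   # True  -- index usable
print(like_can_use_index("%xxx%"))  # False -- leading wildcard defeats the index
```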

IV. Configuration Optimization


1. Variables for Connection Requests

max_connections

The maximum number of client connections MySQL allows. Increasing this value increases the number of file descriptors mysqld requires. If the server receives many concurrent connection requests, it is advisable to raise this value to increase the number of parallel connections — within what the machine can support, because MySQL provides a connection buffer for each connection, so more connections cost more memory. Adjust the value appropriately rather than blindly raising it.

If the value is too small, the error "ERROR 1040: Too many connections" will appear frequently. You can check the current connection counts with the 'Conn%' wildcard (SHOW STATUS LIKE 'Conn%') to decide how large the value should be.

Maximum number of connections:

SHOW VARIABLES LIKE 'max_connections';

Maximum number of connections ever used:

SHOW STATUS LIKE 'Max_used_connections';
For example:

mysql> SHOW VARIABLES LIKE 'max_connections';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| max_connections | 100   |
+-----------------+-------+
1 row in set (0.07 sec)

mysql> SHOW STATUS LIKE 'Max%connections';
+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| Max_used_connections | 1     |
+----------------------+-------+
1 row in set (0.03 sec)
Max_used_connections / max_connections * 100% (ideal value ≈ 85%)

If Max_used_connections equals max_connections, then max_connections is set too low or the server load exceeds its limit; if the ratio is below 10%, the value is set too high.
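These guidelines can be captured in a small Python check, shown here with the sample values from the output above (max_connections = 100, Max_used_connections = 1); the function and its advice strings are illustrative:

```python
def connection_usage(max_used, max_conn):
    """Evaluate Max_used_connections / max_connections against the
    guidelines above (ideal value is around 85%)."""
    pct = round(max_used / max_conn * 100, 2)
    if max_used >= max_conn:
        advice = "max_connections too low (or server overloaded)"
    elif pct < 10:
        advice = "max_connections set too high"
    else:
        advice = "reasonable"
    return pct, advice

# Sample values from the output above:
print(connection_usage(1, 100))   # (1.0, 'max_connections set too high')
print(connection_usage(85, 100))  # (85.0, 'reasonable')
```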

back_log

The number of connection requests MySQL can queue. This matters when the main MySQL thread receives very many connection requests in a very short time. If MySQL's connection count reaches max_connections, new requests are placed in a backlog to wait for some connection to release resources; the size of that backlog is back_log, and if the number of waiting connections exceeds back_log, they are refused connection resources.

The back_log value indicates how many requests can be queued in the short interval before MySQL temporarily stops answering new requests. In other words, this value is the size of the listen queue for incoming TCP/IP connections; you only need to increase it if you expect many connections in a short period.

If, when watching the host process list (mysql> SHOW FULL PROCESSLIST), you find a large number of processes like 264084 | unauthenticated user | xxx.xxx.xxx.xxx | NULL | Connect | NULL | login | NULL waiting to connect, you need to increase the value of back_log.

The default value is 50; it can be tuned up to 128. On Linux it should be set to an integer smaller than 512.

interactive_timeout

The number of seconds the server waits for activity on an interactive connection before closing it. An interactive client is one that uses the CLIENT_INTERACTIVE option in mysql_real_connect().

The default value is 28800; it can be tuned down to 7200.

2. Buffer Variables

Global buffers

key_buffer_size

key_buffer_size specifies the size of the index buffer, which determines how fast indexes are processed, especially how fast they are read. By checking the status values Key_read_requests and Key_reads you can see whether key_buffer_size is set reasonably. The ratio Key_reads / Key_read_requests should be as low as possible — at least 1:100, and 1:1000 is better (these status values can be obtained with SHOW STATUS LIKE 'Key_read%').

key_buffer_size affects only MyISAM tables. Even if you do not use MyISAM tables, internal on-disk temporary tables are MyISAM tables and also use this value. The status value Created_tmp_disk_tables gives the details.

For example:

mysql> SHOW VARIABLES LIKE 'key_buffer_size';
+-----------------+-----------+
| Variable_name   | Value     |
+-----------------+-----------+
| key_buffer_size | 536870912 |
+-----------------+-----------+
1 row in set (0.00 sec)

key_buffer_size is 512MB. Now let's look at how it is being used:

mysql> SHOW GLOBAL STATUS LIKE 'Key_read%';
+-------------------+-------------+
| Variable_name     | Value       |
+-------------------+-------------+
| Key_read_requests | 27813678764 |
| Key_reads         | 6798830     |
+-------------------+-------------+
2 rows in set (0.00 sec)

There were 27,813,678,764 index read requests, of which 6,798,830 were not found in memory and had to read the index directly from disk. The index cache miss rate is computed as:

key_cache_miss_rate = Key_reads / Key_read_requests * 100%; around 1/1000 or better is good.

The default configuration value is 8388600 (8M); on a host with 4GB of memory it can be tuned to 268435456 (256MB).
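Applying the miss-rate formula to the sample counters above gives a quick sanity check in Python:

```python
# Counters from the sample SHOW GLOBAL STATUS output above
key_reads = 6_798_830
key_read_requests = 27_813_678_764

# key_cache_miss_rate = Key_reads / Key_read_requests * 100%
miss_rate = key_reads / key_read_requests * 100
print(f"{miss_rate:.4f}%")  # about 0.0244% -- roughly 1 miss per 4000 requests,
                            # well under the 1/1000 target
```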

query_cache_size

With query caching enabled, MySQL stores the results of SELECT statements in a buffer and, when the identical SELECT statement (case sensitive) arrives in the future, reads the result directly from the buffer.

By checking the Qcache_* status values you can see whether query_cache_size is set reasonably (obtain them with SHOW STATUS LIKE 'Qcache%'). If Qcache_lowmem_prunes is very large, the buffer is frequently running short; if Qcache_hits is also very large, the query cache is used very frequently and the buffer size should be increased. If Qcache_hits is small, your queries have a very low repetition rate; in that case using the query cache hurts efficiency, and you can consider not using it. In addition, adding SQL_NO_CACHE to a SELECT statement explicitly bypasses the query cache.

Parameters related to the query cache are query_cache_type, query_cache_limit, and query_cache_min_res_unit.

query_cache_type specifies whether to use the query cache; it can be set to 0, 1, or 2, and is a session-level variable.

query_cache_limit specifies the maximum result size a single query may cache; the default is 1M.

query_cache_min_res_unit was introduced in version 4.1; it specifies the smallest unit in which buffer space is allocated, 4K by default. Check the status value Qcache_free_blocks: if it is very large, the buffer is heavily fragmented, which suggests that query results are small; in that case you should reduce query_cache_min_res_unit.

For example:

mysql> SHOW GLOBAL STATUS LIKE 'Qcache%';
+-------------------------+-----------+
| Variable_name           | Value     |
+-------------------------+-----------+
| Qcache_free_blocks      | 22756     |
| Qcache_free_memory      | 76764704  |
| Qcache_hits             | 213028692 |
| Qcache_inserts          | 208894227 |
| Qcache_lowmem_prunes    | 4010916   |
| Qcache_not_cached       | 13385031  |
| Qcache_queries_in_cache | 43560     |
| Qcache_total_blocks     | 111212    |
+-------------------------+-----------+
8 rows in set (0.02 sec)

mysql> SHOW VARIABLES LIKE 'query_cach%';
+------------------------------+-----------+
| Variable_name                | Value     |
+------------------------------+-----------+
| query_cache_limit            | 2097152   |
| query_cache_min_res_unit     | 4096      |
| query_cache_size             | 203423744 |
| query_cache_type             | ON        |
| query_cache_wlock_invalidate | OFF       |
+------------------------------+-----------+
5 rows in set (0.01 sec)


Query cache fragmentation rate = Qcache_free_blocks / Qcache_total_blocks * 100%

If the fragmentation rate exceeds 20%, you can use FLUSH QUERY CACHE to defragment the cache, or, if your query results are small, try reducing query_cache_min_res_unit.

Query cache utilization = (query_cache_size - Qcache_free_memory) / query_cache_size * 100%

Utilization below 25% means query_cache_size is set too large and should be reduced appropriately; utilization above 80% together with Qcache_lowmem_prunes > 50 means query_cache_size may be a bit small, or fragmentation is too high.

Query cache hit rate = (Qcache_hits - Qcache_inserts) / Qcache_hits * 100%

For the sample server: query cache fragmentation rate = 20.46%, utilization = 62.26%, hit rate = 1.94%. The hit rate is poor, probably because writes are frequent, and there may also be some fragmentation.
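The three sample figures follow directly from the counters in the SHOW STATUS and SHOW VARIABLES output above; a worked Python check:

```python
# Values from the sample output above
qcache_free_blocks  = 22756
qcache_total_blocks = 111212
qcache_free_memory  = 76764704
query_cache_size    = 203423744
qcache_hits         = 213028692
qcache_inserts      = 208894227

fragmentation = qcache_free_blocks / qcache_total_blocks * 100
utilization   = (query_cache_size - qcache_free_memory) / query_cache_size * 100
hit_rate      = (qcache_hits - qcache_inserts) / qcache_hits * 100

print(f"fragmentation: {fragmentation:.2f}%")  # 20.46%
print(f"utilization:   {utilization:.2f}%")    # 62.26%
print(f"hit rate:      {hit_rate:.2f}%")       # 1.94%
```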

Per-connection buffers

record_buffer_size

Each thread that does a sequential scan allocates a buffer of this size for each table it scans. If you do many sequential scans, you may want to increase this value.

The default value is 131072 (128K); it can be changed to 16773120 (16M).

read_rnd_buffer_size

The random read buffer size. When rows are read in a non-sequential order (for example, in sorted order), a random read buffer is allocated. When sorting a query, MySQL scans this buffer first to avoid disk seeks and improve query speed; if you need to sort large amounts of data, you can raise this value appropriately. However, MySQL allocates this buffer space for every client connection, so set it conservatively to avoid excessive memory use.

It can generally be set to 16M.

sort_buffer_size

Each thread that needs to sort allocates a buffer of this size. Increasing this value speeds up ORDER BY and GROUP BY operations.

The default value is 2097144 (2M); it can be changed to 16777208 (16M).

join_buffer_size

The size of the buffer available for join operations.

record_buffer_size, read_rnd_buffer_size, sort_buffer_size, and join_buffer_size are each allocated per thread; that is, with 100 connected threads and 16M settings, the occupied memory is 16M * 100.
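The per-thread multiplication warning above is worth making concrete: four 16M per-thread buffers across 100 connections commit several gigabytes. A quick Python calculation with those (hypothetical) settings:

```python
def per_thread_buffer_memory(threads, buffers_bytes):
    """Total memory committed to per-thread buffers: each connected thread
    gets its own copy of the record/read_rnd/sort/join buffers."""
    return threads * sum(buffers_bytes)

MB = 1024 * 1024
# Hypothetical settings of 16M each for the four per-thread buffers:
buffers = [16 * MB] * 4  # record, read_rnd, sort, join
total = per_thread_buffer_memory(100, buffers)
print(total // MB, "MB")  # 100 threads x 64MB = 6400 MB
```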

table_cache

The size of the table cache. Whenever MySQL accesses a table, if there is room in the table cache, the table is opened and placed in it, so that its contents can be accessed more quickly. By checking the peak-time status values Open_tables and Opened_tables, you can decide whether to increase table_cache: if Open_tables equals table_cache and Opened_tables keeps growing, you need to increase table_cache (these status values can be obtained with SHOW STATUS LIKE 'Open%tables'). Note that table_cache should not blindly be set to a large value: if it is too high, file descriptors may run out, causing unstable performance or failed connections.

For a machine with 1G of memory, the recommended value is 128-256. For a server with around 4GB of memory this parameter can be set to 256 or 384.

max_heap_table_size

The maximum size of the MEMORY tables users can create. This value is used to compute the maximum row count of a MEMORY table. The variable supports dynamic changes, i.e. SET @@max_heap_table_size=#.

Together with tmp_table_size, this variable limits the size of internal in-memory tables. If an internal heap table exceeds tmp_table_size, MySQL automatically converts the in-memory heap table to an on-disk MyISAM table as needed.

tmp_table_size

Raise the tmp_table_size option to increase the size allowed for temporary tables, such as those generated by advanced GROUP BY operations. If you raise this value, MySQL also increases the allowed heap table size, which speeds up queries with joins. It is advisable to optimize queries so that the temporary tables created during the query fit in memory, avoiding temporary tables so large that disk-based MyISAM tables are created.

Mysql> show global status like ' created_tmp% ';
+-------------------------+---------+
| variable_name | Value |
+-------------------------+---------+
| Created_tmp_disk_tables | 21197 |
| Created_tmp_files | 58 |
| Created_tmp_tables | 1771587 |
+-------------------------+---------+
3 Rows in Set (0.00 sec)
Each time you create a temporary table, the created_tmp_tables increases, and if the temporary table size exceeds tmp_table_size, the temporary table is created on disk, and Created_tmp_disk_tables is added, Created_tmp_ Files represent the number of temporary file files created by the MySQL service, and the ideal configuration is:

Created_tmp_disk_tables / Created_tmp_tables * 100% <= 25%. For the server above, Created_tmp_disk_tables / Created_tmp_tables * 100% = 1.20%, which is quite good.
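The ratio can also be computed in a single query; a sketch assuming MySQL 5.7+, where these counters are exposed in performance_schema.global_status (older versions keep them in information_schema.GLOBAL_STATUS):

```sql
-- Percentage of internal temporary tables that spilled to disk.
SELECT ROUND(
         (SELECT VARIABLE_VALUE FROM performance_schema.global_status
           WHERE VARIABLE_NAME = 'Created_tmp_disk_tables') * 100 /
         (SELECT VARIABLE_VALUE FROM performance_schema.global_status
           WHERE VARIABLE_NAME = 'Created_tmp_tables'), 2)
       AS tmp_disk_table_pct;   -- aim for <= 25%
```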

The default is 16MB; 64-256MB is usually best. The buffer is per-session, so setting it too large can exhaust memory and cause I/O stalls.

Thread_cache_size

The number of threads the server keeps cached for reuse. When a new connection arrives, a thread is taken from the cache if one is available; when a client disconnects, its thread is put back into the cache if there is room. If many new connections are being created, raising this value can improve performance.

By comparing the Connections and Threads_created status variables, you can see the effect of this setting.
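That comparison is exactly the thread cache hit rate formula from the first section; a sketch, again assuming the MySQL 5.7+ performance_schema.global_status view:

```sql
-- thread_cache_hits = (1 - Threads_created / Connections) * 100%
-- A low percentage means threads are being created instead of reused.
SELECT ROUND(
         (1 - (SELECT VARIABLE_VALUE FROM performance_schema.global_status
                WHERE VARIABLE_NAME = 'Threads_created') /
              (SELECT VARIABLE_VALUE FROM performance_schema.global_status
                WHERE VARIABLE_NAME = 'Connections')) * 100, 2)
       AS thread_cache_hit_pct;
```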

The default value is 110, which can be tuned to 80.

Thread_concurrency

The recommendation is to set this to twice the number of CPU cores: for one dual-core CPU, thread_concurrency should be 4; for two dual-core CPUs, it should be 8. The default is 8. (Note that thread_concurrency had no effect on most platforms and was removed in later MySQL versions.)

Wait_timeout

Specifies how many seconds the server waits for activity on a connection before closing it. A server with around 4GB of memory can set this to 5-10 seconds.
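Put together, the thread-related settings above might look like this in my.cnf; the numbers are illustrative for a 4-core, ~4GB server, not universal recommendations:

```ini
[mysqld]
thread_cache_size  = 80    # cache disconnected threads for reuse
thread_concurrency = 8     # 2 x CPU cores; ignored/removed in newer MySQL
wait_timeout       = 10    # close idle connections after 10 seconds
```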

3 Configure several variables for InnoDB

Innodb_buffer_pool_size

For InnoDB tables, innodb_buffer_pool_size plays the same role that key_buffer_size plays for MyISAM tables. InnoDB uses this parameter to size the memory that buffers data and indexes. On a dedicated MySQL database server, this value can be set to up to 80% of physical memory.

According to the MySQL manual, for 2G memory machines, the recommended value is 1G (50%).
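Whether the buffer pool is sized well can be verified with the hit rate formula from the first section, (1 - Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests) * 100%, using the two counters below:

```sql
-- Innodb_buffer_pool_reads are reads that missed the pool and hit disk;
-- the hit rate should stay very close to 100% on a warmed-up server.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
```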

Innodb_flush_log_at_trx_commit

Controls when data in the InnoDB log buffer is written to the log file and flushed to disk. Valid values are 0, 1, and 2. With 0, nothing is written at transaction commit; the log buffer is written to the log file and flushed to disk once per second. With 1, the log is written and flushed to disk at every transaction commit (and at least once per second), guaranteeing the transaction's ACID properties. With 2, the log is written to the log file at every commit, but flushed to disk only about once per second.

Actual testing shows this value has a very large impact on insert speed: inserting 10,000 records took only 2 seconds with the value set to 2, 1 second with 0, and 229 seconds with 1. Accordingly, the MySQL manual also recommends merging inserts into a single transaction wherever possible, which can dramatically increase speed.
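The manual's advice to merge inserts can be sketched as follows; the table t and its column are hypothetical:

```sql
-- Instead of 10,000 autocommitted single-row inserts (each forcing a log
-- flush when innodb_flush_log_at_trx_commit = 1), batch them:
START TRANSACTION;
INSERT INTO t (val) VALUES (1);
INSERT INTO t (val) VALUES (2);
-- ... remaining rows ...
COMMIT;   -- one log write/flush for the whole batch
```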

According to the MySQL manual, this value can be set to 0 or 2 if you can tolerate losing the last second or so of transactions.

Innodb_log_buffer_size

The size of the log buffer, generally 1-8MB; the default is 1MB. For larger transactions, the buffer can be increased.

Can be set to 4M or 8M.

Innodb_additional_mem_pool_size

This parameter specifies the size of the memory pool InnoDB uses to store the data dictionary and other internal data structures. The default is 1MB. It usually does not need to be large, only sufficient, and should scale with the complexity of your table structures. If it is too small, MySQL writes a warning to the error log.

According to the MySQL manual, for a machine with 2GB of memory the recommended value is 20M, which can be increased as appropriate.

Innodb_thread_concurrency=8

The recommended setting is 2 * (number of CPUs + number of disks); the default is generally 8.
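An illustrative my.cnf InnoDB section pulling the settings above together; the values assume a dedicated ~4GB server and should be tuned to your own workload:

```ini
[mysqld]
innodb_buffer_pool_size         = 3G   # up to 80% of RAM on a dedicated server
innodb_flush_log_at_trx_commit  = 1    # 0 or 2 trade durability for speed
innodb_log_buffer_size          = 8M
innodb_additional_mem_pool_size = 20M  # removed in MySQL 5.7 and later
innodb_thread_concurrency       = 8    # 2 * (CPUs + disks)
```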
