8 MySQL Development Practice Questions: How Many Can You Answer?

Source: Internet
Author: User
Tags: connection pooling, mysql, backup

A recent project leaned heavily on the DB, so I've collected 8 representative problems I ran into while using MySQL over this period. The answers lean toward development practice rather than professional, in-depth DBA knowledge, so please feel free to criticize!

1. What is MySQL's read/write performance, and which configuration parameters matter for performance?
2. When MySQL load is high, how do you find out which SQL is responsible?
3. How do you optimize a specific SQL statement?
4. When the SQL layer is already hard to optimize further but request volume keeps growing, what are the coping strategies?
5. How does MySQL synchronize data from master to slave?
6. How do you prevent mistaken operations on the DB and prepare for disaster recovery?
7. Which storage engine should I choose for MySQL, and what features does InnoDB have?
8. What are the layers of MySQL's internal architecture?

======================================================

1. What is MySQL's read/write performance, and which configuration parameters matter for performance?

* Here are a few simple load-test experiments
Machine: 8-core CPU, 8 GB memory
Table structure (simulating the business as closely as possible): 12 fields (1 bigint(20) auto-increment primary key, 5 int(11), 5 varchar(512), 1 timestamp)
Experiment 1 (write): insert == 6000/s
Preconditions: 100 connections, each insert writes a single record
Analysis: CPU ran at 50%; the inserts are sequential disk writes, so performance is very high
Experiment 2 (write): update (where condition hits an index) == 200/s
Preconditions: 100 connections, 100k records, each update touches 4 fields of a single record (2 int(11), 2 varchar(512))
Analysis: CPU ran at 2%; the bottleneck is clearly random I/O writes
Experiment 3 (read): select (where condition hits an index) == 5000/s
Preconditions: 100 connections, 100k records, each select reads 4 fields of a single record (2 int(11), 2 varchar(512))
Analysis: CPU ran at 6%; the bottleneck is I/O, and the result depends on the DB cache size
Experiment 4 (read): select (where condition misses the index) == 60/s
Preconditions: 100 connections, 100k records, each select reads 4 fields of a single record (2 int(11), 2 varchar(512))
Analysis: CPU ran at 80%; every select has to traverse all records, so the effect of the index is obvious!

* Several important configuration parameters, which can be tuned to the actual machine and business characteristics
max_connections: maximum number of connections
table_cache: number of open tables to cache
key_buffer_size: index cache size
query_cache_size: query cache size
sort_buffer_size: sort buffer size (data being sorted is buffered here)
read_buffer_size: sequential-read buffer size
read_rnd_buffer_size: random-read buffer size (used, for example, when reading rows in sorted order for an ORDER BY query)
PS: to view a setting: SHOW VARIABLES LIKE '%max_connections%';
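Note that the last three buffers above are allocated per session, so total memory depends on max_connections. A minimal Python sketch of that sizing heuristic (all byte values below are illustrative, not recommendations):

```python
# Rough MySQL memory sizing sketch: global caches are allocated once,
# per-session buffers can each be allocated once per connection.
GLOBAL_BUFFERS = {            # bytes; illustrative values only
    "key_buffer_size": 256 * 1024**2,
    "query_cache_size": 64 * 1024**2,
}
PER_CONNECTION_BUFFERS = {    # bytes; illustrative values only
    "sort_buffer_size": 2 * 1024**2,
    "read_buffer_size": 1 * 1024**2,
    "read_rnd_buffer_size": 1 * 1024**2,
}

def worst_case_memory(max_connections: int) -> int:
    """Upper bound: every connection allocates all per-session buffers at once."""
    return (sum(GLOBAL_BUFFERS.values())
            + max_connections * sum(PER_CONNECTION_BUFFERS.values()))

print(worst_case_memory(100) // 1024**2, "MB")  # 720 MB
```

With the toy numbers above, 100 connections could in the worst case add 400 MB on top of the 320 MB of global caches, which is why raising max_connections and the per-session buffers together deserves care.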

======================================================

2. When MySQL load is high, how do you find out which SQL is responsible?

* Method: slow query log analysis (mysqldumpslow)

* Slow query log example; you can see the time taken by each slow SQL query:

# User@Host: edu_online[edu_online] @ [10.139.10.167]
# Query_time: 1.958000  Lock_time: 0.000021  Rows_sent: 254786  Rows_examined: 254786
SET timestamp=1410883292;
SELECT * FROM t_online_group_records;
The log shows that the query took 1.958 seconds, returned 254786 rows, and examined 254786 rows in total, along with the exact timestamp and SQL statement.

* Use mysqldumpslow to analyze the slow query log
mysqldumpslow -s t -t 5 slow_log_20140819.txt
This outputs the TOP 5 SQL statements by total query time
-s: sort order; t means by time (other values: c by count, r by rows returned, etc.)
-t: how many of the top entries to take; -t 5 means take the first 5
The analysis produces results like the following:
count: 1076100  time=0.09s (99065s)  lock=0.00s (76s)  rows=408.9 (440058825), edu_online[edu_online]@28hosts
SELECT * FROM t_online_group_records WHERE unix_timestamp(gre_updatetime) > N
count: 1076099  time=0.05s (52340s)  lock=0.00s (91s)  rows=62.6 (67324907), edu_online[edu_online]@28hosts
SELECT * FROM t_online_course WHERE unix_timestamp(c_updatetime) > N
count: 63889  time=0.78s (49607s)  lock=0.00s (3s)  rows=0.0 (+), edu_online[edu_online]@[10.213.170.137]
SELECT f_uin FROM t_online_student_contact WHERE f_modify_time > N
count: 1076097  time=0.02s (16903s)  lock=0.00s (72s)  rows=52.2 (56187090), edu_online[edu_online]@28hosts
SELECT * FROM t_online_video_info WHERE unix_timestamp(v_update_time) > N
count: 330046  time=0.02s (6822s)  lock=0.00s (45s)  rows=0.0 (2302), edu_online[edu_online]@4hosts
SELECT uin, cid, is_canceled, unix_timestamp(end_time) AS endtime, unix_timestamp(update_time) AS updatetime
FROM t_kick_log WHERE unix_timestamp(update_time) > N
Take the first entry as an example: this query shape (N can take many values, which mysqldumpslow merges) appeared 1076100 times in the August 19 slow query log, took 99065 seconds in total, returned 440058825 rows in total, and came from 28 client IPs.
With slow query log analysis, you can find the most time-consuming SQL and then analyze each statement specifically.
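mysqldumpslow essentially normalizes literals in each statement and aggregates by query template. A toy Python version of that idea (the normalization here is deliberately simplified, nothing like the full slow-log grammar):

```python
import re
from collections import defaultdict

def normalize(sql: str) -> str:
    """Replace literals with N / 'S' so identical query shapes aggregate
    together, roughly what mysqldumpslow does."""
    sql = re.sub(r"'[^']*'", "'S'", sql)
    return re.sub(r"\b\d+\b", "N", sql)

def top_by_time(entries, n=5):
    """entries: (query_time_seconds, sql) pairs; returns the n templates
    with the largest total time, like `mysqldumpslow -s t -t n`."""
    totals = defaultdict(lambda: [0, 0.0])   # template -> [count, total_time]
    for t, sql in entries:
        agg = totals[normalize(sql)]
        agg[0] += 1
        agg[1] += t
    return sorted(totals.items(), key=lambda kv: kv[1][1], reverse=True)[:n]

entries = [
    (1.9, "SELECT * FROM t_online_group_records WHERE unix_timestamp(gre_updatetime) > 123"),
    (2.1, "SELECT * FROM t_online_group_records WHERE unix_timestamp(gre_updatetime) > 456"),
    (0.1, "SELECT f_uin FROM t_online_student_contact WHERE f_modify_time > 789"),
]
for template, (count, total) in top_by_time(entries):
    print(f"count: {count}  total: {total:.1f}s  {template}")
```

The two group_records queries differ only in the literal, so they merge into one template with count 2, mirroring the "N" entries in the real output above.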

* Slow-query-related configuration parameters
log_slow_queries: whether the slow query log is enabled; this must be ON before any analysis (newer versions use slow_query_log)
long_query_time: SQL taking more than this many seconds counts as a slow query; generally set to 1s
log_queries_not_using_indexes: whether to also log queries that use no index
slow_query_log_file: path where the slow query log is stored

======================================================


3. How do you optimize a specific SQL statement?


* Use EXPLAIN to analyze the SQL statement's execution plan
mysql> EXPLAIN SELECT * FROM t_online_group_records WHERE unix_timestamp(gre_updatetime) > 123456789;
+----+-------------+------------------------+------+---------------+------+---------+------+------+-------------+
| id | select_type | table                  | type | possible_keys | key  | key_len | ref  | rows | Extra       |
+----+-------------+------------------------+------+---------------+------+---------+------+------+-------------+
|  1 | SIMPLE      | t_online_group_records | ALL  | NULL          | NULL | NULL    | NULL |   47 | Using where |
+----+-------------+------------------------+------+---------------+------+---------+------+------+-------------+
1 row in set (0.00 sec)
In the example above, focus on type, rows, and Extra:
type: the access type, i.e. whether an index is used. From good to bad the values run: ... > range (index range scan) > index > ALL (full table scan); a typical query should reach at least the range level
rows: the number of rows the SQL is estimated to examine
Extra: additional information about the execution; for example, "Using index" means the query reads only index columns and never has to touch the table rows
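The good-to-bad ordering of type values can be captured as a tiny lookup. A toy Python check, using the standard MySQL ordering (the levels before range, which the "..." above elides, are system, const, eq_ref, and ref):

```python
# EXPLAIN "type" values from best to worst (standard MySQL ordering).
ACCESS_TYPES = ["system", "const", "eq_ref", "ref", "range", "index", "ALL"]

def at_least_range(access_type: str) -> bool:
    """True when the access type is 'range' or better, the bar suggested above."""
    return ACCESS_TYPES.index(access_type) <= ACCESS_TYPES.index("range")

print(at_least_range("ref"))   # True
print(at_least_range("ALL"))   # False
```

A check like this is handy in automated plan audits: flag any production query whose EXPLAIN type fails the test.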

* Use profiling to analyze a SQL statement's execution time and resource consumption
mysql> SET profiling = 1;   (enables profiling; off by default)
mysql> SELECT COUNT(1) FROM t_online_group_records WHERE unix_timestamp(gre_updatetime) > 123456789;   (execute the SQL to analyze)
mysql> SHOW PROFILES;
+----------+------------+----------------------------------------------------------------------------------------------+
| Query_ID | Duration   | Query                                                                                        |
+----------+------------+----------------------------------------------------------------------------------------------+
|        1 | 0.00043250 | SELECT COUNT(1) FROM t_online_group_records WHERE unix_timestamp(gre_updatetime) > 123456789 |
+----------+------------+----------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
mysql> SHOW PROFILE CPU, BLOCK IO FOR QUERY 1;   (shows the time and resource consumption of each stage of the SQL)
+----------------------+----------+----------+------------+--------------+---------------+
| Status               | Duration | CPU_user | CPU_system | Block_ops_in | Block_ops_out |
+----------------------+----------+----------+------------+--------------+---------------+
...
| Optimizing           | 0.000016 | 0.000000 | 0.000000   |            0 |             0 |
| Statistics           | 0.000020 | 0.000000 | 0.000000   |            0 |             0 |
| Preparing            | 0.000017 | 0.000000 | 0.000000   |            0 |             0 |
| Executing            | 0.000011 | 0.000000 | 0.000000   |            0 |             0 |
| Sending data         | 0.000076 | 0.000000 | 0.000000   |            0 |             0 |
...

* SQL optimization tips (just some problems business code commonly runs into)
1. Most critical: use indexes to avoid full table scans.
Of the slow queries I've analyzed in projects, basically all of the top 10 come from a forgotten index or an improperly used one, e.g. wrapping an indexed field in a function, which defeats the index, as in WHERE unix_timestamp(gre_updatetime) > 123456789:
+----------+------------+---------------------------------------+
| query_id | Duration | Query |
+----------+------------+---------------------------------------+
| 1 | 0.00024700 | SELECT * FROM mytable WHERE id=100   |
| 2 | 0.27912900 | SELECT * FROM mytable WHERE id+1=101 |
+----------+------------+---------------------------------------+
Also, many people pull full-table data in batches with SELECT xx FROM xx LIMIT 5000,1000. Every such query is in effect a scan from the start of the table. It is better to add an auto-increment id as an index and write the SQL as SELECT xx FROM xx WHERE id > 5000 AND id <= 6000;
+----------+------------+-----------------------------------------------------+
| query_id | Duration | Query |
+----------+------------+-----------------------------------------------------+
| 1 | 0.00415400 | SELECT * FROM mytable WHERE id>=90000 and id<=91000 |
| 2 | 0.10078100 | SELECT * FROM mytable LIMIT 90000,1000              |
+----------+------------+-----------------------------------------------------+
Proper use of indexes solves most SQL problems. Of course, more indexes are not always better: too many indexes hurt write performance.
2. Select only the required fields to avoid SELECT *
+----------+------------+-----------------------------------------------------+
| query_id | Duration | Query |
+----------+------------+-----------------------------------------------------+
| 1 | 0.02948800 | SELECT COUNT(1) FROM (SELECT id FROM mytable) a     |
| 2 | 1.34369100 | SELECT COUNT(1) FROM (SELECT * FROM mytable) a      |
+----------+------------+-----------------------------------------------------+
3. Filter as early as possible, so that subsequent operations such as JOIN or UNION work on as little data as possible
4. Push work that can be done in the logic layer to the logic layer, e.g. some sorting, time-function calculations, etc.
5. ...
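The LIMIT offset,count anti-pattern from tip 1 can be replaced with keyset pagination on an indexed auto-increment id. A hedged Python sketch that only builds the SQL for each page (table and column names are made up for illustration; nothing is executed against a DB):

```python
def keyset_pages(table: str, last_id: int, page_size: int, pages: int):
    """Yield SQL for successive pages using an indexed id range instead of
    LIMIT offset,count, so every page is an index range scan, not a re-scan
    from the start of the table."""
    for _ in range(pages):
        yield (f"SELECT * FROM {table} "
               f"WHERE id > {last_id} AND id <= {last_id + page_size}")
        last_id += page_size

for sql in keyset_pages("mytable", last_id=5000, page_size=1000, pages=2):
    print(sql)
# SELECT * FROM mytable WHERE id > 5000 AND id <= 6000
# SELECT * FROM mytable WHERE id > 6000 AND id <= 7000
```

In production you would advance last_id from the maximum id actually returned rather than by a fixed step, since auto-increment ids can have gaps.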


PS: There are already plenty of articles on KM about SQL optimization, so I won't try to be comprehensive; I'll just stress my own takeaway: indexes! It's basically always the index!

======================================================

4. When the SQL layer is already hard to optimize further but request volume keeps growing, what are the coping strategies?

* Here are the approaches I can think of; each one is a big topic in itself, so I won't expand on them here
Split databases and tables (sharding)
Use clustering (master-slave) with read-write separation
Add a cache layer in the business
Use connection pooling
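As one tiny illustration of the first approach, splitting tables by key usually starts with a stable routing function. A minimal Python sketch (the shard names and count are invented for the example):

```python
import hashlib

# Toy sharding router: map a user key to one of N database shards.
# Shard names and count are illustrative only.
SHARDS = ["db_shard_0", "db_shard_1", "db_shard_2", "db_shard_3"]

def shard_for(user_id: int) -> str:
    """Stable hash (md5, not Python's randomized hash()) so the same user
    always routes to the same shard, across processes and restarts."""
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

assert shard_for(42) == shard_for(42)   # routing is deterministic
```

Note that a plain modulo scheme like this reshuffles most keys when the shard count changes; consistent hashing is the usual fix, but that is beyond this sketch.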

======================================================

5. How does MySQL synchronize data from master to slave?

* The replication mechanism (Replication)
Through replication, the master's write operations are shipped via the binlog to the slave, which turns them into a relay log and then replays that relay log, keeping the slave's data in sync with the master

* The 3 MySQL threads involved in replication
1. I/O thread on the slave: requests data from the master
2. Binlog dump thread on the master: reads binlog events and sends the data to the slave's I/O thread
3. SQL thread on the slave: reads the relay log and executes it, updating the database
This is a pull model, actively driven by the slave

* Problems you may hit in practice
Data is not strongly consistent: replication is asynchronous by default, so master and slave data lag slightly (known as replication lag, generally <1s)
Replication lag grows large: the cause may be heavy write pressure on the DB, high load on the slave machine, network fluctuations, and so on; analyze each case specifically

* Related monitoring commands
SHOW PROCESSLIST: view MySQL thread information, including the current state of the 3 replication threads
SHOW MASTER STATUS: view the master's configuration and current replication information
SHOW SLAVE STATUS: view the slave's configuration and current replication information
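In monitoring scripts, the interesting SHOW SLAVE STATUS fields are Slave_IO_Running, Slave_SQL_Running, and Seconds_Behind_Master. A toy Python health check over a status row already parsed into a dict (the field names match MySQL's output; the lag threshold is illustrative):

```python
def replication_health(status: dict, max_lag_seconds: int = 1):
    """Return a list of problems found in a SHOW SLAVE STATUS row
    (given as a dict); an empty list means replication looks healthy."""
    problems = []
    if status.get("Slave_IO_Running") != "Yes":
        problems.append("I/O thread not running")
    if status.get("Slave_SQL_Running") != "Yes":
        problems.append("SQL thread not running")
    lag = status.get("Seconds_Behind_Master")
    if lag is None:
        problems.append("replication lag unknown (slave stopped?)")
    elif lag > max_lag_seconds:
        problems.append(f"replication lag {lag}s exceeds {max_lag_seconds}s")
    return problems

print(replication_health({"Slave_IO_Running": "Yes",
                          "Slave_SQL_Running": "Yes",
                          "Seconds_Behind_Master": 0}))   # []
```

Seconds_Behind_Master is NULL when the SQL thread is stopped, which is why the sketch treats a missing value as a problem rather than as zero lag.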

======================================================

6. How do you prevent mistaken operations on the DB and prepare for disaster recovery?

* Things the business side should do:
Before manually modifying important DB data, do 2 things: 1) back up the data; 2) run the operation in the test environment first
Make scheduled backups according to business importance, taking into account the recovery time the system can tolerate
Disaster-recovery drills are, I feel, also necessary

* MySQL backup and restore operations
1. Backup: export data using mysqldump
mysqldump -u username -p database_name [table_name] > dump_file.sql
mysqldump -uroot -p test mytable > mytable.20140921.bak.sql
2. Restore: import the backup data
mysql -uroot -p test < mytable.20140921.bak.sql
3. Recovery: replay the writes issued after the backup. First use mysqlbinlog to export that portion of the write SQL (by point in time or by position)
For example, to export the binlog after 2014-09-21 09:59:59:
mysqlbinlog --database=test --start-datetime="2014-09-21 09:59:59" /var/lib/mysql/mybinlog.000001 > binlog.data.sql
Or to export the binlog after position 123456:
mysqlbinlog --database=test --start-position=123456 /var/lib/mysql/mybinlog.000001 > binlog.data.sql
Finally, import the binlog to be replayed into the DB:
mysql -uroot -p test < binlog.data.sql
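Scheduled backups usually wrap the mysqldump call in a script. A small Python helper that only builds the dated command string used above (nothing is executed here; cron or a scheduler would run the result):

```python
import datetime
import shlex
from typing import Optional

def mysqldump_command(user: str, database: str, table: Optional[str] = None,
                      when: Optional[datetime.date] = None) -> str:
    """Build a mysqldump command line with a dated output file name,
    matching the naming convention shown above. Returns a string only."""
    when = when or datetime.date.today()
    target = f"{table or database}.{when:%Y%m%d}.bak.sql"
    parts = ["mysqldump", f"-u{user}", "-p", database]
    if table:
        parts.append(table)
    return " ".join(shlex.quote(p) for p in parts) + f" > {target}"

print(mysqldump_command("root", "test", "mytable", datetime.date(2014, 9, 21)))
# mysqldump -uroot -p test mytable > mytable.20140921.bak.sql
```

Keeping the date in the file name makes it easy to pair a dump with the binlog range needed for point-in-time recovery afterwards.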

======================================================

7. Which storage engine should I choose for MySQL, and what features does InnoDB have?

* Storage engine introduction
The pluggable storage engine is an important MySQL feature; MySQL supports multiple storage engines to satisfy a wide range of user scenarios
What the storage engine solves: how MySQL data is organized on the storage media for efficient reads, covering the storage layout, index design, locking for concurrent reads and writes, etc.
Storage engines supported as of MySQL 5.0 include MyISAM, InnoDB, Memory, Merge, etc.

* Differences between MyISAM and InnoDB (key points only)
1. InnoDB
The default engine since MySQL 5.5.
1) Supports row-level locking: good concurrent performance
2) Supports transactions: InnoDB is therefore called a transactional storage engine; it supports ACID and provides transaction safety with commit, rollback, and crash recovery
3) Supports foreign keys: currently the only engine that supports them
2. MyISAM
The default engine before MySQL 5.5.
1) Table-level locking only: insert + query are fast; update + delete are slow
2) Does not support transactions

* Use SHOW ENGINES to view details of the storage engines the current MySQL supports


======================================================

8. What are the layers of MySQL's internal architecture?

* I'm not a professional DBA, so this is just a rough overview. MySQL is an open-source system whose design ideas and source code come from masters of the craft; it's worth studying when you have time.

1) Connectors: connectors. Handle interaction with clients in different languages
2) Management Services & Utilities: system management and control tools
3) Connection Pool: connection pool. Manages user connections
4) SQL Interface: SQL interface. Accepts the user's SQL commands and returns the query results the user needs
5) Parser: parser. Validates SQL statements and parses them into internal data structures
6) Optimizer: query optimizer. Chooses an appropriate execution path for the query
7) Cache and Buffer: query cache. Caches query results; on a hit the result can be returned directly
8) Engine: storage engine. MySQL data is ultimately organized and stored as concrete files


