Years of O&M Experience: How to Optimize MySQL Databases


A database design is often based only on current business needs; high availability, scalability, and similar features may not be required at the outset, but as the business and user volume grow, the infrastructure must be improved step by step. This post focuses on the problems faced over a MySQL database's development life cycle and their optimization solutions. Setting aside the front-end application, the life cycle can be roughly divided into the following five stages:

1. Database table design
After a project is approved, the development team builds it according to the product requirements. Part of a development engineer's job is designing the table structure, which is critical for the database: a poor design directly hurts access speed and user experience. Many factors are involved, such as slow queries, inefficient SQL statements, missing or improper indexes, and database deadlocks. Teams with test engineers will run stress tests and catch such problems early. On teams without them, most development engineers do not consider whether the database design is reasonable at the start; they focus on implementing features and delivering quickly. Once the project receives real traffic, the hidden problems surface, and by then they are not easy to fix.

2. Database deployment
This is where the O&M engineer comes in. The initial traffic of a project is usually modest, so a single instance is enough to handle a QPS (queries per second) of around 1,500. For high availability, you can use MySQL master-slave replication plus Keepalived for dual-machine hot standby. Common cluster software includes Keepalived and Heartbeat.
Dual-machine hot standby blog:

3. Database performance optimization
Deployed on an ordinary x86 server without any tuning, MySQL can theoretically handle about 2,000 QPS; after tuning, perhaps around 2,500 QPS. Without tuning, once traffic reaches roughly 1,500 concurrent connections, database processing slows down even while hardware resources remain underutilized, which means the bottleneck is in software. So how do we get the most out of the database? On the one hand, you can run multiple MySQL instances on one server to use its full capacity; on the other hand, you can tune the database itself. The default operating system and database configurations are conservative and limit performance to some extent; adjusting them lets the database handle as many connections as possible.
The specific optimization has the following three levels:

3.1 database configuration optimization
MySQL commonly uses two storage engines. MyISAM does not support transactions, has fast read performance, and uses table-level locks. InnoDB supports ACID transactions, uses row-level locks, and is designed for large data volumes and maximum performance.

Table lock: low overhead, coarse granularity; no deadlocks, but high lock contention and low concurrency.
Row lock: higher overhead, fine granularity; deadlocks are possible, but contention is lower and concurrency is high.

Why do table locks and row locks exist at all? Mainly to ensure data integrity. If one user is operating on a table, other users who want to modify the same data must wait for the first operation to finish; that is exactly what table locks and row locks enforce. Otherwise, concurrent operations on the same table could leave the data conflicting or corrupted.
Given the above, InnoDB is the best choice, and it has been the default storage engine since MySQL 5.5. Each storage engine has many related parameters; the following lists the ones that most affect database performance.
Common parameters and their default values:

max_connections = 151
# Maximum number of simultaneous client connections. A common guideline is to size this so that peak usage stays at around 80% of the limit.
sort_buffer_size = 2M
# Per-connection sort buffer, used only for ORDER BY and GROUP BY. Can be increased to 16M.
query_cache_limit = 1M
# Query cache limit: only result sets smaller than 1M are cached, so a single large result set cannot flush the whole cache pool.
query_cache_size = 16M
# Query cache size, used to cache SELECT results; an identical subsequent SELECT returns straight from the cache. This value can be multiplied as appropriate.
open_files_limit = 1024
# Limit on open file descriptors. If the value shown by SHOW GLOBAL STATUS LIKE 'Open_files' approaches or reaches open_files_limit, programs will fail to connect to the database or will hang.

MyISAM parameters and their default values:

key_buffer_size = 16M
# Index cache size. Generally set to 30-40% of physical memory on a MyISAM-heavy server.
read_buffer_size = 128K
# Read buffer size. 16M or 32M is recommended.

InnoDB parameters and their default values:

innodb_buffer_pool_size = 128M
# Index and data cache size; generally set to 60-70% of physical memory on a dedicated server.
innodb_buffer_pool_instances = 1
# Number of buffer pool instances. 4 or 8 is recommended.
innodb_flush_log_at_trx_commit = 1
# Key parameter. 0: the log is written and flushed to disk once per second, so roughly one second of transactions can be lost if the database fails. 1: the log is written and flushed to disk at every commit; safest, but I/O overhead is high because every statement waits on the flush. 2: the log is written to the OS cache at every commit and flushed to disk once per second; efficient, but transactions can be lost if the whole server fails. For workloads that do not demand strict durability, 2 is recommended; the performance gain after the change is obvious.
innodb_file_per_table = OFF
# Defaults to the shared tablespace, whose ibdata file keeps growing and hurts I/O performance. Enabling file-per-table mode is recommended: each table's data and indexes live in their own tablespace, which also makes it possible to move a single table between databases.
innodb_log_buffer_size = 8M
# Redo log buffer size. Since the log is flushed at least once per second, it rarely needs to exceed 16M.
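Pulling the parameters above into one place, a my.cnf sketch might look like the following. The values are illustrative starting points for a machine with ample RAM, not drop-in settings; they must be tuned to your hardware and workload:

```ini
[mysqld]
# Common
max_connections                = 1024
sort_buffer_size               = 16M
query_cache_limit              = 1M
query_cache_size               = 64M
open_files_limit               = 65535
# MyISAM
key_buffer_size                = 2G     ; ~30-40% of RAM on a MyISAM-heavy host
read_buffer_size               = 16M
# InnoDB
innodb_buffer_pool_size        = 8G     ; ~60-70% of RAM on a dedicated host
innodb_buffer_pool_instances   = 4
innodb_flush_log_at_trx_commit = 2      ; trades ~1s of transactions for speed
innodb_file_per_table          = 1
innodb_log_buffer_size         = 8M
```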

3.2 system kernel optimization
Most MySQL instances are deployed on Linux, so operating system parameters also affect MySQL performance. Commonly tuned Linux kernel parameters:

net.ipv4.tcp_fin_timeout = 30
# TIME_WAIT timeout; the default is 60s.
net.ipv4.tcp_tw_reuse = 1
# 1 enables reuse, allowing TIME_WAIT sockets to be reused for new TCP connections; 0 disables it.
net.ipv4.tcp_tw_recycle = 1
# 1 enables fast recycling of TIME_WAIT sockets; 0 disables it.
net.ipv4.tcp_max_tw_buckets = 4096
# Maximum number of TIME_WAIT sockets the system keeps. Beyond this, the system clears some at random and logs a warning.
net.ipv4.tcp_max_syn_backlog = 4096
# Maximum length of the SYN backlog queue; a longer queue accommodates more waiting connections.
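To make these kernel settings survive a reboot, they would normally be added to /etc/sysctl.conf and loaded with sysctl (a sketch; the values mirror the list above):

```
# /etc/sysctl.conf
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_tw_buckets = 4096
net.ipv4.tcp_max_syn_backlog = 4096
```

Running sysctl -p then applies them immediately without a reboot.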

On Linux, if a process opens more file handles than the system default of 1024 allows, a "too many open files" error is raised, so the file handle limit needs to be adjusted.

# vi /etc/security/limits.conf  # add the following lines; * means all users, or a specific user name can be given. Takes effect at next login/reboot.
* soft nofile 65535
* hard nofile 65535
# ulimit -SHn 65535  # takes effect immediately in the current shell

3.3 hardware configuration
Add physical memory to improve file system performance. The Linux kernel allocates cache areas from memory (system cache and data cache) to hold hot data; through the file system's delayed-write mechanism, data is synchronized to disk when conditions are met (for example, when the cache reaches a certain percentage or when sync is run). In other words, the more physical memory, the larger the allocated cache and the more data it holds. Of course, cached data that has not yet been flushed can be lost if the server crashes.
Replace SAS disks with SSDs, and adjust the RAID level to RAID 10, which has better read/write performance (IOPS) than RAID 1 and RAID 5. After all, database pressure comes mainly from disk I/O.

4. Database architecture scaling
As business volume grows, a single database server can no longer meet business needs, and it is time to add machines and build a cluster. The main ideas are to spread the load of a single database, break through the disk I/O limit, keep hot data in a cache, and reduce the frequency of disk I/O access.

4.1 master-slave replication and read/write splitting
Because most production workloads are read-heavy, a one-master, multiple-slave architecture is deployed: the master handles writes (with dual-machine hot standby for the master itself), and multiple slaves handle reads behind a load balancer. Mainstream load balancers include LVS, HAProxy, and Nginx. How is read/write splitting implemented? Most enterprises do it at the code level, which is efficient. The alternative is a proxy program, less common in practice; well-known proxies include MySQL Proxy and Amoeba. In such a cluster, database concurrency increases greatly and the single-server performance bottleneck is solved: if one slave can handle 2,000 QPS, five slaves can handle around 10,000 QPS, and horizontal scaling becomes very easy.
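At the code level, read/write splitting often boils down to a tiny router that sends writes to the master and rotates reads across the slaves. A minimal sketch, assuming hypothetical host names; a real version would hold actual connections from a MySQL client library:

```python
import itertools

class ReadWriteRouter:
    """Route writes to the master, reads round-robin across slaves."""

    WRITE_PREFIXES = ("insert", "update", "delete", "replace",
                      "create", "alter", "drop", "truncate")

    def __init__(self, master, slaves):
        self.master = master
        self._slaves = itertools.cycle(slaves)

    def pick(self, sql):
        # Statements that modify data go to the master; everything else
        # is load-balanced across the slave pool.
        first_word = sql.lstrip().split(None, 1)[0].lower()
        if first_word in self.WRITE_PREFIXES:
            return self.master
        return next(self._slaves)

router = ReadWriteRouter("master-db", ["slave-1", "slave-2", "slave-3"])
print(router.pick("UPDATE users SET name = 'x' WHERE id = 1"))  # master-db
print(router.pick("SELECT * FROM users"))                       # slave-1
print(router.pick("SELECT * FROM posts"))                       # slave-2
```

This is essentially what code-level splitting does inside the application; MySQL Proxy and Amoeba perform the same routing outside it.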
Sometimes a single master cannot keep up with heavy write traffic. With a dual-master setup, however, data inconsistency can occur: different application users may write to the two masters at the same time, and the concurrent updates conflict, leaving the two databases inconsistent. Within a single database, MySQL's storage engine uses table and row locks to guarantee data integrity, but how is this solved across multiple masters? A master-slave replication management tool written in Perl, MySQL-MMM (Master-Master replication Manager for MySQL), addresses it: its greatest advantage is that it exposes only one database for writes at any given time, effectively guaranteeing data consistency.
Master-slave replication blog:
Read/write splitting blog:
MySQL-MMM blog:

4.2 Add cache
Add a caching layer in front of the database and keep hot data in memory: if the cache holds the requested data, the result is returned without touching the database, greatly improving read performance. Caches come in two forms. A local cache stores data in the local server's memory or files and is very fast. A distributed cache can hold massive amounts of data and is easy to scale. The mainstream distributed caches are memcached and redis: memcached offers stable performance with all data cached in memory and very high QPS; if you need data persistence, use redis, whose performance is not lower than memcached's.
Working process:

4.3 Sub-databases
Database sharding means splitting related tables into different databases by business, such as separate web, bbs, and blog databases. If the business volume is large, each split database can in turn use a master-slave architecture, further relieving the pressure on any single database.

4.4 Sub-tables
As data volume grows, a single table may hold millions of rows, making queries and inserts slow. How do we relieve the pressure on a single table? Consider splitting it into multiple small tables to reduce per-table pressure and improve processing efficiency; this is called table sharding.
Table sharding takes some work: the SQL statements in the application code must be modified, and the extra tables created by hand. Alternatively, the MERGE storage engine makes it much simpler: the program operates on a summary table that stores no data itself, only the mapping to the underlying tables and the rules for updates. Queries against the summary table are distributed to the appropriate small tables, spreading the load and improving concurrency and disk I/O performance.
Table Sharding is divided into vertical sharding and horizontal sharding:
Vertical Split: splits a table with many fields into multiple tables to solve the table width problem. You can place infrequently used fields in a single table, separate large fields in a table, or put closely related fields in a table.
Horizontal split: split the original table into multiple tables. Each table has the same structure, solving the problem of large data volume in a single table.
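Horizontal splitting needs a routing rule that maps each row to a physical table, most commonly a modulo on the shard key. A minimal sketch; the user_0009-style table naming and the shard count are assumptions for illustration:

```python
N_SHARDS = 16

def shard_table(user_id, base="user"):
    """Map a user id to one of N_SHARDS physical tables."""
    return f"{base}_{user_id % N_SHARDS:04d}"

def shard_query(user_id):
    # All rows for one user land on the same small table,
    # so single-user queries touch only 1/N_SHARDS of the data.
    return f"SELECT * FROM {shard_table(user_id)} WHERE user_id = {user_id}"

print(shard_table(12345))    # user_0009
print(shard_query(12345))
```

Modulo routing keeps the tables evenly sized, but note that changing N_SHARDS later requires re-routing existing data, which is why the shard count is usually chosen generously up front.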

4.5 partitions
Partitioning divides a table's data into multiple blocks, which can be stored on one disk or spread across several. After partitioning, it still appears as a single table from the outside, but the data is distributed across multiple locations, so multiple disks can serve different requests at the same time, improving disk I/O read/write performance.
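As an illustration of range partitioning (the table and column names here are hypothetical), MySQL's PARTITION BY clause splits rows by a key such as the year:

```sql
CREATE TABLE orders (
    id      INT NOT NULL,
    created DATE NOT NULL
) ENGINE=InnoDB
PARTITION BY RANGE (YEAR(created)) (
    PARTITION p2016 VALUES LESS THAN (2017),
    PARTITION p2017 VALUES LESS THAN (2018),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);
```

Queries that filter on created then touch only the relevant partition (partition pruning), which is where the I/O savings come from.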
Note: adding cache, Database Sharding, table sharding, and partition are mainly implemented by programmers.

5. Database maintenance
Database maintenance is the main task of O & M engineers or DBAs, including performance monitoring, performance analysis, performance tuning, database backup and recovery.

5.1 Key Performance Indicators

QPS (Queries Per Second): the number of queries the database can process per second
TPS (Transactions Per Second): the number of transactions processed per second
You can view the running status through SHOW STATUS; it returns over 300 status records, several of which help us calculate QPS and TPS, as shown below:
Uptime: number of seconds the server has actually been running
Questions: number of queries sent to the database
Com_select: number of SELECT statements
Com_insert: number of INSERT statements
Com_delete: number of DELETE statements
Com_update: number of UPDATE statements
Com_commit: number of commits
Com_rollback: number of rollbacks
Method 1: calculate QPS from Questions:

mysql> show global status like 'Questions';
mysql> show global status like 'Uptime';

QPS = Questions/Uptime
Calculate TPS from Com_commit and Com_rollback:

mysql> show global status like 'Com_commit';
mysql> show global status like 'Com_rollback';
mysql> show global status like 'Uptime';
TPS = (Com_commit + Com_rollback)/Uptime

Method 2: calculate QPS from Com_select, Com_insert, Com_delete, and Com_update:

mysql> show global status where Variable_name in ('Com_select', 'Com_insert', 'Com_delete', 'Com_update');

Run it, wait one second, then run it again; the difference between the two samples of each counter, summed, is the QPS.
TPS calculation method:

mysql> show global status where Variable_name in ('Com_insert', 'Com_delete', 'Com_update');

For TPS, query operations are excluded; just sum the differences of the three write counters: Com_insert, Com_delete, and Com_update.
In tests of the two methods, Questions is more accurate when the database is mostly MyISAM tables, while the Com_* calculation is more accurate when it is mostly InnoDB tables.
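The interval-difference method above can be expressed directly: sample the counters twice, subtract, divide by the interval. A sketch with hard-coded illustrative numbers standing in for two SHOW GLOBAL STATUS reads:

```python
def qps_tps(first, second, interval_s):
    """Compute QPS/TPS from two snapshots of SHOW GLOBAL STATUS counters."""
    reads  = ("Com_select",)
    writes = ("Com_insert", "Com_delete", "Com_update")
    delta = {k: second[k] - first[k] for k in reads + writes}
    qps = sum(delta.values()) / interval_s        # all statements
    tps = sum(delta[k] for k in writes) / interval_s  # writes only
    return qps, tps

# Two snapshots taken 1 second apart (illustrative numbers):
t0 = {"Com_select": 5000, "Com_insert": 800, "Com_delete": 100, "Com_update": 600}
t1 = {"Com_select": 6800, "Com_insert": 900, "Com_delete": 120, "Com_update": 680}
qps, tps = qps_tps(t0, t1, 1)
print(qps, tps)   # 2000.0 200.0
```

A monitoring script would replace the hard-coded dicts with two real queries against the server, one second apart.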

5.2 enable slow query log

Enable the slow query log in MySQL to find out which SQL statements are slow. Variables set with SET take effect immediately but are lost when the service restarts; add the parameters to my.cnf to make them permanent.

mysql> set global slow_query_log = on;  # enable the slow query log
mysql> set global slow_query_log_file = '/var/log/mysql/mysql-slow.log';  # slow query log file location
mysql> set global log_queries_not_using_indexes = on;  # log queries that use no index
mysql> set global long_query_time = 1;  # only log queries taking longer than 1s
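To make the same settings permanent, the equivalent my.cnf entries would look roughly like this (option names per MySQL 5.5+; adjust the log path to your environment):

```
[mysqld]
slow_query_log                = 1
slow_query_log_file           = /var/log/mysql/mysql-slow.log
log_queries_not_using_indexes = 1
long_query_time               = 1
```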

To analyze the slow query log, use the mysqldumpslow tool that ships with MySQL:
mysqldumpslow -t 3 /var/log/mysql/mysql-slow.log  # show the three slowest queries
Percona's pt-query-digest tool can also analyze slow logs, binlogs, and general logs, with more comprehensive analysis features.
Analyze a slow query log: pt-query-digest /var/log/mysql/mysql-slow.log
Analyze a binlog: mysqlbinlog mysql-bin.000001 > mysql-bin.000001.sql
pt-query-digest --type=binlog mysql-bin.000001.sql
Analyze a general log: pt-query-digest --type=genlog localhost.log

5.3 Database Backup
Backing up the database is the most basic and the most important task; skipping it can have very serious consequences. Since databases often reach hundreds of gigabytes, backups are time-consuming, so choose an efficient backup strategy; for large data volumes, incremental backup is the usual choice. Common backup tools are mysqldump, mysqlhotcopy, and xtrabackup. mysqldump suits small databases: being a logical backup, it takes a long time to back up and restore. mysqlhotcopy and xtrabackup are physical backups, fast to back up and restore, and can copy hot data without affecting database service. xtrabackup is recommended, as it supports incremental backup.
Xtrabackup backup tools use blog:

5.4 database repair
Sometimes the MySQL server loses power or shuts down abnormally, corrupting tables so their data cannot be read. In this case, the bundled tools myisamchk and mysqlcheck can repair them.
myisamchk: repairs only MyISAM tables; the database must be stopped first.
Common parameters:
-f --force: force repair, overwriting old temporary files; not normally used
-r --recover: recovery mode
-q --quick: quick recovery
-a --analyze: analyze the table
-o --safe-recover: old recovery mode; use if -r cannot repair the table
-F --fast: only check tables that were not closed properly
Quick repair of the weibo database:

# cd /var/lib/mysql/weibo
# myisamchk -r -q *.MYI

mysqlcheck: works on both MyISAM and InnoDB tables without stopping the database. To repair a single table, append the table name after the database name, separated by a space.
Common parameters:
-A --all-databases: check all databases
-r --repair: repair tables
-c --check: check tables (the default)
-a --analyze: analyze tables
-o --optimize: optimize tables
-q --quick: fastest check or repair
-F --fast: only check tables that were not closed properly
Quick repair of the weibo database:

mysqlcheck -r -q -uroot -p123 weibo

5.5 Checking CPU and I/O performance
# View CPU performance (the original omitted the command; mpstat matches the options described)

# mpstat -P ALL 1
# The -P option selects which CPUs to display; ALL shows all of them.

# View I/O performance (iostat matches the options and columns described)

# iostat -x -m 1
# The -m option displays values in MB; the default is KB.
# %util: at 100%, I/O is saturated.
# await: how long a request waits in the queue, which directly affects read latency.
I/O limit: IOPS (r/s + w/s), usually around 1,200 for ordinary disks (IOPS: read/write I/O operations per second).
I/O bandwidth: in sequential read/write mode, SAS disks reach a theoretical value of about 300 MB/s, and SSDs about 600 MB/s.

The above are the major optimization solutions I have summarized over three years of working with MySQL. My abilities are limited, so this is not comprehensive, but these solutions should basically cover the database needs of small and medium-sized enterprises. Because of the original design limits of relational databases, they do not perform well when large companies push massive data into them for query and analysis; that is why NoSQL became popular. Non-relational databases handle large data volumes with high performance and make up for some shortcomings of relational databases. Most companies now put part of their business data into NoSQL stores such as MongoDB and HBase, use distributed file systems such as HDFS and GFS for storage, and use Hadoop, Spark, and Storm for massive data computation and analysis. These are cutting-edge technologies relevant to O&M and key subjects to learn in the storage field. Let's work on them together! If you have better optimization solutions, feel free to share.

101 Tips for MySQL Tuning and Optimization

MySQL is a powerful open-source database. With more and more database-driven applications, people keep pushing MySQL to its limits. Here are 101 tips for tuning and optimizing a MySQL installation. Some apply to specific installation environments, but the ideas are universal. I have divided them into categories to help you master MySQL tuning and optimization.

MySQL server hardware and operating system tuning:

1. Have enough physical memory to load the entire InnoDB file set into memory; accessing files in memory is far faster than on disk.
2. Avoid swap at all costs; swapping reads from disk and is very slow.
3. Use battery-backed RAM.
4. Use an advanced RAID level (RAID: Redundant Array of Inexpensive Disks), preferably RAID 10 or higher.
5. Avoid RAID 5; the database integrity checks it requires are costly.
6. Separate the operating system and data partitions, not only logically but physically; the OS's read/write activity affects database performance.
7. Put MySQL temporary space and replication logs on partitions separate from the data; background disk activity affects database performance.
8. More disks mean more speed.
9. Use better, faster disks.
10. Use SAS (Serial Attached SCSI) instead of SATA (Serial ATA).
11. Smaller disks are faster than larger ones, especially in RAID configurations.
12. Use RAID controllers with battery-backed cache.
13. Avoid software RAID.
14. Consider solid-state I/O cards (not disk drives) for data partitions; these cards sustain write speeds around 2 GB/s for almost any volume of data.
15. On Linux, set swappiness to 0; there is no reason to swap file cache on a database server (that is a desktop optimization).
16. Mount file systems with noatime and nodiratime if possible; there is no reason to update access times on database files.
17. Use the XFS file system; it is faster and leaner than ext3, has more journaling options, and ext3 has been shown to double-buffer with MySQL.
18. Tune the XFS log and buffer variables for the highest performance.
19. On Linux, use the NOOP or DEADLINE I/O scheduler; the CFQ and ANTICIPATORY schedulers are very slow by comparison.
20. Use a 64-bit operating system, so MySQL can address and use more memory.
21. Remove unused packages and daemons from the server to reduce resource usage.
22. Put hosts that use MySQL, and your MySQL host itself, into a hosts file to avoid DNS lookups.
23. Never force-kill a MySQL process; you can corrupt the database and any running backup.
24. Dedicate the server to MySQL; background processes and other services steal CPU time from the database.
MySQL configuration:
25. Use innodb_flush_method=O_DIRECT to avoid double buffering when writing.
26. Avoid O_DIRECT on the ext3 file system; it serializes all writes.
27. Allocate enough innodb_buffer_pool_size to hold the entire InnoDB data set in memory and read less from disk.
28. Do not set innodb_log_file_size too large; flushing more often is fine and shortens the recovery time after a crash.
29. Do not mix the innodb_thread_concurrency and thread_concurrency variables; they are incompatible.
30. Keep max_connections modest; too many connections exhaust RAM and can lock up the MySQL server.
31. Keep thread_cache relatively high, around 16, to avoid slowness when opening connections.
32. Use skip-name-resolve to remove DNS lookups.
33. Use the query cache if your queries repeat and the data rarely changes; with frequently changing data, the query cache will disappoint you.
34. Increase tmp_table_size to prevent temporary tables from spilling to disk.
35. Increase max_heap_table_size to prevent temporary tables from spilling to disk.
36. Do not set sort_buffer_size too high; it is allocated per connection and will exhaust memory quickly.
37. Use key_read_requests and key_reads to size key_buffer; key_read_requests should generally be much higher than key_reads, otherwise key_buffer is not being used efficiently.
38. Setting innodb_flush_log_at_trx_commit to 0 improves performance, but keep the default of 1 if you must guarantee data integrity and keep replication from lagging.
39. Have a test environment where you can try configurations and restart often, without affecting production.
MySQL schema optimization:
40. Keep your database trim and organized.
41. Archive old data, removing unnecessary rows from query results and searches.
42. Index your data.
43. Do not over-index; compare your indexes against your actual queries.
44. Compress TEXT and BLOB data to save space and reduce disk reads.
45. UTF-8 and UTF-16 are both slower than latin1.
46. Use triggers sparingly.
47. Keep redundant data to a minimum; do not duplicate data unnecessarily.
48. Use linking tables instead of extending rows.
49. Pay attention to data types; use the smallest type that fits your real data.
50. If the other columns are queried often but the BLOB/TEXT data is not, separate the BLOB/TEXT data into its own table.
51. Check and optimize tables regularly.
52. Rewrite InnoDB tables frequently to optimize them.
53. Sometimes it is faster to drop indexes, add a column, and then re-add the indexes.
54. Use different storage engines for different needs.

55. Use the ARCHIVE storage engine for log or audit tables; it is much more efficient for writes.
56. Store session data in a cache (memcached/redis) rather than MySQL; the cache expires values automatically and saves you from pushing ephemeral data through costly MySQL reads and writes.
57. Use VARCHAR instead of CHAR for variable-length strings; it saves space, since CHAR is fixed-length and VARCHAR is not (utf8 is not affected by this).
58. Make schema changes gradually; a small change can have a huge impact.
59. Test all schema changes in a development environment that mirrors production.
60. Do not arbitrarily change values in your configuration file; it can have disastrous effects.
61. Sometimes less is more in MySQL configs.
62. When in doubt, use a generic MySQL configuration file.

Query optimization:

63. Use the slow query log to find slow queries.
64. Use EXPLAIN to judge whether a query is executing as expected.
65. Test your queries regularly to see whether they still run optimally; performance changes over time.
66. Avoid count(*) over an entire table; it may lock the whole table.
67. Keep queries uniform so that similar queries can hit the query cache.
68. Use GROUP BY instead of DISTINCT where appropriate.
69. Use indexed columns in WHERE, GROUP BY, and ORDER BY clauses.
70. Keep indexes simple; do not include the same column in multiple indexes.
71. Sometimes MySQL chooses the wrong index; use USE INDEX in that case.
72. Check for problems using SQL_MODE=STRICT.
73. For indexed fields with fewer than 5 distinct values, use UNION with LIMIT rather than OR.
74. To avoid a SELECT before an UPDATE, use INSERT ... ON DUPLICATE KEY UPDATE or INSERT IGNORE instead.
75. Do not use MAX; use an indexed column with an ORDER BY clause instead.
76. Avoid ORDER BY RAND().

77. LIMIT M,N can actually slow down queries in some cases; use it sparingly.
78. Use UNION instead of subqueries in the WHERE clause.
79. For UPDATEs, use SHARE MODE to prevent exclusive locks.
80. When you restart MySQL, remember to warm up the database so your data is back in memory and queries are fast again.
81. Use DROP TABLE followed by CREATE TABLE instead of DELETE FROM to remove all data from a table.
82. Query only the data you need; fetching extra data is wasteful.
83. Consider persistent connections instead of repeated new connections, to reduce overhead.
84. Benchmark queries under realistic server load; sometimes a simple query affects other queries.
85. When load rises on your server, use SHOW PROCESSLIST to find slow and problematic queries.
86. Test every suspicious query in a development environment populated with data mirrored from production.
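Tip 74 in practice: a single upsert statement replaces the SELECT-then-UPDATE round trip (the table and columns here are hypothetical, and a unique key on page_id is assumed):

```sql
-- Inserts the row, or bumps the counter if the unique key already exists.
INSERT INTO page_views (page_id, views)
VALUES (42, 1)
ON DUPLICATE KEY UPDATE views = views + 1;
```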
MySQL backup procedures:
87. Back up from a secondary replication server.
88. Stop replication during the backup to avoid inconsistencies in data dependencies and foreign keys.
89. Stop MySQL entirely and take a backup of the database files.
90. If you back up with mysqldump, also back up the binary log files, to make sure replication can resume without interruption.
91. Do not trust LVM snapshots blindly; they can create data inconsistencies that cause problems later.
92. Dump tables individually to make single-table recovery easier, if the data is isolated from other tables.
93. Use mysqldump with --opt.
94. Check and optimize tables before backing up.
95. For faster imports, temporarily disable foreign key constraints during the import.
96. For faster imports, temporarily disable uniqueness checks during the import.
97. After each backup, calculate the size of databases, tables, and indexes to monitor data growth.
98. Monitor replication instances for errors and lag with an automated scheduled script.
99. Back up regularly.
100. Test your backups regularly.
And the last one, 101: monitor MySQL. Monitis Unveils The World's First Free On-demand MySQL Monitoring.
