MySQL Enterprise-Level Log Management, Backup and Recovery Tutorial

Tags: mysql backup, mysql slow query log, percona


Background

As a business grows, a company's operations and scale keep expanding, and its website accumulates a large amount of user information and data. For an Internet company, user and business data are its foundation; losing or corrupting that data is a disaster. To prevent data loss caused by system errors or failures, the reliability of user data must be strengthened: data must be backed up thoroughly and be restorable immediately when a fault occurs.

Data backup form

File backup:

Use Linux archiving tools (for example, tar) to pack all files and store the archives on local and remote servers. When the files need to be restored, they are extracted back to the specified location.

Database Data Backup:

In industries with high data-reliability requirements, such as banking, securities, and telecommunications, an unexpected shutdown or data loss can be extremely costly. The database administrator should therefore formulate detailed backup and disaster-recovery policies based on the specific business requirements to ensure high availability of the data. Backup is a long-term, routine process, while recovery is performed only after an incident; recovery can be seen as the inverse of backup, and how well you can recover depends largely on how well you backed up. In addition, the steps the database administrator takes during recovery directly affect the final result.

Data Backup Type

Classified by the scope of data backed up, backups can be divided into full backup, incremental backup, and differential backup.

1. Full backup: backs up the data and data structure of the entire database.

Advantages: intuitive and easy to understand

Disadvantages: 1. a large amount of the backup data is duplicated, which occupies space and increases cost; 2. the backup data volume is large, so the backup takes a long time.

Full Backup means backing up the data and data structure of the entire database. The advantage of this backup method is that it is intuitive and easy to understand. In addition, in the event of a data loss disaster, you only need to use the backup files before the disaster to restore the lost data.

However, it also has shortcomings. First, because the entire system is backed up every day, the backups contain a large amount of duplicate data; this duplicate data occupies a large amount of space, which means higher costs. Second, because the amount of data to be backed up is large, each backup takes a long time. For organizations with busy business hours and a limited backup window, this backup policy alone is not a wise choice.

2. Incremental backup: each backup contains only the data added and modified since the last backup.

Advantage: No duplicate backup data, saving space

Disadvantage: It is troublesome to recover data. Any failure in backup data may cause data loss.

That is, each backup contains only the data added and modified since the last backup. The advantages of this approach are obvious: no data is backed up twice, which saves space and shortens the backup time. Its disadvantage is that recovering data after a disaster is cumbersome. For example, if the system fails on Thursday morning and a large amount of data is lost, the system needs to be restored to the state of Wednesday night. The administrator must first restore the full backup from Monday, then apply Tuesday's incremental backup, and then Wednesday's. Obviously, this is much more work than with the first strategy. In addition, this kind of backup is less reliable: the backups depend on each other like links in a chain, and a problem with any one of them breaks the chain.

3. Differential backup: each backup contains the data added and modified since the last full backup.

That is, each backup contains the data that has been added or modified since the last full backup. For example, the administrator performs a full backup of the system on Monday. On each of the following days, the administrator backs up to tape everything that differs from Monday. On Tuesday, assume only an asset list was changed; the administrator only needs to back up that asset list. On Wednesday, a new product catalog is added, so the administrator backs up that catalog together with Tuesday's asset list. If a payroll file is added on Thursday, then on Thursday the payroll, the product catalog, and the asset list are all backed up. As can be seen, full backup takes the longest time to make but the shortest time to restore and is the easiest to operate; when the amount of data in the system is small, full backup is the most reliable. Differential backup avoids the defects of the other two policies. The different backup types can also be combined.

Examples of combined applications of different Backup Types

Full backup and differential backup

Perform a full backup on Monday and differential backups from Tuesday to Friday. If the data is damaged on Friday, you only need to restore Monday's full backup and Thursday's differential backup. This policy takes more time to back up data but less time to restore it.

Full backup and Incremental Backup

Perform a full backup on Monday and incremental backups from Tuesday to Friday. If data is corrupted on Friday, you need to restore the full backup from Monday plus all the incremental backups from Tuesday to Friday. This policy takes less time to back up data but more time to restore it.

Classified by method: hot backup, warm backup, and cold backup

Hot backup: the backup is taken directly while the database is running and has no impact on running operations.

Cold backup: the backup is taken while the database is stopped. This is the simplest method; generally you only need to copy the relevant physical database files.

Warm backup: the backup is also performed while the database is running, but it affects current database operations. For example, a global read lock may be added to ensure the consistency of the backed-up data. (When you back up a table, the table is locked first so that no one can insert, query, delete, or modify its data; the data therefore does not change during the backup, which guarantees consistency.)

Physical backup: directly copies the data files for backup (the data files are copied as-is, in binary format)

Advantage: You can copy the backup file directly without any additional tools.

Disadvantage: it is tied to the storage engine, and cross-platform portability is weak.

Logical backup: "exports" the data from the database into another form of storage, typically SQL statements written to a text file (which is larger than the binary data files)

Advantages: the backup can be processed with a text editor, recovery is simple and can be performed over the network, and it helps avoid data corruption.

Disadvantages: the backup file is large and the backup is slow; the precision of floating-point numbers cannot be guaranteed; and after restoring from a logical backup the indexes still have to be rebuilt, which consumes a lot of CPU resources.

Backup Flowchart

Mysql Log Introduction

MySQL Log:

MySQL logs mainly include error logs, query logs, slow query logs, transaction logs, and binary logs.

Logs are an important part of the MySQL database. The log files record changes that occur while the MySQL database is running, including client connections, SQL statement execution, and error information. When the database is accidentally damaged, you can look up the cause of the failure in the logs and recover data through the log files.

Mysql Error Log

In the MySQL database, the error log is enabled by default and cannot be disabled. By default, error logs are stored in the MySQL data directory, in a file usually named hostname.err, where hostname is the host name of the server.

You can configure the error log yourself. What gets recorded is controlled by log_error and log_warnings: log_error defines whether the error log is enabled and where it is stored, and log_warnings defines whether warning messages are also written to the error log. By default, the error log records: information produced during server startup and shutdown (not necessarily errors, for example how MySQL opens the InnoDB tablespace files and initializes its storage engines); errors that occur while the server is running; information produced when the event scheduler runs an event; and, on a slave server, information produced when the server process is started.

mysql -uroot -p

show global variables like '%log%';

You can modify log_error in the configuration file.

vim /etc/my.cnf    # for example, change the error log path to /var/log/mariadb.err

log-error = /var/log/mariadb.err

Then restart the database service, connect to the database, and view the global variables again; the modification has taken effect.

View error log Content
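A minimal sketch of inspecting the error log with ordinary file tools (the path follows the log-error setting above):

tail -n 50 /var/log/mariadb.err    # show the most recent error log entries
tail -f /var/log/mariadb.err       # follow new entries as they are written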

Temporary modification:

For the error log, log_error can be set directly to a file path or to ON | OFF;

log_warnings can only be set to 1 | 0.

Permanent modification:

To change the location of an error log, use log_error as follows:

[root@stu18 data]# vim /etc/my.cnf

[mysqld]

log_error = DIR/[filename]

Explanation: the DIR parameter specifies the path of the error log and filename is the name of the error log file; if filename is not specified, the host name is used by default. After modifying the configuration file, restart the mysql server.

Note: before MySQL 5.5.7, the database administrator could delete old error logs to free disk space on the MySQL server, and then use the mysqladmin command to start a new error log.

The syntax of the mysqladmin command is as follows:

mysqladmin -u root -p flush-logs

You can also log in to the MySQL database and run the FLUSH LOGS statement to start a new error log.
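One common rotation pattern, shown here as a hedged sketch (behavior differs slightly across versions; the path follows the earlier example):

mv /var/log/mariadb.err /var/log/mariadb.err.old    # archive the current error log
mysqladmin -u root -p flush-logs                    # MySQL begins writing a new error log file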

Mysql query log

By default, the query log is disabled. Because the query log records every user operation, including inserts, deletes, queries, and updates, it generates a huge amount of information under heavy concurrency, causing unnecessary disk I/O and hurting MySQL performance. It is not recommended to enable the query log except for debugging the database.

mysql

show global variables like '%log%';
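If the general query log is needed temporarily for debugging, it can be toggled at runtime. A minimal sketch, assuming the SUPER privilege and an illustrative file path:

set global general_log_file = '/var/log/mariadb/general.log';    -- hypothetical path
set global general_log = ON;     -- enable the general query log
set global general_log = OFF;    -- disable it again when debugging is finished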

Mysql slow query log

Slow query logs record query statements whose execution time exceeds a specified threshold. Through the slow query log you can find out which queries execute inefficiently and optimize them to improve server performance. It is strongly recommended to enable this log: it has little impact on server performance, yet it records the statements that take a long time to execute on the MySQL server, which helps locate performance problems.

Start and set slow query logs:

1. The slow-query-log option in the configuration file my.cnf enables the slow query log;

The format is as follows:

vim /etc/my.cnf

[mysqld]

slow-query-log = ON

slow-query-log-file = /var/log/mariadb/slow.log

long-query-time = 0.01

The DIR parameter specifies the storage path of the slow query log and the filename parameter specifies the log file name; the full name of the generated log file is filename-slow.log. If no path is specified, the slow query log is stored in the MySQL data directory by default; if no file name is specified, the default file name is hostname-slow.log.

2. Log on to the mysql server and set it directly.

The method is as follows:

First, you must have the global (SUPER) privilege, then execute mysql> set global slow_query_log = 1; (this takes effect at runtime without a restart; statements whose execution time exceeds long_query_time are then written to the slow query log)

What is the default time limit of slow query logs?

This threshold is set with the long_query_time option. The time is measured in seconds and can be precise to microseconds. If a query takes longer than this value (10 seconds by default), the statement is recorded in the slow query log. The server's current value can be viewed as follows:
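A minimal sketch of viewing and adjusting the threshold (the value 5 is only an example):

mysql> show global variables like 'long_query_time';
mysql> set global long_query_time = 5;    -- change the threshold; affects new sessions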

Note: the slow query time does not mean only that the statement itself executed for more than 10 seconds; time spent blocked because other sessions held needed resources is also counted. The duration recorded in the slow query log is therefore the whole time from the start of the query to its end, whatever the cause.

View slow query log Content
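A hedged sketch of inspecting the slow log, assuming the file path configured above; mysqldumpslow ships with MySQL distributions and summarizes the entries:

tail -n 50 /var/log/mariadb/slow.log                    # raw slow log entries
mysqldumpslow -s t -t 10 /var/log/mariadb/slow.log      # top 10 statements sorted by total time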

Mysql transaction log

Transaction: a transaction is a set of operations that are committed as one unit; either all of the operations take effect or none of them do.

Transaction log (specific to InnoDB): it helps make transactions more efficient. With a transaction log, the storage engine only needs to modify its in-memory copy of the table data and record the change in the transaction log, which is persisted on disk, instead of flushing the modified data pages to disk on every change. The transaction log is appended to, so writing it is sequential I/O within a small area of the disk, unlike random I/O that moves the disk head between many places; transaction logging is therefore much faster. Once the transaction log is persisted, the modified data in memory can be flushed back to disk gradually in the background. Most storage engines work this way today; it is usually called write-ahead logging, and modifying data means writing to disk twice.

MySQL's transaction-based operations change the corresponding data in memory first. After the modification, the change is considered effective even though the data pages have not yet been written to disk: the transaction log is written first and the data is flushed to disk later (the transaction log is written to disk in append order, which greatly improves transaction efficiency).

If a change has been recorded in the transaction log and persisted, but the data has not yet been written back to disk, and the system crashes, the storage engine can automatically recover the modified data when it restarts. The exact recovery method depends on the storage engine.

The innodb Engine supports transactions.

View the transaction log settings:

show global variables like '%log%';
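The InnoDB-specific settings are easier to see with a narrower pattern; a minimal sketch (these are standard InnoDB variables):

show global variables like 'innodb_log%';                      -- redo log file size, count, and directory
show global variables like 'innodb_flush_log_at_trx_commit';   -- how the log is flushed at commit time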

Mysql binary log

Binary logs are also called change logs. They mainly record statements that modify data or may cause data changes in MySQL, along with the time each statement occurred, how long it ran, and the data it operated on. The binary log can therefore be used to find out what changes were made in the MySQL database. By default, the maximum size of a single binary log file is 1 GB.

show global variables like '%log%';

sql_log_bin = {ON | OFF} # Controls at the session level whether statements are written to the binary log (statements executed over a mysql connection run at the session level; a file imported directly into mysql is not). The default is ON, meaning changes are recorded. The value can be changed per session, but doing so requires the SUPER privilege.

binlog_cache_size = 32768 # Default 32768. When binary logging is enabled, MySQL allocates this much memory per session as a cache for temporarily holding binlog data, which improves binlog write efficiency. If the database has no large transactions and writes are not very frequent, 2 MB to 4 MB is a suitable choice; if there are many large transactions and heavy writes, binlog_cache_size can be increased accordingly. The status counters binlog_cache_use and binlog_cache_disk_use show whether the configured binlog_cache_size is sufficient, or whether many transactions had to spill to temporary disk files (binlog_cache_disk_use) because the cache was too small.

log_bin = mysql-bin # Specifies the base name and location of the binary log; by default it is placed in the data directory.

binlog-format = {ROW | STATEMENT | MIXED} # Specifies the binary log format; MIXED is recommended. If the binary log format is set but binary logging itself is not enabled, MySQL generates a warning at startup and records it in the error log.

Row: does not record the SQL statement context, only how each row of data was modified.

Statement: records every SQL statement that modifies data.

Mixed: a mix of the two formats above; the server chooses per statement.

sync_binlog = 10 # Sets how often the binary log is synchronized to the disk file. 0 means the server never forces a sync; any positive value is the number of binary log writes after which a sync is forced. When autocommit is 1, every statement execution causes a binary log synchronization; otherwise, each transaction commit does.

You can enable the binary log by editing the log-bin option in my. cnf. The format is as follows:
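The original does not show the snippet, so the following is an illustrative sketch (the path /data/mysql is an assumption; substitute your own DIR and filename):

[mysqld]
log-bin = /data/mysql/mysql-bin
binlog-format = MIXED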

The DIR parameter specifies the storage path of the binary log files, and the filename parameter specifies their base name; files are created as filename.number, where number is 000001, 000002, and so on. Every time the MySQL service is restarted or mysql> flush logs is executed, a new binary log file is generated and the number keeps increasing. In addition, a file named filename.index is generated, which stores the list of all binary log files and serves as the index of the binary logs.

Every time the Database Service is restarted, a binary log file is generated.

View binary logs:

Binary logs are stored in a binary format. This format can hold more information and makes writing the binary log more efficient, but it also means you cannot open and read binary logs directly with ordinary viewing commands.

Position: the location recorded in the binary log, usually the end position of the previous event. Each log file also has its own metadata, so for the MySQL version used here the starting position of a binary log file is usually 107.

Connect to mysql and enter Several SQL statements that can modify data to generate binary logs.

View the specified binary log information
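A minimal sketch of the statements used to inspect binary logs from within MySQL (the file name mysql-bin.000001 is an example):

mysql> show binary logs;                             -- list all binary log files and their sizes
mysql> show master status;                           -- current log file and write position
mysql> show binlog events in 'mysql-bin.000001';     -- events recorded in a specific log file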

View the binary log in the command line:

Because binary logs cannot be opened and viewed directly with cat or similar tools, the mysqlbinlog command must be used. However, it is not recommended to run mysqlbinlog against the binary log file currently in use while MySQL is performing reads and writes; if you need to examine the current file, run flush logs first so the server switches to a new one. How to use the mysqlbinlog command:

Export the information of this database:

[root@stu18 data]# mysqlbinlog mysql-bin.000017 > /tmp/a.sql

Import information of this database:

[root@stu18 data]# mysql < /tmp/a.sql

Delete binary log information:

Binary logs record a large amount of information (including information that is no longer useful). If they are never cleaned up, they waste a lot of disk space. However, careless deletion can leave the database unrecoverable after a crash, so before deleting binary logs you must back up the database together with them, and you should only delete binary logs written before that backup; newer log files must be kept (they are what makes point-in-time recovery possible). Do not simply delete the files after shutting the MySQL server down, because that can cause database errors. The procedure is: export and back up the database and the binary log files, compress and archive them, and then delete the binary logs as follows:

You can use the reset master statement to delete all binary logs. The statement format is as follows:

mysql> reset master;

Query OK, 0 rows affected (0.17 sec)

mysql> show binary logs;
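reset master deletes every binary log, which is often too drastic. As a hedged alternative (not shown in the original), PURGE BINARY LOGS removes only the files older than a given log file or date; the file name and date below are examples:

mysql> purge binary logs to 'mysql-bin.000010';
mysql> purge binary logs before '2017-12-01 00:00:00';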

Mysql Backup Tool

Mysqldump: a logical backup tool suitable for all storage engines. It can perform warm backups and supports full and partial backups; for the InnoDB storage engine, hot backup is supported;

Cp, tar, and other file system tools:Physical backup tools, suitable for all storage engines; used for cold backup, can achieve full backup, partial backup;

Lvm2 snapshot:Almost hot backup; physical backup using file system tools;

Mysqlhotcopy: almost a cold backup; applicable only to the MyISAM storage engine;

Mysql backup solutions

① mysqldump + binlog (recommended):

Full backup with mysqldump; incremental backup by backing up the binary logs

② Xtrabackup:

For InnoDB: hot backup; full and incremental backups are supported

For MyISAM: warm backup; only full backups are supported

③ Lvm2 snapshot + binlog:

Almost hot backup, physical backup

Syntax format of mysqldump + binlog commands

mysqldump [OPTIONS] database [tables]: back up one or more tables of a single database

mysqldump [OPTIONS] --databases DB1 [DB2 DB3 ...]: back up one or more databases

mysqldump [OPTIONS] --all-databases: back up all databases

Other options

-x, --lock-all-tables: lock all tables in all databases

-l, --lock-tables: lock the tables being backed up

--single-transaction: start one large transaction to perform the backup (suitable for transactional engines such as InnoDB)

-C, --compress: compress the data transferred between client and server

-E, --events: back up the event scheduler events of the specified databases

-R, --routines: back up stored procedures and stored functions

--triggers: back up triggers

--master-data={0|1|2}

0: do not record the binary log position

1: record a CHANGE MASTER TO statement; the statement is not commented out

2: record the CHANGE MASTER TO statement as a comment

-F, --flush-logs: execute FLUSH LOGS after the tables are locked
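For reference, a typical full backup combining several of these options might look like the following (an illustrative sketch only; the output path is an assumption, and the choice between --single-transaction and --lock-all-tables depends on your storage engines):

mysqldump --single-transaction --routines --events --triggers --master-data=2 --all-databases > /backup/full-$(date +%F).sql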

Mysqldump + binlog backup and recovery

1. Modify the mysql configuration file and enable the binary log

vim /etc/my.cnf

log-bin = master-log

Restart mysql

systemctl restart mariadb

Log on to mysql to check whether binary logs are generated.
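A quick check that the setting took effect (file names follow the log-bin value above):

mysql> show binary logs;      -- a file named master-log.000001 should be listed
mysql> show master status;    -- the current binary log file and write position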

2. Prepare the Backup Directory
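The later steps write full backups to /backup and binary log backups to /backup/binlog, so create both directories first (names follow the paths used below):

mkdir -p /backup /backup/binlog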

3. Prepare the backup database and table

mysql

create database magedu;

use magedu;

create table m26 (id int, name char(20));

4. Complete backup

mysqldump --all-databases --lock-all-tables --flush-logs --master-data=2 > /backup/$(date +%F_%T)-all.sql

5. Insert data into the table

mysql

use magedu;

show master status;

insert into m26 (id, name) values (1, 'fuming'), (2, 'zhangmeng');

6. Perform Incremental backup to back up binary logs

mysqlbinlog --start-position=245 --stop-position=479 /var/lib/mysql/master-log.000002 > /backup/binlog/binlog-$(date +%F_%T).sql

How to determine the start and stop positions:

show master logs;

show binlog events in 'master-log.000002';

The chosen stop position must include the commit of the transaction.

7. Continue to insert data, then delete the database without taking a backup, to simulate a misoperation.

8. Data recovery. Because the database was deleted without a backup, we must first protect the last binary log; if it is lost, the data cannot be recovered. View the position values just before the delete operation: mysqlbinlog /var/lib/mysql/master-log.000002

9. Back up the last binary log

mysqlbinlog --start-position=467 --stop-position=677 /var/lib/mysql/master-log.000002 > /backup/binlog/binlog-$(date +%F_%T).sql

ls /backup/binlog/

10. Import all the backups in order

mysql < /backup/2017-12-07_20\:20\:04-all.sql              # import the full backup

mysql < /backup/binlog/binlog-2017-12-07_20\:45\:17.sql    # import the incremental backup

mysql < /backup/binlog/binlog-2017-12-07_21\:05\:42.sql    # import the incremental backup taken just before the database was deleted

11. view the database and data

Xtrabackup

Xtrabackup is a MySQL backup tool provided by Percona. According to the official introduction, Xtrabackup is an open-source tool that can perform hot backups of InnoDB and XtraDB databases.

Features:

(1) fast and reliable backup process

(2) The backup process does not interrupt ongoing transactions.

(3) Saving disk space and traffic based on compression and other functions

(4) automatic backup check

(5) Fast Restoration

Tutorial steps:

(1) Install xtrabackup

yum install percona-xtrabackup -y

(2) Full backup

innobackupex --user=root /backup

(3) add data

mysql -uroot

create database magedu;

use magedu;

create table m26 (id int, name char(20));

insert into m26 values (007, 'fuming'), (008, 'zhangmeng');

(4) Incremental Backup

innobackupex --incremental /backup/ --incremental-basedir=/backup/2017-11-16_16-53-43

(5) data recovery preparation
1. Prepare the full backup:

innobackupex --apply-log --redo-only BASE-DIR (BASE-DIR is the directory of the full backup)

Example: innobackupex --apply-log --redo-only /backup/2017-11-16_16-53-43

2. Apply the incremental backup:

innobackupex --apply-log --redo-only BASE-DIR --incremental-dir=INCREMENTAL-DIR-1 (INCREMENTAL-DIR-1 is the directory of the incremental backup)

Example: innobackupex --apply-log --redo-only /backup/2017-11-16_16-53-43 --incremental-dir=/backup/2017-11-16_17-17-52/
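Note (not in the original): Percona's documentation for innobackupex recommends omitting --redo-only when applying the last incremental backup, and finishing with one more prepare pass on the base directory before restoring. A hedged sketch using the same example directory:

innobackupex --apply-log /backup/2017-11-16_16-53-43    # final prepare pass before copy-back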

(6) restore Data

mv /var/lib/mysql /var/lib/mysql.bak    # simulate deleting the database

mkdir /var/lib/mysql

cd /var/lib/mysql

innobackupex --copy-back /backup/2017-11-16_16-53-43    # restore the full backup
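After --copy-back the restored files are owned by the user who ran innobackupex, so ownership usually has to be corrected before starting MySQL (standard practice, not shown in the original):

chown -R mysql:mysql /var/lib/mysql
systemctl start mariadb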

Lvm2 snapshot + binlog

Before doing the experiment, let's review the knowledge of lvm2-snapshot.

An LVM snapshot simply records, at a point in time, the metadata of all files in the source partition. If a source file has not changed, accessing the corresponding file through the snapshot volume simply refers to the source file in the source partition; if the source file has changed, the snapshot volume still presents the file as it was at snapshot time. Snapshot volumes are therefore used to back up files.

Tutorial steps:

1. Add a hard disk and set the disk type to lvm.

echo '- - -' > /sys/class/scsi_host/host2/scan    # rescan the SCSI bus so the new disk is detected

2. Partition the disk (in fdisk, press t and set the partition type to 8e, Linux LVM)

partx -a /dev/sdb    # make the kernel recognize the new partition

3. pvcreate /dev/sdb1    # create a physical volume

4. vgcreate myvg /dev/sdb1    # create a volume group

5. lvcreate -n mydata -L 5G myvg    # create a logical volume

6. mkfs.ext4 /dev/mapper/myvg-mydata    # format the logical volume

7. mount /dev/mapper/myvg-mydata /lvm_data    # mount the logical volume

8. Modify the MySQL configuration so that the data directory is on the logical volume: datadir = /lvm_data

9. service mysqld restart    # restart the MySQL service

10. create and operate databases

11. mysql> flush tables with read lock;    # lock all tables (a consolidated sketch of this lock window follows the step list)

12. lvcreate -L 1G -n mydata-snap -p r -s /dev/mapper/myvg-mydata

# Create a read-only snapshot volume; output: Logical volume "mydata-snap" created.

13. mysql> unlock tables;    # unlock all tables

14. mount /dev/myvg/mydata-snap /lvm_snap/    # mount the snapshot

15. tar cvf /tmp/mysqlback.tar ./*    # package the physical backup (run from inside /lvm_snap)

16. umount /lvm_snap/    # unmount the snapshot

17. lvremove /dev/myvg/mydata-snap    # remove the snapshot

18. rm -rf /lvm_data/*    # delete the MySQL data to simulate a failure

19. cd /lvm_data && tar xvf /tmp/mysqlback.tar    # extract the backup to restore the deleted data

20. Verify that the database data is restored correctly.
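Because this is the lvm2 snapshot + binlog solution, it is common practice to also record the binary log position while the read lock from step 11 is held, so the snapshot can later be combined with binlog-based point-in-time recovery. A hedged sketch, run inside one mysql session (the client's system command runs a shell command without closing the session, which would otherwise release the lock):

mysql> flush tables with read lock;
mysql> show master status;    -- note the current binary log file and position
mysql> system lvcreate -L 1G -n mydata-snap -p r -s /dev/mapper/myvg-mydata
mysql> unlock tables;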

Summary

Backup method | Backup speed | Recovery speed | Convenience | Typical use
mysqldump | Slow | Slow | General; independent of the storage engine | Backup of small and medium data volumes
lvm2 snapshot | Fast | Fast | Supports nearly hot backup; fast | Backup of small and medium data volumes
xtrabackup | Fast | Fast | Hot backup for InnoDB; strongly tied to the storage engine | Backup of larger data volumes
cp | Fast | Fast | Moderate; low flexibility | Simple, low-demand data backup

Well, that is all for today. See you next time.

The above is the complete practical tutorial on MySQL enterprise-level log management, backup, and recovery. I hope it serves as a useful reference.
