MySQL Log Files and Data Recovery

I. MySQL log types

MySQL keeps several types of log file, each recording a different class of events; from them you can find out what the server has been doing, and for MySQL administration they are indispensable.

1. Error log: records errors during database startup, operation, and shutdown.
2. ISAM log: records all changes to ISAM tables; it is used only for debugging ISAM code.
3. Query log (general log): records client connections and the SQL statements they execute.
4. Update log: records statements that change data. It is deprecated and has been replaced by the binary log.
5. Binary log: records all statements that modify database data.
6. Slow query log: records statements whose execution time exceeds long_query_time, or that do not use an index.
7. Relay log: if you use MySQL replication, the slave server keeps an additional log file called the relay log.

By default, all log files are written to the MySQL data directory. You can force MySQL to close the current log file and open a new one (the server appends an incrementing numeric suffix to the rotated files, such as .000001, .000002). Inside the mysql client, run:

    mysql> flush logs;

Alternatively, use the mysqladmin administrative program from the shell:

    # mysqladmin flush-logs
    # mysqladmin refresh

II. Configuring MySQL logs

Logging can be enabled with options passed to mysqld_safe when starting the database, or in the configuration file. The second method is recommended, and the configuration is simple; only three log types are configured here:

    [mysqld]
    log = /var/log/mysqld_common.log
    log-error = /var/log/mysqld_err.log
    log-bin = /var/log/mysqld_bin.bin

III. Viewing MySQL logs

Querying the logs is simple: most of them are plain text, so you can read them directly with tools such as vim, less, and more. The binary log is the exception and deserves special mention; to view it:
1. First, check whether binary logging is enabled:

    mysql> show variables like 'log_bin';

2. To find out which binary log file is currently being written, and the current position within it:

    mysql> show master status;

3. To read binary log contents you need the mysqlbinlog program; run it with --help to see which options it supports and use them as needed. Note that mysqlbinlog is a shell command, not a statement run inside the mysql client:

    # mysqlbinlog /var/log/mysql/mysql-bin.000040

4. To export a binary log file to a plain text file for viewing:

    # mysqlbinlog /var/log/mysql/mysql-bin.000040 > /var/log/mysql/12740.txt

5. To query a certain time range, use the --start-datetime and --stop-datetime options. If there are many records, redirect the result to a file and read it there:

    # mysqlbinlog --start-datetime='2017-01-01 00:00:00' --stop-datetime='2017-08-08 00:00:00' /var/log/mysql/mysql-bin.000040 > ./tmp.log

IV. Restoring data from the binary log with mysqlbinlog

The mysqlbinlog help output describes its options in detail; this example focuses on the --start-position and --stop-position parameters:

--start-position=N: start reading at the first event whose position in the binary log is equal to or greater than N.
--stop-position=N: stop reading at the first event whose position in the binary log is equal to or greater than N.

1. Set up the experiment: in a test database, create a table, add records, and rotate the log files.
    mysql> create table test(id int auto_increment not null primary key, val int, data varchar(20));
    mysql> insert into test(val, data) values(10, 'liang');
    Query OK, 1 row affected (0.03 sec)
    mysql> insert into test(val, data) values(20, 'jia');
    Query OK, 1 row affected (0.08 sec)
    mysql> insert into test(val, data) values(30, 'hui');
    Query OK, 1 row affected (0.03 sec)
    mysql> flush logs;    -- rotate to the second log file
    Query OK, 0 rows affected (0.09 sec)
    mysql> insert into test(val, data) values(40, 'aaa');
    Query OK, 1 row affected (0.05 sec)
    mysql> insert into test(val, data) values(50, 'bbb');
    Query OK, 1 row affected (0.03 sec)
    mysql> insert into test(val, data) values(60, 'ccc');
    Query OK, 1 row affected (0.03 sec)
    mysql> delete from test where id between 4 and 5;    -- delete records
    Query OK, 2 rows affected (0.05 sec)
    mysql> insert into test(val, data) values(70, 'ddd');
    Query OK, 1 row affected (0.03 sec)
    mysql> flush logs;    -- rotate to the third log file
    Query OK, 0 rows affected (0.11 sec)
    mysql> insert into test(val, data) values(80, 'dddd');
    Query OK, 1 row affected (0.05 sec)
    mysql> insert into test(val, data) values(90, 'eeee');
    Query OK, 1 row affected (0.03 sec)
    mysql> drop table test;    -- drop the table
    Query OK, 0 rows affected (0.05 sec)

2. Restore the data.

First use mysqlbinlog to export each binary log file to a text file for analysis (these are shell commands):

    # mysqlbinlog /var/log/mysql/mysql-bin.000001 > /var/log/mysql/000001.txt
    # mysqlbinlog /var/log/mysql/mysql-bin.000002 > /var/log/mysql/000002.txt
    # mysqlbinlog /var/log/mysql/mysql-bin.000003 > /var/log/mysql/000003.txt

These three commands dump the contents of each log file, i.e. the user's operations, in readable form. Every operation in the first log file needs to be redone, so it can simply be replayed in full:

    # mysqlbinlog /var/log/mysql/mysql-bin.000001 | mysql -u root -p

OK. Next, we need to analyze the second log file.
Why does it need analysis? Because a DELETE was executed partway through it, and since the goal is to restore all the data, we do not want to redo that statement; we have to find a way to skip it. Open the .txt file and read through it:

    /*!40019 SET @@session.max_insert_delayed_threads=0*/;
    /*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
    DELIMITER /*!*/;
    # at 4
    #090427 15:27:56 server id 1  end_log_pos 106   Start: binlog v 4, server v 5.1.32-community-log created 090427 15:27:56
    BINLOG '...'/*!*/;
    # at 106
    #090427 15:28:37 server id 1  end_log_pos 176   Query   thread_id=1  exec_time=0  error_code=0
    use mytest/*!*/;
    SET TIMESTAMP=1240817317/*!*/;
    SET @@session.pseudo_thread_id=1/*!*/;
    SET @@session.foreign_key_checks=1, @@session.sql_auto_is_null=1, @@session.unique_checks=1, @@session.autocommit=1/*!*/;
    SET @@session.sql_mode=1344274432/*!*/;
    SET @@session.auto_increment_increment=1, @@session.auto_increment_offset=1/*!*/;
    /*!\C gbk *//*!*/;
    SET @@session.character_set_client=28, @@session.collation_connection=28, @@session.collation_server=28/*!*/;
    SET @@session.lc_time_names=0/*!*/;
    SET @@session.collation_database=DEFAULT/*!*/;
    BEGIN/*!*/;
    # at 176
    #090427 15:28:37 server id 1  end_log_pos 204   Intvar
    SET INSERT_ID=4/*!*/;
    # at 204
    #090427 15:28:37 server id 1  end_log_pos 312   Query   thread_id=1  exec_time=0  error_code=0
    SET TIMESTAMP=1240817317/*!*/;
    insert into test(val, data) values(40, 'aaa')/*!*/;
    # at 312
    #090427 15:28:37 server id 1  end_log_pos 339   Xid = 12
    COMMIT/*!*/;
    # at 339
    #090427 15:28:46 server id 1  end_log_pos 409   Query   thread_id=1  exec_time=0  error_code=0
    SET TIMESTAMP=1240817326/*!*/;
    BEGIN/*!*/;
    # at 409
    #090427 15:28:46 server id 1  end_log_pos 437   Intvar
    SET INSERT_ID=5/*!
    */;
    # at 437
    #090427 15:28:46 server id 1  end_log_pos 545   Query   thread_id=1  exec_time=0  error_code=0
    SET TIMESTAMP=1240817326/*!*/;
    insert into test(val, data) values(50, 'bbb')/*!*/;
    # at 545
    #090427 15:28:46 server id 1  end_log_pos 572   Xid = 13
    COMMIT/*!*/;
    # at 572
    #090427 15:29:35 server id 1  end_log_pos 642   Query   thread_id=1  exec_time=0  error_code=0
    SET TIMESTAMP=1240817375/*!*/;
    BEGIN/*!*/;
    # at 642
    #090427 15:29:35 server id 1  end_log_pos 670   Intvar
    SET INSERT_ID=6/*!*/;
    # at 670
    #090427 15:29:35 server id 1  end_log_pos 778   Query   thread_id=1  exec_time=0  error_code=0
    SET TIMESTAMP=1240817375/*!*/;
    insert into test(val, data) values(60, 'ccc')/*!*/;
    # at 778
    #090427 15:29:35 server id 1  end_log_pos 805   Xid = 14
    COMMIT/*!*/;
    # at 805
    #090427 15:30:21 server id 1  end_log_pos 875   Query   thread_id=1  exec_time=0  error_code=0
    SET TIMESTAMP=1240817421/*!*/;
    BEGIN/*!*/;
    # at 875
    #090427 15:30:21 server id 1  end_log_pos 981   Query   thread_id=1  exec_time=0  error_code=0
    SET TIMESTAMP=1240817421/*!*/;
    delete from test where id between 4 and 5/*!*/;
    # at 981
    #090427 15:30:21 server id 1  end_log_pos 1008  Xid = 15
    COMMIT/*!*/;
    # at 1008
    #090427 15:30:34 server id 1  end_log_pos 1078  Query   thread_id=1  exec_time=0  error_code=0
    SET TIMESTAMP=1240817434/*!*/;
    BEGIN/*!*/;
    # at 1078
    #090427 15:30:34 server id 1  end_log_pos 1106  Intvar
    SET INSERT_ID=7/*!*/;
    # at 1106
    #090427 15:30:34 server id 1  end_log_pos 1214  Query   thread_id=1  exec_time=0  error_code=0
    SET TIMESTAMP=1240817434/*!*/;
    insert into test(val, data) values(70, 'ddd')/*!*/;
    # at 1214
    #090427 15:30:34 server id 1  end_log_pos 1241  Xid = 16
    COMMIT/*!*/;
    # at 1241
    #090427 15:30:41 server id 1  end_log_pos 1282  Rotate to mysql-bin.000003  pos: 4
    DELIMITER ;
    # End of log file
    ROLLBACK /* added by mysqlbinlog */;
    /*!
    50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;

From this dump we can see that the event containing the DELETE starts at position 875, and the COMMIT that ends its transaction finishes at position 1008. So for the second log file we only need to redo the operations up to position 875, and then those from position 1008 onward; that recovers the data without replaying the delete. Therefore, execute two commands:

    # mysqlbinlog /var/log/mysql/mysql-bin.000002 --stop-position=875 | mysql -u root -p
    # mysqlbinlog /var/log/mysql/mysql-bin.000002 --start-position=1008 | mysql -u root -p

OK, the data from the second log file is now restored. The third log file is handled the same way: find the position of the drop table statement and stop just before it:

    # mysqlbinlog /var/log/mysql/mysql-bin.000003 --stop-position=574 | mysql -u root -p

Now look at the data:

    mysql> select * from test;
    +----+------+-------+
    | id | val  | data  |
    +----+------+-------+
    |  1 |   10 | liang |
    |  2 |   20 | jia   |
    |  3 |   30 | hui   |
    |  4 |   40 | aaa   |
    |  5 |   50 | bbb   |
    |  6 |   60 | ccc   |
    |  7 |   70 | ddd   |
    |  8 |   80 | dddd  |
    |  9 |   90 | eeee  |
    +----+------+-------+
    9 rows in set (0.00 sec)

As you can see, all the data is back.