Like most relational databases, MySQL relies heavily on its log files. MySQL has several different logs, typically including the error log, the binary log, the general log, the slow query log, and so on. These logs help us locate events that occur inside mysqld, diagnose database performance problems, record the history of data changes, and recover the database. This article walks through these log files one by one.
1. The composition of the MySQL log file system
a. Error log: records problems that occur when mysqld is started, run, or stopped.
b. General log: records established client connections and executed statements.
c. Update log: records statements that change data. This log is no longer used in MySQL 5.1.
d. Binary log: records all statements that change data. Also used for replication.
e. Slow query log: records all queries whose execution time exceeds long_query_time seconds, or queries that do not use indexes.
f. InnoDB log: the InnoDB redo log and undo log.
By default, all logs are created in the mysqld data directory. You can flush the logs to force mysqld to close and reopen the log files (or, in some cases, switch to a new log). The logs are flushed when you execute a FLUSH LOGS statement or run mysqladmin flush-logs or mysqladmin refresh. In a MySQL replication setup, the slave replication server maintains additional log files, called relay logs.
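If you are unsure which logs are enabled on a given server, or where they are written, you can query the server variables from the mysql client. A minimal sketch (these are standard MySQL variables; the output depends on your own configuration):

mysql> SHOW VARIABLES LIKE 'datadir';          -- the data directory where logs are created by default
mysql> SHOW VARIABLES LIKE 'log_error';        -- error log location
mysql> SHOW VARIABLES LIKE 'general_log%';     -- general log switch and file name
mysql> SHOW VARIABLES LIKE 'slow_query_log%';  -- slow query log switch and file name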
2. Error log
The error log is a text file. It records detailed information about each startup and shutdown of MySQL Server, as well as all of the more serious warnings and error messages produced while the server is running. You can use the --log-error[=file_name] option to enable the MySQL error log and specify where mysqld saves the error log file. If --log-error is given without a file_name value, mysqld uses the error log name host_name.err and writes the log file in the data directory. When mysqld is writing the error log to a file, the server closes and reopens the log file when FLUSH LOGS or mysqladmin flush-logs is executed. It is recommended to manually rename the error log file before flushing; after the flush, the mysql service opens a new file with the original file name. The following is an error log backup procedure:
shell> mv host_name.err host_name.err-old
shell> mysqladmin flush-logs
shell> mv host_name.err-old backup-directory
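The error log location can also be set permanently in the configuration file rather than on the command line. A minimal my.cnf sketch, using a hypothetical path that you would adjust to your own layout:

[mysqld]
# write the error log to an explicit location instead of the default host_name.err in the data directory
log-error = /var/log/mysql/mysql-error.log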
3. Logs in InnoDB
A tour of the logs in the MySQL InnoDB storage engine
1 – Undo Log
The Undo Log exists to provide transaction atomicity. In the MySQL InnoDB storage engine, the Undo Log is also used to implement multi-version concurrency control (MVCC).
Atomicity of a transaction: either all operations in the transaction are completed, or none of them are; there is no partial completion. If an error occurs during execution, the database rolls back to the state before the transaction started, as if the transaction had never been executed.
Principle: the principle of the Undo Log is very simple. To satisfy transaction atomicity, before any data is modified, the old data is first backed up to a place (the place where this backup is stored is the Undo Log). Only then is the data modified. If an error occurs, or the user executes a ROLLBACK statement, the system can use the backup in the Undo Log to restore the data to the state before the transaction started.
In addition to guaranteeing the atomicity of transactions, the Undo Log can also be used to help achieve the durability of transactions.
Durability of a transaction: once a transaction is completed, all modifications it made to the database are persisted in the database. To guarantee durability, the database system records the modified data completely to persistent storage.
The simplified process of using the Undo Log to achieve atomic and durable transactions. Assume there are two data items A and B, with values 1 and 2 respectively (a concrete SQL sketch follows this list):
A. Transaction starts.
B. Record A=1 to the undo log.
C. Modify A=3.
D. Record B=2 to the undo log.
E. Modify B=4.
F. Write the undo log to disk.
G. Write the data to disk.
H. Transaction commits.
There is an implicit prerequisite: data is first read into memory, the data in memory is then modified, and finally the data is written back to disk.
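As a concrete illustration of the A/B example above, here is a minimal SQL sketch of the same two updates executed as one transaction. The table name t, the id column, and the a/b columns are hypothetical, introduced only for this example:

mysql> START TRANSACTION;
mysql> UPDATE t SET a = 3 WHERE id = 1;  -- the old value A=1 goes into the undo log
mysql> UPDATE t SET b = 4 WHERE id = 1;  -- the old value B=2 goes into the undo log
mysql> COMMIT;                           -- ROLLBACK instead would restore A=1 and B=2 from the undo log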
Atomicity and durability can be guaranteed at the same time because of the following characteristics:
A. The undo log is recorded before the data is updated.
B. To guarantee durability, the data must be written to disk before the transaction commits. As long as the transaction commits successfully, the data has necessarily been persisted.
C. The undo log must be persisted to disk before the data. If the system crashes between G and H, the undo log is complete and can be used to roll back the transaction.
D. If the system crashes anywhere between A and F, the data has not yet been persisted to disk, so the data on disk remains in the state it was in before the transaction started.
Defect: before each transaction commits, both the data and the Undo Log are written to disk, which causes a lot of disk I/O, so performance is very low.
If the data could be cached in memory for a while, the I/O could be reduced and performance improved. But that would sacrifice the durability of the transaction. Therefore, another mechanism is introduced to achieve durability: the Redo Log.
2 – Redo Log
Principle: in contrast to the Undo Log, the Redo Log records a copy of the new data. Before the transaction commits, only the Redo Log needs to be persisted; the data itself does not. If the system crashes before the data has been persisted, the Redo Log has already been persisted, and the system can restore all data to its latest state from the contents of the Redo Log.
The simplified process of an Undo + Redo transaction. Assume there are two data items A and B, with values 1 and 2 respectively:
A. The transaction starts.
B. Record A=1 to the undo log.
C. Modify A=3.
D. Record A=3 to the redo log.
E. Record B=2 to the undo log.
F. Modify B=4.
G. Record B=4 to the redo log.
H. Write the redo log to disk.
I. Transaction commits.
The undo log saves the data as it was before the modification and is kept in memory; when rolling back, its contents are read back, which is how atomicity is achieved. The redo log saves the modified data (a backup of the new data; the undo log content is also backed up into the redo log), and it is written to disk when the transaction commits, which is how durability is guaranteed.
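In practice, the redo log behavior described above is controlled by a few InnoDB server options. A minimal my.cnf sketch with illustrative values close to the MySQL 5.6 defaults, not tuning advice from this article:

[mysqld]
innodb_log_file_size = 48M          # size of each redo log file
innodb_log_files_in_group = 2       # number of redo log files in the group
innodb_flush_log_at_trx_commit = 1  # flush the redo log to disk at every commit for full durability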
4. Slow query log
Overview: the speed of database queries is a major factor in project performance. For the database, beyond optimizing SQL, it is even more important to first find the SQL that needs optimizing. How to find inefficient SQL is the main purpose of this section.
The MySQL database has a "slow query log" feature that records SQL statements whose query time exceeds a configured value. It greatly helps us locate problems quickly so that we can apply the right fix. As for what counts as an acceptable query time, every project and business has different requirements: traditional enterprise software may tolerate query times above a certain value, while on a high-traffic Internet project or website the same threshold would likely be treated as a bug, and might even be escalated to a functional defect.
To avoid misleading readers, I should state explicitly that the discussion here is limited to 64-bit Windows + MySQL 5.6. I have not tried other platforms, database types, or versions, so I will not cover them here.
Setting up the log function. The slow query log mainly involves the following parameters (a sketch of enabling them at runtime follows this list):
slow_query_log: whether to enable the slow query log feature (required).
long_query_time: queries that exceed this value are treated as slow queries and recorded in the slow query log file (required).
log-slow-queries: the slow query log file (optional); if not filled in, a [hostname]-slow.log file is created automatically under \data\.
In other words, only when the above three conditions are set can the "slow query" feature be turned on or off correctly.
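As an illustration, the feature can also be toggled at runtime from the mysql client without restarting the server. A minimal sketch, assuming a 2-second threshold (the threshold is illustrative, not a recommendation from the original text):

mysql> SET GLOBAL slow_query_log = 'ON';
mysql> SET GLOBAL long_query_time = 2;
mysql> SHOW VARIABLES LIKE 'slow_query_log%';
mysql> SHOW VARIABLES LIKE 'long_query_time';

Note that settings changed with SET GLOBAL are lost on restart; put them in my.cnf to make them permanent.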
5. Binary log
The basis of master-slave replication: the binlog and the relay log
What is MySQL master-slave replication? Simply put, it ensures that the data of the master SQL server (Master) and the slave SQL server (Slave) stay consistent. After data is inserted into the Master, the Slave automatically synchronizes the changed data from the Master (with a certain delay). Keeping the data consistent in this way is master-slave replication.
Replication methods: MySQL 5.6 supports two methods of master-slave replication: log-based (binlog) and GTID-based (global transaction identifiers). This article only covers master-slave configuration based on the binlog.
Replication principle (a configuration sketch for the slave follows these steps):
1. The Master records data changes to the binary log, the file specified by the log-bin option in the configuration file. These records are called binary log events.
2. The Slave reads the binary log events from the Master through its I/O thread and writes them to its relay log.
3. The Slave redoes the events in the relay log, executing the event information in the relay log locally one by one, so that the changes are reflected in its own data (data replay).
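For orientation, this is roughly what pointing a slave at a master looks like in a binlog-based setup. This is a minimal sketch only: the host, user, password, binlog file name, and position are all placeholders and must be taken from your own master (for example, from SHOW MASTER STATUS):

mysql> CHANGE MASTER TO
    ->   MASTER_HOST='192.168.1.10',
    ->   MASTER_USER='repl',
    ->   MASTER_PASSWORD='repl_password',
    ->   MASTER_LOG_FILE='dadong-bin.000001',
    ->   MASTER_LOG_POS=120;
mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G    -- check that Slave_IO_Running and Slave_SQL_Running are both Yes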
1. What is the binlog? The binlog is a binary-format file that records the SQL statements with which users update the database. For example, SQL statements that modify database tables or modify data are recorded in the binlog, while queries against database tables and other read-only statements are not recorded.
By default, the binlog is in binary format, so it cannot be viewed with text tools (for example, cat, vi, and so on); instead, use mysqlbinlog to analyze and view it.
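For example, a binlog file can be dumped to readable statements with the mysqlbinlog tool. A minimal sketch; the file name below follows the dadong-bin naming used later in this article and is illustrative only, as are the datetime filters:

shell> mysqlbinlog /application/mysql/logs/dadong-bin.000001
shell> mysqlbinlog --start-datetime="2019-01-01 00:00:00" --stop-datetime="2019-01-01 12:00:00" /application/mysql/logs/dadong-bin.000001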
2. The role of the binlog. When data is written to the database, the updating SQL statements are also written to the corresponding binlog file at the same time; this is the binlog file mentioned above. A backup made with mysqldump is only a full backup of the data as of a certain point in time, so if the database server suddenly fails after that backup, the binlog is what we turn to.
Its main functions are master-slave replication of the database and incremental recovery of data.
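To make the "full backup plus binlog" idea concrete, here is a minimal recovery sketch. The file names are placeholders; -F makes mysqldump flush to a new binlog so there is a clean cut-over point between the full backup and the subsequent binlogs:

shell> mysqldump -u root -p -F --master-data=2 --all-databases > full_backup.sql
shell> mysql -u root -p < full_backup.sql                                          # restore the full backup
shell> mysqlbinlog /application/mysql/logs/dadong-bin.000002 | mysql -u root -p    # replay changes made after the backup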
1. What is the binlog? It is the binary log that records additions, deletions, and changes to the database, but not queries.
2. Function: it is used for data synchronization.
3. How to enable the binlog function: in the MySQL configuration file my.cnf, add the log_bin parameter to enable the binlog; you can also specify the binlog file name by assigning it a value, as in the example below:
[root@DB02 ~]# grep log_bin /etc/my.cnf
log_bin = /application/mysql/logs/dadong-bin
log_bin
[root@DB02 ~]#
Tip: you can also name the log with a line such as "log_bin = /application/mysql/logs/dadong-bin". Why refresh the binlog even though the files already exist? To find the critical point between the full backup data and the binlog files when recovering.
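Once the binlog is enabled and the server restarted, you can verify it from the mysql client. A minimal sketch (the output values depend on your own server):

mysql> SHOW VARIABLES LIKE 'log_bin';   -- should report ON
mysql> SHOW MASTER STATUS;              -- current binlog file and position (the recovery critical point)
mysql> SHOW BINARY LOGS;                -- list all binlog files the server knows about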
To sum up
The binlog and the relay log both play an important role in the MySQL database. The relay log exists only on the MySQL slave: its role is to hold the binlog events that the slave's I/O thread receives from the master, waiting for the slave's SQL thread to read and apply them, which keeps the master and slave synchronized. The binlog, by contrast, can exist on both the master and the slave: it records the SQL statements that change (or might change) the data and saves them on disk in binary form, so you can use the binlog to back up and restore the database in real time.