Task:
A web project is deployed under Tomcat on Linux, with MySQL on the same server. For unknown reasons the database went down; after restarting it, the tables had suddenly vanished and every database was empty!
- Went to the MySQL installation directory.
- Surprisingly, files resembling the missing database's tables were still there under the var/ directory.
The next question is how to get the data back. First, note that we use MySQL's InnoDB engine; here is some background I found on the two main storage engines:
The main difference between the two is that InnoDB supports transactions, foreign keys, and row-level locking, while MyISAM does not, so MyISAM is often considered suitable only for small projects. As a MySQL user I like both, but judged against what my current database platform must deliver — 99.9% stability, easy scalability, and high availability — MyISAM would definitely be my first choice. The reasons:
1. Most of the projects I currently host are read-heavy and write-light, and MyISAM's read performance is stronger than InnoDB's.
2. MyISAM keeps index and data in separate files, and its indexes are compressed, so memory usage improves considerably and more of the index fits in memory. InnoDB binds index and data tightly together and uses no compression, which makes InnoDB tables larger than MyISAM ones.
3. From a platform perspective, every month or two some application developer writes the wrong WHERE scope in an UPDATE and leaves a table unusable. This is where MyISAM's superiority shows: just pull that table's files out of that day's compressed backup, drop them into a database directory, dump them to SQL, load that back into the master, and replay the corresponding binlog to catch up. If it were InnoDB, I doubt it could be done that quickly — and don't tell me to have InnoDB regularly export xxx.sql as a backup mechanism, because the smallest database instance on my platform holds dozens of GB of data.
4. In the application logic I deal with, SELECT COUNT(*) and ORDER BY are the most frequent operations, probably accounting for more than half of all SQL statements, and InnoDB actually ends up locking the table for them. Many people think InnoDB uses row-level locks, but that only holds when the WHERE clause hits the primary key; on a non-primary-key condition it locks the whole table.
5. Application teams often ask me to give them periodic data from certain tables. With MyISAM that is easy: hand them the table's .frm, .MYD, and .MYI files and they can load them into a matching database version themselves. With InnoDB I have to export xxx.sql, because handing over the data files alone is useless to the other side — the dictionary data lives elsewhere.
6. For pure INSERT workloads, InnoDB still does not reach MyISAM's write performance. For index-based UPDATE operations MyISAM may be inferior to InnoDB, but with write concurrency that high, whether the slaves can even keep up is itself a problem; better to solve it with a multi-instance, sharded (split database/split table) architecture.
7. With MyISAM, the MERGE engine can greatly speed up development for application teams: they can run SELECT COUNT(*)-style queries against a MERGE table, which suits large projects with business tables of a certain type (such as logs or survey statistics) totaling hundreds of millions of rows.
Of course InnoDB is not absolutely ruled out. For transactional business projects, such as a simulated stock-trading project, I use InnoDB, and it copes easily with a large number of active users — so personally I like InnoDB too. But from the standpoint of running a database platform, I would still prefer MyISAM.
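The per-table restore described in points 3 and 5 can be sketched in a few lines, since a MyISAM table is exactly three standalone files. This is only a sketch: the backup path, datadir, and the table name posts are all placeholders, not from the original incident.

```shell
BACKUP=/backup/test                  # where copies of the old table files survived
DATADIR=/usr/local/mysql/data/test   # the live database directory
# each MyISAM table is exactly three files: definition, data, indexes
for ext in frm MYD MYI; do
    cp "$BACKUP/posts.$ext" "$DATADIR/"
done
chown -R mysql:mysql "$DATADIR"
# then inside mysql: FLUSH TABLES; CHECK TABLE test.posts;
```

This is exactly why the author finds MyISAM convenient operationally: the restore is a file copy plus a FLUSH TABLES, with no dictionary data living outside the table files.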
In addition, some may say MyISAM cannot withstand heavy write traffic, but I can compensate for that through architecture. Take my existing database platform's capacity: hundreds of TB of data in total across masters and slaves, over 1 billion dynamic-page PVs per day (several large projects that are called through data interfaces are not counted into that PV total, including one large project that, because memcached was not deployed at the start, hit a single database with around 900,000 queries per day), and the load on my database servers averages around 0.5-1 overall.
*MyISAM tables do not support advanced features such as transaction processing; InnoDB tables do.
Transaction processing means, in essence, atomic operations.
For example, take an InnoDB table that supports transactions: a forum where posting earns points. When you publish a post, an INSERT statement inserts the post's content, then an UPDATE statement increases your points. Now suppose something unusual happens: the INSERT succeeds but the UPDATE never executes. You have posted without receiving the corresponding points, which leaves users dissatisfied. With transaction processing, both the INSERT and the UPDATE are executed inside one transaction: only when both statements succeed is the data actually updated and written; if either statement fails, everything rolls back to the initial state and nothing is written. This guarantees that the INSERT and the UPDATE either both execute or neither does. MyISAM tables do not support transaction processing, nor do they support foreign keys (foreign keys need no introduction; if you don't know them, look them up). Also, on write operations (INSERT, UPDATE, DELETE), a MyISAM table locks the whole table while an InnoDB table locks rows. In plain terms: run an UPDATE against a MyISAM table and the entire table is locked; other INSERTs, DELETEs, and UPDATEs are refused until that UPDATE finishes, and only then executed in turn. With row locking, the same UPDATE locks only the record it touches; only writes and updates to that record block and wait for the UPDATE to finish, while writes to other records are unaffected. So when your database has many write and update operations, relatively few queries, or comparatively high data-integrity requirements, choose InnoDB tables.
Choose MyISAM tables when your database is mainly queries, with comparatively few updates and writes and less stringent data-integrity requirements, because MyISAM's query operations are more efficient and faster than InnoDB's.*
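Given these trade-offs, it helps to be able to check which engine a table uses and convert it. A minimal sketch — the names test.posts are made up; note that ALTER TABLE ... ENGINE rebuilds the whole table, so it can take a long time on big tables:

```shell
# Show the current engine of a table, then convert it to InnoDB.
SQL="SHOW TABLE STATUS FROM test LIKE 'posts';
ALTER TABLE test.posts ENGINE=InnoDB;"
mysql -u root -p -e "$SQL"
```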
Recovering data
Initializing the database
./mysql_install_db --user=mysql --datadir=/usr/local/mysql/data --basedir=/usr/local/mysql
Note that once initialization completes, you can log in without a password. With that done, start MySQL and take a look: it runs fine, but the data is not there. So, on to InnoDB. The data directory contains:
*.err
*.pid
mysql-bin.000010, etc.
ibdata1
ib_logfile0
ib_logfile1
test
What are these files? The database whose tables we lost was test, so we will recover the data into test.
- mysql-bin.000010 and the like are MySQL's bin-log (binary log) files.
To clean up the logs, follow these steps:
1. On each slave server, use SHOW SLAVE STATUS to check which log it is reading.
2. Use SHOW MASTER LOGS to obtain the list of logs on the master server.
3. Determine the oldest log among all the slaves. That is the target log. If all the slaves are up to date, it is the last log on the list.
4. Make a backup of all the logs you are about to delete (this step is optional, but recommended).
5. Purge all logs up to, but not including, the target log.
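Steps 1, 3, and 5 above can be sketched like this. The log names are examples only; the real ones come from SHOW SLAVE STATUS on each slave. PURGE BINARY LOGS TO deletes every log listed before the named one, which matches step 5 exactly:

```shell
# The logs each slave is currently reading (step 1); the oldest of
# these is the target log (step 3), and everything before it is safe
# to purge.
slave_logs="mysql-bin.000012
mysql-bin.000010
mysql-bin.000011"
target=$(printf '%s\n' "$slave_logs" | sort | head -n 1)
echo "PURGE BINARY LOGS TO '${target}';"   # step 5, run on the master
```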
Two parameters in my.cnf control the bin-log:
expire_logs_days
The number of days after which binary logs are deleted automatically. The default is 0, meaning "never delete automatically".
sql_log_bin
Controls whether the statements of the current session are recorded in the binlog; turning it off can improve performance by about 1%. All of MySQL's write operations are recorded in the bin-log, which is what makes recovery to the most recent point in time possible. If you delete the bin-logs and the database then breaks, or someone runs a bad statement: with logs, something can still be saved; with no logs and no backup, the game is over.
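For instance, a my.cnf sketch (seven days is an arbitrary choice, not a value from this incident):

```ini
[mysqld]
# purge binary logs older than a week instead of keeping them forever
expire_logs_days = 7
```

Per session, `SET sql_log_bin = 0;` (which requires the SUPER privilege) disables binlogging for that session only — useful for bulk maintenance you do not want replicated.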
ibdata1
Most of the space here is ibdata1, the shared tablespace used by InnoDB. When ibdata1 grows too fast, the usual cause is a long-running, forgotten transaction in MySQL. Resolve it as soon as possible (COMMIT the transaction or KILL the connection), because short of the painfully slow mysqldump-and-reload route you cannot reclaim the wasted disk space. Monitoring the database to catch these problems early is strongly recommended.
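To find the culprit, the information_schema.INNODB_TRX table (available in MySQL 5.1 with the InnoDB plugin and later) lists every open transaction. A sketch:

```shell
# Open InnoDB transactions, oldest first; trx_mysql_thread_id is the
# connection to KILL if the owner cannot COMMIT it.
SQL="SELECT trx_id, trx_started, trx_mysql_thread_id
FROM information_schema.INNODB_TRX
ORDER BY trx_started;"
mysql -u root -p -e "$SQL"
```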
For example, ibdata1 had grown too large — past 300 GB — and a new ibdata file had to be added, so one line was added to the configuration file:
innodb_data_file_path = /old/ibdata1:296664170496m;/new/ibdata2:500G:autoextend:max:1000G
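The size given for an already-existing file has to match it exactly, or mysqld refuses to start. Assuming the long number above is ibdata1's exact size in bytes (what `ls -l` reports), a quick check that it converts to a whole number of megabytes for the innodb_data_file_path entry:

```shell
bytes=296664170496                      # size in bytes from "ls -l /old/ibdata1"
mb=$((bytes / 1048576))                 # 1 MiB = 1048576 bytes
[ $((mb * 1048576)) -eq "$bytes" ] && echo "ibdata1:${mb}M"   # -> ibdata1:282921M
```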
ib_logfile0, ib_logfile1
The transaction log ib_logfile, also called the redo log, defaults to the names ib_logfile0 and ib_logfile1 in MySQL; parameters, adjustable by hand, control how many log files serve the current database. MySQL writes them sequentially and cyclically. Every time a transaction runs, the relevant information is recorded in the transaction log (the physical location or offset of the data modification within the data file).
Purpose: when the system crashes and restarts, transactions are redone from the log; during normal operation, each checkpoint applies the logged transactions to the data files.
This raises a question: in a master/slave environment, if InnoDB has written ib_logfile and the service then dies unexpectedly, will the master recover the transaction from ib_logfile while the binlog was never written, so the slave misses this transaction during replication, leaving master and slave permanently inconsistent?
Redo log write sequence:
1. ib_logfile records the current transaction's updates and marks the transaction prepared (TRX_PREPARE).
2. The bin-log is written.
3. ib_logfile marks the current transaction committed (TRX_COMMIT).
Recovery behavior:
If ib_logfile has already recorded the prepare, the recovery process decides whether to keep the transaction based on whether it exists in the bin-log.
Case 1: the crash happened before the bin-log was written, so the slave will never replicate this transaction. On the master, restart-time recovery finds no commit for this transaction in the log, so it rolls the transaction back.
Case 2: the crash happened after the bin-log was written, so the slave will replicate the transaction; the master also recovers (commits) it, based on the recovery log together with the bin-log.
In summary: if the bin-log write completed, master and slave both complete the transaction normally; if the bin-log was not written, both master and slave roll the transaction back. Either way there is no master/slave inconsistency.
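The recovery rule can be condensed into a tiny decision function — a toy sketch of the logic just described, not InnoDB's actual code:

```shell
# $1 = redo-log state of the transaction (prepared|committed)
# $2 = whether its entry made it into the bin-log (yes|no)
recover() {
    if [ "$1" = "committed" ]; then echo commit
    elif [ "$1" = "prepared" ] && [ "$2" = "yes" ]; then echo commit
    else echo rollback
    fi
}
recover prepared no    # -> rollback (case 1: bin-log missing)
recover prepared yes   # -> commit   (case 2: bin-log present)
```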
Related parameters (global & static):
innodb_log_buffer_size
innodb_log_file_size
innodb_log_files_in_group
innodb_log_group_home_dir
innodb_flush_log_at_trx_commit
innodb_log_buffer_size: the transaction log buffer, settable from 1 MB to 8 MB (default 8 MB). It delays writing the transaction log to disk. Think of the buffer as a funnel through which cached log records drain to disk; when they are written is controlled by innodb_flush_log_at_trx_commit, explained below. A large buffer can hold the log of an entire big transaction in memory, avoiding disk writes before the transaction commits and saving disk I/O.
innodb_log_file_size: controls the size of each transaction log ib_logfile, in the range 5 MB to 4 GB; the combined size of all transaction logs (ib_logfile0 + ib_logfile1 + ...) cannot exceed 4 GB. Larger logs mean fewer checkpoints and less disk I/O, but also slower recovery when the database crashes.
A known issue: changing this parameter makes the configured size no longer match the ib_logfile files that already exist on disk.
Solution: shut the database down cleanly, delete the ib_logfile files, then restart; the files are recreated automatically at the new size.
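The resize procedure above, as a sketch. Paths assume a datadir of /usr/local/mysql/data; renaming instead of deleting keeps an escape hatch in case the shutdown was not actually clean:

```shell
DATADIR=/usr/local/mysql/data
mysqladmin -u root -p shutdown          # must be a clean shutdown
for f in ib_logfile0 ib_logfile1; do
    mv "$DATADIR/$f" "$DATADIR/$f.bak"  # safer than deleting outright
done
# edit innodb_log_file_size in /etc/my.cnf here, then:
mysqld_safe --defaults-file=/etc/my.cnf &
# InnoDB recreates ib_logfile0/1 at the new size on startup
```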
innodb_log_files_in_group: how many transaction log files make up the group; the default is 2.
innodb_log_group_home_dir: the directory holding the transaction logs; if unset, ib_logfile0, ... live in the data directory.
innodb_flush_log_at_trx_commit: controls when the transaction log is written to the file system and flushed to disk. In order of increasing safety: 0, 2, 1.
("Transaction buffer" below means the log buffer, log_buffer.)
0: once per second the transaction buffer is flushed to the file system and the file system is synced to disk; a transaction commit by itself does not trigger a flush of log_buffer to the file system.
2: on every transaction commit the buffered log is flushed to the file system; the file system is synced to disk once per second.
1: on every transaction commit the log is flushed all the way to disk; the safest setting.
Suitable environments:
0: disk I/O capacity is limited and safety matters least; no replication, or replication delay is acceptable — e.g. logging workloads; a mysqld crash loses up to 1 s of transactions.
2: data safety matters but losing a little transaction log is tolerable, and replication delay is acceptable; only an OS crash loses data.
1: data safety requirements are very high and disk I/O capacity can support the business — e.g. top-ups and payments or other sensitive business.
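Put together, a my.cnf sketch for the safety-first case (the sizes are illustrative only, not values from this incident):

```ini
[mysqld]
innodb_flush_log_at_trx_commit = 1   # flush to disk at every commit
innodb_log_buffer_size    = 8M
innodb_log_file_size      = 256M     # bigger = fewer checkpoints, slower crash recovery
innodb_log_files_in_group = 2
```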
- *.err
Error log
- *.pid
MySQL's PID file, which records the PID (process ID) of the currently running mysqld process.
- test
The table files in this folder are only the structure definitions.
Modify my.cnf
The socket setting from initialization is already in place; leave it alone.
Update the relevant settings, then restart:
mysqld_safe --defaults-file=/etc/my.cnf &
Ok!
The database is back! And notice: logging in to MySQL will now prompt you for a password (do not run /usr/local/mysql/bin/mysqladmin -S /tmp/mysql_3308.sock -u root password 'new-password' after initialization), because the recovered data already contains the password information.