MySQL database optimization techniques: Database optimization is a broad topic. It mainly includes:
1. Reasonable table design (following the three normal forms, 3NF).
2. Adding appropriate indexes (the four common index types are the normal index, primary key index, unique index, and full-text index; there are also spatial and composite indexes).
3. Table splitting (horizontal splitting, vertical splitting).
4. Read/write separation.
5. Stored procedures (an ordinary SQL statement must be compiled on every execution, while a stored procedure is compiled only once; stored procedures also encourage modular programming).
6. MySQL configuration tuning (configure the maximum number of connections in my.ini, resize the caches).
7. Upgrading the MySQL server hardware.
8. Periodically purging unneeded data and defragmenting tables regularly.
9. SQL statement optimization.
***************************************************************************************
Design of database tables:
What kind of table satisfies 3NF? It must first satisfy 1NF, then 2NF, and finally 3NF. The three normal forms:
1. Every column of the table is atomic and cannot be decomposed further; as long as the database is relational, 1NF is satisfied automatically.
2. Every record in the table has a unique primary key.
3. The table contains no redundant data (if a value can be derived from other fields, it should not be stored in a separate field).
Denormalization (appropriate redundancy): a database with no redundancy at all is not necessarily the best database. Sometimes, to improve query efficiency, it is worth lowering the normalization standard and deliberately keeping some redundant data. Lowering the normal form belongs to the physical data model design stage, and it means adding fields and allowing redundancy (for a 1:N relationship, a field can be added to the "1"-side table to improve efficiency). What counts as appropriate redundancy? For example, a grade has multiple classes and a class has multiple students; a field can be added to the grade table that holds all the students of that grade, even though this could be derived by joining the class and student tables.
***************************************************************************************
SQL statement optimization:
How do you quickly locate a slow statement in a large project (locating slow queries)?
1. Query the running state of the MySQL database, e.g. how long MySQL has been running, how many SELECT/UPDATE/DELETE statements have been executed, and the current number of connections:
show status;
Uptime: show status like 'Uptime';
Number of SELECT/UPDATE/DELETE/INSERT statements executed: show status like 'Com_select'; show status like 'Com_update'; show status like 'Com_delete'; show status like 'Com_insert';
show [session|global] status like '...';   // defaults to the session level (current connection); global covers the entire MySQL server
Connection attempts to the MySQL server: show status like 'Connections';
Number of slow queries: show status like 'Slow_queries';   // counts slow queries of all kinds: SELECT as well as INSERT, UPDATE, DELETE
2. How to locate a slow query (by default, MySQL treats a query that takes longer than 10 seconds as slow; the threshold can be changed):
Show the slow-query threshold: show variables like 'long_query_time';
Change the slow-query threshold: set long_query_time=1;
(delimiter $$ can be used to define a new statement terminator.)
If a statement runs longer than the slow-query threshold, it is counted as a slow query. By default MySQL does not log slow queries; logging has to be enabled when the server is started:
bin\mysqld.exe --safe-mode --slow-query-log        // MySQL 5.5
bin\mysqld.exe --log-slow-queries=d:/abc.log       // MySQL 5.0
This requires shutting the MySQL service down and starting it again from the command line (closing the entire service, not just the current window). The directory where the log file is placed can be found from the datadir setting in my.ini. From then on, slow queries are written to the log file.
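On MySQL 5.1 and later the slow-query log can also be switched on at runtime instead of at startup. A minimal sketch, assuming a 5.1+ server; the log file path is just an example:
set global slow_query_log = 1;                        -- turn the slow-query log on
set global slow_query_log_file = 'd:/abc-slow.log';   -- where to write it (example path)
set global long_query_time = 1;                       -- anything over 1 second counts as slow
show status like 'Slow_queries';                      -- how many slow queries have been recorded so far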
Common SQL optimizations:
Bulk-inserting data (a sketch follows at the end of this section). For MyISAM:
1. alter table table_name disable keys;
2. load the data (LOAD DATA or a batch of INSERT statements);
3. alter table table_name enable keys;
For InnoDB:
1. Sort the data to be imported by primary key.
2. set unique_checks=0; to turn off uniqueness checks.
3. set autocommit=0; to turn off auto-commit.
Optimizing GROUP BY: by default, MySQL sorts the result of GROUP BY col1, col2, as if the query had specified ORDER BY col1, col2. If the query uses GROUP BY but you want to avoid the cost of sorting the result, add ORDER BY NULL to suppress the sort.
In some cases a join can replace a subquery, because with a JOIN MySQL does not need to create a temporary table in memory.
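A minimal sketch of the two bulk-load patterns and the ORDER BY NULL trick described above; the emp table, the deptno column, and the data file path are assumptions made for illustration:
-- MyISAM pattern: suspend index maintenance during the load, rebuild the indexes once afterwards
alter table emp disable keys;
load data infile 'd:/emp.txt' into table emp;   -- or a batch of INSERT statements
alter table emp enable keys;
-- InnoDB pattern (rows pre-sorted by primary key before loading)
set unique_checks = 0;
set autocommit = 0;
-- ... run the INSERT statements here ...
commit;
set unique_checks = 1;
set autocommit = 1;
-- GROUP BY without the implicit sort
select deptno, count(*) from emp group by deptno order by null;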
Adding indexes: an index is the cheapest way to improve database performance. There is no need to add memory, change the program, or rewrite the SQL statement; with the right CREATE INDEX ... the query may run hundreds or even thousands of times faster. The price is that inserts, updates, and deletes (the write operations) become slower, because they now generate a large amount of extra I/O to maintain the index.
An index is a structure that sorts the values of one or more columns of a database table. It works like the table of contents of a book: you can jump straight to what you want by the page number listed there. When a table has a large number of records, there are two ways to query it. The first is a full table scan: read every record one by one, compare each against the query condition, and return the ones that match; this consumes a lot of database time and causes a large number of disk I/O operations. The second is to create an index on the table, find the index entries that match the query condition, and then use the ROWID (the "page number") stored in the index to fetch the corresponding records from the table; see the sketch below.
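A minimal sketch of the difference using EXPLAIN; the emp table, the empno column (assumed to have no index yet), and the index name are assumptions for illustration:
explain select * from emp where empno = 1\G   -- before: type = ALL, a full table scan
create index idx_empno on emp (empno);
explain select * from emp where empno = 1\G   -- after: typically type = ref, key = idx_empno, an index lookup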
The four main index types (normal index, primary key index, unique index, full-text index):
1. Adding indexes:
1.1. Primary key index: when a column of a table is declared as the primary key, that column becomes a primary key index.
create table aaa (id int unsigned primary key auto_increment, name varchar(32) not null default '');
A primary key index can also be added after the table has been created: alter table [table name] add primary key ([column name]);
1.2. Normal index: usually the table is created first and the normal index is added afterwards.
create table ccc (id int unsigned, name varchar(32));
create index [index name] on [table name] ([column name]);
1.3. Full-text index: full-text indexes are mainly used for text retrieval, e.g. articles. A full-text index only works with the MyISAM storage engine and only for English text (for Chinese, use Sphinx (coreseek)). Stop words: because the number of words in a body of text is effectively unlimited, no index entries are created for certain very common words and characters; these are called stop words.
create table articles (
  id int unsigned auto_increment not null primary key,
  title varchar(200),
  body text,
  fulltext (title, body)
) engine=myisam charset utf8;
insert into articles (title, body) values
  ('MySQL Tutorial', 'DBMS stands for DataBase ...'),
  ('How To Use MySQL Well', 'After you went through a ...'),
  ('Optimizing MySQL', 'In this tutorial we will show ...'),
  ('1001 MySQL Tricks', '1. Never run mysqld as root. 2. ...'),
  ('MySQL vs. YourSQL', 'In the following database comparison ...'),
  ('MySQL Security', 'When configured properly, MySQL ...');
How to use the full-text index:
select * from articles where body like '%mysql%';                        -- wrong usage: the full-text index is not used
select * from articles where match (title, body) against ('database');   -- correct usage
With EXPLAIN you can analyse how MySQL would execute your SQL statement (explain select * from articles where body like '%mysql%'\G shows the execution plan). EXPLAIN tells us which execution plan MySQL would use to run the query; the statement itself is not actually executed.
1.4. Unique index: when a column of a table is given a unique constraint, that column becomes a unique index.
create table ddd (id int primary key auto_increment, name varchar(32) unique);
Here the name column is a unique index. A unique index can also be created after the table exists:
create table eee (id int primary key auto_increment, name varchar(32));
create unique index [index name] on [table name] ([column name]);
A unique column may be NULL (NULL is not the same as ''), and it may even contain several NULLs, but any concrete value (including '') must not repeat; a primary key column can be neither NULL nor duplicated (see the sketch after this list).
2. Querying indexes: desc [table name] (but this does not show index names); show index from [table name]; show keys from [table name];
3. Deleting indexes: alter table [table name] drop index [index name]; alter table [table name] drop primary key;   // drops the primary key index
4. Modifying an index: drop it first, then recreate it.
Why is a lookup faster once an index exists? The index file is built with a binary-tree algorithm (BTREE), so the time complexity of a lookup is on the order of log2(N). Different storage engines use different algorithms to build their index files.
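A small sketch of the NULL rule for unique indexes, reusing the ddd table from 1.4 above (the inserted values are made up):
insert into ddd (name) values (NULL);    -- ok
insert into ddd (name) values (NULL);    -- also ok: a unique column may hold several NULLs
insert into ddd (name) values ('aaa');   -- ok
insert into ddd (name) values ('aaa');   -- fails: duplicate entry 'aaa'
insert into ddd (name) values ('');      -- ok
insert into ddd (name) values ('');      -- fails: '' is a concrete value and may not repeat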
Common MySQL storage engines and the index types they allow:
+----------------+---------------------+
| Storage engine | Allowed index types |
+----------------+---------------------+
| MyISAM         | BTREE               |
| InnoDB         | BTREE               |
| Memory/Heap    | BTREE, HASH         |
+----------------+---------------------+
The cost of an index:
1. Disk space: the index file takes up disk space.
2. Impact on DML statements (update/delete/insert): the index file has to be maintained at the same time.
Which columns should be indexed:
A field frequently used as a query condition should be indexed, e.g. select * from emp where empno = 1.
A field with very poor selectivity should not be indexed on its own, even if it is frequently used as a query condition, e.g. select * from emp where sex = 'man'.
A field that is updated very frequently is not a good candidate for an index, e.g. select * from emp where logincount = 1.
A field that never appears in a WHERE clause should not be indexed.
In summary, a field deserves an index when:
A: it is regularly used in WHERE conditions;
B: its content is not limited to just a handful of distinct values (unlike, say, a sex column);
C: its content does not change frequently.
Use of indexes: the key condition for an index to be used when querying is that it appears in the query condition. The index may be used in the following cases (a sketch follows at the end of this section):
1. For a multi-column (composite) index, the index is generally used as long as the query condition uses the leftmost column. alter table dept add index my_ind (dname, loc); here dname is the left column and loc the right column; explain select * from dept where loc='aaa'\G will not use the index.
2. For LIKE queries, '%aaa' will not use the index while 'aaa%' will. (In a fuzzy query the keyword must not be preceded by % or _; if a leading wildcard is unavoidable, consider a full-text index such as Sphinx.) explain select * from dept where dname like '%aaa'\G
The index will not be used in the following cases:
1. If the condition contains OR, the index is not used even if some of the OR'ed conditions are indexed (for the index to be used, every field involved must be indexed; it is best to avoid the OR keyword).
2. For a multi-column index, if the leftmost part is not used, the index is not used.
3. A LIKE query whose pattern starts with %.
4. If the column type is a string, the value in the condition must be quoted; otherwise the index is not used (when inserting, string values must also be quoted).
5. If MySQL estimates that a full table scan is faster than using the index, the index is not used.
Checking how indexes are being used: show status like 'Handler_read%';
Handler_read_key: the higher this value, the better; it is the number of times a row was read via an index.
Handler_read_rnd_next: the higher this value, the less efficient the queries are.
Choosing the right storage engine:
MyISAM: no strong transactional requirements, and the workload is mainly queries and inserts (e.g. a forum's post and reply tables).
InnoDB: strong transactional requirements, storing important data (e.g. order tables, account tables).
Memory: data that changes very frequently, does not need to be persisted, and is queried and modified frequently.
The main differences between MyISAM and InnoDB are transaction support, query and insert speed, full-text index support, the locking mechanism, and foreign key support. If the storage engine is MyISAM, the tables must be defragmented periodically.
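A hedged sketch of the leftmost-prefix and LIKE rules above, reusing the dept table and the my_ind index; the actual EXPLAIN output depends on the data and the optimizer's cost estimates:
alter table dept add index my_ind (dname, loc);
explain select * from dept where dname = 'aaa'\G       -- leftmost column present: my_ind can be used
explain select * from dept where loc = 'aaa'\G         -- leftmost column missing: my_ind is not used
explain select * from dept where dname like 'aaa%'\G   -- prefix pattern: my_ind can be used
explain select * from dept where dname like '%aaa'\G   -- leading %: my_ind is not used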
create table test100 (id int unsigned, name varchar(32)) engine=myisam;   // creates a table that uses the MyISAM storage engine
MyISAM tables should be defragmented regularly:
optimize table [table name];   // defragments a table
Choose the appropriate data type: in applications that require high precision, use fixed-point numbers to store values so that the results stay exact: use DECIMAL, not FLOAT.
***************************************************************************************
Complete a database backup regularly:
1. Manually backing up a database/table:
mysqldump -u [username] -p[password] [database] > [file path]   // entered at the cmd console; backs up the whole database
For example: mysqldump -u root -pabc123 temp > d:\temp.bak   // the file name and extension can be anything (note: no space between -p and abc123)
mysqldump -u [username] -p[password] [database] [table 1] [table 2] [table 3] ... > [file path]   // entered at the cmd console; backs up only the listed tables
2. Restoring data from a backup file: source d:\dept.bak   // entered at the MySQL console
3. Using a timer to back up automatically (write the backup command into a .bat batch file and run it on a schedule via the Task Scheduler):
---- mytask.bat ----
[absolute path to the bin directory of the MySQL installation (wrap the path in double quotes if it contains spaces)]\mysqldump -u root -pabc123 temp > d:\temp.bak
4. Run the batch file on a schedule: Control Panel -> Task Scheduler.
By default the batch file overwrites the same file every time; how could each backup be saved under a date-based name instead? (A sketch follows below.)
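One possible answer to that question, as a rough sketch: the %date% variable is expanded according to the Windows regional settings, so the substring offsets below are an assumption that may need adjusting, and the installation path, credentials, and backup directory are placeholders:
:: mytask.bat (hedged sketch: locale-dependent %date% format; made-up paths and password)
set backup_file=D:\backup\temp_%date:~0,4%-%date:~5,2%-%date:~8,2%.bak
"C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqldump.exe" -u root -pabc123 temp > "%backup_file%"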
***************************************************************************************
Partitioning (splitting) tables:
When a table grows huge and holds a massive amount of data, it can be split into several smaller tables.
Horizontal splitting (the structure of the table does not change): for example, a user table can be split into three tables, and the value of the primary key id % 3 decides which table a user lives in.
1. First create the three tables user0/user1/user2.
2. Then create a uuid table whose only job is to hand out auto-increment IDs.
create table user0 (id int unsigned primary key, name varchar(32) not null default '', pwd varchar(32) not null default '') engine=myisam charset utf8;
create table user1 (id int unsigned primary key, name varchar(32) not null default '', pwd varchar(32) not null default '') engine=myisam charset utf8;
create table user2 (id int unsigned primary key, name varchar(32) not null default '', pwd varchar(32) not null default '') engine=myisam charset utf8;
create table uuid (id int unsigned primary key auto_increment) engine=myisam charset utf8;
3. When adding a user, the ID decides which table the user goes into (see the sketch at the end of this section).
Vertical splitting (the structure of the table changes): fields of the table that hold a large amount of data but are not queried frequently are moved into a separate table, which improves efficiency.
The general principle when choosing field types is to prefer small over large: if a type that uses fewer bytes will do, do not use a larger one. For the primary key, for example, an auto-increment integer is recommended; it saves space, and space is efficiency. Locating a record by a 4-byte key versus a 32-byte key makes the difference in speed obvious, and when several tables are joined the effect is even more pronounced. Special note: use an ID key that carries no business meaning.
***************************************************************************************
Configuration optimizations for MySQL:
The most important resource is memory. Since we mainly use the InnoDB engine, the following two parameters are usually set large:
innodb_additional_mem_pool_size = 64M
innodb_buffer_pool_size = 1G
For MyISAM, adjust key_buffer_size instead. Of course, which parameters to adjust should be decided by looking at the server state; the SHOW STATUS statement shows the current state. In my.ini you can also change the port (3306), the default storage engine, and the maximum number of connections.
***************************************************************************************
Read/write separation:
If the database is under heavy load and a single machine cannot cope, MySQL replication can keep several machines in sync and spread the load (load balancing). For example, the master handles writes while slave1 through slave3 handle SELECTs, so each server carries much less load. Working this way requires special design in the application: writes go to the master and all reads go to the slaves, which puts an extra burden on development. There is now middleware that acts as a proxy so that which database a read or write goes to is transparent to the program; the official tool is mysql-proxy, but it is still an alpha release.
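Returning to the horizontal split above, a minimal sketch of the id % 3 routing; the modulo decision is made by the application, and the sample user data is made up:
insert into uuid values (null);   -- take the next global id from the uuid table
select last_insert_id();          -- suppose it returns 4
-- 4 % 3 = 1, so this user is stored in user1:
insert into user1 (id, name, pwd) values (4, 'tom', '123456');
-- to read the user back, apply the same rule (4 % 3 = 1 -> user1):
select * from user1 where id = 4;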
***************************************************************************************
Incremental backup (not available in 5.0; available in 5.1):
MySQL automatically records every operation the user performs on the database, in binary form, in a log file; when the user wants to restore, those log files can be replayed. An incremental backup records insert, delete, update, and table-creation operations, but not queries (queries do not change the database). Each record contains the SQL statement that was run, the time of the operation, and its position.
Steps:
1. Configure my.ini (or my.cnf) to enable binary logging (i.e. incremental backup):
#The TCP/IP Port the MySQL Server will listen on
port=3307
# specify the path for the binary log files
log-bin=d:/binlog/mylog
2. Start MySQL and the following files appear:
d:/binlog/mylog.index     // the index file, which lists the binary log files
d:/binlog/mylog.000001    // the actual binary log file
The mysqlbinlog program can display the contents of a binary log file:
mysqlbinlog [log file path]   // run from the bin directory of the MySQL installation, in cmd
Restoring the database from the binary log:
Restore up to a point in time: mysqlbinlog --stop-datetime="2013-01-14 18:20:21" d:/binlog/shunping.000001 | mysql -uroot -p
Restore up to a position: mysqlbinlog --stop-position="..." d:/binlog/shunping.000001 | mysql -uroot -p
Replay a time range: mysqlbinlog --start-datetime="2013-01-14 18:20:21" --stop-datetime="2013-01-14 18:40:21" d:/binlog/shunping.000001 | mysql -uroot -p
Replay a position range: mysqlbinlog --start-position="..." --stop-position="2345" d:/binlog/shunping.000001 | mysql -uroot -p
As time goes by the binary logs grow larger and larger, so they need regular cleanup:
1. reset master deletes all binary logs listed in the index file, resets the binary log index file to empty, and creates a new binary log file.
2. PURGE {MASTER | BINARY} LOGS TO 'log_name' and PURGE {MASTER | BINARY} LOGS BEFORE 'date' delete all binary logs listed in the log index that come before the given log or date. Those logs are also removed from the index file, so the given log becomes the first one.
3. Set expire_logs_days under [mysqld] in my.ini. This parameter sets the number of days after which a log expires; expired logs are deleted automatically.
Combining full and incremental backups in practice: make a full backup (mysqldump) every Monday, keep incremental backups enabled, and set the expiration time to be greater than or equal to the full-backup cycle. If the database breaks down, first restore the full backup and then apply the incremental recovery; if a mistaken operation was made, the incremental log can be inspected and replayed up to that point (you need to look at the details of the log file).
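A small sketch of the cleanup statements mentioned above; the log file name and the date are made-up examples:
show binary logs;                                  -- list the current binary log files
purge binary logs to 'mylog.000010';               -- delete every log before mylog.000010
purge binary logs before '2013-01-14 00:00:00';    -- or delete every log older than a date
***************************************************************************************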