- Index Design basis
Count the SQL statements that operate on each data table.
Index the fields that appear in WHERE, ORDER BY, GROUP BY, JOIN conditions, and so on.
Principle:
High-frequency SQL statements
SQL statements with long execution times
SQL statements that are important to the business logic
Which fields are not suitable for indexing?
Fields with relatively monotonous content (low cardinality, e.g. a gender column) are not suitable for indexing
- Prefix index
Only the first few characters of a field are indexed.
Benefits: the index takes less space and operations are faster
Index the first n characters, choosing n so that the prefix (almost) uniquely identifies the current record
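As a sketch of the idea above (the `users` table and `email` column are illustrative), the selectivity of a prefix can be checked before choosing n, and the prefix index created afterwards:

```sql
-- Hypothetical table: users(email VARCHAR(100), ...).
-- How selective are the first 8 characters? A ratio close to 1
-- means the prefix nearly uniquely identifies each record.
SELECT COUNT(DISTINCT LEFT(email, 8)) / COUNT(*) AS selectivity
FROM users;

-- Index only the first 8 characters of the field:
ALTER TABLE users ADD INDEX idx_email_prefix (email(8));
```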
- Full-Text Indexing
MySQL 5.5: only the MyISAM storage engine supports it
MySQL 5.6: both the MyISAM and InnoDB storage engines support it
FULLTEXT index_name (field, field)
SELECT * FROM table_name WHERE field LIKE '%content%' OR field LIKE '%content%';
SELECT * FROM table_name WHERE MATCH (field, field) AGAINST ('content1 content2');
MATCH (field, field) AGAINST ('content1 content2')
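A minimal sketch of creating and querying a full-text index (the `articles` table and its columns are assumptions):

```sql
-- Hypothetical table with a FULLTEXT index over title and body.
-- An InnoDB FULLTEXT index requires MySQL 5.6 or later.
CREATE TABLE articles (
    id    INT AUTO_INCREMENT PRIMARY KEY,
    title VARCHAR(200),
    body  TEXT,
    FULLTEXT idx_ft (title, body)
) ENGINE = InnoDB;

-- Full-text search instead of LIKE '%...%',
-- which cannot use an ordinary B+tree index:
SELECT * FROM articles
WHERE MATCH (title, body) AGAINST ('mysql optimization');
```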
- Index structure
MySQL indexes use a B+tree structure
An index is a data structure (with matching algorithms) that guarantees records can be located very quickly
Non-clustered (MyISAM)
The leaf-node key (the indexed field content) points to the physical address of the record
Clustered (InnoDB)
Primary-key index: the leaf node stores the entire record alongside the key
Non-primary (unique/normal/full-text) index: the leaf node stores the primary-key value, so fetching the row takes a second lookup through the primary-key index
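The difference can be observed with EXPLAIN (table and index names are illustrative): when a secondary-index query selects only columns contained in the index, the second lookup is skipped and EXPLAIN reports "Using index" (a covering index):

```sql
-- Hypothetical InnoDB table:
CREATE TABLE emp (
    empno INT PRIMARY KEY,
    ename VARCHAR(50),
    sal   DECIMAL(10, 2),
    INDEX idx_ename (ename)
) ENGINE = InnoDB;

-- Covering query: the primary key empno is stored in every
-- secondary-index leaf, so no extra primary-key lookup is needed
-- (Extra column shows "Using index"):
EXPLAIN SELECT empno, ename FROM emp WHERE ename = 'SMITH';

-- Selecting sal forces the second lookup through the clustered index:
EXPLAIN SELECT sal FROM emp WHERE ename = 'SMITH';
```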
- Query cache
Enable the cache and allocate cache space (e.g. 64MB)
Cache invalidation: the table structure or data content changes
Queries that are never cached: SQL statements containing changing values, such as the current time or a random number
Multiple SQL statements with the same business logic but different text (extra spaces, different case, and so on) each get their own cache entry
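A sketch of inspecting and enabling the query cache. Note that the query cache was deprecated in MySQL 5.7.20 and removed in 8.0, so this applies to older versions only:

```sql
-- Inspect the current query-cache settings:
SHOW VARIABLES LIKE 'query_cache%';

-- Allocate 64MB of cache space (MySQL 5.7 and earlier):
SET GLOBAL query_cache_size = 64 * 1024 * 1024;

-- query_cache_type is normally set in my.cnf (query_cache_type=1);
-- it cannot be enabled at runtime if the server started with it off.
```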
- Partition, sub-table design
Partitioning algorithms (MySQL): KEY, HASH, RANGE, LIST
(PHP code does not need to change)
Adding or removing partitions:
Removal: dropping a RANGE/LIST partition (DROP PARTITION) discards the rows stored in it; coalescing HASH/KEY partitions redistributes the rows instead
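A sketch of a RANGE-partitioned table (table and partition names are illustrative); application SQL stays the same:

```sql
-- Orders partitioned by year:
CREATE TABLE orders (
    id      INT NOT NULL,
    created YEAR NOT NULL
)
PARTITION BY RANGE (created) (
    PARTITION p2022 VALUES LESS THAN (2023),
    PARTITION p2023 VALUES LESS THAN (2024),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);

-- Dropping a RANGE partition also deletes its rows:
ALTER TABLE orders DROP PARTITION p2022;
```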
- Vertical Sub-table
Split the fields of one data table into several separate data tables
The splitting logic lives at the PHP (application) level
- Architecture Design
Master-slave mode (read/write separation, one master, multiple slaves)
The master server handles the "writes"; the slave servers handle the "reads"
The master automatically replicates data to the slaves (built-in MySQL replication)
With load balancing, reads can be distributed evenly across the slave servers
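A minimal sketch of pointing a slave at the master. The host, account, and log coordinates below are placeholders, and server-ids plus binary logging must already be configured in my.cnf on both servers:

```sql
-- On the slave: tell it where the master is and start replicating.
CHANGE MASTER TO
    MASTER_HOST     = '192.0.2.10',        -- placeholder master address
    MASTER_USER     = 'repl',              -- replication account created on the master
    MASTER_PASSWORD = 'repl_password',
    MASTER_LOG_FILE = 'mysql-bin.000001',  -- from SHOW MASTER STATUS on the master
    MASTER_LOG_POS  = 154;

START SLAVE;
SHOW SLAVE STATUS\G  -- Slave_IO_Running / Slave_SQL_Running should both be Yes
```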
- slow query log settings
SHOW VARIABLES LIKE 'slow_query_log%';
Turn on the slow query log switch
Set the time threshold (long_query_time)
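A sketch of the two settings above (the 2-second threshold is only an example value):

```sql
-- Inspect the current slow-query-log settings:
SHOW VARIABLES LIKE 'slow_query_log%';
SHOW VARIABLES LIKE 'long_query_time';

-- Turn the log on and record any statement slower than 2 seconds:
SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 2;  -- seconds; takes effect for new connections
```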
II. MySQL optimization
1. Writing a large number of records
Goal: get the data written to the database as quickly as possible
INSERT INTO table_name VALUES (...), (...), (...), (...);
A single INSERT statement can write multiple records at once, but should not carry too many,
to avoid unexpected problems (such as exceeding the server's packet limit).
Write fewer rows per statement, for example 1000 at a time: 1 million records then take 1000 INSERT statements.
Split the batch and write the data to the database over several statements.
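As a sketch of the batching described above (the `emp_log` table is hypothetical):

```sql
-- Hypothetical table:
CREATE TABLE emp_log (id INT, msg VARCHAR(100));

-- One statement, many rows: far fewer round trips and parses
-- than one INSERT per row. Keep the batch size moderate
-- (e.g. 1000 rows) so the statement stays below max_allowed_packet.
INSERT INTO emp_log (id, msg) VALUES
    (1, 'a'),
    (2, 'b'),
    (3, 'c');
```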
Method for writing a large amount of data in batches:
Write data (batch of 1000) -----> maintain the index for the 1st 1000 rows
Write data (batch of 1000) -----> maintain the index for the 2nd 1000 rows
...
Write data (batch of 1000) -----> maintain the index for the 1000th 1000 rows
With the design above, writing 1 million records spends most of its time on "index maintenance".
Optimization: reduce the index maintenance work to cut the total running time
(index maintenance should not run 1000 times; ideally it runs once)
FIX:
Disable the indexes first, write all the data to the database, then rebuild the indexes once at the end
1.1 MyISAM data table
- The data table already contains data (the indexes already exist)
ALTER TABLE table_name DISABLE KEYS;  -- stops updating nonunique indexes
Bulk Write Data
ALTER TABLE table_name ENABLE KEYS;  -- rebuild the indexes once at the end
- The data table is empty (the indexes contain nothing)
ALTER TABLE table_name DROP PRIMARY KEY, DROP INDEX index_name;  -- unique/normal/full-text
Bulk Write Data
ALTER TABLE table_name ADD PRIMARY KEY (id), ADD INDEX index_name (field);  -- unique/full-text as appropriate
1.2 InnoDB data table: the storage engine supports "transactions"
This feature lets us commit a large number of SQL statements at once
Specific operation:
START TRANSACTION;
write the data in bulk (1 million records: execute INSERT 1000 times)
inside the transaction, each INSERT only changes pages in memory; nothing is made durable yet
COMMIT;
when COMMIT executes, the data and index changes are flushed durably in one step,
instead of paying a disk flush per statement as autocommit would
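A sketch of the transaction-wrapped batch (the `emp_log` table, with columns `id INT, msg VARCHAR(100)`, is hypothetical):

```sql
-- Without this, each INSERT runs in its own autocommit
-- transaction and pays a log flush per statement.
START TRANSACTION;

INSERT INTO emp_log (id, msg) VALUES (1, 'a'), (2, 'b');
INSERT INTO emp_log (id, msg) VALUES (3, 'c'), (4, 'd');
-- ... repeat for the remaining batches ...

COMMIT;  -- one durable flush for the whole batch
```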
2. Single-table vs multi-table queries
Database operations sometimes involve join queries and subqueries.
Compound queries typically involve multiple data tables.
Benefit of querying across multiple tables: the SQL logic is clear and simple
Drawback: it consumes more resources and takes longer,
which hurts concurrency because multiple tables stay locked for a long time
For example:
Count the total number of goods under each brand (goods/brand)
goods: id, name, bd_id
brand: bd_id, name
SELECT b.bd_id, b.name, COUNT(g.id) FROM brand b JOIN goods g ON b.bd_id = g.bd_id GROUP BY b.bd_id, b.name;
Suppose the SQL statement above takes 5s to run.
If the business requires high database concurrency, change the "multi-table query" into "single-table queries".
Steps:
① SELECT bd_id, COUNT(*) FROM goods GROUP BY bd_id;  -- goods count per brand, 3s
② SELECT bd_id, name FROM brand;  -- 3s
③ merge the results of ① and ② with logic code in PHP  -- 1s
Each single-table query is shorter and locks only one table briefly, which is what improves concurrency.
3. LIMIT usage: data paging uses LIMIT
LIMIT offset, length (rows per page);
offset = (current page - 1) * rows per page
Paging implementation:
Get 10 records per page:
LIMIT 0, 10;
LIMIT 10, 10;
LIMIT 20, 10;
LIMIT 30, 10;
LIMIT 990, 10; 100th page
LIMIT 9990, 10; 1,000th page
LIMIT 99990, 10; 10,000th page
LIMIT 999990, 10; 100,000th page
LIMIT 1499990, 10; 150,000th page
LIMIT 1500000, 10; 150,001st page
SELECT * FROM emp LIMIT 1500000, 10;  -- over 1 second
SELECT * FROM emp WHERE empno > 1600001 LIMIT 10;  -- fast, about 0.00 seconds
The data table has a primary-key index on empno.
LIMIT offset, length with a large offset runs for a long time:
a plain LIMIT cannot use the index to skip rows; the server reads every row before the requested page and throws them away, which wastes time.
Now optimize the SQL statement that fetches the same page:
replace the plain LIMIT with a WHERE + LIMIT combination.
Execution is much faster because the WHERE condition field has an index, so the server seeks directly to the first row of the page.
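Generalizing the WHERE + LIMIT idea gives keyset pagination; the sketch below assumes the application remembers the last `empno` returned on the previous page (the value shown is a placeholder):

```sql
-- Seek directly to the next page via the primary-key index
-- instead of scanning and discarding 1.5 million rows:
SELECT * FROM emp
WHERE empno > 1600000   -- last empno seen on the previous page (placeholder)
ORDER BY empno
LIMIT 10;
```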
4. ORDER BY NULL: force no sorting
Some SQL statements produce a sorted result by default when executed.
But sometimes the business does not need that sort, so we can suppress it explicitly and save the resources the default sort consumes.
GROUP BY field;
The result is sorted by the grouped field by default (in MySQL 5.7 and earlier):
ORDER BY NULL forces no sorting and saves the corresponding resources:
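A sketch using the goods table from the example above. Note this applies to MySQL 5.7 and earlier; in 8.0 GROUP BY no longer implies sorting, so ORDER BY NULL is unnecessary there:

```sql
-- Implicitly sorts the result by bd_id (MySQL 5.7 and earlier),
-- possibly paying a filesort:
SELECT bd_id, COUNT(*) FROM goods GROUP BY bd_id;

-- Same grouping with the sort suppressed; EXPLAIN should no
-- longer show "Using filesort":
SELECT bd_id, COUNT(*) FROM goods GROUP BY bd_id ORDER BY NULL;
```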