I. Pitfalls in Design
1. Too many columns
The MySQL storage engine API works by copying data between the server layer and the storage engine layer in a row-buffer format; the server layer then decodes the buffer into columns. This copy-and-decode step is expensive, and its cost grows with the number of columns, so a table with too many columns pays a high conversion overhead.
2. Too many joins
As a rough rule of thumb, if you want queries to execute quickly and concurrently, avoid joining more than about 10 tables in a single query.
3. Overusing ENUM
Changing the set of allowed values in an ENUM column requires ALTER TABLE, which is a blocking and costly operation.
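As a sketch (the table and values here are hypothetical), adding a value to an ENUM column means redefining the entire column:

CREATE TABLE enum_test (
    e ENUM('fish', 'apple', 'dog') NOT NULL
);
# adding 'bird' requires restating the full value list in an ALTER TABLE
ALTER TABLE enum_test MODIFY COLUMN e ENUM('fish', 'apple', 'dog', 'bird') NOT NULL;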
4. Using "impossible" values to avoid NULL
Avoiding NULL by storing an impossible placeholder value, such as the zero date below, creates its own problems:
CREATE TABLE date_test (
    dt DATETIME NOT NULL DEFAULT '0000-00-00 00:00:00'
);
II. Mixing Normalization and Denormalization
The most common ways to denormalize data are to copy or cache selected columns from one table in another, and to cache derived values, such as the total number of comments on a post. The point of mixing the two approaches is to trade off the performance of join queries against the cost of keeping updates and deletes consistent.
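For example (the schema here is hypothetical), the comment count for a post can be cached as a column on the posts table and maintained alongside each new comment:

CREATE TABLE posts (
    id INT UNSIGNED NOT NULL PRIMARY KEY,
    title VARCHAR(200) NOT NULL,
    num_comments INT UNSIGNED NOT NULL DEFAULT 0  # cached derived value
);
# when a comment is inserted, the cached count must be updated too
UPDATE posts SET num_comments = num_comments + 1 WHERE id = 123;

The cost of this design is that every write path touching comments must also keep num_comments in step, which is exactly the consistency burden denormalization takes on.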
III. Cache Tables and Summary Tables
A cache table stores copies of selected columns from a main table to make certain queries efficient.
A summary table stores data aggregated with GROUP BY.
Both kinds of table exist so that queries which would otherwise need special-purpose indexes on the main table can be served without adding those indexes.
When rebuilding a summary table or cache table, you usually need the old data to remain available while the rebuild runs. This can be done with a shadow table:
DROP TABLE IF EXISTS my_summary_new, my_summary_old;
CREATE TABLE my_summary_new LIKE my_summary;
# fill my_summary_new with the rebuilt data
RENAME TABLE my_summary TO my_summary_old, my_summary_new TO my_summary;
IV. Counter Tables
If you keep a counter in a table, you may run into concurrency problems when updating it, so it is usually better to store the counter in a separate table.
Suppose you have a counter table with a single row that records the number of clicks on the site. Every click updates the counter:
UPDATE hit_counter SET cnt = cnt + 1;
The problem is that any transaction wanting to update this row must acquire a lock on it, which acts as a global mutex and forces those transactions to execute serially. To get higher concurrency, spread the counter across multiple rows and update a randomly chosen row each time:
CREATE TABLE hit_counter (
    id TINYINT UNSIGNED NOT NULL PRIMARY KEY,
    cnt INT UNSIGNED NOT NULL
) ENGINE = InnoDB;
Then pre-populate the table with 100 rows, and have each update pick one row at random:
UPDATE hit_counter SET cnt = cnt + 1 WHERE id = RAND() * 100;
To read the total, sum over all the rows:
SELECT SUM(cnt) FROM hit_counter;
A common requirement is to start a new counter every so often (for example, once a day), which needs only a small change to the table design.
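One way to do this (a sketch; the table name is illustrative) is to add the period to the primary key and let INSERT ... ON DUPLICATE KEY UPDATE create each day's rows on demand:

CREATE TABLE daily_hit_counter (
    day DATE NOT NULL,
    slot TINYINT UNSIGNED NOT NULL,
    cnt INT UNSIGNED NOT NULL,
    PRIMARY KEY (day, slot)
) ENGINE = InnoDB;

# inserts a new (day, slot) row on first hit, otherwise increments it
INSERT INTO daily_hit_counter (day, slot, cnt)
VALUES (CURRENT_DATE, RAND() * 100, 1)
ON DUPLICATE KEY UPDATE cnt = cnt + 1;

This keeps the multi-row concurrency trick from above while starting a fresh set of counter rows each day without any explicit reset step.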