Phase I:
1. Design your indexes correctly.
2. Avoid full table scans: write SQL that can actually use an index (for example, `<>`/`!=` predicates, leading-wildcard `LIKE`, or functions applied to an indexed column will usually force a full table scan).
3. Avoid deep-offset queries such as `LIMIT 10000000, 20`; the server must read and discard the first 10,000,000 rows before returning the 20 you want.
4. Avoid heavy `LEFT JOIN`s and similar queries; move that kind of logic out of the database and into the application.
5. Do not create too many indexes per table; at large data volumes, every extra index increases write pressure on the database.
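The habit behind points 1 and 2 can be shown with a small, runnable sketch. SQLite (bundled with Python) stands in for MySQL here, and the table and index names are made up for the example; the lesson is the same in both systems, only the `EXPLAIN` output format differs.

```python
import sqlite3

# SQLite stands in for MySQL so the example runs anywhere; in MySQL you
# would use EXPLAIN and read the "type" and "key" columns instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, age INTEGER)")
conn.execute("CREATE INDEX idx_users_email ON users (email)")
conn.executemany("INSERT INTO users (email, age) VALUES (?, ?)",
                 [(f"u{i}@example.com", i % 80) for i in range(1000)])
conn.commit()

def plan(sql: str) -> str:
    """Return the query plan as a single string."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Equality on an indexed column lets the planner seek through the index.
indexed = plan("SELECT * FROM users WHERE email = 'u42@example.com'")

# A not-equal predicate cannot narrow an index range, so every row is read.
scanned = plan("SELECT * FROM users WHERE email <> 'u42@example.com'")

print(indexed)  # e.g. SEARCH users USING INDEX idx_users_email (email=?)
print(scanned)  # e.g. SCAN users
```

Checking the plan like this before a query ships is exactly the discipline the list above asks for.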
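The usual fix for the `LIMIT 10000000, 20` problem in point 3 is keyset (seek) pagination: remember the last primary key the client saw and seek past it, instead of making the server skip rows. A minimal sketch, again using SQLite as a stand-in for MySQL with an invented `posts` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO posts (id, title) VALUES (?, ?)",
                 [(i, f"post {i}") for i in range(1, 101)])
conn.commit()

# Offset pagination: the engine still reads and throws away the first
# 90 rows before returning 5 -- cost grows with the offset.
deep = conn.execute(
    "SELECT id FROM posts ORDER BY id LIMIT 5 OFFSET 90").fetchall()

# Keyset pagination: seek past the last id seen via the primary key,
# so the cost stays constant no matter how deep the page is.
last_seen = 90
keyset = conn.execute(
    "SELECT id FROM posts WHERE id > ? ORDER BY id LIMIT 5",
    (last_seen,)).fetchall()
```

Both queries return the same page; only the keyset form stays cheap at page 500,000.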
Phase II:
1. Use sub-table (table-splitting) techniques for large tables:
a) Vertical splitting: move a subset of the columns into a companion table, linked to the main table by its primary key.
b) Horizontal splitting: distribute the rows of one table across several sub-tables according to a hash of the split key.
2. Use MySQL partitioning (requires version 5.1 or later; in the author's view it is fully competitive with Oracle's partitioning). It is similar to horizontal splitting, but the split happens at MySQL's logical layer rather than in the application.
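The horizontal-splitting scheme in 1(b) needs a routing rule in the application so every query lands on the right sub-table. A minimal sketch, where the `orders` base name, the split key, and the shard count are all illustrative assumptions:

```python
# Hypothetical router for hash-based horizontal sub-tables.
N_SHARDS = 4  # number of physical sub-tables: orders_0 .. orders_3

def shard_table(user_id: int, base: str = "orders") -> str:
    # A stable modulo hash keeps all of one user's rows in one sub-table,
    # so single-user queries never have to touch every shard.
    return f"{base}_{user_id % N_SHARDS}"

# The application substitutes the routed name into its SQL:
table = shard_table(7)
sql = f"SELECT * FROM {table} WHERE user_id = ?"
```

The main design choice is the split key: pick the column most queries filter on, otherwise every read fans out to all shards.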
Phase III (server side):
1. Use a memory object-caching system such as memcached to reduce database reads.
2. Use a master-slave (replication) design to separate the database's read and write load.
3. Use proxy servers and web-cache servers such as Squid.
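Point 1 is usually implemented as the cache-aside pattern: check the cache first, and only on a miss read MySQL and populate the cache. In this sketch a plain dict stands in for a real memcached client (e.g. pymemcache in production), and `db_load` is a placeholder for the actual `SELECT`:

```python
# Cache-aside sketch; the dict is a stand-in for memcached.
cache = {}
calls = {"db": 0}  # counts real database reads, for illustration

def db_load(user_id):
    # Placeholder for the real SELECT against MySQL.
    calls["db"] += 1
    return {"id": user_id, "name": f"user{user_id}"}

def get_user(user_id):
    key = f"user:{user_id}"
    if key in cache:          # cache hit: no database read at all
        return cache[key]
    row = db_load(user_id)    # cache miss: read MySQL once...
    cache[key] = row          # ...then populate the cache for next time
    return row

get_user(1)
get_user(1)
get_user(1)
```

Three lookups, one database read; with a real memcached the only extra concern is invalidating the key when the row is updated.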
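Point 2, read/write separation, needs a small router in front of the connection pool: writes go to the master, reads are spread over the replicas. A minimal sketch under stated assumptions: the DSN strings are invented placeholders, and the SELECT-prefix check is deliberately naive (a real router must also pin transactions and replication-lag-sensitive reads to the master).

```python
import itertools

# Illustrative DSNs, not real hosts.
MASTER = "mysql://master:3306/app"
REPLICAS = ["mysql://replica1:3306/app", "mysql://replica2:3306/app"]
_rr = itertools.cycle(REPLICAS)  # round-robin over the replicas

def route(sql: str) -> str:
    # Plain SELECTs may go to a replica; everything else must hit the
    # master so writes are never lost on a read-only node.
    if sql.lstrip().upper().startswith("SELECT"):
        return next(_rr)
    return MASTER
```

Many stacks put this logic in a proxy tier instead of the application, which is where tools like the Squid-style caches in point 3 also live.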
PS: For reasons of space I have only sketched the basic concepts; each point above involves a great deal of material. Phase I in particular is not fully understood even by many programmers with years of experience. I think the best way to really understand indexes is to load a test table with at least 10 million (ideally hundreds of millions of) rows, run your SQL statements against it, and inspect each statement's index usage with the EXPLAIN command.
How to handle 500 million rows of MySQL data per month