Computers are very good at sequential access: sequential writes and sequential reads are both highly efficient. We generally assume that reading data from main memory is always faster than reading chunks from disk, but that is not necessarily true: random access to main memory can actually cost more than a sequential read from disk. So what does this characteristic of computers suggest for database optimization?
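The effect of access order can be sketched with a small micro-benchmark: touch the same bytes once in storage order and once in shuffled order. This is a hypothetical illustration, not a rigorous benchmark (in an interpreted language the loop overhead can mask much of the cache effect), but it shows the shape of the comparison the text is making.

```python
import random
import time

# Hypothetical micro-benchmark: read the same bytes in sequential vs
# shuffled order, to illustrate that access locality matters even in RAM.
N = 2_000_000
data = bytes(N)

def total_time(order):
    """Sum data[i] for every i in `order` and return the elapsed seconds."""
    start = time.perf_counter()
    total = 0
    for i in order:
        total += data[i]
    return time.perf_counter() - start

seq_order = list(range(N))
rnd_order = seq_order[:]
random.shuffle(rnd_order)          # same indices, random visiting order

seq_t = total_time(seq_order)
rnd_t = total_time(rnd_order)
print(f"sequential: {seq_t:.3f}s  random: {rnd_t:.3f}s")
```

In lower-level languages (or with larger working sets) the gap between the two orderings becomes much more pronounced, because the sequential pass benefits from caching and prefetching while the shuffled pass takes a cache miss on almost every access.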
What are the disadvantages of using an index:
1. Index pages must be constantly updated, which produces random writes.
2. When new data is continually inserted, index pages may split, resulting in index fragmentation.
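The page-split behavior in point 2 can be sketched in a few lines. This is a toy model with a hypothetical tiny page capacity (real B-tree pages hold hundreds of keys): when an insert overflows a full page, the page is split into two half-full pages, which is the source of the fragmentation mentioned above.

```python
PAGE_SIZE = 4  # hypothetical tiny capacity; real pages hold far more keys

def insert(pages, key):
    """Insert key into the sorted page that should hold it; split on overflow."""
    for i, page in enumerate(pages):
        # Pick the last page whose key range covers `key`.
        if i == len(pages) - 1 or key < pages[i + 1][0]:
            page.append(key)
            page.sort()
            if len(page) > PAGE_SIZE:
                mid = len(page) // 2
                # Split: one full page becomes two half-full pages.
                pages[i:i + 1] = [page[:mid], page[mid:]]
            return

pages = [[10, 20, 30, 40]]   # one page, already full
insert(pages, 25)            # forces a split
print(pages)                 # -> [[10, 20], [25, 30, 40]]
```

After the split, both resulting pages are only about half full, and the new page generally lands somewhere else on disk, so subsequent scans of the index no longer read contiguous storage.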
When you should avoid using indexes:
When the amount of data is small, avoid using an index. On the one hand this saves the index I/O and index-maintenance cost; more importantly, the data can be accessed sequentially, and by the computer's principle of locality this greatly improves both update and query speed.
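For a small table, this amounts to a plain sequential scan. The sketch below (hypothetical names, in-memory rows standing in for a small table) shows the trade-off: lookups walk the rows in storage order, and every insert is just an append with no index to maintain.

```python
# Hypothetical small "table": a list of (id, name) rows in storage order.
rows = [(1, "alice"), (2, "bob"), (3, "carol")]

def scan_lookup(rows, wanted_id):
    """Sequential scan: touches rows in storage order (good locality)."""
    for row_id, name in rows:
        if row_id == wanted_id:
            return name
    return None

rows.append((4, "dave"))     # insert = sequential append, no index update
print(scan_lookup(rows, 4))  # -> dave
```

For a handful of rows the O(n) scan is effectively free, while an index would add random writes on every insert for no measurable read benefit.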
Use clustered indexes appropriately:
When a large amount of data forces you to use an index, consider a clustered index. The leaf nodes of a clustered index are the data pages themselves (the leaf nodes of a nonclustered index are index pages), so the rows are ordered on the physical device and the principle of locality can be fully exploited. However, this layout is not suitable for frequent insertions.
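A minimal sketch of why this helps, assuming a hypothetical in-memory "table": because a clustered index keeps rows physically sorted by key, a range query reads one contiguous slice of the data pages (sequential I/O), whereas with a nonclustered index each matching key would be a separate random lookup into the heap.

```python
import bisect

# Hypothetical clustered "table": rows stored physically in key order,
# so a range scan is one contiguous slice of the data pages.
clustered = [(k, f"row{k}") for k in range(0, 100, 10)]

def range_scan(table, lo, hi):
    """Return all rows with lo <= key <= hi as one contiguous slice."""
    keys = [k for k, _ in table]
    i = bisect.bisect_left(keys, lo)
    j = bisect.bisect_right(keys, hi)
    return table[i:j]   # sequential read of adjacent rows

print(range_scan(clustered, 20, 50))
# -> [(20, 'row20'), (30, 'row30'), (40, 'row40'), (50, 'row50')]
```

The flip side, as noted above, is that inserting a key into the middle of this sorted layout forces data pages (not just index pages) to shift or split, which is why heavy insert workloads fit a clustered layout poorly.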
Prefer append writes to disk wherever possible:
For example, suppose we want to update a field. We usually assume that updating it in place is fine, but that produces a random write. Instead, we can append the new version of the data and filter out the old version when reading.
Of course, this approach has drawbacks of its own; it is only meant to convey the general idea.
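The append-instead-of-overwrite idea above can be sketched as a tiny append-only log (hypothetical names; real log-structured systems add compaction, indexing, and durability on top of this):

```python
# Hypothetical append-only "update": instead of overwriting a record in
# place (a random write), append a new version; reads pick the latest one.
log = []  # append-only log of (key, value) versions

def update(key, value):
    log.append((key, value))   # sequential append, never rewrites old data

def read(key):
    # The most recently appended version wins; older versions are
    # filtered out at read time.
    for k, v in reversed(log):
        if k == key:
            return v
    return None

update("balance", 100)
update("balance", 80)          # "update" = append a newer version
print(read("balance"))         # -> 80
```

The drawbacks the text alludes to show up here directly: the log grows with every update, and reads must skip past stale versions, which is why real log-structured designs periodically compact old entries away.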
Random vs. sequential reads and writes on computers