1. This scheme splits data into separate databases (a hot library and a cold library) rather than sharding tables. Table sharding would require introducing a sharding algorithm and would also complicate subsequent queries.
2. Hot data is only part of the total data, so every query goes to the hot library first; the cold library is queried only in the following cases:
--a. The query misses entirely (the result set is empty): query the cold library.
--b. The query hits only partially: query the cold library.
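The routing rule above can be sketched as follows. This is a minimal illustration, using in-memory dicts as stand-ins for the hot and cold libraries; all names here are illustrative assumptions, not a fixed schema.

```python
# Stand-ins for the two libraries (illustrative data).
HOT = {1: "a", 2: "b"}
COLD = {3: "c", 4: "d"}

def query(ids):
    """Query the hot library first; fall back to the cold library on a
    complete miss (empty result) or a partial hit."""
    hot_hits = {i: HOT[i] for i in ids if i in HOT}
    if ids and len(hot_hits) == len(ids):
        return hot_hits          # full hit: the hot library is enough
    # empty or partial hit: the cold library must also be consulted
    cold_hits = {i: COLD[i] for i in ids if i in COLD}
    return {**hot_hits, **cold_hits}
```

A full hit (`query([1, 2])`) never touches the cold library; a partial hit (`query([1, 3])`) merges results from both.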
3. To distinguish partial hits from full hits, we can build an R table in the hot library that stores, for each query that reached the cold library, the query conditions and the result count. On each subsequent query, compare the hot library's result count for the same conditions against the stored count: if they match, the query ends here; if they differ, the cold library must also be queried.
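A sketch of that comparison, with the R table modeled as a dict keyed by a normalized query condition (the key format and counts are assumptions for illustration):

```python
# R table stand-in: condition -> result count recorded the last time
# this condition was answered with help from the cold library.
R_TABLE = {("status=1",): 5}

def needs_cold_query(condition, hot_count):
    """True if the cold library must be consulted: either the hot-library
    count disagrees with the recorded count (rows moved to cold storage),
    or there is no record and the hot library returned nothing."""
    recorded = R_TABLE.get(condition)
    if recorded is None:
        return hot_count == 0    # no record: fall through only on a miss
    return hot_count != recorded
```

If the hot library still returns all 5 recorded rows, the query ends; if it returns only 3, some rows now live in the cold library.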
4. A better plan: when the counts differ, query the cold library only for the rows that were not found in the hot library. For this, the R table must store not only the result count but all primary keys of the result set.
5. For example, if a query once returned 100 rows and 80 of them are still hot while 20 have become cold data, only those 20 rows should be requested from the cold library. The primary keys stored in the R table are compared against the hot-library result to identify exactly which rows are now cold.
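The primary-key diff described in points 4 and 5 can be sketched in a few lines; the key values here are made up for illustration.

```python
def cold_keys_needed(r_table_keys, hot_result_keys):
    """Return only the primary keys that have moved to the cold library."""
    return set(r_table_keys) - set(hot_result_keys)

recorded = set(range(100))   # 100 primary keys stored in the R table
hot_hits = set(range(80))    # 80 of them still found in the hot library
missing = cold_keys_needed(recorded, hot_hits)   # the 20 keys to fetch cold
```

Only the 20 keys in `missing` are sent to the cold library, instead of re-running the whole query there.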
6. Hot library → cold library: when querying and using hot data, move hot data that has not been used for a period of time into the cold library.
7. Cold library → hot library: when the cold library is queried, move the results of that query into the hot library, stamped with the latest query date.
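Points 6 and 7 together form a two-way migration. A minimal sketch, again with in-memory dicts as stand-ins for the two libraries; the 30-day staleness threshold and the `last_query` field name are assumptions:

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=30)   # assumed staleness threshold

def demote_stale(hot, cold, now):
    """Hot -> cold: move rows whose last query date is too old."""
    for key in [k for k, row in hot.items()
                if now - row["last_query"] > STALE_AFTER]:
        cold[key] = hot.pop(key)

def promote_results(hot, cold, keys, now):
    """Cold -> hot: move queried rows into the hot library and stamp
    them with the latest query date."""
    for key in keys:
        if key in cold:
            row = cold.pop(key)
            row["last_query"] = now
            hot[key] = row
```

In a real SQL Server deployment these moves would be batched DML between the two databases; the dict version only shows the bookkeeping.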
8. Data synchronization (performed on every query, or batched once the pending changes reach a certain magnitude).
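The "certain magnitude" option in point 8 amounts to buffering bookkeeping changes and flushing them in batches. A sketch under that assumption (the threshold value and class name are illustrative):

```python
class SyncBuffer:
    """Accumulate R-table / migration changes and flush them as a batch
    once a size threshold is reached, instead of syncing on every query."""

    def __init__(self, threshold=1000):
        self.threshold = threshold
        self.pending = []

    def record(self, change, flush):
        self.pending.append(change)
        if len(self.pending) >= self.threshold:
            flush(self.pending)   # push the whole batch in one operation
            self.pending = []
```

Flushing per batch trades a little staleness in the R table for far fewer round trips between the hot and cold libraries.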
SQL Server query optimization for tables with tens of millions of rows: thoughts on "hot and cold database separation"