A recent project involves querying large volumes of data from a data table. The table holds roughly 700 million to 2 billion rows, with a globally unique index built on the primary key and a partitioning strategy that combines hash partitioning with range partitioning. Most of the time the query conditions match more than a million records, while a single query returns only the first XX records (one page). The following lessons were learned during tuning:
(1) As far as the business rules permit, minimize range (interval) conditions, even when the attribute is an indexed field, for example: Date >= min AND Date <= max. With a range restriction, every candidate record has to be compared against the bounds, and when the number of records is large this seriously degrades query efficiency. An illustration follows below.
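As a minimal sketch (assuming a hypothetical ORDERS table with an indexed ORDER_DATE column, not the project's actual schema), a wide range predicate forces many index entries to be compared against both bounds, while a tighter window allowed by the business touches far fewer:

    -- Wide range: even with an index on ORDER_DATE, every candidate entry
    -- in the full-year range must be compared against both bounds.
    SELECT order_id, order_date, amount
      FROM orders
     WHERE order_date >= DATE '2024-01-01'
       AND order_date <= DATE '2024-12-31';

    -- Narrower range permitted by the business rules: far fewer index
    -- entries are examined, so the query finishes much sooner.
    SELECT order_id, order_date, amount
      FROM orders
     WHERE order_date >= DATE '2024-12-01'
       AND order_date <= DATE '2024-12-31';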
(2) Whenever possible, keep the ORDER BY clause consistent with the index order. Oracle builds indexes in ascending order by default; if the ORDER BY follows the same order in which the index was created, no additional sort operation is needed. A sketch follows below.
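A sketch of the idea, again using the hypothetical ORDERS table and an assumed index name (idx_orders_date): when the ORDER BY lists the same columns, in the same order and direction, as the index, the optimizer can return rows straight from the index scan and skip a separate SORT ORDER BY step.

    CREATE INDEX idx_orders_date ON orders (order_date, order_id);

    -- ORDER BY matches the index column order and the default ascending
    -- direction, so rows can be delivered in index order without sorting.
    SELECT order_id, order_date
      FROM orders
     WHERE order_date >= DATE '2024-12-01'
     ORDER BY order_date, order_id;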
(3) If the query criteria are covered by an index, consider first retrieving only the ROWIDs through the index, and then fetching the data by ROWID in a second step. The benefit of this approach is that table access is kept to a minimum. Note that it is suited to cases where the final result set is small, such as one page of records; see the sketch below.
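A sketch of this pattern for fetching rows 21-40 (page 2, 20 rows per page), assuming the hypothetical index on (ORDER_DATE, ORDER_ID) from above; the table name, columns, and page boundaries are illustrative only:

    -- The inner query can be satisfied from the index alone (the index
    -- entries already carry the ROWIDs); only the 20 ROWIDs that survive
    -- the ROWNUM filter are then used to visit the table, so table access
    -- is limited to the one page of rows actually returned.
    SELECT t.*
      FROM (SELECT rid, ROWNUM AS rn
              FROM (SELECT ROWID AS rid
                      FROM orders
                     WHERE order_date >= DATE '2024-12-01'
                     ORDER BY order_date, order_id)
             WHERE ROWNUM <= 40) p,
           orders t
     WHERE p.rn > 20
       AND t.ROWID = p.rid;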
At present there is still an unsolved problem: sorting data accessed across many hash partitions is very slow. Work on it is ongoing, to be continued!
(4) When querying the total number of records, it is recommended to write the SQL as SELECT COUNT(1) FROM table_name WHERE [optional conditions ...]. If the query criteria are indexed fields, the benefit of this form is that the result can be obtained directly from the index without any table access; an example follows below.
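A short example of such a count, again on the hypothetical ORDERS table with ORDER_DATE indexed:

    -- Because the only referenced column (ORDER_DATE) is in the index,
    -- the count can be answered by scanning the index alone (an index
    -- range scan in the execution plan), with no visit to the table.
    SELECT COUNT(1)
      FROM orders
     WHERE order_date >= DATE '2024-12-01'
       AND order_date <= DATE '2024-12-31';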