The cost of a full table scan is calculated as follows:
Cost = ( #SRds * sreadtim + #MRds * mreadtim + CPUCycles / cpuspeed ) / sreadtim
During a full table scan the number of single-block reads is 0 (#SRds is the number of single-block reads), and the CPU cost of a full table scan is small enough to be negligible. Therefore, the full table scan cost formula can be simplified to:
Cost = #MRds * mreadtim / sreadtim
#MRds is the number of multi-block read I/Os.
mreadtim is the time it takes to perform one multi-block read.
sreadtim is the time it takes to perform one single-block read.
So the full table scan cost is really: multi-block read I/O count * multi-block read time / single-block read time.
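To make this concrete, here is a minimal Python sketch of the simplified formula. The statistics (#MRds, mreadtim, sreadtim) are made-up example values, not taken from any real system:

```python
# Hypothetical workload statistics (example values only).
mrds = 1000       # #MRds: number of multi-block read I/Os for the scan
mreadtim = 26.0   # mreadtim: time for one multi-block read (ms)
sreadtim = 12.0   # sreadtim: time for one single-block read (ms)

# Simplified full table scan cost: the total multi-block read time,
# expressed in units of single-block reads.
fts_cost = mrds * mreadtim / sreadtim
print(fts_cost)   # ~2166.67 "single-block read equivalents"
```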
So why should we divide the full table scan cost formula by sreadtim? Let's take a look at the index scan cost calculation formula:
cost = blevel + ceiling(leaf_blocks * effective index selectivity) + ceiling(clustering_factor * effective table selectivity)
blevel is the number of index blocks read while descending from the root to the branch level, which are single-block reads.
ceiling(leaf_blocks * effective index selectivity) is the number of index leaf blocks scanned, also single-block reads.
ceiling(clustering_factor * effective table selectivity) is the number of table block visits made through the ROWIDs found in the index, again single-block reads.
So the essence of the index scan cost formula is simply the number of single-block read I/Os.
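In the same spirit, here is a minimal Python sketch of the index scan cost formula. The statistics (blevel, leaf_blocks, clustering_factor) and the selectivities are made-up example values:

```python
import math

# Hypothetical index statistics (example values only).
blevel = 2                 # blocks read descending from root to branch level
leaf_blocks = 5000         # number of leaf blocks in the index
clustering_factor = 80000  # how scattered the table rows are relative to the index order
ix_sel = 0.01              # effective index selectivity
tb_sel = 0.01              # effective table selectivity

# Index scan cost = root/branch descent + leaf blocks visited + table blocks
# visited by ROWID; every one of these is a single-block read.
ix_cost = blevel + math.ceil(leaf_blocks * ix_sel) + math.ceil(clustering_factor * tb_sel)
print(ix_cost)             # 2 + 50 + 800 = 852 single-block reads
```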
For single-table access (that is, a query that selects from only one table), Oracle can use either a full table scan or an index scan (a materialized view is also possible, but we will not consider that here).
So does Oracle perform a full table scan or an index scan? It calculates the cost of the full table scan and the cost of the index scan, and then compares them to see which one is cheaper.
The question is: why does comparing costs accurately determine which access path is best? Let's compare the full table scan cost with the index scan cost.
Full table scan cost = multi-block read I/O count * multi-block read time / single-block read time
Index scan cost = single-block read I/O count
Now multiply both the full table scan cost and the index scan cost by the single-block read time:
Full table scan cost * single-block read time = multi-block read I/O count * multi-block read time = total time consumed
Index scan cost * single-block read time = single-block read I/O count * single-block read time = total time consumed
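Putting the two together, this minimal sketch (continuing the same made-up numbers from the earlier sketches) shows that multiplying both costs by sreadtim turns them into estimated elapsed times, so comparing costs is the same as comparing times:

```python
sreadtim = 12.0              # single-block read time (ms), example value

fts_cost = 1000 * 26.0 / sreadtim   # full table scan cost from the earlier sketch
ix_cost = 852                       # index scan cost from the earlier sketch

fts_time = fts_cost * sreadtim      # = #MRds * mreadtim: the scan's total I/O time (ms)
ix_time = ix_cost * sreadtim        # = single-block reads * sreadtim: the index path's time (ms)

# Both costs were scaled by the same constant (sreadtim), so the plan
# with the lower cost is also the plan with the lower estimated time.
print("index scan wins" if ix_time < fts_time else "full table scan wins")
```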
To put it bluntly, Oracle is simply choosing whichever of the full table scan and the index scan takes the least time.
By now you should understand it. I have to admire the minds of Oracle's designers.
And that is why the full table scan cost is divided by sreadtim: it expresses the cost in units of single-block reads, the same unit as the index scan cost, so the two access paths can be compared directly.