How Oracle tables are joined (2) ----- The basic mechanism of the HASH join (2)


The principle of the hash algorithm

What is the principle behind a hash algorithm? It is hard to explain precisely, so here is a metaphor. Suppose we have a large number of pigs, each with a different weight, and the weights are fairly evenly distributed (measured at the kilogram level). We divide them by weight into 100 small pigsties, and each pig is driven into the pigsty that matches its weight and recorded there. Now, how do we find a particular piglet? Do we have to search every pigsty and check every pig? Of course not. We look at the weight of the pig we want, which tells us which pigsty to go to; the number of pigs in that one pigsty is relatively small, so we can find the pig we want fairly quickly. This corresponds to how a hash algorithm works: items are assigned to "pigsties" according to their hashcode, and items with the same hashcode end up in the same pigsty. When looking something up, we first locate the pigsty that corresponds to the hashcode and then compare the items inside it. The crux of the matter is how many pigsties it is appropriate to build. If every pig's weight is different (measured down to the milligram) and each pig gets its own pigsty, we can find any pig at the fastest possible speed, but the cost of building so many pigsties is far too high. If instead we divide by 10-kilogram bands, only a handful of pigsties are built and each one holds a great many pigs: we find the right pigsty very quickly, but picking out the right pig inside it is exhausting. A good hashcode therefore balances, according to the actual situation and the specific needs, time cost (more pigsties, faster lookups) against space cost (fewer pigsties, lower space requirements). (Excerpted from the Internet.)
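
As a minimal sketch of this bucketing idea (the pig names and weights below are invented for illustration), here is how grouping by a simple "hashcode" narrows a lookup to one small group instead of the whole collection:

```python
def bucket_of(weight_kg, bucket_width=10):
    # The "hashcode": pigs whose weights fall in the same 10 kg band share a pen.
    return int(weight_kg) // bucket_width

pigs = [("A", 23.5), ("B", 88.0), ("C", 27.1), ("D", 85.4), ("E", 51.0)]

# Build: drive every pig into the pen that matches its hashcode.
pens = {}
for name, weight in pigs:
    pens.setdefault(bucket_of(weight), []).append((name, weight))

# Probe: to find the pig weighing 27.1 kg, hash the weight first,
# then compare only against the few pigs in that one pen.
target = 27.1
for name, weight in pens.get(bucket_of(target), []):
    if weight == target:
        print("found", name)

# Fewer, wider pens -> cheaper to build but more comparisons per lookup;
# more, narrower pens -> faster lookups but a higher space cost.
```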

Basic process of a hash join (can be read alongside the "hash join" mode described earlier)

If two tables (call them T1 and T2) are joined with a hash join, Oracle performs the following steps in order:

1. Determine the number of hash partitions: first, Oracle determines the number of hash partitions based on the values of the parameters hash_area_size, db_block_size and _hash_multiblock_io_count. (A hash partition is a logical concept; the set of all hash partitions is called the hash table, that is, a hash table is made up of multiple hash partitions, and a hash partition in turn is made up of several hash buckets.)

2. Determine the driving result set: of the result sets obtained from T1 and T2 after applying the predicate conditions specified in the target SQL (if any), Oracle selects the one with the smaller amount of data as the driving result set of the hash join. Here we assume that the result set corresponding to T1 is the smaller one and denote it S, and that the result set corresponding to T2 is the larger one and denote it B; S is therefore the driving result set and B is the driven result set.

3. Compute the hash values: Oracle then traverses S, reads each record in S, and applies a hash operation to each record's value in the join column of T1. The hash operation uses two built-in hash functions, both of which compute a hash value over the join column at the same time. We denote the two built-in hash functions hash_func_1 and hash_func_2, and their results hash_value_1 and hash_value_2 respectively (see the build-and-probe sketch after this list).

4. Grouping: Oracle then stores each record of S into the appropriate hash bucket of the appropriate hash partition according to that record's hash_value_1, and stores the record together with the hash_value_2 computed by hash_func_2. Note that what is stored in the hash bucket is not the complete row of the target table: it is enough to store the join columns and the query columns of that table that the target SQL references. We denote the hash partitions corresponding to S as Si.

5. Build a bitmap: while building the Si, Oracle also constructs a bitmap that marks, for each hash bucket contained in the Si, whether it holds any records (that is, whether its record count is greater than 0).

6. Building the hash table when S is large: if the data volume of S is very large, the PGA work area may fill up while the hash table for S is being built. When this happens, Oracle writes the in-memory hash partition that currently contains the most records out to disk (the temp tablespace) and then continues building the hash table for S. If the work area fills up again during this process, Oracle repeats the step of picking the hash partition with the most records and writing it to disk. If the hash partition to which a new record belongs has already been written to disk, Oracle updates that hash partition on disk, appending the record and its hash_value_2 directly to the corresponding hash bucket of the on-disk hash partition. Note that in extreme cases only some of the hash partitions may remain in memory while all the remaining hash partitions have been written back to disk (see the spill sketch after this list).

7. The process of building the hash table for S continues until every record in S has been traversed.

8. Sorting: Oracle then sorts all of the Si by the number of records they contain and places the sorted hash partitions into memory (the PGA work area) in order, as far as they will fit; any that do not fit remain on disk. The point of sorting by record count is to keep as many of the smaller hash partitions in memory as possible, leaving on disk the hash partitions with large record counts that have already been written back and those that the remaining memory cannot hold. Obviously, if all of the Si fit in memory, no Si ever needs to be written back to disk and there is no need to sort at all.

9. Having finished processing S, Oracle can now begin processing B.

10. Oracle traverses B, reads each record in B, and applies to that record's value in the join column of T2 exactly the same hash operation as in step 3, that is, it still uses hash_func_1 and hash_func_2 and computes the two hash values hash_value_1 and hash_value_2. Oracle then uses the record's hash_value_1 to look for a matching hash bucket in the Si. If a matching hash bucket is found, Oracle walks through every record stored in it and checks its join column to see whether it is a true match (that is, whether the join columns of the candidate records in S and B are really equal, because different values can produce the same hash value). If it is a true match, the query columns that the target SQL needs from this record of B and from the matching record in the hash bucket are combined into one record that satisfies the join condition of the target SQL. If no matching hash bucket is found, Oracle consults the bitmap built in step 5. If the bitmap shows that the corresponding hash bucket in the Si has a record count greater than 0, then although that hash bucket is not in memory it has been written back to disk, so Oracle also writes this record of B, based on its hash_value_1, back to disk in the form of a hash partition, storing the record together with the hash_value_2 computed by hash_func_2. If the bitmap shows that the record count of the corresponding hash bucket in the Si is 0, Oracle does not need to write this record of B back to disk at all, because it cannot possibly satisfy the join condition of the target SQL. Using the bitmap to decide whether a record of B must be written to disk is called "bitmap filtering" (see the build-and-probe sketch after this list). We denote the hash partitions corresponding to B as Bj.

11. The above process of looking for matching hash buckets and building the Bj continues until all records in B have been traversed.

12. At this point Oracle has processed all of the in-memory Si and the Bj that correspond to them; only the Si and Bj located on disk remain to be processed.

13. Because the same hash functions hash_func_1 and hash_func_2 were used to build both the Si and the Bj, Oracle can safely process the on-disk Si and Bj in pairs: only an Si and a Bj with the same hash partition number can produce records that satisfy the join condition. We use Sn and Bn to denote a pair of on-disk Si and Bj that share the same hash partition number.

14. For each pair of Sn and Bn, the one with fewer records is treated as the driving result set, and Oracle uses the hash_value_2 values stored in the hash buckets of that driving result set to build a new hash table; the one with more records is treated as the driven result set, and Oracle uses the hash_value_2 values stored in its hash buckets to probe that new hash table for matching records. Note that for each pair of Sn and Bn Oracle always chooses the one with fewer records as the driving result set, so the driving result set may change from pair to pair; this is what is called "dynamic role swapping" (see the role-swap sketch after this list).

15. If a matching record is found in step 14, it is likewise returned as a record that satisfies the join condition of the target SQL.

16. The above processing of Sn and Bn continues until all of the Sn and Bn have been traversed.
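
The following is a minimal, in-memory Python sketch of the build phase (steps 3 to 5) and the probe phase (step 10). The data, the two hash functions and the bucket count are invented stand-ins; Oracle's real hash_func_1 and hash_func_2 are internal, and partition spilling to the temp tablespace and PGA work-area management are not modeled here.

```python
N_BUCKETS = 8  # stands in for the hash buckets of the in-memory partitions

def hash_func_1(key):        # decides which bucket a row lands in
    return hash(("h1", key)) % N_BUCKETS

def hash_func_2(key):        # second hash value, stored alongside the row
    return hash(("h2", key))

# S: the smaller (driving) row source; B: the larger (driven) row source.
S = [(1, "s-row-1"), (2, "s-row-2"), (4, "s-row-4")]
B = [(2, "b-row-2"), (3, "b-row-3"), (4, "b-row-4"), (5, "b-row-5")]

# Build phase (steps 3-5): hash every S row into a bucket, keeping only the join
# key, the needed columns and hash_value_2; mark non-empty buckets in a bitmap.
buckets = [[] for _ in range(N_BUCKETS)]
bitmap = [False] * N_BUCKETS
for key, payload in S:
    b = hash_func_1(key)
    buckets[b].append((key, payload, hash_func_2(key)))
    bitmap[b] = True

# Probe phase (step 10): hash every B row with the same functions, skip buckets
# the bitmap marks as empty ("bitmap filtering"), and re-check the join key
# because different key values can hash to the same bucket.
result = []
for key, payload in B:
    b = hash_func_1(key)
    if not bitmap[b]:
        continue                       # no S row can match: discard immediately
    for s_key, s_payload, _hv2 in buckets[b]:
        if s_key == key:               # verify the real join condition
            result.append((s_payload, payload))

print(result)   # [('s-row-2', 'b-row-2'), ('s-row-4', 'b-row-4')]
```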
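
The next sketch only illustrates the spill policy described in step 6, using invented sizes: when the simulated work area fills, the in-memory partition holding the most rows is written to a stand-in for the temp tablespace, and later rows that belong to a spilled partition go straight to its on-disk copy. This is a toy model, not Oracle's actual memory management.

```python
N_PARTITIONS = 4
WORK_AREA_LIMIT = 6            # max rows the simulated work area may hold

in_memory = {p: [] for p in range(N_PARTITIONS)}
on_disk = {p: [] for p in range(N_PARTITIONS)}   # stands in for the temp tablespace

def rows_in_memory():
    return sum(len(rows) for rows in in_memory.values() if rows is not None)

for key in range(10):              # rows of S, identified by their join key
    p = key % N_PARTITIONS         # stand-in for hash_func_1
    if in_memory[p] is None:       # partition already spilled:
        on_disk[p].append(key)     # append the row to its on-disk copy
        continue
    in_memory[p].append(key)
    if rows_in_memory() > WORK_AREA_LIMIT:
        # work area is full: spill the in-memory partition with the most rows
        fattest = max((q for q in in_memory if in_memory[q] is not None),
                      key=lambda q: len(in_memory[q]))
        on_disk[fattest].extend(in_memory[fattest])
        in_memory[fattest] = None  # mark it as spilled

print({p: rows for p, rows in in_memory.items() if rows is not None})
print({p: rows for p, rows in on_disk.items() if rows})
```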
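
Finally, a hedged sketch of how the on-disk partition pairs of steps 13 and 14 might be handled: for each pair with the same partition number, the smaller side is used as the build input, which is the "dynamic role swapping" described above. The partition contents below are invented; in reality Oracle reads these pairs back from the temp tablespace and builds the new hash table from the stored hash_value_2 values.

```python
def join_pair(build_rows, probe_rows):
    # Classic build/probe on one partition pair, keyed on the join column.
    table = {}
    for key, payload in build_rows:
        table.setdefault(key, []).append(payload)
    out = []
    for key, payload in probe_rows:
        for match in table.get(key, []):
            out.append((match, payload))
    return out

# Spilled partitions, keyed by partition number: (rows of Sn, rows of Bn).
spilled = {
    0: ([(10, "s10")], [(10, "b10"), (11, "b11")]),                  # Sn is smaller
    1: ([(20, "s20"), (21, "s21"), (22, "s22")], [(21, "b21")]),     # Bn is smaller
}

results = []
for n, (s_rows, b_rows) in spilled.items():
    if len(s_rows) <= len(b_rows):
        results += join_pair(s_rows, b_rows)      # S side stays the build input
    else:
        # role swap: the smaller Bn builds the hash table, the larger Sn probes it
        results += [(s, b) for (b, s) in join_pair(b_rows, s_rows)]

print(results)   # [('s10', 'b10'), ('s21', 'b21')]
```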

The advantages, disadvantages and applicable scenarios of the hash join can be summarized as follows:

1. A hash join does not necessarily sort; in most cases it does not need to sort at all.

2. The join column of the driving table (result set) of a hash join should be as selective as possible, because its selectivity affects the number of records in the corresponding hash buckets, and the number of records in a hash bucket directly affects how efficiently matching records can be found in it. If a hash bucket contains too many records, the execution efficiency of the hash join can be severely degraded. The typical symptom is that the hash join runs for a long time without finishing and the CPU utilization of the database server is very high, yet the logical reads consumed by the target SQL are very low: most of the time is spent traversing all the records in the oversized hash buckets, and because this traversal happens in the PGA work area it consumes no logical reads (a small illustration follows this list).

3. The hash join applies only to the CBO, and it can only be used for equi-join conditions (even for a hash anti-join, Oracle actually converts it into an equivalent equi-join).

4. Hash joins are very well suited to joining a small table (or result set) with a large table, especially when the join column of the small table is highly selective; in that case the execution time of the hash join can be approximated by the time spent on the full table scan of the large table.

5. When two tables are joined with a hash join, if the hash table built on the result set with the smaller data volume (after applying the predicate conditions specified in the target SQL, if any) can be held entirely in memory (the PGA work area), the execution efficiency of the hash join is very high.
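
The following small illustration of point 2 uses invented keys to show why a poorly selective (skewed) join column hurts: most build-side rows fall into one bucket, so every probe of that bucket degenerates into a long in-memory scan, which burns CPU without adding logical reads.

```python
from collections import Counter

N_BUCKETS = 16

def bucket_sizes(keys):
    # Count how many build-side rows land in each hash bucket.
    return Counter(hash(k) % N_BUCKETS for k in keys)

selective_keys = range(10_000)                   # almost unique join values
skewed_keys = [0] * 9_000 + list(range(1_000))   # 90% of rows share one value

print(max(bucket_sizes(selective_keys).values()))   # 625: buckets stay small
print(max(bucket_sizes(skewed_keys).values()))      # one bucket holds over 9,000 rows
```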

---organized from the network
