Sort Merge Join, Nested Loops, Hash Join (three join methods)

Source: Internet
Author: User

There are three typical join methods:


Sort Merge Join (SMJ):


First, the rows needed from the driving table are produced and sorted on the join column; then the rows needed from the probed table are produced and sorted on the corresponding join column; finally, the two sorted row sources are merged. Sorting is a time-consuming, resource-intensive operation, especially for large tables, so SMJ is usually not a particularly efficient join method. However, if the driving table and the probed table are already sorted, this method is quite efficient.
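As a rough illustration of the three phases (sort, sort, merge), here is a conceptual Python sketch; it is not Oracle's implementation, and rows are simply modeled as (join_key, payload) tuples:

```python
# Conceptual sketch of a sort-merge join (not Oracle's implementation).
# Each "table" is a list of (join_key, payload) tuples.

def sort_merge_join(driving, probed):
    a = sorted(driving)            # sort the driving rows on the join column
    b = sorted(probed)             # sort the probed rows on the join column
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i][0] < b[j][0]:
            i += 1                 # advance the side with the smaller key
        elif a[i][0] > b[j][0]:
            j += 1
        else:
            # emit every pairing of rows that share this key
            key, j0 = a[i][0], j
            while i < len(a) and a[i][0] == key:
                j = j0
                while j < len(b) and b[j][0] == key:
                    out.append((a[i], b[j]))
                    j += 1
                i += 1
    return out

rows = sort_merge_join([(2, "x"), (1, "y")], [(2, "p"), (3, "q")])
```

If both inputs arrive pre-sorted, the two `sorted` calls become unnecessary and only the cheap merge phase remains, which is exactly why SMJ wins on pre-sorted row sources.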


Nested Loops (NL):


The join nests the probed table inside a loop over the driving table: every row of the driving table is matched against the rows of the probed table. Nested loops can return a joined row as soon as it is found, without waiting for the whole join operation to complete, which gives fast response times.
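This early-return behavior can be illustrated with a Python generator (a conceptual sketch, not Oracle's implementation):

```python
# Conceptual sketch of a nested loops join as a generator: joined rows
# are yielded as soon as they are found, without waiting for the full
# join to finish (this is what gives the fast first-row response time).

def nested_loops_join(driving, probed):
    for d_key, d_val in driving:           # outer loop: driving table
        for p_key, p_val in probed:        # inner loop: probed table
            if d_key == p_key:
                yield (d_key, d_val, p_val)

# the first joined row is available after scanning only part of the data
first = next(nested_loops_join([(1, "a"), (2, "b")], [(2, "p"), (1, "q")]))
```

In a real database the inner loop is usually replaced by an index lookup on the probed table's join column, which is why the nested loops plan below uses an INDEX RANGE SCAN.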




Hash Join:


The smaller row source is used to build a hash table and a bitmap; the second row source is then hashed and probed against the hash table built from the first, and matching rows are joined. This join is highly efficient when the hash table and bitmap fit in memory, but it requires an appropriate hash_area_size setting and can only be used for equijoins.
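Conceptually, the build and probe phases look like this Python sketch (the bitmap and the hash_area_size memory management are omitted; this is not Oracle's implementation):

```python
# Conceptual sketch of a hash join (not Oracle's implementation).
# The smaller row source builds an in-memory hash table; the second
# row source probes it. Only equijoins are possible, because rows can
# match only when their join keys hash to the same bucket.

def hash_join(small, big):
    table = {}
    for key, val in small:                 # build phase: smaller row source
        table.setdefault(key, []).append(val)
    out = []
    for key, val in big:                   # probe phase: larger row source
        for s_val in table.get(key, ()):   # exact match inside the bucket
            out.append((key, s_val, val))
    return out

rows = hash_join([(1, "a")], [(1, "x"), (2, "y")])
```

Note that each probe row touches only one bucket, so the big table is scanned exactly once with no sorting, which is where the hash join's efficiency comes from.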


Cartesian product: each row of one table is matched in turn with every row of the other table.




_______________________________________________________
Experiment:
SQL> CREATE TABLE segs AS SELECT * FROM dba_segments WHERE owner = 'SYS';


Table created.


SQL> CREATE TABLE objts AS SELECT * FROM dba_objects WHERE owner = 'SYS';


Table created.


SQL> SELECT COUNT(*) FROM segs;


  COUNT(*)
----------
      2355
SQL> SELECT COUNT(*) FROM objts;


  COUNT(*)
----------
     30967
SQL> CREATE INDEX idx_segs_name ON segs (segment_name);


Index created.


SQL> CREATE INDEX idx_objts_name ON objts (object_name);


Index created.


SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'SEGS', cascade => TRUE);


PL/SQL procedure successfully completed.


SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'OBJTS', cascade => TRUE);


PL/SQL procedure successfully completed.


Role: DBMS_STATS.GATHER_TABLE_STATS gathers statistics for tables, columns, and indexes.
Its syntax is as follows:
DBMS_STATS.GATHER_TABLE_STATS(ownname VARCHAR2, tabname VARCHAR2, partname VARCHAR2, estimate_percent NUMBER, block_sample BOOLEAN, method_opt VARCHAR2, degree NUMBER, granularity VARCHAR2, cascade BOOLEAN, stattab VARCHAR2, statid VARCHAR2, statown VARCHAR2, no_invalidate BOOLEAN, force BOOLEAN);
Parameter description:
ownname: owner of the table to analyze.
tabname: name of the table to analyze.
partname: name of the partition; only meaningful for partitioned tables or partitioned indexes.
estimate_percent: percentage of rows to sample, in the range [0.000001, 100]; NULL means analyze all rows with no sampling. The constant DBMS_STATS.AUTO_SAMPLE_SIZE is the default and lets Oracle determine the best sample size.
block_sample: whether to use block sampling instead of row sampling.
method_opt: determines how histogram information is gathered. Possible values:
FOR ALL COLUMNS: gather histograms on all columns.
FOR ALL INDEXED COLUMNS: gather histograms on all indexed columns.
FOR ALL HIDDEN COLUMNS: gather histograms on invisible columns.
FOR COLUMNS <list> SIZE <N> | REPEAT | AUTO | SKEWONLY: gather histograms on the specified columns. N is in the range [1, 254]; REPEAT reuses the size of the previously gathered histograms; AUTO lets Oracle decide N; SKEWONLY gathers histograms only where the data is skewed (multiple endpoints with the same value).
degree: degree of parallelism. The default is NULL.
granularity: granularity of statistics to collect; only relevant if the table is partitioned.
cascade: whether to also collect index statistics. The default is FALSE.
stattab: specifies the table in which to store the statistics; statid differentiates the statistics when several tables share one stattab; statown is the owner of the statistics table. If none of these three parameters is specified, the statistics are written directly to the data dictionary.
no_invalidate: if TRUE, dependent cursors are not invalidated; if FALSE, the procedure invalidates dependent cursors immediately.
force: collect statistics even if the table is locked.
Example:
EXECUTE DBMS_STATS.GATHER_TABLE_STATS(ownname => 'owner', tabname => 'table_name', estimate_percent => NULL, method_opt => 'FOR ALL INDEXED COLUMNS', cascade => TRUE);
--------------------------------------------------------------------------------
Ever since Oracle 8.1.5 introduced the DBMS_STATS package, experts have recommended using DBMS_STATS instead of ANALYZE, for the following reasons:


DBMS_STATS can analyze in parallel.
DBMS_STATS supports automatic analysis (ALTER TABLE ... MONITORING).
ANALYZE produces inaccurate statistics in some cases.




The first point is easy to understand, and the second is actually the most attractive for a VLDB. The third used to be vague; MetaLink note 236935.1 explains that when ANALYZE is run on a partitioned table, it sometimes computes inaccurate global statistics.


The reason is that DBMS_STATS (when the appropriate parameters are specified) actually gathers global statistics on the table, whereas ANALYZE sums the partition-level (local) statistics into a global figure, which can lead to errors.
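That third point can be made concrete with a tiny illustration (a hypothetical sketch, not Oracle code): the number of distinct values (NDV) of a column is one statistic that cannot be derived by summing partition-level figures, because the same value may appear in several partitions.

```python
# Why summing partition-level statistics can give a wrong global figure:
# the number of distinct values (NDV) is not additive across partitions.

part1 = [1, 2, 3]          # partition 1: NDV = 3
part2 = [2, 3, 4]          # partition 2: NDV = 3

summed_ndv = len(set(part1)) + len(set(part2))   # ANALYZE-style sum: 6
true_ndv   = len(set(part1 + part2))             # true global NDV:   4
```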




SQL> SELECT * FROM segs, objts WHERE segs.segment_name = objts.object_name;


2851 rows selected.




Execution Plan
----------------------------------------------------------
Plan hash value: 779051904


----------------------------------------------------------------------------
| Id  | Operation          | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |  2450 |   528K|   139   (1)| 00:00:02 |
|*  1 |  HASH JOIN         |       |  2450 |   528K|   139   (1)| 00:00:02 |
|   2 |   TABLE ACCESS FULL| SEGS  |  2355 |   287K|    14   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| OBJTS | 30967 |  2903K|   125   (1)| 00:00:02 |
----------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------


   1 - access("SEGS"."SEGMENT_NAME"="OBJTS"."OBJECT_NAME")




Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        674  consistent gets
          0  physical reads
          0  redo size
     228361  bytes sent via SQL*Net to client
       2561  bytes received via SQL*Net from client
        192  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
       2851  rows processed


SQL> SELECT /*+ USE_MERGE(SEGS,OBJTS) */ * FROM segs, objts WHERE segs.segment_name = objts.object_name;


2851 rows selected.




Execution Plan
----------------------------------------------------------
Plan hash value: 2272228973


-------------------------------------------------------------------------------------
| Id  | Operation           | Name  | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |       |  2450 |   528K|       |   822   (1)| 00:00:10 |
|   1 |  MERGE JOIN         |       |  2450 |   528K|       |   822   (1)| 00:00:10 |
|   2 |   SORT JOIN         |       |  2355 |   287K|       |    15   (7)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| SEGS  |  2355 |   287K|       |    14   (0)| 00:00:01 |
|*  4 |   SORT JOIN         |       | 30967 |  2903K|  8136K|   807   (1)| 00:00:10 |
|   5 |    TABLE ACCESS FULL| OBJTS | 30967 |  2903K|       |   125   (1)| 00:00:02 |
-------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------


   4 - access("SEGS"."SEGMENT_NAME"="OBJTS"."OBJECT_NAME")
       filter("SEGS"."SEGMENT_NAME"="OBJTS"."OBJECT_NAME")




Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        487  consistent gets
          0  physical reads
          0  redo size
     248233  bytes sent via SQL*Net to client
       2561  bytes received via SQL*Net from client
        192  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
       2851  rows processed


SQL> SELECT /*+ USE_NL(SEGS,OBJTS) */ * FROM segs, objts WHERE segs.segment_name = objts.object_name;


2851 rows selected.




Execution Plan
----------------------------------------------------------
Plan hash value: 2045044449


-----------------------------------------------------------------------------------------------
| Id  | Operation                    | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                |  2450 |   528K|  4725   (1)| 00:00:57 |
|   1 |  NESTED LOOPS                |                |       |       |            |          |
|   2 |   NESTED LOOPS               |                |  2450 |   528K|  4725   (1)| 00:00:57 |
|   3 |    TABLE ACCESS FULL         | SEGS           |  2355 |   287K|    14   (0)| 00:00:01 |
|*  4 |    INDEX RANGE SCAN          | IDX_OBJTS_NAME |     1 |       |     1   (0)| 00:00:01 |
|   5 |   TABLE ACCESS BY INDEX ROWID| OBJTS          |     1 |    96 |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------


   4 - access("SEGS"."SEGMENT_NAME"="OBJTS"."OBJECT_NAME")




Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       3473  consistent gets
          0  physical reads
          0  redo size
     227906  bytes sent via SQL*Net to client
       2561  bytes received via SQL*Net from client
        192  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
       2851  rows processed


The same statement over the same data yields different costs with the three join methods. As the data volume grows toward the millions of rows, the random reads of the nested loops join increase dramatically, greatly raising its CPU cost and total execution time.


The sort merge join reads relatively few blocks, but its CPU cost and execution time are not negligible: sorting the data sets in memory (possibly spilling to the temp tablespace) consumes considerable CPU and memory (sort segment) resources.


Overall, the hash join delivers the best combined performance for this SQL: only its block reads are slightly higher, and every other indicator is the best of the three.


Below we introduce some system parameters associated with the hash join, and its three operating modes. Different parameter settings can affect the CBO's cost calculation, and the operating modes help us understand how the hash area size in the PGA affects hash join performance.






3. Hash join related parameters


A hash join is an execution plan operation that only the CBO optimizer can generate; under the RBO, no plan containing a hash join can be produced. In addition, the Oracle parameters associated with the hash join include the following:


hash_join_enabled


This parameter is a switch controlling whether the CBO may use a hash join. If set to TRUE, the CBO can choose a hash join; otherwise it cannot. In current versions this has become a hidden parameter named "_hash_join_enabled".


SQL> col name for a20
SQL> col value for a10
SQL> col describ for a30
SQL> SELECT x.ksppinm name, y.ksppstvl value, x.ksppdesc describ
  2    FROM sys.x$ksppi x, sys.x$ksppcv y
  3   WHERE x.inst_id = USERENV('Instance')
  4     AND y.inst_id = USERENV('Instance')
  5     AND x.indx = y.indx
  6     AND x.ksppinm LIKE '%hash_join_enable%';


NAME                 VALUE      DESCRIB
-------------------- ---------- ------------------------------
_hash_join_enabled   TRUE       enable/disable hash join




hash_area_size


The hash join operation depends on a private workspace, which we call the hash area. During the join, the hash area's role is to cache as much of the join table as possible for hash matching and for exact matching inside the buckets. The hash area lives in the PGA and is private to each session. If the hash area is too small to hold all the data of the small table, the temp tablespace is used, which hurts hash join performance.


4. The three hash join modes


Compared with the sort merge join, one advantage of the hash join is its limited use of PGA space. Still, using the PGA carries risk: like the sort area, when the small table cannot be fully loaded into memory, the hash area spills to disk in the temp tablespace, and that is where problems arise.


The description of the three modes below draws on the article "Oracle Hash Join" (http://www.alidba.net/index.php/archives/440); thanks to its author.


For each of these situations, Oracle has a corresponding mode.


Optimal mode


This is the ideal situation for a hash join: the hash data set built from the driving table (the small table) fits entirely in the hash area. This is called optimal mode.


- First the driving table is scanned and its rows are stored in the hash area.
- In the hash area, the driving table rows are hashed into hash buckets, forming the corresponding partition information. Alongside the buckets a bitmap list is built, linking each bucket to a bitmap bit.
- The data rows are distributed across the buckets; the more uniform the join column values, the more uniform the bucket contents. If a bucket contains data, its bitmap bit is set to 1, otherwise 0.
- Then each row of the probed table is read and its join column value is hashed and checked against the bitmap. If the bit is 0, the value cannot exist in the hash table and the row is discarded immediately; otherwise the bucket is entered for an exact match.




OnePass mode


If the configured PGA space is small, or the "small" table is itself large, the temporary tablespace is used. Concretely, a second level of hashing is performed: partitions are built above the bucket level.


During the hash operation, some partitions stay in memory while the other partitions are stored in the temp tablespace.


During join matching, if the bitmap shows that the needed partition is in memory, the lookup and exact matching are done directly in memory; otherwise the corresponding partition is read from the temp tablespace into memory for the matching operation.
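The partitioning idea behind one-pass mode can be sketched conceptually in Python (an illustration under the simplifying assumption that on-disk partitions are just lists in memory; not Oracle's implementation):

```python
# Conceptual sketch of the one-pass idea: both inputs are first hashed
# into partitions. Partition i of one input can only match partition i
# of the other, so each partition pair is joined independently; in a
# real database, partitions that do not fit in memory are parked in the
# temp tablespace until their turn.

N_PARTITIONS = 4

def partition(rows):
    parts = [[] for _ in range(N_PARTITIONS)]
    for key, val in rows:
        parts[hash(key) % N_PARTITIONS].append((key, val))
    return parts

def onepass_hash_join(small, big):
    out = []
    for s_part, b_part in zip(partition(small), partition(big)):
        table = {}
        for key, val in s_part:               # build on the small partition
            table.setdefault(key, []).append(val)
        for key, val in b_part:               # probe with the big partition
            for s_val in table.get(key, ()):
                out.append((key, s_val, val))
    return out
```

Because each memory-resident hash table covers only one partition at a time, the required hash area shrinks to roughly the size of the largest partition of the small table rather than the whole table.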




Multipass Mode


This is an extreme situation: the hash area is so small that even a single partition does not fit, and after the hash operation only half of a partition can be loaded into the hash area.


In this case, a partition that fails to match cannot simply be abandoned; the remaining half of the partition must then be fetched for its own hash join match. That is, one partition goes through the bitmap matching process twice.




5. Conclusion


The hash join is a highly efficient and common join method in the CBO era. Compared with the other classical algorithms, its overall efficiency is very high, especially in the age of massive data.
