Oracle batch binding forall bulk collect usage and Test Cases

Document directory
  • Use bulk collect to reduce loop processing overhead
  • Oracle batch binding forall bulk collect
  • Use bulk collect to improve Oracle query efficiency
BULK COLLECT can be used to reduce loop-processing overhead: it loads a query result into a collection in a single operation, instead of processing rows one by one with a cursor. BULK COLLECT can be used in SELECT INTO, FETCH INTO, and RETURNING INTO statements. Note that when BULK COLLECT is used, all INTO variables must be collections.
Here are some simple examples:

-- Use BULK COLLECT in a SELECT INTO statement
DECLARE
  TYPE sallist IS TABLE OF emp.sal%TYPE;
  sals sallist;
BEGIN
  -- Limit the number of rows to 100.
  SELECT sal BULK COLLECT INTO sals FROM emp
   WHERE ROWNUM <= 100;
  -- Retrieve approximately 10% of the rows in the table.
  SELECT sal BULK COLLECT INTO sals FROM emp SAMPLE (10);
END;
/

-- Use BULK COLLECT in a FETCH statement
DECLARE
  TYPE deptrectab IS TABLE OF dept%ROWTYPE;
  dept_recs deptrectab;
  CURSOR c1 IS
    SELECT deptno, dname, loc FROM dept WHERE deptno > 10;
BEGIN
  OPEN c1;
  FETCH c1 BULK COLLECT INTO dept_recs;
  CLOSE c1;
END;
/

-- Use BULK COLLECT in a RETURNING clause
CREATE TABLE emp2 AS SELECT * FROM employees;

DECLARE
  TYPE numlist IS TABLE OF employees.employee_id%TYPE;
  enums numlist;
  TYPE namelist IS TABLE OF employees.last_name%TYPE;
  names namelist;
BEGIN
  DELETE FROM emp2 WHERE department_id = 30
    RETURNING employee_id, last_name BULK COLLECT INTO enums, names;
  DBMS_OUTPUT.PUT_LINE('Deleted ' || SQL%ROWCOUNT || ' rows:');
  FOR i IN enums.FIRST .. enums.LAST LOOP
    DBMS_OUTPUT.PUT_LINE('Employee #' || enums(i) || ': ' || names(i));
  END LOOP;
END;
/

DROP TABLE emp2;

Oracle batch binding forall bulk collect

http://otn.oracle.com/oramag/oracle...tech_plsql.html
http://otn.oracle.com/docs/products..colls.htm%23723

For the use of LIMIT in bulk binds, please refer to Tom's explanation (a short LIMIT sketch follows below):
http://asktom.oracle.com/pls/ask/f?...5918938803188,y
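To illustrate the LIMIT clause itself, here is a minimal sketch that fetches the emp rows from the earlier examples in batches; the batch size of 100 is an arbitrary choice for illustration:

DECLARE
  TYPE sallist IS TABLE OF emp.sal%TYPE;
  sals sallist;
  CURSOR c1 IS SELECT sal FROM emp;
BEGIN
  OPEN c1;
  LOOP
    -- Fetch at most 100 rows per round trip to cap the collection's memory use.
    FETCH c1 BULK COLLECT INTO sals LIMIT 100;
    EXIT WHEN sals.COUNT = 0;
    FOR i IN 1 .. sals.COUNT LOOP
      NULL;  -- process sals(i) here
    END LOOP;
  END LOOP;
  CLOSE c1;
END;
/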

FORALL enhancements in Oracle 10g

http://www.itpub.net/showthread.php?s=&threadid=184794

Bulk binds improve performance by reducing context switches between the PL/SQL and SQL engines. Bulk binds include:
(i) Input collections: use the FORALL statement to improve the performance of DML (INSERT, UPDATE, and DELETE) operations.
(ii) Output collections: use the BULK COLLECT clause, which is generally used to improve query (SELECT) performance.
In Oracle 10g, the FORALL statement can be used in three ways:
i IN lower_bound .. upper_bound
i IN INDICES OF collection -- iterates over the subscripts (indexes) of the collection's elements
i IN VALUES OF collection -- iterates over the values of the collection's elements, which are used as subscripts
The FORALL statement can also use the SQL%BULK_ROWCOUNT attribute: SQL%BULK_ROWCOUNT(i) gives the number of rows affected by element i of the FORALL statement, as shown in the sketch below.
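A minimal sketch of FORALL ... INDICES OF together with SQL%BULK_ROWCOUNT, reusing the dept table from the FETCH example above; the sparse collection and its subscripts are made up purely for illustration:

DECLARE
  TYPE numtab IS TABLE OF dept.deptno%TYPE INDEX BY PLS_INTEGER;
  depts numtab;
  idx   PLS_INTEGER;
BEGIN
  -- A deliberately sparse collection: only subscripts 1, 3 and 10 exist.
  depts(1)  := 10;
  depts(3)  := 20;
  depts(10) := 30;

  -- INDICES OF skips the gaps and binds only the populated subscripts.
  FORALL i IN INDICES OF depts
    UPDATE dept SET loc = UPPER(loc) WHERE deptno = depts(i);

  -- SQL%BULK_ROWCOUNT uses the same subscripts as the driving collection.
  idx := depts.FIRST;
  WHILE idx IS NOT NULL LOOP
    DBMS_OUTPUT.PUT_LINE('Element ' || idx || ' updated ' ||
                         SQL%BULK_ROWCOUNT(idx) || ' row(s).');
    idx := depts.NEXT(idx);
  END LOOP;
  ROLLBACK;
END;
/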

CREATE TABLE parts1 (pnum INTEGER, pname VARCHAR2(15));
CREATE TABLE parts2 (pnum INTEGER, pname VARCHAR2(15));
CREATE TABLE parts3 (pnum INTEGER, pname VARCHAR2(15));
CREATE TABLE parts4 (pnum INTEGER, pname VARCHAR2(15));

SET SERVEROUTPUT ON  -- turn on the screen display switch

DECLARE
  TYPE numtab  IS TABLE OF parts1.pnum%TYPE  INDEX BY PLS_INTEGER;
  TYPE nametab IS TABLE OF parts1.pname%TYPE INDEX BY PLS_INTEGER;
  pnums  numtab;
  pnames nametab;
  iterations CONSTANT PLS_INTEGER := 50000;
  t1 INTEGER; t2 INTEGER; t3 INTEGER; t4 INTEGER; t5 INTEGER;
  stmt_str   VARCHAR2(255);
  table_name VARCHAR2(255);
BEGIN
  FOR j IN 1 .. iterations LOOP  -- load the index-by tables
    pnums(j)  := j;
    pnames(j) := 'Part No. ' || TO_CHAR(j);
  END LOOP;

  t1 := DBMS_UTILITY.GET_TIME;
  FOR i IN 1 .. iterations LOOP  -- use a FOR loop
    INSERT INTO parts1 VALUES (pnums(i), pnames(i));
  END LOOP;

  t2 := DBMS_UTILITY.GET_TIME;
  FORALL i IN 1 .. iterations   -- use a FORALL statement
    INSERT INTO parts2 VALUES (pnums(i), pnames(i));

  t3 := DBMS_UTILITY.GET_TIME;
  table_name := 'parts3';
  stmt_str := 'insert into ' || table_name || ' values (:num, :pname)';
  FOR i IN 1 .. iterations LOOP  -- use a FOR loop with dynamic SQL
    EXECUTE IMMEDIATE stmt_str USING pnums(i), pnames(i);
  END LOOP;

  t4 := DBMS_UTILITY.GET_TIME;
  table_name := 'parts4';
  stmt_str := 'insert into ' || table_name || ' values (:num, :pname)';
  FORALL i IN 1 .. iterations   -- use a FORALL statement with dynamic SQL
    EXECUTE IMMEDIATE stmt_str USING pnums(i), pnames(i);

  t5 := DBMS_UTILITY.GET_TIME;
  DBMS_OUTPUT.PUT_LINE('Execution time (secs)');
  DBMS_OUTPUT.PUT_LINE('---------------------');
  DBMS_OUTPUT.PUT_LINE('FOR loop:           ' || TO_CHAR((t2 - t1) / 100));
  DBMS_OUTPUT.PUT_LINE('FORALL:             ' || TO_CHAR((t3 - t2) / 100));
  DBMS_OUTPUT.PUT_LINE('FOR loop (dynamic): ' || TO_CHAR((t4 - t3) / 100));
  DBMS_OUTPUT.PUT_LINE('FORALL (dynamic):   ' || TO_CHAR((t5 - t4) / 100));
  COMMIT;
END;
/

DROP TABLE parts1;
DROP TABLE parts2;
DROP TABLE parts3;
DROP TABLE parts4;

/*
  BULK COLLECT statement:
  Used to fetch data in batches. It applies only to SELECT INTO, FETCH INTO,
  and the RETURNING clause of DML statements.
*/
DECLARE
  TYPE type_emp IS TABLE OF scott.emp%ROWTYPE INDEX BY BINARY_INTEGER;
  tab_emp type_emp;
  TYPE type_ename IS TABLE OF scott.emp.ename%TYPE INDEX BY BINARY_INTEGER;
  tab_ename type_ename;
  CURSOR c IS
    SELECT * FROM scott.emp;
BEGIN
  -- SELECT INTO with BULK COLLECT
  SELECT * BULK COLLECT INTO tab_emp FROM scott.emp;
  FOR i IN 1 .. tab_emp.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE(tab_emp(i).ename);
  END LOOP;
  DBMS_OUTPUT.NEW_LINE;

  -- RETURNING clause with BULK COLLECT
  DELETE scott.emp RETURNING ename BULK COLLECT INTO tab_ename;
  FOR i IN 1 .. tab_ename.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE(tab_ename(i));
  END LOOP;
  ROLLBACK;

  -- FETCH with BULK COLLECT
  OPEN c;
  FETCH c BULK COLLECT INTO tab_emp;
  CLOSE c;
  DBMS_OUTPUT.NEW_LINE;
  FOR i IN 1 .. tab_emp.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE(tab_emp(i).sal);
  END LOOP;
END;
/

Use BULK COLLECT to improve Oracle query efficiency

The BULK COLLECT feature was first introduced in Oracle 8i. It allows you to use batch queries in PL/SQL, which can significantly improve query efficiency in some cases. Below we run some simple tests and analyze this feature.

1. First, create a table and insert 100000 records
Execute the following script in SQL*Plus:

DROP TABLE empl_tbl
/
CREATE TABLE empl_tbl (last_name  VARCHAR2(20),
                       first_name VARCHAR2(10),
                       salary     NUMBER(10))
/

BEGIN
  FOR i IN 3000 .. 102999 LOOP
    INSERT INTO empl_tbl (last_name, first_name, salary)
    VALUES ('Carl' || (i - 3000), 'Wu' || (103000 - i), i);
  END LOOP;
END;
/
COMMIT
/
SELECT COUNT(*) FROM empl_tbl;

2. Use three methods to count the number of distinct values in one column of the table

2.1 Using a regular DISTINCT query

SQL> select count(distinct last_name) "distinct last name" from empl_tbl;

Distinct last name
------------------
100000
 
Executed in 0.36 seconds

We can see that the regular method takes 0.36 seconds to find that the table has 100000 distinct last_name values.

2.2 Using a cursor

Run the following block to count the number of distinct last_name values:

DECLARE
  all_rows       NUMBER(10);
  temp_last_name empl_tbl.last_name%TYPE;
BEGIN
  all_rows := 0;
  temp_last_name := '';
  FOR cur IN (SELECT last_name FROM empl_tbl ORDER BY last_name) LOOP
    IF cur.last_name != temp_last_name THEN
      all_rows := all_rows + 1;
    END IF;
    temp_last_name := cur.last_name;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('all_rows are ' || all_rows);
END;

Note that the code above uses an implicit cursor FOR loop; to improve readability, we do not explicitly declare a cursor variable.

Execution result:
All_rows are 100000
PL/SQL procedure successfully completed
Executed in 1.402 seconds

The cursor method takes 1.4 seconds to find that the table has 100000 distinct last_name values, more than three times as long as the DISTINCT query.

2.3 Using a BULK COLLECT batch query

The sample code is as follows:
DECLARE
  all_rows NUMBER(10);
  -- First, define an index-by table type.
  TYPE last_name_tab IS TABLE OF empl_tbl.last_name%TYPE INDEX BY BINARY_INTEGER;
  -- Then define an index-by table (collection) variable.
  last_name_arr  last_name_tab;
  temp_last_name empl_tbl.last_name%TYPE;
BEGIN
  all_rows := 0;
  temp_last_name := '';
  -- Use a BULK COLLECT batch query to fill the collection variable.
  SELECT last_name BULK COLLECT INTO last_name_arr FROM empl_tbl;

  FOR i IN 1 .. last_name_arr.COUNT LOOP
    IF temp_last_name != last_name_arr(i) THEN
      all_rows := all_rows + 1;
    END IF;
    temp_last_name := last_name_arr(i);
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('all_rows are ' || all_rows);
END;

Note that in the code above we first define an index-by table type, last_name_tab, then declare a collection variable of that type, last_name_arr, and finally use a BULK COLLECT batch query to fill last_name_arr. Pay attention to the syntax.

Execution result:
All_rows are 100000
PL/SQL procedure successfully completed
Executed in 0.28 seconds
The results above show that the BULK COLLECT batch query takes only 0.28 seconds to find that the table has 100000 distinct last_name values: about one fifth of the time taken by the cursor query, and even faster than the regular DISTINCT query.

3. Test result analysis
Why do we see these results? We can use Oracle's SQL_TRACE to analyze them. In a SQL*Plus session, enable tracing with ALTER SESSION SET SQL_TRACE = TRUE, execute the preceding three queries, and then use the tkprof utility to generate a trace report, for example as sketched below.
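A minimal sketch of the tracing steps; the raw trace file name and its directory (the instance's user dump destination) depend on your configuration, so the file name below is only a placeholder:

-- In the SQL*Plus session that runs the three test queries:
ALTER SESSION SET SQL_TRACE = TRUE;
-- ... run the DISTINCT query, the cursor block and the BULK COLLECT block ...
ALTER SESSION SET SQL_TRACE = FALSE;

-- Then, from the operating system, format the raw trace file with tkprof:
--   tkprof <your_trace_file>.trc bulk_test_report.txt sys=no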

3.1 Regular DISTINCT query result analysis

********************************************************************************
select count(distinct last_name)
from
 empl_tbl

call     count       cpu    elapsed       disk      query    current        rows
-------  ------  --------  ---------  ---------  ---------  ---------  ----------
Parse         1      0.00       0.00          0          0          0           0
Execute       1      0.00       0.00          0          0          0           0
Fetch         1      0.28       0.32        198        425          4           1
-------  ------  --------  ---------  ---------  ---------  ---------  ----------
total         3      0.28       0.32        198        425          4           1

Misses in library Cache during parse: 1
Optimizer goal: Choose
Parsing user ID: 62

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT GROUP BY
 100000  TABLE ACCESS FULL EMPL_TBL

********************************************************************************
The preceding query first performs a full table scan of empl_tbl and then sorts the result into groups. The time spent parsing and executing the SQL is negligible; the main cost is reading data, since this SQL is a pure query with no insert or update. In the data-reading phase, 198 Oracle data blocks are read from disk and 425 blocks are read in consistent mode (the query column, i.e. consistent gets). Because physical disk reads are time-consuming, the query does not execute very quickly.

3.2 Cursor query efficiency analysis

********************************************************************************
select last_name
from
 empl_tbl order by last_name

call     count       cpu    elapsed       disk      query    current        rows
-------  ------  --------  ---------  ---------  ---------  ---------  ----------
Parse         0      0.00       0.00          0          0          0           0
Execute       1      0.00       0.00          0          0          0           0
Fetch    100001      0.71       0.62        198        425          4      100000
-------  ------  --------  ---------  ---------  ---------  ---------  ----------
total    100002      0.71       0.62        198        425          4      100000

Misses in library Cache during parse: 0
Optimizer goal: Choose
Parsing user ID: 62 (recursive depth: 1)
********************************************************************************

This method also spends most of its time in the data-reading phase. Although the number of disk blocks read and the number of consistent gets are the same as for the DISTINCT query, the cursor loop performs 100001 fetches and reads 100000 rows in total. This is why the cursor approach takes about 1.4 seconds. Next, let's look at what happens with BULK COLLECT.

3.3 BULK COLLECT query efficiency analysis

********************************************************************************
select last_name
from
 empl_tbl

call     count       cpu    elapsed       disk      query    current        rows
-------  ------  --------  ---------  ---------  ---------  ---------  ----------
Parse         0      0.00       0.00          0          0          0           0
Execute       1      0.00       0.00          0          0          0           0
Fetch         1      0.08       0.07          0        425          0      100000
-------  ------  --------  ---------  ---------  ---------  ---------  ----------
total         2      0.08       0.07          0        425          0      100000

Misses in library Cache during parse: 0
Optimizer goal: Choose
Parsing user ID: 62 (recursive depth: 1)
********************************************************************************
Although this method also reads 100000 rows, the fetch happens only once: the BULK COLLECT statement reads all the required data into memory in a single operation, and in this run no data blocks had to be read from disk. This is why the method has an advantage over the previous two and shows the highest execution efficiency.

4. Conclusion
The tests and analysis above show that a BULK COLLECT batch query can improve query efficiency to some extent: it first reads the required data into memory and then performs the statistical analysis. However, if the database server has little memory available and cannot hold the entire BULK COLLECT result set in memory, the collected results may have to spill to disk, in which case BULK COLLECT can be less efficient than the other two methods. Interested readers can test this further.

In addition to BULK COLLECT batch queries, we can also use the FORALL statement to perform batch INSERT, DELETE, and UPDATE operations, which can significantly improve execution efficiency for bulk data operations; a small sketch follows.
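A minimal FORALL sketch against the empl_tbl test table created above; the salary threshold and the 10% raise are made up purely for illustration:

DECLARE
  TYPE nametab IS TABLE OF empl_tbl.last_name%TYPE INDEX BY PLS_INTEGER;
  names nametab;
BEGIN
  -- Collect the rows to change with one BULK COLLECT query ...
  SELECT last_name BULK COLLECT INTO names
    FROM empl_tbl
   WHERE salary < 5000;

  -- ... then apply the change with a single FORALL batch bind.
  FORALL i IN 1 .. names.COUNT
    UPDATE empl_tbl SET salary = salary * 1.1
     WHERE last_name = names(i);

  DBMS_OUTPUT.PUT_LINE(SQL%ROWCOUNT || ' rows updated.');
  ROLLBACK;  -- keep the test data unchanged
END;
/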

 
