We know that running SQL statements from PL/SQL programs incurs overhead, because each SQL statement must be submitted to the SQL engine for processing.
Each transfer of control between the PL/SQL engine and the SQL engine is called a context switch, and each switch adds overhead.
However, FORALL and BULK COLLECT let the PL/SQL engine compress many context switches into a single one, which dramatically reduces the time a PL/SQL program needs to run a SQL statement that processes many rows.
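To make the difference concrete, here is a minimal sketch (the table big_t and its numeric column n are hypothetical): the first loop crosses from the PL/SQL engine to the SQL engine once per row, while FORALL crosses only once for the whole collection.

DECLARE
  TYPE numtab IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
  v_ids numtab;
BEGIN
  -- build a collection of 10,000 ids to delete
  FOR i IN 1 .. 10000 LOOP
    v_ids(i) := i;
  END LOOP;

  -- row-by-row: 10,000 context switches
  FOR i IN 1 .. v_ids.COUNT LOOP
    DELETE FROM big_t WHERE n = v_ids(i);  -- big_t is a hypothetical table
  END LOOP;
  ROLLBACK;

  -- bulk bind: a single context switch for all 10,000 rows
  FORALL i IN 1 .. v_ids.COUNT
    DELETE FROM big_t WHERE n = v_ids(i);
  ROLLBACK;
END;
/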
Let's take a closer look at the two.
(I) Accelerating queries with BULK COLLECT
(1) Usage of BULK COLLECT
BULK COLLECT loads an entire query result set into collections in one operation, instead of processing the results row by row with a cursor.
You can use BULK COLLECT in SELECT INTO, FETCH INTO, and RETURNING INTO clauses.
Note that when using BULK COLLECT, all INTO variables must be collections.
Here are some simple examples:
① Using BULK COLLECT in a SELECT INTO statement
DECLARE
  TYPE sallist IS TABLE OF employees.salary%TYPE;
  sals sallist;
BEGIN
  SELECT salary BULK COLLECT INTO sals
    FROM employees
   WHERE ROWNUM <= 50;
  -- use the data in the collection here
END;
/
② Using BULK COLLECT in a FETCH INTO statement
DECLARE
  CURSOR cur IS
    SELECT department_id, department_name
      FROM departments
     WHERE department_id > 10;
  -- base the record type on the cursor so it matches the selected columns
  TYPE deptrectab IS TABLE OF cur%ROWTYPE;
  dept_recs deptrectab;
BEGIN
  OPEN cur;
  FETCH cur BULK COLLECT INTO dept_recs;
  -- use the data in the collection here
  CLOSE cur;
END;
/
③ Using BULK COLLECT in a RETURNING INTO clause
CREATE TABLE emp AS SELECT * FROM employees;

DECLARE
  TYPE numlist IS TABLE OF employees.employee_id%TYPE;
  enums numlist;
  TYPE namelist IS TABLE OF employees.last_name%TYPE;
  names namelist;
BEGIN
  DELETE FROM emp WHERE department_id = 30
  RETURNING employee_id, last_name BULK COLLECT INTO enums, names;
  DBMS_OUTPUT.PUT_LINE('Deleted ' || SQL%ROWCOUNT || ' rows:');
  FOR i IN enums.FIRST .. enums.LAST LOOP
    DBMS_OUTPUT.PUT_LINE('Employee #' || enums(i) || ': ' || names(i));
  END LOOP;
END;
/

Output:
Deleted 6 rows:
Employee #114: Raphaely
Employee #115: Khoo
Employee #116: Baida
Employee #117: Tobias
Employee #118: Himuro
Employee #119: Colmenares
(2) Using BULK COLLECT to optimize large DELETE and UPDATE operations
The example below uses DELETE; the same approach applies to UPDATE.
Example:
Delete 10 million rows from a table containing 100 million rows.
The requirement is to run as fast as possible while keeping the impact on other database applications to a minimum.
If the business cannot be stopped, you can refer to the following approach:
split the work by ROWID, sort by ROWID, and delete in batches.
This method is indeed the best option when the business cannot be stopped.
Generally, committing every 10,000 rows or fewer will not put too much pressure on the rollback segment.
When I do this kind of DML, I usually commit every 1,000 or 2,000 rows.
Run it outside peak business hours, and the application will not be noticeably affected.
The Code is as follows:
DECLARE
  -- cursor sorted by ROWID; adjust the deletion condition oo = xx
  -- to the actual situation
  CURSOR mycursor IS
    SELECT rowid FROM t WHERE oo = xx ORDER BY rowid;
  TYPE rowid_table_type IS TABLE OF ROWID INDEX BY PLS_INTEGER;
  v_rowid rowid_table_type;
BEGIN
  OPEN mycursor;
  LOOP
    FETCH mycursor BULK COLLECT INTO v_rowid LIMIT 5000;  -- commit every 5000 rows
    EXIT WHEN v_rowid.COUNT = 0;
    FORALL i IN v_rowid.FIRST .. v_rowid.LAST
      DELETE FROM t WHERE rowid = v_rowid(i);
    COMMIT;
  END LOOP;
  CLOSE mycursor;
END;
/
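As noted above, the same pattern applies to UPDATE. Here is a sketch under the same assumptions (table t, filter oo = xx, with yy standing in for whatever new value the business requires):

DECLARE
  CURSOR mycursor IS
    SELECT rowid FROM t WHERE oo = xx ORDER BY rowid;
  TYPE rowid_table_type IS TABLE OF ROWID INDEX BY PLS_INTEGER;
  v_rowid rowid_table_type;
BEGIN
  OPEN mycursor;
  LOOP
    FETCH mycursor BULK COLLECT INTO v_rowid LIMIT 5000;
    EXIT WHEN v_rowid.COUNT = 0;
    FORALL i IN v_rowid.FIRST .. v_rowid.LAST
      UPDATE t SET oo = yy  -- yy is a placeholder for the new value
       WHERE rowid = v_rowid(i);
    COMMIT;
  END LOOP;
  CLOSE mycursor;
END;
/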
(3) Limiting the number of rows fetched by BULK COLLECT
Syntax:
FETCH cursor BULK COLLECT INTO ... [LIMIT rows];
rows can be a constant, a variable, or any expression that evaluates to an integer.
Suppose you need to query and process a large number of rows. You could use BULK COLLECT to retrieve them all at once and load them into one very large collection.
However, that approach consumes a large amount of PGA memory for the session, and performance may degrade because of PGA paging.
This is where the LIMIT clause is very useful: it lets us control how much memory the program uses while processing the data.
Example:
DECLARE
  CURSOR allrows_cur IS SELECT * FROM employees;
  TYPE employee_aat IS TABLE OF allrows_cur%ROWTYPE INDEX BY BINARY_INTEGER;
  v_emp employee_aat;
BEGIN
  OPEN allrows_cur;
  LOOP
    FETCH allrows_cur BULK COLLECT INTO v_emp LIMIT 100;
    /* process the batch in the collection */
    FOR i IN 1 .. v_emp.COUNT LOOP
      upgrade_employee_status(v_emp(i).employee_id);  -- user-defined procedure
    END LOOP;
    -- exit after processing, so the final partial batch is not lost
    EXIT WHEN allrows_cur%NOTFOUND;
  END LOOP;
  CLOSE allrows_cur;
END;
/
(4) Fetching multiple columns in bulk
Requirements:
Fetch the full details of all vehicles in the transportation table whose fuel economy (the mileage column) is less than 20.
The Code is as follows:
DECLARE
  -- declare the collection type
  TYPE vehtab IS TABLE OF transportation%ROWTYPE;
  -- declare a collection of this type
  gas_guzzlers vehtab;
BEGIN
  SELECT * BULK COLLECT INTO gas_guzzlers
    FROM transportation
   WHERE mileage < 20;
  ...
(5) Using the RETURNING clause in bulk operations
With the RETURNING clause, we can easily determine the result of a just-completed DML operation without running any additional query.
For an example, see point ③ under the usage of BULK COLLECT above.
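As a further sketch (assuming the emp copy created in example ③), RETURNING can also be combined with FORALL, in which case BULK COLLECT INTO is required so that one value is gathered per affected row; the employee ids here are chosen purely for illustration.

DECLARE
  TYPE numlist IS TABLE OF emp.employee_id%TYPE;
  ids numlist := numlist(100, 101, 102);  -- ids chosen for illustration
  TYPE sallist IS TABLE OF emp.salary%TYPE;
  new_sals sallist;
BEGIN
  -- raise each listed employee's salary and collect the new values,
  -- with no follow-up query needed
  FORALL i IN ids.FIRST .. ids.LAST
    UPDATE emp SET salary = salary * 1.1
     WHERE employee_id = ids(i)
    RETURNING salary BULK COLLECT INTO new_sals;
  FOR i IN 1 .. new_sals.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE('New salary: ' || new_sals(i));
  END LOOP;
END;
/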
(II) Accelerating DML with FORALL
FORALL tells the PL/SQL engine to bind all the elements of one or more collections to the SQL statement before sending it to the SQL engine, so that many rows are processed with a single context switch.
(1) Syntax
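A minimal sketch of the basic shape, again using the emp copy from example ③ (the SAVE EXCEPTIONS clause is optional, and the ids are chosen for illustration):

-- General form:
--   FORALL index IN lower_bound .. upper_bound [SAVE EXCEPTIONS]
--     sql_statement;
DECLARE
  TYPE numlist IS TABLE OF emp.employee_id%TYPE;
  ids numlist := numlist(120, 121, 122);  -- ids chosen for illustration
BEGIN
  -- a single context switch deletes all three rows
  FORALL i IN ids.FIRST .. ids.LAST
    DELETE FROM emp WHERE employee_id = ids(i);
END;
/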
To be continued...