Oracle's parallel execution technology breaks a large task into multiple smaller tasks that are processed simultaneously by several server processes. Used appropriately, parallelism makes full use of system resources and shortens processing time.
1. Types of parallel execution
Parallel Query
Parallel DML (PDML)
Parallel DDL
Parallel Recovery
2. When to use parallelism
Two conditions must both hold for parallelism to pay off:
1) The task is large, for example a full scan of a very large table.
This matches everyday experience: for a small job it is faster to do it yourself than to split it up and hand the pieces out.
2) The system has sufficient spare resources (CPU/IO).
In other words, parallelism is a technique for systems that have ample resources and relatively few active users; it spends resources to speed up task processing. A few typical scenarios:
1) An OLTP system has a large number of users and sessions; if every session ran parallel queries the system would be brought to its knees. There are exceptions, for example a billing system whose users are mostly gone at month-end or after working hours: a batch program run in such a window can use parallelism to speed things up.
2) Data warehouse systems can often use parallel query, PDML and other parallel operations. Note that some data warehouses are also accessed by a large number of users and thus have OLTP characteristics; in such systems parallelism should be used with caution.
3) Parallel DDL and PDML during maintenance work are very useful for administrators, in both OLTP and data warehouse systems; a sketch follows.
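As an illustration of maintenance-window parallel DDL, a minimal sketch (the table and index names big_table and big_table_idx, and the degree 8, are made up for the example):
ALTER TABLE big_table MOVE PARALLEL 8;          -- rebuild the table in parallel
ALTER INDEX big_table_idx REBUILD PARALLEL 8;   -- indexes become UNUSABLE after the move and must be rebuilt
ALTER TABLE big_table NOPARALLEL;               -- reset the attribute so normal queries stay serial
ALTER INDEX big_table_idx NOPARALLEL;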
3. Parallel query
Ways to enable parallel query:
1) Set the table's parallel attribute:
ALTER TABLE big_table PARALLEL 4;
ALTER TABLE big_table PARALLEL; (with no explicit degree) lets Oracle pick the degree according to system resources and is the recommended form. Oracle derives the default degree from the number of CPUs multiplied by the PARALLEL_THREADS_PER_CPU parameter (default 2); on a 4-CPU machine Oracle would choose a degree of 8.
2) Use a hint: SELECT /*+ PARALLEL(emp, 12) */ ... FROM emp;
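To sanity-check the default degree described above, and to turn parallelism off again after a batch window, a minimal sketch (big_table is the illustrative table from above):
SELECT name, value FROM v$parameter
 WHERE name IN ('cpu_count', 'parallel_threads_per_cpu');   -- inputs to the default degree
SELECT table_name, degree FROM user_tables
 WHERE table_name = 'BIG_TABLE';                            -- current parallel attribute
ALTER TABLE big_table NOPARALLEL;                           -- back to serial access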
4. PDML
Example:
ALTER TABLE emp PARALLEL 10;
ALTER SESSION ENABLE PARALLEL DML;
INSERT INTO emp
SELECT * FROM t_emp;
COMMIT;
Or, using hints instead of the table attribute:
ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ PARALLEL(emp, 12) */ INTO emp
SELECT /*+ PARALLEL(t_emp, 12) */ * FROM t_emp;
COMMIT;
Note: when the insert runs in parallel, the INSERT ... SELECT automatically uses direct-path load, so the APPEND hint (/*+ APPEND */) is no longer needed.
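One way to confirm that the parallel insert really goes through direct-path load is to look for the LOAD AS SELECT step in the execution plan; a minimal sketch, reusing the emp/t_emp example above:
ALTER SESSION ENABLE PARALLEL DML;
EXPLAIN PLAN FOR
INSERT /*+ PARALLEL(emp, 4) */ INTO emp
SELECT /*+ PARALLEL(t_emp, 4) */ * FROM t_emp;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- a direct-path parallel insert shows LOAD AS SELECT together with PX (parallel execution) operations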
PDML restrictions:
Tables with triggers are not supported; PDML against such a table still succeeds, but the parallelism is silently ignored.
Certain constraints are not supported, for example self-referential integrity constraints. PDML splits the work among several independent slave transactions, so such integrity cannot be guaranteed and deadlocks or other locking problems would easily arise.
Once one session has run PDML on a table, other sessions cannot run PDML against it until that session commits or rolls back (see the sketch after this list).
Advanced replication is not supported (because it relies on triggers).
Deferred constraints are not supported (deferred mode means the constraint is only verified at commit time).
Distributed transactions are not supported.
Clustered tables are not supported.
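A closely related and easy-to-reproduce effect is that even the session that ran the parallel DML cannot touch the table again before committing; a minimal sketch (emp and the predicate are illustrative):
ALTER SESSION ENABLE PARALLEL DML;
DELETE /*+ PARALLEL(emp, 4) */ FROM emp WHERE sal < 1000;
-- before the COMMIT, any further access to emp from this session fails with
-- ORA-12838: cannot read/modify an object after modifying it in parallel
SELECT COUNT(*) FROM emp;
COMMIT;
-- after the commit the table is usable again, and other sessions may start their own PDML on it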
5. Parallelism and wasted space
Parallel DDL and some PDML rely on direct-path load, which bypasses the buffer cache and writes straight to the data files.
Examples: CREATE TABLE ... AS SELECT, INSERT /*+ APPEND */.
This can waste space. Suppose 1010 MB of data is loaded into a tablespace with 100 MB extents. Direct-path load always allocates fresh 100 MB extents for its data (a conventional insert could reuse the free space in a partially filled existing extent, but direct-path load cannot). With 10 parallel slaves each loading 101 MB, every slave allocates 2 extents, so 20 extents (2000 MB) are created for 1010 MB of data and 990 MB is wasted. Besides the wasted space itself, a full table scan must also read through these nearly empty extents, which slows the scan down.
A tablespace can manage extents in two ways: UNIFORM SIZE, where every extent has the same size, and AUTOALLOCATE, where Oracle chooses extent sizes flexibly according to its internal rules.
UNIFORM extents cannot be trimmed, while with AUTOALLOCATE a parallel DDL operation can trim the extents it allocates, reducing wasted space.
Therefore, in tablespaces where parallel DDL is used frequently, either choose a smaller UNIFORM extent size or use AUTOALLOCATE to cut the waste, for example as sketched below.
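A minimal sketch of the two extent-management choices (data file paths and sizes are illustrative):
CREATE TABLESPACE ts_uniform_small
  DATAFILE '/u01/oradata/orcl/ts_uniform_small01.dbf' SIZE 1G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;      -- small uniform extents keep per-slave waste low
CREATE TABLESPACE ts_auto
  DATAFILE '/u01/oradata/orcl/ts_auto01.dbf' SIZE 1G
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;         -- lets parallel DDL trim its extents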
6. Do-it-yourself parallelism with stored procedures
A common task: scan an entire table, transform the data, and write it into a new table.
When a single process is too slow, we usually split the data ourselves and then run several processes at once.
From 11gR2 on, the built-in package DBMS_PARALLEL_EXECUTE greatly simplifies this.
(Before 11gR2 there was no such built-in package; you had to split the large table by ROWID or primary key yourself and submit the pieces through DBMS_JOB or DBMS_SCHEDULER, roughly as sketched below.)
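A minimal sketch of that manual pre-11gR2 approach, assuming a procedure process_slice(p_slice NUMBER) that handles the rows where MOD(account_no, 5) = p_slice (the procedure name and the 5-way split are made up for the example):
Declare
  l_job binary_integer;
Begin
  For i in 0 .. 4 loop
    Dbms_job.submit(
      job  => l_job,
      what => 'begin process_slice(' || i || '); end;');
  End loop;
  Commit;   -- the jobs only start running after the commit
End;
/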
Let's take a procedure we ran into a couple of days ago as an example of how to use this parallel technique (the example is simple and does not strictly need it; it is only for illustration).
The program deletes the rows of BMF whose orig_bill_ref_no is LIKE '18%'. A single SQL statement would do the job, but the data volume is large and the system's rollback (undo) segments are too small, so the developer planned to run it as several processes:
Declare
  Cursor C1 Is
    select orig_bill_ref_no from BMF
     where orig_bill_ref_no like '18%'
       and mod(account_no, 5) = 0;   -- split the data into 5 segments
Begin
  For R1 in C1 loop
    Delete from BMF where orig_bill_ref_no = R1.orig_bill_ref_no;
    Commit;
  End loop;
  Commit;
End;
/
What is wrong with this approach? It soon fails with "snapshot too old". The SELECT keeps a cursor open on BMF while the loop modifies BMF and commits; because of read consistency, the open cursor must see BMF as it was before the modifications, and that image is read from undo. Once the elapsed time exceeds UNDO_RETENTION the undo information expires and ORA-01555 "snapshot too old" is raised.
Using the parallel-execution package provided by Oracle 11g instead:
1) Create the serial procedure that each parallel worker will call:
Create or replace Procedure serial (p_lo_rid in rowid, p_hi_rid in rowid)
Is
Begin
  Delete from BMF
   where rowid between p_lo_rid and p_hi_rid
     and orig_bill_ref_no like '15%';
End;
/
2) Split the table into chunks by ROWID for the workers to process:
Begin
  Dbms_parallel_execute.create_task('PROCESS BIG TABLE');
  Dbms_parallel_execute.create_chunks_by_rowid(
    task_name   => 'PROCESS BIG TABLE',
    table_owner => 'LUW',
    table_name  => 'BMF',
    by_row      => false,    -- chunk_size counts blocks, not rows
    chunk_size  => 2000);
End;
/
SELECT *
  FROM (select chunk_id, status, start_rowid, end_rowid
          from dba_parallel_execute_chunks
         where task_name = 'PROCESS BIG TABLE'
         order by chunk_id)
 WHERE rownum <= 5
/
3) Run the task: the chunks from step 2 are handed out to the parallel workers:
Begin
  Dbms_parallel_execute.run_task(
    task_name      => 'PROCESS BIG TABLE',
    sql_stmt       => 'begin serial(:start_id, :end_id); end;',
    language_flag  => Dbms_sql.native,
    parallel_level => 4);
End;
/
4) Drop the task when it has finished:
Begin
  Dbms_parallel_execute.drop_task('PROCESS BIG TABLE');
End;
/
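Before dropping the task it is worth checking that every chunk actually finished; a minimal sketch using the package's status routines (constant and view names as I understand them):
Declare
  l_status number;
Begin
  l_status := Dbms_parallel_execute.task_status('PROCESS BIG TABLE');
  -- failed chunks can be retried without redoing the ones that succeeded
  If l_status = Dbms_parallel_execute.finished_with_error Then
    Dbms_parallel_execute.resume_task('PROCESS BIG TABLE');
  End If;
End;
/
select status, count(*)
  from dba_parallel_execute_chunks
 where task_name = 'PROCESS BIG TABLE'
 group by status;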
How does the parallel approach compare in speed with a plain DELETE?
Using the parallel task:
PL/SQL procedure successfully completed.
Elapsed: 00:00:03.07
Direct delete:
Delete from BMF where orig_bill_ref_no like '15%';
403525 rows deleted.
Elapsed: 00:00:08.12
So parallelism does improve the speed here, not to mention that it needs far less rollback (undo) segment space at any one time.