The effect of the high water mark on the performance of highly concurrent inserts
I. Summary of the rules for finding available blocks when inserting data
High water mark (HWM) and low HWM: data blocks between the low HWM and the HWM may be formatted or unformatted; blocks below the low HWM are all formatted and ready for use.
1. When a row is inserted, only data blocks below the high water mark are used.
The HWM sits on a boundary of the data blocks managed by an L1 bitmap block, or on an extent boundary.
2. When the first row is inserted, how many blocks get formatted?
There is no fixed number: dumps of the L1 block show 5, 32, 64, and other counts of formatted blocks.
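One hedged way to observe this on your own system (the file and block numbers below are placeholders, not taken from this experiment): locate the first extent of the segment via dba_extents, then dump its first bitmap block and count the formatted entries in the trace file.

```sql
-- Placeholder file#/block#: look them up in dba_extents for your own segment.
-- In ASSM, the first blocks of the first extent are L1 bitmap blocks.
select file_id, block_id
  from dba_extents
 where segment_name = 'TEST11' and owner = 'BYS' and extent_id = 0;

-- Dump that L1 block; the trace file lists each managed block's state,
-- e.g. "unformatted", "FULL", "75-100% free".
alter system dump datafile 4 block 168;
```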
3. When a row is inserted, how does the walk L3 --> L2 --> L1 --> data block decide which block receives the row?
In an ASSM tablespace, rows inserted from different sessions land pseudo-randomly in different blocks below the high water mark.
The details are as follows:
From L3, the choice of L2 is not random: the line "L2 Hint for inserts: 0x01c00081" in the segment header determines which L2 to use.
Within the L2, a value is derived from a hash of the process ID (PID), and that value selects an L1. This is still bounded by the high water mark: if only one extent lies below the HWM, only the corresponding L1 can actually be chosen.
Within the L1, another PID-based hash value selects the data block.
--Note when experimenting: if sqlplus exits and reconnects quickly in the same window, it may get the same SID/process again, so the inserted rows end up in the same data block.
4. When multiple sessions each insert a row, what is the insertion order?
Assume every inserted row fits within a single data block:
Inserts from the same session are ordered and go into the same data block.
Inserts from different sessions are unordered, and each session inserts into a different data block.
In other words, inserts across sessions are unordered while inserts within one session are sequential; when reading, a full scan returns rows in ascending ROWID order by default.
Experiment: run two inserts in one session and one insert in a freshly opened session, then use dbms_rowid to see which data block each row is in. Rows from the same session share a block; rows from different sessions do not. The experiment is straightforward, so it is not reproduced in full here.
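A minimal query for that check, assuming the test11 table from the experiment further below (dbms_rowid is a standard Oracle-supplied package):

```sql
-- Which datafile/block does each row of test11 live in?
select aa,
       dbms_rowid.rowid_relative_fno(rowid) as file#,
       dbms_rowid.rowid_block_number(rowid) as block#
  from test11;
```

Rows inserted by the same session should show the same block#; rows inserted by different sessions should show different ones.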
II. Performance problems caused by the high water mark and an unsuitable pctfree value: hot blocks leading to the buffer busy waits wait event
Why can the rule that inserts use only blocks below the high water mark cause buffer busy waits?
First, recall where the HWM sits: on a boundary of the data blocks managed by an L1 block, or on an extent boundary.
An L1 block, or an extent, manages a limited number of data blocks. For example, with an 8KB block size and system-managed (AUTOALLOCATE) extent sizing, each of the first 16 extents holds only 8 data blocks; up to 64MB of segment size the extent size is 1MB, i.e. 128 blocks; beyond 64MB the extent size is 8MB, i.e. 1024 blocks.
In an 8MB extent there are 1024 blocks; after subtracting the blocks used for bitmap metadata, roughly 1000 blocks remain usable. Those can support about 1000 concurrent insert operations; with more concurrency than that, multiple sessions inevitably operate on the same data block at the same time, producing a hot block and the buffer busy waits wait event.
A concurrency of 1000 inserts is already very high, seen mostly in logging-style applications.
If concurrency exceeds that and buffer busy waits appear often enough to hurt performance, one imperfect workaround is to insert a large amount of data into the table before the business peak arrives, pushing the HWM up, and then delete that data. Afterwards there are more blocks below the HWM available for inserts, which supports higher concurrency.
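A rough sketch of that workaround, assuming the test11 table; the filler row count and values are arbitrary. The trick relies on the fact that a plain DELETE does not lower the HWM:

```sql
-- Push the HWM up before the peak by bulk-inserting filler rows ...
insert into test11
  select level, 'filler' from dual connect by level <= 100000;
commit;

-- ... then delete them.  DELETE leaves the HWM where it is,
-- so all the newly formatted blocks stay available for inserts.
delete from test11 where bb = 'filler';
commit;
```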
Another possible cause of the buffer busy waits wait event is an unsuitable pctfree value.
In ASSM, the L1 block tracks each data block's free space with a handful of states: 0-25%, 25-50%, 50-75%, 75-100% free, and FULL. If pctfree is set close to one of these boundaries, say 20% or 24%, near the 25% line, then inserting, updating, or deleting just a few rows can flip a block's state, and every such state change must also be recorded in the L1 block.
Because a single L1 block manages many data blocks (1024 of them for an 8MB extent with 8KB blocks), if many of the blocks it manages update their space state at the same time, the L1 block itself becomes contended: another hot block, again surfacing as buffer busy waits.
This can be tested too, though the experiment is fiddly: use fairly long rows and a pctfree value close to a state boundary, delete and re-insert a row or a few rows so that a block's state flips, then dump the L1 block to verify the recorded state change.
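A hedged setup for such a test; the table name, row padding, and pctfree value are illustrative only:

```sql
-- pctfree 24 sits just under the 25% state boundary, so small DML
-- can flip a block between the 0-25% free and 25-50% free states.
create table t_pctfree_test (id int, pad varchar2(2000)) pctfree 24;

insert into t_pctfree_test
  select level, rpad('x', 2000, 'x') from dual connect by level <= 10;
commit;

-- Delete and re-insert a row, then dump the L1 block (file#/block# are
-- placeholders; find them via dba_extents) to watch the state change.
delete from t_pctfree_test where id = 1;
commit;
alter system dump datafile 4 block 168;
```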
ASSM drawback:
The physical order of the data no longer matches the index order, which increases the index clustering factor.
III. Experiment: verify that, by default, inserted rows only go into data blocks below the high water mark
Idea: create a table and manually allocate several extents to it; then, from multiple sessions, insert one row per session and check which data block each row lands in.
############################
bys@bys3>create table test11(aa int, bb varchar2(10));
Table created.
bys@bys3>insert into test11 values(1,'a');
1 row created.
bys@bys3>commit;
Commit complete.
bys@bys3>alter table test11 allocate extent (size 1m);
Table altered.
bys@bys3>alter system checkpoint;
System altered.
bys@bys3>select header_file,header_block from dba_segments where segment_name='TEST11' and owner='BYS';
HEADER_FILE HEADER_BLOCK
----------- ------------
          4          170
#####################
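A plausible next step (the file and block numbers come from the dba_segments query above) is to dump the segment header and inspect the trace file for the "L2 Hint for inserts" line and the L1/L2 bitmap layout:

```sql
-- Dump the segment header block (file 4, block 170 per the query above);
-- the trace shows the extent map, the L1/L2 bitmap blocks, and the
-- "L2 Hint for inserts" used when choosing an L2.
alter system dump datafile 4 block 170;

-- Locate the trace file produced by the dump:
select value from v$diag_info where name = 'Default Trace File';
```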