In day-to-day operations and maintenance, DDL on production tables must be handled with caution. O&M DBAs usually weigh such operations as a whole: the impact on production, the execution time, and the effect on the stored data.
Adding a column with a default value is a common headache for DBAs. The straightforward statement takes a long time, consumes a large amount of resources, and hits the production environment hard; the usual workarounds get the job done, but the steps are cumbersome. Quickly adding a column with a default value while minimizing the impact on the running system is the goal we would like to reach.
This article starts from the operation itself, discusses the problem of adding columns with default values, and finally introduces the "revolutionary" optimization in Oracle 11g.
1. Starting with Adding a Column in 10g
To have a baseline for comparison, we first run the test in Oracle 10g and build a relatively large data table.
SQL> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE    10.2.0.1.0      Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
SQL> create table t as select object_id from dba_objects;
Table created.
SQL> select count(*) from t;
  COUNT(*)
----------
3220352
Table T contains only a single column, but it holds about 3.2 million rows. We evaluate its size as follows:
SQL> set timing on;
SQL> select bytes/1024/1024, blocks from dba_segments where owner = 'SYS' and segment_name = 'T';
BYTES/1024/1024     BLOCKS
--------------- ----------
             39       4992
Elapsed: 00:00:00.03
SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade => true);
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.35
SQL> select blocks from dba_tables where owner = 'SYS' and table_name = 'T';
    BLOCKS
----------
      4883
Elapsed: 00:00:00.01
Oracle has allocated 4992 blocks to this segment, of which 4883 formatted blocks sit below the high water mark (HWM). The total size is about 40 MB.
We now add a column with a default value in two different ways and observe the changes. First, a nullable column with a default value.
SQL> alter table t add vc varchar2(100) default 'tttttttttttttttttttt';
Table altered.
Elapsed: 00:34:37.15
SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade => true);
PL/SQL procedure successfully completed.
Elapsed: 00:00:03.86
SQL> select bytes/1024/1024, blocks from dba_segments where owner = 'SYS' and segment_name = 'T';
BYTES/1024/1024     BLOCKS
--------------- ----------
            208      26624
Elapsed: 00:00:00.06
SQL> select blocks from dba_tables where owner = 'SYS' and table_name = 'T';
    BLOCKS
----------
     25864
Elapsed: 00:00:00.01
It is a very time-consuming operation: adding the column with a default value took nearly 35 minutes, and the table grew from under 40 MB to 208 MB.
This tells us that when a column with a default value is added this way, the value is physically inserted into the data blocks, which causes the space to expand. Within the existing structure, every row in every block must be touched before the new default column exists in it.
Besides this form of the statement, we can also add the column with the not null option, which achieves the same result.
SQL> alter table t add vc2 varchar2(100) default 'tttttttttttttttttttt' not null;
Table altered.
Elapsed: 00:15:58.85
SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade => true);
PL/SQL procedure successfully completed.
Elapsed: 00:00:36.87
SQL> select bytes/1024/1024, blocks from dba_segments where owner = 'SYS' and segment_name = 'T';
BYTES/1024/1024     BLOCKS
--------------- ----------
            256      32768
Elapsed: 00:00:00.14
SQL> select blocks from dba_tables where owner = 'SYS' and table_name = 'T';
    BLOCKS
----------
     32448
Elapsed: 00:00:00.04
This one still took about 16 minutes, and the space changed dramatically: new extents were allocated while the number of rows stayed the same, so the potential for row migration and row chaining becomes much worse.
To sum up the behaviour in 10g: to add a column with a default value, Oracle visits every row on every data block to expand it, and the process is accompanied by new space allocation and a great deal of row copying.
Such an operation is dreadful for a production environment. For the whole duration the table structure is locked, and related business operations are blocked or slowed down. O&M DBAs therefore either schedule the change in a maintenance window or fall back on workarounds.
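For reference, one common workaround is sketched below (an illustrative sketch only: the column name vc3 and the batch size of 10000 are hypothetical, and concurrent inserts would still need separate handling). The idea is to split the change into a fast dictionary-only add, a batched backfill, and a final constraint change.
-- Step 1: add the column without a default; this is a pure dictionary change, so it is fast.
alter table t add vc3 varchar2(100);
-- Step 2: backfill existing rows in small batches to limit undo/redo and lock duration.
declare
  l_rows pls_integer;
begin
  loop
    update t set vc3 = 'tttttttttttttttttttt'
     where vc3 is null and rownum <= 10000;
    l_rows := sql%rowcount;
    commit;
    exit when l_rows = 0;
  end loop;
end;
/
-- Step 3: only now attach the default for future inserts, then the NOT NULL constraint.
alter table t modify vc3 default 'tttttttttttttttttttt';
alter table t modify vc3 not null;
The drawback is obvious: the steps are cumbersome, the backfill still rewrites every row, and validating NOT NULL at the end scans the whole table, which is exactly the pain this article is about.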
In the Oracle 11g environment, things are different.
2. Adding a Default Value Column in 11g
We perform the same operations in 11g.
SQL> select * from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE    11.2.0.1.0      Production
We build a data table of similar size.
SQL> set timing on;
SQL> create table t as select object_id from dba_objects;
Table created
SQL> select count(*) from t;
  COUNT(*)
----------
3323167
SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade => true);
PL/SQL procedure successfully completed
SQL> select segment_name, bytes/1024/1024, extents, blocks from user_segments where segment_name = 'T';
SEGMENT_NA BYTES/1024/1024    EXTENTS     BLOCKS
---------- --------------- ---------- ----------
T                       40         55       5120
SQL> select num_rows, blocks from dba_tables where owner = 'SCOTT' and table_name = 'T';
  NUM_ROWS     BLOCKS
---------- ----------
   3323167       5041
In the 11g environment we have prepared about 3.3 million rows of data for adding a column with a default value.
SQL> alter table t add vc2 varchar2(100) default 'tttttttttttttttttttt';
alter table t add vc2 varchar2(100) default 'tttttttttttttttttttt'
ORA-01013: user requested cancel of current operation
When the default column is added without not null, the running time goes far beyond what we expected, so we cancel the statement manually. Next, we try adding it with not null.
-- The operation completes in well under one second
SQL> alter table t add vc varchar2(100) default 'tttttttttttttttttttt' not null;
Table altered
Executed in 0.047 seconds
SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade => true);
PL/SQL procedure successfully completed
SQL> select num_rows, blocks from dba_tables where owner = 'SCOTT' and table_name = 'T';
  NUM_ROWS     BLOCKS
---------- ----------
   3323167       5041
Executed in 0 seconds
SQL> select segment_name, bytes/1024/1024, extents, blocks from user_segments where segment_name = 'T';
SEGMENT_NA BYTES/1024/1024    EXTENTS     BLOCKS
---------- --------------- ---------- ----------
T                       40         55       5120
SQL> select * from t where rownum < 10;
 OBJECT_ID VC
---------- ------------------------------------------------------------------------------
20 TTTTTTTTTTTT
46 TTTTTTTTTTTT
28 TTTTTTTTTTTT
15 TTTTTTTTTTTT
(remaining rows omitted to save space ...)
9 rows selected
We find that with not null, Oracle completes the operation at an unbelievable speed. Note that the size of the data table has not changed at all! Yet when we query the table, the default value of the new column is already there.
This is curious. One thing is certain: during execution the default value was not actually written into the data blocks, otherwise the size of the data segment would have changed.
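As an additional cross-check (an illustrative query of ours, not part of the original test, to be run by a suitably privileged session), one can snapshot the session statistic 'redo size' immediately before and after the ALTER: a change that really rewrote 3.3 million rows would generate a large amount of redo, while a dictionary-only change generates almost none.
-- Run once before and once after the ALTER TABLE and compare the two values.
select s.value as redo_size
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name = 'redo size';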
3. Default Value Handling Optimization in 11g
So how does 11g handle it, and why is this feature available only when not null is specified? We start by tracing how a select retrieves the rows.
We use event 10046 to trace the whole select and see where the default value comes from.
SQL> select value from v$diag_info where name = 'Default Trace File';
VALUE
-----------------------------------------------------------------------
/u01/diag/rdbms/wilson/trace/wilson_ora_6177.trc
SQL> alter session set events '10046 trace name context forever, level 12';
Session altered.
SQL> select * from t where rownum < 10;
OBJECT_ID
----------
VC
-----------------------------------------------------------------------------
SQL> alter session set events '10046 trace name context off';
Session altered.
We process the generated trace file with tkprof to obtain a readable report.
D:\des> tkprof wilson_ora_6177.trc res.txt
TKPROF: Release 10.2.0.1.0 - Production on Friday August 24 22:07:10 2012
Copyright (c) 1982, 2005, Oracle. All rights reserved.
In the report we find a large number of recursive statements, that is, statements Oracle ran internally in order to execute our SQL. Among them, one "suspicious" object stands out.
********************************************************************************
select binarydefval, length(binarydefval)
from
 ecol$ where tabobj# = :1 and colnum = :2

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          2          2          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          2          2          0           1
ecol$ is a dictionary base table newly introduced in 11g under the SYS schema. Its content is as follows:
SQL> desc ecol$;
Name          Type    Nullable Default Comments
------------- ------- -------- ------- --------
TABOBJ#       NUMBER  Y
COLNUM        NUMBER  Y
BINARYDEFVAL  BLOB    Y

SQL> select * from ecol$;
   TABOBJ#     COLNUM BINARYDEFVAL
---------- ---------- ------------
     76046          2
Executed in 0.031 seconds
SQL> col owner for a10;
SQL> col object_name for a10;
SQL> select owner, object_name, object_id from dba_objects where object_id in (76046);
OWNER      OBJECT_NAM  OBJECT_ID
---------- ---------- ----------
SCOTT      T               76046
Executed in 0 seconds
From ecol$ we find an entry referring to the second column (COLNUM = 2) of table T (TABOBJ# = 76046). The referenced value BINARYDEFVAL is of BLOB type. Intuitively, we can guess that this is the default value of the vc column.
A side note here: in Oracle, column default values live in the data dictionary. In col$ the default expression is kept in a LONG column; in the 11g table ecol$ the value we are looking at is stored as a BLOB.
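To illustrate the point, the two storage locations can be compared side by side (a sketch of ours run as SYS against the dictionary base tables; 76046 is the object_id of T found above, and querying SYS base tables is for diagnosis only):
-- The textual default expression is kept as a LONG in col$ (one row per column of T;
-- only the row for the vc column has DEFAULT$ populated).
select obj#, name, default$
  from sys.col$
 where obj# = 76046;
-- The binary default value used for pre-existing rows is kept as a BLOB in ecol$.
select tabobj#, colnum, binarydefval
  from sys.ecol$
 where tabobj# = 76046;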
Another point worth noting is that ecol$ holds only a single row: the one for the default value we have just added to T. What does that tell us?
At this point we can guess at the effort Oracle has put in. Oracle clearly saw how painful it is to add a column with a default value on a live production system, yet under the existing architecture and storage structure, writing the default into every row, with the row migration that brings, cannot be avoided. So Oracle chose to "turn a blind eye".
If the default value is specified when the table is created, or if we do not demand that every existing row receive the default at once, Oracle keeps managing the data under the original storage strategy. But if we add a column and want all existing rows to carry the default in one step, Oracle simply does not insert or delete anything in the table and saves the default value in ecol$ instead.
Then, when data is retrieved, Oracle fetches the stored default through a recursive call. If the column in a row piece holds nothing (no value was ever written there), the default value is returned to the client for display; physically, the row still does not contain that value.
This also explains why the optimization applies only when the column is added as not null with a default: only then can Oracle be certain the column can never legitimately hold NULL, so every missing value can safely be mapped to the default.
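Conceptually (an analogy of ours, not the literal kernel implementation), the read path behaves as if every physically missing value were wrapped in an NVL against the value fetched from ecol$:
-- Rows written before the ALTER carry no physical value for vc; at fetch time the
-- stored default is substituted, roughly as if the statement had been written as:
select object_id,
       nvl(vc, 'tttttttttttttttttttt') as vc
  from t;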
4. Conclusion
With this 11g feature, temporarily adding a column with a default value on a live production system is no longer a frightening task. Out of caution, though, we would still rebuild the data table properly when conditions allow, and treat this feature as something to lean on in emergencies.
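Before such a rebuild it can help to know which tables are still relying on this metadata-only default. A sketch (run as a privileged user; ecol$ is a SYS-owned base table, so treat this as a diagnostic query, not something application code should depend on):
-- List tables whose added columns have a default stored only in ecol$.
select o.owner, o.object_name as table_name, e.colnum
  from sys.ecol$ e, dba_objects o
 where o.object_id = e.tabobj#
 order by o.owner, o.object_name, e.colnum;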