[Production library combat] Using stored procedures to archive production history table data

Source: Internet
Author: User

Archiving production historical data is routine, day-to-day database operation and maintenance work. In table design, flow-type tables (logs, user login history, software download records, user attribute change history, and so on) are usually built as range-partitioned tables, or interval-partitioned tables in 11g. Once data falls outside the business retention window it is essentially static: the application no longer touches it, though it may still be queried manually for special requirements. At that point the data is archived from the production library to the history library and compressed; once it falls outside even the special-requirement query window, it is exported, compressed, and moved off to tape.

Oracle offers many ways to archive table data: exp/imp, expdp/impdp, transportable tablespaces, and so on. All of these may come up in daily database maintenance, but they are not covered here. Today's topic is using a stored procedure to archive production history table data. First, a quick summary of this approach's advantages:


1. Simple, and not prone to error;

2. Against the source library (here, the production library) it issues only a SELECT query, so the performance impact is small;

3. Data moves from the source library to the target library without landing on disk, similar to a DBLINK+IMPDP migration, which saves the space an export would require (for tables of hundreds of GB) as well as archive time;

4. The archive progress and the number of archived records can be monitored;

5. If the transfer crosses server rooms, monitor the network bandwidth.

6. ......


Operation Flow:

1. Under the schema of the production library maintenance user, create a view over the partition of the table that needs to be archived:


CREATE VIEW log_table_p201209
AS
SELECT * FROM user01.log_table PARTITION (p201209);

Note: Why build a view? Because a query over a DBLINK cannot use the SELECT * FROM table PARTITION (partition_name) syntax.
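Once the view exists, the history library can read the partition through the DBLINK as if it were an ordinary table. A quick sanity check might look like the following (assuming the DBLINK name created in step 3 and the maintenance user zhanghui from the stored procedure in step 5):

```sql
-- Count the rows visible through the view over the DBLINK;
-- this should match the partition's row count in production.
SELECT COUNT(*) FROM zhanghui.log_table_p201209@xxdb.localdomain;
```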


2. Under the user in the history library that will hold the archived data (all history-library operations run as this user), create the data archive status table and a sequence:

-- Create table
CREATE TABLE data_archive_status
(
  id         NUMBER,
  threadno   NUMBER,
  table_name VARCHAR2(60),
  syncnum    NUMBER,
  state      NUMBER,
  starttime  DATE,
  synctime   DATE,
  remark     VARCHAR2(2000)
);
-- Add a comment to the state column
COMMENT ON COLUMN data_archive_status.state
IS '0: start, 1: cursor opened and parsed, 2: extracting data, 3: one table synchronized, 4: all tables complete, negative numbers: error code';

-- Create sequence
CREATE SEQUENCE seq_id
MINVALUE 1
MAXVALUE 9999999999999
START WITH 1
INCREMENT BY 1
CACHE 20;
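While an archive run is in progress, this status table can be polled to watch it advance. A sketch of such a monitoring query (column meanings follow the comment above; elapsed_min is a derived column added here for convenience):

```sql
-- Current status per archived table: state, rows copied so far,
-- and minutes elapsed since the run started.
SELECT table_name,
       state,
       syncnum,
       ROUND((synctime - starttime) * 24 * 60, 1) AS elapsed_min,
       remark
  FROM data_archive_status
 ORDER BY id DESC;
```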


3. In the history library, create a DBLINK that connects to the production library through a read-only account, for example:


-- Create database link
CREATE DATABASE LINK xxdb.localdomain
CONNECT TO readonly
IDENTIFIED BY ""
USING '(description=
  (load_balance=no)
  (address_list=
    (address=
      (protocol=tcp)
      (host=172.16.xx.xx)
      (port=1521)
    )
    (address=
      (protocol=tcp)
      (host=172.16.xx.xx)
      (port=1521)
    )
  )
  (connect_data=
    (failover_mode=
      (type=select)
      (method=basic)
      (retries=180)
      (delay=5)
    )
    (server=dedicated)
    (service_name=xx_service)
  )
)';
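Before relying on the link, it is worth a quick connectivity test, for example:

```sql
-- Should return one row ('X') if the link and the read-only account work.
SELECT * FROM dual@xxdb.localdomain;
```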


4. In the history library, create a table with the same structure as the production table; it is suggested that the table name carry the archive period identifier:


CREATE TABLE log_table_p201209 ( ... );


5. Create a stored procedure for data archiving:


CREATE PROCEDURE p_log_table_p201209 AS
  -- Index-by table matching the target table's row type
  TYPE u_type IS TABLE OF log_table_p201209%ROWTYPE INDEX BY PLS_INTEGER;
  v_list u_type;
  -- Array holding the names of the views to synchronize
  TYPE varchar_array IS TABLE OF VARCHAR2(100) INDEX BY PLS_INTEGER;
  v_remoteview_list varchar_array;
  -- Weak ref cursor for the dynamic query
  TYPE cur_ref IS REF CURSOR;
  cur_data cur_ref;
  -- Local variables: row counter (via SQL%ROWCOUNT), status-row rowid, timings
  v_counter         NUMBER := 0;
  v_rowid           ROWID;
  v_sqlcode         VARCHAR2(100) := NULL;
  v_querystr        VARCHAR2(1000) := NULL;
  v_parse_elapsed_s DATE := NULL;
  v_parse_elapsed_e DATE := NULL;
  v_fetch_elapsed_s DATE := NULL;
  v_fetch_elapsed_e DATE := NULL;
BEGIN
  -- Initialize the array with the view(s) created in step 1
  v_remoteview_list(1) := 'zhanghui.log_table_p201209';
  -- Synchronize each partition view in turn
  FOR k IN 1 .. v_remoteview_list.COUNT LOOP
    -- Add a synchronization task record
    INSERT INTO data_archive_status
    VALUES
      (seq_id.NEXTVAL,
       k,
       v_remoteview_list(k),
       0,
       0,
       SYSDATE,
       SYSDATE,
       NULL)
    RETURNING ROWID INTO v_rowid;
    COMMIT;
    v_querystr := 'select /*+ rowid(t) */ * from ' || v_remoteview_list(k) ||
                  '@xxdb.localdomain t';
    UPDATE data_archive_status t
       SET t.synctime = SYSDATE, t.state = 1
     WHERE ROWID = v_rowid;
    COMMIT;
    -- Record how long opening and parsing the cursor takes
    v_parse_elapsed_s := SYSDATE;
    OPEN cur_data FOR v_querystr;
    v_parse_elapsed_e := SYSDATE;
    UPDATE data_archive_status
       SET synctime = SYSDATE,
           state    = 2,
           remark   = remark || '[' || v_remoteview_list(k) ||
                      ':parse_elapsed=' ||
                      ROUND((v_parse_elapsed_e - v_parse_elapsed_s) * 24 * 3600) ||
                      ' sec,'
     WHERE ROWID = v_rowid;
    COMMIT;
    v_counter := 0;
    v_fetch_elapsed_s := SYSDATE;
    -- Fetch/insert loop over the open cursor
    LOOP
      -- Use bulk binding to process 10,000 records at a time
      FETCH cur_data BULK COLLECT
        INTO v_list LIMIT 10000;
      FORALL i IN 1 .. v_list.COUNT
        INSERT INTO log_table_p201209 VALUES v_list (i);
      -- Track the number of records synchronized so far
      v_counter := v_counter + SQL%ROWCOUNT;
      UPDATE data_archive_status t
         SET t.syncnum = v_counter, t.synctime = SYSDATE
       WHERE ROWID = v_rowid;
      COMMIT;
      EXIT WHEN cur_data%NOTFOUND;
    END LOOP;
    v_fetch_elapsed_e := SYSDATE;
    -- Update the progress table, recording elapsed fetch time and completion time in the remark
    UPDATE data_archive_status
       SET state    = 3,
           synctime = SYSDATE,
           remark   = remark || ' fetch_elapsed=' ||
                      ROUND((v_fetch_elapsed_e - v_fetch_elapsed_s) * 24 * 60,
                            4) || ' min,syncnum=' || v_counter ||
                      ',endtime=' || TO_CHAR(SYSDATE, 'yyyymmddhh24miss') || ']'
     WHERE ROWID = v_rowid;
    COMMIT;
    CLOSE cur_data;
    -- Mark this table complete in the progress table
    UPDATE data_archive_status t SET t.state = 4 WHERE ROWID = v_rowid;
    COMMIT;
  END LOOP;
EXCEPTION
  WHEN OTHERS THEN
    v_sqlcode := SQLCODE;
    UPDATE data_archive_status
       SET synctime = SYSDATE, state = v_sqlcode
     WHERE ROWID = v_rowid;
    COMMIT;
    RAISE;
END;
/


6. Create a stored procedure to compress objects. Because the MOVE operation needs close to double the storage space, evaluate the space requirements in advance before compressing:


CREATE PROCEDURE p_compress_object(vobject_name    VARCHAR2,              -- object name
                                   vpartition_name VARCHAR2 DEFAULT NULL, -- partition name
                                   vparallel       INT DEFAULT 0,         -- degree of parallelism
                                   vpctfree        INT DEFAULT 0,         -- PCTFREE; no further DML expected, so 0
                                   vtablespace     VARCHAR2 DEFAULT NULL, -- tablespace
                                   vowner          VARCHAR2 DEFAULT USER, -- object owner
                                   vtype           NUMBER                 -- type: 0 table, 1 index, 2 table partition, 3 index partition
                                   ) AUTHID CURRENT_USER IS
  vsql        VARCHAR2(4000);
  vsqlerrm    VARCHAR2(256);
  v_sqlstring VARCHAR2(4000);
BEGIN
  v_sqlstring := 'alter session set db_file_multiblock_read_count=128';
  EXECUTE IMMEDIATE v_sqlstring;
  IF vtype = 0 THEN
    vsql := 'alter table ' || vowner || '.' || vobject_name || ' move ' ||
            CASE WHEN vtablespace IS NULL THEN NULL ELSE ' tablespace ' || vtablespace END ||
            ' pctfree ' || vpctfree || ' compress nologging ' ||
            CASE WHEN vparallel IN (0, 1) THEN NULL ELSE ' parallel ' || vparallel END;
    EXECUTE IMMEDIATE vsql;
  ELSIF vtype = 1 THEN
    vsql := 'alter index ' || vowner || '.' || vobject_name ||
            ' rebuild ' ||
            CASE WHEN vtablespace IS NULL THEN NULL ELSE ' tablespace ' || vtablespace END ||
            ' pctfree ' || vpctfree || ' compress nologging ' ||
            CASE WHEN vparallel IN (0, 1) THEN NULL ELSE ' parallel ' || vparallel END;
    EXECUTE IMMEDIATE vsql;
  ELSIF vtype = 2 THEN
    vsql := 'alter table ' || vowner || '.' || vobject_name ||
            ' move partition ' || vpartition_name ||
            CASE WHEN vtablespace IS NULL THEN NULL ELSE ' tablespace ' || vtablespace END ||
            ' pctfree ' || vpctfree || ' compress nologging ' ||
            CASE WHEN vparallel IN (0, 1) THEN NULL ELSE ' parallel ' || vparallel END;
    EXECUTE IMMEDIATE vsql;
  ELSIF vtype = 3 THEN
    vsql := 'alter index ' || vowner || '.' || vobject_name ||
            ' rebuild partition ' || vpartition_name ||
            CASE WHEN vtablespace IS NULL THEN NULL ELSE ' tablespace ' || vtablespace END ||
            ' pctfree ' || vpctfree || ' compress nologging ' ||
            CASE WHEN vparallel IN (0, 1) THEN NULL ELSE ' parallel ' || vparallel END;
    EXECUTE IMMEDIATE vsql;
  END IF;
EXCEPTION
  WHEN OTHERS THEN
    vsqlerrm := SQLERRM;
    DBMS_OUTPUT.put_line(vsqlerrm || '|' || vsql);
END;
/


7. With the preparation above complete, confirm the tablespace situation in the history library, then call the data archive stored procedure p_log_table_p201209. When it finishes, compress the data by calling the stored procedure p_compress_object(...).
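Both calls can be issued from SQL*Plus as the history-library archive user. A sketch (the parameter values here are illustrative; vtype => 0 compresses a whole table, per the procedure's parameter comments):

```sql
-- Run the archive, then compress the archived table.
BEGIN
  p_log_table_p201209;
END;
/
BEGIN
  p_compress_object(vobject_name => 'LOG_TABLE_P201209',
                    vparallel    => 4,
                    vtype        => 0);
END;
/
```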


8. After confirming the data is correct, drop the corresponding view under the production library maintenance user and the partition of the business table, freeing the space the objects occupy (note: check whether the indexes on the partitioned table are local; otherwise dropping the partition will invalidate any global indexes, which would then need rebuilding or maintenance).
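A sketch of this cleanup, assuming the locality check comes back LOCAL (object names follow the examples in the earlier steps):

```sql
-- In the production library: first confirm the table's
-- partitioned indexes are LOCAL before dropping the partition.
SELECT index_name, locality
  FROM dba_part_indexes
 WHERE owner = 'USER01' AND table_name = 'LOG_TABLE';

DROP VIEW log_table_p201209;              -- the helper view from step 1
ALTER TABLE user01.log_table DROP PARTITION p201209;
```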


That's all. Done!


