2. Realization Method
Automatic data migration from within an application requires that the source and target databases exist and that a migration policy (a data pipeline) has been defined. The application then uses this data pipeline to migrate the data automatically.
2.1 Implementation Steps
In general, there are five basic steps to using a data pipeline in an application:
(1) Create objects: create a data pipeline object, a supporting user object for the pipeline object, and a window object.
(2) Initialize: create instances of two Transaction objects and connect them to the source and destination databases, create an instance of the supporting user object, and assign the data pipeline object to it.
(3) Start the pipeline: start the data pipeline through the instance of the supporting user object.
(4) Handle errors: repair or discard the rows that failed during the pipeline operation.
(5) Finish: disconnect from the databases, release the instances that were used, and close the window.
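The five steps above can be sketched as one PowerScript outline. This is only a summarizing sketch; the object names (Uo_pipeline, w_copydata, dw_pipe_errors, pile_create) follow the objects defined in the sections below, and steps 2-5 are shown in full there.

```powerscript
// Step 1 is done at design time: define the pipeline object, the
// supporting user object, and the window in the PowerBuilder painters.

// Step 2: initialize transaction objects and the pipeline user object
Transaction itrans_source, itrans_dest
Uo_pipeline iuo_pipeline
itrans_source = CREATE Transaction
itrans_dest = CREATE Transaction
// ... set DBMS/DBParm and CONNECT for both transactions (see 2.3) ...
iuo_pipeline = CREATE Uo_pipeline
iuo_pipeline.DataObject = 'pile_create'

// Step 3: start the pipeline; error rows go to dw_pipe_errors
Integer li_rc
li_rc = iuo_pipeline.Start(itrans_source, itrans_dest, dw_pipe_errors)

// Step 4: repair (or discard) rows that failed
IF iuo_pipeline.RowsInError > 0 THEN
	iuo_pipeline.Repair(itrans_dest)
END IF

// Step 5: clean up
DISCONNECT USING itrans_source;
DISCONNECT USING itrans_dest;
DESTROY iuo_pipeline
```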
2.2 Creating Objects
2.2.1 Creating a Data Pipeline Object
Data pipelines migrate data either interactively or from an application; in both cases the data pipeline must first be defined, and this is done in the PowerBuilder development environment.
1) Configure two ODBC data sources, here called S and D, where S connects to the source database and D connects to the target database.
2) Define the pipeline object with "New→Database→Pipeline". During the definition, if the data source is a table, select "Quick Select" or "SQL Select" as the data source type; if it is a stored procedure, select "Stored Procedure". For "Source Connection" select S and for "Destination Connection" select D. Then select the table in the source connection, choose the required columns, set filter criteria for the rows, set the name of the target table and the type of the pipeline operation, and save the pipeline object.
A pipeline operation can be one of several types: create, replace, refresh, add, or modify; select one as needed. Normally, when backing up data you should define two related pipeline objects that differ only in operation type, one using "create" and the other using "add or modify". The create pipeline is used when the target database does not yet contain the corresponding table (the first backup); the add-or-modify pipeline is used when the table already exists (subsequent backups). Here the two pipeline objects are named pile_create and pile_modify.
2.2.2 Creating a Supporting User Object for the Pipeline
The pipeline objects pile_create and pile_modify can migrate data interactively, but they cannot yet be used from an application. To use the properties and functions of a pipeline object in an application, you must create a supporting user object for it.
To make the program more general, the user object can be built on a generic pipeline object, that is, without specifying a data object.
Create the pipeline user object by selecting "New→Object→Standard Class" and then "pipeline", and name it Uo_pipeline. The user object Uo_pipeline has 6 properties, 5 events, and 9 functions; those used most frequently in applications are listed in Tables 1, 2, and 3.
Table 1  Properties of the pipeline object

| Property | Data type | Meaning |
|---|---|---|
| RowsInError | Long | Number of rows in error during the pipeline operation |
| RowsRead | Long | Number of rows the pipeline has read |
| RowsWritten | Long | Number of rows the pipeline has written |
| DataObject | String | Name of the pipeline object (for example, pile_create) |
Table 2  Events of the pipeline object

| Event | When triggered |
|---|---|
| PipeEnd | When the Start() or Repair() function finishes executing |
| PipeMeter | After each block of rows is read or written; the block size is determined by the Commit factor |
| PipeStart | When the Start() or Repair() function begins executing |
Table 3  Functions of the pipeline object

| Function | Return type | Purpose |
|---|---|---|
| Cancel | Integer | Stops the pipeline operation |
| Repair | Integer | Updates the target database with the corrected rows from the pipeline user object's error DataWindow |
| Start | Integer | Starts the data pipeline object |
To display the progress of the pipeline operation dynamically, the three property values RowsRead, RowsWritten, and RowsInError must be passed from the PipeMeter event of the user object Uo_pipeline to the window object (named w_copydata; it is described below). This can be done through a custom window-level function getpipemsg() of w_copydata (or, alternatively, through global variables declared in the Declare section of w_copydata).
Write the following script in the PipeMeter event:
w_copydata.getpipemsg(RowsRead, RowsWritten, RowsInError)
2.2.3 Creating a Window Object
The window object displays dynamic information about the pipeline operation, monitors it, and lets the user interact with the pipeline object when errors occur. Its name is w_copydata.
The controls contained in w_copydata and their roles are shown in Table 4.
Table 4  Controls contained in the window object w_copydata

| Control type | Control name | Role |
|---|---|---|
| Static text | st_t_read | Label; its text is "Rows read" |
| Static text | st_t_written | Label; its text is "Rows written" |
| Static text | st_t_error | Label; its text is "Error rows" |
| Static text | st_read | Displays the RowsRead value of Uo_pipeline |
| Static text | st_written | Displays the RowsWritten value of Uo_pipeline |
| Static text | st_error | Displays the RowsInError value of Uo_pipeline |
| DataWindow | dw_pipe_errors | Automatically displays the rows in error during the pipeline operation |
| Command button | cb_write | Starts the pipeline operation |
| Command button | cb_stop | Terminates the pipeline operation |
| Command button | cb_applyfixes | Applies the repaired rows in dw_pipe_errors to the target table |
| Command button | cb_clear | Clears all error rows from dw_pipe_errors |
| Command button | cb_return | Closes the window w_copydata and returns |
Define the window-level function getpipemsg() of w_copydata with access level Public and no return value. It takes three parameters of type Long, passed by value: readrows, writerows, and errorrows. The script is as follows:
st_read.text = String(readrows)
st_written.text = String(writerows)
st_error.text = String(errorrows)
2.3 Initialization Operations
Initialization creates instances of two Transaction objects, itrans_source and itrans_dest, connects them to the source and destination databases, and creates an instance iuo_pipeline of the pipeline user object Uo_pipeline.
Because itrans_source and itrans_dest are also used when the data pipeline is started (in the Clicked event of the command button cb_write), they should be declared as instance variables of the window. Select "Instance Variables" in the window's Declare section and add the following script:
Transaction itrans_source
Transaction itrans_dest
Uo_pipeline iuo_pipeline
The initialization is done in the Open event of the window object w_copydata. Enter the following script in the Open event:
itrans_source = CREATE Transaction
itrans_dest = CREATE Transaction
// Profile S; for greater generality, these parameters can also be read from an external profile file
itrans_source.DBMS = "ODBC"
itrans_source.AutoCommit = False
itrans_source.DBParm = "ConnectString='DSN=S;UID=dba;PWD=ygg'"
CONNECT USING itrans_source;
// Profile D; for greater generality, these parameters can also be read from an external profile file
itrans_dest.DBMS = "ODBC"
itrans_dest.AutoCommit = False
itrans_dest.DBParm = "ConnectString='DSN=D;UID=dba;PWD=ygg'"
CONNECT USING itrans_dest;
// Create an instance of the pipeline user object Uo_pipeline
iuo_pipeline = CREATE Uo_pipeline
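As the comments above note, the connection parameters can be read from an external profile file instead of being hard-coded. A minimal sketch using the standard ProfileString() function; the file name db.ini and its section and key names are assumptions for illustration:

```powerscript
// Read the source connection parameters from db.ini (hypothetical layout:
// a [source] section with dsn, uid, and pwd keys)
String ls_dsn, ls_uid, ls_pwd
ls_dsn = ProfileString("db.ini", "source", "dsn", "S")
ls_uid = ProfileString("db.ini", "source", "uid", "dba")
ls_pwd = ProfileString("db.ini", "source", "pwd", "")
itrans_source.DBParm = "ConnectString='DSN=" + ls_dsn + &
	";UID=" + ls_uid + ";PWD=" + ls_pwd + "'"
```

The fourth argument of ProfileString() is the default returned when the key is missing, so the script still connects with the built-in values if the file is absent.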
2.4 Starting the Data Pipeline
The data pipeline is started in the Clicked event of the command button cb_write. The default action is to create the data table in the target database (pile_create); if this fails because the table already exists, the pipeline that modifies the table (pile_modify) is run instead. Rows in error appear automatically in the DataWindow dw_pipe_errors.
Add the following script to the Clicked event of the command button cb_write:
Integer li_start_result
iuo_pipeline.DataObject = 'pile_create'
li_start_result = iuo_pipeline.Start(itrans_source, itrans_dest, dw_pipe_errors)
// A return value of -3 means the target table already exists; run the modifying pipeline instead
IF li_start_result = -3 THEN
	iuo_pipeline.DataObject = 'pile_modify'
	li_start_result = iuo_pipeline.Start(itrans_source, itrans_dest, dw_pipe_errors)
END IF
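Start() can also fail for other reasons, and reporting those failures keeps them from passing silently. The sketch below is one way to do this; the specific meanings of the negative return codes are taken from the PowerBuilder Pipeline documentation and should be verified against the version in use:

```powerscript
// Report the outcome of the pipeline start (codes per PB documentation)
CHOOSE CASE li_start_result
	CASE 1
		// Success; nothing to report
	CASE -1
		MessageBox("Pipeline", "Pipe open failed.")
	CASE -5
		MessageBox("Pipeline", "Missing connection.")
	CASE ELSE
		MessageBox("Pipeline", "Pipeline error, return code " + String(li_start_result))
END CHOOSE
```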
2.5 Handling Row Errors
When defining the pipeline objects pile_create and pile_modify, you can set the maximum number of error rows, from 1 up to no limit; the pipeline stops automatically when the number of rows in error reaches that value. The user can also stop the pipeline manually, for example after noticing a mistake or because the operation is taking longer than expected.
Once the pipeline has stopped, any rows in error shown in the DataWindow dw_pipe_errors can be repaired.
The data pipeline is terminated by calling the Cancel() function of the pipeline instance in the Clicked event of the command button cb_stop. Cancel() returns 1 on success, so the script is as follows:
IF iuo_pipeline.Cancel() <> 1 THEN
	MessageBox("Prompt", "Failed to terminate the data pipeline operation!")
END IF
Error rows are discarded in the Clicked event of the command button cb_clear; Reset() removes all rows from the error DataWindow:
dw_pipe_errors.Reset()
After the rows have been corrected, they are resubmitted to the target database in the Clicked event of the command button cb_applyfixes. The Repair() function takes the destination transaction object as its argument:
iuo_pipeline.Repair(itrans_dest)
2.6 End Operation
After the pipeline operation is complete, disconnect from the databases and release the instances that were used. The end operation is performed in the Clicked event of the command button cb_return, as follows:
DESTROY iuo_pipeline
DISCONNECT USING itrans_source;
DESTROY itrans_source
DISCONNECT USING itrans_dest;
DESTROY itrans_dest
Close(Parent)
3. Conclusion
This paper has presented a practical method for automatic migration and backup of data from an application using a data pipeline. On a correctly configured system, the method migrates data correctly. The whole implementation is encapsulated in a window and a user object, which improves the portability and reusability of the code. Further work is still needed on the selectivity and generality of the data being migrated.