In Oracle's ERP methodology family, the Application Implementation Method (AIM) belongs to the implementation-management category. From the AIM process we can see that the project timeline is divided into six phases:
◆ Definition
◆ Operations Analysis
◆ Solution Design
◆ Build
◆ Transition
◆ Production
Note: The diagram referred to is taken from the official AIM 3.1.0 documentation.
In implementing an information system, planning comes first, and in both planning and implementation, data must come first: comprehensive, accurate, and timely data is the prerequisite for normal system operation, and high-quality data is a powerful driver of a successful go-live. The data conversion discussed in this article spans four phases of the timeline: Definition, Solution Design, Build, and Transition. In classic AIM, the main data conversion task in the Definition phase is to define the conversion requirements and develop a conversion approach based on them; in the Solution Design phase it is to specify the conversion standards and prepare the environment; in the Build phase it is to design and test the import programs and validation scripts and prepare for the actual import; and in the Transition phase it is to install the import programs and tools in the production environment, import the collected data, and finally verify it.
However, the methodology concentrates on the overall framework from a theoretical standpoint: every process in every phase is an "optional" step, and how to carry it out, and carry it out well, is not spelled out. Data collection is demanding, time-consuming, and difficult, so it should begin as early as the Definition phase and proceed in order of priority. I summarize my experience here for colleagues' reference; given my limited level, mistakes and omissions are inevitable, and discussion and corrections are welcome.
I. Data
First, let's look at the data itself. Generally, the data an enterprise deals with when implementing ERP falls into three parts. The first part is called basic data, the data required during system setup, for example the chart of accounts, tax rates, inventory organizations, subinventories, and locators. The second part is called static data, the data used in day-to-day operations after the system goes live, for example customers/suppliers, employees, bank accounts, item master data, and routings. Basic data and static data together can also simply be called static data. The remaining part is dynamic data, the point-in-time data required for system initialization, such as account balances, supplier balances, customer balances, opening inventory values, open sales/purchase orders, and open invoices. When drawing up the ERP go-live plan, we need to consider all three kinds of data and collect them in order of priority and in a defined sequence.
Basic data usually has a fixed format, carries high quality requirements, and is needed early, so its collection may have to be prepared in advance, during the Definition phase. Once the basic data is complete, we can check the completeness of the system setup with the diagnostic tools the system provides.
Static data is usually relatively simple. Since static data is rarely changed once it has been entered into the system, it is enough to ensure that it is complete and accurate. In fact, static data usually already exists in the enterprise's current business, which makes it easy to collect on the one hand and universally understood within the enterprise on the other. If the original item codes are to be improved or recoded, uniqueness, uniformity, practicality, and ease of use must all be taken into account.
Dynamic data can be divided by time into initial data and daily data. The most familiar initial data are the opening inventory values and the opening balances of the financial accounts; enterprises that are in a position to do so should take stock and verify these opening figures. Daily data includes open sales orders, open purchase orders, and open invoices. To prepare this data, the enterprise should define how many days of open documents will be collected before go-live. For these documents only the open quantities need to be counted, that is, the ordered quantity minus the delivered quantity, or the invoiced quantity minus the quantity already received into the warehouse, summarized by suitable criteria. For example, a supplier may have several open purchase orders; these can be summarized by supplier into an open purchase order table and imported into the system (a sketch of this kind of summarization follows below). Documents raised after that point are treated as normal daily transactions and are entered into the system as needed during the go-live. Open documents should be settled as far as possible before go-live, to ease the manual and system cutover and reduce the workload of later reconciliation.
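As an illustration of that summarization, here is a minimal SQL sketch. It assumes the legacy purchase order lines have already been extracted into a staging table called legacy_po_lines with the columns shown; the table and column names are hypothetical and only serve to show the ordered-minus-received arithmetic.

-- Summarize open purchase quantities and amounts by supplier from a
-- hypothetical legacy staging table (all names are illustrative only).
SELECT supplier_code,
       SUM(ordered_qty - NVL(received_qty, 0))                AS open_qty,
       SUM((ordered_qty - NVL(received_qty, 0)) * unit_price) AS open_amount
  FROM legacy_po_lines
 WHERE ordered_qty > NVL(received_qty, 0)
 GROUP BY supplier_code
 ORDER BY supplier_code;

The result, one row per supplier, is what would be loaded into the open purchase order collection table.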
II. Data Collection Process
Data collection should follow a defined sequence and defined collection methods. Where conditions permit, the enterprise should carry out several rounds of data collection and analysis, to avoid having to modify data after it has been imported into the production system.
1) Understand and confirm the enterprise's industry characteristics and data volumes
Different industries have different data requirements. For example, the pharmaceutical industry's supplier assessment requirements differ from those of a general enterprise. The volume of a given kind of data likewise determines how it should be brought in: a company with long distribution channels may manage customer data only down to its agents, while a short-channel enterprise may manage wholesalers and retailers. A small volume of data can be entered manually, but a large volume has to be imported with a program.
2) Discuss and confirm the data collection items, sources, and detailed plan
The collection items are determined by the system's data requirements and by the particular characteristics of the enterprise's industry: which data needs to be collected, and where it comes from, whether migrated from the old information system or taken from archived records. The detailed plan covers the estimated collection workload, the lead time, and the owner of each item.
3) Work out the data collection plan and collection templates
The collection plan sets out the enterprise's data requirements and the strategy for collecting against them, and is the guiding document for the whole collection process. The data collection templates are designed according to the collection plan, and detailed filling instructions should be provided for each template.
4) The customer collects the data
Data collection usually consumes a great deal of time and resources from start to finish. To ensure accuracy, in addition to answering questions as they arise, the customer's collection work is usually sampled and reviewed in stages, so that from the very beginning we can monitor whether the data is moving toward our goal.
5) Organize the collected data into the format required by the system
The completed data collection templates need to be converted and adjusted into the formats the system requires, so that the data can then be imported or entered through the various methods available.
6) Confirm the data
All parties involved confirm the data to be imported.
III. Data Import Methods
Following the data collection plan, the collection responsibilities and schedule are fixed, and the data is finally collected, organized (and verified), and submitted. Once all parties have confirmed the data, the import can begin. The Oracle system offers a variety of options at both the database level and the application level, from which you can choose an import method. The following uses fixed-asset data as an example.
There are two ways to bring in fixed-asset data: importing in batches through the Mass Additions interface, or entering the data manually in the system.
1) Manual data entry
If you choose to enter the data manually, you can also use tools such as DataLoad or LoadRunner to help you enter data quickly through the application's own forms.
The advantage of manual entry is that you enter the data directly, can catch errors as you type, and can easily adjust defaulted values; for example, the depreciation method and remaining life default automatically from the asset category (when legacy assets are brought in through the interface table, these values have to be adjusted in the interface table instead). Of course, manual entry is only recommended for items with a relatively small data volume, such as tax rates. If the data volume is large, importing through the Mass Additions interface is recommended.
2) Importing through the Mass Additions interface
An important feature of Oracle is that each module provides one or more interface tables to ease data migration between modules and between the system and external systems. For example, an important interface table in Assets is Mass Additions (fa_mass_additions). Oracle also provides several ways to load data into an interface table: SQL*Loader, a tool at the database level, as well as ADI and APIs:
A) ADI
Applications Desktop Integrator (ADI) is a desktop client tool that can be used for routine accounting work and report output, but we can also use it to import certain financial data, such as journals, asset details, and accounts. Typically we create an Excel-based template, collect the data in the template's format, and then connect directly to the system to complete, in the case of assets, the addition. ADI is easy to use, secure, and has good data validation; its import function is generally used only in the General Ledger and Assets modules of the financial suite.
B) SQL*Loader
SQL*Loader is a tool for inserting external data files directly into database tables. It is very fast and needs only a small amount of set-up to import data. It involves three parts: the data file, the control file, and the command line.
Command example: sqlldr userid=internal/oracle control=test.ctl
sqlldr is the command-line executable; userid takes the database user name and password, and control names your control file. See the relevant documentation for the full list of parameters; a few commonly used optional ones are shown below.
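As an illustration only, the following command line adds some of the optional parameters that are often useful during conversions; the file names here are arbitrary examples.

sqlldr userid=internal/oracle control=test.ctl log=test.log bad=test.bad discard=test.dsc errors=50

The log file records what was loaded, the bad file collects records rejected by the database, the discard file holds records filtered out by the control file, and errors sets how many rejected records are tolerated before the load aborts.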
Data File example:
1|04490896|347.76|
2|10256837|349.40|
3|09956875|532.30|
4|10256871|581.30|
The data file only needs to separate the fields with the agreed delimiter (here the | character).
Control file (test.ctl) example:
Load data
Infile '/Prod/prodappl/FA/11.5.0/bin/data file name.txt'
Into Table fa.fa_mass_additions
Fields terminated by "|"
Trailing nullcols
(
  Mass_addition_id recnum,
  Asset_number nullif rec_id_no = blanks,
  Tag_number nullif column1 = blanks,
  Description nullif column2 = blanks,
  Asset_category_id nullif column3 = blanks,
  ......
  Asset_id nullif columnx = blanks
)
The control file gives the data file's path and name, names the table being loaded and the field delimiter, and defines how each column of the data file maps to each column of the system table. SQL*Loader is usually used to load data into temporary or interface tables; other means are then needed to verify that the data is accurate.
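For example, once the load has run, a few simple queries against the interface table can confirm that the expected number of rows arrived and that key fields are populated. The sketch below checks only the columns mentioned above; which fields must be validated, and against what rules, depends on your own conversion standards.

-- Rough sanity checks on the rows just loaded into the interface table:
-- total row count and rows whose key fields are missing.
SELECT COUNT(*)                                              AS loaded_rows,
       SUM(CASE WHEN asset_number IS NULL THEN 1 ELSE 0 END) AS missing_asset_number,
       SUM(CASE WHEN description  IS NULL THEN 1 ELSE 0 END) AS missing_description
  FROM fa.fa_mass_additions;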
C) API
Oracle also provides a number of application programming interfaces (APIs). An API lets you import data directly through PL/SQL and makes the interface tables transparent to the user. In newer versions of the Assets module, Oracle provides a standard additions API that lets users call fa_addition_pub.do_addition directly to create asset records. If you are interested, search Metalink for Note 206473.1 and study it.
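Purely as a sketch of what such a call looks like, the skeleton below follows the pattern shown in Note 206473.1. The parameter list, the record types, and the fields that must be populated vary between releases, so treat every name and value here as an assumption to be checked against the note and the FA_API_TYPES package in your own version, not as working code.

DECLARE
  -- Record and table types from the standard FA_API_TYPES package.
  l_trans_rec            fa_api_types.trans_rec_type;
  l_dist_trans_rec       fa_api_types.trans_rec_type;
  l_asset_hdr_rec        fa_api_types.asset_hdr_rec_type;
  l_asset_desc_rec       fa_api_types.asset_desc_rec_type;
  l_asset_type_rec       fa_api_types.asset_type_rec_type;
  l_asset_cat_rec        fa_api_types.asset_cat_rec_type;
  l_asset_hierarchy_rec  fa_api_types.asset_hierarchy_rec_type;
  l_asset_fin_rec        fa_api_types.asset_fin_rec_type;
  l_asset_deprn_rec      fa_api_types.asset_deprn_rec_type;
  l_asset_dist_rec       fa_api_types.asset_dist_rec_type;
  l_asset_dist_tbl       fa_api_types.asset_dist_tbl_type;
  l_inv_tbl              fa_api_types.inv_tbl_type;
  l_return_status        VARCHAR2(1);
  l_msg_count            NUMBER;
  l_msg_data             VARCHAR2(4000);
BEGIN
  -- An applications context is normally required first, e.g.
  -- fnd_global.apps_initialize(user_id, resp_id, resp_appl_id);

  -- Minimal illustrative values; the mandatory fields differ by release
  -- (see Note 206473.1). All literals below are made-up examples.
  l_asset_hdr_rec.book_type_code  := 'CORP BOOK';
  l_asset_desc_rec.asset_number   := 'A-000001';
  l_asset_desc_rec.description    := 'Converted legacy asset';
  l_asset_type_rec.asset_type     := 'CAPITALIZED';
  l_asset_cat_rec.category_id     := 101;
  l_asset_fin_rec.cost            := 10000;
  l_asset_dist_rec.units_assigned := 1;
  l_asset_dist_rec.expense_ccid   := 1001;
  l_asset_dist_rec.location_ccid  := 1;
  l_asset_dist_tbl(1)             := l_asset_dist_rec;

  fa_addition_pub.do_addition(
    p_api_version          => 1.0,
    p_init_msg_list        => fnd_api.g_false,
    p_commit               => fnd_api.g_false,
    p_validation_level     => fnd_api.g_valid_level_full,
    x_return_status        => l_return_status,
    x_msg_count            => l_msg_count,
    x_msg_data             => l_msg_data,
    p_calling_fn           => 'conversion_sketch',
    px_trans_rec           => l_trans_rec,
    px_dist_trans_rec      => l_dist_trans_rec,
    px_asset_hdr_rec       => l_asset_hdr_rec,
    px_asset_desc_rec      => l_asset_desc_rec,
    px_asset_type_rec      => l_asset_type_rec,
    px_asset_cat_rec       => l_asset_cat_rec,
    px_asset_hierarchy_rec => l_asset_hierarchy_rec,
    px_asset_fin_rec       => l_asset_fin_rec,
    px_asset_deprn_rec     => l_asset_deprn_rec,
    px_asset_dist_tbl      => l_asset_dist_tbl,
    px_inv_tbl             => l_inv_tbl);

  DBMS_OUTPUT.put_line('Return status: ' || l_return_status);
END;
/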
Of the methods above, the technical demands increase as you go down the list, and ease of operation increases as you go up. That does not mean you must always pick one extreme or the other; it is obviously impossible for Oracle to provide an API for every data table, so the appropriate method should be chosen according to the overall data situation.
IV. Data Verification
Even if we check the data carefully before importing it and design and test the import rigorously, we still cannot guarantee that the imported data is completely correct, so the imported data has to be verified. Common verification methods include the printed output check, the on-screen check, and the double entry check.
Verifying data against printed output is a common method during implementation. Using the system's export or reporting functions, the data imported into the system is printed as paper documents, which are then distributed to the relevant personnel for checking and can be kept permanently as an archive.
The on-screen check is mainly used for data that is small in volume but scattered across the system: totalling and recalculating amounts, checking balances, checking that debits equal credits, and checking vouchers against the books in order to find errors.
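Where the imported data already sits in the standard tables, much of this kind of checking can be done with simple queries instead of screen-by-screen inspection. For instance, the query below lists imported journals whose debits and credits do not balance; it uses the standard General Ledger journal tables (gl_je_headers, gl_je_lines), and you would add your own filters for the conversion batches concerned.

-- List imported journals where total debits do not equal total credits.
SELECT h.name                       AS journal_name,
       h.period_name,
       SUM(NVL(l.accounted_dr, 0))  AS total_dr,
       SUM(NVL(l.accounted_cr, 0))  AS total_cr
  FROM gl.gl_je_headers h,
       gl.gl_je_lines   l
 WHERE l.je_header_id = h.je_header_id
 GROUP BY h.name, h.period_name
HAVING SUM(NVL(l.accounted_dr, 0)) <> SUM(NVL(l.accounted_cr, 0));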
The double entry check is done with the support of suitable software. The entry process takes three steps: the data is entered once, entered a second time (each entry batch being saved and closed), and finally the two are compared, the purpose being to check whether the two sets of entries agree. If they match, the data passes; if not, an error is reported. The work may look complicated; the step can also be omitted, but omitting it usually requires special safeguards such as temporary tables and rigorous program logic. The verification method used varies with the data, but as long as the data can be checked accurately, you can sign your name on the data import validation report with confidence, and that is enough. This article is from: http://developer.51cto.com/art/200702/40036.htm