According to the official Oracle documentation, extracting the data dictionary to a flat file with LogMiner requires the utl_file_dir parameter to be set. Setting that parameter, however, requires a database restart, which is not acceptable for some critical services. Is there another way to construct the dictionary information? The answer is yes.
The specific steps are as follows:
1. Put the database in archivelog mode.
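If archiving is not already enabled (on most production systems it will be), the standard sequence looks roughly like the following; note that it does require a bounce through the mount state:
SQL> archive log list
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;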
2. Enable supplemental log data.
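A minimal example of turning supplemental logging on and checking it afterwards:
SQL> alter database add supplemental log data;
SQL> select supplemental_log_data_min from v$database;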
3. Store the data dictionary in the redo logs:
SQL> execute dbms_logmnr_d.build(options => dbms_logmnr_d.store_in_redo_logs);
4. View the archived logs that contain the data dictionary.
SQL> select name from v$archived_log where dictionary_begin = 'YES'; -- archived logs where the dictionary begins
SQL> select name from v$archived_log where dictionary_end = 'YES';   -- archived logs where the dictionary ends
5. Use LogMiner to parse the archived logs into SQL statements.
SQL> execute dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);
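Note that start_logmnr only mines logs that have been registered with the session first; a sketch of the full sequence, with the archived log names from step 4 replaced by placeholder paths, could look like this:
SQL> execute dbms_logmnr.add_logfile(logfilename => '/arch/arch_0001.arc', options => dbms_logmnr.new);
SQL> execute dbms_logmnr.add_logfile(logfilename => '/arch/arch_0002.arc', options => dbms_logmnr.addfile);
SQL> execute dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);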
6. Query sql_redo from v$logmnr_contents and spool the result to a file.
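In SQL*Plus this can be done roughly as follows (the spool file name is only a placeholder):
SQL> set heading off
SQL> set pagesize 0
SQL> set linesize 4000
SQL> set trimspool on
SQL> spool /tmp/dict_redo.sql
SQL> select sql_redo from v$logmnr_contents;
SQL> spool off
SQL> execute dbms_logmnr.end_logmnr;
The final end_logmnr call simply closes the LogMiner session once the output has been captured.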
7. On another test database of the same version, generate a data dictionary file, extract the create_table statements from it, and put them into the file you spooled.
8. Verify the data. Compared with a data dictionary file generated by the normal procedure, the important ID values are the same, so after this check the constructed data dictionary file is usable.
Finally, a gripe about Oracle: even in 11gR2 we still have to set this parameter; it really should have been changed to let us point the dictionary file at a directory of our own choosing.