Oracle LogMiner Performance Test

Source: Internet
Author: User

1 Test Introduction

1.1 Purpose of the Test

The test simulates LogMiner parsing online/archive log files in different environments. By analyzing the collected data, it verifies the following two points to determine the technical feasibility of using LogMiner:
1. The impact on database server memory and CPU under different log file sizes and different data loads;
2. Whether the row counts in LogMiner's dynamic view match those of the actual physical tables, to verify its accuracy.

1.2 Test Environment

Database server: model T420i; processor Intel(R) Core(TM) i5 M430 at 2.2 GHz; memory 2 GB; hard disk 300 GB; operating system Windows XP; database Oracle 10g R2 (10.2); IP address 10.88.54.83.
Test machine: model T420i; processor Intel(R) Core(TM) i5 M430 at 2.2 GHz; memory 1.8 GB; display 1280*800 wide screen; operating system Windows XP; browser IE8.

1.3 Test Solution

1.3.1 Performance impact (for objective 1)
To simulate the actual running environment, a background insert job ran while LogMiner was working. The database was tested with no operations, with 300 inserts/second, and with 500 inserts/second, and the runs were compared for online log files of 50 MB and 100 MB.

1.3.2 Accuracy (for objective 2)

1. Data type test (columns: No., data type, supported since version, problem handling)
1. BINARY_DOUBLE: 8.1 and above
2. BINARY_FLOAT: 8.1 and above
3. CHAR: 8.1 and above
4. DATE: 8.1 and above; the time format must be set, otherwise only the date part is synchronized: alter system set nls_date_format = 'yyyy-MM-dd HH24:mi:ss' scope = spfile;
5. INTERVAL DAY: 8.1 and above
6. INTERVAL YEAR: 8.1 and above
7. NUMBER: 8.1 and above
8. NVARCHAR2: 8.1 and above
9. RAW: 8.1 and above
10. TIMESTAMP: 8.1 and above
11. TIMESTAMP WITH LOCAL TIME ZONE: 8.1 and above
12. VARCHAR2: 8.1 and above
13. LONG: 9.2 and above
14. CLOB: 10.1 and above; requires ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS; a single insert appears as two statements in the mined output
15. BLOB: 10.0 and above; inserting binary data was not tested

2. DDL statement test, not completed (columns: No., statement type, supported, problem handling)

1. Create table: supported
2. Drop table: supported; shown as two statements: the table is first renamed to a temporary (recycle bin) name, then the temporary table is dropped; monitoring of this type requires merging the two statements
3. Create job: not supported
5. Create procedure: supported
6. Add column (alter table ... add column): supported
7. Drop column (alter table emp drop column): supported
8. Modify column (alter table emp modify column): supported
9. Rename column (alter table ... rename column): supported
10. Rename table (rename emp to ...): supported
11. Truncate table: supported
12. Drop table: supported
13. Flashback table ... to before drop: supported
14. NOT NULL constraint (alter table ... modify ... not null): supported
15. UNIQUE constraint: supported
16. Primary key constraint: supported
17. Foreign key constraint: supported
18. CHECK constraint: supported
19. Disable/enable constraint: supported
20. Drop constraint: supported
21. Create non-unique index: supported
22. Create unique index: supported
23. Create bitmap index: supported
24. Create reverse key index: supported
25. Function-based index: supported
26. Modify index: supported
27. Coalesce index: supported
28. Rebuild index: supported
29. Drop index: supported
30. Create view: supported
31. Modify view (create or replace view): supported
32. Drop view: supported
33. Create sequence: supported
34. Alter sequence: supported
35. Drop sequence: supported

3. Other problem tests (columns: No., problem symptom, handling)

1. Inserting data into a master table and its child table: rows are inserted and synchronized normally.
2. Transaction commit/rollback: both committed and uncommitted content is visible; consider adding the DBMS_LOGMNR.COMMITTED_DATA_ONLY option to the product design, so that only committed transactions are read.
3. Batch update: one update statement affecting multiple rows generates a separate statement for each updated row in the online log; synchronization retrieves and then executes them.
4. Update and delete statements carry a rowid; it can be removed with the DBMS_LOGMNR.NO_ROWID_IN_STMT option.

2 Test Conclusion

2.1 Preliminary Conclusions

1. From the performance impact test:
a) The LogMiner load-and-analyze process takes between 6 and 21 seconds, varying randomly;
b) Time, CPU, and memory consumption of the load-and-analyze process do not change significantly as the number of log files increases;
c) The largest cost of the load-and-analyze process is memory, followed by CPU; time consumption is comparatively small.
2. From the accuracy test:
a) DML statements can be captured with the appropriate settings (LOB fields still need to be tested);
b) As things stand, DDL support is insufficient and requires further testing.

Attached test data. For each run: number of log files read; loading time (s), CPU (%), memory (MB); analysis time (s), CPU (%), memory (MB); rows in log_contents.

Solution 1: 50 MB online log, no insert job (0 records/second); dictionary file 47.5 MB generated in 12.7 s.
  1 file:   loading 1 s, 309 MB; analysis 5.5 s, 25% CPU, 438 MB
  3 files:  loading 1 s, 309 MB; analysis 5.7 s, 25% CPU, 444 MB
  5 files:  loading 1 s, 1% CPU, 326 MB; analysis 5.6 s, 25% CPU, 445 MB; rows 492,606
  10 files: loading 1 s, 1% CPU, 326 MB; analysis 5.6 s, 445 MB; rows 1,149,284

Solution 2: 50 MB online log, insert job of 500, estimated 300 records/second; dictionary file 47.5 MB generated in 20 s.
  1 file:   loading 1 s, 26% CPU, 391 MB; analysis 6.7 s, 35% CPU, 530 MB; rows 111,328
  3 files:  loading 1 s, 21% CPU, 473 MB; analysis 6.4 s, 619 MB; rows 372,389
  5 files:  loading 1 s, 25% CPU, 534 MB; analysis 6.8 s, 44% CPU, 692 MB; rows 622,390
  10 files: loading 624 MB; analysis 6.7 s, 780 MB; rows 1,254,748

Solution 3: 50 MB online log, insert job of 1000, estimated 500 records/second (baseline without LogMiner: CPU 80%, 680 MB); dictionary file 47.5 MB generated in 54.7 s.
  1 file:   loading 3.5 s, 688 MB; analysis 806 MB; rows 35,892
  3 files:  loading 1.5 s, 688 MB; analysis 14.4 s, 777 MB; rows 384,743
  5 files:  loading 1 s, 68% CPU, 687 MB; analysis 75% CPU, 805 MB; rows 652,148
  10 files: loading 10 s, 80% CPU, 689 MB; analysis 13.2 s, 806 MB; rows 1,295,158

Solution 4: 50 MB online log, insert job of 2000, estimated 1000 records/second (baseline without LogMiner: CPU 80%, 667 MB); dictionary file 47.5 MB generated in 73.7 s.
  1 file:   loading 5.5 s, 691 MB; analysis 14.6 s, 808 MB; rows 133,844
  3 files:  loading 11.4 s, 691 MB; analysis 809 MB; rows 390,029
  5 files:  loading 5.5 s, 76% CPU, 690 MB; analysis 13.6 s, 806 MB; rows 668,013
  10 files: loading 6.1 s, 690 MB; analysis 15.4 s, 809 MB; rows 1,335,587

Solution 5: 100 MB online log, insert job of 500, estimated 300 records/second (baseline without LogMiner: CPU 25%, 464 MB); dictionary file 23.8 MB generated in 8.7 s.
  1 file:   loading 0.8 s, 484 MB; analysis 4.1 s, 573 MB; rows 268,715
  3 files:  loading 0.9 s, 534 MB; analysis 3.2 s, 622 MB; rows 768,989
  5 files:  loading 0.9 s, 27% CPU, 581 MB; analysis 3.2 s, 35% CPU, 662 MB; rows 1,324,447
  10 files: loading 1.1 s, 690 MB; analysis 5.2 s, 35% CPU, 763 MB; rows 2,619,322
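The loading, analysis, and accuracy checks above all go through a standard LogMiner session. The document does not show the exact commands used, so the following is a minimal sketch of such a session on Oracle 10g; the dictionary path, log file names, and the scott.emp table are illustrative assumptions, while the package calls and the two options from the "other problem tests" section (COMMITTED_DATA_ONLY, NO_ROWID_IN_STMT) are the real DBMS_LOGMNR API.

```sql
-- Build a flat-file dictionary (the "generate dictionary file" step in the test data).
-- UTL_FILE_DIR must include /oracle/dict; the path is an illustrative assumption.
EXECUTE DBMS_LOGMNR_D.BUILD(dictionary_filename => 'dict.ora', dictionary_location => '/oracle/dict');

-- Register the online/archive log files to analyze (1, 3, 5 or 10 files per run).
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/oracle/logs/redo01.log', options => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/oracle/logs/redo02.log', options => DBMS_LOGMNR.ADDFILE);

-- Start the session. COMMITTED_DATA_ONLY hides uncommitted transactions;
-- NO_ROWID_IN_STMT strips the rowid from reconstructed UPDATE/DELETE statements.
EXECUTE DBMS_LOGMNR.START_LOGMNR( -
  dictfilename => '/oracle/dict/dict.ora', -
  options => DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.NO_ROWID_IN_STMT);

-- Accuracy check: compare the mined row count with the physical table count.
SELECT COUNT(*) FROM v$logmnr_contents
 WHERE seg_owner = 'SCOTT' AND table_name = 'EMP' AND operation = 'INSERT';
SELECT COUNT(*) FROM scott.emp;

-- End the session and release the PGA memory it holds.
EXECUTE DBMS_LOGMNR.END_LOGMNR;
```

Note that v$logmnr_contents is only populated between START_LOGMNR and END_LOGMNR, and only for the session that started the miner, which is why the test measures memory and CPU while the query runs.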

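Section 1.3.1 drives the test with background jobs producing roughly 300 or 500 inserts per second, but the document does not show how those jobs were created. A hedged sketch using DBMS_SCHEDULER (available in Oracle 10g); the table load_test, the job name, and the row shape are hypothetical:

```sql
-- Hypothetical load table; name and columns are illustrative only.
CREATE TABLE load_test (id NUMBER, payload VARCHAR2(100), created DATE);

-- A job that fires every second and inserts a batch of about 300 rows,
-- approximating the 300 records/second load level from section 1.3.1.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'LOGMNR_LOAD_300',
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[
      BEGIN
        FOR i IN 1 .. 300 LOOP
          INSERT INTO load_test VALUES (i, RPAD('x', 100, 'x'), SYSDATE);
        END LOOP;
        COMMIT;
      END;]',
    repeat_interval => 'FREQ=SECONDLY;INTERVAL=1',
    enabled         => TRUE);
END;
/
```

Raising the loop bound to 500 or 1000 reproduces the heavier load levels in solutions 3 and 4; the commit at the end of each batch also exercises the COMMITTED_DATA_ONLY behavior discussed above.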