1. Test Purpose
Test the performance of the Run Dry report and the Collection report on the same hardware and web container, comparing their performance differences in grouping, sorting, filtering, joining, and ranking within reports, as well as their performance under concurrency. During the test, the Run Dry report uses the report tool's built-in calculation engine, while the Collection report uses the integrated collector as its calculation engine.
2. Environmental Description
Test model: Dell Inspiron 3420
CPU: Intel Core [email protected]
RAM: 4GB
HDD: Western Digital WDC (500GB, 5400 RPM)
Operating system: Win7 (X64) SP1
JDK: 1.6
Database: Oracle 11g R2
Tomcat: 6.0.36 (x64)
Tomcat JVM memory: -Xms512m -Xmx2048m
Run Dry report version: 4.5.6
Collection report version: 5.0
3. Data Description
The test uses three tables: T1, T2, and T3. The table below lists information for the three tables, where the data volume is the number of records.
The T1 and T2 tables have the same structure, as follows:
T3 Table Structure:
4. Use Case Description
4.1. Grouping
Using the T2 table, the report groups records by the Date2 and Date fields (a two-level grouping). The data-fetching SQL is: ds1: select * from T2.
The report format is:
4.1.1. Run Dry report implementation
4.1.2. Collection report implementation
The Collector script:
Report Template:
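The collector script and report template appear as images in the original and are not reproduced here. As a rough illustration only, a two-level grouping like this use case can be sketched in Python; the Date2/Date field names come from the use-case description, while the sample rows are invented:

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical rows standing in for "select * from T2"; only the
# Date2/Date field names come from the use-case description.
rows = [
    {"Date2": "2013-01", "Date": "2013-01-05"},
    {"Date2": "2013-01", "Date": "2013-01-05"},
    {"Date2": "2013-01", "Date": "2013-01-09"},
    {"Date2": "2013-02", "Date": "2013-02-01"},
]

def group_two_level(rows):
    """Two-level grouping: by Date2, then by Date, counting rows per group."""
    key = itemgetter("Date2", "Date")
    counts = {}
    for k, grp in groupby(sorted(rows, key=key), key=key):
        counts[k] = sum(1 for _ in grp)
    return counts
```

The collector performs this kind of grouping with hash-based algorithms, which is relevant to the analysis in section 7.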
4.2. Sorting
Using the T1 table, fetch 500,000 records and sort them in the report by the Date field; the report shows 5 fields. The data-fetching SQL is:
ds1: select * from T1 where rownum < 500000
The report format is:
4.2.1. Run Dry report implementation
4.2.2. Collection report implementation
The Collector script:
Report Template:
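The script and template images are not reproduced here. As a rough stand-in for the sorting logic (the row generator and field set are invented; only the Date field comes from the use case), a Python sketch:

```python
import random

# Hypothetical rows standing in for the fetched records (far fewer than
# the 500,000 in the test, to keep the sketch fast); only the Date field
# name comes from the use-case description.
random.seed(42)
rows = [
    {"Id": i, "Date": "2013-%02d-%02d" % (random.randint(1, 12), random.randint(1, 28))}
    for i in range(10_000)
]

# Sort by the Date field, as the report does
sorted_rows = sorted(rows, key=lambda r: r["Date"])
```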
4.3. Filter
Using the T1 table, the report filters by the ID field; the filtered result contains 82 records. The data-fetching SQL is: ds1: select * from T1.
The report format is:
4.3.1. Run Dry report implementation
4.3.2. Collection report implementation
The Collector script:
Report Template:
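The script and template images are not reproduced here. The filtering step amounts to a predicate applied in the report layer after fetching; a minimal Python sketch, with invented rows and an invented predicate (the actual ID condition is not shown in the text):

```python
# Hypothetical rows standing in for "select * from T1".
rows = [{"Id": i, "Name": "user%d" % i} for i in range(1000)]

# Filtering in the report layer: keep only rows matching the predicate.
# The predicate below is invented; the test's real ID filter yields 82 rows.
filtered = [r for r in rows if r["Id"] % 12 == 0]
```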
4.4. Join
Using the T2 and T3 tables, the report left-joins T2 to T3 on the ID field. The data-fetching SQL is:
ds1: select * from T2 where userid < 267427523
ds2: select * from T3 where userid < 10485202
Record counts: ds1: 7,171; ds2: 12,730.
The report format is:
4.4.1. Run Dry report implementation
4.4.2. Collection report implementation
The Collector script:
Report Template:
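The script and template images are not reproduced here. A hash-based left join of the kind the collector uses (see section 7) can be sketched in Python; only the ID join field and the table roles (T2 left, T3 right) come from the use case, and the sample rows and Sumtime field are invented:

```python
# Hypothetical rows; only the Id join field comes from the use-case description.
t2 = [{"Id": 1, "UserId": 101}, {"Id": 2, "UserId": 102}, {"Id": 3, "UserId": 103}]
t3 = [{"Id": 1, "Sumtime": 55}, {"Id": 3, "Sumtime": 70}]

def left_join(left, right, key):
    """Hash-based left join: index the right table once, then probe per left row."""
    index = {r[key]: r for r in right}
    joined = []
    for row in left:
        match = index.get(row[key])
        merged = dict(row)
        merged["Sumtime"] = match["Sumtime"] if match else None
        joined.append(merged)
    return joined

result = left_join(t2, t3, "Id")
```

Building the hash index once makes each probe O(1) on average, instead of scanning the right table per row.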
4.5. Ranking
Using the T3 table, the report ranks records by the Sumtime field. The data-fetching SQL is: ds1: select * from T3 where userid < 8883948.
The data set ds1 contains 10,430 records.
The report format is:
4.5.1. Run Dry report implementation
4.5.2. Collection report implementation
The Collector script:
Report Template:
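The script and template images are not reproduced here. Ranking by a field can be sketched in Python with standard competition ranking (the exact tie-handling the report uses is not stated, so this is one plausible choice); only the Sumtime field name comes from the use case:

```python
# Hypothetical rows; only the Sumtime field name comes from the use case.
rows = [
    {"UserId": 1, "Sumtime": 30},
    {"UserId": 2, "Sumtime": 50},
    {"UserId": 3, "Sumtime": 30},
]

def rank_by(rows, field):
    """Standard competition ranking (1, 2, 2, 4, ...) descending by `field`."""
    ordered = sorted(rows, key=lambda r: r[field], reverse=True)
    first_rank = {}
    for pos, row in enumerate(ordered, start=1):
        first_rank.setdefault(row[field], pos)  # ties share the earliest position
    return [dict(r, Rank=first_rank[r[field]]) for r in rows]

ranked = rank_by(rows, "Sumtime")
```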
4.6. Concurrent grouping
Repeat the grouping case from section 4.1 for concurrent grouping tests; 4 concurrent requests are used here.
5. Test method
Deploy the Run Dry report application and the Collection report application in two identical Tomcat instances (same JVM settings, etc.). Both reports use the same data source: the Run Dry report uses a SQL data set and fetches data directly, while the Collection report uses a collector data set and fetches data in DFX. Compare report performance on grouping, sorting, and the other operations, and restart Tomcat before each use-case test to ensure that no other report occupies memory or other resources.
In addition, because of the test machine's limited configuration, different use cases use different data tables, so the results of different use cases are not vertically comparable. The focus is the horizontal comparison, that is, the performance of the Run Dry report versus the Collection report on the same case.
6. Test results
* Units for the result data in the following tables: seconds
Description
The time recorded here runs from the completion of SQL execution to the completion of report calculation. The Run Dry report provides these two time points directly in its log, and their difference is the interval. The Collection report requires adding debug statements to the script to record the time when data fetching completes, which is then compared with the report-calculation end time in the log.
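The measurement described above can be sketched as follows; the fetch and calculation functions are placeholders, and the point is that the recorded interval excludes SQL/data-fetching time:

```python
import time

def fetch_data():
    """Placeholder for SQL execution / data fetching (not timed in the results)."""
    time.sleep(0.01)
    return list(range(1000))

def calculate_report(data):
    """Placeholder for the report calculation being timed."""
    return sorted(data, reverse=True)

data = fetch_data()
t_fetch_done = time.perf_counter()   # time point: data fetching completed
report = calculate_report(data)
t_calc_done = time.perf_counter()    # time point: report calculation completed

interval = t_calc_done - t_fetch_done  # the value recorded in the results tables
```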
7. Interpretation and Analysis
The Collection report moves the calculation into the collector script, and the collector uses a more efficient hash algorithm, which greatly improves data-association calculations such as grouping. For brute-force traversal calculations such as filtering and sorting, there is no algorithm-level improvement; the small gain there mainly comes from the fact that collector operations carry no presentation attributes.
All operations in this test are single-threaded. In fact, the collector supports multithreaded parallel computing that takes full advantage of multicore CPUs, which can further improve performance; even sorting and filtering operations can be sped up severalfold. Most reporting tools, including the Run Dry report, currently do not support multithreaded parallel computing and cannot use multicore CPUs to improve performance.
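As a conceptual sketch of the parallel computing idea (not the collector's actual mechanism): split the data, sort the chunks in parallel, then merge the sorted runs. In Python, note that CPython's GIL limits CPU-bound gains for threads; a real implementation would use processes or a multicore-friendly runtime:

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

def parallel_sort(data, workers=4):
    """Split the data, sort the chunks in parallel, then k-way merge the runs.

    Note: in CPython the GIL limits CPU-bound gains for threads; this is a
    conceptual sketch of the divide/sort/merge pattern, not a tuned implementation.
    """
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        runs = list(ex.map(sorted, chunks))
    return list(heapq.merge(*runs))

result = parallel_sort([5, 3, 1, 4, 2], workers=2)
```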
Comparison of calculation performance between the Collection report and the Run Dry report