In report test case design, test data is the key. As Jackie observed in "Report Testing in the Invoicing System", completing a report test more effectively and with higher quality requires paying extra attention to data preparation. In practice, test data always serves a test scenario: one or more sets of test data are used to verify that the report displays the correct statistical value for that scenario. Ultimately, then, scenario design is what matters. The earlier report analysis established the basic framework of the test cases; next we refine and complete the scenario design on that framework, and then design the corresponding test data for each scenario.
I roughly divide test data design into three categories:
1. Valid data
Valid data, as the name implies, conforms to both the front-end business rules and the statistical rules. Such data is included in the report and directly contributes to its statistical values.
2. Invalid data
Invalid data is data that is excluded by the statistical rules. It conforms to the front-end business rules but not to the report's statistical rules; in other words, it has no effect on the report's statistical values.
3. Abnormal data
Abnormal data tests the fault tolerance of the report system. This type of data violates the front-end business rules and, if mishandled, distorts the report's statistical values. The most common case is a statistic whose denominator is zero.
Designing such data matters most when the report system is separate from the business system. When the two are unified, abnormal data is blocked by the front-end business rules and may never reach the report at all. When they are separate, abnormal data can appear for short periods because data transmission is not synchronized, and at that point the report system's error-handling mechanism becomes very important.
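As a minimal sketch of the zero-denominator case mentioned above (the function name and the choice of returning `None` for the report cell are my own illustrative assumptions, not from the original system):

```python
def safe_rate(numerator, denominator):
    """Compute a percentage statistic, guarding against a zero denominator.

    When abnormal data leaves the denominator at zero (e.g. source rows
    arrived out of sync), the report should show a placeholder instead of
    crashing with ZeroDivisionError.
    """
    if denominator == 0:
        return None  # rendered as a blank or "N/A" in the report cell
    return round(numerator / denominator * 100, 2)
```

A test scenario would then feed in source data that forces the denominator to zero and check that the report degrades gracefully rather than erroring out.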
In addition to the above three types of data, we also need to pay attention to the following points when designing report test data:
1. Ensure the independence of test data between scenarios
This is important for keeping the data under control. If the same piece or group of test data is used to check multiple report statistics, then whenever a result differs from the expected value, tracing the error becomes much harder. Independent data also makes a defect easier to describe and preserves the defect site intact until the developers resolve the problem.
2. Data diversity
Diversity means preparing multiple groups of test data for a scenario, because varied data is closer to reality and more likely to expose problems. I once ran into exactly this situation: when testing the same statistic in a report, the value for one month came out correct while the value for another month came out wrong. Normally, the same program computing two results should get both right or both wrong, so why one of each? The developers investigated and found that the calculation program was indeed faulty. The two months used different groups of test data, and one month's data happened to be such that even the incorrect program produced the correct value. Report testing therefore needs multiple groups of test data to back it up; otherwise defects will slip right past us.
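The coincidence described above can be sketched in a few lines. The hard-coded group size is a hypothetical bug of my own invention, chosen only to show how one dataset can mask a defect that a second dataset exposes:

```python
def correct_average(values):
    """The intended statistic: the mean of the source rows."""
    return sum(values) / len(values)

def buggy_average(values):
    """Hypothetical defect: the developer hard-coded the group size."""
    return sum(values) / 3

month_1 = [10, 20, 30]        # 3 rows: the bug is coincidentally correct
month_2 = [10, 20, 30, 40]    # 4 rows: the bug is exposed

# With only month_1 the defect passes unnoticed; month_2 catches it.
assert buggy_average(month_1) == correct_average(month_1)
assert buggy_average(month_2) != correct_average(month_2)
```

A single "lucky" dataset would have let this defect through, which is exactly why each scenario deserves several distinct groups of data.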
3. Do not forget empty reports
An empty report occurs when no source data matches the report's query conditions, so the statistical values come out blank. This test verifies that the report remains correct in the no-data case, handling the absence of data gracefully rather than producing an error or a misleading figure.
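A minimal sketch of the empty-report check (the row schema and function name are illustrative assumptions; whether the report shows 0 or a blank is a display decision for the actual system):

```python
def total_sales(rows, month):
    """Sum sales amounts for a month; an empty match must yield 0, not an error."""
    matched = [r["amount"] for r in rows if r["month"] == month]
    return sum(matched)  # sum([]) == 0, so the no-data case stays well defined

rows = [{"month": "2011-01", "amount": 9000}]
assert total_sales(rows, "2011-05") == 0      # empty report: no matching data
assert total_sales(rows, "2011-01") == 9000   # normal case for comparison
```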
4. Pay attention to numerical design
The values here are the statistical values. For example, when a statistic is a percentage, we need to cover the maximum (100.00%), the minimum (0.00%), a mid-range value (such as 38.01%), and a precision check (99.99%). We also need to consider negative numbers, percentages over 100%, and decimals.
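These boundary cases can be collected into one parameter list and run through the report's formatting rule. The list below and the two-decimal format are a sketch based on the values named above; the extra entries are my own illustrative assumptions:

```python
# Boundary and special percentage values worth covering (illustrative):
boundary_cases = [
    100.00,   # maximum
    0.00,     # minimum
    38.01,    # an ordinary mid-range value
    99.99,    # precision check just below the maximum
    -5.25,    # negative value (e.g. a decline rate)
    150.00,   # over 100% (e.g. a target-completion rate)
]

def format_percent(value):
    """Format a statistic the way the report cell would: two decimals plus '%'."""
    return f"{value:.2f}%"

assert format_percent(100.0) == "100.00%"
assert format_percent(38.01) == "38.01%"
assert format_percent(99.99) == "99.99%"
```

Driving every value in `boundary_cases` through the real report, rather than only a "typical" value, is what catches rounding and display defects at the edges.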
5. Comparison between different reports
The same group of data should behave consistently across different reports. For example, if the total sales report shows that business point A sold 10,000 yuan in May, but the detailed sales report for business point A shows only 9,000 yuan, then one of the two reports must be wrong.
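A cross-report check like this can be sketched as two views over the same source rows, with an assertion that they agree. The schema and function names are hypothetical:

```python
# One set of source rows feeding two different reports (hypothetical schema).
sales = [
    {"point": "A", "month": "05", "amount": 4000},
    {"point": "A", "month": "05", "amount": 6000},
    {"point": "B", "month": "05", "amount": 2500},
]

def summary_report(rows, month):
    """Total sales per business point for a month (the summary view)."""
    totals = {}
    for r in rows:
        if r["month"] == month:
            totals[r["point"]] = totals.get(r["point"], 0) + r["amount"]
    return totals

def detail_report(rows, point, month):
    """Sum of the detail lines for one business point (the drill-down view)."""
    return sum(r["amount"] for r in rows if r["point"] == point and r["month"] == month)

# Cross-check: the drill-down must reproduce the summary figure exactly.
assert summary_report(sales, "05")["A"] == detail_report(sales, "A", "05")
```

If the two views disagree, at least one report's logic is wrong, which is precisely the 10,000-versus-9,000 situation described above.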
6. Pay attention to the design of historical data
In a report system built on OLAP technology, the historical dimension is also a focus of testing, and testing it requires designing historical data. For example, suppose salesperson A serves business point a in January and February 2011 and is transferred to business point B in March; then A's sales performance through February should be credited to business point a, and performance from March onward should be credited to business point B. To check whether the report credits salesperson A's performance to the correct business point at each point in time, we need to design a batch of sales source data for salesperson A spanning those periods.
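The attribution rule can be sketched as a lookup against an assignment history, in the spirit of a slowly changing dimension. The dates and data structures below are illustrative assumptions matching the transfer example above:

```python
from datetime import date

# Assignment history for salesperson A (hypothetical): each entry records
# which business point the salesperson belongs to from that date onward.
assignments = [
    (date(2011, 1, 1), "a"),
    (date(2011, 3, 1), "B"),
]

def point_on(day, history):
    """Return the business point in effect on a given day."""
    current = None
    for start, point in sorted(history):
        if day >= start:
            current = point
    return current

def attribute_sales(sales, history):
    """Credit each (date, amount) sale to the point valid on its date."""
    totals = {}
    for day, amount in sales:
        point = point_on(day, history)
        totals[point] = totals.get(point, 0) + amount
    return totals

sales = [(date(2011, 2, 15), 5000), (date(2011, 4, 2), 7000)]
assert attribute_sales(sales, assignments) == {"a": 5000, "B": 7000}
```

The test data to design, then, is a batch of sales for salesperson A dated both before and after the transfer, so the report's time-aware attribution can be checked against this expected split.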
7. Test Data Backup
Like general system tests, report tests run across multiple versions. The volume of report test data is also large, at least three times that of business test data, so backup is essential. I have backed up data as database backup files, as SQL statements, and in CSV or Excel format, and after comparing them I recommend CSV or Excel. Across versions it is hard to avoid changes to the database structure or table fields; a database backup file becomes unusable as soon as the structure changes. SQL statements avoid that problem, but test data stored as SQL statements is not intuitive and is inconvenient to modify. CSV or Excel is therefore the easiest to work with, and many databases can import CSV or Excel files in batches.
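A CSV backup round trip is straightforward with the standard library; this sketch uses an in-memory buffer, but a file object works the same way (the column names are illustrative):

```python
import csv
import io

rows = [
    {"point": "A", "month": "2011-05", "amount": "10000"},
    {"point": "B", "month": "2011-05", "amount": "2500"},
]

# Back up the test data as CSV text.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["point", "month", "amount"])
writer.writeheader()
writer.writerows(rows)

# Restore: the round trip must preserve every row exactly.
buffer.seek(0)
restored = list(csv.DictReader(buffer))
assert restored == rows
```

Because the backup is plain text keyed by column names rather than by table structure, it survives schema changes better than a binary database dump, which is the advantage argued for above.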
[Tool] Test Data Design in Report Testing