How to quantitatively manage testers

In a project, assessing testers is often a difficult problem for the project manager and the test manager. A tester's assessment within the project team consists of two parts: work efficiency and work quality. Work efficiency indicators evaluate activities, while work quality indicators evaluate the quality of deliverables. Following the traditional test cycle, the test process is divided into three areas: test planning, test design, and test execution. The test plan falls within the test manager's scope, while testers are mainly responsible for test design and test execution. The test manager's assessment can build on the indicators used for testers, and this part of the assessment can also be rolled up to the project-team level. The assessment indicators are as follows:

1. Test Design
1. Work Efficiency Indicators
(1) Document output rate: this indicator is obtained by dividing the number of pages of test case documents by the effective time spent writing them. It measures a tester's productivity in producing test case documents.
Formula: Σ test case document pages (pages) / Σ effective time spent writing test case documents (hours)
Reference indicator: according to project summaries, the average is about 1.14 pages/hour. Values above this are considered good; values below it, poor.
(2) Use case output rate: this indicator supplements the one above and evaluates the rate at which test cases are produced. The page count of a test document may contain a great deal of redundant information, so the number of test cases in the document is also checked: divide the total number of test cases in the test case documents by the effective time spent writing them.
Formula: Σ number of test cases (cases) / Σ effective time spent writing test case documents (hours)
Reference indicator: average 4.21 cases/hour (a minimal calculation sketch for both efficiency indicators follows)
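As a minimal sketch of how these two efficiency indicators might be computed, the snippet below assumes per-tester records with hypothetical fields (pages, cases, writing_hours); the field names and sample numbers are illustrative, not data from the article.

```python
# Minimal sketch: test-design work-efficiency indicators.
# Record fields (pages, cases, writing_hours) and the sample numbers are
# illustrative assumptions, not data from the article.

design_records = [
    {"tester": "A", "pages": 40, "cases": 150, "writing_hours": 35.0},
    {"tester": "B", "pages": 25, "cases": 90, "writing_hours": 24.0},
]

def document_output_rate(records):
    """Sum of test case document pages / sum of effective writing time, in pages/hour."""
    return sum(r["pages"] for r in records) / sum(r["writing_hours"] for r in records)

def case_output_rate(records):
    """Sum of test cases / sum of effective writing time, in cases/hour."""
    return sum(r["cases"] for r in records) / sum(r["writing_hours"] for r in records)

print(f"Document output rate: {document_output_rate(design_records):.2f} pages/hour")
print(f"Use case output rate: {case_output_rate(design_records):.2f} cases/hour")
# Compare against the reference values quoted above (about 1.14 pages/hour and 4.21 cases/hour).
```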
2. Work Quality Indicators
(1) Requirement coverage rate: divide the total number of test cases by the number of function points they correspond to, to check whether any function points have been left untested.
Formula: Σ number of test cases (cases) / Σ function points (points)
Reference indicator: 100%. If any function point falls short of 100% coverage, the testing is at least insufficient. This indicator is quite difficult to collect; it becomes much easier if a requirements traceability matrix or a test management tool maps use cases to requirements one by one (the sketch after these quality indicators shows one simple way to do this).
(2) Document quality: the number of defects found during review and peer review of the test cases, optionally normalized by the number of document pages. This indicator describes the quality of the tester's documentation.
Formula: Σ number of defects (review and peer review) (count)
Σ number of defects (review and peer review) / Σ test case document pages (pages)
Reference indicator: because the number of defects found in review is not fixed, there is no reference value for the absolute count; when the raw count is not comparable, use the defects/page form for horizontal comparison.
(3) Document efficiency: the number of system test defects found while testing with a test case document, divided by the number of pages of that document. It evaluates how well the document supports the test work.
Formula: Σ number of defects (system test) / Σ test case document pages (pages)
Reference indicator: average 2.18 defects/page
Note: this indicator should be counted when a document a tester writes during testing is also used for subsequent (secondary) testing.
(4) Use case efficiency: the total number of defects found by executing the test cases, divided by the total number of test cases. This indicator supplements the previous one and checks whether the use cases themselves are of high quality.
Formula: Σ number of defects (system test) / Σ number of test cases (cases)
Reference indicator: average 0.59 defects/use case, that is, roughly one defect for every two use cases executed. Every project is different, so calibrate this value in your own practice.
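A small sketch of how these four quality indicators could be pulled from a case-to-requirement mapping and a few defect counts is given below; the data shapes, identifiers (TC-001, FP-1), and toy numbers are assumptions for illustration only.

```python
# Minimal sketch: test-design work-quality indicators.
# The traceability mapping, identifiers, and toy counts are illustrative
# assumptions about how such data might be stored.

case_to_function_points = {
    "TC-001": ["FP-1"],
    "TC-002": ["FP-1", "FP-2"],
    "TC-003": ["FP-3"],
}
all_function_points = {"FP-1", "FP-2", "FP-3", "FP-4"}

review_defects = 3        # defects found in review / peer review of the cases
system_test_defects = 5   # defects found while executing these cases
document_pages = 4
total_cases = len(case_to_function_points)

# Requirement coverage: cases per function point, and any uncovered points.
covered = {fp for fps in case_to_function_points.values() for fp in fps}
uncovered = sorted(all_function_points - covered)
print(f"Cases per function point: {total_cases / len(all_function_points):.2f}")
print(f"Uncovered function points: {uncovered or 'none'}")

# Document quality: review defects, absolute and per page.
print(f"Document quality: {review_defects} review defects, "
      f"{review_defects / document_pages:.2f} defects/page")

# Document efficiency and use case efficiency.
print(f"Document efficiency: {system_test_defects / document_pages:.2f} defects/page")
print(f"Use case efficiency: {system_test_defects / total_cases:.2f} defects/case")
```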

2. Test Execution
1. Work Efficiency Indicators
(1) Execution efficiency: divide the number of test case document pages by the total effective time spent executing system tests (excluding the time spent writing the test case documents). A supplementary form divides the number of use cases by the total system test time. This measures how fast a tester executes tests per hour.
Formula: Σ test case document pages (pages) / Σ effective time spent executing system tests (hours)
Σ number of test cases (cases) / Σ effective time spent executing system tests (hours)
Reference indicators: average 0.53 pages/hour and 1.95 use cases/hour; in other words, a tester executes about half a page of test cases, or about two test cases, per hour. Horizontal comparison makes it easy to see which members execute quickly. Note: high execution efficiency does not mean high test quality; the two can even be inversely related, so the work quality indicators below offset this bias. In practice, members with very high case execution efficiency often have a low defect discovery rate. Even if this is not included in the assessment, it is worth collecting as important data for test process improvement.
(2) Progress deviation: divide the differences between planned and actual times by the total actual working hours to check a tester's progress, monitor whether the test schedule is being followed, and confirm whether the project's progress requirements are met (see the sketch after these efficiency indicators).
Formula: [Σ (planned start time − actual start time) + Σ (planned end time − actual end time)] / total working hours
Reference indicator: 15%. Progress deviation is a relative indicator: for a half-year test, 15% may amount to roughly 20 working days, and a slip of 3 working days stays well below 15% of the total test days; but for a test lasting only one week, the same 3 days already exceeds 60% of the days needed for the entire test phase.
(3) Defect discovery rate: the total number of defects a tester finds, divided by the total testing time that tester spent. Because execution efficiency alone cannot show whether a tester is working diligently, the number of defects found per hour is an important evaluation indicator and a useful source of feedback.
Formula: Σ number of defects (system test) / Σ effective time spent executing system tests (hours)
Reference indicator: average 1.1 defects/hour. If a tester cannot find one defect in an hour, then unless the product quality is very high and the module very small, that tester's defect discovery capability is weaker than that of the other testers. With a finer classification, defect discovery capability can also be defined in terms of the number of important defects found.
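The sketch below shows one way these three execution-efficiency indicators might be computed for a single tester; the field names, the use of calendar days for the schedule slip, and the sample figures are assumptions made for illustration.

```python
# Minimal sketch: test-execution work-efficiency indicators for one tester.
# Field names, sample figures, and the calendar-day slip calculation are
# illustrative assumptions.
from datetime import date

execution = {
    "pages_executed": 32,
    "cases_executed": 120,
    "execution_hours": 60.0,   # effective system-test time, excluding case writing
    "defects_found": 66,
    "planned_start": date(2023, 3, 1), "actual_start": date(2023, 3, 3),
    "planned_end": date(2023, 3, 31), "actual_end": date(2023, 4, 4),
    "total_working_days": 22,
}

# Execution efficiency: pages/hour and cases/hour.
print(f"{execution['pages_executed'] / execution['execution_hours']:.2f} pages/hour")
print(f"{execution['cases_executed'] / execution['execution_hours']:.2f} cases/hour")

# Defect discovery rate: defects/hour.
print(f"{execution['defects_found'] / execution['execution_hours']:.2f} defects/hour")

# Progress deviation: total slip (start plus end, positive = late) over total working time.
start_slip = (execution["actual_start"] - execution["planned_start"]).days
end_slip = (execution["actual_end"] - execution["planned_end"]).days
deviation = (start_slip + end_slip) / execution["total_working_days"]
print(f"Progress deviation: {deviation:.0%}")  # compare against the 15% reference
```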
2. Work Quality Indicators
(1) Valid defect rate: the total number of defects that were rejected or deleted, or that total divided by the total number of defects reported. This indicator evaluates how many of a tester's reported defects are actually confirmed as defects; the lower the rejected/deleted count and ratio, the higher the test quality.
Formula: Σ number of defects (rejected and deleted in system tests)
Σ number of defects (rejected and deleted in system tests) / Σ number of defects (system test)
Reference indicator: average 21.9% (of every 100 defects a tester reports, about 22 are not confirmed by the development team, are judged not to be defects, or were entered incorrectly). The ratio is easy to quote as a reference, but a reference value for the absolute number of invalid defects depends on the project and cannot be given here.
(2) Serious defect rate: this ratio compensates for the defect discovery rate. It counts defects by severity level and compares each count with the total number of defects or with the number of valid defects. Companies generally classify defect severity as severe, general, or minor, or use a finer scale (usually with an odd number of levels). The severity levels can also be converted into weights (severe : general : minor = 1 : 3 : 5) to compute a score for each tester (see the sketch after these quality indicators).
Formula: Σ number of severe/general/minor defects / Σ number of defects
Σ number of severe/general/minor defects / Σ number of valid defects
Reference indicators: severe ≈ 10%, general ≈ 70%, minor ≈ 20%. A tester who finds a higher proportion of serious defects is testing relatively well. In general, the distribution of defect counts across severity levels is roughly normal.
(3) Module defect rate: divide the number of defects found in a single tested module by the number of function points of that module. When a module is tested separately, this makes horizontal comparison with other modules easy. Attributing each module's defect count to the corresponding tester helps gauge that tester's level, and it also provides data for assessing the developers.
Formula: Σ number of defects (system test) / Σ function points (points)
Σ number of defects (system test) / Σ sub-function points (points)
Reference indicators: average 3.74 defects/function point and 1 defect/sub-function point
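To illustrate these three quality indicators, the sketch below reads a toy list of defect records; the status values, severity names, module function-point counts, and the 1 : 3 : 5 weighting are assumptions about how a defect-tracker export might look, not a prescribed format.

```python
# Minimal sketch: test-execution work-quality indicators.
# Statuses, severities, modules, and the severity weights quoted above are
# illustrative assumptions about a defect-tracker export.
from collections import Counter

defects = [
    # (status, severity, module)
    ("fixed", "severe", "login"),
    ("fixed", "general", "login"),
    ("rejected", "general", "report"),
    ("fixed", "minor", "report"),
    ("deleted", "general", "report"),
    ("fixed", "general", "login"),
]
module_function_points = {"login": 2, "report": 3}
severity_weights = {"severe": 1, "general": 3, "minor": 5}  # ratio quoted in the text

# Valid defect rate: rejected/deleted defects, absolute and as a share of all defects.
invalid = sum(1 for status, _, _ in defects if status in ("rejected", "deleted"))
print(f"Rejected/deleted: {invalid} of {len(defects)} ({invalid / len(defects):.1%})")

# Serious defect rate: severity distribution over valid defects, plus a weighted score.
valid = [d for d in defects if d[0] not in ("rejected", "deleted")]
by_severity = Counter(severity for _, severity, _ in valid)
for severity, count in by_severity.items():
    print(f"{severity}: {count / len(valid):.0%} of valid defects")
score = sum(severity_weights[sev] * count for sev, count in by_severity.items())
print(f"Weighted severity score: {score}")

# Module defect rate: valid defects per function point, per module.
by_module = Counter(module for _, _, module in valid)
for module, count in by_module.items():
    print(f"{module}: {count / module_function_points[module]:.2f} defects/function point")
```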

3. Test Management
The assessment of a test manager is more complicated. Besides the test manager's participation in test design and execution, their test management capability, that is, the work done in the test planning phase, should also be examined.
(1) Plan quality: the number of defects found when the test plan is reviewed, or that number per document page; it can be compared with similar projects or with the organization's average indicators.
Formula: Σ number of defects (review and peer review)
Σ number of defects (review and peer review) / Σ test plan document pages (pages)
Reference indicator: none
(2) Cost quality: cost measurement focuses mainly on workload, because both wages and bonuses are tied to workload. Cost quality compares the total planned workload of the test activities with the actual workload. Schedule is already covered by the testers' progress deviation indicator; workload covers the cost factor (see the sketch below).
Formula: Σ planned workload of test activities (estimated person-days) / Σ actual workload of test activities (person-days)
Reference indicator: in principle, the planned workload should not deviate by more than about ±15% to ±20%. This indicator is in effect a measure of cost. For a large project, estimates often differ greatly from reality, and interim statistics can be off by as much as ±500%. In that case the plan must be adjusted and the average estimate recalculated in the final stage. A test manager must effectively control the cost of completing the task.
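As a closing sketch of the cost-quality check, the snippet below compares planned and actual workload per activity; the activity names and person-day figures are illustrative assumptions.

```python
# Minimal sketch: cost quality (planned vs. actual workload) for a test manager.
# Activity names and person-day figures are illustrative assumptions.
planned_person_days = {"test design": 20, "test execution": 45, "regression": 10}
actual_person_days = {"test design": 24, "test execution": 52, "regression": 9}

planned = sum(planned_person_days.values())
actual = sum(actual_person_days.values())

print(f"Planned/actual workload ratio: {planned / actual:.2f}")    # the formula quoted above
print(f"Workload deviation: {(actual - planned) / planned:+.0%}")  # guideline: roughly within ±15%–20%
```

Together with the progress deviation indicator for testers, this closes the loop between schedule and cost in the assessment.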
