How to Evaluate Software Quality?


Abstract: Starting from the concepts related to software quality, this paper analyzes the characteristics of software quality, proposes principles for selecting the corresponding evaluation indicators, and then establishes a software quality evaluation index system.

Keywords: software quality; evaluation; index system

1. Concepts Related to Software Quality

Software quality is "the totality of features and characteristics of a software product that bear on its ability to satisfy stated or implied needs." According to the national software quality standard GB/T 8566-2001, software quality evaluation usually starts from an analysis of the software quality framework.

1.1 Software Quality Framework Model

As shown in Figure 1, the software quality framework is a three-layer model of quality characteristics, quality sub-characteristics, and measurement factors.
In this framework model, the top layer consists of management-oriented quality characteristics. Each quality characteristic is a set of attributes used to describe and evaluate software quality and represents one aspect of it. Software quality is determined not only by the characteristics the software exhibits externally, but also by its internal characteristics.

The second layer, the quality sub-characteristics, is refined from the top-layer quality characteristics; a given sub-characteristic may contribute to several quality characteristics. Quality sub-characteristics serve as the communication channel between managers and technicians on software quality issues.

The bottom layer consists of the software quality measurement factors (including various parameters) used to measure the quality characteristics. Quantitative measurement factors can be measured or calculated directly, providing the basis for deriving the values of the quality sub-characteristics and, in turn, the characteristics.

Figure 1 Software Quality Framework Model

1.2 Software Quality Characteristics

According to the national software quality standard GB/T 8566-2001, software quality can be evaluated in terms of the following characteristics:

A. Functionality: a set of attributes relating to a set of functions and their specified properties. The functions here are those that satisfy stated or implied needs.

B. Reliability: a set of attributes relating to the ability of the software to maintain its level of performance under stated conditions for a stated period of time.

C. Usability: a set of attributes relating to the effort needed for use, and the individual assessment of such use, by a stated or implied set of users.

D. Efficiency: a set of attributes relating to the relationship between the software's level of performance and the amount of resources used, under stated conditions.

E. Maintainability: a set of attributes relating to the effort needed to make specified modifications.

F. Portability: a set of attributes relating to the ability of the software to be transferred from one environment to another.

Each quality characteristic corresponds to several sub-characteristics.

2. Selection Principles of Evaluation Indicators

Selecting and quantifying an appropriate indicator system is the key to software testing and evaluation. Evaluation indicators can be divided into qualitative and quantitative indicators. In principle, quantitative indicators should be chosen wherever possible, since they reflect the quality characteristics of software scientifically and objectively. For most software, however, not every quality characteristic can be described by quantitative indicators, so some qualitative indicators are unavoidable.

When selecting evaluation indicators, the following principles should be observed:

A. Relevance
That is, the indicators should distinguish the software under evaluation from general software systems and reflect its essential characteristics, which manifest chiefly as strong functionality and high reliability.

B. Testability
That is, the indicator can be expressed quantitatively and obtained through mathematical calculation, platform testing, empirical statistics, or other methods.

C. Conciseness
That is, the indicator is easy for all parties involved to understand and accept.

D. Completeness
That is, the selected indicators should cover the full scope of the analysis target.

E. Objectivity
That is, the indicators should objectively reflect the essential characteristics of the software and not vary from person to person.

It should be noted that more evaluation indicators are not necessarily better; what matters is the role each indicator plays in the evaluation. Too many indicators not only complicate the results but may even undermine the objectivity of the evaluation. Indicators are generally determined top-down, decomposed layer by layer, and balanced repeatedly in a dynamic process.
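The top-down decomposition and weighting described above can be sketched in code. The characteristic names, weights, and scores below are purely illustrative assumptions, not values taken from any standard:

```python
def weighted_score(scores, weights):
    """Aggregate normalized indicator scores (0..1) using weights that sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[name] * weights[name] for name in weights)

# Assumed example: four top-level characteristics with illustrative weights.
weights = {"functionality": 0.4, "reliability": 0.3,
           "usability": 0.2, "efficiency": 0.1}
scores = {"functionality": 0.9, "reliability": 0.8,
          "usability": 0.7, "efficiency": 0.95}

print(round(weighted_score(scores, weights), 3))  # 0.835
```

The same aggregation can be applied recursively: sub-characteristic scores roll up into characteristic scores, which roll up into an overall quality score, mirroring the layer-by-layer decomposition.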

3. Software Quality Evaluation Index System

In software testing and evaluation we generally focus on functionality, reliability, usability, and efficiency. When carrying out the evaluation, the development task specification of the software under evaluation should serve as the main basis; a top-down, layer-by-layer decomposition should be adopted, with reference to the relevant national software quality standards.

3.1 Functionality Indicators

Functionality is one of the most important quality characteristics of software and can be refined into completeness and correctness. At present, the functionality of software is evaluated mainly by qualitative methods.

A. Completeness
Completeness is the software attribute concerned with whether the software's functions are complete. If the functions the software actually performs are fewer than, or do not conform to, the stated or implied functions specified in the task specification, the software's functionality cannot be called complete.

B. Correctness
Correctness is the software attribute concerned with whether results or effects are correct and consistent with the specification. The correctness of the software depends largely on the engineering model of its modules (which directly affects the accuracy of auxiliary computation and the quality of auxiliary decision-making solutions) and on the programming skill of the software developers.

The evaluation of these two sub-characteristics is based mainly on the results of functional testing; the criterion is the degree to which the functions demonstrated in actual operation conform to the specified functions. The software development task specification clearly defines the functions the software should perform, such as information management, provision of auxiliary decision-making solutions, auxiliary office work, and resource updating. Software about to undergo acceptance testing should provide these stated or implied functions.

At present, functional testing designs a number of typical test cases for each function. The test cases are run during testing, and the results are compared with known standard answers. The comprehensiveness, typicality, and authority of the test case set are therefore the key to functionality evaluation.
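The comparison of actual results against standard answers reduces to a pass rate over the test case set. A minimal sketch, with hypothetical test outcomes:

```python
def pass_rate(results):
    """Fraction of test cases whose actual result matches the standard answer."""
    passed = sum(1 for expected, actual in results if expected == actual)
    return passed / len(results)

# Hypothetical outcomes of three functional test cases: (expected, actual).
results = [(10, 10), ([1, 2, 3], [1, 2, 3]), ("plan A", "plan B")]
print(round(pass_rate(results), 2))  # 0.67
```

In a real evaluation the expected values come from the standard answers in the test case set, and the pass rate feeds the qualitative judgment of completeness and correctness.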

3.2 Reliability Indicators

According to the relevant software testing and evaluation requirements, reliability can be refined into maturity, stability, and recoverability. Software reliability is evaluated mainly by quantitative methods: select appropriate reliability measurement factors (reliability parameters), analyze the reliability data, and obtain specific values for the parameters.

The software's reliability measurement factors (reliability parameters) can be obtained by decomposing software reliability in detail and by referring to the development task specification.

A. Availability
Availability is the probability that the software is in a usable state when, at any random time after it is put into operation, it is required to perform a specified task or function. Availability is a comprehensive measure of the reliability of application software, taking into account its various operating environments and the various tasks and functions it must complete.
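One common steady-state estimate computes availability from the mean time to failure and the mean time to repair; the formula and the figures below are assumptions for illustration, not given in the text:

```python
def availability(mttf_hours, mttr_hours):
    """Steady-state availability: fraction of time the software is usable,
    estimated as MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

# Assumed figures: 500 h mean time to failure, 0.5 h mean time to repair.
print(round(availability(500.0, 0.5), 4))
```

The estimate approaches 1 as failures become rarer or recovery becomes faster, matching the intuition that availability combines failure behavior and recovery behavior in one number.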

B. Initial Failure Rate
The initial failure rate is the number of failures per unit time during the software's initial failure period (generally the first three months after delivery to the user), usually expressed as failures per 100 hours. It can be used to assess the quality of the delivered software and to predict when its reliability will stabilize. The initial failure rate depends on factors such as the design quality, the number of items checked, the software's size, and how thoroughly it was debugged.

C. Random Failure Rate
The random failure rate is the number of failures per unit time during the random failure period (generally beginning four months after the software is delivered to the user), usually expressed as failures per 1000 hours. It indicates the quality of the software in its stable state.

D. Mean Time To Failure (MTTF)
MTTF is the average statistical time the software operates normally before a failure occurs.

E. Mean Time Between Failures (MTBF)
MTBF is the average statistical time the software operates normally between two successive failures. In practice, MTBF usually refers to the average statistical time between the nth and (n+1)th failures for large n. When the failure rate is constant and the system recovery time is short, MTBF and MTTF are nearly equal.

The MTBF of typical civilian software abroad is about 1000 hours; software with high reliability requirements calls for 1000 to 10000 hours.
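Given a failure log, MTBF can be computed directly from the gaps between successive failures. The log below is an assumed example, not real data:

```python
def mtbf(failure_times):
    """MTBF from cumulative operating hours recorded at each failure."""
    gaps = [b - a for a, b in zip(failure_times, failure_times[1:])]
    return sum(gaps) / len(gaps)

# Assumed failure log: cumulative hours of operation at each failure.
log = [120.0, 400.0, 950.0, 1500.0]
print(mtbf(log))  # 460.0
```

With more failures recorded, the estimate stabilizes, which is why MTBF is defined over the nth and (n+1)th failures for large n.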

F. Defect Density (FD)
Defect density is the number of defects hidden in a unit of source code, generally expressed per thousand lines of non-comment source code. The specific value of FD can usually be estimated from earlier versions of the same software system; if no earlier version exists, it can be estimated from general statistics. "Typical statistics show that during development there are on average 50 to 60 defects per thousand lines of source code, and after delivery an average of 15 to 18 defects per thousand lines."
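The per-KLOC normalization is a one-line calculation; the project figures below are assumed for illustration:

```python
def defect_density(defects, source_lines):
    """Defects per thousand lines of non-comment source code (KLOC)."""
    return defects / (source_lines / 1000.0)

# Assumed project data: 42 defects found in 12,000 non-comment source lines.
print(defect_density(42, 12000))  # 3.5
```

Normalizing by KLOC makes defect counts comparable across software of different sizes, which is what allows estimation from earlier versions or from general statistics.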

G. Mean Time To Repair (MTTR)
MTTR is the average statistical time required to restore normal operation after the software fails. For software, the recovery time is the time spent on troubleshooting or restarting the system, not the time spent modifying the software itself (because the software has already been installed on the target machine, any modification entails redeployment, and the duration of that process cannot be determined).

3.3 Usability Indicators

Usability can be subdivided into understandability, learnability, and operability, three sub-characteristics aimed mainly at users. Software usability is evaluated mainly by qualitative methods.

A. Understandability
Understandability is the software attribute concerned with the effort users must make to understand the software's logical concepts and their applicability. This sub-characteristic requires that the language of all documents produced during development be concise, consistent, easy to understand, and unambiguous.

B. Learnability
Learnability is the software attribute concerned with the effort users must spend learning to use the software (for example, operational control, input, and output). This sub-characteristic requires that the user documentation provided by the developer (mainly the computer system operator's manual, the software user's manual, and the software programmer's manual) be detailed, well structured, and accurate.

C. Operability
Operability is the software attribute concerned with the effort users must spend on operation and operational control. This sub-characteristic requires that the software be user-friendly, with a scientifically and reasonably designed interface and simple operation.

3.4 Efficiency Indicators

Efficiency can be subdivided into time behavior and resource behavior and is evaluated by quantitative methods. The decomposition of the efficiency characteristic is shown in Figure 2.

Figure 2 Decomposition of the efficiency characteristic

A. Update Cycle of Output Results
The update cycle of the output results is the interval between two successive outputs of the software. To keep the whole system coordinated, the update cycle of the software's output should match that of the system's information.

B. Processing Time
The processing time is the time the software takes to complete a function (auxiliary computation or auxiliary decision-making); time spent on human-computer interaction should not be included.

C. Throughput
Throughput is the amount of information the software can process per unit time (that is, the number of batches of various targets processed). As operating conditions grow more complex and information more abundant, software must be able to process massive amounts of data, and throughput is the parameter that reflects this capability. With the proliferation of information, software throughput should reach hundreds of batches.

D. Code Size
The code size is the number of lines of the software's source program (excluding comments) and is a static attribute of the software. Excessively large code not only occupies too much storage space but also suggests that the program is not concise, its structure is unclear, and it is prone to defects.

Because these parameters reflect the internal behavior of the software, special test tools and channels are required to obtain them. The test data are compared with the indicators in the development task specification, and the results serve as the basis for the efficiency evaluation.
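Processing time and throughput can be captured with a simple test harness; the harness and the stand-in workload below are assumptions for illustration, not part of the original evaluation system:

```python
import time

def measure(process, items):
    """Time a batch of work and return (processing_seconds, items_per_second)."""
    start = time.perf_counter()
    for item in items:
        process(item)
    elapsed = time.perf_counter() - start
    return elapsed, len(items) / elapsed

# Assumed stand-in workload: a trivial per-item computation.
elapsed, throughput = measure(lambda x: x * x, range(10_000))
print(throughput > 0)
```

In an actual evaluation, the harness would wrap the function under test with human-computer interaction excluded, and the measured values would then be compared against the targets in the task specification.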

4. Conclusion

With the rapid development of computer, data integration, network, and communication technology, the requirements placed on software are growing ever higher, and how to evaluate software quality has become an urgent issue. Selecting and quantifying an appropriate indicator system is the key to evaluating software quality. Software evaluation, of course, has its own norms and requirements: the evaluation indicators cover a wide range, involve many uncertainties, and are difficult to quantify, and so far no unified standard exists.

We believe that by establishing a scientific and rational software quality evaluation index system, fully considering the particularities of the software, and drawing on quality evaluation theory from other disciplines, software quality can be assessed comprehensively and objectively.

Bibliography:

[1] Wu Baihong. Complete Set of MPA Public Management Knowledge Manual. 1st ed., Management Publishing House, 2001.
[2] Zheng Renjie, Yin Renkun, Tao Yonglei. Practical Software Engineering. 1st ed., Tsinghua University Press, 1997.
[3] Establishment and Certification of ISO 9000 Quality Systems in the Software Industry.
