Research on Software Testing and Reliability Evaluation Methods

Abstract: With the rapid development of science and technology, software has become more powerful and more complex, which greatly increases the difficulty of software testing and reliability evaluation. To ensure the quality of a software system, dedicated research on software testing and reliability evaluation methods is necessary. This article surveys some of the work in this field.

I. Definition of Software Testing

Software testing is an important stage in the software life cycle and a key step in software quality assurance. Generally speaking, software testing is the final review of the requirements analysis, the design specification, and the code before the software is put into operation. In the software engineering terminology published by the IEEE in 1983, software testing is defined as "the process of running or measuring a software system by manual or automatic means in order to check whether it meets the specified requirements, or to find the differences between the expected results and the actual results". This definition makes it clear that the purpose of software testing is to check whether the software system meets its requirements.

From the user's point of view, the hope is that software testing will expose the hidden errors and defects in the software; in this sense, software testing should be "the process of executing a program in order to find errors". In other words, software testing should carefully design a set of test cases (that is, input data and expected output results) based on the specification of each stage of software development and on the internal structure of the program, and then use these test cases to run the program in order to discover errors or defects.

II. Life Cycle of Software Testing

Testing evaluates the overall function and performance of the software against the system development task book and technical specifications. Testing principles are the theoretical basis of software testing activities, while testing methods are the practical application of those principles and the means of obtaining test data. Because of what all software has in common, testing should follow the principles and methods of general software testing; at the same time, appropriate methods must be found for the particular features of the software under test. The rationality of test cases plays a key role in software testing and evaluation, and it is not easy to make the designed cases reasonable and typical. Therefore, the actual operating environment should be studied and described together with the software developers and end users in order to form a reasonable test case set. On the other hand, the complexity of the software's runtime environment plays an important role in software evaluation, so as realistic a running background as possible should be generated to support the study. The software test life cycle is shown in Figure 1.

Practice has proved that, although many methods and techniques are used to ensure software quality during development, many errors and defects remain hidden in the delivered software, especially in large and highly complex software. Strict software testing therefore plays an important role in ensuring software quality.


Figure 1. Life cycle of software testing

Software testing spans two phases of the software life cycle. In the coding phase, when a module has been written it usually must undergo the necessary testing (called unit testing); at this point testing and coding belong to the same phase. After the coding phase is completed, various comprehensive tests (integration and system tests) are required for the software system; this is an independent phase, the software testing phase. Within this phase there are two different kinds of testing: the integration testing conducted by the development organization, and the system and acceptance testing conducted by the user (or a third party).

Errors may be introduced at every stage of software development. During software testing, some errors are discovered, classified, isolated, and eventually corrected. Because the software is continually modified, this process is repeated iteratively.
III. Test Methods and Procedures

Software testing methods include black-box testing and white-box testing. Black-box testing, also known as functional testing, data-driven testing, or specification-based testing, does not consider the internal structure and characteristics of the program; it only checks whether the relationship between inputs and outputs meets the specified requirements. White-box testing, also known as structural testing, logic-driven testing, or program-based testing, designs test cases from the known internal structure of the program. Clearly, white-box testing is suited to unit testing, while black-box testing is usually used in the independent testing phase.
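To make the distinction concrete, here is a minimal black-box style unit test in Python (an illustration added here, not taken from the source; `discount` is a hypothetical function whose specification says that orders of 100 or more receive a 10% discount):

```python
import unittest

# Hypothetical function under test; a black-box tester relies only on its
# specified input/output behaviour, not on how it is implemented.
def discount(total):
    """Return the payable amount: orders of 100 or more get 10% off."""
    return total * 0.9 if total >= 100 else total

class BlackBoxDiscountTest(unittest.TestCase):
    def test_below_threshold_is_unchanged(self):
        self.assertEqual(discount(99), 99)           # no discount expected

    def test_at_threshold_gets_discount(self):
        self.assertAlmostEqual(discount(100), 90.0)  # boundary value

    def test_above_threshold_gets_discount(self):
        self.assertAlmostEqual(discount(200), 180.0)

if __name__ == "__main__":
    unittest.main()
```

A white-box test of the same function would instead be designed from its internal structure, for example by ensuring that both branches of the `>= 100` condition are executed.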

A test case is an abstract description of a possible goal, operation, action, environment, and result in the running of the software. To design a test case is to design a test plan for a specific function or combination of functions and to document it. Test cases should reflect the ideas and principles of software engineering. Selected test cases should cover both typical and extreme situations, including the maximum and minimum boundary values. Because the purpose of testing is to expose the defects hidden in the application software, test cases and data should be designed and chosen so that defects are easy to detect, combined with a realistically complex running environment; the test data should be determined from all possible input and output conditions to check whether the application software produces the correct output.
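As a small illustration of the boundary-value idea (added here; the range of 1 to 100 is a hypothetical specification, not from the source), the following sketch generates test inputs at, just inside, and just outside the ends of a valid range:

```python
def boundary_values(low, high):
    """Classic boundary-value analysis: values at, just inside, and just
    outside each end of the valid range [low, high], plus a typical value."""
    return [low - 1, low, low + 1, (low + high) // 2, high - 1, high, high + 1]

# Hypothetical specification: a quantity field accepts integers from 1 to 100.
for value in boundary_values(1, 100):
    expected_valid = 1 <= value <= 100
    print(f"input={value:4d}  should be accepted: {expected_valid}")
```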

After the data obtained from software testing has been processed, it can be used as a basis for evaluating whether the software system meets user requirements. The information flow in the software test phase is shown in Figure 2:

 


Figure 2. Software test information flow

IV. Software Evaluation Theory and Its Development Status

Software evaluation theory is the theoretical basis for evaluation, and evaluation methods are the practical application of that theory and the means of processing test data. For the different indicators in the evaluation index system, the evaluation theory and method should be selected according to the test data. The essence of software evaluation is the measurement and assessment of software quality.

Our definition of software quality evaluation is "the activity of applying specific evaluation criteria to a software module, software package, or software product in order to determine whether it should be accepted or released".

It can be seen that the objects of software evaluation are "software modules, software packages, or software products", and that its purpose is "to determine whether the evaluated object is accepted or released". The evaluation criteria mentioned in the definition are "a set of rules and conditions, based on the specific software product and its quality requirements, for determining whether the product passes acceptance or is released". In a broad sense, the evaluation criteria already include the evaluation methods and the indicator system, that is, how to process the test data obtained and how to apply the criteria to the software being evaluated.

The complete meaning of software reliability evaluation is: based on the reliability structure of the software system (the reliability relationships between the units and the system), the life distribution type of each unit, and the reliability test information of each unit, probability and statistics methods are used to evaluate the reliability characteristics of the system.
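As a simple illustration of this idea (the source does not give a concrete structure, so series and parallel reliability blocks are assumed here), the system reliability can be computed from unit reliabilities estimated in unit-level testing:

```python
from math import prod

def series_reliability(unit_reliabilities):
    """A series structure works only if every unit works."""
    return prod(unit_reliabilities)

def parallel_reliability(unit_reliabilities):
    """A parallel (redundant) structure fails only if every unit fails."""
    return 1 - prod(1 - r for r in unit_reliabilities)

# Hypothetical unit reliabilities estimated from unit test data.
units = [0.99, 0.95, 0.98]
print("series structure:  ", round(series_reliability(units), 4))    # 0.9217
print("parallel structure:", round(parallel_reliability(units), 6))  # 0.99999
```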

At present, software reliability engineering is a young engineering discipline that is still being established and is not yet mature. Abroad, research on software reliability has been pursued intensively since the late 1960s; after about 20 years of research, a variety of reliability models and prediction methods were proposed, and a systematic software reliability engineering framework had formed by around 1990. Since the mid-1980s, the major Western industrial countries have set up dedicated research programmes and projects, such as the UK Alvey programme (software reliability and measurement standards), the European ESPRIT programme (European Strategic Programme for Research in Information Technology), the SPMMS project (software production and maintenance management assurance), and the EUREKA programme. Every year a large amount of manpower and material resources is invested in software reliability research projects, and some results have been achieved.

Research on software reliability in China started late, and there is a large gap with other countries in software reliability quantification theory, measurement standards (index systems), modeling technology, design methods, and testing technology. Most software production in China is still at the level of the early computer era, with obvious shortcomings: 1. poor transparency; 2. delivered software systems rely only on self-checks before joint debugging, so quality is not guaranteed; 3. users lack confidence in the reliability of the delivered software. Most so-called "software tests" only run a few pre-specified use cases. At present there is no inspection system as complete as that for hardware, and the quality of delivered software is not high. Typical statistics show that "the development stage averages 50 to 60 defects per thousand lines of code, and delivered code still contains 15 to 18 defects per thousand lines", which sometimes poses serious risks.

Currently, no authoritative management system or specification has been established for software reliability management, covering, for example, how to describe, test, evaluate, design for, and improve software reliability. Research on software reliability models at home and abroad is mostly concentrated on the software development stage, and few reliability models address the testing and evaluation stages. The study of software reliability testing and evaluation is therefore a subject of theoretical value and practical significance, as well as of considerable difficulty.

With the standardization of computer software development, software reliability assessment must be put on a scientific and standardized track. Specifically: 1. quantitative software reliability indicators should be formulated in the software system development task, providing clear standards for software assessment; 2. a complete software testing and reliability information collection system should be established, so that scientific software testing during development continuously reduces defects; 3. software reliability assessment methods should be studied in order to develop appropriate assessment procedures and standards; 4. software reliability evaluation tools should be developed to facilitate software qualification.

V. Definition of Software Reliability Assessment

Reliability is the ability of a product to perform a specified function under specified conditions and within a specified time; the probability measure of this ability is also called the reliability.
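In standard probabilistic notation (added here for clarity; this formulation is conventional rather than quoted from the source), if T denotes the time to failure, then

```latex
R(t) = P(T > t) = 1 - F(t), \qquad
\lambda(t) = \frac{f(t)}{R(t)}, \qquad
\mathrm{MTTF} = \int_0^{\infty} R(t)\,dt
```

where F(t) is the failure distribution function, f(t) its density, \lambda(t) the failure rate, and MTTF the mean time to failure.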

Software reliability is one of the inherent characteristics of a software system. It indicates the degree to which the software system performs its functions correctly according to user requirements and design objectives. Software reliability is related to the defects in the software and to how the system is used and what inputs it receives. In theory, a reliable software system should be correct, complete, consistent, and robust; in practice, however, no software is completely correct, and correctness cannot be measured exactly. Generally, the reliability of a software system can only be measured by testing it.

This leads to the following definition: "software reliability is the ability of a software system to complete the required functions within the specified time and under the specified environmental conditions". According to this definition, software reliability involves the following three elements:

1. Specified Time

Software reliability is reflected only in the software's running stage, so the "running time" is used as the measure of the "specified time". The running time includes the accumulated time during which the software system is working and the time during which it is suspended (started but idle). Because of the randomness of the running environment and of program path selection, the running time is a random variable, and a software failure is a random event.

2. Specified Environmental Conditions

Environmental conditions refer to the software's running environment: the various supporting elements required to run the software system, such as the supporting hardware, the operating system, other supporting software, the format and range of the input data, and the operating procedures. Software reliability varies with the environmental conditions. Specifically, the specified environmental conditions mainly describe the computer configuration and the requirements on the input data while the software system runs, assuming all other factors are ideal. With clearly defined environmental conditions, it also becomes possible to determine whether the responsibility for a software failure lies with the user or with the developer.

3. Required Functions

Software reliability is also related to the specified tasks and functions. Different tasks produce different operational profiles and call different sub-modules (that is, different program paths are selected), so the reliability may differ. Therefore, to evaluate the reliability of a software system, its tasks and functions must first be made clear.

Any discussion of software reliability evaluation has to mention software reliability models. A software reliability model is the reliability block diagram and mathematical model established to predict or estimate the reliability of software. A reliability model decomposes the reliability of a complex system into the reliability of simpler parts, so that the reliability of the complex system can be quantitatively allocated, estimated, predicted, and evaluated.
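The source does not name a particular model; as one widely used illustration, the Goel-Okumoto model assumes that failures follow a non-homogeneous Poisson process whose expected cumulative failure count by time t is a(1 - e^(-bt)), where a is the expected total number of failures and b the failure detection rate. A minimal sketch, with made-up parameter values:

```python
import math

def goel_okumoto_mean(t, a, b):
    """Expected cumulative number of failures observed by time t."""
    return a * (1.0 - math.exp(-b * t))

def conditional_reliability(x, t, a, b):
    """Probability of no failure in (t, t + x] after testing up to time t."""
    return math.exp(-(goel_okumoto_mean(t + x, a, b) - goel_okumoto_mean(t, a, b)))

# Hypothetical parameters; in practice they are estimated from failure data,
# for example by maximum likelihood over the collected failure times.
a, b = 120.0, 0.05
print("expected failures by t = 40 h:", round(goel_okumoto_mean(40, a, b), 1))
print("P(no failure in the next 10 h):", round(conditional_reliability(10, 40, a, b), 3))
```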

VI. Software Defects and Failures

A defect (fault) is an internal flaw in the software. In every stage of the software life cycle, and especially in the early design and coding stages, the actions of designers and programmers (incomplete requirements, ambiguous understanding, missing or implicit demands, algorithm and logic errors, programming mistakes, and so on) can leave the software unable, under certain conditions, to complete the required functions; defects are therefore inevitable.

Once a defect exists, it lurks in the software until it is discovered and correctly fixed. Conversely, in a given environment, once the software runs correctly it will continue to do so unless the environment changes. Moreover, software defects are not "worn away" by use; they remain hidden in the software and do not disappear on their own.

If the defective parts of the software are not exercised during operation, the software runs normally and works correctly. If a defective part is exercised, the software's calculations or decisions will differ from what is specified, and the software loses the ability to perform the required function; when software cannot complete a specified function, a "failure" has occurred. For software without fault-tolerant design, a local fault means the entire software fails; for software with fault-tolerant design, a local fault or failure does not necessarily cause the entire software to fail.

The criteria for judging that a software failure has occurred include: the system crashes, the system cannot be started, displayed records cannot be input or output, calculated data is incorrect, decisions are unreasonable, and other events or states that weaken or destroy the software's functions.

VII. Software Reliability Test Process

The complete test process includes five steps: pre-test checks, test case design, test implementation, reliability data collection, and preparation of the test report. The five steps are described in turn below.

1. Pre-test Checks

Before the reliability test of application software, it is necessary to check that the software requirements are consistent with the development task book; that the delivered programs and data and the corresponding software support environment meet the requirements; that the documents are consistent with the program; that the documents produced during software development are complete and accurate; and that the documents have passed the relevant reviews.

According to the relevant software industry standards, 16 documents are produced during software development: the system/segment design document, software development plan, software requirement specification, interface requirement specification, interface design document, software design document, software product specification, version description document, software test plan, software test description, software test report, computer system operator's manual, software user's manual, software programmer's manual, firmware support manual, and computer resources integrated support document.

Note: the software test plan, software test description, and software test report are the test documents produced by the testers during development. In principle, some documents may be merged if the software is not large.

Although these checks increase the workload, they are necessary for detecting errors early in testing and for improving the quality of the software.

2. Design Test Cases

To design a test case is to design a test plan for a specific function or combination of functions and to document it. As noted earlier, the selected test cases should cover both typical and extreme situations, including the maximum and minimum boundary values. Because the purpose of testing is to expose the defects hidden in the application software, test cases and data should be designed and chosen so that defects are easy to detect, combined with a realistically complex running environment; the test data should be determined from all possible input and output conditions to check whether the application software produces the correct output.

A typical test case should contain the following details:

A. Test objective;

B. Function to be tested;

C. Test environment and conditions;

D. Test date;

E. Test input;

F. Test procedure;

G. Expected output;

H. Criteria for evaluating the output results.

All test cases should be reviewed by experts before they can be used.

The first step in designing and selecting a test case set is to describe the test cases. Whether this description is authoritative, complete, understandable, and standardized determines whether, and to what extent, the test cases can be understood and accepted by the operators, the software developers, and the test validators. Standardized test case descriptions therefore play an important role in software testing and evaluation.
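A standardized, machine-readable form of the items A to H listed above could look like the following sketch (the field names and the sample values are illustrative, not prescribed by the source):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TestCase:
    """One standardized test case record covering items A-H above."""
    objective: str               # A. test objective
    function_under_test: str     # B. function to be tested
    environment: str             # C. test environment and conditions
    test_date: str               # D. test date
    inputs: List[str]            # E. test input
    procedure: List[str]         # F. test procedure (steps)
    expected_output: str         # G. expected output
    pass_criteria: str           # H. criteria for evaluating the output
    reviewed: bool = False       # set to True after expert review

example = TestCase(
    objective="Verify that login rejects an empty password",
    function_under_test="login()",
    environment="Test server, default configuration",
    test_date="1999-01-01",
    inputs=["user=alice", "password="],
    procedure=["Open the login page", "Submit the form with an empty password"],
    expected_output="Error message shown, no session created",
    pass_criteria="Observed output matches the expected output exactly",
)
```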

3. Test Implementation

After the above preparations, the test can be carried out. All software documents delivered by the developers, including the product specification, user documents, programs, and data, shall be tested against the requirement description and the quality requirements. The programs and data must be tested under every configuration specified in the project contract, the requirement statement, and the user documentation.

During testing, "reinforced input" can be considered, that is, input that is harsher than normal input while remaining reasonable. If the software is reliable under reinforced input, it can be expected to be at least as reliable under regular input.
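One simple way to produce reinforced input is to push test data well beyond what normal operation generates while keeping it within what the specification allows. The sketch below is an illustration; `parse_record` is a hypothetical function under test, driven here with oversized but well-formed inputs:

```python
import random
import string

def parse_record(line):
    """Hypothetical function under test: parse 'name=value;name=value;...'."""
    return dict(item.split("=", 1) for item in line.split(";") if item)

def reinforced_inputs(count=10, max_pairs=10_000):
    """Generate records far larger than typical use, but still well-formed."""
    for _ in range(count):
        pairs = random.randint(1_000, max_pairs)   # much larger than normal
        yield ";".join(
            f"k{i}={''.join(random.choices(string.ascii_letters, k=50))}"
            for i in range(pairs)
        )

failures = 0
for line in reinforced_inputs():
    try:
        parse_record(line)
    except Exception:
        failures += 1
print("failures under reinforced input:", failures)
```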

To obtain more reliability data, the software should be run on several computers at the same time so as to increase the accumulated running time.

4. Reliability Data Collection

Software reliability data is the basis of reliability evaluation. A software error reporting, analysis, and correction system should be established. In accordance with the relevant standards, procedures for software error reporting and for collecting, storing, analyzing, and processing reliability data should be developed and implemented, so that software error reports are recorded completely and accurately during testing and the reliability data is collected.

Software reliability data defined in terms of time can be divided into four types: 1. failure time data, recording the cumulative time at which each failure occurs; 2. failure interval data, recording the interval between the current failure and the previous one; 3. grouped data, recording the number of failures within a given time interval; 4. cumulative grouped data, recording the cumulative number of failures up to the end of each interval. These four types of data can be converted into one another.
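The conversions between these four forms are mechanical; a short sketch with made-up failure times (in hours):

```python
# Hypothetical cumulative failure times (hours) recorded during testing.
failure_times = [5.0, 12.5, 14.0, 30.0, 41.5, 43.0]

# Type 1 -> type 2: failure-interval data (time since the previous failure).
intervals = [t - prev for prev, t in zip([0.0] + failure_times, failure_times)]

# Type 1 -> type 3: grouped data (number of failures per 10-hour interval).
interval_length, groups = 10.0, 5
grouped = [
    sum(1 for t in failure_times if i * interval_length < t <= (i + 1) * interval_length)
    for i in range(groups)
]

# Type 3 -> type 4: cumulative number of failures at the end of each interval.
cumulative = [sum(grouped[: i + 1]) for i in range(groups)]

print("intervals :", intervals)   # [5.0, 7.5, 1.5, 16.0, 11.5, 1.5]
print("grouped   :", grouped)     # [1, 2, 1, 0, 2]
print("cumulative:", cumulative)  # [1, 3, 4, 4, 6]
```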
Each test record must contain sufficient information, including:

A. Test time;

B. Test plans or instructions containing the test cases;

C. All test results related to the test, including all faults that occurred during the test;

D. The identities of the personnel who took part in the test.

5. Compile the Test Report

The software reliability test report must be prepared after the test activities, summarizing the test items and the test results. The "Software Test Report" format provided in GJB 438A-97 can be used as a reference and tailored as needed. The test report shall contain the following content:

A. Product ID;

B. Configuration used (hardware and software);

C. Documents used;

D. Test results of product descriptions, user documents, programs and data;

E. List of items that do not meet the requirements;

F. End Date of the test.

This standardized process management and control helps obtain genuine and effective data and lays the foundation for objective evaluation results.

VIII. Conclusion

This article has focused on software testing and reliability evaluation methods. The best method of software reliability evaluation is, of course, to make full use of field testing. In practice, reliability evaluation is constrained by many objective conditions, the biggest of which is insufficient reliability information. Therefore, the reliability of the entire system should be evaluated statistically from the historical reliability test information of each module of the software. This requires: collecting sufficient historical reliability test information for the software and each of its modules; a clear reliability relationship between the modules and the software; known life distribution types for each module; and the cooperation of the software development department (because the historical software data is mainly held by the developer).
