Two classical methods of software testing

Traditionally, software testing methods have been divided into two broad categories.
The first type of method tries to verify that the software "works", that is, that its functions behave according to the original design; the second type tries to prove that the software "does not work".
The representative of the first class of methods is a pioneer of the software testing field, Dr. Bill Hetzel, who organized the first formal conference on software testing at the University of North Carolina in June 1972. In 1973 he gave software testing its first definition: "Testing is establishing the confidence that a program does what it is supposed to do." In 1983 he revised the definition: software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining whether it achieves the expected results. The "supposed to do" and "expected results" in his definitions are essentially what we now call user requirements or functional design. He also defined software quality as "meeting the requirements".
The first type of testing can be described as a simple, abstract process: run a function of the software in a defined environment and compare its actual result with the user's requirement or the design specification; if they match, the test passes, and if not, the discrepancy is recorded as a bug. The ultimate goal of this process is to run every feature of the software in every environment mandated by the design and have all of them pass.
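As a minimal sketch of this verification-oriented style (the function calculate_shipping_fee and its expected values are hypothetical, chosen only for illustration), a test runs the code under the established conditions and compares the actual result with the expected one from the design:

    # Verification-oriented ("the software works") test sketch.
    # calculate_shipping_fee is a hypothetical function used only for illustration.

    def calculate_shipping_fee(weight_kg: float, express: bool = False) -> float:
        """Assumed design: flat rate plus a per-kilogram charge, doubled for express."""
        fee = 5.0 + 2.0 * weight_kg
        return fee * 2 if express else fee

    def test_fee_matches_design():
        # The "established conditions": a 3 kg standard (non-express) parcel.
        actual = calculate_shipping_fee(3.0)
        # The "expected result" taken from the requirements/design document.
        expected = 11.0
        # If actual equals expected the test passes; otherwise the difference is a bug.
        assert actual == expected

    if __name__ == "__main__":
        test_fee_matches_design()
        print("Verification test passed: behavior matches the design.")

Repeating this comparison for every feature and every mandated environment is, in this view, the whole of testing.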
In the software industry, the first kind of method is generally regarded as the mainstream and the industry standard. The 1990 IEEE/ANSI standard defines software testing as "the process of operating a system or component under specified conditions, observing the results, and evaluating some aspect of the system or component." The "specified conditions" here can likewise be understood as requirements or design.
Nonetheless, this approach has been questioned and challenged by many industry authorities. The most prominent is Glenford J. Myers, who believes that testing should not focus on verifying that the software works; instead, it should start from the assumption that the software contains errors and then try to discover as many of them as possible. He also argues, from the standpoint of human psychology, that setting out to "verify that the software works" strongly discourages testers from finding software errors. In 1979 he gave his own definition of software testing: "Testing is the process of executing a program with the intent of finding errors."
This is the second type of software testing method: in simple terms, it tries to prove that the software "does not work", that is, that it contains errors. Myers even holds that a successful test must be one that finds a bug; otherwise it has little value. The analogy is a patient who really does have a disease going to the hospital for an examination: if every indicator comes back normal, the examination has contributed nothing to diagnosing the patient's condition and is, in that sense, a failure.
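Continuing the hypothetical calculate_shipping_fee sketch above, a defect-oriented test in Myers' spirit deliberately probes boundary and invalid inputs where errors tend to hide, rather than re-confirming the normal case. The requirement that non-positive or absurd weights be rejected is an assumption made here purely for illustration:

    # Defect-oriented ("find the errors") test sketch, in the spirit of Myers.
    # calculate_shipping_fee is the same hypothetical function as above; the
    # rule that it must reject suspicious weights is an assumed requirement.

    def calculate_shipping_fee(weight_kg: float, express: bool = False) -> float:
        fee = 5.0 + 2.0 * weight_kg
        return fee * 2 if express else fee

    def probe_suspicious_weights() -> list:
        """Try boundary and invalid inputs and collect any silent misbehavior."""
        defects = []
        for weight in (0.0, -1.0, 1e9):
            try:
                fee = calculate_shipping_fee(weight)
            except ValueError:
                continue  # rejecting bad input is the assumed correct behavior
            defects.append(f"weight={weight} silently produced fee={fee}")
        return defects

    if __name__ == "__main__":
        # By Myers' criterion this run is "successful" precisely because it
        # uncovers defects: the sketch implementation never validates input.
        for defect in probe_suspicious_weights():
            print("defect:", defect)

The same code that passes the verification-style test above fails this probe, which is exactly what the second school considers a valuable test.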
The second type of software testing method is also popular in industry and is supported by many academic experts. Its goal can be stated clearly and concisely: the software tester's goal is to find software defects, to find them as early as possible, and to make sure they get fixed. Some software companies use the number of bugs found as a performance indicator for testers, which in effect means they have accepted this approach.