Questions and answers for exploratory testing
This section discusses the concepts and practices of exploratory testing in the form of a dialogue. The questioner is a virtual reader of the book; the respondent is the book's author.
Q: How should we understand the "exploration" in exploratory testing?
A: Exploration here means purposeful wandering: navigating a space with a mission, but without a predetermined route [kaner01]. It includes in-depth study of the product and its technology, as well as practical application of what that study reveals.
Q: How do we implement exploratory testing?
A: This topic is discussed at length in Part 1 of this book. Here we first introduce one feasible way to implement exploratory testing, inspired by session-based test management (Session-Based Test Management)[1] [bach2000].
Exploratory testing encourages testers to select appropriate test procedures and techniques based on the current context. During testing, the SMART principle [2] provides good guidance for testers.
- Specific: the test requires a specific goal.
- Measurable: clear indicators can be used to assess whether the goal has been achieved.
- Achievable: the goal should be attainable. This may require dividing a large goal into multiple small goals, each of which is specific, measurable, and achievable. Tracking the completion of small goals also makes overall progress measurable.
- Relevant: the goal must fit the current context, serve the team's interests, and align with the company's vision.
- Time-boxed: set a reasonable deadline for each goal. This helps the tester shut out irrelevant distractions within a fixed time window and concentrate on the work.
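The SMART criteria above can be captured as a simple data structure. The sketch below is purely illustrative, not from the book or any SBTM tool; all names (`TestGoal`, `is_smart`, the field names) are hypothetical, and the check is only a rough completeness test under those assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: one way to record a SMART-checked test goal.
# All names and fields here are illustrative, not from the book.

@dataclass
class TestGoal:
    specific_goal: str          # Specific: what exactly will be tested
    success_metric: str         # Measurable: how completion is judged
    subgoals: list = field(default_factory=list)  # Achievable: split large goals
    product_risk: str = ""      # Relevant: the risk this goal addresses
    time_box_minutes: int = 90  # Time-boxed: deadline for the session

    def is_smart(self) -> bool:
        """Rough check that every SMART field has been filled in."""
        return all([
            self.specific_goal,
            self.success_metric,
            self.product_risk,
            0 < self.time_box_minutes <= 120,
        ])

goal = TestGoal(
    specific_goal="Explore the checkout flow with invalid coupon codes",
    success_metric="All coupon-validation paths exercised; defects logged",
    product_risk="Revenue loss from mis-applied discounts",
)
print(goal.is_smart())  # True: every SMART field is present
```

A goal missing any field (an empty risk statement, say, or no success metric) fails the check, which is the point of the principle: each charter should be inspectable against all five criteria before a session starts.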
Based on the SMART principle, testers can perform exploratory testing as described below.
(1) The tester develops a test plan: he analyzes the application under test and establishes a number of specific test missions, each targeting a potential product risk.
(2) The tester splits each test mission into a series of test charters, each with a clear exit condition and time limit.
(3) After brief planning, the tester selects a charter by priority and performs exploratory testing within a fixed time window (60 to 120 minutes long, ideally 90 minutes), called a session. During the session he designs, executes, and evaluates tests, using the knowledge gained and the questions raised to extend the breadth and depth of the testing.
(4) After the session, the tester takes a proper break to rest and clear his mind.
(5) He then reflects on the current testing progress and refines the test plan. He may schedule another session for the current charter; he may add new charters to cover gaps in the earlier plan; he may delete charters to reflect his latest understanding of the test object.
(6) With that done, he begins a new round of exploratory testing with greater confidence.
The above is only one possible way to implement exploratory testing. A responsible tester will choose his own method, because only as an expert in his own context can he make the most context-appropriate decisions. In addition, drawing on the whole team through peer review, brainstorming, pair testing, and similar activities helps produce better test results.
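The session workflow above can be sketched in code. This is a minimal, assumed model, not an implementation of session-based test management; the names (`Charter`, `run_session`, the backlog) are hypothetical, and the "session" is a placeholder for the real design-execute-evaluate work.

```python
import heapq

# Illustrative sketch of the session workflow described above.
# Names (Charter, run_session) are hypothetical, not from SBTM tooling.

class Charter:
    def __init__(self, mission, priority, time_box=90):
        self.mission = mission    # specific test mission (steps 1-2)
        self.priority = priority  # lower number = more urgent
        self.time_box = time_box  # session length in minutes (step 3)
        self.done = False

    def __lt__(self, other):      # lets heapq order charters by priority
        return self.priority < other.priority

def run_session(charter):
    """Placeholder for a time-boxed session: design, execute, evaluate."""
    charter.done = True
    return f"Tested '{charter.mission}' for {charter.time_box} min"

# Step 3: pick the highest-priority charter and run one session.
backlog = [Charter("Payment timeouts", 2), Charter("Login edge cases", 1)]
heapq.heapify(backlog)
report = run_session(heapq.heappop(backlog))

# Steps 5-6: after reflecting, revise the backlog before the next session,
# e.g. add a follow-up charter suggested by this session's findings.
heapq.heappush(backlog, Charter("Session-expiry during checkout", 1))
print(report)
```

The point of the loop is that the backlog is mutable between sessions: reflection (step 5) may add, reschedule, or delete charters, so the plan evolves with the tester's understanding rather than being fixed up front.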
Q: What is the difference between exploratory testing and ad hoc testing?
A: Both exploratory testing and ad hoc testing emphasize improvisation: using intuition and experience to test software quickly and continually adjust the testing strategy. Software expert Andrew Hunt points out that intuition is a synonym for tacit knowledge and an outstanding capability of the brain's rich model. If humans use only the brain's linear model (the explicit knowledge, abstraction, and logic that can be expressed in language) and ignore the power of the rich model, we waste enormous potential [hunt08].
However, human beings are imperfect, and intuition can embody cognitive bias or simply be wrong. This leads to the key difference between exploratory testing and ad hoc testing: exploratory testing is testing with reflection. In exploratory testing, the tester constantly proposes hypotheses, uses tests to probe them, and analyzes the results to confirm or overturn them. Throughout this process he continuously improves the product model and test model in his mind, then uses those models, together with skill and experience, to drive further testing. By running test learning, design, execution, and result analysis in parallel as mutually supporting activities, exploratory testing constantly optimizes the test model, the test design, and the value of the testing. Because test design and test execution switch rapidly, many people mistakenly conclude that exploratory testing involves no planning or design; in fact, these activities are divided into tiny time slices and executed over and over.
Ad hoc testing often applies error guessing, typical risks, and common attacks to test software quickly and expose many errors in a short time. However, it does not emphasize systematic or complete coverage: the risk of test omission is high, and defects that require in-depth investigation are hard to find. Exploratory testing, by contrast, builds a thorough understanding of the product under test through the testing itself, extends the breadth and depth of the testing, and continuously optimizes its value.
Q: If exploratory testing is one face of the coin, what is on the other face?
A: The opposite of exploratory testing is scripted testing. Scripted testing requires that test scripts be prepared in advance; the scripts specify how to configure the software under test, what inputs to supply, and how to determine whether the software produces the correct outputs. Writing detailed scripts usually consumes substantial testing resources.
Used properly, scripted testing offers several potential benefits [kaner08]:
- Testers can think carefully, in advance, about the software to be tested.
- Test scripts can be reviewed by project stakeholders.
- Test scripts can be reused.
- The test team can evaluate the completeness of the test script set.
- The test team can measure the execution of the test scripts to assess test progress.
Q: Why does this book oppose scripted testing?
A: I do not oppose any testing idea or method; what I oppose is abusing an idea or method without regard for context. For example, rigidly requiring detailed test scripts may introduce the following risks.
- A large amount of test resource is spent on test design before any test is executed. But product development is often unpredictable, and up-front design cannot respond effectively to dynamic change. The team may spend a great deal of time only to obtain a batch of defect-ridden test scripts.
- Overly detailed test scripts reduce the flexibility of test execution and turn it into a monotonous process. Testers may turn a blind eye to obvious errors simply because those errors fall outside what the scripts check. Moreover, test execution is an excellent opportunity to observe software behavior, gain test inspiration, and design new test cases, which requires testers to stay focused, flexible, and responsive; tedious script execution makes these goals hard to achieve.
- A large body of detailed test scripts carries a heavy maintenance cost. Under schedule pressure, testers may lack the time to update the scripts, so the scripts cannot evolve with the requirements and the product. Over time, many scripts become outdated, unattended "furniture," and the test assets that consumed so many resources lose nearly all their value.
- Requiring testers to write exhaustive scripts can create a harmful psychological suggestion: testers may unconsciously treat script writing as the purpose of testing rather than a tool that assists it. They may blindly pursue script counts, pay little attention to product and project risks, and derive a false sense of security from a seemingly "complete" script set. Worse, test leaders obsessed with script counts may encourage a documentation-centered test process, further reducing the value of the testing.
Rather than the "coin" metaphor, the author prefers Cem Kaner's view: "Pure exploratory testing and pure scripted testing are like the two ends of a continuum. In practice, most testers fall somewhere between the two, but most excellent testing sits quite close to the exploratory end." [kaner09]
Moreover, following the basic principles of context-driven testing, testers should regularly ask, in light of the current context: should the testing strategy lean toward exploratory testing or scripted testing? And how can the strengths of both be combined to achieve better results?
This article is excerpted from the book "The path to exploratory testing practices"
Shi Liang, Gao Xiang
Published by Electronic Industry Publishing House
[1] In session-based test management, a session is a period of time dedicated to exploratory testing. The Chinese term commonly used to translate "session" is not suitable in this context; after careful consideration, the author renders it as "test schedule" to convey that it is a test-focused period of time.
[2] http://en.wikipedia.org/wiki/SMART_criteria