1. History and the Triple Fog
In the article "Understanding the Black Swan in software testing", I described what the Black Swan is in software testing and what its characteristics are. This article discusses the story before and after a Black Swan occurs in testing, and how it forms. "History is opaque," writes Nassim Taleb, the author of The Black Swan: you see the results, but you do not see the script behind the events that produced that history. Testing is much the same: if you think of the object under test as a box, that box is also fuzzy; you cannot see what is inside it or how the whole mechanism runs. The book says that when the human mind deals with history, it suffers from three ailments, which Taleb calls the Triple Fog. They are:
The illusion of understanding: everyone thinks he knows what is going on in a world that is more complicated (or more random) than they realize.
The retrospective distortion: we can assess matters only after the fact, as if looking in a rearview mirror (history seems clearer and more organized in hindsight than it ever was in empirical reality).
The overvaluation of factual information, together with the handicap of authoritative and learned people, particularly when they create categories, that is, when they "Platonify".
It is easy to see that the Triple Fog corresponds to the story before and after the Black Swan in testing and to how the Black Swan forms: "blind prediction" before it occurs, "hypothetical explanation" after it occurs, and "Platonicity" in its formation.
2. "Blind prediction" before the Black Swan happened
"The first mist is that we think the world we live in is more comprehensible, understandable and predictable than it actually is." Turn on the radio or TV, and you'll hear or see that every day there are millions of people predicting all sorts of things in confidence: the trend of the stock market, the trend of house prices, whether the war will break out and whether the disease will prevail ...
As Nassim points out, "almost everyone who cares about the situation seems convinced that he knows what is happening." Completely unexpected things happen every day, yet people simply do not realize that they failed to predict them. Many things that happened should have seemed completely crazy beforehand, but after they happened they no longer looked so crazy. This hindsight reduces the perceived rarity of an event and makes it seem explainable. Take the city I live in, Shanghai: many recent events confirm this. Thousands of dead pigs were salvaged from the Huangpu River, H7N9 broke out, and yesterday I was shocked to see a microblog post claiming that "xx had been added to Shanghai's tap water". We would all have predicted that none of these things should happen, but they actually happened.
This "blind prediction" before the Black Swan took place reminds me of the "Test evaluation" before the release of the software test. A product after the Test team's centralized testing, published to the user there, who can accurately predict whether there will be "Black swan"? In your team, do you need a test team to fill out a test assessment sheet on product quality before releasing the release? The following figure is a sample of the test evaluation table.
The "attribute" here refers to a "test feature". Depending on each team's context, a test feature may correspond directly to a development feature, or it may not map one-to-one onto development features; each test feature corresponds to one or more requirements (user requirements or system design requirements). The quality assessment I see most often is "the basic functions of feature xx are normal", and some people also attach the more serious bugs that were found later. Such a description is clearly unsatisfactory.
The problems are:
Is the a/b/c/d grade given for the quality of each feature accurate?
How do the a/b/c/d grades of all the features combine into a conclusion about the quality of the whole system?
If every feature is graded a or b, can the version be released, and after the release will no unexpected "Black Swan" appear?
How much of the a/b/c/d given by testers is based on gut feeling?
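The article leaves open how per-feature grades roll up into a release decision. As a thought experiment only, here is a minimal Python sketch of such an evaluation sheet; the feature names, requirement IDs, and the "every feature must be a or b" release rule are all assumptions, not anything prescribed by a real test process.

```python
# Hypothetical test evaluation sheet: each test feature maps to the
# requirements it covers and the grade (a/b/c/d) the tester assigned.
evaluation_sheet = {
    "login":         {"requirements": ["REQ-101", "REQ-102"], "grade": "a"},
    "file_upload":   {"requirements": ["REQ-210"],            "grade": "b"},
    "report_export": {"requirements": ["REQ-305", "REQ-306"], "grade": "b"},
}

RELEASABLE_GRADES = {"a", "b"}

def naive_release_verdict(sheet: dict) -> bool:
    """Naive aggregation rule: releasable only if every feature is graded a or b."""
    return all(entry["grade"] in RELEASABLE_GRADES for entry in sheet.values())

print(naive_release_verdict(evaluation_sheet))  # True
```

Such a rule is trivially mechanical: it says nothing about how each grade was reached or about the scenarios that were never exercised, which is exactly what the questions above challenge.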
For any requirement, the developer's task is to implement it, whether it is implemented by one project team or by several. For testers, however, the considerations are more complicated. Besides verifying that the requirement itself is implemented correctly, we also need to verify its interactions with other functions and consider the various scenarios in which customers may use it, including different networking scenarios, different parameter configurations, and so on. If you cross every test scenario with every test feature, testing becomes endless, and there is no need to validate the basic functions, exception handling, interactions with other functions, non-functional attributes, and so on in every test scenario. How to design a limited number of effective test cases that maximize coverage is another topic, which this article does not discuss.
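To make the combinatorial point concrete, here is a small Python sketch; the feature, scenario, configuration, and check names are invented for illustration and do not come from the article.

```python
from itertools import product

# Invented test dimensions, purely for illustration.
features  = ["login", "file_upload", "report_export", "user_admin"]
scenarios = ["LAN", "WAN", "behind_proxy", "reconnect_after_outage"]   # networking scenarios
configs   = ["default", "max_connections", "minimal_memory"]           # parameter configurations
checks    = ["basic_function", "exception_handling", "interaction", "non_functional"]

# Exhaustive coverage would mean running every check of every feature
# under every scenario and configuration.
all_cases = list(product(features, scenarios, configs, checks))
print(len(all_cases))  # 4 * 4 * 3 * 4 = 192, and this is only a toy example
```

Even this toy cross-product grows multiplicatively with every new feature, scenario, or parameter, which is why the article argues for a limited, well-designed set of cases rather than exhaustive combination.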
There is no doubt that, no matter how hard test designers and test executors try to cover every possible scenario for a feature, testers still cover only a small part of the scenarios with a limited set of test cases for that feature. If no fatal problem is left open, can the feature's quality be graded a or b, and can you then assume it is safe to release? Rather than asking testers to evaluate the quality of the object under test, the test evaluation really asks them to predict its quality, because testers hold only part of the information, not all of it. "There is an element of wishful thinking and blind hope in these false predictions, but there is also a problem of knowledge," Nassim says. I prefer to believe that testers' predictions of product quality are mainly a "problem of knowledge"; after all, complete testing is impossible. Since testing can never cover everything, it is better to cover the typical scenarios as thoroughly as possible within the limited time and resources, and then evaluate the typical scenarios that have actually been covered. Specifically, the three-step method that James Bach and Michael Bolton present in the RST (Rapid Software Testing) course can serve as a reference: