Whether automated testing actually improves development efficiency is the key measure of its effectiveness. When that effectiveness falls short, what might be the causes? Misunderstandings about how automated testing works, missing preconditions for automated test analysis, arbitrariness in automated test analysis and design, low standards for automated test development and maintenance, weak acceptance criteria for automated test assets ... In this article I will briefly share my own views on the effectiveness of automated testing.
Viewpoint one: A lone tree can hardly survive in the desert; only dense planting makes a forest
A tree planted alone in the middle of a desert will die; it is only a matter of time. Even with enough resources poured in to keep it alive, it serves no purpose beyond adding a touch of green to a passing photographer's composition. If we want the tree to live long and expect it to improve its ecology, it must be rooted in a forest. Automated testing, especially front-end automation testing, is the same: cut off from the other levels of automated testing and their supporting technical means, it is like that tree in the desert and will soon wither.
On the Friday afternoon before the May Day holiday, I was reviewing a project's performance test requirements with two development managers. During a break the conversation turned to the continuously red CI of a public-facing system that one of them was responsible for. He admitted that he distrusts automated testing: since automation never finds any problems, he doubts the value of CI and is unwilling to devote much energy to it. I understand his feelings and share his concern about the effectiveness of automated tests, but that does not make me agree with his attitude toward CI and automated testing. Without automated testing and without CI, we can hardly expect more efficient development or better quality assurance. He lacked the patience to plant a forest, so the lone tree of automated testing in his mind died, and he then denied the meaning of the whole forest. That is not right.
Viewpoint two: Lower the expectations in your heart, raise the standard of your targets
Why do so many people doubt how far automated testing can be trusted? It stems from a piece of seemingly-true nonsense: automation is not there to detect defects, but to verify the impact associated with system changes and to increase confidence in quality. The intended message is: do not hold the results of automated testing to too high an expectation, because there is a gap between machine intelligence and human judgment. That is understandable, but the phrase has now become an excuse for low-quality automated testing.
First of all, many people believe automated testing cannot detect defects because a test script is no substitute for autonomous human thinking; it is not flexible. The reason we need flexibility while executing tests is that the analysis and design stage leaves uncertainty about the design details of the application under test. Non-exploratory, scripted test design means entering precise operations and data and comparing the exact output against expected results; in this mode, no improvisation is needed to discover the problems the script was designed to catch, and problems outside those expectations simply are not discovered.
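To make the scripted mode concrete, here is a minimal sketch in Python with pytest; the discount_price function and its rules are invented for illustration. Precise inputs map to precise expected outputs, and the script can only confirm or refute exactly these expectations:

```python
import pytest

def discount_price(price: float, vip: bool) -> float:
    """Hypothetical function under test: VIP members get 10% off."""
    return round(price * 0.9, 2) if vip else price

# Scripted, non-exploratory design: precise inputs, precise expected outputs.
# The script can only confirm or refute exactly these expectations; anything
# outside them passes unnoticed.
@pytest.mark.parametrize("price, vip, expected", [
    (100.00, True, 90.00),
    (100.00, False, 100.00),
    (0.00, True, 0.00),
])
def test_discount_price(price, vip, expected):
    assert discount_price(price, vip) == expected
```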
Secondly, it must be said that the real difficulty in automated test analysis and design lies in enumerating and selecting among input-condition branches and state-machine combinations, which carries a considerable cost. Unfortunately, when we do automated test analysis and design we are accustomed to relying on experience and subjective feeling, so incomplete coverage and confused priorities appear all the time. This imprecision and incompleteness in analysis and design is therefore one reason automated testing cannot be trusted.
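One lightweight way to replace gut feeling with explicit enumeration is to generate the combination space mechanically and prune it deliberately. A sketch follows; the parameter dimensions and the pruning rule are assumptions for illustration only:

```python
from itertools import product

# Invented dimensions: each one is an input-condition branch of the system.
browsers = ["chrome", "firefox"]
user_roles = ["guest", "member", "admin"]
payment_states = ["unpaid", "paid", "refunded"]

# Full cartesian product: the complete combination space (2 * 3 * 3 = 18).
all_cases = list(product(browsers, user_roles, payment_states))

# Deliberate pruning instead of subjective sampling: keep every combination
# touching the assumed high-risk branches, sample the rest separately.
high_risk = [c for c in all_cases if "admin" in c or "refunded" in c]
print(f"{len(all_cases)} combinations in total, {len(high_risk)} kept as high-risk")
```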
In addition, automation does not even perform well on what it does cover. The author observes that most people do not use automated test scripts for real testing or quality gating; they only make the scripts pass in order to hit a KPI. When an automated run fails, colleagues who are unfamiliar with the suite, or uninterested in it, easily ignore the potential problem, so defects routinely leak through and only surface later.
Finally, by way of analogy, automated testing is like a watchdog. Without one, you may never notice the thief who got into the house, and the question becomes whether there is still time to recover the loss. Whether the dog barks when a thief visits depends on how we train it. We expect the dog to raise an alarm when there is a thief in the house, but we do not expect it to catch the thief for us, and not every bark means a burglary; it may just be a passer-by. Automated testing is similar: wherever it is placed, any inconsistency produces a failure, but whether that failure reveals a real bug is another matter. If you expect automation to solve all your problems for you, you might as well wash up and go to sleep.
All in all, whatever expectations we hold for automated testing, we should set goals and requirements to match, set test targets at the corresponding level, and use it for the corresponding effect. If someone tells you that any breed of dog can be trained into a guard dog, and you then pick a tiny teddy because a big husky eats too much, what can be done about that? If the automated test analysis and design are not rigorous, the scripts are not written to strict coverage standards, and the scripts are not seriously used, then automated testing really is a complete lie.
We can expect automated tests to expose the existence of problems before users meet them, rather than to pinpoint each problem precisely. But even though we do not expect automated tests to find every bug, we must still analyze, design, and use our automated test cases against the standard of finding every bug.
Method one: Make use of the manual test case library
If you have a complete or near-complete manual test case library, it is a great asset when identifying the scope of automation and doing automated test requirements analysis and design: you get twice the result for half the effort. Beyond the list of key functions bound by SLA, we can analyze how often, and at what scale, the cases in the library were executed over the last x months or y versions, from which the most valuable scope for automation is easily derived; other factors can be added to the assessment where available. On that basis, associating or mapping manual test cases to automated test scripts makes it easy to measure metrics such as automated test coverage.
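As a sketch of that selection step, here is a toy prioritization rule; the field names and weighting are assumptions, not an established formula:

```python
from dataclasses import dataclass

@dataclass
class ManualCase:
    case_id: str
    executions_recent: int      # runs over the last x months / y versions
    in_sla_key_functions: bool  # on the SLA-signed key-function list

def automation_value(case: ManualCase) -> float:
    """Toy scoring rule: frequent execution and SLA coverage raise priority."""
    score = float(case.executions_recent)
    if case.in_sla_key_functions:
        score *= 2  # assumed weight; extend with other factors as needed
    return score

library = [
    ManualCase("TC-001", executions_recent=12, in_sla_key_functions=True),
    ManualCase("TC-002", executions_recent=3, in_sla_key_functions=False),
]
# Highest-value candidates for automation come first.
for case in sorted(library, key=automation_value, reverse=True):
    print(case.case_id, automation_value(case))
```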
In addition, manual test cases matter to automated test development in one specific pattern: testers write manual test cases for the product or project according to a fixed specification, and a modeled conversion rule then generates automated test scripts from them, which are afterwards modified and optimized by hand. The advantage of this pattern is that the script developer needs no business knowledge; they simply follow the detailed cases written by the business testers. The deficiency is equally obvious: because the two sides think in different modes, the communication cost of writing the scripts will not be low, and the modeling itself demands a high level of skill.
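A minimal sketch of such a conversion rule, assuming a keyword-driven case format; the step vocabulary, selectors, and the Selenium-style target code are invented for illustration:

```python
# A manual case written to a fixed spec: a list of (keyword, arguments) steps.
manual_case = [
    ("open", ("https://example.com/login",)),
    ("type", ("#username", "alice")),
    ("type", ("#password", "secret")),
    ("click", ("#submit",)),
    ("assert_text", ("#welcome", "Hello, alice")),
]

# The modeled conversion rule: one code template per keyword.
TEMPLATES = {
    "open": 'driver.get("{0}")',
    "type": 'driver.find_element(By.CSS_SELECTOR, "{0}").send_keys("{1}")',
    "click": 'driver.find_element(By.CSS_SELECTOR, "{0}").click()',
    "assert_text": 'assert "{1}" in driver.find_element(By.CSS_SELECTOR, "{0}").text',
}

def generate_script(steps) -> str:
    """Turn spec-conformant manual steps into a draft Selenium script."""
    lines = [
        "from selenium import webdriver",
        "from selenium.webdriver.common.by import By",
        "driver = webdriver.Chrome()",
    ]
    for keyword, args in steps:
        lines.append(TEMPLATES[keyword].format(*args))
    return "\n".join(lines)

# The generated draft is then modified and optimized by hand.
print(generate_script(manual_case))
```

Note how the generator itself needs no business knowledge, which mirrors the pattern's advantage; the hand-optimization step afterwards is exactly where the communication cost described above shows up.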