Good use cases and bad use cases in interface testing


The importance of automated testing is obvious, but automation does not solve every problem, so it is impossible to rely on it entirely; at the same time, having no automation at all is not an option either. In software development projects, relying heavily on manual effort for continuous regression testing is a very tedious and repetitive job. A company has to spend a great deal of time and money maintaining such a team to ensure product quality, while the team members repeat the same work day after day, with little growth and no clear direction.
Although automated testing does not solve every problem, it has one great advantage: "write once, run anytime" (once written, it can be executed repeatedly at will). For this reason, automated testing is usually paired with a continuous integration system (such as Jenkins), just as "fine wine" should be paired with a "luminous cup" to be at its best. This way we avoid discovering, at the last moment before the software goes live or is delivered, that we are stuck in a mire of defects. This is also a key point of agile development: eliminate problems during the process and keep the focus on the incremental changes. In addition, with continuous integration you can decide the frequency and timing of automated test runs according to your needs, for example on every code commit or on a timed schedule.
Everything has two sides: automated testing has many advantages, but of course it also has weaknesses, and the level of automation in many companies is still not high. Analyzing these weaknesses, they fall mainly into the following areas:
It places relatively high demands on testers.
Test cases need to be updated as the product version iterates, which carries a certain maintenance cost.
Test results are not always reliable, because test cases themselves can be "good" or "bad".

The first two points are well-known problems that each company must judge according to its own situation, so I will not repeat them here. Today I mainly want to discuss the third point: how do we ensure that the automated testing we spend time and effort on is effective and actually reflects the quality of the code under test?

First, test cases can also be good or bad

Seeing this title, you may wonder whether a test case can really be good or bad. Indeed it can, so what is a bad use case, and what is a good use case? Let's start with the defining characteristic of a test case:
The fundamental purpose of an automated test, or of a test case, is to judge whether the system under test has a problem; it is the "ruler" against which the product under test is measured. Therefore it has one important property: as long as neither the test script nor the code under test changes, the test result should be stable and unchanging.
By this principle, a "bad use case" does not mean a use case that fails, nor one that passes, but a test case that sometimes passes and sometimes fails under exactly the same conditions. Conversely, a "good use case" is one whose results are stable.
Why are "bad use cases" so destructive? Because if a use case does not produce stable results, it cannot accurately tell us whether a failure is caused by the product code or by the use case itself. If every test result needs repeated analysis before it can explain anything, confidence in the test cases will be lost. In other words, testers and developers will come to treat a failing test case as a "warning" rather than an "error", and in the end the automated tests will be slowly abandoned.

Second, the life cycle of test cases

With the distinction between "good use cases" and "bad use cases", a test case becomes something "alive". In fact, we can plan out the life cycle of a test case from birth to death.

In general, we can classify a use case as "good" or "bad" by its pass rate or by the number of times it has failed. As the number of executions grows, a test case can toggle between the "good" and "bad" states; when a "bad use case" stays bad for a period of time, we can mark it as a "garbage use case" and remove it from the automated execution sequence. "Bad use cases" and "garbage use cases" can be repaired by developers or testers, after which they return to the "unknown" state. Use cases in the "unknown" state then continue to cycle through this life cycle as the number of executions increases, as in the sketch below.
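As a rough illustration of this life cycle, here is a minimal Python sketch. The thresholds (WINDOW, BAD_PASS_RATE, GARBAGE_RUNS) and the class name CaseLifecycle are my own assumptions for the example, not values prescribed above; tune them to your own project.

```python
from collections import deque
from enum import Enum

# Hypothetical thresholds; adjust them to your own pass-rate policy.
WINDOW = 20          # how many recent runs to consider
BAD_PASS_RATE = 0.9  # below this pass rate the case is considered "bad"
GARBAGE_RUNS = 50    # runs spent in the "bad" state before marking as "garbage"

class State(Enum):
    UNKNOWN = "unknown"
    GOOD = "good"
    BAD = "bad"
    GARBAGE = "garbage"   # removed from the automated execution sequence

class CaseLifecycle:
    """Tracks one test case's state from its recent pass/fail history."""

    def __init__(self):
        self.state = State.UNKNOWN
        self.results = deque(maxlen=WINDOW)
        self.runs_while_bad = 0

    def record(self, passed: bool) -> State:
        if self.state is State.GARBAGE:
            return self.state              # no longer executed automatically
        self.results.append(passed)
        pass_rate = sum(self.results) / len(self.results)
        if pass_rate < BAD_PASS_RATE:
            self.state = State.BAD
            self.runs_while_bad += 1
            if self.runs_while_bad >= GARBAGE_RUNS:
                self.state = State.GARBAGE
        else:
            self.state = State.GOOD
            self.runs_while_bad = 0
        return self.state

    def repaired(self):
        """After a developer or tester fixes the case, it returns to 'unknown'."""
        self.state = State.UNKNOWN
        self.results.clear()
        self.runs_while_bad = 0
```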

Third, how to eliminate bad use cases

At this point we understand the "good/bad" distinction and the life cycle of test cases. So how do we ensure use case quality and "eliminate bad use cases"?

1. Find "bad use cases" through CI (continuous integration)
A "bad use case" is one that sometimes passes and sometimes fails, so it is hard to spot when running locally: a "bad use case" has to be executed many times before it can be detected. Executing cases many times is exactly what a CI system does well, so if you are not using one yet, I still recommend adopting a continuous integration tool to run your cases repeatedly, even if your project is small. Another point is that a CI system can execute use cases at different times of the day, and time is another factor that may expose a "bad use case".
Mature CI systems (such as Jenkins) can satisfy most business needs; the sketch below shows the same "run it many times" idea in its simplest form.
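This is not a Jenkins configuration, just a minimal, CI-independent sketch of repeated execution, assuming a pytest project; the test id tests/test_api.py::test_create_order is made up for illustration.

```python
import subprocess

def detect_flakiness(test_id: str, runs: int = 20) -> float:
    """Run one test repeatedly with pytest and return its pass rate."""
    passes = 0
    for _ in range(runs):
        result = subprocess.run(["pytest", test_id, "-q"], capture_output=True)
        if result.returncode == 0:
            passes += 1
    return passes / runs

if __name__ == "__main__":
    rate = detect_flakiness("tests/test_api.py::test_create_order")
    # anything below 100% under unchanged code suggests a "bad use case"
    print(f"pass rate over repeated runs: {rate:.0%}")
```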

2. Preventive measures
You may have heard of the "broken windows theory": when one window of a house is broken and not repaired in time, more windows will soon be broken. The "bad use case" phenomenon works the same way: once a "bad use case" appears, if its repair is not taken seriously, the credibility of the whole test suite, and even of the automated test results as a whole, will decline rapidly.
Adopting zero tolerance toward "bad use cases" helps raise the overall level of automation and quality. You can create a "bad use case" register for testers and developers, automatically track the source of each "bad use case", and urge the person responsible to follow up and resolve it, for example along the lines of the sketch below.
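Here is one way such a register could look as a minimal Python sketch; the file name bad_use_cases.csv, the fields, and the example values are my own assumptions, not part of the original article.

```python
import csv
import datetime
from pathlib import Path

REGISTER = Path("bad_use_cases.csv")   # hypothetical register file

def record_bad_use_case(test_id: str, owner: str, first_seen_build: str) -> None:
    """Append one flaky test to the register so someone is responsible for fixing it."""
    is_new = not REGISTER.exists()
    with REGISTER.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["test_id", "owner", "first_seen_build", "recorded_at"])
        writer.writerow(
            [test_id, owner, first_seen_build, datetime.datetime.now().isoformat()]
        )

# Example: a CI post-build step could call this for every newly detected bad use case.
record_bad_use_case("tests/test_api.py::test_create_order", "alice", "build-1042")
```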

3. Avoid environmental differences
For example, make sure that the environment in which use cases are executed locally is consistent with the environment in which CI executes them; a simple way is to compare a fingerprint of both environments, as sketched below.
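A rough sketch of such a fingerprint, assuming a Python project; which details matter (interpreter version, key package versions, operating system) depends on your own stack, and the package names listed are only examples.

```python
import platform
import sys
from importlib import metadata

def environment_fingerprint(packages=("requests", "pytest")) -> dict:
    """Collect the details most likely to differ between local and CI runs."""
    versions = {}
    for name in packages:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            versions[name] = "not installed"
    return {
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        **versions,
    }

# Print this in both environments and diff the output, or store the CI
# fingerprint and compare against it at the start of a local run.
print(environment_fingerprint())
```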

4. Use asynchronous waits
A test case is typically composed of multiple steps, and each step needs a certain amount of time to complete, so a common practice when writing test cases is to wait for a fixed length of time, such as 5 seconds. However, the same test step can take a different amount of time in each run, and sometimes the difference is significant. This easily causes the next step to start executing before the previous one has completed, producing a "bad use case".
In addition, even when a step does not time out, a fixed wait still wastes time: for example, if a step waits 5 seconds but actually needs only 2, then 3 seconds are wasted.
In theory there are two ways to solve this: callbacks and polling. With a callback, a process or thread notifies the test execution to proceed to the next step as soon as the previous one completes. In practice this approach is rarely used, because it requires tight coupling with the system under test, which may introduce new problems and maintenance costs. So in practice polling is used more often, with the test acting as an "observer": at short intervals it repeatedly queries whether the state required for the next step has been reached, as in the helper sketched below. This avoids a whole class of "bad use cases".
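A minimal polling helper in Python, used in place of a fixed time.sleep(5); the names wait_until and order_exists are my own, and order_exists stands in for whatever check your system under test provides.

```python
import time

def wait_until(condition, timeout: float = 10.0, interval: float = 0.2) -> bool:
    """Poll `condition()` until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)   # small interval between polls
    return False

# Usage: proceed as soon as the expected state is reached, instead of always
# sleeping 5 seconds. `order_exists` is a hypothetical check on the system under test.
# assert wait_until(lambda: order_exists("12345"), timeout=5.0)
```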

5. Solve the problems of parallel execution
If test cases are executed in parallel, make sure that multiple cases do not affect the system under test in conflicting ways and thereby turn each other into "bad use cases". For example, all database operations performed during the execution of a test case can be wrapped in a transaction that is rolled back immediately when the case finishes, as sketched below.
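A rough sketch of the "wrap every case in a transaction and roll it back" idea, assuming pytest; sqlite3 is used only to keep the example self-contained, and a real project would plug in its own database driver or ORM session here.

```python
import sqlite3
import pytest

@pytest.fixture
def db():
    conn = sqlite3.connect("test.db", isolation_level=None)
    conn.execute("BEGIN")          # start an explicit transaction for this case
    try:
        yield conn                 # the test runs its inserts/updates on this connection
    finally:
        conn.rollback()            # undo everything so other cases see a clean state
        conn.close()

def test_create_user(db):
    db.execute("CREATE TABLE IF NOT EXISTS users (name TEXT)")
    db.execute("INSERT INTO users (name) VALUES ('alice')")
    names = [row[0] for row in db.execute("SELECT name FROM users")]
    assert "alice" in names        # visible inside the transaction, rolled back afterwards
```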

6. Avoid test cases that depend on each other
If the use cases in a suite depend on each other, a single "bad use case" among them will make the whole suite unstable. Therefore, try to ensure that each use case in the suite has no dependency on the others and can perform its validation independently, as in the example below.
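A small pytest sketch of removing inter-case dependencies: instead of test_rename_user relying on test_create_user having run first, each case builds the data it needs itself. User, create_user, and rename_user are stand-ins I invented for the system under test.

```python
from dataclasses import dataclass, replace
import pytest

@dataclass(frozen=True)
class User:
    name: str

def create_user(name: str) -> User:
    return User(name=name)

def rename_user(user: User, new_name: str) -> User:
    return replace(user, name=new_name)

@pytest.fixture
def existing_user() -> User:
    return create_user(name="alice")   # each case gets its own fresh user

def test_create_user():
    assert create_user(name="bob").name == "bob"

def test_rename_user(existing_user):
    # passes or fails on its own, regardless of whether test_create_user ran
    assert rename_user(existing_user, "carol").name == "carol"
```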

7. Avoid overly long test scripts
There is no doubt that the more steps a test case has, the higher the probability that it becomes a "bad use case". So, as a rule of thumb, it is best to keep a single test case within about 30 steps.

8. Improve the coding quality of test cases
Besides producing stable results, a "good use case" also needs a sound structural design as well as good readability and maintainability. This places high demands on whoever writes the test cases; of course, the ability to write automated use cases can be improved quickly by reading more, thinking more, and writing more.

This article was reposted from: https://testerhome.com/topics/5921
