Establishing Unit Test Criteria

Translated by Chen Nengji from "Establishing Unit Test Criteria" by Alan S. Koch.

A build is a new version of the software. So what should go into it? Obviously, the latest and best version of each module, right? "Latest and best" rests on the assumption that the latest version is the best version: the latest version adds features and corrects problems, so it should improve on the one before it. Why wouldn't it be the best? In practice, things are not that simple. New features may conflict with existing ones. Things users depend on may disappear. New features may reduce usability, especially for new users. And bugs tend to cluster in newly changed code. So how do we decide whether the latest really is the best? How do we know the code is actually ready to go into the next build? Many development teams establish promotion criteria to answer this question: a promotion criterion is the policy by which a module is judged ready to be included in a build.
Unit Test Standards

Although many other things can go into your promotion criteria, unit testing is the foundation of all of them. Almost every organization assumes that its developers do appropriate unit testing, but unfortunately, different people have very different ideas of what "appropriate" means. Good practice asks developers to document their tests and to have those tests peer reviewed to ensure proper coverage. If testing is automated, developers can simply create test scripts in their development tools and submit those scripts for review. Of course, the criteria for what a unit test must include have to be set by the group. As a development team, reaching agreement on what testing should be done takes time and involves trade-offs, but the time spent here is repaid many times over in correct builds! Let's look at some examples of unit test expectations.
Functionality

Of course, each module must be tested to ensure that it meets its design requirements and correctly does what it is supposed to do. What input should it process? What must it do with that input? What services does it provide? What output should it produce? What data must it manage, and how should that data be used? We must verify that the module actually does what it needs to do.
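As a minimal sketch of this kind of functional testing, consider a hypothetical `parse_price` function (the function and its contract are invented for illustration, not taken from the article). Each test pins down one answer to the questions above: what input the module accepts and what output it must produce.

```python
import unittest

def parse_price(text):
    """Hypothetical module under test: convert a price string such as
    '$1,234.50' into a float number of dollars."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    return float(cleaned)

class TestParsePriceFunctionality(unittest.TestCase):
    # What input should it process?
    def test_accepts_dollar_sign_and_commas(self):
        self.assertEqual(parse_price("$1,234.50"), 1234.50)

    # What output should it produce?
    def test_returns_plain_float_for_simple_input(self):
        self.assertEqual(parse_price("19.99"), 19.99)

    # What must it do with surrounding whitespace?
    def test_ignores_leading_and_trailing_whitespace(self):
        self.assertEqual(parse_price("  $5.00  "), 5.00)

# Run the suite without exiting, so the result can be inspected or reviewed.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestParsePriceFunctionality)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

A script like this is exactly the kind of artifact a developer can submit for peer review: the test names document which behaviors were checked.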
Negative Testing

Then there is error handling. When an error occurs, does the module do the "right" thing? What happens when it is given unusual input? What happens when part of the data is missing, or the data arrives out of order? What if a non-numeric value appears where a number is required? What about data overflow? What if it receives an error status from a database or network interface? What does it do then? A module must handle all of its error conditions correctly before it can be considered complete.
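A sketch of negative testing, using a hypothetical `safe_divide` whose contract (again, invented for illustration) is to reject bad input with a clear `ValueError` rather than crash with a raw `TypeError` or `ZeroDivisionError`:

```python
import unittest

def safe_divide(a, b):
    """Hypothetical module under test: divide a by b, raising ValueError
    (not a raw ZeroDivisionError or TypeError) on bad input."""
    if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
        raise ValueError("both arguments must be numbers")
    if b == 0:
        raise ValueError("divisor must be non-zero")
    return a / b

class TestSafeDivideNegative(unittest.TestCase):
    # What if a non-numeric value appears where a number is required?
    def test_rejects_non_numeric_input(self):
        with self.assertRaises(ValueError):
            safe_divide("10", 2)

    # Does the module do the "right" thing on a zero divisor?
    def test_rejects_zero_divisor(self):
        with self.assertRaises(ValueError):
            safe_divide(10, 0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSafeDivideNegative)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The point is that the error paths get the same deliberate, named test cases as the happy path.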
Coverage

We all know that exhaustive testing is not a reasonable goal for software testing. There are too many input combinations, too many possible orderings of events, and too many distinct ways to fail; it is impossible to test everything, even for a very small program. However, code and path coverage are goals that unit testing can achieve. In fact, the unit test phase is the only time when full code and path coverage is a reasonable target.

- Code coverage during unit testing means that every line of code is executed. For most code this is not a problem (and many analysis tools can help us verify it). Some code, especially error handling, cannot be reached without extra steps, such as writing a stub that passes in bad data or injecting an error condition into memory. These steps are not only feasible, they are critical for ensuring that the program handles every situation it is supposed to handle!
- Beyond line coverage, it is reasonable to test every path through the code. For example, we can make sure that both branches of every "if" are taken and that every arm of every "case" is executed. We can also verify that each loop's initialization and termination conditions are correct.
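Branch coverage can be illustrated with a tiny hypothetical `classify` function: it has three branches, so one test input per branch is enough to execute every path at least once.

```python
def classify(n):
    """Hypothetical module under test: label an integer."""
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    else:
        return "positive"

# One input per branch gives 100% branch coverage of classify():
# the if, the elif, and the else arm are each executed exactly once.
branch_tests = {-5: "negative", 0: "zero", 7: "positive"}
for value, expected in branch_tests.items():
    assert classify(value) == expected
```

In real code the branch set is rarely this obvious, which is why tools such as coverage.py exist to confirm mechanically which lines and branches a test suite actually executed.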
Regression Testing

All of the above applies to freshly written code. But how much testing is needed when a code module is changed just a little? How much regression testing should happen at the unit level? It is easy to reason, "It's only a small change, so it isn't worth much testing time." Indeed, we cannot run a complete test suite for every one-line change. At the same time, though, these "small" changes often have subtle, significant, and unexpected effects. The best approach is to scale the regression testing to the change, combining risk-based testing with impact-area testing.
Risk-Based Testing

Risk-based testing means selecting tests according to the risk that a defect is present. Risk has two components: likelihood and impact. Likelihood is the chance that a change introduces a problem; we should test where problems are most likely to occur. Reviewing the code change is one way to judge this; another is to look at how similar changes have fared in the past. Impact is how much damage an error in that part of the program would cause, regardless of its likelihood. High-impact areas deserve repeated testing: for example, the core functions a program uses most often, and any area that affects safety or security.
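A back-of-the-envelope sketch of risk-based test selection (the modules, scores, and the 1-to-5 scales are all invented for illustration): score each changed area on likelihood and impact, multiply, and test the riskiest areas first.

```python
# Hypothetical changed modules, each scored 1-5 for likelihood of a
# defect and 1-5 for the impact a defect there would have.
changes = [
    {"module": "billing_core",  "likelihood": 3, "impact": 5},
    {"module": "help_tooltips", "likelihood": 4, "impact": 1},
    {"module": "login",         "likelihood": 2, "impact": 5},
]

# Risk = likelihood x impact; spend testing effort on the highest scores.
for change in changes:
    change["risk"] = change["likelihood"] * change["impact"]

test_order = sorted(changes, key=lambda c: c["risk"], reverse=True)
print([c["module"] for c in test_order])
# billing_core (risk 15) before login (10) before help_tooltips (4)
```

A low-impact area with a high likelihood of breakage (the tooltips here) can still rank below a high-impact area, which is exactly the trade-off the two-component model captures.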
Impact-Area Testing

Impact-area testing focuses testing on the region of the code that changed. For example:

- A developer should fully cover any added or changed code in their tests.
- Likewise, they should test every path affected by the change.
- In addition, developers should run somewhat less rigorous tests on the areas associated with the modified code. For example, if a change alters a function's parameter set, everything that uses those parameters should be tested.
Objective Evidence of Unit Testing

All of this testing is well worth planning, but we also need some objective way to verify that the tests were actually run. What evidence should be collected and kept to show that the developer performed those tests? What results were expected? How do we know that all the tests we decided to run were in fact run, and run correctly? Obviously, the verification burden we place on developers should be reasonable and weighed against the risk of tests being skipped. We don't want to impose unnecessary work; we just want to be sure that when a developer says a module is ready for promotion, we all know what that means.