There is a lot of misunderstanding about software testing, so below we'll walk through some common misconceptions. If you or your team hold any of these when writing tests, this article should help you decide when testing is a good fit and when it is not.
Misconception one: Tests show that my code is correct!
While this misconception feels intuitively right, you really cannot rely on tests to establish any form of rigorous correctness. Each test you write exercises just one of the program's possible scenarios. For most program units there are infinitely many (or at least intractably many) possible scenarios, so testing them all is infeasible; the typical response is to test error cases, boundary conditions, and a handful of "ordinary" cases to build confidence that everything is OK.
If your goal is correctness, that is nowhere near enough. It is fairly easy to build a suite of tests that always passes even though the program is still full of bugs. Some bugs are practically impossible to catch with tests at all: race conditions and other concurrency errors are the classic example. Even if you had control over the scheduler, the number of possible interleavings grows so quickly that reliable testing rapidly becomes impossible.
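As an illustration, here is a minimal, hypothetical Haskell sketch (the counter and names are mine, not from the article): the shared counter below has a lost-update race, and whether a test of it passes depends on how the scheduler happens to interleave the two threads, which is exactly why such bugs resist testing.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Monad (replicateM_)
import Data.IORef (IORef, newIORef, readIORef, writeIORef)

-- A non-atomic increment: the read and the write are separate steps,
-- so another thread can interleave between them and an update is lost.
increment :: IORef Int -> IO ()
increment ref = do
  n <- readIORef ref
  writeIORef ref (n + 1)

main :: IO ()
main = do
  counter <- newIORef 0
  d1 <- newEmptyMVar
  d2 <- newEmptyMVar
  let worker done = do
        replicateM_ 100000 (increment counter)
        putMVar done ()
  _ <- forkIO (worker d1)
  _ <- forkIO (worker d2)
  takeMVar d1
  takeMVar d2
  total <- readIORef counter
  -- Built with -threaded and run on multiple cores, this check may pass
  -- on one run and fail on the next: the scheduler, not the test suite,
  -- decides whether the bug shows up.
  putStrLn $ if total == 200000
               then "test passed (this time)"
               else "lost updates: expected 200000, got " ++ show total
```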
So tests cannot demonstrate correctness, except in the most trivial cases, where the behaviour of the unit can be completely specified by its tests. And in those cases it is often not worth writing the tests in the first place, precisely because the code under test is trivial! All that testing trivial code achieves is extra maintenance overhead and extra work for whoever maintains the tests.
Since tests are just code, your tests can have bugs too. If the same person writes both the code and its tests, they can easily implement a unit incorrectly and then write a test that locks the incorrect behaviour in. The root cause here is that the developer misunderstood the specification, rather than a small slip made during implementation, so the test encodes the very same misunderstanding.
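For instance (a hypothetical sketch; the `shippingFee` function and its tests are made up for illustration): if the developer misreads the specification, the test can assert exactly the behaviour the bug produces, and the suite passes anyway.

```haskell
import Test.HUnit

-- Hypothetical spec: orders of 100 or more ship for free.
-- The developer misreads "100 or more" as "more than 100" ...
shippingFee :: Int -> Int
shippingFee total
  | total > 100 = 0    -- bug: the spec calls for  total >= 100
  | otherwise   = 10

-- ... and, holding the same misreading, writes a test that asserts the
-- buggy behaviour at the boundary, so the whole suite still passes.
tests :: Test
tests = TestList
  [ TestCase (assertEqual "small order pays shipping" 10 (shippingFee 50))
  , TestCase (assertEqual "exactly 100"               10 (shippingFee 100)) -- locks in the bug
  , TestCase (assertEqual "large order ships free"     0 (shippingFee 200))
  ]

main :: IO ()
main = runTestTT tests >> return ()
```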
If you really need to be sure of correctness, formally verify your code (today's verification tools are much better than they used to be). If you don't need that level of assurance, then write tests, but keep in mind that tests are like a smoke alarm: they warn you about some fires, yet fail to detect a wide variety of other problems.
Misconception two: Tests are an executable specification!
I think this view is wrong for several reasons. Let's look at my dictionary's definition of a specification:
A set of requirements that exactly describes an object or process.
So if my code satisfies its specification, it should be completely correct, because the specification defines the code's behaviour exactly. If my tests were a specification, then passing them would have to establish correctness. As we have already discussed, tests can do no such thing, so tests are not a specification.
Looking at how this plays out in practice: a developer is expected to infer a function's intended behaviour by reading its test cases, but tests introduce plenty of ambiguity. If the cases are not comprehensive enough, the reader can easily reach the wrong conclusion, one that is sometimes only subtly different from the intended behaviour.
In addition, nothing checks test cases for consistency. Due to developer error or misunderstanding, a test may actually "specify" unintended behaviour, which can leave your tests, and therefore your supposed specification, internally inconsistent.
Randomized testing tools such as QuickCheck make writing tests very easy: you state a Boolean property that should hold, and the tool generates test cases for you. This moves testing closer to an executable specification, but it still does not check the properties for consistency.
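A minimal QuickCheck sketch (the property name is mine, chosen only for illustration) looks like this: we state a Boolean property and let the tool generate the cases.

```haskell
import Test.QuickCheck

-- Property: reversing a list twice gives back the original list.
-- We only state the Boolean property; QuickCheck generates the inputs.
prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheck prop_reverseTwice
-- Typical output: "+++ OK, passed 100 tests."
-- Each property is checked only against the implementation; nothing
-- checks the properties against each other for consistency.
```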
Misconception three: Testing gives us a good design!
A poor design can still be perfectly testable, so testing is no substitute for good design practice. Writing a lot of tests against a system's interfaces actually increases the investment developers have made in those interfaces. The problem arises when those interfaces turn out not to be the best choice after all, and a pile of tests has already been written against them. Changing an interface also means changing every test that exercises it, and because the tests are tightly coupled to the interfaces, most of them will have to be discarded and rewritten. Since most developers are reluctant to throw away work they have already done, poor design decisions tend to linger throughout the project's life cycle, even when they are clearly no longer the most appropriate.
The solution I suggest is to start testing only after you have worked through a series of prototypes. That way you aren't testing code that is likely to be heavily refactored later. Otherwise everyone, developers and testers alike, ends up doing extra work, and when requirements or interfaces change, developers have to throw away hours of effort, which is painful. And if you don't wait to test, your tests can actually lead to worse design, because developers will shy away from any major refactoring.
In addition, it can be hard to make code testable. People often adopt questionable design decisions just to make testing easier: writing piles of mock implementations of interfaces, or test cases containing so much code that the test code itself practically needs testing, or designs that leak their abstractions. Mock objects and stubs frequently suffer from exactly these problems.
Misconception four: Testing makes it easier to change code!
Testing does not always make code easier to change. If you are changing the implementation behind an interface, tests can indeed help you catch regressions or unexpected behaviour in the new implementation. But the more common case is changing a program's higher-level structure, and tests are usually tightly coupled to those higher-level interfaces: changing the interfaces means rewriting the tests. In that situation you get the worst of both worlds. Rewriting the tests adds work, and the old tests do nothing to assure you that you haven't introduced regressions, so the tests don't help at all.
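To make the first half of that point concrete, here is a hedged sketch (the `sortInts` function and its tests are hypothetical, not from the article): tests written against a stable exposed function keep catching regressions when the implementation behind it is replaced, but they would all need rewriting if the function's interface itself changed.

```haskell
import Test.HUnit

-- The exposed interface: sort a list of Ints.
-- Swapping this insertion sort for, say, a merge sort leaves the tests
-- below untouched, so they catch regressions in the new code for free.
sortInts :: [Int] -> [Int]
sortInts = foldr insert []
  where
    insert x []                 = [x]
    insert x (y:ys) | x <= y    = x : y : ys
                    | otherwise = y : insert x ys

tests :: Test
tests = TestList
  [ TestCase (assertEqual "already sorted" [1,2,3] (sortInts [1,2,3]))
  , TestCase (assertEqual "reverse order"  [1,2,3] (sortInts [3,2,1]))
  , TestCase (assertEqual "duplicates"     [1,1,2] (sortInts [2,1,1]))
  ]

main :: IO ()
main = runTestTT tests >> return ()
```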
So, don't write tests?
I'm not saying you shouldn't write tests. Testing is a valuable way to boost confidence and guard against regressions. But as explained above, testing is not a magic route to good design, correctness, specifications, or easy refactoring, and overusing tests can make development harder, not easier.
Similarly, not checking your code at all makes quality assurance impossible, but it does make rapid prototyping easier. Testing is a trade-off between quality assurance and flexibility, and we have to strike an appropriate balance between the two.
About the author
Liam O'Connor has worked for Google and teaches at the University of New South Wales. Recently he started working on the L4.verified project at NICTA, which formally verifies an operating system kernel. NICTA is Australia's leading ICT (Information and Communications Technology) research institute.