I've noticed two styles I've used on my last (and first TDD) project. The first is from when I was newer: I would write one absolute-minimum test and make it pass, then move on to another test. Now, still taking small steps, I often find myself writing a complete test, with say five asserts, and then incrementally adding to the method under test to progress past each assert, until all pass. Any comments on this approach?
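To illustrate the second style, here's roughly the shape of it (the Stack class here is just a made-up example, not java.util.Stack; the whole test goes down at once, and the implementation grows until each assert passes in turn):

    import junit.framework.TestCase;

    public class StackTest extends TestCase {
        public void testPushPop() {
            Stack stack = new Stack();
            // All five asserts written up front; isEmpty(), push(),
            // size(), and pop() are then implemented incrementally.
            assertTrue(stack.isEmpty());
            stack.push(42);
            assertFalse(stack.isEmpty());
            assertEquals(1, stack.size());
            assertEquals(42, stack.pop());
            assertTrue(stack.isEmpty());
        }
    }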
================
I've done both, and I feel that the flow is better with tiny tests, but I still find myself doing incremental tests from time to time. One technique that I've found helpful in keeping the tiny-test approach fluid is use of setUp(), and that got easier after Dave Astels' book taught me to write test classes per purpose, rather than per class. So tests that need different setups wind up in different test classes. When I was still in one-test-class-per-class mode, I was more inclined to write bigger test functions, because what belonged in setUp() was getting duplicated in each test. Utility functions also come in handy, encapsulating the assertions that you want to make more generally. For me there's kind of a tradeoff between utility functions and one-test-class-per-purpose: if I want the utility function to do assertions, then it should belong to a subclass of TestCase; and if I want my test classes to call those functions directly, then they want to subclass my utility class, and that soon becomes more structure than I want in my test classes.
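Here's a sketch of what I mean by the first horn of that tradeoff (Range and every name below are invented for illustration): the shared assertion lives in an intermediate TestCase subclass, and the concrete test class inherits it along with the extra structure that entails.

    import junit.framework.TestCase;

    // The utility assertion needs TestCase's assert methods, so it
    // lives in an abstract TestCase subclass.
    abstract class RangeAssertions extends TestCase {
        protected void assertContainsAll(Range range, int[] values) {
            for (int value : values) {
                assertTrue("should contain " + value, range.contains(value));
            }
        }
    }

    public class RangeContainsTest extends RangeAssertions {
        private Range range;

        protected void setUp() {
            // One fixture, used uniformly by every test in this class.
            range = new Range(1, 10);
        }

        public void testContainsEndpoints() {
            assertContainsAll(range, new int[] { 1, 10 });
        }
    }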
======================
One test class per purpose, and not per class? Please elucidate. What defines a purpose, and how do you organize your tests? Does one class have multiple purposes, or does one purpose span multiple classes?
====================
A simplistic, mechanical answer, and one that serves me reasonably well, is that the setup defines the purpose. If I need different setup, that drives me to create a different test class. I don't have Astels' book in front of me, but I do recommend it for its explanation of this, which is much better than I can do here. My practice is not as absolute as maybe my remarks suggested; sometimes my test classes will include cases with different (inline) setup. But when I see too much of that, or duplicated setup in multiple tests, that's the code asking me to sprout a new test class. Or a utility function, depending. On what, I can't really say.
Well, the tests are never far from the class (same package, with a Test suffix), and Eclipse lets me find references, so I never have a problem finding the test class. Regarding naming: take Range, for example. I might have RangeTest, covering contains(), isEmpty(), and encompass(), whose setUp() establishes a single range, or a few ranges, for which those functions can be evaluated, plus RangeInteractionTest, which tests intersects(), intersection(), and other, well, range interactions. I'd have to look at my code to find real examples; no time right now. But that should give you an idea. Really, though: check out the book.
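In sketch form (reconstructed from memory, not my actual code; the constructor arguments and the Range.equals() behavior are assumptions):

    // RangeTest.java -- fixture: a single range, enough context for
    // contains(), isEmpty(), encompass().
    import junit.framework.TestCase;

    public class RangeTest extends TestCase {
        private Range range;

        protected void setUp() {
            range = new Range(1, 10);
        }

        public void testContains() {
            assertTrue(range.contains(5));
            assertFalse(range.contains(11));
        }

        public void testIsEmpty() {
            assertFalse(range.isEmpty());
        }
    }

    // RangeInteractionTest.java -- a different fixture: a pair of
    // ranges, because every interaction test needs two ranges
    // arranged the same way.
    import junit.framework.TestCase;

    public class RangeInteractionTest extends TestCase {
        private Range left;
        private Range right;

        protected void setUp() {
            left = new Range(1, 10);
            right = new Range(5, 15);
        }

        public void testIntersects() {
            assertTrue(left.intersects(right));
        }

        public void testIntersection() {
            assertEquals(new Range(5, 10), left.intersection(right));
        }
    }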
================
From Dave Astels' Test-Driven Development: A Practical Guide, pp. 74-76:

Let's begin by considering TestCase. It is used to group related tests together. But what does "related" mean? It is often misunderstood to mean all tests for a specific class or specific group of related classes. This misunderstanding is reinforced by some of the IDE plug-ins that will generate a TestCase for a specified class, creating a test method for each method in the target class. These test creation facilities are overly simplistic at best, and misleading at worst. They reinforce the view that you should have a TestCase for each class being tested, and a test for each method in those classes. But that approach has nothing to do with TDD, so we won't discuss it further. This structural correspondence of tests misses the point. You should write tests for behaviors, not methods. A test method should test a single behavior...

TestCase is a mechanism to allow _fixture_ reuse. Each TestCase subclass represents a fixture, and contains a group of tests that run in the context of that fixture. A fixture is the set of preconditions and assumptions with which a test is run. It is the runtime context for the test, embodied in the instance variables of the TestCase, the code in the setUp() method, and any variables and setup code local to the test method...

Instead of using TestCase to group tests for a given class, try thinking about it as a way to group tests that need to be set up in exactly the same way. A measure of how well your TestCase is mapping to the requirements of a single fixture is how uniformly that fixture (as described by the setUp() method) is used by all of the test methods. Whenever you discover that your setUp() method contains code for some of your test methods, and different code for other test methods, consider it a smell that indicates that you should refactor the TestCase into two or more TestCases. Once you get the hang of defining TestCases this narrowly, you will find that they are easier to understand and maintain.
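To make the smell concrete, here's a small invented example (Account and all of its methods are made up) of the refactoring the passage describes. Before, setUp() builds two fixtures, but each test uses only one:

    import junit.framework.TestCase;

    public class AccountTest extends TestCase {
        private Account empty;
        private Account funded;

        protected void setUp() {
            empty = new Account();    // used only by testNewAccountHasZeroBalance()
            funded = new Account();   // used only by testWithdrawReducesBalance()
            funded.deposit(100);
        }

        public void testNewAccountHasZeroBalance() {
            assertEquals(0, empty.balance());
        }

        public void testWithdrawReducesBalance() {
            funded.withdraw(40);
            assertEquals(60, funded.balance());
        }
    }

After, one TestCase per fixture, each setUp() used uniformly by every test:

    // EmptyAccountTest.java
    import junit.framework.TestCase;

    public class EmptyAccountTest extends TestCase {
        private Account account;

        protected void setUp() {
            account = new Account();
        }

        public void testHasZeroBalance() {
            assertEquals(0, account.balance());
        }
    }

    // FundedAccountTest.java
    import junit.framework.TestCase;

    public class FundedAccountTest extends TestCase {
        private Account account;

        protected void setUp() {
            account = new Account();
            account.deposit(100);
        }

        public void testWithdrawReducesBalance() {
            account.withdraw(40);
            assertEquals(60, account.balance());
        }
    }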