In a sentence: code coverage tells you what you definitely haven't tested, not what you have.
Part of building a valuable unit test suite is finding the most important, high-risk code and asking hard questions of it. You want to make sure the tough stuff works as a priority. Coverage figures have no notion of the 'importance' of code, nor of the quality of tests.
In my experience, many of the most important tests you'll ever write are tests that barely add any coverage at all (edge cases that add a few extra % here and there, but find loads of bugs).
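To make that concrete, here's a hypothetical sketch (the function and tests are my own invention, not from any particular project): a single happy-path test can reach 100% line coverage, while the edge-case test that adds no new coverage is the one that actually finds the bug.

```python
def average(values):
    """Return the arithmetic mean of a sequence of numbers."""
    # Bug: raises ZeroDivisionError when values is empty.
    return sum(values) / len(values)

def test_average_happy_path():
    # This one test executes every line of average(): 100% line coverage.
    assert average([2, 4, 6]) == 4

def test_average_empty_input():
    # Adds zero new coverage, yet exposes a real crash the
    # coverage number says nothing about.
    try:
        average([])
        crashed = False
    except ZeroDivisionError:
        crashed = True
    assert crashed
```

The coverage report is identical whether or not the second test exists, which is exactly why the number alone can't tell you how well-tested the code is.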
The problem with setting hard (and potentially counter-productive) coverage targets is that developers may have to start bending over backwards to test their code. There's making code testable, and then there's just torture. If you hit 100% coverage with great tests then that's fantastic, but in most situations the extra effort is just not worth it.
Furthermore, people start obsessing over and fiddling with the numbers rather than focusing on quality. I've seen badly written tests that have 90+% coverage, just as I've seen excellent tests that only have 60-70%.
Again, I tend to look at coverage as an indicator of what definitely hasn't been tested.