Thursday, April 14, 2011

How much code coverage do you really need?

This post was prompted by reading a number of categorical tweets from @unclebobmartin. In case you’re not familiar with Uncle Bob: he is one of the most prominent software industry experts, the author of Clean Code, and a signatory of the Agile Manifesto. In the late nineties he did profound work on documenting best OO practices (SRP, open/closed, interface segregation, etc.). So when he speaks, it’s worth at least some consideration.

He takes a maximalist approach to TDD and to unit testing in general, as can be clearly seen from his tweets:
“Two things. Repeatability and cost. Manual tests are horrifically expensive compared to automated tests.”
“Manual tests aren't tests; they are trials. And since there are humans involved, they are suspect.”
“What you are telling me is that I should be open to the possibility that some code shouldn't be tested. Hmmm..”
“100% code coverage isn't an achievement, it's a minimum requirement. If you write a line of code, you'd better test it.”

He goes on to compare software testing with other mundane but critical activities that are considered mandatory in other fields:
“A surgeon on the battlefield may not have time to wash thoroughly, but the risk of death and cost of treatment will be high.”
“Do accountants cover only 80% of their spreadsheets with double entry bookkeeping?”
“How many times have you seen major outages that were due to some silly code that some silly programmer thought wasn't worth testing?”
 
While all these points certainly have merit, they show only one side of the picture. The reality is that not all applications require such meticulous testing. Not all applications are as important as surgery on a battlefield or the accounting of big $$$ (not to mention the “creative” accounting employed in many cases :).

An even more important point is that thorough code coverage does not guarantee the absence of bugs. Even Uncle Bob admits that:
“Tests cannot prove the absence of bugs. But tests can prove that code behaves as expected.”
This is obvious when you consider that the same misconceptions and logical mistakes the developer put into the code are unlikely to be discovered by that same developer when testing his own code.
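
A contrived sketch of what I mean (the function and the numbers are made up, Python used just for illustration): the code below has 100% line and branch coverage, yet a boundary bug survives, because the test encodes the same misreading of the requirement as the code.

    def discounted_total(quantity, unit_price):
        # Requirement: orders of 10 items OR MORE get a 5% discount.
        # The developer misread it as "more than 10", and so did the test.
        total = quantity * unit_price
        if quantity > 10:  # bug: should be >= 10
            total *= 0.95
        return total

    def test_discounted_total():
        # Both branches are exercised, so the coverage report shows 100%,
        # yet the boundary case (exactly 10 items) is silently wrong.
        assert discounted_total(5, 2.0) == 10.0        # no discount
        assert discounted_total(20, 1.0) == 20 * 0.95  # discount applied

    test_discounted_total()  # passes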

In the end it all boils down to ROI and pragmatism. Some apps need more testing than others. Some modules need more testing than others. Some bugs need more fixing than others. There will always be a judgment call about whether additional time and money spent on automated testing and coverage are justified or are just a premature optimization.

12 comments:

  1. "Premature optimization" seems to be a misnomer in this context. It generally refers to optimization of program performance, while automated testing is instead an optimization of development process.

    I feel that it is confusing and misleading when used here.

  2. You use the word profound much too easily. Uncle Bob isn't actually that important in the great balance; he's just some zealot.

  3. Most programmers learn over the years how to program and we don't need all these gurus. It won't take a young programmer long to learn how much risk to take. If we are talking about video games... don't make generalizations.

  4. There is one idea missing in this text. Unit tests aren't made just to check that your code is correct. They are also there to check that code later modified by others won't break a feature.

  5. Thomas Langston: I used "premature optimization" in the sense of spending extra resources on optimizing something that does not really need optimizing. In this case it's premature QUALITY optimization. Just as with performance optimization, it's wasteful to speed up some rarely used part of the code.

  6. TerryD: you wrote "Most programmers learn over the years how to program and we don't need all these gurus."

    I'm not so sure about that. I've seen "serial perfectionists" as well as their opposites with a considerable amount of experience. It's personality traits, more than experience, that determine one's approach to programming.

  7. John Haugeland: I agree that Uncle Bob is not a figure of paramount importance on the grand scale, but his opinions still carry some weight.

  8. b.hoessen: Actually, I mentioned "perimeter testing". In my experience this is indispensable for enabling future refactoring. Once you verify the main workflows end-to-end, you're relatively free to change things inside as long as the external behavior does not change.
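
    A rough sketch of what I mean (OrderService and its methods are made up here, just to keep the example self-contained): the test drives the main order workflow through the public API only, so everything underneath can be reorganized freely without touching the test.

        class OrderService:
            # Hypothetical facade, defined only to make the sketch self-contained.
            PRICES = {"book": 12.5}

            def __init__(self):
                self._orders = {}

            def place_order(self, customer, items):
                # items: list of (product_name, quantity) pairs
                total = sum(self.PRICES[name] * qty for name, qty in items)
                order_id = len(self._orders) + 1
                self._orders[order_id] = {"customer": customer, "status": "CONFIRMED", "total": total}
                return order_id

            def get_order(self, order_id):
                return self._orders[order_id]

        # Perimeter test: exercises the main workflow end-to-end through the
        # public API; pricing, storage, etc. can be refactored underneath it.
        def test_place_order_workflow():
            service = OrderService()
            order_id = service.place_order(customer="alice", items=[("book", 2)])
            order = service.get_order(order_id)
            assert order["status"] == "CONFIRMED"
            assert order["total"] == 25.0

        test_place_order_workflow()  # stays green across internal refactorings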

  9. The problem with this argument is that it's often impossible to tell ahead of time precisely which code is going to end up being business-critical; the assumptions going into a project often don't hold even half-way through it, and that applies at every level of detail. If everything isn't tested, then the best you can say is that you might get lucky.
