Why Tests Don't Pass


CAST, July 2009; PNSQC, October 2009


Most testers think of tests as passing or failing: either they found a bug or they did not. Unfortunately, experience shows repeatedly that passing a test does not really mean there is no bug. It is quite possible for a test to surface an error that goes undetected at the time. It is also possible for bugs to exist in the feature being tested even though the test of that capability passed. Passing really only means that we did not notice anything interesting.
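
As a hypothetical illustration (not taken from the paper), the Python sketch below shows one way a test can pass while a bug remains: the check is too weak to notice a rounding error. The function, values, and bug are invented for this example.

def apply_discount(price, percent):
    # Bug: integer division truncates the discount on non-round prices.
    return price - (price * percent // 100)

def test_apply_discount():
    # Incomplete oracle: it only confirms the price went down, so the
    # truncation error in apply_discount(19.99, 15) goes unnoticed.
    discounted = apply_discount(19.99, 15)
    assert discounted < 19.99

if __name__ == "__main__":
    test_apply_discount()
    # "Passing" here only means nothing interesting was noticed.
    print("test passed")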

Likewise, failing a test is no guarantee that a bug is present. There could be a bug in the test itself, a configuration problem, corrupted data, or a host of other explainable reasons that do not mean there is anything wrong with the software being tested. Failing really only means that we noticed something that warrants further investigation.
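
In the same hypothetical spirit, the sketch below shows a failure caused by the test's own environment assumption rather than by the code under test; the names and scenario are illustrative only.

from datetime import datetime, timezone

def current_utc_date():
    # Code under test: correctly reports today's date in UTC.
    return datetime.now(timezone.utc).strftime("%Y-%m-%d")

def test_current_utc_date():
    # Test bug: compares against the machine's *local* date, so it can fail
    # near midnight in any timezone offset from UTC even though the
    # function itself behaves as specified.
    assert current_utc_date() == datetime.now().strftime("%Y-%m-%d")

if __name__ == "__main__":
    test_current_utc_date()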

The talk explains these ideas further, explores some of their implications, and suggests some ways to benefit from this way of thinking about test outcomes. It concludes with an examination of how to use this viewpoint to better prepare tests and report results.

Presented at Toronto Association of Systems and Software Quality (TASSQ) March 31, 2009

Presented at Kitchener Waterloo Software Quality Association (KWSQA) April 1, 2009

Paper and presentation at the Conference for the Association for Software Testing (CAST) July 14, 2009

TASSQ Slides

CAST Slides

Why Not Pass Paper (CAST)
