Mon, 07 Mar 2005
That pernicious sense of completeness
Here's a mistake that seems easy to make.
You have a large test suite. In any given run, a lot of the
tests fail. Some of the tests fail because they are signaling a
bug. But others fail for incidental reasons. The classic example
is a test that drives the GUI: something about the GUI changes,
and the test fails before it ever reaches the behavior it's
trying to check.
Because so many of the failures are spurious, people stop
looking at new ones: doing so is too likely to be a waste of
time. So the test suite is worthless.
Someone comes up with an idea for factoring out incidental
information, leaving the tests stripped down to their
essence. That way, when the GUI changes, the corresponding test
change will have to be made in only one place.
Someone begins rewriting the test suite into the new format.
It's that last step that seems to me a mistake, in two ways.
The majority of the tests don't fail. There's no present value
in rewriting such a test into the new format. There's only
value if that test would have someday failed because of
some GUI change.
Rather than a once-and-for-all rewrite, I prefer to let the
test suite demand change. When a test fails for an
incidental reason, I'll fix it. If it continues to run in the
old format, I'll leave it alone. Over time, on demand, the test
suite will get converted. And in the steady state, new failures
are worth looking at. They're either a bug or a reason to
convert a test.
The "convert them all and get it over with" approach also falls
prey to what James Bach has called the "wise
oak tree" myth. There's an assumption that each test in the
test suite is valuable just because someone once found it worth
writing. But what's worth writing may not be worth rewriting.
If you're examining tests on demand, it's easier to make a
case-by-case judgment. For each test, you can decide to fix it
or throw it away. Does this failing test's expected future
value justify bringing it back to life?
For more on this way of thinking, see my "When should a test be
automated?" (pdf). Some of the assumptions are dated, but I'm
still fond of the chain of reasoning. It can be applied to more
modern assumptions.
Posted at 07:22 in category /testing