Exploration Through Example

Example-driven development, Agile testing, context-driven testing, Agile programming, Ruby, and other things of interest to Brian Marick
Wed, 24 Sep 2003

Agile testing directions: business-facing product critiques
Part 5 of a series.

As an aid to conversation and thought, I've been breaking one topic, "testing in agile projects," into four distinct topics. Today I'm starting to write about the right side of the matrix: product critiques.

Using business-facing examples to design products is all well and good, but what about when the examples are wrong? For wrong some surely will be. The business expert will forget some things that real users will need. Or the business expert will express needs wrongly, so that programmers faithfully implement the wrong thing.

Those wrongnesses, when remembered or noticed, might be considered bugs, or might be considered feature requests. The boundary between the two has always been fuzzy. I'll just call them "issues". How are issues brought to the team's attention?
These feedback loops are tighter than in conventional projects because agile projects like short iterations. But they're not ideal. The business experts may well be too close to the project to see it with fresh and unbiased eyes. Users often do not report problems with the software they get. When they do, the reports are inexpert and hard to act upon. And the feedback loop is still less frequent than an agile project would like. People who want instant feedback on a one-line code change will be disappointed waiting three months to hear from users.

For that reason, it seems useful to have some additional form of product critique - one that notices what the users would, only sooner. The critiquers have a resource that the people creating before-the-fact examples do not: a new iteration of the actual working software. When you're describing something that doesn't exist yet, you're mentally manipulating an abstraction, an artifact of your imagination. Getting your hands on the product activates a different type of perception and judgment. You notice things when test-driving a car that you do not notice when poring over its specs. Manipulation is different from cogitation.

So it seems to me that business-facing product critiques should be heavy on manipulation, on trying to approach the actual experience of different types of users. That seems to me a domain of exploratory testing in the style of James Bach, Cem Kaner, Elisabeth Hendrickson, and others. (I have collected some links on exploratory testing, but the best expositions can be found among James Bach's articles.) Going forward, I can see us trying out at least five kinds of exploratory testing:
For each of these, we should explore the question of when the tester should be someone from outside the team, someone who swoops in on the product to test it. That has the advantage that the swooping tester is freer of bias and preconceptions, but the disadvantage that she is likely to spend much time learning the basics. That will skew the type of issues found.

When I first started talking about exploratory testing on agile projects, over a year ago, I had the notion that it would involve both finding bugs and revealing bold new ideas for the product. One session would find both kinds of issues. For a time, I called it "exploratory learning" to emphasize this expanded role.

I've since tentatively concluded that the two goals don't go together well. Finding bugs is just too seductive - thinking about feature ideas gets lost in the flow of exploratory testing. Some happens, but not enough. So I'm thinking there needs to be a separate feature-brainstorming activity. I have no particularly good ideas now about how to do that. "More research is needed."