Exploration Through Example

Example-driven development, Agile testing, context-driven testing, Agile programming, Ruby, and other things of interest to Brian Marick

Sun, 20 Jun 2004

Exploratory game design

In our tutorial on exploratory testing at the Agile Development Conference, Elisabeth Hendrickson and I will be doing something odd. We'll talk about exploratory testing of software, but we'll demonstrate it by having teams design and test a game. Our view of where exploratory testing fits into Agile is that it's a dandy end-of-iteration activity, during which people give the software a test drive and get ideas that feed into later iterations.

But suppose we wanted to demonstrate that with software. We'd spend half an hour getting people's laptops ready, then they'd do the exploratory testing, then... what? You can't have another iteration - the software is what it is. That would miss the feel of the process, and the feel is important. So we'll concentrate on the feel - and on four key techniques - and defer the direct experience of software exploration until after the session (perhaps later in the conference).

Coming to the tutorial? Here are the game design notes. Couldn't hurt to read them in advance (but it's not required).

## Posted at 13:13 in category /agile [permalink] [top]

The danger of numbers

From a Washington Post article summarizing the state of Iraq:

Bremer acknowledged he was not able to make all the changes to Iraq's political system and economy that he had envisioned, including the privatization of state-run industries. He lamented missing his goal for electricity production and the effects of the violence. In perhaps the most candid self-criticism of his tenure, he said the CPA erred in the training of Iraqi security forces by "placing too much emphasis on numbers" instead of the quality of recruits. (Emphasis mine.)

In a Wall Street Journal article about the Abu Ghraib scandal, we have this:

"The whole ball game over there is numbers," a senior interrogator, Sergeant First Class Roger Brokaw, told the paper. "How many raids did you do last week? How many prisoners were arrested? How many interrogations were conducted? How many [intelligence] reports were written? It was incredibly frustrating."

From a Christian Science Monitor article on the same topic:

Yet Specialist Monath and others say they were frustrated by intense pressure from Colonel Pappas and his superiors - Lt. Gen Ricardo Sanchez and his intelligence officer, Maj. Gen. Barbara Fast - to churn out a high quantity of intelligence reports, regardless of the quality. "It was all about numbers. We needed to send out more intelligence documents whether they were finished or not just to get the numbers up," he said. Pappas was seen as demanding - waking up officers in the middle of the night to get information - but unfocused, ordering analysts to send out rough, uncorroborated interrogation notes. "We were scandalized," Monath said. "We all fought very hard to counter that pressure" including holding up reports in editing until the information could be vetted.

I am reminded of my paper, How to Misuse Code Coverage (PDF). (I'm a little appalled that I'm comparing bad testing to Abu Ghraib. Thank God I lead so sheltered a life that I can make such comparisons. But onward.)
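The coverage point can be made concrete in a few lines. The sketch below is my own illustration, not an example from the paper: a single test executes every line of a function, so a line-coverage tool would report 100%, yet a defect survives, because coverage counts which lines ran, not which behaviors were checked.

```python
def discount(price, rate):
    """Apply a discount; `rate` is meant to be a fraction like 0.1."""
    # Bug: nothing rejects a percentage like 10, which yields a
    # nonsensical negative price.
    return price - price * rate

def test_discount():
    # This one test runs every line of discount(), so line coverage
    # is 100% -- yet the bad-input defect above goes undetected.
    assert discount(100, 0.1) == 90

test_discount()
```

Chasing the coverage number here would declare victory exactly where the interesting questions (what inputs are legal? what should illegal ones do?) remain unasked.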

I have a wary relationship with numbers. On the one hand, you do sometimes have to make decisions, and when two parties disagree, numbers can shorten arguments. On the other hand, numbers do not merely measure some chosen aspect of reality, they also serve to create reality, often with horrifying unintended consequences.

What to do?

  • Cem Kaner has recommended balanced scorecards, the basic idea - I believe - being that it's harder to "game" multiple numbers than one.

  • I often ask people proposing new techniques, "What could go wrong?" That has two sub-questions: "Let's assume that your idea is wonderful in general. But there must be situations for which it's a bad idea. What are they?" And "Even if your idea is wonderful in this situation, it will be implemented by frail, fallible, and probably inexperienced humans. What mistakes are they likely to make?" Those questions can be used when someone proposes a particular measurement. The followup question is "how will we know when things are starting to go wrong?"

  • People know when numbers are being misused, if only through a vague feeling of disquiet. They need time and permission to reflect. Enter the retrospective.

  • Keep pointing out the dangers until the riskiness of numbers becomes common knowledge. Catchy slogans help.

  • Teach people the difference between numbers and reality. Cem Kaner has an article (PDF) on that topic.
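Kaner's balanced-scorecard idea can be sketched in miniature. The scenario and function names below are hypothetical, chosen to echo the interrogation-report stories above: a lone quantity metric rewards rushing out unvetted reports, while a scorecard that also weighs quality does not.

```python
def single_metric(reports_sent):
    # The lone number management watches: raw report count.
    return reports_sent

def scorecard(reports_sent, fraction_vetted):
    # A toy balanced scorecard: quantity counts only insofar as
    # quality (the fraction of reports actually vetted) keeps pace.
    return reports_sent * fraction_vetted

# Gaming the single number: 100 rushed reports beat 60 careful ones.
assert single_metric(100) > single_metric(60)

# Under the scorecard, 60 well-vetted reports (90% vetted) beat
# 100 rushed ones (30% vetted).
assert scorecard(60, 0.9) > scorecard(100, 0.3)
```

The scorecard is still gameable - any number is - but gaming it now requires distorting two measures at once, which is harder to do quietly.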

But those seem mostly negative, reactive. We also need examples of problems solved through incremental use and adjustment of partial information. It also seems to me we need changed attitudes toward management, subjectivity and objectivity, information flow, problems, and solutions. But those are topics for another day.

## Posted at 11:46 in category /misc [permalink] [top]

About Brian Marick
I consult mainly on Agile software development, with a special focus on how testing fits in.

Contact me here: marick@exampler.com.



