Fri, 05 Sep 2003
The Marquis de Sade and project planning
I dabble in science studies (a difficult field to define, so I won't
try) partly because it causes me to read weird stuff. Last year, I
read "Sade, the Mechanization of the Libertine Body, and the Crisis
of Reason", by Marcel Henaff
1. Here's a quote about
Sade's obsession with numerical descriptions of body parts and
orgies:
It is as if excessive precision was supposed to compensate for the rather
obvious lack of verisimilitude of the narrated actions.
The same could be said of most project plans. Affix this quote
to the nearest PERT chart.
1 In Technology and the Politics of Knowledge, Feenberg & Hannay, eds., 1995.
## Posted at 14:32 in category /misc
Agile testing directions: business-facing team support
Part 4 of a series. The table of contents is on the right.
As an aid to conversation and thought, I've been breaking one topic,
"testing
in agile projects," into
four distinct topics. Today I'm writing about how we can use
business-facing
examples to support the work of the whole team (not just the
programmers)1.
I look to project examples for three things:
provoking
the programmers to write the right code, improving
conversations between the technology experts and the business
experts, and helping the business experts more quickly realize
the possibilities inherent in the product. Let me take them in turn.
- provoking the right code
This is a straightforward extrapolation of standard test-driven
(or example-driven)
design from internal interfaces to the whole-product interface. To add a new feature, begin with one or more
examples of how it should work. The programmer then makes the
code match that example. The stream of examples continues until
the required code is in place. Then the feature is complete
(for now).
Although the extrapolation is straightforward, we have a ways
to go before we've ironed out the details and the practice
is well understood. I'll say more about that below. (There's also a
rough sketch of the workflow after this list of three.)
- improving project conversations
It makes no more sense to toss examples over the wall to a
programmer and expect her to write the right code than it does
to do that with requirements. Programmers need context,
background, and tacit knowledge. They get that through conversation
with business experts. Examples can improve that conversation, if
only by giving people something to talk about. They
ground conversation in the concrete.
Where examples can help particularly, I think, is in forging a
common vocabulary. I'm a fan of the notion that
domain terminology should be "reified" by being turned into
objects in the code. Not in the naive "write about the domain
and underline all the nouns" style of object-oriented design,
but in the more sophisticated style of Eric Evans's Domain-Driven
Design2.
So what we must have is a process by which fuzzily-understood
domain knowledge is made very concrete, turned into 1's
and 0's. It seems to me that examples are an intermediate step,
a way to gradually defuzz the knowledge. But, as with using
examples to guide programmers, a lot of lore remains to be learned.
- making possibilities more noticeable
We want business experts to have "Aha!" moments, in which they
realize that because the product does A in way B
and also X in way Y, it makes sense for it to
do some new Z that hadn't been imagined before. We also
want other people on the team to have the same kind of realizations,
which they can then propose to the business experts. In short,
we want creativity.
Probably the best way to unleash creativity is to get your hands
on working software and try it out. (I'll write more about that
in a later posting.) But another way is to explain an example
to someone else. Ever had trouble finding a bug, then had the
mistake jump out at you as soon as you started explaining the
code to someone else? For me, writing user documentation has a
similar effect: I use examples to explain
what the fundamental ideas behind the software are and how they
hang together. Quite often, I realize they don't. It's the same
feeling as with bugs, even though the person I'm explaining it to is an
imaginary reader, not a real person sitting next to me.
So the way in which we create examples and discuss them might
accelerate the product's evolution.
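Here's a minimal sketch of the workflow the first two items describe. Nothing in it comes from a real project: the flight-booking domain, the class names, and the JUnit style are my own assumptions, chosen only to show the shape of the thing. The business-facing example is written first, in the business experts' vocabulary, and the programmer writes just enough code to make it pass, reifying that vocabulary as objects along the way.

```java
import junit.framework.TestCase;
import java.util.ArrayList;
import java.util.List;

// The business-facing example comes first.  "Reservation", "waitlisted",
// and "full" are vocabulary borrowed from the business experts; writing
// the example forces those terms to become objects in the code.
public class OverbookingExampleTest extends TestCase {
    public void testThirdReservationOnATwoSeatFlightIsWaitlisted() {
        Flight flight = new Flight(2);                // a flight with two seats
        flight.reserve("Dawn");
        flight.reserve("Paul");
        Reservation third = flight.reserve("Brian");  // one passenger too many

        assertTrue(flight.isFull());
        assertTrue(third.isWaitlisted());
    }
}

// Just enough code to make the example pass; later examples would
// flesh these classes out.
class Flight {
    private final int seats;
    private final List reservations = new ArrayList();

    Flight(int seats) { this.seats = seats; }

    Reservation reserve(String passenger) {
        Reservation r = new Reservation(passenger, reservations.size() >= seats);
        reservations.add(r);
        return r;
    }

    boolean isFull() { return reservations.size() >= seats; }
}

class Reservation {
    private final String passenger;
    private final boolean waitlisted;

    Reservation(String passenger, boolean waitlisted) {
        this.passenger = passenger;
        this.waitlisted = waitlisted;
    }

    boolean isWaitlisted() { return waitlisted; }
}
```

The next example ("a waitlisted passenger gets the seat when someone cancels," say) would push Flight and Reservation a little further, and the stream continues until the feature is done for now.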
One of my two (maybe three) focuses next year will be these
business-facing examples. I've allocated US$15K for visits to shops
that use them well. If you know of such a shop, please
contact me. After these
visits (and after paid consulting visits and after practicing on my
own), I want to be able to tell stories:
- Stories about the pacing of examples. When do people start
creating them? How many examples are created before the
programmer starts on the code? What kinds of examples come
first?
- Stories about the conversations around examples. Who's involved?
What's the setting and structure of the conversation? Who writes
the examples down? What's it like when business experts do it?
programmers? testers?
(And what did people notice if they
switched from one scribe to another?) How much do examples
change during the process of turning them into code?
- Stories about the interaction between business-facing examples
and technology-facing examples (unit tests). How and when do
programmers turn their attention from one to the other? How
often are the business-facing examples checked? Do examples
migrate from one category to the other?
- Stories about the way business-facing examples affect the
design and architecture of the code.
- Stories about FIT, surely the
notation with the greatest mindshare. For what sorts of systems
is it most appropriate? One of FIT's neatest features is that
it encourages explanatory text to be wrapped around the
examples - how do people make use of that? When people have
migrated to FIT from some other approach (most likely examples
written in a scripting language), what have they learned along
the way? And what did people who went in the other direction
learn? How do FIT and scripting languages compare when it comes
to developing a project vocabulary? (There's a rough sketch of a FIT
fixture after this list.)
- Stories about balancing examples that push the code forward
("... and here's another important aspect of feature X...") with
examples that rule out bugs ("... don't forget that the product
has to work in this situation..."). What kinds of bugs should
be prevented, and what kinds should be left to after-the-fact
product critique (the other half of my matrix)?
(See also Bill Wake's
"generative" and "elaborative"
tests.)
- Stories about the distinction between checked examples and
change detectors. Does this play out differently in the
business-facing world than in the technology-facing world?
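Since FIT comes up in the list above, here's a rough sketch of the kind of fixture I have in mind. The sales-tax rule, the numbers, and the fixture name are invented for illustration; only the fit.ColumnFixture mechanics are FIT's (public fields are filled in from the input columns, and a method is called for any column whose header ends in parentheses).

```java
import fit.ColumnFixture;

// Backs a FIT table that might look like this, with as much explanatory
// prose around it as the author cares to write on the same HTML page:
//
//     SalesTaxFixture
//     amount  | state | tax()
//     100.00  | IL    | 6.25
//     100.00  | IA    | 0.00
//
// FIT fills the public fields from each row's input cells, calls tax(),
// and checks the result against that row's last cell.
public class SalesTaxFixture extends ColumnFixture {
    public double amount;
    public String state;

    public double tax() {
        // In a real project this would delegate to the product code;
        // the rule here is a stand-in so the sketch is self-contained.
        return "IL".equals(state) ? amount * 0.0625 : 0.0;
    }
}
```

Because the table lives in an ordinary HTML page, the explanatory text mentioned above wraps around the examples naturally.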
Only when we have a collection of such stories will the practice of
using business-facing examples be as well understood, and as routine,
as the practice of technology-facing examples (aka test-driven
design).
1 I originally called this quadrant
"business-facing
programmer support". It now seems to me that the scope is
wider - the whole team - so I changed the name.
2 I confess I've
only read parts of Eric's book, in manuscript. The final copy is in
my queue. I think I've got his message right, though.
## Posted at 14:04 in category /agile