Wed, 24 Aug 2005
Still more on counterexamples
Prompted by conversations with Jonathan Kohl and John Mitchell, a bit more
on counterexamples.
I now think that what I'm wondering about is team learning. I
want to think more about two questions:
-
Say someone comes up with a counterexample, perhaps that one kind
of user uses the product really differently.
How is that integrated into
the mindset of the team? That is, how does it become an example of an
extended model of product use? (I fear too often it stays as an
awkward, unintegrated counterexample.)
Take the blocks world example. In Winston's work, he taught a
computer to identify arches by giving it examples and
counterexamples. (Eugene Wallingford confirms that the
counterexamples were necessary.) In that world, an arch was two
pillars of blocks with a crosspiece. The counterexamples included,
if I remember correctly, arches without a top (just two pillars)
and maybe a crosspiece balanced on a single pillar.
It's fine and necessary for a researcher to teach a computer - or for a
product owner to teach a development team - about already understood
ideas like "arch". But it's even more fine when the process of
teaching surprises the teacher with a new, useful, and more
expansive understanding of the domain. I want more surprise in the
world.
-
Is there a way to give counterexamples elevated
importance in the team's routine action? So that it isn't exceptional
to integrate them into the domain model?
One thing testers do is generate counterexamples by, for
example, thinking of unexpected patterns of use. What happens when those unexpected patterns
reveal bugs? (When, in Bret Pettichord's definition of "bug", the
results bug
someone.) The bugs may turn into new stories for the
team, but in my experience, they're rarely a prompt to sit down
and think about larger implications.
An analogy: that's as if the refactoring step got left
out of the TDD loop. It is when the programmer acts to remove
duplication and make code intention-revealing that unexpected
classes arise. Without the refactoring, the code would stay a
mass of confusing special cases.
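To make that concrete with a toy of my own (no real project behind it, and the library-fine rule is invented): two functions written test-first each carry their own special case, and removing the duplication surfaces a small class nobody asked for.

```python
from datetime import date

# Before refactoring: the special cases repeat the same date math.
def fine_for_book(checked_out, returned):
    days = (returned - checked_out).days
    return max(days - 14, 0) * 0.25   # 14-day grace period, 25 cents a day

def fine_for_dvd(checked_out, returned):
    days = (returned - checked_out).days
    return max(days - 3, 0) * 1.00    # 3-day grace period, a dollar a day

# After removing the duplication: a "loan policy" turns out to be a thing.
class LoanPolicy:
    def __init__(self, grace_days, daily_fine):
        self.grace_days = grace_days
        self.daily_fine = daily_fine

    def fine(self, checked_out, returned):
        overdue = (returned - checked_out).days - self.grace_days
        return max(overdue, 0) * self.daily_fine

BOOK = LoanPolicy(grace_days=14, daily_fine=0.25)
DVD = LoanPolicy(grace_days=3, daily_fine=1.00)

assert BOOK.fine(date(2005, 8, 1), date(2005, 8, 20)) == 1.25
assert DVD.fine(date(2005, 8, 1), date(2005, 8, 20)) == 16.00
```

LoanPolicy wasn't in anyone's story; it showed up only because the duplication was removed, and it's exactly the kind of name that might flow back into the team's vocabulary.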
Sometimes - as in the Advancer example I cite so compulsively - the unexpected
classes reflect back into the domain and become part of the
ubiquitous language. So perhaps that reflection is one way to
make incorporating counterexamples routine. We tend to think of
the relationship between product expert and team as mainly
one-directional, that of
master to apprentice: the master teaches the apprentice what
she needs to know. Information about the domain flows from the
master to the apprentice. There's a conversation, yes, but the
apprentice's part in the conversation is to ask questions about
the domain, to explain the costs of coping with the domain in a
certain way, to suggest cheaper ways of coping - but not to
change the expert's understanding of the domain. Perhaps we
should expect it to do that as well.
Put another way: suppose we grant that a project develops its
own creole - its own jargon - that allows the domain expert(s) and
technical team to work effectively with each other. Something
to keep casual track of would be how many nouns and verbs in
the creole originated in the code.
## Posted at 08:02 in category /ideas
Mon, 22 Aug 2005
More on counterexamples
Andy Schneider responded to my counterexamples
post with the following. I think they're neat ideas.
-
I express project scope in terms of what the project is delivering and what it is not
delivering. I learnt to do this in 1994, after listening to a bunch of people interpret my scope
statements in different ways, depending on what they wanted to read into them. On the surface it
seems daft to list all the things a project is not - it'd be a long list. However, there is always
some obvious set of expectations you know you aren't going to fulfil, and some obvious confusions.
I use those to draw up my 'Is Not' Lists.
-
I'm writing a lot of papers laying down design principles for common architectural scenarios,
trying to get some re-use at the design level and also trying to improve productivity by having
the boring stuff already sorted for 80% of the cases. I communicate my principles with a narrative
text within which the user 'discovers' the principles (which I highlight, so the paper can be read
by consuming just the principles). At the end of the paper I normally write a section labelled
something like 'Implications'. Here I walk through a set of counter-examples that describe practices that
contradict the principles. This gets people to think about the implications of what's being said.
It creates a bunch of work for me, working through the feedback, as these sections always elicit more
feedback than the rest. If I didn't provide counter-examples, no one would consider the space not
covered or excluded by the principles.
So, I've learnt it is useful: I have seen that it gets people to think about what something
is not, and the feedback from people is always better for it. In many ways it is the opposite of a
politician's approach, where they avoid counterexamples because they want you to read into their
words what you want to. They don't want you to consider the space not covered or excluded.
(Reprinted with permission.)
## Posted at 08:47 in category /ideas
Mon, 08 Aug 2005
Counterexamples
In my thinking about tests as examples, I've treated them
as positive examples:
The right system behaves like this. And like this. And
don't forget this.
But what about counterexamples?
A system that did
this would be the wrong system. And so would a system that
did this.
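Here's a toy illustration of the contrast, in Python; the discount rule and the numbers are my own invention, not any real product's. The first pair of tests are examples; the second pair are counterexamples, fencing off behavior the system must never have.

```python
# A toy pricing rule, invented for illustration: 2% off per year of
# loyalty, never more than the order total, never negative.
def discount_for(years_as_customer, order_total):
    discount = order_total * 0.02 * years_as_customer
    return max(min(discount, order_total), 0)

# Examples: the right system behaves like this.
def test_loyal_customers_get_a_discount():
    assert discount_for(years_as_customer=5, order_total=100) == 10

def test_new_customers_pay_full_price():
    assert discount_for(years_as_customer=0, order_total=100) == 0

# Counterexamples: a system that did this would be the wrong system.
def test_no_discount_ever_exceeds_the_order_total():
    assert discount_for(years_as_customer=80, order_total=5) <= 5

def test_no_discount_is_ever_negative():
    assert discount_for(years_as_customer=0, order_total=50) >= 0
```

The counterexample tests don't point at any single right answer; they mark out a region of wrong ones.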
There's some evidence that differences are important in
understanding.
The linguist Ferdinand
de Saussure taught that the meaning of the word
"boat" isn't "a
small vessel for travel on water." Rather, the meaning of "boat"
is generated by contrast with other words like "ship", "raft",
"yawl", "statue of a boat", etc. (Derrida
would later go on to make perhaps too much of the fact that there's
no limit to the recursion, since all those other words are also
defined by difference.)
In
the early '70s, Patrick Winston wrote a
program that learned the concept of "arch" from a series of
examples and "near misses". My copy of his book has long
since gone to the place
paperclips, coathangers, and individual socks go, so
I can't check if the near-miss counterexamples merely improved
the program or were essential to its success.
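For the flavor of it, here's a toy learner of my own - nothing like Winston's actual representation or algorithm, and the structure descriptions are my own encoding of the arches and near misses described above. Positive examples narrow the concept to what they share; each near miss marks the differing facts as essential or forbidden.

```python
# Structures are described as sets of simple facts. Positive examples come
# first and generalize the concept (keep only shared facts); near misses
# say which facts are essential or disqualifying.
def learn(labelled_structures):
    required = None     # facts every positive example has had
    must_have = set()   # facts whose absence made a near miss fail
    must_not = set()    # facts whose presence made a near miss fail
    for facts, is_arch in labelled_structures:
        if is_arch:
            required = set(facts) if required is None else required & facts
        else:
            must_have |= required - facts
            must_not |= facts - required
    return must_have, must_not

arch = {"left pillar", "right pillar", "crosspiece",
        "crosspiece rests on both pillars"}
no_top = {"left pillar", "right pillar"}            # near miss: just two pillars
teeter = {"left pillar", "right pillar", "crosspiece",
          "crosspiece rests on one pillar"}         # near miss: balanced on one pillar

must_have, must_not = learn([(arch, True), (no_top, False), (teeter, False)])
print(must_have)  # {'crosspiece', 'crosspiece rests on both pillars'}
print(must_not)   # {'crosspiece rests on one pillar'}
```

In this toy, at least, the positive examples alone can never distinguish "must have a crosspiece" from "happens to have a crosspiece"; that's the work the near misses do.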
My kids are now of the age (nine and ten) where they ask for
dictionary-like definitions of words. But when they were
younger, they more obviously learned by difference: they'd point at
something, give the wrong name, then accept the correction
without further discussion. ("Duck." "No, that's a goose."
"Dog." "Yes, a nice dog.") Presumably the counterexamples
helped with that amazing burst of vocabulary young kids have.
So what about those times when the programmer proudly calls the
product owner over to show the newest screen and watches her face
fall just before she says, "That's not really what I had in mind"?
Or those times when a small group is talking about a story and a
programmer pops up with an idea or a supposed consequence that's
wrong? That's an opportunity to - briefly! - attend to what's
different about the way two people are thinking.
Does anyone make explicit use of counterexamples? How? What have
you learned?
## Posted at 20:05 in category /ideas
Thu, 28 Oct 2004
"Methodology work is ontology work" posted
Now that I've presented my paper at OOPSLA, I can
post
it here (PDF).
Here's the abstract:
I argue that a successful switch from one methodology to another
requires a switch from one ontology to another. Large-scale
adoption of a new methodology means "infecting" people with new
ideas about what sorts of things there are in the (software
development) world and how those things hang together. The paper
ends with some suggestions to methodology creators about how to
design methodologies that encourage the needed "gestalt switch".
I earlier blogged the extended
abstract.
This is one of my odd writings.
## Posted at 09:37 in category /ideas
Sat, 24 Jul 2004
Methodology work is ontology work
I've had a paper accepted at
OOPSLA
Onward. I had to write a one-page extended abstract. Although I
can't publish the paper before the conference, it seems to me that
the point of an abstract is to attract people to the session or,
before then, the conference. So here it is. I think it's too dry -
I had to take out the bit about
bright
cows and the bit about
honeybee navigation - but brevity has its cost.
(As you can guess from the links above, the paper is a stew of
ideas that have surfaced on this blog. I hope the stew's simmered
enough to be both tasty and nourishing.)
I argue that a successful switch from one methodology to another
requires a switch from one ontology to another. Large-scale adoption
of a new methodology means "infecting" people with new ideas about
what sorts of things there are in the (software development) world and
how those things hang together. The paper ends with some suggestions
to methodology creators about how to design methodologies that
encourage the needed "gestalt switch".
In this paper, I abuse the word "ontology". In philosophy, an ontology
is an inventory of the kinds of things that actually exist, and
(often) of the kinds of relations that can exist between those
things. My abuse is that I want ontology to be active, to drive
people's actions. I'm particularly interested in unreflective actions,
actions people take because they are the obvious thing to do in a
situation, given the way the world is.
Whether any particular ontology is true or not is not at issue in the
paper. What I'm concerned with is how people are moved from one
ontology to the other. I offer two suggestions to methodologists:
-
Consider your methodology to be what the philosopher of science Imre
Lakatos called "a progressive research programme." Lakatos laid out
rules for such programmes. He intended them to be rules of
rationality, but I think they're better treated as rules of
persuasion. Methodologies that follow those rules are more likely to
attract the commitment required to cause people to flip from one
system of thought to another (from one ontology to another) in a way
that Thomas Kuhn likened to a "gestalt switch".
-
It's not enough for people to believe; they must also
perceive. Make what your methodology emphasizes visible in the world
of its users. In that way, methodologies will become what Heidegger
called ready-to-hand. Just as one doesn't think about how to hold a
hammer when pounding nails, one shouldn't think about the methodology,
its ontology, and its rules during the normal pace of a project: one
should simply act appropriately.
Methodologies do not succeed because they are aligned with some
platonic Right Way to build software. Methodologies succeed because
people make them succeed. People begin with an ontology - a theory of
the world of software - and build tools, techniques, social relations,
habits, arrangements of the physical world, and revised ontologies
that all hang together. In this methodology-building loop, I believe
ontology is critical. Find the right ontology and the loop becomes
progressive.
## Posted at 13:06 in category /ideas
Thu, 11 Mar 2004
Telling the code to tell you something
Two related ideas:
Chad Fowler
builds on an idea from Dave Thomas to show
dynamically-typed code that documents
the types it probably wants.
Dave's idea was that the first type a
method's called with is probably the right type. So, in an
aspectish way, you point at the method, tell it to remember how
it's called and to complain if later calls are different.
Chad's idea is that a method can just as easily record the
information in a way that lets you create documentation
about the types of method arguments.
Chad's idea of using the results to inform an IDE is
particularly clever.
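Neither Dave's nor Chad's code is reproduced here; below is just a minimal sketch of the shape of the idea, in Python rather than the Ruby they were working in, and the decorator name and details are invented.

```python
import functools
import warnings

def remembers_types(fn):
    """Treat the first call's argument types as the 'right' ones; warn
    when a later call disagrees, and keep the record around so it could
    be turned into documentation."""
    recorded = {}   # argument position -> type seen on the first call

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        for position, value in enumerate(args):
            expected = recorded.setdefault(position, type(value))
            if type(value) is not expected:
                warnings.warn(
                    f"{fn.__name__}: argument {position} was first a "
                    f"{expected.__name__}, now a {type(value).__name__}"
                )
        return fn(*args, **kwargs)

    wrapper.recorded_types = recorded   # readable by a doc generator or IDE
    return wrapper

@remembers_types
def pay(employee, amount):
    return f"paid {employee} {amount}"

pay("dawn", 100)             # establishes str, int as the expected types
pay("chet", 100.0)           # warns: argument 1 was first an int, now a float
print(pay.recorded_types)    # {0: <class 'str'>, 1: <class 'int'>}
```

The attached record is the piece Chad's idea builds on: something a documentation generator or an IDE could read after the code has run for a while.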
Agitator is a new testing
tool that semi-intelligently floods a program with data and
records interesting facts about what happens. (Full disclosure:
I've received some money from the company that sells it.) Kevin
Lawrence tells
a
story of how the desire to simplify the results Agitator
produced for some code resulted in a small class hierarchy
replacing much more code smeared all over the place. The end result and
the feel of the process are those of the standard test-driven refactoring
story, but the "impulsive force" was interestingly different.
(See also a
different
story, from Jeffrey Fredrick. To me, it's a story about how
Agitator's large number of
examples hinted that there's a bug somewhere and that maybe an
assertion right over there would be a good way to trap it.)
The common thread is quickly instrumenting your program, running a
lot of ready-to-hand examples through it, getting some
output that's not too noisy, and deriving value from that. Not a new
idea: how long have profilers been around?
But now that
"listening to what the code is trying to tell us" is marginally
more respectable (I blithely assert), we should expand the range of
conversation. It shouldn't be only about the code's static nature,
but also about its dynamic nature. And perceiving the dynamic nature
should be as simple, as semi-automatic, even as rawly perceptual as, say, noticing
duplication between two if statements.
## Posted at 20:30 in category /ideas