Exploration Through Example
Example-driven development, Agile testing, context-driven testing, Agile programming, Ruby, and other things of interest to Brian Marick
Sun, 23 May 2004

There are a couple of catch phrases in programming: "intention-revealing names" and "composed method". (I think they're both from Beck's Smalltalk Best Practice Patterns, but this plane doesn't have wireless. Imagine that.) The combined idea is that a method should be made of a series of method calls that are all at the same level of abstraction and that also announce clearly why they matter. A good idea.

In my travels, I don't find that tests follow those rules. Tests too often contain superfluous text: it is (or seems) necessary to make the silly things run, but it obscures their intent. Let me give you an example. It's a test I wrote. It's about when students and caretakers in a veterinary clinic perform certain tasks.
Now, this test will drive most any programmer toward a state machine. (The complete test has more complicated state dependencies than you see.) The problem is that it's very hard to tell whether all the relevant sequences of orders have been covered. The relationship between sequences of clinician orders and worker actions is obscured. I claim the following tables are better:
I worry that might not be completely clear. Like many descriptions, it depends on previous domain knowledge. For example, the domain expert and I had a lot of discussion about the difference between a clinician "ordering" something and "recording" something. At this point, there's no difference as far as the program's concerned; but there is a clear distinction in the expert's mind, making it worthwhile to preserve both terms. So the very last line says, "check that when a clinician orders milking and then later records a death, the caretaker never milks that (dead) cow." It is, I think, easier to check completeness in the latter table because it encourages systematic thinking:
That's not to say that the first test was useless. By discussing tasks with reference to the flow of events in a real medical case, I was encouraged to learn and talk about the domain. I learned things (like that ordering and recording have the same effect). So the first test was a good starting point for conversation, but it was not a good summary of what was learned.

Nevertheless, it seems to me that people get trapped into picking one test format and sticking to it too long. Step-by-step formats seem particularly sticky. I'm not sure why we end up that way, but I have two speculations.

(1) It seems that there's often a division of labor. One person writes the test (perhaps a business expert, more often a tester), and another person implements the "fixturing" that makes the test executable. The problem is that a new table format that helps the tester causes more work for the programmer. Given the usual power imbalance in a project - a programmer's time is more valuable than a tester's - reusing old and mis-fitting fixtures is the natural consequence. (I should note that this new table format was actually quite simple - only one support method was more than a couple of lines of obvious code - but I initially hesitated because it looked different enough that it seemed it must be more work than that.)

(2) The testing tradition is one of implementing tests to find bugs, not one of discovering the right language in which to express a problem. The Lisp ethic of devising little, problem-specific languages is missing. It's not intuitive behavior; it's learned. And that approach - and its power - haven't been learned yet amongst test-writers. But I think we need to instill a habit in (at least) business-facing test writers that says that both repetition and verbiage that obscures the intent of the test are bad, are signs that something's amiss.
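The contrast between the two formats can be sketched in Ruby. This is a toy model, not the actual clinic fixtures: the class and method names (Clinic, order_milking, record_death, caretaker_milks?) are invented for illustration, and the rule they encode (a recorded death suppresses milking, whichever way the orders arrive) is an assumption drawn from the example above.

```ruby
# Toy model of the rule under test. All names are hypothetical,
# invented for this sketch; they are not the real fixtures.
class Clinic
  def initialize
    @dead = false
    @milking_ordered = false
  end

  def order_milking
    @milking_ordered = true
  end

  def record_death
    @dead = true
  end

  # The caretaker milks only if milking was ordered and no death
  # has been recorded.
  def caretaker_milks?
    @milking_ordered && !@dead
  end
end

# Step-by-step style: one scripted sequence. It runs, but a reader
# can't easily tell which order combinations have been covered.
clinic = Clinic.new
clinic.order_milking
clinic.record_death
raise "dead cow milked" if clinic.caretaker_milks?

# Table style: each row is [sequence of orders, expected caretaker
# action]. Completeness is easy to scan row by row.
TABLE = [
  [[:order_milking],                true],
  [[:order_milking, :record_death], false],
  [[:record_death, :order_milking], false],
  [[],                              false],
]

TABLE.each do |orders, expect_milking|
  clinic = Clinic.new
  orders.each { |o| clinic.send(o) }
  unless clinic.caretaker_milks? == expect_milking
    raise "row #{orders.inspect} failed"
  end
end
```

The "fixturing" here is just the loop at the bottom: a few lines that walk the table and drive the model, which is the kind of small support code the parenthetical in speculation (1) describes.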
## Posted at 21:33 in category /fit
So here I am in the Salt Lake City airport. I just finished a couple of days in support of a redesign of the Agile Alliance web site, aiming to make it more supportive of people trying to sell Agile to executive sponsors. The people we interviewed brought up a couple of interesting points. One is the need for the whole organization (marketing, etc.) to change in order to take advantage of more capable software development. Otherwise, the benefits of Agile get dissipated by impedance mismatch. Another was the perennial catchphrase that agile advocates "need to talk the executive's language".

One chance utterance of the latter made me flash to Galison's Image and Logic: A Material Culture of Microphysics, which is all about how scientific subcultures adjust to each other. He uses the metaphor of a "trading zone" between subcultures, in which they communicate through restricted languages that he likens to pidgins and creoles. Galison is not saying that Wilson (who invented the cloud chamber) didn't speak English to the theorists who used his results. He's saying that they used a restricted vocabulary and invented specialized communication devices like diagrams. Those devices meant something different to each party, but they allowed detailed coordination without requiring anyone to agree on global meaning.

Moreover, Galison claims his scientists used objects in particular ways: "... it is not just a matter of sharing objects between traditions but the establishment of new patterns of their use [...] I will often refer to wordless pidgins and wordless creoles: devices and manipulations that mediate between different realms of practice in the way that their linguistic analogues mediate between subcultures in the trading zone." (p. 52)

I hope you can see where I'm going with this. It's not that "we" need to speak "their" language: it's that both groups need to learn a new language that works for our joint purposes.
That'll be especially true as executive sponsors come to see the agile team as a responsive tool they can wield flexibly toward their ends.

Obedient reader that I am, I'm not peeking ahead to Galison's big summary chapter. First, a further 400 pages of exhaustive detail about bubble chambers and the like. So any summary of what thinking tools Galison offers us will have to wait. In the meantime, I should point to last year's writeup of Star and Griesemer's boundary objects. Galison's ideas are close to theirs. He's more explicit about the mechanisms of language, and he expands the focus from just objects (perhaps abstract) to include procedures and acts of interpretation.