A development style

Here’s a style of development I once proposed. I was reminded of it at SDTConf.

The project was to allow scientists to ask questions of a big database. There was a product owner who understood the scientific domain and - importantly - could have used the database to answer the questions (either by constructing SQL queries or by walking someone who knew SQL through query construction - I forget which). There were programmers who knew SQL and Java but were ignorant of the domain.

I proposed that the scientists not worry about programs. They should instead ask their questions of the product owner. They could either email them to her or talk to her on the phone. It would be her job to respond to the scientists with the data they needed.

At the beginning, I envisioned her doing one of two things. If the questions were dirt simple, she could pass them off to the programmers, who would build the queries, run them, and email back the results. Otherwise, she’d have to do that work herself.

So the programmers would be barraged with simple questions that would turn into simple SQL. Being lazy, they’d soon write code to automate query construction.

As the project continued, the product owner would hand off more and more complicated questions to the programmers, who would construct more and more complicated queries. They’d improve their programmatic interface to the database. Who knows? - they might find a need for an object model.
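
To make that concrete, here’s the kind of first automation I imagine them writing. It’s only a sketch (in Ruby rather than the project’s Java, with invented table and column names):

```ruby
# A hypothetical first pass at automating query construction.
# Real code would parameterize values instead of interpolating
# them into the SQL string.
def build_query(table:, columns: ["*"], conditions: {})
  sql = "SELECT #{columns.join(', ')} FROM #{table}"
  unless conditions.empty?
    clauses = conditions.map { |column, value| "#{column} = '#{value}'" }
    sql += " WHERE " + clauses.join(" AND ")
  end
  sql
end

# "Which samples from site 12 were collected in 2007?"
puts build_query(table:      "samples",
                 columns:    ["id", "collected_on"],
                 conditions: { "site_id" => 12, "year" => 2007 })
# SELECT id, collected_on FROM samples WHERE site_id = '12' AND year = '2007'
```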

At some point, the programmers would be capable of answering most any query. They would have improved their interface to the point that it would make common queries easy and uncommon queries possible. Except… the user interface would be lousy, suitable only for programmers.

At this moment, the scientists are happy in one sense and sad in another. They’re happy because they can ask questions of an intelligent interface (a human being, rather than a program) - after all, their goal is not to use a program but to effortlessly get their questions answered. But they’re unhappy because the results don’t come fast enough. They have to wait for some human (programmer) to type at a program.

The programmers have become bored entering queries for the scientists. They’ve mastered the domain (of scientifically-relevant query construction) and want to move on.

So now the project becomes one where the programmers try to create a user interface good enough that the scientists will prefer the instant gratification of using it to the pleasantness of dealing with a smart human rather than a dumb computer.

----

I liked this approach because it front-loads the important bit — capturing the domain, representing it in code — but it still starts providing value to the end-users really, really early.

Unfortunately, I discovered once again how politically inept I am, so I got tossed off the project without ever trying this scheme. I’ve wanted to ever since.

Erasing history in tests

Something I say about the ideal of Agile design is that, at any moment when you might ship the system, the code should look as if someone clever had designed a solution tailored to do exactly what the system does, and then implemented that design. The history of how the system actually got that way should be lost.

An equivalent ideal for TDD might be that the set of tests for an interoperating set of classes would be an ideal description-by-example of what they do, of what their behavior is. For tests to be documentation, the tests would have to be organized to suit the needs of a learner (most likely from simple to complex, with error cases deferred, and - for code of any size - probably organized thematically somehow).

That is, the tests would have to be more than what you’d expect from a history of writing them, creating the code, rewriting tests and adding new ones as new goals came into view, and so forth. They shouldn’t be a palimpsest with some sort of random dump of tests at the top and the history of old tests showing through. (“Why are these three tests like this?” “Because when behavior X came along, they were the tests that needed to be changed, and it was easiest to just tweak them into shape.”)
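
For contrast, here’s a tiny sketch of the ordering I have in mind. The Stack class and the minitest framework are my own illustration, not from any real project; the point is only that the tests read like a lesson plan rather than a history:

```ruby
require "minitest/autorun"

# The class under test; a stand-in so the example runs on its own.
class Stack
  def initialize
    @items = []
  end

  def push(item)
    @items.push(item)
    self
  end

  def pop
    raise "empty stack" if @items.empty?
    @items.pop
  end

  def depth
    @items.length
  end
end

# Read top to bottom, the tests teach: the simplest fact first,
# then the core behavior, with error cases deferred to the end.
class StackDocumentationTest < Minitest::Test
  def test_a_new_stack_is_empty
    assert_equal 0, Stack.new.depth
  end

  def test_pop_returns_the_most_recently_pushed_item
    assert_equal :b, Stack.new.push(:a).push(:b).pop
  end

  def test_popping_an_empty_stack_is_an_error
    assert_raises(RuntimeError) { Stack.new.pop }
  end
end
```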

I’ve seen enough to be convinced that, surprisingly, Agile design works as described in the first paragraph, and that it doesn’t require superhuman skill. The tests I see - and write - remind me more of the palimpsest than of the ideal. What am I missing that would make true tests-as-documentation as likely as emergent design?

(It’s possible that I demand too much from my documentation.)

Fertile assets

One result of the technical debt workshop is that the attendees are now more likely to talk about the code as an asset to preserve, improve, or destroy than as evidence of debt. See Mike Feathers on code as an asset or Chris McMahon on a project as an investment. Here’s my twist:

Consider a vegetable garden or a forest of pulpwood. The immediate value of either is what you get out of them at harvest time. That’s akin to the value a business gets when the deployed product gets used or a delivered product gets bought.

But there’ll be other harvests, and the way you prepare for this harvest affects the ones that follow. Over time, a well-tended garden gets better and better at growing vegetables (because the soil is looser, you’ve dug under organic matter, etc.). If you tend it poorly - just push seeds into the ground - it’ll never yield what it could.

Product code is like such a fertile asset: the programmers can tend it well or poorly as they prepare for release, and the result will only become obvious over time. The big difference is that the fertility of code varies a lot more than the fertility of a backyard garden. No matter how sloppily I treat my garden, I can’t ruin it as completely as sloppy, rushed coding can.

Barriers to acceptance-test driven design

At the AA Functional Test Tools workshop, we had a little session devoted to this question: Even where “ordinary” unit test-driven design (UTDD) works well, acceptance-test driven design (ATDD) is having more trouble getting traction. Why?

My notes:

  1. Programmers miss the fun / aha! moments / benefits that they get from UTDD.

    1. Especially, there is a difference in scope and cadence of tests. (“Cadence” became a key word people kept coming back to.)
    2. Laborious fixturing, which doesn’t feel as valuable as “real programming”.
    3. No insight into structure of system.
  2. Business people don’t see the value (or ROI) from ATDD

    1. there’s no value for them personally (as opposed, perhaps, to the business)
    2. they are not used to working at that level of precision
    3. no time
    4. they prefer rules to examples
    5. tests are not replacing traditional specs, so they’re extra work.
  3. There is no “analyst type” or tester/analyst to do the work.

  4. There is an analyst type, but their separate existence (apart from the programmers) leads to separate tools and hence general weakness and lack of coordination.

  5. There’s no process/technique for doing ATDD, not like the one for UTDD.

  6. ATDD requires much more collaboration than UTDD (because the required knowledge and skills are dispersed among several people), but it is more fragile (because the benefit is distributed - perhaps unevenly - among those people).

  7. Programmers can be overloaded with masses of analyst- or tester-generated examples. The analyst or testers need to be viewed as teachers, teaching the programmers what they need to know to make right programming decisions. That means sequences of tests that teach, moving from simple-and-illustrative, to more complicated, with interesting-and-illuminating diversions along the way, etc.

Real-life soap opera test

Soap opera tests exaggerate and complicate scenarios in the way that television soap operas exaggerate and complicate real life. Jason Gorman describes something that happened to him that might have been caught by a soap opera test. (It shares the “upgrade happens in the middle of something else” property of the example at the link.)

A couple of years back, I lost my wallet on the way to an off-site meeting with a client.

I called my credit card company to ask them to cancel my card and send me a replacement. It just so happens at that very moment a replacement card was already on its way to me in the post because I’d been upgraded to a gold card.

They canceled the gold card. And didn’t send a replacement card because a replacement card was already on its way.

More on soap opera testing here.

What’s special about teams with low technical debt?

Notes from a session at the workshop on Technical Debt. It was organized around this question: Consider two sets of Agile teams. In the first set, the teams do a fine job at delivering new business value at frequent intervals, but their velocities are slowly decreasing (a sign of increasing technical debt or a declining code asset). The other teams also deliver, but their velocities are all increasing. What visible differences might there be between the two sets of teams?

The following are descriptions of what’s unique about a typical decreasing-debt team:

  • Most of the team produces 98% clean code all the time, but there is some untidiness (whether due to lack of knowledge, lack of effort, or whatever). However, one or two people spend one or two hours a week doing a little extra to compensate. (Chet Hendrickson)

  • Behavior in relation to the code is not apprehensive. They’re not scared of the code they’re working on, they’re not afraid of making mistakes that break the code base. (Michael Feathers)

  • They’re frequently talking about the state of the code. There are many give-and-take technical discussions. They might be heated or not, but participants are always alert and engaged. The content is more important than the tone: they are discussing new things rather than rehashing old issues. The ongoing conversation is making progress. (Many)

  • There are no big refactorings; instead, there are many small ones. (Ron Jeffries?)

  • No code ownership. (I forget who said this)

  • Time is spent on building affordances for the team’s own work. In Richard P. Gabriel’s terminology, they spend time making the code habitable: the code is where they live, so they want it to be livable - put things where they are easy to find, make them easier to use the next time. (Matt Heusser and others)

This isn’t a complete list, only the ones that struck me and that I remembered to write down.

RubyCocoa (etc.) podcast

I forgot to mention that I did a podcast with Daniel Steinberg:

Brian Marick on Ruby Cocoa and Testing
Who’s smart enough to program?

Brian Marick talks to Daniel Steinberg on a wide variety of topics. Brian asks, who’s smart enough to program?, and describes how he met Andy and Dave at the Agile Manifesto summit. He talks about using Lisp, Smalltalk and Ruby, and about introducing programming to testers. Brian also shares the secrets of Domain Specific Languages (DSLs), and of course, his new book on Ruby Cocoa: marrying Ruby with the uber-cool Mac OS X Cocoa GUI framework, and test driven development with Ruby Cocoa code.

The Pragmatic Bookshelf has other podcasts, too.

2008 Gordon Pask Award for Contributions to Agile Practice

We are soliciting nominations for the 2008 Gordon Pask Award for Contributions to Agile Practice.

Each year, the Agile Alliance presents the Gordon Pask Award on the last day of the Agile 200X conference. It recognizes two people whose recent contributions to Agile Practice make them, in the opinion of the Award Committee, people others in the field should emulate. In order to grow the next generation of Agile thought leaders, the award is given to people whose reputation is not yet widespread.

Each year, we fiddle with the award. This year’s fiddling is with the nomination process. It will be roughly modeled after the collaboration or trust model of some forms of microcredit. (See http://en.wikipedia.org/wiki/Grameen_Bank.) We solicit group nominations made by collections of at least five individuals who have personal experience with the nominee.

The nominations should describe that experience and cover such topics as:

  • what ideas the nominee has championed, and what the effect has been.
  • which people the nominee has helped improve the practice of their craft, and how.
  • the ways in which the nominee has fostered community (such as user groups, conferences, and the like).

The nominating group may be people who work with the nominee, but a successful nominee would have had an effect beyond a single company. You’ll forgive the nominating committee if they’re dubious about five consultants from one company nominating a sixth—please find clients who’ve benefited.

Send nominations to paskers@googlegroups.com by Wednesday, August 6. You may revise your nomination at any time up to the deadline, and committee members may suggest ways to make the nomination better before then.

The committee is composed of past recipients of the award (Laurent Bossavit, Steve Freeman, Naresh Jain, Nat Pryce, Jeff Patton, J.B. Rainsberger, and James Shore), plus the original members (Rachel Davies, Dave Thomas, and Brian Marick).

As is always the case, it’s Brian Marick’s fault the nominations are starting late.

Please forward this link to people you’d like to see form a nominating group.

Position statement for functional testing tools workshop

Automated functional testing lives between two sensible testing activities. On the one side, there’s conventional TDD (unit testing). On the other side, there’s manual exploratory testing. It is probably more important to get good at those than it is to get good at automated functional testing. Once you’ve gotten good at them, what does it mean to get good at automated functional testing?

There is some value in thinking through larger-scale issues (such as workflows or system states) before diving into unit testing. There is some value (but not, I think, as much as most people think) in being able to rerun larger-scale functional tests easily. In sum: compared to doing exploratory testing and TDD right, the testing we’re talking about has modest value. Right now, the cost is more than modest, to the point where I question whether a lot of projects are really getting adequate ROI. I see projects pouring resources into functional testing not because they really value it but more because they know they should value it.

This is strikingly similar to, well, the way that automated testing worked in the pre-Agile era: most often a triumph of hope over experience.

My bet is that the point of maximum leverage is in reducing the cost of larger-scale testing (not in improving its value). Right now, all those workflow statements and checks that are so easy to write down are annoyingly hard to implement. Even I, staring at a workflow test, get depressed at how much work it will be to get it just to the point where it fails for the first time, compared to all the other things I could be doing with my time.

Why does test implementation cost so much?

We are taught that Agile development is about working the code base so that arbitrary new requirements are easy to implement. We have learned one cannot accomplish that by “layering” new features onto an existing core. Instead, the core has to be continually massaged so that, at any given moment, it appears as if it were carefully designed to satisfy the features it supports. Over time, that continual massaging results in a core that invites new features because it’s positively poised to change.

What do we do when we write test support code for automated large-scale tests? We layer it on top of the system (either on top of the GUI or on top of some layer below the GUI). We do not work the new code into the existing core—so, in a way that ought not to surprise us, it never gets easier to add tests.

So the problem is to work the test code into the core. The way I propose to do that is to take exploratory testing more seriously: treat it as a legitimate source of user stories we handle just like other user stories. For example, if an exploratory tester wants an “undo” feature for a webapp, implementing it will have real architectural consequences (such as moving from an architecture where HTTP requests call action methods that “fire and forget” HTML to one where requests create Command objects).
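
Here’s a sketch of that shift, with invented names (this is my illustration, not any project’s real code): each request builds a Command object that remembers enough state to reverse itself, and a history of executed commands makes “undo” a single call:

```ruby
# A sketch of moving from fire-and-forget actions to Commands.
# All class and method names here are hypothetical.
class RenameArticleCommand
  def initialize(article, new_title)
    @article = article
    @new_title = new_title
  end

  def execute
    @old_title = @article.title   # remember enough state to undo
    @article.title = @new_title
  end

  def undo
    @article.title = @old_title
  end
end

# Each request runs its command through the history, so "undo"
# becomes one call rather than an architectural afterthought.
class CommandHistory
  def initialize
    @done = []
  end

  def run(command)
    command.execute
    @done.push(command)
  end

  def undo_last
    command = @done.pop
    command.undo if command
  end
end

Article = Struct.new(:title)
article = Article.new("Draft")
history = CommandHistory.new
history.run(RenameArticleCommand.new(article, "Final"))
history.undo_last
puts article.title   # => "Draft"
```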

Why drive the code with exploratory testing stories rather than functional testing stories? I’m not sure. It feels right to me for several nebulous reasons I won’t try to explain here.

Functional testing tools workshop just before Agile 2008


Agile Alliance Functional Testing Tools Open Space Workshop
Call for Participation

Date: Monday, August 4, 2008
Times: 8 AM - 6 PM
Location: Toronto, Ontario, at the Agile2008 venue

Description

This is the second Agile Alliance Functional Testing Tools workshop.
The first, held in October 2007 in Portland Oregon, was a great
success. In this second workshop, we're increasing the size and
moving to an open-space-like format. The primary purpose of this
workshop is still to discuss cutting-edge advancements in and envision
possibilities for the future of automated functional testing tools.

As an open-space style workshop, the content comes from the
participants, and we expect all participants to take an active role.
We're seeking participants who have interest and experience in
creating and/or using automated functional testing tools/frameworks on
Agile projects.

This workshop is sponsored by the Agile Alliance Functional Testing
Tools Program. The mission of this program is to advance the state of
the art of automated functional testing tools used by Agile teams to
automate customer-facing tests.

There is no cost to participate. Participants will be responsible for
their own travel expenses.

Due to room constraints, we can accommodate up to 60 participants.
Registrations will be granted on a first-come, first-served basis to
participants who complete the registration process.

Registering for the AA-FTT Open Space Workshop

We will be using the conference submission system
(http://submissions.agile2008.org) to process the requests for
invitation (RFI). If you're interested in being invited to
participate in this workshop, please:
a) log in to the submission system (create an account if you don't have
one already). NOTE: make sure your email address is correct.
b) click the 'propose a session' link to request an invitation,
filling in the following required fields:
- title: enter RFI 
- stage: select ‘AAFTT’
- session type: select ‘other’
- duration: select any of the values (not relevant for the RFI process)
- summary: briefly answer the following three questions
i) What do you see as the biggest issue for Functional
Testing Tools on Agile projects?
ii) What do you hope to contribute?
iii) What do you hope to get?
c) click ‘create’

The AAFTT stage producers will review the RFI, and send you an
invitation to attend the workshop, along with further instructions for
pre-organizing openspace sessions.

Please register as soon as possible, before the workshop fills up.

Pass This Along
If you know of someone who would be a candidate for this workshop,
please forward this call for participation on to them.