Workshop on technical debt

From Matt Heusser:

Personally, I believe that the “technical debt” metaphor is compelling. It explains the tradeoffs involved in taking shortcuts in terms that management can understand, and it turns something vague and undefined into something more concrete.

At the same time, it is not a perfect metaphor, it has weaknesses, and we have a lot more to explore. For example:
- How do you measure technical debt?
- How do you communicate it?
- How do you reverse it?
- How do you avoid it?
- Is reversing or avoiding technical debt always the right thing to do?

To get to some more satisfying answers, Steve Poling and I are organizing a peer workshop on Technical Debt on August 14/15. The event is hosted by the Calvin College CS Department and is free to participants. Seating is limited to fifteen (at most, twenty) people and is by application.

The Call for Participation is available on-line:
http://www.xndev.com/downloads/WOTD_2008_CFP.rtf

The announcement is available on my blog.

If you’ve got some experiences to share, some stories to relate, or you would just like to explore the issue some more, I would encourage you to apply.

Making the rounds in veterinary circles

Two patients limp into two different American medical clinics with the same complaint. Both have trouble walking and appear to require a hip replacement.

The first patient is examined within the hour, is x-rayed the same day, has a time booked for surgery the next day, and within two days is home recuperating.

The second sees the family doctor after waiting a week for an appointment, then waits eighteen weeks to see a specialist, then gets an x-ray, which isn’t reviewed for another month, and finally has his surgery scheduled for six months from then. Why the different treatment for the two patients?

The first is a Golden Retriever.

The second is a Senior Citizen.

Every time my wife gets a physical, she fumes about how superficial it is compared to the ones she gives to cows.

Unresolved issues in Agile

Here are three unresolved debates that many people seem to have agreed to stop having:

  • When do Agile teams need to be deftly led in the right direction, and when can managers/leaders/ScrumMasters/those-responsible-for-a-budget just sit back and let them figure it out?

  • On the spectrum between intensely focused specialists and generalists who do everything with equal skill, where do we want team members? In what combinations?

  • To what extent does Agile require “better” (along some dimension) people?

The tacit, path-of-least-resistance result is not to my taste. In the worst cases I see and hear of, the answers are:

  • With the increased emphasis on leadership and greater focus on the executive suite, the tilt is toward guided or nudged teams over “self-organizing” teams.

  • What difference does it make? We’ve got the people we’ve got, and we’ll make the best of them.

  • Ditto, and however those people improve themselves and along what axes is going to depend on the happenstance thrown up by the day-to-day work.

Perhaps I exaggerate. Early exposure to Norse mythology has made me hypersensitive to centres not holding and to that famous quote from Hunter S. Thompson:

We had all the momentum; we were riding the crest of a high and beautiful wave. So now, less than five years later, you can go up on a steep hill in Las Vegas and look West, and with the right kind of eyes you can almost see the high-water mark—the place where the wave finally broke and rolled back.

Why my life is hopeless

A large number of the submissions to the Agile2008 Examples stage do not use examples to clarify or explain. As a result, they are too vague to evaluate.

And people wonder why I despair.

Lightning Shows

Summary: Google and blogorrhoea have turned many conference tutorials into anachronisms. I propose an alternative that’s more like a Lightning Talk session.

I’m reviewing a pile of submissions to Agile2008. (You can too!) I just noticed a pattern in my comments. Tutorials often have this structure:

  1. Here’s a problem that needs solving. I have a solution.

  2. Here I demonstrate the solution… You now know enough to replicate the solution back home.

  3. Wrapup, pros & cons, questions.

What I’ve discovered while reviewing is that I’m much less interested in step 2 than I used to be. For a vast number of problems out there, a quick Google session will find me some tutorials or blog entries or screencasts, each of which probably gives me a poor-to-adequate understanding of a solution. I can fairly quickly dip into them to see whether the solution interests me; if it does, I can install the appropriate software, piece together from those middling-to-poor sources the facts a dynamite tutorial would have given me, and likely find a mailing list to help when that’s not enough.

That doesn’t work for truly cutting-edge approaches, but honestly, most of the time I just want to get Capistrano working.

So I find myself recommending something more like lightning talks. The nice thing about lightning talks is that they’re low risk for the listener: if the current talk is boring, the next one is coming in just a few minutes.

So what I want from a non-cutting-edge tutorial is:

  1. Here’s a problem. Here are some of its variants. Here are some constraints on a solution.

  2. Here I quickly demonstrate what one solution, in action, looks like. Here’s what I think about it. Notice the URL I’m putting at the top-right corner of each slide? Copy it down. That’s my collection of favorite links about all these solutions. It’s the only thing you need to copy from this talk; you get everything else there.

  3. Here I quickly demonstrate the next solution…

  • So that’s where the state of the practice stands. What innovations do I see coming that you should watch for? What do I wish were happening that isn’t?

Note that I’m not saying that all tutorials should be like this. Some things aren’t documented well. Others have to be learned in a group. But, to an increasing extent, a tutorial presenter doesn’t have to be the Person Who Knows, but rather an editor who filters down a flood of possibilities into a few high-relevance ones and tells me a little about each.

An occasional alternative to mocks?

I’m test-driving some Rails helpers. A helper is a method that runs in a context full of methods magically provided by Rails. Some of those methods are of the type that’s a classic motivation for mocks or stubs: if you don’t want them to blow up, you have to do some annoying behind-the-scenes setup. (And because Rails does so much magic for you, it can be hard for the novice to have a clue what that setup is for helpers.)

Let’s say I want a helper method named reference_to. Here’s a partial “specification”: it’s to generate a link to one of a Certification’s associated users. The text of the link will be the full name of the user, and the href will be the path to that user’s page. I found myself writing mocks along these lines:

# The helper should ask Rails for the path to the user's page...
mock.should_receive(:user_path).once.
     with(:id => @originator.login).
     and_return("**the right path**")
# ...then wrap that path in a link whose text is the user's full name.
mock.should_receive(:link_to).once.
     with(@originator.full_name, "**the right path**").
     and_return("**correct-text**")

But then it occurred to me: The structure I’m building is isomorphic to the call trace, so why not replace the real methods with recorders? Like this:

  # Recorders standing in for the real Rails helpers: each just writes
  # down its name and its (canonicalized) arguments.
  def user_path(keys)
    "user_path to #{keys.canonicalize}"
  end

  def link_to(*args)
    "link to #{args.canonicalize}"
  end

  # The expected value is built with the same recorders, so comparing
  # strings amounts to comparing call traces.
  def test_a_reference_is_normally_a_link
    assert_equal(link_to(@originator.full_name, user_path(:id => @originator.login)),
                 reference_to(@cert, :originator))
  end

This test determines that:

  • the methods called are the right ones to implement the specified behavior. There’s a clear correspondence between the text of the spec (“generate a link to”) and calls I know I made (link_to).

  • the methods were called in the right order (or in an order-irrelevant way).

  • they were called the right number of times.

  • the right arguments were given.

So, even though my fake methods are really stubs, they tell you the same things mocks would in this case. And I think the test is much easier to grok than code with mocks (especially if I aliased assert_equal to assert_behaves_like).
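For concreteness, here’s one sketch of a reference_to that would pass that test. It’s a guess at an implementation, assuming the association name maps straight to an accessor on the certification and the user responds to full_name and login:

# Hypothetical implementation, not the actual helper.
def reference_to(record, association)
  user = record.send(association)    # e.g., @cert.originator
  link_to(user.full_name, user_path(:id => user.login))
end

Under test, link_to and user_path are the recorders above, so the helper and the assertion build the very same string; in production, they’d be the real Rails helpers.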

What I’m wondering is how often building a structure to capture the behavior of the thing-under-test will be roughly as confidence-building and design-guiding as mocks. The idea seems pretty obvious (even though it took me forever to think of it), so it’s probably either a bad idea or already widely known. Which?

Alternatively, I’m still missing the point of mocks.

P.S. For tests to work, you have to deal with the age-old problems of transient values (like dates or object ids) and indeterminate values (like the order of elements in a printed hash). I’m fortunate in that I’m building HTML snippets out of simple objects, so this seems to suffice:

class Object
  # By default, canonicalizing something is just turning it into a string.
  def canonicalize; to_s; end
end

class Array
  # Arrays canonicalize element by element.
  def canonicalize
    collect { | e | e.canonicalize }
  end
end

class Hash
  # Hashes become lists of pairs in a deterministic (if arbitrary) order:
  # sorted by the object_id of each key.
  def canonicalize
    to_a.sort_by { | a | a.first.object_id }.canonicalize
  end
end
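As a quick (made-up) sanity check: two hashes holding the same pairs canonicalize identically, whatever order they happen to print in, because a symbol’s object_id is stable within a process:

# Hypothetical example of the deterministic ordering.
a = { :id => "sue", :role => "admin" }
b = { :role => "admin", :id => "sue" }
a.canonicalize == b.canonicalize    # => true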

Jeff Patton Agile Usability references

On the agile-usability mailing list, Jeff Patton wrote something very like this:

Past papers I constantly reference include Lynn Miller’s customer involvement in Agile projects paper, Gerrard Meszaros’ Agile usability paper, and last year’s paper from Heather Williams on the UCD perspective, before and after Agile.

All these are great papers - and I know there’s more.

If he thinks they’re great papers, I do too. I’ve been meaning to read two of them for ages.

Next Naked Agilists

The next Naked Agilists teleconference will be Saturday, April 26th, 2008, at 8pm GMT.

A tagging meme reveals I short-change design

There’s one of those tagging memes going around. This one is: “grab the nearest book, open to page 123, go down to the 5th sentence, and type up the 3 following sentences.”

My first two books had pictures on p. 123.

The next three (Impro: Improvisation and the Theatre, AppleScript: the Definitive Guide, and Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life) didn’t have anything that was amusing, enlightening, or even comprehensible out of context. So I kept going, which is cheating, I suppose. The last, How Designers Think, had this:

The designer’s job is never really done and it is probably always possible to do better. In this sense, designing is quite unlike puzzling. The solver of puzzles such as crosswords or mathematical problems can often recognize a correct answer and knows when the task is complete, but not so the designer.

That’s a hit. It made me realize a flaw in my thinking. You see, it reminded me of one of my early, semi-controversial papers, “Working Effectively With Developers” (referred to by one testing consultant as “the ‘how to suck up to programmers’ paper”). In its second section, “Explaining Your Job”, I explicitly liken programmers to problem solvers:

A legendary programmer would be one who was presented a large and messy problem, where simply understanding the problem required the mastery of a great deal of detail, boiled the problem down to its essential core, eliminated ambiguity, devised some simple operations that would allow the complexity to be isolated and tamed, demonstrated that all the detail could be handled by appropriate combinations of those operations, and produced the working system in a week.

Then I point out that this provides a way for testers to demonstrate value. I show a sample problem, then write:

Now, I’d expect any programmer to quickly solve this puzzle - they’re problem solvers, after all. But the key point is that someone had to create the puzzle before someone else could solve it. And problem creation is a different skill than problem solving.

Therefore, the tester’s role can be likened to the maker of a crossword or a mathematical problem: someone who presents a good, fully fleshed-out problem for the programmer to master and solve:

So what a tester does is help the programmer […] by presenting specific details (in the form of test cases) that otherwise would not come to her attention. Unfortunately, you often present this detail too late (after the code is written), so it reveals problems in the abstractions or their use. But that’s an unfortunate side-effect of putting testers on projects too late, and of the unfortunate notion that testing is all about running tests, rather than about designing them. If the programmer had had the detail earlier, the problems wouldn’t have happened.

Despite this weak 1998 gesture in the rough direction of TDD, I still have a rather waterfall conception of things: tester presents a problem, programmer solves it, we all go home.

But what that’s missing is my 2007 intellectual conception of a project: one that aims to be less wrong than yesterday, getting progressively closer to a satisfactory answer that is discovered or refined along the way. In short (going back to the original quote), a conception of the project as a matter of design, design that happens at every level of detail and involves everyone. That whole-project design is something much trickier than mere puzzle-solving.

I used the word “intellectual” in the previous paragraph because I realize that I’m still rather emotionally attached to the idea of presenting a problem, solving it, and moving on. For example, I think of a test case as a matter of pushing us in a particular direction, only indirectly as a way of uncovering more questions. When I think about how testing+programming works, or about how product director + team conversations work, the learning is something of a side effect. I’m strong on doing the thing, weak on the mechanics of learning (a separate thing from the desire to learn).

That’s not entirely bad. I’m glad of my strong aversion to spending much time talking and re-talking about what we’ll build if we ever get around to building anything, and of my preference for doing something and then taking stock once we have more concrete experience. But to the extent that it’s a habit rather than a conscious preference, it’s limiting. I’ll have to watch out for it.

Agile Coach Camp (May 30 - June 1, Grand Rapids, MI, USA)

Agile Coach Camp is about creating a network of practitioners who are striving to push the limits in guiding software development teams, while staying true to the values and principles at the core of the Agile movement. We’ve invited practitioners who, like you, are passionate about their work, active in the field and willing to share what they’ve learned.

Do you have a technique or practice worth sharing with your peers? Or an idea you’d like to test out with some leaders in the community? Are you facing challenges and want to get some perspective from other practitioners, or hear how they do things? If you feel you’d benefit from connecting with 80-100 ScrumMasters, XP Coaches, Trainers, Change Agents, and Mentors to talk, draw, argue, and explore ideas, then this conference is for you.

You can learn all about AgileCoachCamp on this wiki.

I’m writing my position paper now. I think it will be about how to avoid doing the things that cause the legitimate part of the Post-Agile reaction. (And I do think some parts are definitely legitimate.)

UPDATE: the position paper.