Archive for the 'conferences' Category

2008 Gordon Pask Award for Contributions to Agile Practice

We are soliciting nominations for the 2008 Gordon Pask Award for Contributions to Agile Practice.

Each year, the Agile Alliance awards the Gordon Pask Award on the last day of the Agile 200X conference. It recognizes two people whose recent contributions to Agile Practice make them, in the opinion of the Award Committee, people others in the field should emulate. In order to grow the next generation of Agile thought leaders, the award is given to people whose reputation is not yet widespread.

Each year, we fiddle with the award. This year’s fiddling is with the nomination process, which will be roughly modeled after the collaboration or trust model of some forms of microcredit. We solicit group nominations made by collections of at least five individuals who have personal experience with the nominee.

The nominations should describe that experience and cover such topics as:

  • what ideas the nominee has championed, and what the effect has been.
  • which people the nominee has helped improve the practice of their craft, and how.
  • the ways in which the nominee has fostered community (such as user groups, conferences, and the like).

The nominating group may be people who work with the nominee, but a successful nominee would have had an effect beyond a single company. You’ll forgive the nominating committee if they’re dubious about five consultants from one company nominating a sixth—please find clients who’ve benefited.

Send nominations to by Wednesday, August 6. You may revise your nomination at any time up to the deadline, and committee members may suggest ways to make the nomination better before then.

The committee is composed of past recipients of the award (Laurent Bossavit, Steve Freeman, Naresh Jain, Nat Pryce, Jeff Patton, J.B. Rainsberger, and James Shore), plus the original members (Rachel Davies, Dave Thomas, and Brian Marick).

As is always the case, it’s Brian Marick’s fault the nominations are starting late.

Please forward this link to people you’d like to see form a nominating group.

Position statement for functional testing tools workshop

Automated functional testing lives between two sensible testing activities. On the one side, there’s conventional TDD (unit testing). On the other side, there’s manual exploratory testing. It is probably more important to get good at those than it is to get good at automated functional testing. Once you’ve gotten good at them, what does it mean to get good at automated functional testing?

There is some value in thinking through larger-scale issues (such as workflows or system states) before diving into unit testing. There is some value (but not, I think, as much as most people think) in being able to rerun larger-scale functional tests easily. In sum: compared to doing exploratory testing and TDD right, the testing we’re talking about has modest value. Right now, the cost is more than modest, to the point where I question whether a lot of projects are really getting adequate ROI. I see projects pouring resources into functional testing not because they really value it but more because they know they should value it.

This is strikingly similar to, well, the way that automated testing worked in the pre-Agile era: most often a triumph of hope over experience.

My bet is that the point of maximum leverage is in reducing the cost of larger-scale testing (not in improving its value). Right now, all those workflow statements and checks that are so easy to write down are annoyingly hard to implement. Even I, staring at a workflow test, get depressed at how much work it will be to get it just to the point where it fails for the first time, compared to all the other things I could be doing with my time.
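To make the cost gap concrete, here’s a hedged sketch (every name in it is invented for illustration): the workflow statement itself is a few readable lines, but nothing can even fail until someone implements a driver underneath each step.

```ruby
# The workflow test reads like this -- easy to write down:
#
#   start_an_order
#   add_item "book"
#   check_out
#   order placed?
#
# But before it can fail for the first time, someone must implement
# every step. This driver is invented for illustration; a real one
# would poke at a GUI or an HTTP layer, which is where the cost piles up.
class OrderWorkflowDriver
  def initialize
    @cart = []
    @orders = []
  end

  def start_an_order
    @cart.clear
  end

  def add_item(name)
    @cart << name
  end

  def check_out
    @orders << @cart.dup
    @cart.clear
  end

  def order_placed?
    !@orders.empty?
  end
end

driver = OrderWorkflowDriver.new
driver.start_an_order
driver.add_item("book")
driver.check_out
raise "expected an order" unless driver.order_placed?
```

The four-line test script is the part testers happily write; the class below it is the part projects pour resources into.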

Why does test implementation cost so much?

We are taught that Agile development is about working the code base so that arbitrary new requirements are easy to implement. We have learned one cannot accomplish that by “layering” new features onto an existing core. Instead, the core has to be continually massaged so that, at any given moment, it appears as if it were carefully designed to satisfy the features it supports. Over time, that continual massaging results in a core that invites new features because it’s positively poised to change.

What do we do when we write test support code for automated large-scale tests? We layer it on top of the system (either on top of the GUI or on top of some layer below the GUI). We do not work the new code into the existing core—so, in a way that ought not to surprise us, it never gets easier to add tests.

So the problem is to work the test code into the core. The way I propose to do that is to take exploratory testing more seriously: treat it as a legitimate source of user stories we handle just like other user stories. For example, if an exploratory tester wants an “undo” feature for a webapp, implementing it will have real architectural consequences (such as moving from an architecture where HTTP requests call action methods that “fire and forget” HTML to one where requests create Command objects).
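The architectural consequence described above can be sketched in a few lines. This is an illustration under invented names, not the post’s actual design: instead of an action method that “fires and forgets” its effect, the request builds a Command object that records enough state to reverse itself.

```ruby
# A Command object remembers what it changed, so "undo" becomes possible.
# (RenameCommand and the record hash are invented for illustration; a real
# webapp would create such commands in its request-handling layer.)
class RenameCommand
  def initialize(record, new_name)
    @record = record
    @new_name = new_name
  end

  def execute
    @old_name = @record[:name]   # remember the state we're about to clobber
    @record[:name] = @new_name
  end

  def undo
    @record[:name] = @old_name
  end
end

history = []                  # the app keeps executed commands...
record  = { name: "draft" }

command = RenameCommand.new(record, "final")
command.execute
history << command

history.pop.undo              # ...so the tester's "undo" just walks back
```

The point is that supporting the exploratory tester’s story forces the core into a shape (explicit, reversible operations) that also happens to make automated tests cheaper to hook in.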

Why drive the code with exploratory testing stories rather than functional testing stories? I’m not sure. It feels right to me for several nebulous reasons I won’t try to explain here.

Functional testing tools workshop just before Agile 2008

Agile Alliance Functional Testing Tools Open Space Workshop
Call for Participation

Date: Monday, August 4, 2008
Time: 8 AM - 6 PM
Location: Toronto, Ontario, at the Agile2008 venue


This is the second Agile Alliance Functional Testing Tools workshop.
The first, held in October 2007 in Portland, Oregon, was a great
success. In this second workshop, we're increasing the size and
moving to an open-space-like format. The primary purpose of this
workshop is still to discuss cutting-edge advancements in and envision
possibilities for the future of automated functional testing tools.

As an open-space style workshop, the content comes from the
participants, and we expect all participants to take an active role.
We're seeking participants who have interest and experience in
creating and/or using automated functional testing tools/frameworks on
Agile projects.

This workshop is sponsored by the Agile Alliance Functional Testing
Tools Program. The mission of this program is to advance the state of
the art of automated functional testing tools used by Agile teams to
automate customer-facing tests.

There is no cost to participate. Participants will be responsible for
their own travel expenses.

Due to room constraints, we can accommodate up to 60 participants.
Registrations will be granted on a first-come, first-served basis to
participants who complete the registration process.

Registering for the AA-FTT Open Space Workshop

We will be using the conference submission system
( to process the requests for
invitation (RFI). If you're interested in being invited to
participate in this workshop, please:
a) login to the submission system (create an account if you don't have
one already). NOTE: make sure your email address is correct.
b) click the 'propose a session' link to request an invitation,
filling in the following required fields:
- title: enter RFI 
- stage: select ‘AAFTT’
- session type: select ‘other’
- duration: select any of the values (not relevant for the RFI process)
- summary: briefly answer the following three questions
i) What do you see as the biggest issue for Functional
Testing Tools on Agile projects?
ii) What do you hope to contribute?
iii) What do you hope to get?
c) click ‘create’

The AAFTT stage producers will review the RFI, and send you an
invitation to attend the workshop, along with further instructions for
pre-organizing open space sessions.

Please register as soon as possible, before the workshop fills up.

Pass This Along
If you know of someone who would be a good candidate for this workshop,
please forward this call for participation to them.

Open Space as we do it: good, but not for me

I’ve come to think of Open Space workshops, as used in the Agile community, as having three purposes.

  1. They give people a sense of belonging — “I’ve finally met people who care about what I care about! / are grappling with the problems I’m having!” This is by far, to my mind, the most noticeable result.

  2. They provide individuals who have problems with help. The effect can be anything from a shallow “I could try this…” to a real Aha! moment.

  3. On rarer occasions, people who have decided to devote themselves to a broader topic can get, over the course of the workshop, a solid sense of what the most important issues are. That issue survey informs their later efforts, though I don’t think it itself provides a solution or a plan toward a solution. (The survey of Agile testing issues that Bret Pettichord, a few others, and I got at the 2004 XP/AU Open Space was invaluable.)

I think Open Space in our community is not a tool for solving a big problem by gathering together a group of people devoted to it and having them tackle, in small & fluctuating groups, those parts or aspects of it where they can best apply their talents. I believe that was Open Space’s original purpose.

My disenchantment with Open Space is mostly a result of my own poor expectations and personality traits. I am by nature a “let’s solve some big problem” kind of person in any group setting, so I’m attracted to that original purpose of Open Space Technology and disappointed when it doesn’t emerge. I’m also not a person who’s moved by a sense of belonging (except in an abstract, “community of ideas” way).

For large-scale problem-solving and the pushing of abstract ideas, I prefer the LAWST format. Because the sessions are focused on asking questions of a particular person who thinks she has an idea to share, you can get a longer and more in-depth understanding of a topic. But because the schedule is even less planned than in Open Space, there’s more opportunity for the different sessions to cohere into a sense that the workshop was about some-thing and that conclusions were reached about that thing.

How that works:

  1. In one of the presentations, someone says something that strikes a chord.
  2. The collective intelligence amplifies that chord through questioning.
  3. People who have more to say about that chord put themselves on the schedule.
  4. People who don’t have more to say take themselves off, or desultory questioning shortens their time.

LAWST is a pretty effective blend of structure and not-structure. Good job, Brian and Cem.

Agile 2008 early bird registrations

I notice from the registration page that there are only 68 early bird slots left for Agile 2008. That means the price will go up by USD 200 pretty soon.

Conference of the Association of Software Testing 2008

The third annual Conference of the Association of Software Testing (CAST) will be held in Toronto, July 14-16, 2008. Early bird registration ends May 30. Here’s what Michael Bolton has written about it:

A colleague recently pointed out that an important mission of our community is to remind people–and ourselves–that testing doesn’t have to suck.

Well, neither do testing conferences. CAST 2008 is the kind of conference that I’ve always wanted to attend. The theme is “Beyond the Boundaries: Interdisciplinary Approaches to Software Testing”, and the program is incredibly eclectic and diverse. Start with the keynotes: Jerry Weinberg on Lessons from the Past to Carry into the Future; Cem Kaner on The Value of Checklists and the Danger of Scripts: What Legal Training Suggests for Testers; Robert Sabourin (with Anne Sabourin) on Applied Testing Lessons from Delivery Room Labor Triage (there’s a related article in this month’s Better Software magazine); and Brian Fisher on The New Science of Visual Analytics. Track sessions include talks relating testing to improv theatre (Adam White), to music (yours truly and Jonathan Kohl), to finance and accounting (Doug Hoffman), to wargaming and Darwinian evolution (Bart Broekman, author of /Testing Embedded Software/ and one of the co-authors of the /TMap Next/ book), to civil engineering (Scott Barber), to scientific software (Diane Kelly and Rebecca Sanders), to magic (Jeremy Kominar), to file systems (Morven Gentleman), to data warehousing (Steve Richardson and Adam Geras), to data visualization (Martin Taylor)… and to four-year-olds playing lacrosse (Adam Goucher). There will be lightning talks and a tester competition. Jerry Weinberg will be doing a one-day tutorial workshop, as will Hung Nguyen, Scott Barber, and Julian Harty.

Yet another feature of the conference is that Jerry is launching his book on testing, /Perfect Software and Other Testing Myths/. I read an early version of it, and I’m waiting for it with bated breath. It’s a book that we’ll all want to read, and after we’re done, we’ll want to hand to people who are customers of testing. For some, we’ll want to tie them to a chair and /read it to them/.

The conference hotel is inexpensive, the food in Toronto is great, the nightlife is wonderful, the music is excellent…

More Information
You can find details on the program at

You can find information on the venue and logistics at

Those from outside Canada should look at

You can get registration information at

Paying the Way

If you need help persuading your company to send you to the conference, check out this:

And if all that fails, you can likely write off the cost of the conference against your taxes, even if you’re an employee. (I am not a tax professional, but INC magazine reports that you can write off expenses to “maintain or improve skills required in your present employment”. Americans should see IRS Publication 970, Section 12, and ask their accountant!)

Come Along and Spread The Word!


So (if necessary) get your passports in order, take advantage of early bird registration (if you register in the next two weeks), and come join us. In addition (and I’m asking a favour here), please please /please/ tell your colleagues, both in your company and outside, about CAST. We want to share some great ideas on testing and other disciplines, and we want to make this the best CAST ever. And the event will only be improved by your presence.

So again, please spread the word, and come if you can.

Workshop on technical debt

From Matt Heusser:

Personally, I believe that the “technical debt” metaphor is compelling. It explains the tradeoffs involved in taking shortcuts in terms that management can understand, and can take the vague and undefined and turn it into something more concrete.

At the same time, it is not a perfect metaphor, it has weaknesses, and we have a lot more to explore. For example:
- How do you measure technical debt?
- How do you communicate it?
- How do you reverse it?
- How do you avoid it?
- Is reversing or avoiding technical debt always the right thing to do?

To get to some more satisfying answers, Steve Poling and I will be organizing a peer workshop on Technical Debt on August 14/15. The event is hosted by the Calvin College CS Department and free to participants. Seating is limited to fifteen (at most, twenty) people and is by application.

The Call for Participation is available on-line:

The announcement is available on my blog.

If you’ve got some experiences to share, some stories to relate, or you would just like to explore the issue some more, I would encourage you to apply.

Why my life is hopeless

A large number of the submissions to the Agile2008 Examples stage do not use examples to clarify or explain. As a result, they are too vague to evaluate.

And people wonder why I despair.

Lightning Shows

Summary: Google and blogorrhoea have turned many conference tutorials into anachronisms. I propose an alternative that’s more like a Lightning Talk session.

I’m reviewing a pile of submissions to Agile2008. (You can too!) I just noticed a pattern in my comments. Tutorials often have this structure:

  1. Here’s a problem that needs solving. I have a solution.

  2. Here I demonstrate the solution… You now know enough to replicate the solution back home.

  3. Wrapup, pros & cons, questions.

What I’ve discovered while reviewing is that I’m much less interested in step 2 than I used to be. For a vast number of problems out there, a quick Google session will find me tutorials, blog entries, or screencasts, each of which probably gives me a poor-to-adequate understanding of a solution. I can fairly quickly dip into them to see if the solution interests me. If it does, I can install the appropriate software, assemble the substance of what would be a dynamite tutorial from the middling-to-poor ones I’ve found, and likely find a mailing list to help me when that’s not enough.

That doesn’t work for real cutting-edge approaches, but honestly most of the time I just want to get Capistrano working.

So I find myself recommending something more like lightning talks. The nice thing about lightning talks is that they’re low risk for the listener: if the current talk is boring, the next one is coming in just a few minutes.

So what I want from a non-cutting-edge tutorial is:

  1. Here’s a problem. Here are some of its variants. Here are some constraints on a solution.

  2. Here I quickly demonstrate what one solution, in action, looks like. Here’s what I think about it. Notice the URL I’m putting at the top-right corner of each slide? Copy it down. That’s my collection of favorite links about all these solutions. It’s the only thing you need to copy from this talk; you get everything else there.

  3. Here I quickly demonstrate the next solution…

  4. So that’s where the state of the practice stands. What innovations do I see coming that you should watch for? What do I wish was happening that isn’t?

Note that I’m not saying that all tutorials should be like this. Some things aren’t documented well. Others have to be learned in a group. But, to an increasing extent, a tutorial presenter doesn’t have to be the Person Who Knows, but rather an editor who filters down a flood of possibilities into a few high-relevance ones and tells me a little about each.

Next Naked Agilists

The next Naked Agilists teleconference will be Saturday, April 26, 2008, at 8pm GMT.