Archive for the 'agile' Category

What is Agile?—beats me, but I know it when I see it

Cory Foy has started an Agile FAQ. His first question is “What is Agile?” Now, I’m notorious for wandering away from definitional arguments, and I like the answers Cory already has, but I think I have something to add. I have an incomplete and informal list of questions I ask myself about teams to gauge whether they really “get it”:

  • Is there spontaneous chatter? (Most often work-related, but a helping of casual chatter too.)

  • Is there hustle?

  • Are people afraid of being wrong?

  • Do people readily ask for help? Do people readily volunteer help? Even—especially—when they could say, “that’s not my job”?

  • Do I notice people giving in for the sake of the group? (Such as deciding to try something someone else’s way as a way to reduce tension.)

  • When people talk about solving problems, do they talk in terms of nudging something that’s wrong in the direction of rightness, or in terms of solving the problem once and for all?

  • Is their response to a problem to increase the visibility of information? Do they seem to think that if people know about a problem, and are continuously reminded of it, they’re likely to just naturally act to solve it?

  • Are they a touch monomaniacal about getting working software out there, or at least being able to show someone something new that actually works?

  • Do they disparage the “business side of the house”, or do they have active sympathy with the people there?

  • Do they act helpless? Or as if they have power? Do they give up on problems because “they” will never let them be fixed? (“They” being management, the cubicle police, the configuration management board, etc.)

  • Do they want to be able to take pride in their work? (Or are they cynical or passive about whatever it is they do?) And do they take pride?

I don’t care if I know what Agile is if I know it when I see it. I don’t know to what extent certain values, practices, techniques, or tools influence my answer to the question “Is this team Agile?” To some extent, for sure.

Embracing change

Embracing Agile had me make some big changes, some fundamental changes. As a programmer, my role model changed from the lone genius with OCD to a gregarious social animal (but still hoping for just a touch of OCD). As a tester, I stopped thinking of myself as a dispassionate judge and began to consider myself an involved insider.

For the work-obsessed, such changes are more than just new roles: they’re changes in self-conception. I think it neither unfair nor insulting to observe that some people avoid Agile projects because they prefer not to change anything that close to their core identity.

Such big changes aren’t restricted to programmers and testers. Consider the team manager redesignated Scrum Master: she’s no longer The Boss.

People talk a lot about “the Agile Enterprise.” What big changes, fundamental changes, changes in self-conception would such a thing bring to the CxO? Will it? If not, why not?

Project testing growth path

In response to a potential client, I wrote something very like the following. The interesting thing is that I’m placing more emphasis on manual exploratory testing. It’s not so much that I suddenly realize its importance as that automated business-facing tests continue to be hard to implement and adopt. More on that anon.

A short sketch of a reasonable growth path would go like this:

  1. Get the programmers sold on test-driven design. How difficult that is depends mainly on how much legacy code you have (where legacy code is, as Michael Feathers says, code without unit tests). Legacy code is hard to test, so programmers don’t see the benefits of testing as quickly, so it takes that much more discipline to get over what’s always a higher hump than with greenfield code. (Michael Feathers’ Working Effectively with Legacy Code is the gold standard book, though there’s an important strategy—“strangler applications”—that’s not covered in depth. Also, I’m the track chair for a new Legacy Code track at Agile2008; I just asked Feathers to give the keynote, and he says he has “a number of surprising proposals about how to make things better”.)

    I’ve come to feel that the most important thing to get across to programmers is what it’s like to work with code built on a solid base of tests. If they understand that early on, they’ll have a clear idea of what to shoot for, which helps with the pain of legacy code. I wrote a workbook to that end.

  2. At the same time, move testers away from scripted manual tests (if that’s what they’re doing) and toward a more exploratory style of manual testing. The people who are strongest on exploratory testing in Agile are Jonathan Kohl, Elisabeth Hendrickson, and Michael Bolton.

  3. As programmers do more unit testing, they will become accustomed to changing their design and adding code in support of their own testing. It becomes more natural for them to do the same for the testers, allowing them to do “automation-assisted exploratory testing”. (Kohl writes about this.) I like to see some of the testers learn a scripting language to help with that. Ruby is my favorite, for a variety of reasons. I wrote a book to help testers learn it.

  4. Over this period, the testers and programmers should shed most animosity or wariness they have toward each other. They’re working together and doing things to help each other. It helps a lot if they sit together.

  5. Once the programmers are sold on test-driven design, they will start wishing that the product owners would supplement what they say about what they want with clear, concrete, executable examples of what they want. That is: tests, written in the language of the business. That isn’t as easy to do as we thought it would be five years ago, but it can be done more or less well. Often, the testers will find a new role as helpers to the product owners. For example, they should get involved early enough to ask questions that lead to tests that prevent bugs (which is better than discovering the bugs after you’ve paid some programmers to implement them).

  6. Throughout this, some kinds of testing (like performance testing) don’t change all that much. For performance testing, I trust Scott Barber.
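Step 3’s “automation-assisted exploratory testing” can be hard to picture, so here’s a minimal Ruby sketch under invented names (Account, account_at_edge are mine, purely for illustration): rather than clicking through setup screens, a tester writes a small script that drives the code straight to an interesting state, then explores by hand from there.

```ruby
# Hypothetical example: a tester's setup helper for exploratory testing.
# A short script drives the code directly to a state worth exploring.

class Account
  attr_reader :balance

  def initialize
    @balance = 0
  end

  def deposit(cents)
    @balance += cents
  end

  def withdraw(cents)
    raise "insufficient funds" if cents > @balance
    @balance -= cents
  end
end

# Put an account right on the boundary between "fine" and "overdrawn".
# Boundaries are exactly where exploratory testers like to poke around.
def account_at_edge(cents = 1)
  account = Account.new
  account.deposit(cents)
  account
end

account = account_at_edge
account.withdraw(1)   # drains the account exactly
puts account.balance  # 0
```

The point isn’t the toy domain; it’s that once programmers are designing for their own tests (step 1), adding hooks like this for the testers is a small step, and a tester with a little Ruby can reach states that would take many minutes of manual setup.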

As a side note: I’m quite fond of the new The Art of Agile Development by Shore & Warden: enough to publicly declare that I’ll bring a copy to every team I work with. Lots of good from-the-trenches experience summarized there.

Unresolved issues in Agile

Here are three unresolved debates that many people seem to have agreed to stop having:

  • When do Agile teams need to be deftly led in the right direction, and when can managers/leaders/ScrumMasters/those-responsible-for-a-budget just sit back and let them figure it out?

  • On the spectrum between intensely-focused specialists and generalists who do everything with equal skill, where do we want team members? In what combinations?

  • To what extent does Agile require “better” (along some dimension) people?

The tacit, path-of-least-resistance result is not to my taste. In the worst cases I see and hear of, the answers are:

  • With the increased emphasis on leadership and greater focus on the executive suite, the tilt is toward guided or nudged teams over “self-organizing” teams.

  • What difference does it make? We’ve got the people we’ve got, and we’ll make the best of them.

  • Ditto, and however those people improve themselves and along what axes is going to depend on the happenstance thrown up by the day-to-day work.

Perhaps I exaggerate. Early exposure to Norse mythology has made me hypersensitive to centres not holding, and to that famous quote from Hunter S. Thompson:

We had all the momentum; we were riding the crest of a high and beautiful wave. So now, less than five years later, you can go up on a steep hill in Las Vegas and look West, and with the right kind of eyes you can almost see the high-water mark—the place where the wave finally broke and rolled back.

Jeff Patton Agile Usability references

On the agile-usability mailing list, Jeff Patton wrote something very like this:

The past papers I constantly reference are Lynn Miller’s customer involvement in Agile projects paper, Gerard Meszaros’ Agile usability paper, and last year’s paper from Heather Williams on the UCD perspective before and after Agile.

All these are great papers - and I know there’s more.

If he thinks they’re great papers, I do too. I’ve been meaning to read two of them for ages.

Next Naked Agilists

The next Naked Agilist tele-conference will be Saturday April 26th 2008 at 8pm GMT.

A tagging meme reveals I short-change design

There’s one of those tagging memes going around. This one is: “grab the nearest book, open to page 123, go down to the 5th sentence, and type up the 3 following sentences.”

My first two books had pictures on p. 123.

The next three (Impro: Improvisation and the Theatre, AppleScript: the Definitive Guide, and Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life) didn’t have anything that was amusing, enlightening, or even comprehensible out of context. So I kept going, which is cheating I suppose. The last, How Designers Think, had this:

The designer’s job is never really done and it is probably always possible to do better. In this sense, designing is quite unlike puzzling. The solver of puzzles such as crosswords or mathematical problems can often recognize a correct answer and knows when the task is complete, but not so the designer.

That’s a hit. It made me realize a flaw in my thinking. You see, it reminded me of one of my early, semi-controversial papers, “Working Effectively With Developers” (referred to by one testing consultant as “the ‘how to suck up to programmers’ paper”). In its second section, “Explaining Your Job”, I explicitly liken programmers to problem solvers:

A legendary programmer would be one who was presented a large and messy problem, where simply understanding the problem required the mastery of a great deal of detail, boiled the problem down to its essential core, eliminated ambiguity, devised some simple operations that would allow the complexity to be isolated and tamed, demonstrated that all the detail could be handled by appropriate combinations of those operations, and produced the working system in a week.

Then I point out that this provides a way for testers to demonstrate value. I show a sample problem, then write:

Now, I’d expect any programmer to quickly solve this puzzle - they’re problem solvers, after all. But the key point is that someone had to create the puzzle before someone else could solve it. And problem creation is a different skill than problem solving.

Therefore, the tester’s role can be likened to the maker of a crossword or a mathematical problem: someone who presents a good, fully fleshed-out problem for the programmer to master and solve:

So what a tester does is help the programmer […] by presenting specific details (in the form of test cases) that otherwise would not come to her attention. Unfortunately, you often present this detail too late (after the code is written), so it reveals problems in the abstractions or their use. But that’s an unfortunate side-effect of putting testers on projects too late, and of the unfortunate notion that testing is all about running tests, rather than about designing them. If the programmer had had the detail earlier, the problems wouldn’t have happened.

Despite this weak 1998 gesture in the rough direction of TDD, I still have a rather waterfall conception of things: tester presents a problem, programmer solves it, we all go home.

But what that’s missing is my 2007 intellectual conception of a project as aiming to be less wrong than yesterday, to get progressively closer to a satisfactory answer that is discovered or refined along the way. In short—going back to the original quote—a conception of the project as a matter of design that happens at every level of detail and involves everyone. That whole-project design is something much trickier than mere puzzle-solving.

I used the word “intellectual” in the previous paragraph because I realize that I’m still rather emotionally attached to the idea of presenting a problem, solving it, and moving on. For example, I think of a test case as a matter of pushing us in a particular direction, only indirectly as a way of uncovering more questions. When I think about how testing+programming works, or about how product director + team conversations work, the learning is something of a side effect. I’m strong on doing the thing, weak on the mechanics of learning (a separate thing from the desire to learn).

That’s not entirely bad—I’m glad of my strong aversion to spending much time talking and re-talking about what we’ll build if we ever get around to building anything, of my preference for doing something and then taking stock once we have more concrete experience—but to the extent that it’s a habit rather than a conscious preference, it’s limiting. I’ll have to watch out for it.

Agile Coach Camp (May 30 - June 1, Grand Rapids, MI, USA)

Agile Coach Camp is about creating a network of practitioners who are striving to push the limits in guiding software development teams, while staying true to the values and principles at the core of the Agile movement. We’ve invited practitioners who, like you, are passionate about their work, active in the field and willing to share what they’ve learned.

Do you have a technique or practice worth sharing with your peers? Or an idea you’d like to test out with some leaders in the community? Are you facing challenges and want to get some perspective from other practitioners, or hear how they do things? If you feel you’d benefit from connecting with 80-100 ScrumMasters, XP Coaches, Trainers, Change Agents and Mentors to talk, draw, argue and explore ideas, then this conference is for you.

You can learn all about AgileCoachCamp on this wiki.

I’m writing my position paper now. I think it will be on avoiding doing whatever things cause the legitimate part of the Post-Agile reaction. (And I do think some parts are definitely legitimate.)

UPDATE: the position paper.

Comment on Naked Agilist podcast

Scott Finney has an interesting comment on my segment of the Naked Agilist podcast:

I listened to the Naked Agilists latest podcast the other day.

Brian Marick’s pitch in particular sparked my interest. He compares agile adoption against Geoffrey Moore’s technology adoption curve and observes a discrepancy; namely the seeming dearth of pragmatists. As I listened, a fundamental question immediately sprung to mind:

- Do we really have a case where the standard curve doesn’t fit, or
- Are we looking at the data incorrectly?

He suspects the latter. He may be right.

Kanban-esque scheduling

Kanban-inspired scheduling is the most interesting idea to come along in a while. Read the links above for a fuller description, but the short version is that there’s a fixed pipe of work items, where each is larger than a traditional story. (As I’ve noted before, small stories compensate for our inability to estimate by shifting work to the product director.) The larger work items, often called “minimum marketable features (MMFs)”, contribute a more satisfying chunk of business value than a small story can.

Rather than scheduling an iteration at the beginning, you just keep the pipe full. When one MMF is done, the product director puts the next one in.
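The pull mechanic above can be sketched in a few lines of Ruby. All the names here (Pipe, pull, finish) are my invention, not anything from the kanban literature: the pipe holds a fixed number of in-progress MMFs, and finishing one immediately makes room to pull the next from the backlog.

```ruby
# A hypothetical "fixed pipe" of minimum marketable features.
class Pipe
  attr_reader :in_progress

  def initialize(limit)
    @limit = limit        # how many MMFs may be in flight at once
    @in_progress = []
  end

  def full?
    @in_progress.size >= @limit
  end

  # The product director keeps the pipe topped up from the backlog.
  def pull(backlog)
    @in_progress << backlog.shift until full? || backlog.empty?
  end

  # Shipping an MMF frees a slot for the next pull.
  def finish(mmf)
    @in_progress.delete(mmf)
  end
end

backlog = ["reports", "billing", "search", "import"]
pipe = Pipe.new(2)
pipe.pull(backlog)        # "reports" and "billing" are now in flight
pipe.finish("reports")    # one MMF ships...
pipe.pull(backlog)        # ...and "search" is pulled in at once
```

Note what’s absent: there is no iteration boundary anywhere in this sketch, which is exactly the source of the discipline worry below.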

It has an appealing simplicity, and I think it fits in well with Jeff Patton’s recent focus on not letting incremental development squeeze out iterative development. (Jeff thinks so too.)

What worries me is discipline. An iteration is a fairly firm commitment to produce X amount of value in Y amount of time. An MMF is, too, but there isn’t (as far as I understand) any moment when some defined set of MMFs is either done or not done. Their “lifetimes” always overlap. Since the people working on this are savvy, I’m sure they have ways for people to take stock of how they’re doing, whether they’re getting faster or slower, etc. I just don’t know what those ways are, and that’s the only thing keeping me from dipping my toe into the water.