Barriers to acceptance-test driven design
At the AA Functional Test Tools workshop, we had a little session devoted to this question: Even where “ordinary” unit test-driven design (UTDD) works well, acceptance-test driven design (ATDD) is having more trouble getting traction. Why not?
My notes:
- Programmers miss the fun / aha! moments / benefits that they get from UTDD.
  - Especially, there is a difference in the scope and cadence of tests. (“Cadence” became a key word people kept coming back to.)
  - Laborious fixturing, which doesn’t feel as valuable as “real programming”.
  - No insight into the structure of the system.
- Business people don’t see the value (or ROI) from ATDD.
  - There’s no value for them personally (as perhaps opposed to the business).
  - They are not used to working at that level of precision.
  - No time.
  - They prefer rules to examples.
  - Tests are not replacing traditional specs, so they’re extra work.
- There is no “analyst type” or tester/analyst to do the work.
- There is an analyst type, but their separate existence (from programmers) leads to separate tools, and hence to general weakness and a lack of coordination.
- There’s no process/technique for doing ATDD comparable to the one for UTDD.
- ATDD requires much more collaboration than UTDD (because the required knowledge and skills are dispersed among several people), but it is more fragile (because the benefit is distributed, perhaps unevenly, among those people).
- Programmers can be overloaded with masses of analyst- or tester-generated examples. Analysts and testers need to be viewed as teachers, teaching the programmers what they need to know to make the right programming decisions. That means sequences of tests that teach, moving from simple-and-illustrative to more complicated, with interesting-and-illuminating diversions along the way (a sketch follows this list).
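To make that last note concrete, here’s a minimal sketch of what such a teaching sequence might look like, written as plain Python tests against a stubbed pricing function. The discount story, the rule that discounts add rather than compound, and all the names are illustrative assumptions, not examples from the workshop.

```python
# A teaching sequence of acceptance tests for a hypothetical
# discount-pricing story. The ordering is the point: each test adds
# one idea, and the last one teaches the decision the earlier tests
# leave ambiguous.

def price(amount, customer_type="regular"):
    """Stand-in for the system under test (stubbed so the sketch runs standalone)."""
    discount = 0.10 if customer_type == "loyal" else 0.0
    if amount >= 1000:
        discount += 0.05
    return round(amount * (1 - discount), 2)

# 1. Simple and illustrative: the everyday case, no discounts.
def test_regular_customer_pays_list_price():
    assert price(100) == 100.00

# 2. More complicated: one rule at a time.
def test_loyal_customer_gets_ten_percent_off():
    assert price(100, customer_type="loyal") == 90.00

def test_bulk_order_gets_five_percent_off():
    assert price(1000) == 950.00

# 3. An interesting-and-illuminating diversion: how do the rules
#    combine? This is the example that teaches the programmer the
#    intended decision (discounts add; they don't compound).
def test_loyal_bulk_order_discounts_add():
    assert price(1000, customer_type="loyal") == 850.00
```

Run with pytest or any runner that collects `test_*` functions; the sequence, not any single test, is what carries the teaching.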
August 19th, 2008 at 5:16 pm
“acceptance-test driven design (ATDD) is having more trouble getting traction. Why not?”
Maybe it works better in theory than in practice?
August 20th, 2008 at 7:32 am
It’s important to know why something doesn’t work.
Also: sometimes it does work in practice.
August 20th, 2008 at 9:54 am
Where it has worked for me:
- when it is used for communication (1)
Where it has failed for me:
- when it is used for testing (2)
Paradoxically, in case (1), the tests were more valuable for testing too.
August 20th, 2008 at 8:45 pm
[…] Brian Marick weighs in on potential issues with using acceptance test-driven design even where unit-level test-driven development (TDD) is conducted here. […]
September 28th, 2008 at 1:55 pm
I’m discovering that we have to view agreeing on and implementing automated acceptance tests as just another strand of development on our project. You have to design the implementation of your tests, and approach their development as “more programming” that needs to be done to deliver a story. Typically, the complexity and effort of automating these tests easily matches, and often outweighs, that of implementing the story itself. ATDD is hard work.
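To give a feel for what “designing the implementation of your tests” can mean, here is a minimal sketch of one common shape for it: the acceptance test stays in domain language, and a driver layer owns the mechanics of poking the system. The store example, the `StoreDriver` name, and the in-memory fake are all illustrative assumptions, not a description of the commenter’s project.

```python
# Layered acceptance-test design: the test reads as domain language,
# and the driver is the only code that knows *how* to reach the system.

class StoreDriver:
    """Driver layer. On a real project this might speak HTTP or drive a
    browser; here it wraps an in-memory fake so the sketch runs standalone."""

    def __init__(self):
        self._cart = []

    def add_to_cart(self, item, price):
        self._cart.append((item, price))

    def checkout_total(self):
        return sum(price for _item, price in self._cart)

# The acceptance test itself stays free of automation mechanics.
def test_checkout_totals_the_cart():
    store = StoreDriver()
    store.add_to_cart("book", 25.00)
    store.add_to_cart("pen", 5.00)
    assert store.checkout_total() == 30.00
```

The fixture work all lives in the driver, where it can be designed, refactored, and reviewed like any other code, which is exactly the “more programming” described above.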
But it’s ultimately worth it if you weigh it against the cost of all the unnecessary rework you’ll probably avoid.
It can be a hard sell to management when schedules are slipping, though. And programmers usually HATE writing fixtures, because they don’t see it as “real” programming.
That seems to be changing, though. And I see the “developer-tester” becoming a very highly sought-after professional.
October 30th, 2008 at 2:46 am
[…] In his blog entry “Barriers to acceptance-test driven design”, Marick sums up the most important start-up problems working against the use of acceptance tests […]