At the AA Functional Test Tools workshop, we had a little session devoted to this question: Even where “ordinary” unit test-driven design (UTDD) works well, acceptance-test-driven design (ATDD) has more trouble getting traction. Why?
Programmers miss the fun / aha! moments / benefits that they get from UTDD.
- Especially, there is a difference in scope and cadence of tests. (“Cadence” became a key word people kept coming back to.)
- Laborious fixturing, which doesn’t feel as valuable as “real programming”.
- ATDD gives them no insight into the structure of the system.
Business people don’t see the value (or ROI) of ATDD.
- There’s no value in it for them personally (as opposed, perhaps, to the business).
- They are not used to working at that level of precision.
- They have no time.
- They prefer rules to examples.
- Tests are not replacing traditional specs, so they’re extra work.
There is no “analyst type” or tester/analyst to do the work.
Or there is an analyst type, but their existence separate from the programmers leads to separate tools, and hence to general weakness and lack of coordination.
There’s no process/technique for doing ATDD, nothing like the one that exists for UTDD.
ATDD requires much more collaboration than UTDD (because the required knowledge and skills are dispersed among several people), but it is more fragile (because the benefit is distributed - perhaps unevenly - among those people).
Programmers can be overloaded with masses of analyst- or tester-generated examples. The analysts or testers need to be viewed as teachers, teaching the programmers what they need to know to make the right programming decisions. That means sequences of tests that teach: starting simple and illustrative, moving to the more complicated, with interesting-and-illuminating diversions along the way.
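A minimal sketch of what such a teaching sequence might look like, using an invented order-discount rule (the names `apply_discount` and the 5%-per-year-capped-at-25% rule are hypothetical, chosen only to illustrate the ordering of the examples, not any system discussed at the workshop):

```python
# Hypothetical sketch: a "teaching sequence" of acceptance-style examples
# for an invented discount rule: 5% off per loyalty year, capped at 25%.
def apply_discount(total, loyalty_years):
    rate = min(0.05 * loyalty_years, 0.25)
    return round(total * (1 - rate), 2)

# 1. Simple and illustrative: the everyday case introduces the rule.
assert apply_discount(100.00, 1) == 95.00

# 2. More complicated: the example that reveals the cap.
assert apply_discount(100.00, 10) == 75.00

# 3. An illuminating diversion: the boundary case of a brand-new customer.
assert apply_discount(100.00, 0) == 100.00
```

The point is the ordering, not the code: each example exists to teach the programmer one new thing about the rule, rather than arriving as an undifferentiated pile.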