Barriers to acceptance-test driven design

At the AA Functional Test Tools workshop, we had a little session devoted to this question: Even where “ordinary” unit test-driven design (UTDD) works well, acceptance-test driven design (ATDD) is having more trouble getting traction. Why not?

My notes:

  1. Programmers miss the fun / aha! moments / benefits that they get from UTDD.

    1. In particular, there is a difference in the scope and cadence of the tests. (“Cadence” became a key word people kept coming back to.)
    2. Laborious fixturing, which doesn’t feel as valuable as “real programming”.
    3. No insight into structure of system.
  2. Business people don’t see the value (or ROI) from ATDD

    1. there’s no value for them personally (as opposed, perhaps, to value for the business)
    2. they are not used to working at that level of precision
    3. no time
    4. they prefer rules to examples
    5. tests are not replacing traditional specs, so they’re extra work.
  3. There is no “analyst type” or tester/analyst to do the work.

  4. There is an analyst type, but their separate existence (from the programmers) leads to separate tools, and hence to general weakness and a lack of coordination.

  5. There’s no process/technique for doing ATDD comparable to the one that exists for UTDD.

  6. ATDD requires much more collaboration than UTDD (because the required knowledge and skills are dispersed among several people), but it is more fragile (because the benefit is distributed - perhaps unevenly - among those people).

  7. Programmers can be overloaded with masses of analyst- or tester-generated examples. Analysts or testers need to be viewed as teachers, teaching the programmers what they need to know to make the right programming decisions. That means sequences of tests that teach, moving from simple-and-illustrative to more complicated, with interesting-and-illuminating diversions along the way, etc. (A rough sketch of one such sequence, and of the fixture code behind it, follows these notes.)
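
To make items 1.2 and 7 concrete, here is a rough sketch of what the fixture code and a “teaching” test table might look like. It assumes the FIT framework (fit.ColumnFixture); the ShippingCalculator class, the column names, and the numbers are invented for illustration, not taken from the workshop.

    // Hypothetical production class, included only so the sketch is self-contained.
    class ShippingCalculator {
        static double chargeFor(double orderTotal, boolean rushOrder) {
            if (orderTotal == 0.0) return 0.0;      // nothing to ship
            return rushOrder ? 7.00 : 2.00;         // flat charge, higher for rush orders
        }
    }

    // FIT-style fixture: public fields receive the input cells of a table row,
    // public methods compute the cells the table checks.
    public class ShippingChargeFixture extends fit.ColumnFixture {
        public double orderTotal;     // input column "orderTotal"
        public boolean rushOrder;     // input column "rushOrder"

        public double charge() {      // checked column "charge()"
            return ShippingCalculator.chargeFor(orderTotal, rushOrder);
        }
    }

The table itself would then be ordered the way point 7 suggests: the simplest illustrative row first, one new idea per row, and an interesting diversion near the end.

    orderTotal | rushOrder | charge()
    10.00      | false     | 2.00       (simplest case)
    10.00      | true      | 7.00       (adds the rush surcharge)
    0.00       | true      | 0.00       (diversion: an empty order ships for free)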

6 Responses to “Barriers to acceptance-test driven design”

  1. Jonathan Kohl Says:

    “acceptance-test driven design (ATDD) is having more trouble getting traction. Why not?”
    Maybe it works better in theory than in practice?

  2. Brian Marick Says:

    It’s important to know why something doesn’t work.

    Also: sometimes it does work in practice.

  3. Kevin Lawrence Says:

    Where it has worked for me:
    - when it is used for communication (1)

    Where it has failed for me:
    - when it is used for testing (2)

    Paradoxically, in the (1) case, they were more valuable for testing too.

  4. GettingAgile.com » Blog Archive » Barriers to Acceptance Test-driven Design Says:

    […] Brian Marick weighs in on potential issues with using acceptance test-driven design even where unit-level test-driven development (TDD) is conducted here. […]

  5. Jason Gorman Says:

    I’m discovering that we have to view agreeing on and implementing automated acceptance tests as just another strand of development on our project. You have to design the implementation of your tests, and approach their development as “more programming” that needs to be done to deliver a story. Typically, the complexity and effort of automating these tests easily matches - and often outweighs - that of implementing the story itself. ATDD is hard work.

    But it’s ultimately worth it if you weigh it against the cost of all that unnecessary rework you’ll probably avoid.

    It can be a hard sell to management when schedules are slipping, though. And programmers usually HATE writing fixtures, because they don’t see it as “real” programming.

    That seems to be changing, though. And I see the “developer-tester” becoming a very highly sought after professional.

  6. Jens Coldeweys Blog » Blog Archive » Hürden gegen Akzeptanztest-getriebene Entwicklung Says:

    […] In his blog entry “Barriers to acceptance-test driven design”, Marick summarizes the most important start-up problems standing in the way of using acceptance tests […]
