Interview: alternatives to automated acceptance-test driven design

As part of his programming tour, Corey Haines is interviewing the people he pairs with. He’s posted part 1 of his interview with me. If you’re interested in my life story—and who wouldn’t be?—start at the beginning. If you only want to hear about problems with automated acceptance testing (of the Fit sort) and the solution I’d like to explore, start at 5:54.

One note: my explanation of the solution isn’t clear enough about the role of unit tests. What I want to explore is:

good automated unit tests + good exploratory testing - automated acceptance tests.

(In later parts of the interview, I believe I qualify that to be:

… - most automated acceptance tests.

There’s still a role for sanity-checking tests, for example.)

The whole thing is about 24 minutes long.

3 Responses to “Interview: alternatives to automated acceptance-test driven design”

  1. Pete Dignan Says:

    Brian - it seems like there are a bunch of questionable filters and assumptions behind your dismissal of ATDD. For example, Fit and FitNesse are not the only way to do automated acceptance testing on agile projects, yet that’s all you refer to. Each tool has its own advantages and disadvantages; you seem to bind the particular disadvantages of Fit to automated acceptance testing in general. Another example - you neglect the difference in impact when a team loses an experienced tester who has been doing primarily exploratory, vs. primarily automated, acceptance testing. The automated tests remain as a useful asset. Also - regarding your ‘requirements on the whiteboard’ illustration - why not ‘whiteboard’ them in something like RSpec/Cucumber, so they can also be executable?
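
    For instance (a purely hypothetical sketch; the checkout feature and its steps are invented for illustration, not taken from any real project), a whiteboard-level requirement captured as a Cucumber feature might look like:

        # Hypothetical example; any domain would do.
        Feature: Checkout
          In order to pay for their items,
          customers need a working checkout flow.

          Scenario: Paying with a valid credit card
            Given a cart containing one item priced at $10
            When the customer checks out with a valid credit card
            Then the order is marked as paid
            And a receipt is shown

    Each Given/When/Then line would then be backed by a small step definition, which is what makes the “whiteboard” executable.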

    Full disclosure - I’m the CEO of WatirCraft, the company founded this year to provide commercial support for Watir, so I’m clearly a believer in automated functional, regression and acceptance testing for agile teams.

  2. Ron Jeffries Says:

    I have had good results with fully automated acceptance tests. We grew our own framework, and our system was not at all GUI intensive.

    I have also seen many teams have difficulty with automated acceptance tests; they find Fit/FitNesse to be intense pains in the …

    For me, it is obvious that automating acceptance tests would be good if you could do it at a reasonable price, and it is becoming quite clear that many people cannot do so. It would be nice if someone would find a better way.

  3. Brian Marick Says:

    Pete: I grant that I’m taking an extreme position to see how it plays out.

    I’ve had the same experience with my own hand-crafted tests, which are “written” using a drawing tool (described here: http://www.exampler.com/blog/2007/07/13/graphical-workflow-tests-for-rails/). I suspect the particular strategy or tool doesn’t make much of a difference for some of my problems: such tests don’t have the same beneficial “Aha!” effect on the large-scale architecture that unit tests have on the small scale; they don’t cause you to have insights while coding; and they give feedback late.

    I would whiteboard on an actual whiteboard instead of in a testing language because the whiteboard gives you a much more expressive language: you can write words, draw tables, draw circles, sketch screen mockups, and so on. Whiteboards also offer more affordances: people can gather around them and work together more easily than they can even with a laptop screen projected on the wall.

    Good point about losing an exploratory tester. In general, I think the right unit of analysis is the team+code+other artifacts. That is, you can’t just replace the team and expect decent performance. It’s prudent to do replacements one person at a time, with the rest of the team indoctrinating the newcomer. (Pairing does let Agile teams absorb people faster than regular teams, I think, but still… prudence.) However, because of her specialization, an exploratory tester is going to take more knowledge away with her.
