Exploration Through Example

Example-driven development, Agile testing, context-driven testing, Agile programming, Ruby, and other things of interest to Brian Marick

Wed, 27 Aug 2003

Agile testing directions: technology-facing programmer support

Part 3 of a series
The table of contents for the series is at the end of this post.

As an aid to conversation and thought, I've been breaking one topic, "testing in agile projects," into four distinct topics. Today I'm writing about how we can use technology-facing examples to support programming.

One thing that fits here is test-driven development, as covered in Kent Beck's book of the same name, David Astels's more recent book, and forthcoming books by Phlip, J.B. Rainsberger, and who knows who else. I think that test-driven development (what I would now call example-driven development) is on solid ground. It's not a mainstream technique, but it seems to be progressing nicely toward becoming one. To use Geoffrey Moore's term, I think it's well on its way to crossing the chasm.

(Note: in this posting, when I talk of examples, I mean examples of how coders will use the thing-under-development. In XP terms, unit tests. In my terms, technology-facing examples.)
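For readers who haven't watched it done, here is a minimal sketch of one such example, written with Ruby's Test::Unit. Everything in it - the Account class, the OverdraftError - is hypothetical, defined inline only so the sketch can run on its own.

```ruby
require 'test/unit'

# A hypothetical thing-under-development, present only so the
# examples below are runnable.
class OverdraftError < StandardError; end

class Account
  attr_reader :balance

  def initialize(balance)
    @balance = balance
  end

  def withdraw(amount)
    raise OverdraftError if amount > @balance
    @balance -= amount
  end
end

# Technology-facing examples: they record, in machine-checkable form,
# how a coder expects to use Account. Written first, they guide the
# code; retained, they detect unexpected changes.
class AccountExamples < Test::Unit::TestCase
  def test_withdrawal_reduces_the_balance
    account = Account.new(100)
    account.withdraw(30)
    assert_equal(70, account.balance)
  end

  def test_overdrafts_are_refused
    account = Account.new(100)
    assert_raise(OverdraftError) { account.withdraw(150) }
  end
end
```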

Put another way, example-driven development has moved from being what Thomas Kuhn called "revolutionary science" to what he called "normal science". In a normal science, people expand the range of applicability of a particular approach. So we now have people applying EDD (sic) to GUIs, figuring out how it works with legacy code, discussing good ways to use mock objects, having long discussions about techniques for handling private methods, and so forth.

Normal science is not the romantic side of science; it's merely where ideas turn into impact on the world. So I'm glad to see we're there with EDD. But normality also means that my ideas for what I want to work on or see others work on... well, they're not very momentous.

  • I hope future years will see more people with a mixture of testing and programming skills being drawn to Agile projects. Those people will likely be neither as good at testing as pure testers nor as good at programming as pure programmers, but that's OK if you believe, as I do, that Agile projects do and should value generalists over specialists.

    I'm one such mixture. I've done limited pair programming with "pure programmers". When I have, I've noticed there's a real tension between the desire to maintain the pacing and objectives of the programming and the desire to make sure lots of test ideas get taken into account. I find myself oscillating between being in "programmer mode" and pulling the pair back to take stock of the big picture. With experience, we should gain a better idea of how to manage that process, and of what kinds of "testing thinking" are appropriate during coding.

    There might also be testers on the team who do not act as programmers. Nevertheless, some of them do pair with programmers to talk about the unit tests (how the programmers checked the code). The programmers learn what kinds of bugs to avoid, and the testers learn about what they're testing. For some reason, Calgary, Canada, is a hotbed of such activity, and I look to Jonathan Kohl, Janet Gregory, and others to teach us how to do it well.

    I want to emphasize that this is all about people. Testers traditionally have an arms-length (or oceans-length) relationship to programmers. For the programmer-support half of the matrix, that relationship is, I believe, inappropriate.

  • I've been using the phrase "checked examples" for programmer-support tests. We can split that idea in two. There are new examples that guide decisions about what to do next. And there are automated examples that serve as change detectors to see whether what you just did was what you expected to do.

    The common habit is that the change detectors are merely retained code-guiding examples. (You make your unit test suite by saving, one by one, the tests you write as you code.) That's not a logical necessity. I'd like to develop some lore about when to do something else.

    For example, consider this maintenance scenario: you develop some code example-first. A month later, someone adds a new example and changes the code to match. Many prior examples for that hunk of code become "bad examples" (the tests fail, but because they're now wrong, not because the code is). The tendency is to fix those examples so that the resulting suite is essentially the same as if no example had ever broken. What I mean is that the left sequence of events in the table below is expected to yield the same tests as the right. (Read the left column top to bottom, then the right.)

    | What actually happened | The idealized history it should match |
    | --- | --- |
    | Example foo written | Example bar written |
    | Code written to match foo | Code written to match bar |
    | Example bar written (foo invalidated) | Example better-foo written (bar is still a good example) |
    | Code changed to match bar - oops, now foo doesn't check out | Code changed to match better-foo (and bar continues to check out) |
    | Update foo to be better-foo | |

    That is, newly broken examples are rewritten to match an ideal sequence of development in which no example ever needed to be rewritten. But why? In the left column above, example better-foo is never used to drive development - it's only for checking. What's optimal for driving development might not be optimal for checking.

    Let me be concrete. Suppose that software systems develop shearing layers: interfaces that naturally don't change much. For maintainability, it might make sense to migrate broken examples to shearing layers when fixing them. Instead of being an example about a particular method in a particular class, the fixed example becomes an example of using an entire subsystem. That can be bad - think about debugging - but it reduces the maintenance burden and could even provide the side benefit of thoroughly documenting the subsystem's behavior. (A sketch of such a migration follows this list.)

    I'm hoping that people who distinguish the two roles - guiding the near future and rechecking the past - will discover productive lore. For example, when might it be useful to write technology-facing change detectors that never had anything to do with guiding programming?
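To make the shearing-layer idea concrete, here's a rough Ruby sketch of the migration described above. The Tokenizer class and the Calculator facade are hypothetical, invented for illustration. The first example is pinned to one method of one class; the second restates the same intent against the subsystem's stabler outer interface, where it lives on purely as a change detector.

```ruby
require 'test/unit'

# Hypothetical subsystem internals, defined so the sketch runs.
class Tokenizer
  def tokens(text)
    text.split
  end
end

# The subsystem's outer, slow-to-change interface - a shearing layer.
class Calculator
  def evaluate(text)
    Tokenizer.new.tokens(text).inject(0) { |sum, t| sum + t.to_i }
  end
end

# Before migration: an example about one method of one class. It once
# guided the writing of Tokenizer, but it breaks whenever that class's
# interface changes, even if the subsystem as a whole still works.
class TokenizerExamples < Test::Unit::TestCase
  def test_tokens_are_split_on_whitespace
    assert_equal(%w{1 2 3}, Tokenizer.new.tokens("1 2 3"))
  end
end

# After migration: the same underlying intent, restated against the
# shearing layer. It never guided a coding decision; it exists only
# to detect change (and to document the subsystem's behavior).
class CalculatorChangeDetectors < Test::Unit::TestCase
  def test_whitespace_separated_numbers_are_summed
    assert_equal(6, Calculator.new.evaluate("1 2 3"))
  end
end
```

The trade-off shows up immediately: when CalculatorChangeDetectors fails, you know the subsystem broke, but not where.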

I said above that test-driven development is "one thing that fits" today's topic. What else fits? I don't know. And is EDD the best fit? (Might there be a revolution in the offing?) I don't know that either - I'll rely on iconoclasts to figure that out. I'm very interested in listening to them.

Posted at 15:22 in category /agile


Agile Testing Directions
Introduction
Tests and examples
Technology-facing programmer support
Business-facing team support
Business-facing product critiques
Technology-facing product critiques
Testers on agile projects
Postscript
