Exploration Through Example

Example-driven development, Agile testing, context-driven testing, Agile programming, Ruby, and other things of interest to Brian Marick

Sun, 23 May 2004

No wasted motion in tests

There are a couple of catch phrases in programming: "intention-revealing names" and "composed method". (I think they're both from Beck's Smalltalk Best Practice Patterns, but this plane doesn't have wireless. Imagine that.) The combined idea is that a method should be made of a series of method calls that are all at the same level of abstraction and that also announce clearly why they matter. A good idea.

In my travels, I don't find that tests follow those rules. Tests too often contain superfluous text: it is (or seems) necessary to make the silly things run, but it obscures the tests' intent. Let me give you an example. It's a test I wrote. It's about when students and caretakers in a veterinary clinic perform certain tasks.

Orders are given to caretakers and students. Some orders depend on whether the animal's in intensive care or not.

new case
Rankin brings in a cow | severe mastitis | intensive care

subjective objective assessment and plan
student does   | student monitors temperature | 6 hours | because in intensive care
caretaker does | no one milks - milking has to be ordered.
student does   |
caretaker does | 12 hours
etc. etc.

Now, this test will drive most any programmer toward a state machine. (The complete test has more complicated state dependencies than you see.) The problem is that it's very hard to tell whether all the relevant sequences of orders have been covered. The relationship between sequences of clinician orders and worker actions is obscured.
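To make that concrete: a step-by-step test like the one above tends to get implemented as event handlers that accumulate state. The sketch below is my own illustration (the class and method names are invented, not from the clinic application):

```ruby
# Hypothetical sketch: each line of the step-by-step test becomes an
# event, and each event's handler encodes which worker tasks follow.
class CaseRecord
  attr_reader :tasks

  def initialize
    @in_intensive_care = false
    @tasks = []
  end

  def intensive_care!
    @in_intensive_care = true
    # The "because in intensive care" order from the test:
    @tasks << [:student, "monitor temperature", "6 hours"]
  end

  def order_milking(interval)
    @tasks << [:caretaker, "milking", interval]
  end
end

record = CaseRecord.new
record.intensive_care!
record.order_milking("12 hours")
record.tasks.each { |who, task, interval| puts "#{who} does #{task} every #{interval}" }
# prints:
#   student does monitor temperature every 6 hours
#   caretaker does milking every 12 hours
```

In this shape, the sequences of orders are buried in whatever order the events happen to be sent, which is exactly why auditing coverage is hard.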

I claim the following tables are better:

possible caretaker tasks are |
possible student tasks are   | milking, soap, temperature

when does caretaker milk
when clinician orders or records | milking            | 12 hours
when clinician orders or records | milking, discharge |
when clinician orders or records | milking, death     |

death always has same effect on orders as discharge does

I worry that might not be completely clear. Like many descriptions, it depends on previous domain knowledge. For example, the domain expert and I had a lot of discussion about the difference between a clinician "ordering" something and "recording" something. At this point, there's no difference as far as the program's concerned; but there is a clear distinction in the expert's mind, making it worthwhile to preserve both terms. So the very last line says, "check that when a clinician orders milking and then later records a death, the caretaker never milks that (dead) cow."
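As a reading aid only (the names here are mine, not the clinic application's), that last row amounts to a rule like this:

```ruby
# Hypothetical sketch of the table's last row: recording a death cancels
# a standing milking order, exactly as a discharge would.
class MilkingSchedule
  def initialize
    @order_stands = false
  end

  # "Orders" and "records" are deliberately treated the same way,
  # since the program currently makes no distinction between them.
  def clinician_event(event)
    case event
    when :milking           then @order_stands = true
    when :discharge, :death then @order_stands = false
    end
  end

  def caretaker_milks?
    @order_stands
  end
end

schedule = MilkingSchedule.new
schedule.clinician_event(:milking)
schedule.clinician_event(:death)
schedule.caretaker_milks?  # => false: no one milks the dead cow
```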

It is, I think, easier to check completeness in the latter table because it encourages systematic thinking:

  • what's the starting state?
  • what could cause the caretaker to milk?
  • what could cause the caretaker to stop?

That's not to say that the first test was useless. By discussing tasks with reference to the flow of events in a real medical case, I was encouraged to learn and talk about the domain. I learned things (like that ordering and recording have the same effect). So the first test was a good starting point for conversation, but it was not a good summary of what was learned. Nevertheless, it seems to me that people get trapped into picking one test format and sticking to it too long. Step-by-step formats seem particularly sticky.

I'm not sure why we end up that way, but I have two speculations.

(1) It seems that there's often a division of labor. One person writes the test (perhaps a business expert, more often a tester), and another person implements the "fixturing" that makes the test executable. The problem is that a new table format that helps the tester causes more work for the programmer. Given the usual power imbalance in a project - a programmer's time is more valuable than a tester's - reusing old and mis-fitting fixtures is the natural consequence. (I should note that this new table format was actually quite simple - only one support method was more than a couple of lines of obvious code - but I initially hesitated because it looked different enough that it seemed it must be more work than that.)

(2) The testing tradition is one of implementing tests to find bugs, not one of discovering the right language in which to express a problem. The Lisp ethic of devising little, problem-specific languages is missing. It's not intuitive behavior; it's learned. And that approach - and its power - haven't been learned yet amongst test-writers.

But I think we need to instill a habit in (at least) business-facing test writers that says that both repetition and verbiage that obscures the intent of the test are bad, are signs that something's amiss.

Posted at 21:33 in category /fit

Speaking their language

So here I am in the Salt Lake City airport. I just finished a couple of days in support of a redesign of the Agile Alliance web site, the aim being to make it more supportive of people trying to sell Agile to executive sponsors.

The people we interviewed brought up a couple of interesting points. One is the need for the whole organization (marketing, etc.) to change in order to take advantage of more capable software development. Otherwise, the benefits of Agile get dissipated by impedance mismatch.

Another was the perennial catchphrase that agile advocates "need to talk the executive's language".

One chance utterance of the latter made me flash to Galison's Image and Logic: A Material Culture of Microphysics, which is all about how scientific subcultures adjust to each other. He uses the metaphor of a "trading zone" between subcultures, in which they communicate through restricted languages that he likens to pidgins and creoles.

Galison is not saying that Wilson (who invented the cloud chamber) didn't speak English to the theorists who used his results. He's saying that they used a restricted vocabulary and invented specialized communication devices like diagrams. Those devices meant something different to each party, but they allowed detailed coordination without requiring anyone to agree on global meaning.

Moreover Galison claims his scientists used objects in particular ways: "... it is not just a matter of sharing objects between traditions but the establishment of new patterns of their use [...] I will often refer to wordless pidgins and wordless creoles: devices and manipulations that mediate between different realms of practice in the way that their linguistic analogues mediate between subcultures in the trading zone." (p. 52)

I hope you can see where I'm going with this. It's not that "we" need to speak "their" language: it's that both groups need to learn a new language that works for our joint purposes. That'll be especially true as executive sponsors see the agile team as a responsive tool they can wield flexibly toward their ends.

Obedient reader that I am, I'm not peeking ahead to Galison's big summary chapter. First, a further 400 pages of exhaustive details about bubble chambers and the like. So any summary of what thinking tools Galison offers us will have to wait. In the meantime, I should point to last year's writeup of Star and Griesemer's boundary objects. Galison's ideas are close to theirs. He's more explicit about the mechanisms of language, and he expands the focus from just objects (perhaps abstract) to include procedures and acts of interpretation.

Posted at 21:27 in category /agile

About Brian Marick
I consult mainly on Agile software development, with a special focus on how testing fits in.

Contact me here: marick@exampler.com.



