Exploration Through Example

Example-driven development, Agile testing, context-driven testing, Agile programming, Ruby, and other things of interest to Brian Marick

Thu, 15 Jan 2004

A new science metaphor for testing

On p. 166 of Laboratory Life, Bruno Latour and Steve Woolgar discuss how a group of researchers armored their factual claims against objection.

Parties to the exchange thus engaged in manipulating their figures [L&W don't mean that in the dishonest sense], assessing their interpretation of statements, and evaluating the reliability of different claims. All the time they were ready to dart to a paper and use its arguments in an effort not to fall prey to some basic objection to their argument. Their logic was not that of intellectual deduction. Rather, it was the craft practice of a group of discussants attempting to eliminate as many alternatives as they could envisage. [Italics mine.]

One common metaphor for software testing is drawn from the description of science most associated with Karl Popper. A theorist proposes a theory. Experimentalists test it by seeing if its consequences can be observed in the world. If the theory survives many tests, it is provisionally accepted. No theory can ever be completely confirmed; it can only be not refuted.

There's a natural extrapolation to testing: the programmers propose a theory ("this code I've written is good") and the testers bend their efforts toward refuting it.

I find both the science story and the testing story arid and disheartening: a clash of contending intellects, depersonalized save for flashes of "great man" hero worship. ("He can crash any system." "Exactly two bugs were found in his code in five years of use.")

Meanwhile, in Latour and Woolgar's book, a team is working together to create an artifact - a paper to submit - that's secure against as many post-submittal attacks as they can anticipate.

For a variety of reasons, I think that's a better metaphor for testing. Testers and programmers work together to create an artifact - a product to release - that's secure against as many post-delivery attacks as they can anticipate. Here, an "attack" is any objection to the soundness of the product, any statement beginning "You should have done..." or "It's not right because...".

Consequences?

  • Just as scientists both review drafts and help each other in the writing, it's natural for testers to both test after and test first.

  • We needn't cling to a harsh separation of roles. In my wife's lab, there are no pure critics. Everyone does experiments, everyone critiques drafts, everyone collaborates on drafts. Could the same kind of thing happen in software? I suspect not. The consumers of my wife's papers are researchers like her. That makes it easy for people like her to anticipate the attacks of people like her. In software, the consumers are different from the producers, which complicates things. Still, the theorist/experimenter analogy makes a split between testers and programmers fundamental and, in a sense, unquestionable. I'd rather see it as undesirable, something to minimize.

  • The Popperian metaphor allows testers to think their responsibility is only to find bugs. They succeed if they find bugs. But many products also need to be malleable. They need to survive an unending series of "attacks" in the form of feature requests. Since testers have no stake in making the product malleable (though they do have a stake in making their automated tests malleable), they will not help the team succeed in those terms and they are likely to decrease malleability. In the Latour/Woolgar metaphor, testers share responsibility for armoring the product against all objections to its transcendental goodness. So they're more likely to push for a product-wide optimum than merely a testing-wide one.

Posted at 07:59 in category /testing
