Exploration Through Example

Example-driven development, Agile testing, context-driven testing, Agile programming, Ruby, and other things of interest to Brian Marick

Wed, 12 Apr 2006

Tests and specifications

I'm not one to quibble over definitions. If someone points at something that's obviously a cow and says "deer", I usually don't argue the point. While we're arguing about what it is we're about to feed, the poor beast will starve.

Still, it creeps me out when people refer to tests (aka examples) as specifications. There's an important distinction:

A specification describes a correct program, while a test provokes a correct program.

In math geek terms, specifications are universally quantified statements, ones of the form "for all inputs such that <something> is true of them, <something else> is true of the output." Tests are constant statements, ones with no variables.* They look like this: "given input 5, the output is 87."

This matters because, while both kinds of statements can be true or false, the only way to deduce the truth of a universally quantified statement from a set of constant statements is to exhaustively list all possible inputs. That's rarely possible.
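To make the quantifier talk concrete, here's a minimal Ruby sketch (the function `f`, the property, and the toy domain are mine, for illustration): two constant statements, then the only route from constant statements to a universal one — exhaustive enumeration, feasible here only because the domain is tiny.

```ruby
def f(x)
   x * x
end

# Constant statements: "given input 5, the output is 25."
raise "test failed" unless f(5) == 25
raise "test failed" unless f(2) == 4

# A universally quantified statement: "for all x such that x is in
# the domain, f(x) is non-negative." Deducing it from constant
# statements means checking every input -- possible only because
# we restricted the domain to a few hundred values.
domain = -100..100
puts domain.all? { |x| f(x) >= 0 }   # true
```

Real input domains are rarely this obliging, which is the whole point of the paragraph above.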

To make the point concrete, a set of tests allows the programmer to write this:

if (the input is that of test 1)
   make the output what test 1 expects
elsif (the input is that of test 2)
   make the output what test 2 expects
...
else
   do something for all remaining cases

Given that code, the tests say absolutely nothing about the correctness of the something that's done for all remaining cases.
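Here's that pseudocode made into runnable Ruby (the sentinel value -999 is my invention, standing in for whatever "something" the else branch does). Both tests pass; neither says a word about the third call.

```ruby
def f(x)
   if x == 1
      1            # make the output what test 1 expects
   elsif x == 2
      4            # make the output what test 2 expects
   else
      -999         # do something for all remaining cases
   end
end

raise "test failed" unless f(1) == 1   # test 1 passes
raise "test failed" unless f(2) == 4   # test 2 passes
puts f(3)                              # -999: nothing constrains this
```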

Absurd example? An employee of a beltway bandit once told me his project had done exactly that. Proudly told me, no less.

But let's pretend we live in an ethical culture. There, the tests combine with certain habits and memories to provoke particular actions. Consider a programmer faced with two tests:

assert_equal(1, f(1))
assert_equal(4, f(2))

Those tests could be passed with this code:

def f(x)
   case x
   when 1 then 1
   when 2 then 4
   # default?
   end
end

But a programmer who's been raised well has a fastidious distaste for the case statement and its cousin if, a habit of leaping to abstractions, a learned distrust of incomplete enumerations of cases, and a keen nose for any whiff of duplication. So she will want to change that code. She will further have a memory that the whole point of it all is to square numbers (though the person who told her that was kind of vague on what it means to "square" a number). So she will leap to change that code to this:

def f(x)
   x * x
end

The assertions themselves are like two pebbles rolling downhill. Whether they start an avalanche depends on what they roll into: the hill has to be ready. For the test-driven, the avalanche is a procedural assurance that the program computes x * x for all x, not a logical one.

That's why I don't like calling sets of tests a specification. In practical terms, I don't like it because it always, always, always leads to someone making the argument about universal quantification vs. tests or quoting Dijkstra to the effect that "program testing can be used very effectively to show the presence of bugs but never to show their absence." The ensuing discussion is rarely, in my opinion, a good use of time. So what am I doing having it?

 

* Actually, a test statement can be seen as having variables, being a quantified statement like this:

For all a, b, c, ..., x, y, z: given input 5, the output is 87

where each of the variables is something you hope is irrelevant to the output. The trick is to capture all the relevant variables, pin them down, and feed them into the process.
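A small Ruby sketch of that footnote (the global `$rounding_mode` is a hypothetical stand-in for any hidden state a test doesn't mention): one of the variables you hoped was irrelevant turns out to matter, so the "constant" statement only holds because something offstage happens to be pinned down.

```ruby
# Hidden state the assertion never mentions -- one of the
# "a, b, c, ..." quietly quantified over.
$rounding_mode = :floor

def f(x)
   r = x * x + 0.5
   $rounding_mode == :floor ? r.floor : r.ceil
end

puts f(5)   # 25 -- but only because :floor happens to be in force
```

Flip `$rounding_mode` to `:ceil` and the same "given input 5" assertion fails, which is what capturing and pinning the relevant variables is meant to prevent.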

Posted at 13:08 in category /agile

About Brian Marick
I consult mainly on Agile software development, with a special focus on how testing fits in.

Contact me here: marick@exampler.com.

 
