Exploration Through Example

Example-driven development, Agile testing, context-driven testing, Agile programming, Ruby, and other things of interest to Brian Marick

Thu, 25 May 2006

Notes toward integration testing (1)

Any time you write code that sits on top of a third party library, your code will hide some of its behavior, reveal some, and transform some. What are the testing and cost implications?
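To make "hide, reveal, transform" concrete, here's a sketch in Ruby. All the names are made up (Lib stands in for some third-party library); the point is only the shape of the wrapper, not any real API.

    # A made-up stand-in for a third-party library (LIB).
    module Lib
      def self.fetch(key, retries: 3, timeout: 30)
        # Pretend this talks to an external service.
        { value: key.to_s.upcase, fetched_at: Time.now }
      end
    end

    # USER code sitting on top of Lib. It hides the :timeout knob entirely,
    # reveals fetch more or less directly, and transforms the result
    # from a hash into a plain string.
    class UserFacade
      def lookup(key)
        Lib.fetch(key, retries: 5)[:value]
      end
    end

    puts UserFacade.new.lookup(:widget)   # => "WIDGET"

Tests of UserFacade only ever see the hidden and transformed version, so whatever Lib does with timeouts or extra fields never gets exercised through USER.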

By "cost implications," I mean this: suppose subsystem USER is 1000 lines of code that makes heavy use of library LIB, and NEW is 1000 lines that doesn't (except for the language's class library, VM, and the operating system). I think we all wish that USER and NEW would cost the same (even though USER presumably delivers much more). However, even if we presume LIB is bug free, we have to test the interactions. How much? Enough so that an equal-cost USER would be 1100 lines of unentangled code? 1500? 2000? It is conceivable that the cost to test interactions might exceed the benefit of using LIB, especially since it's unlikely we're making use of all of its features.

More likely, though, we'll under-test. That's especially true because I've never met anyone with a good handle on what we're testing for. Tell me about a piece of fresh code, and I can rattle off things to worry about: boundary conditions, plausible omissions, special values like nil or zero. I'm much worse at that when it comes to integrated code, and I think I'm far from alone.
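For instance, given a fresh little function (a made-up one, and assuming Test::Unit is handy), the worry-list almost writes itself:

    require 'test/unit'

    # Fresh code: the worrying cases are easy to enumerate.
    def chunk(items, size)
      return [] if items.nil? || size <= 0
      items.each_slice(size).to_a
    end

    class ChunkTest < Test::Unit::TestCase
      def test_boundary_conditions
        assert_equal [[1], [2]], chunk([1, 2], 1)   # smallest useful size
        assert_equal [[1, 2]],   chunk([1, 2], 2)   # size == length
        assert_equal [[1, 2]],   chunk([1, 2], 5)   # size > length
      end

      def test_special_values
        assert_equal [], chunk(nil, 3)              # nil
        assert_equal [], chunk([], 3)               # empty
        assert_equal [], chunk([1, 2], 0)           # zero
      end
    end

I don't know of a comparably mechanical worry-list for the ways USER might be leaning on LIB wrongly.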

The result of uncertain testing is a broken promise. Given test-driven design, bug reports should fall into two categories:

  1. Something that none of the driving tests covered. Most of those can fairly be classified as new or changed requirements. They can be estimated and scheduled in the normal way (presuming they're not so simple to fix that you just do them right away). These are more like new features than what most people mean by "bug," and seeing them shouldn't be cause for surprise or disappointment.

  2. A real bug. Everyone agrees that, given the tests driving the code, this previously untried example should have worked. But it doesn't. That's a surprise and a disappointment.

The TDD promise is that there should be few type 2 real bugs. But if we don't know how to test the integration of LIB and USER, there will be many of what I call fizzbin bugs: ones where the programmer fixing them discovers that, oh!, when you use LIB on Tuesday, you have to use it slightly differently.
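Here's the shape of a fizzbin bug, again in a made-up sketch (Lib stands for some imagined search library; no real API is being described). The example that drove match_count passes, so the code looks done. The surprise is buried in Lib, in a circumstance nobody's test exercised.

    # Lib's documented behavior is "search returns an array of matches" --
    # but (the fizzbin part) it returns nil, not [], when the term
    # contains a wildcard that matches nothing.
    module Lib
      def self.search(term)
        return nil if term.include?("*")   # the case nobody's test tried
        ["#{term}-1", "#{term}-2"]
      end
    end

    # USER code, plus the example that drove it.
    def match_count(term)
      Lib.search(term).size
    end

    raise "driving example failed" unless match_count("rake") == 2   # passes

    begin
      match_count("foo*")   # the first wildcard search in real use...
    rescue NoMethodError => e
      puts "fizzbin bug: #{e.message}"    # undefined method `size' for nil
    end

Given only the driving example, there was nothing to signal that the wildcard case needed a test at all.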

Since fizzbin bugs look just like real bugs to the product director or user, greater reuse can lead to a product that feels shaky. It seems to me I've seen this effect in projects that make heavy use of complex frameworks that the programmers don't know well. Everyone's testing as best they can, but end-of-iteration use reveals all kinds of annoyances.

I (at least) need a better way to think about these problems. More later, if I think of anything worth writing.

Posted at 07:54 in category /testing
