Exploration Through Example

Example-driven development, Agile testing, context-driven testing, Agile programming, Ruby, and other things of interest to Brian Marick

Wed, 24 Aug 2005

Still more on counterexamples

Prompted by conversations with Jonathan Kohl and John Mitchell, a bit more on counterexamples.

I now think that what I'm wondering about is team learning. I want to think more about two questions:

  • Say someone comes up with a counterexample, perhaps that one kind of user uses the product really differently. How is that integrated into the mindset of the team? That is, how does it become an example of an extended model of product use? (I fear too often it stays as an awkward, unintegrated counterexample.)

    Take the blocks world example. In Winston's work, he taught a computer to identify arches by giving it examples and counterexamples. (Eugene Wallingford confirms that the counterexamples were necessary.) In that world, an arch was two pillars of blocks with a crosspiece. The counterexamples included, if I remember correctly, arches without a top (just two pillars) and maybe a crosspiece balanced on a single pillar.

    It's fine and necessary for a researcher to teach a computer - or for a product owner to teach a development team - about already understood ideas like "arch". But it's even more fine when the process of teaching surprises the teacher with a new, useful, and more expansive understanding of the domain. I want more surprise in the world.

  • Is there a way to give counterexamples elevated importance in the team's routine action? So that it isn't exceptional to integrate them into the domain model?

    One thing testers do is generate counterexamples by, for example, thinking of unexpected patterns of use. What happens when those unexpected patterns reveal bugs? (When, in Bret Pettichord's definition of "bug", the results bug someone.) The bugs may turn into new stories for the team, but in my experience, they're rarely a prompt to sit down and think about larger implications.

    An analogy: that's as if the refactoring step got left out of the TDD loop. It is when the programmer acts to remove duplication and make code intention-revealing that unexpected classes arise. Without the refactoring, the code would stay a mass of confusing special cases.

    Sometimes - as in the Advancer example I cite so compulsively - the unexpected classes reflect back into the domain and become part of the ubiquitous language. So perhaps that reflection is one way to make incorporating counterexamples routine. We tend to think of the relationship between product expert and team as mainly one-directional, a relationship of master to apprentice: the master teaches the apprentice what she needs to know, and information about the domain flows from master to apprentice. There's a conversation, yes, but the apprentice's part in it is to ask questions about the domain, to explain the costs of coping with the domain in a certain way, to suggest cheaper ways of coping - but not to change the expert's understanding of the domain. Perhaps we should expect that last thing, too.

    Put another way: suppose we grant that a project develops its own creole - its own jargon - that allows the domain expert(s) and technical team to work effectively with each other. Something to keep casual track of would be how many nouns and verbs in the creole originated in the code.
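To make the mechanism concrete, here is a minimal Ruby sketch. All the class and method names are invented for illustration - they are not taken from the real Advancer code - but the shape is the one described above: two methods share duplicated date logic, and removing that duplication forces the programmer to name a class that could then flow back into the creole.

```ruby
require 'date'

# Before refactoring: two methods duplicate the same "skip ahead to the
# next business day" logic. (All names here are invented for illustration.)
class Scheduler
  WEEKEND = [0, 6] # Date#wday: 0 = Sunday, 6 = Saturday

  def payment_due_date(date)
    date += 1 while WEEKEND.include?(date.wday)
    date
  end

  def shipping_date(date)
    date += 1 while WEEKEND.include?(date.wday)
    date
  end
end

# After refactoring: removing the duplication extracts a class and forces
# a name onto the hidden concept. That name is now a candidate word for
# the team's creole.
class Advancer
  WEEKEND = [0, 6]

  def self.to_business_day(date)
    date += 1 while WEEKEND.include?(date.wday)
    date
  end
end

class RefactoredScheduler
  def payment_due_date(date)
    Advancer.to_business_day(date)
  end

  def shipping_date(date)
    Advancer.to_business_day(date)
  end
end
```

Without the refactoring step, the business-day rule would stay smeared across the special cases; with it, the code produces a noun the whole team can start using.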

Posted at 08:02 in category /ideas

More on video

In response to my note on Jim Shore's video, Chris McMahon points to HermesJMS, an open source tool for managing message queues. He says:

Configuring any such tool is a chore, but the HermesJMS author has included video on how to configure all of the options of the tool: check out the "Demos" links from the left side of the home page for a really elegant use of video to explain complex activity in a sophisticated tool.

I should also mention the video for Ruby on Rails. The speed with which things get done is much more apparent on video than it would be in text.

Posted at 08:02 in category /misc

About Brian Marick
I consult mainly on Agile software development, with a special focus on how testing fits in.

Contact me here: marick@exampler.com.

 
