Exploration Through Example

Example-driven development, Agile testing, context-driven testing, Agile programming, Ruby, and other things of interest to Brian Marick

Tue, 15 Jul 2003

The Agile Context (continued)

I'm on vacation near Boston, so naturally I decided to take Ken Schwaber's ScrumMaster training course. (In my own defense, tomorrow is the water park.) What's a ScrumMaster? The closest analogue in conventional projects is the manager, but the ScrumMaster has very different goals:

  • "Removing the barriers between development and the customer so the customer directly drives development;

  • "Teaching the customer how to maximize ROI and reach their objectives through Scrum;

  • "Improving the lives of the development team by facilitating creativity and empowerment;

  • "Improving the productivity of the development team in any way possible; and

  • "Improving the engineering practices and tools so that each increment of functionality is potentially shippable."

The thing that most appeals to me about Scrum is the way the ScrumMaster is totally devoted to the success of the development team. There are three people I would unhesitatingly accept as my manager. Ken is one. Johanna Rothman is another. My wife Dawn is the third.

In any case, I recommend the course, even if you - like me - doubt you'll ever be a ScrumMaster on a Scrum project. (I am not a person I'd unhesitatingly accept as my manager.) It's important to know about the different agile approaches, to do some compare and contrast.

Ken reminded me of two more additions to my list of Things Agilists Want to be True.

  • Written documentation is impoverished and slow compared to face-to-face communication. For software development, the advantages of written communication - permanence, replicability, etc. - are exaggerated. How many of those advantages can you do without? How can you attain them without dislodging face-to-face communication from its central role?

    When writing the above, bug reports leapt to my mind. We testers are greatly attached to the bug report as a written artifact. Many of us (including me) write and speak about the need to craft the writing well. For example, Cem Kaner's Bug Advocacy notes have some fantastic text about the importance of crafting a good subject line. The skills he teaches are essential in bug-heavy environments with contending political factions and testers on the periphery of attention. But do our bug-reporting habits serve us well in an agile context?

  • Iterations must deliver increments of potentially shippable, business-relevant functionality. When you do not tie project activities to that, you stand a great risk of succumbing to self-indulgence. Don't risk it.

## Posted at 18:24 in category /context_driven_testing

Mon, 30 Jun 2003

The agile context

The noble premise of context-driven testing is that the tester's actions should be tailored to a particular project. The premise of the Agile Alliance is that certain projects have enough in common that they deserve a common name: "agile". It follows that those common themes should drive the actions of context-driven testers on agile projects.

But how to describe that commonality is a vexing question. The Agile Manifesto is an early attempt, still the definitive one. But, in my own thinking, I find myself coming back to different themes, ones more related to personality and style than to values and principles.

Now, it's presumptuous of me to define Agility: although I was one of the authors of the Manifesto, I've always thought of myself as something of an outsider with a good nose for spotting a trend. So when I make pronouncements about Agility, I look for approving nods from those who I think get it more than I do. In recent weeks, I've gotten them from Michael Feathers, Christian Sepulveda, Jeremy Stell-Smith, Ward Cunningham, and Lisa Crispin.

Made bold by them, I present a partial and tentative list of what I'm calling Things Agilists Want to be True. I'm calling it that to avoid arguments about whether they are or are not true. Truth is irrelevant to whether those beliefs are part of the agile context.

  • A team of generalists trumps a team of specialists.

  • To get the best work out of programmers, protect them from distractions and interference, and let them do what they think is best. The Scrum literature is the clearest on this point.

  • Tests are a tool for guiding design and making change safe. They are only secondarily about finding bugs, primarily about facilitating steady, smooth progress. (Ward once said to me, apropos unit tests, "Maybe we shouldn't have used the word 'test'.") Although this belief is strongest in XP, I sense that it's becoming common agile knowledge. There's a small sketch of what I mean just after this list.

  • Every program contains a better, cleaner, more capable program that wants to get out. That better program is released when customers provide a well-paced stream of change requests to programmers who respect their craft.

  • As Lisa Crispin puts it: "On an XP team, if you ask for help, someone has to help you." At the Agile Development Conference, I was surprised by how often the word "trust" came up. It sure came up a heck of a lot more often than it does at testing conferences.
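
To make the test-guiding-design point concrete, here's a minimal sketch of my own (not anything from Ken's course): the test is written before the code it exercises, so writing it is what forces the interface decisions. The Discount class is hypothetical, invented only so the example runs under Ruby's classic Test::Unit.

    require 'test/unit'

    # Hypothetical class, written after (and shaped by) the test below.
    class Discount
      def initialize(percent)
        @percent = percent
      end

      def apply_to(price)
        price - (price * @percent / 100.0)
      end
    end

    class DiscountTest < Test::Unit::TestCase
      # Writing this first forces a design decision: a Discount knows its
      # percentage and is applied to a price. Catching bugs is a side effect.
      def test_ten_percent_off
        assert_equal 90.0, Discount.new(10).apply_to(100.0)
      end
    end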

Of what use is this list? Well, I'm going to use it to remind me to think about my habits. Suppose I'm a specialist tester on an agile team. Being a specialist is comfortable to me - it's been my job many times - but I have to remember it cuts a bit across the grain of an agile project. I'll have to think more about earning - and giving - trust, about offering help outside my specialty, about taking care that my bug reports don't disrupt the smooth steady flow of improvement. Otherwise, I'll be wrong for my context.

My hunch is that many testers will find the team dynamics of an agile project their biggest challenge.

## Posted at 17:21 in category /context_driven_testing

Fri, 20 Jun 2003

The personal context

At Agile Fusion, I flashed on something about context-driven testing. James Bach said I should write it down.

In the ideal, adding context-driven testing to a project means that the tester observes the context and designs a testing strategy that matches it (while recognizing that the strategy will change as understanding increases).

Reality is less tidy, less rational. First, any particular strategist comes with a bundle of preferences, a set of experiences, and a bag of tricks. The natural first impulse is to do this project like a prior successful one. This project's context has an influence, to be sure, but does it really drive the strategy? Often not, I suspect. The test - perhaps - of context-driven-ness is how readily the strategist recognizes that what appears to be external context is the projection of internal biases.

This is especially tricky because internal biases take on external reality. To be trite, the observer affects the observed. The most important part of context is the people (principle 3). The strategist changes people's goals, activities, assumptions, and beliefs. So early choices shape the context, I suspect often in self-reinforcing ways.

This argues for rather a lot of humility on the part of the strategist. On the other hand, things have to get done. One cannot spend time in an agony of undirected self-doubt. So, an assignment for context-driven testers: tell stories about how you misjudged the context, then recovered. And about how you shaped a project wrongly. My sense is that the effect described here, though hardly insightful, is under-discussed.

## Posted at 05:19 in category /context_driven_testing

Wed, 26 Mar 2003

Best practices?

A glimpse of Cem Kaner's upcoming talk at the Pacific Northwest Software Quality Conference:

Twenty-plus years ago, we developed a model for the software testing effort. It involved several "best practices," such as these:

  • the purpose of testing is to find bugs;
  • the test group works independently of the programming group;
  • tests are designed without knowledge of the underlying code;
  • automated tests are developed at the user interface level, by non-programmers;
  • tests are designed early in development;
  • tests are designed to be reused time and time again, as regression tests;
  • testers should design the build verification tests, even the ones to be run by programmers;
  • testers should assume that the programmers did a light job of testing and so should extensively cover the basics (such as boundary cases for every field);
  • the pool of tests should cover every line and branch in the program, or perhaps every basis path;
  • manual tests are documented in great procedural detail so that they can be handed down to less experienced or less skilled testers;
  • there should be at least one thoroughly documented test for every requirement item or specification item;
  • test cases should be based on documented characteristics of the program, for example on the requirements documents or the specifications;
  • test cases should be documented independently, ideally stored in a test case management system that describes the preconditions, procedural details, postconditions, and basis (such as trace to requirements) of each individual test case;
  • failures should be reported into a bug tracking system;
  • the test group can block release if product quality is too low;
  • a count of the number of defects missed, or a ratio of defects missed to defects found, is a good measure of the effectiveness of the test group.

Some of these practices were (or should have been seen as) dubious from the start... they are all far from being universal truths.

He continues:

Testing practices should be changing. No--strike that "should." Practices are changing.

For those who don't pay attention to testing because it's an intellectual backwater, be aware: there's a dust-up in progress. With luck, and sympathetic attention from "outsiders", things will be better in five years.

## Posted at 20:20 in category /context_driven_testing

Mon, 03 Feb 2003

Context-driven testing and agile development

I'm a member of a school of testing called "context-driven". Part of what I want to do with this blog is talk through what context-driven testing is, what agile testing might be, and how and whether they intersect.

I've never been enormously fond of the context-driven manifesto. It somehow seems to miss the essence, and - worse - it's too easy for anyone to agree to, even people who seem to me entirely not of the school. (Note: I had a hand of some sort in writing that manifesto, so I'm not blaming anyone else for its failure to thrill me.)

In a talk I gave at SD East, I listed things about agility that resonate with me. Agility emphasizes:

  • conversation and collaboration
  • minimal documentation
  • working software soon
  • responding to change

Then I listed things about context-driven testing that distinguish it from conventional testing:

  • projects are conversations about quality
  • documents are treated as 'interesting fictions'
  • a big emphasis on producing bug reports soon (bug reports against working software)
  • responding to change

Pretty close parallels, it seems to me.

## Posted at 20:35 in category /context_driven_testing

About Brian Marick
I consult mainly on Agile software development, with a special focus on how testing fits in.

Contact me here: marick@exampler.com.
