Exploration Through Example

Example-driven development, Agile testing, context-driven testing, Agile programming, Ruby, and other things of interest to Brian Marick

Sat, 04 Oct 2003

Agile testing directions: Testers on agile projects

Part 7 of a series
The table of contents is below

Should there be testers on agile projects?

First: what's the alternative? It is to have non-specialists (programmers, business experts, technical writers, etc.) perform the activities I've identified in this series: helping to create guiding examples and producing product critiques. Or, symmetrically, it's to have testers who do programming, business analysis, technical writing, etc. It's to consider "testing" as only one set of skills that needs to be available, in sufficient quantity, somewhere in the team, to service all the tasks that require those skills.

Why would non-specialists be a bad idea? Here are some possible reasons:

  • Testing skills are hard to learn. If you try to be a tester and a programmer or a tester and a technical writer, you won't have the minimum required skills to be a good enough tester.

  • Suppose you're the best basketball player in the world and also the best car washer. You should nevertheless pay someone else to wash your car, because you could earn far more in that hour playing basketball than you'd save washing your own car. That's an example of comparative advantage, which Paul Samuelson advanced as the only proposition in the social sciences that's both true and non-trivial. It's a general argument for specialization: it's to the advantage of both you and the person you hire that each of you specialize. So why shouldn't a person with a knack for testing do only testing, and a person who's comparatively stronger at programming do only programming?

  • Testing might not be so much a learned skill as an innate aptitude. Some people are just natural critics, and some people just aren't.

  • All the other tasks that a tester might take on in a project imply sharing ownership of the end product. Many people have trouble finding fault in their own work. So people who mix testing and other tasks will test poorly. It's too much of a conflict of emotional interest.

  • A tester benefits from a certain amount of useful ignorance. Not knowing implementation details makes it easier for her to think of the kinds of mistakes real users might make.

Argument

Let me address minimum required skills and comparative advantage first. These arguments seem to me strongest in the case of technology-facing product critiques like security testing or usability testing. On a substantial project, I can certainly see the ongoing presence of a specialist security tester. On smaller projects, I can see the occasional presence of a specialist security tester. (The project could probably not justify continual presence.)

As for the exploratory testers that I'm relying on for business-facing product critiques, I'm not sure. So many of the bugs that exploratory testers (and most other testers) find are ones that programmers could prevent if they properly internalized the frequent experience of seeing those bugs. (Exploratory testers - all testers - get good in large part because they pay attention to patterns in the bugs they see.) A good way to internalize bugs is to involve the programmers in not just fixing but also in finding them. And there'll be fewer of the bugs around if the testers are writing some of the code. So this argues against specialist testers.

To put it another way: I don't think there's any reason most people can't acquire the minimum required exploratory testing skills. And the argument from comparative advantage doesn't apply if washing your car is good basketball practice.

That doesn't say that there won't be specialist exploratory testers who get a team up to speed and sometimes visit for check-ups and to teach new skills. It'd be no different from hiring Bill Wake to do that for refactoring skills, or Esther Derby to do that for retrospectives. But those people aren't "on the team".

I think the same reasoning applies to the left side of the matrix - technology-facing checked examples (unit tests) and business-facing checked examples (customer tests). I teach this stuff to testers. Programmers can do it. Business experts can do it, though few probably have the opportunity to reach the minimum skill level. But that's why business-facing examples are created by a team, not tossed over the wall to one. In fact, team communication is so important that it ought to swamp any of the effects of comparative advantage. (After all, comparative advantage applies just as well to programming skills, and agile projects already make a bet that the comparative advantage of having GUI experts who do only GUIs and database experts who do only databases isn't sufficient.)
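
To make that concrete, here's a minimal sketch in Ruby (the Discount class and its numbers are invented for illustration, not taken from any real project). The first test case is a technology-facing checked example, written in the programmers' vocabulary; the second states the same rule as a table of examples a business expert could read and argue with. In practice the second kind might live in a tool like Fit rather than in Ruby code, but the difference that matters is the audience, not the tool.

```ruby
require 'test/unit'

# Invented domain code, just enough to check against.
class Discount
  def self.for(order_total)
    order_total >= 100 ? order_total * 0.10 : 0.0
  end
end

# Technology-facing checked example: a programmer's unit test.
class DiscountUnitTest < Test::Unit::TestCase
  def test_no_discount_below_threshold
    assert_equal 0.0, Discount.for(99)
  end

  def test_ten_percent_at_threshold
    assert_equal 10.0, Discount.for(100)
  end
end

# Business-facing checked example: the same rule as rows a business
# expert could read ("an order of $250 earns a $25 discount").
class DiscountCustomerTest < Test::Unit::TestCase
  EXAMPLES = [
    # order total, expected discount
    [99,  0.0],
    [100, 10.0],
    [250, 25.0],
  ]

  def test_customer_examples
    EXAMPLES.each do |total, expected|
      assert_equal expected, Discount.for(total)
    end
  end
end
```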

Now let's look at innate aptitude. When Jeff Patton showed a group of us an example of usage-centered design, one of the exercises was to create roles for a hypothetical conference paper review system. I was the one who created roles like "reluctant paper reviewer", "overworked conference chair", and "procrastinating author". Someone remarked, "You can tell Brian's a tester". We all had a good chuckle at the way I gravitated to the pessimistic cases.

But the thing is - that's learned behavior. I did it because I was consciously looking for people who would treat the system differently than developers would likely hope (and because I have experience with such systems in all those roles). My hunch is that I'm no more naturally critical than average, but that I've learned to become an adequate tester. I think the average programmer can, as well. Certainly the programmers I've met haven't been notable for being Panglossian, for thinking other people's software is the best in this best of all possible worlds.

But it's true that an attack-dog mentality usually applies to other people's software. It's your own that provokes the conflict of emotional interest. I once had Elisabeth Hendrickson do some exploratory testing on an app of mine. I was feeling pretty cocky going in - I was sure my technology-facing and business-facing examples were thorough. Of course, she quickly found a serious bug. Not only was I shocked, I also reacted in a defensive way that's familiar to testers. (Not harmfully, I don't think, because we were both aware of it and talked about it.)

And later I did some exploratory testing of part of that app while under a deadline, realized that I'd done a weak coding job on an "unimportant" part of the user interface, and then felt reluctant to push the GUI hard because I really didn't want to have to fix bugs right then.

So this is a real problem. I have hopes that we can reduce it with practices. For example, just as pair programming tends to keep people honest about doing their refactoring, it can help keep people honest about pushing the code hard in exploratory testing. Reluctance to refactor under schedule pressure - leading to accumulating design debt - isn't a problem that will ever go away, but teams have to learn to cope. Perhaps the same is true of emotional conflict of interest.

Related to emotional conflict of interest is the problem of useful ignorance. Imagine it's iteration five. A combined tester/programmer/whatever has been working with the product from the beginning. When exploring it, she's developed habits. If there are two ways to do something, she always chooses one. When she uses the product, she doesn't make many conceptual mistakes, because she knows how the product's supposed to work. Her team's been writing lots of guiding examples - and as they do that, they've been building implicit models of what their "ideal user" is like, and they have increasing trouble imagining other kinds of users.

This is a tough one to get around. Role playing can help. Elisabeth Hendrickson teaches testers to (sometimes) assume extreme personae when testing. What would happen if Bugs Bunny used the product? He's a devious troublemaker, always probing for weakness, always flouting authority. How about Charlie Chaplin in Modern Times: naïve, unprepared, pressured to work ever faster? Another technique that might help is Hans Buwalda's soap opera testing.
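
In the same spirit, a team could build deliberate variety into its automated checks. Here's a minimal sketch with invented names (app, its methods, and the routes are all hypothetical): when the product offers more than one route to the same action, pick among them at random, so at least the checked examples don't settle into the same groove their authors have.

```ruby
# A sketch, not from any real project: equivalent routes to the same
# action, chosen at random so automated runs don't always exercise
# the habitual path. Every name here is invented for illustration.
EQUIVALENT_ROUTES = {
  'submit paper' => [
    lambda { |app| app.click('Submit') },           # the obvious way
    lambda { |app| app.menu('File', 'Submit...') }, # the menu way
    lambda { |app| app.keystroke('Alt+S') },        # the shortcut way
  ],
}

def perform(app, action)
  routes = EQUIVALENT_ROUTES.fetch(action)
  routes[rand(routes.size)].call(app)  # arbitrary, not habitual
end
```

It's artificial variety rather than real ignorance, but it's cheap.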

It's my hope that such techniques will help, especially when combined with pairing (where each person drives her partner to fits of creativity) in a bullpen setting (where the resulting party atmosphere will spur people on). But I can't help but think that artificial ignorance is no substitute for the real thing.

Staffing

So. Should there be testers on an agile project? Well, it depends. But here's what I would like to see, were I responsible for staffing a really good agile team working on an important product. Think of this as my default approach, the prejudice I would bring to a situation.

  • I'd look for one or two people with solid testing experience. They should know some programming. They should be good at talking to business experts and quickly picking up a domain. At first, I'd rely on them for making sure that the business-facing examples worked well. (One thing they must do is exercise analyst skills.) Over time, I'd expect them to learn more programming, contribute to the code base, teach programmers, and become mostly indistinguishable from the people who started off as programmers.

    Personality would be very important. They have to like novelty, they shouldn't have their identity emotionally wrapped up in their job description, and they have to be comfortable serving other people.

  • It would be a bonus if these people were good at exploratory testing. But, in any case, the whole team would receive good training in exploratory testing. I'd want outside exploratory testing coaches to visit periodically. They'd both extend the training and do some exploratory testing themselves. That testing is part of ongoing monitoring of the risk that the team is too close to the product to find enough of the bugs.

  • To the extent that non-functional "ilities" like usability, security, and performance were important to the product, we'd buy that expertise (on-site consultant, or visiting consultant, or a hire for the team). That person would advise on creating the product, train the team, and test the product.

    (See Johanna Rothman about why such non-functional requirements ought to be important. I remember Brian Lawrence saying similar things about how Gause & Weinberg-style attributes are key to making a product that stands out.)

  • I'd make a very strong push to get actual users involved (not just business experts who represent the users). That would probably involve team members going to the users, rather than vice-versa. I'd want the team to think of themselves as anthropologists trying to learn the domain, not just people going to hear about bugs and feature requests.

Are there testers on this team, once it jells? Who cares? - there will be good testing, even though it will be increasingly hard to point at any activity and say, "That. That there. That's testing and nothing but."

Disclaimers

"I'd look for one or two people with experience testing. They should..."

That ellipsis refers to a description that, well, is pretty much a description of me. How much of my reasoning is sound, and how much is biased by self-interest? I'll leave that to you, and time, to judge.

"... the whole team would receive good training in exploratory testing."

Elisabeth Hendrickson and I have been talking fitfully all year about creating such training. Again, I think my conclusion - that exploratory testing is central - came first, but you're entitled to think it looks fishy.

Posted at 12:20 in category /agile


Agile Testing Directions
Introduction
Tests and examples
Technology-facing programmer support
Business-facing team support
Business-facing product critiques
Technology-facing product critiques
Testers on agile projects
Postscript
