Exploration Through Example

Example-driven development, Agile testing, context-driven testing, Agile programming, Ruby, and other things of interest to Brian Marick

Mon, 27 Dec 2004

Two other writing tips

Substitute "damn" every time you're inclined to write "very"; your editor will delete it and the writing will be just as it should be.

That's from Mark Twain. I read it 25-30 years ago, and I cannot type the letters v-e-r-y without recalling it. It's (wait for it...) damn useful advice.

I also remember another tip from Ralph Johnson, from back when I was his graduate student. I don't remember it as well, but it went something like this:

Don't use the word "obviously". If the rest of the sentence isn't obvious to the reader, you've just insulted him. Why risk that?

I never fail to hesitate just as I'm starting to type o-b-v...

## Posted at 11:35 in category /misc [permalink] [top]

No Powerbook G5

The Powerbook G5 announcement has been canceled. Although my drive did die, I bought another disk instead of another Powerbook.

P.S. I've decided to opt for convenience over expense. Henceforth, I'm going to make bootable full backups to a Powerbook-compatible 2.5" drive in a firewire enclosure, rather than to a partitioned big disk. Next time a drive dies, I'll move the drive: Up and running again in fifteen minutes. And if my drive starts making ominous clunking sounds, I won't hang around waiting for it to die. Plus the extra planning and expense will ensure that no disk ever dies on me again.

Anyone else do that? Or do you have some other clever backup strategy? Mail me.

## Posted at 08:56 in category /mac [permalink] [top]

Wed, 15 Dec 2004

Bill of Rights

Today is the 213th anniversary of the ratification of the U.S. Constitution's Bill of Rights. That document means a lot to me.

It's also a pretty good example of how difficult it is to write an ironclad requirements document, especially in natural language. It's a bit presumptuous, I know, but a lot of the text could have benefited from my first rule for revising. And it might have helped to have an appendix with examples / tests. ("So this policeman can magically see through walls. He claims that means he doesn't need a warrant. He's wrong because the whole point of the amendment is the bit about being 'secure in their houses'.") Oh, we're doing OK having a different group write the tests. But some sort of record of conversations around specific examples might have helped us understand how the writers intended us to think about the requirements and extrapolate from them.

## Posted at 13:20 in category /misc [permalink] [top]

Mon, 13 Dec 2004

Powerbook G5 to be announced soon

My powerbook is making truly alarming clunking noises. Gosh, could it be the disk? I'll probably be buying a new one soon, which can only mean that some wonderful new model will be announced in soon+1 days.

## Posted at 20:45 in category /mac [permalink] [top]

Selling Agile

As I've mentioned before, I sometimes hang out with Andrew Pickering, head of the Sociology department at Illinois. We've talked about Agile methods, which match well his analytical framework for explaining scientific progress (as put forth in his fairly dense The Mangle of Practice). As a result, he invited me to write a chapter on Agile for a forthcoming book on "the mangle." I produced a draft in February. It was mostly a detailed retelling of a refactoring episode in manglish terms. Lots of Java.

As so often happens to me, the current draft is much different. Instead of being about the micro, it's about the macro: explaining Agile development to sociologists.

So far, so boring. But I decided to end with a topic that intrigues me. Agile software development is not "businesslike". You've got a room full of programmers yammering to each other. And let's be frank: that room is messy. There's food all over the place. Maybe toys. Tables with 3x5 cards lying on them, programmers pushing them around like game pieces. Crude, childlike graphs on the wall.

Since at least the '60s, business has been successfully domesticating programmers, and all that progress seems to have been lost. There's even a company where the dress code calls for ties, and the programmers on the Agile team have been given a waiver. That's the first step to the harder stuff. What now will prevent all that California-style New Age babbling about 'emergent design' from leading to web sites describing which crystals are best for writing PHP code? And office water coolers filled with "energy water"?

I exaggerate. A bit. But the style and many of the beliefs of Agile development do not mesh well with what's traditionally thought of as the proper business workplace and practices. So why is it that I run into business people who love their Agile teams? What is it that those teams are doing right?

Is it just that Agile projects deliver better ROI? I certainly hope they do, but that claim isn't proven. And I don't think it accounts for the distinct emotional response I've seen. And, in any case, those concerned with ROI are no more dispassionate utility maximizers than the average hairless ape, so there must be more to the success of Agile.

Not that it's always successful. We so often hear the sad tale of a new VP coming into an Agile shop and destroying the Agile teams, typically for reasons that seem (to us) to be raw prejudice. Perhaps the reasons why some teams sell themselves well can help those teams that don't.

Pages 8-10 of my paper give those reasons, so far as I understand them. But I could be wrong. And I've probably overlooked or incorrectly discounted some. So I'd truly appreciate reviews, which you can send to marick@exampler.com. I have both a PDF of the chapter and a Word version in case you want to put comments in the text. (Sorry, RTF users: Word mangles the document when it exports it to RTF.)

I'd also appreciate comments on my description of Agility. I do use manglish jargon in that description, but in a way that I hope will still be meaningful for people who haven't read Pickering. (And I've also previously posted a description of the key terms.)

(And I have a guilty feeling that I'm being unfair to Parnas and Clements in my description of their "A rational design process: how and why to fake it." If you think I am, let me know.)

Thanks.

## Posted at 12:16 in category /mangle [permalink] [top]

Tue, 07 Dec 2004

Austin Workshop on Test Automation

Bret Pettichord is co-organizing the next Austin Workshop on Test Automation, this one devoted to open-source web test tools, January 7-9:

Join us as we review and contribute to open-source tools for the functional testing of web-based applications. Help us understand the strengths and weaknesses of existing open-source tools and work with us to improve them, and make them easier to understand and adopt. The workshop will consist of presentations, discussions and actually sitting down and writing code, documentation, and examples. We seek participation from developers of open-source test tools, testers with experience using them, and people who want to learn more about how they can contribute to open-source efforts.

I think it will be good, but I can't go. So you should go instead.

## Posted at 07:37 in category /testing [permalink] [top]

Mon, 06 Dec 2004

The prayer of every true partisan

From J.S. Mill, via Lionel Trilling, via John Holbo (in a discussion of one of Pragmatic Dave Thomas's alter ego's books), a "prayer of every true partisan of liberalism":

Lord, enlighten thou our enemies... sharpen their wits, give acuteness to their perceptions and consecutiveness and clearness to their reasoning powers. We are in danger from their folly, not from their wisdom: their weakness is what fills us with apprehension, not their strength.

Trilling comments:

What Mill meant, of course, was that the intellectual pressure which an opponent... could exert would force liberals to examine their position for its weaknesses and complacencies.

Isn't that a nice sentiment? And not just for political partisans.

P.S. A duty to attend to an opponent's thought is not a duty to argue with that opponent. I've in the past shrugged and walked away from arguments. Citing this post won't shame me into resuming them.

## Posted at 11:35 in category [permalink] [top]

Thu, 18 Nov 2004

Want to hire a Customer or VP of engineering?

One of the depressing things about Agile is the frequency with which great teams and great projects are derailed by the arrival of new management with no sympathy for those weirdo processes. That's happened to a client of mine. Two of the victims have asked me to be alert for possible jobs in the San Francisco Bay Area. One is a manager of managers, and one is a product manager. The people who'd hire them are outside my network, but perhaps they're not outside yours.

One, Mr. A--, was the Customer on a near-XP team. (His formal title was Senior Technical Product Manager.) Over the time I consulted with the company, he blossomed into the kind of Customer I'd want on my project.

The other person, Mr. B--, was the VP of Engineering responsible for migrating five projects from torpid to agile processes. He was a good person to consult for (and he obviously has the taste to hire well...) I heard far fewer gripes about him than I'm used to, so I suspect he was also a good manager for employees.

If you need either kind of person, drop me a line, and I'll forward a resume.

## Posted at 18:30 in category /misc [permalink] [top]

Wed, 17 Nov 2004

How people add basements to houses

In response to my note about adding a basement to a house, Lisa Crispin writes:

On the basement analogy, I would just like to point out that my neighbor dug his basement out by hand some years after he bought the house. This was back in the 40s when there weren't really many power tools to help with this. He had a conveyor belt, buckets and a shovel. He paid a laborer with a wheelbarrow to take the dirt a couple blocks down to a gulch and dump it there (of course, you can't do this these days in the city either, but y'know, simplest thing...). He ended up with a nice finished basement. This is very common in our neighborhood. Most houses were built in the 20s and 30s with only crawl spaces, and dug out later (I guess it's amazing that the gulch is still a gulch!). Our own basement has been dug out TWICE, who knows when but certainly more than 40 years ago (the second dig was to fit in a big coal furnace), and is extra deep. So maybe the idea of building the basement first and then the house was a later innovation! ;->

Interesting. Before heavy equipment, the cost differential between basement before and basement after was probably much smaller. Just as technology can make house construction from the bottom up more compelling than it once was, technology (refactoring tools, lots of spare cycles for continuous builds, pair programming) can make software design from the top down less compelling.

## Posted at 12:23 in category /agile [permalink] [top]

Tue, 16 Nov 2004

Hints for revising

If a sentence is unclear, do not fix it by adding more words. Fix it by splitting it into two sentences. Then maybe add a third.

If a paragraph is unclear, do not fix it by adding more sentences. First look earlier in the piece. Can you find a place to add a few sentences that will make the later idea clearer? Perhaps you can rule out an interpretation that will later cause confusion. Write text to head off the problem, then return to adjust the guilty paragraph.

If an idea or procedure is complicated, don't add more words explaining it. Add an example. If the example is too complicated, don't add more words explaining it. Precede it with a simpler example, then change the explanation of the complicated example to focus on what it adds to the simpler one.

If you use change tracking, turn display of changes off. You won't be able to make the new text read well if it's all mixed up with the old text.

After you change a sentence, leave it aside for a while, then come back and reread at least the whole paragraph that contains it. Then tweak the sentence to make it fit better into its environment.

How do you find what needs revision?

Can you turn that bullet list into one or more paragraphs? Bullet lists are, on average, easier for writers but harder for readers. They're easier for writers because you don't have to worry about transitions between one idea and the next. They're harder for readers because there are no transitions guiding them from one idea to the next. Will their eyes glaze over because you're not providing them with a sense of flow?

Read your text aloud. You don't have to write like you speak, but reading aloud changes your perspective. Awkwardness will jump out at you.

Reading aloud is one way to get some distance, to separate the piece from your memory of writing it. Putting it aside for a day or, better, a week does the same thing. I find that reading a printed copy helps me see things I don't see on a screen. Can you find other tricks? Richard P. Gabriel tells the story of one writer who would tape his work to a wall, go to the other side of the room, and read it through binoculars.

Print the piece with a wide margin on one side. Next to each paragraph, scribble a few words about the paragraph's topic. Now read the scribbles. Do they form a progression of thought, a developing story of explanation? Or are they more like a bunch of thoughts hitched together in any old order? If so, shuffle them into a better order. (Some people cut the paragraphs out and move them around; I usually draw arrows from where the paragraph is to where it should go. I suspect the other people do better.)

Sometimes you read a piece where a particular secondary idea or clever chunk of text seems to have undue importance. It's almost as if the piece were distorted to find a way to make that gem fit. That's usually because it was. The gem came first, the piece grew away from it, but the author forced it to stay. Ask what your favorite bit of the piece is, then throw it out - or at least consider how the piece would read if you dropped it. I find this useful to do when I get bogged down during writing.

(Inspired by about twenty years of writing badly, about ten of writing competently, and five years of getting paid to edit. Not inspired by any particular author.)

## Posted at 15:58 in category /misc [permalink] [top]

Mon, 15 Nov 2004

Electronic voting machines: action can be taken

Here are some comments by Cem Kaner, professor of computer science and attorney, on the move toward IEEE approval of voting machine standards that do not include a paper trail. He is a member of the standards committee and is not happy.

Comments are excerpted from a semi-public mailing list, with permission. To set the stage, here's an excerpt from a note of Cem's:

... What puzzles me is why the IEEE is willing to associate itself with the development of a standard that pretends that non-recountable voting equipment is a reasonable, acceptable product.

Which led to this response:

Maybe the standard is being driven by the parties with a vested interest. My experience with IEEE standards is that most are driven by a small handful of people and are therefore easy to "drive" in certain directions. Something for those of us that vote on such standards to keep in mind when voting on this.

... and to Cem's longer reply, which includes some activities that we who care can take:

Most of the executives of the drafting committee work for the vendors or a contractor to the vendors. My opinion as a committee member is that the process has been driven by the vendors' employees.

What has puzzled me has been the extent to which IEEE management has taken the side of drafting committee leadership during disputes over process. Some of the process fights that I've seen: membership-in-committee rules have been used to exclude critics but not to exclude supporters. Proxy rules have been reinterpreted several times. Agreements are reached during the meetings, but the minutes typically list no agreements--we vote on meeting minutes whose only listed decision is approval of the previous meeting's minutes. Agendas for meetings have been distributed only a few days (rather than the "standard" 30) before meetings, drafts of the (quite long) standard have been circulated only days before the meetings, "agreed" changes seem to get lost, and it is almost impossible to trace comments to changes in the draft or changes back to comments.

A different issue is that standard drafts are considered confidential and may not be circulated -- I can't send you one for review. You can buy one for $100, though. With the public policy implications inherent in this standard, I think this is outrageous.

Some of my friends have commented that I have taken a sharper tone toward IEEE and its standards over the past year. The voting standard process has played a substantial role in that. I have seen disappointing work from (and in) other IEEE standards committees but this one leaves me questioning the integrity of the IEEE process.

The IEEE standard P1583 will come up for balloting soon. Please, join the IEEE Standards Association ($39), sign up for balloting on this standard, and vote against it.

For those of you even more actively interested, the next meeting of the committee is this Thursday/Friday in New Jersey. You are not required to be an IEEE member to attend (my understanding is that this is because ANSI rules bar that requirement, in the drafting of standards that will be submitted to ANSI for approval as national standards.) You have to attend the first meeting in person, but can go to subsequent meetings (as I do) by conference call. You become an official member of the committee if you apply to join during the first meeting and ask for membership again during the second. (Depending on the politics du jour and what votes are expected that meeting, new people gain their voting membership either at the start of the meeting or at the end of it. Like I said, it has been a most interesting process.)

## Posted at 17:58 in category /misc [permalink] [top]

Who came up with the hurricane metaphor?

Someone came up with the idea of using hurricane prediction tracks as metaphors for Agile project planning. Who was it? I want to give credit where due.

UPDATE: It seems likely I heard it from Tim Lister at ADC 2004.

UPDATE2: Clarke Ching knows more. He first saw it in Frank Patrick's blog in September 2003. Frank got it from James Vornov, who got the picture from Dave Rogers. Thanks, Clarke.

## Posted at 09:59 in category /agile [permalink] [top]

Sat, 13 Nov 2004

Dealing with culture clash

Different disciplines have different cultures. There can be culture clash. How do you deal with that in an Agile project?

A group of us addressed this question at a Scrum Master get-together. (We were Christian Sepulveda, Jon Spence, Michele Sliger, Charlie Poole, and me.)

We focused on three more specific problems:

  • You need to get past cultural conflicts. (But you don't necessarily need to solve them.)

  • People are afraid they'll have to relinquish their disciplinary identities.

  • You can lead people to a cross-functional team, but you can't make them collaborate.

We recommend the following at the beginning of the project:

  1. Try to get the right people on the team. If it later becomes apparent that you didn't, separate the poison. Offer them additional training away from the team, put them on a special project (again, away from the team), help them find another project where they'd be more comfortable, and - as a last resort - suggest that they look for another position.

  2. Have an open forum at the start of the project. Get the issues out in the open.

  3. Use the "Word in a hat" game. Each team member writes down a word or phrase that best describes their main concern about the project, folds the paper, and places it in a hat. The phrases should be something like "rigid customers", "bonuses", or "schedule". The Scrum Master pulls each paper out of the hat, reads it aloud, and starts a discussion that preserves anonymity.

  4. The open forum should result in an internal risk management plan to monitor cross-functional issues the team identified - and then deal with them. This can be simple, like a weekly pizza lunch to review open issues or new ones.

  5. The team should have a clear and common goal that all members can clearly articulate. For example: they should be able to recite the purpose of the project to their CEO should they find themselves in the elevator with her. Another idea is Jim Highsmith's "design the box" exercise. In it, the team designs the packaging of the software and puts it in a common area as a constant visual reminder of the project's ultimate goal.

Throughout the project:

  1. Monitor issues. Address them in retrospectives.

  2. Use "odd pairings". When a task needs doing, have programmers pair with testers, have testers pair with technical writers, have technical writers pair with programmers. This will spread knowledge through the team and cause people to sympathize with people in different roles.

  3. Ask "Why?" As team members take on tasks, they should think about how what they're doing helps to achieve the project's goals. By asking "why am I doing this?" the team is less likely to revert to non-agile form or start on wasteful activities.

Clearly, we've only scratched the surface. In particular, I notice that we haven't got anything specific to the problem of people afraid of having to relinquish their disciplinary identities. That's a problem near to my heart, because it's one that comes up a lot with testers.

## Posted at 19:17 in category /agile [permalink] [top]

Summary of the cost of change curve

There was a lot of email discussion about my post on the cost of change curve. A restatement of the problem:

Assume a classic waterfall process. On March 15, you release version 1 of your product. On March 20, you start work on version 2. On April 20, an urgent change request comes in. Assume two choices:

  • make the change in version 1 and release a patch.
  • make the change in version 2 and include it in version 2's release.

Let's assume that certain of the work is the same in either case. You have to scour version 1's requirements documents, architectural design documents, design documents, and code for the implications of the change. You have to update each of them. (Remember, we are assuming the kind of project to which the cost-of-change curve applies.) You have to make the change and test it.

So why would version 1 have a substantially higher expected cost?

Here's what people came up with:

  • In version 1, if the work toward the patch doesn't detect misimplementations, nothing will: you've just delivered a defective patch, which has substantial costs (especially in goodwill). In version 2, mistakes made in the change can be caught at many places along the way to the new release. Another way of putting it is that the cost-of-change curve is largely measuring risk of releasing defects.

  • There is some additional work in version 1 (preparing a patch release, special testing of the patch release, keeping track of which customers have which patches, maintaining multiple version control branches, etc).

  • In version 2, some of the work can be folded into things you're doing anyway (such as updating requirements documents for other reasons, changing the database schema, running manual test passes, etc).

  • Disruptive, interrupting work - "context switching" - costs more than doing work you planned on.

  • Money that wasn't budgeted (to make changes in version 1) "costs more" than money that was (to fold changes into version 2).

  • Some of the cost of the change is borne by people outside the development organization. (They're the ones working with inadequate software while waiting for the patch, they're the ones who may have to relearn things, they have to install the patch, etc.) Even in an imperfect market, some of that cost would presumably be reflected back to the development organization. (Echoes here of Genichi Taguchi's cost of quality curve.) In the case of the version 2 release, the recipient's cost of the change is included in the expected cost of any new release.

  • At the end of version 1, there may have been some cleanup that drives the costs back down (tactical hacks fixed, architecture rejiggered, etc.). Even without that, the team may have gotten some down time to refresh themselves. (That is, they're temporarily out of the trap wherein overwork visibly consumes hours but invisibly destroys efficiency.)

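Just to make the first bullet concrete, here's a toy expected-cost model. Every number in it is invented purely for illustration - the point is only the shape of the comparison, not the particular values:

```python
# Toy expected-cost model: patching version 1 vs. folding the change
# into version 2. All numbers are invented for illustration only.

SHARED_WORK = 10.0  # cost common to both choices (analysis, code, test)

def patch_v1_cost(p_defect=0.3, defect_cost=50.0, release_overhead=5.0):
    """Patch release: one chance to catch mistakes, plus packaging overhead."""
    return SHARED_WORK + release_overhead + p_defect * defect_cost

def fold_into_v2_cost(p_defect=0.05, defect_cost=50.0):
    """Version 2: many later checkpoints lower the defect-escape probability."""
    return SHARED_WORK + p_defect * defect_cost

if __name__ == "__main__":
    print(f"patch v1:   {patch_v1_cost():.1f}")    # 10 + 5 + 0.3*50 = 30.0
    print(f"fold in v2: {fold_into_v2_cost():.1f}")  # 10 + 0.05*50 = 12.5
```

Even in this crude sketch, most of the gap comes from the defect-risk term, which matches the claim that the cost-of-change curve is largely measuring the risk of releasing defects.
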
The agile methods are, in large part, about driving these costs down, it seems to me (and to others).

Thanks to Alex Aizikovsky, Laurent Bossavit, Todd Bradley, Clarke Ching, Jeffrey Fredrick, Chris Hulan, Chris McMahon, Glenn Nitschke, Alan Page, Andy Schneider, Shawn Smith, Glenn Vanderburg, Robert Watkins, and perhaps others I forgot to record. Because of the underwhelming response to my quirky network invitations thing, I'd concluded the 120,000 hits my blog got last month were mostly due to two out-of-control news aggregators hitting my site once per minute.

## Posted at 12:20 in category /misc [permalink] [top]

Wed, 10 Nov 2004

Changing direction

In principle, the product owner of an agile project could, at any moment, throw out all the backlog of stories and take the product in a completely new direction. For the book chapter I'm writing, I'd like to give a couple of examples of radical change. What's the biggest change to the backlog that you've seen?

Obfuscate all details. I just would like to be able to say, "One correspondent told of 50% of the stories changing one rainy afternoon" or "... told how, in 2000, their consumer e-commerce site was redirected to become an air traffic control system."

Mail me. Thanks.

## Posted at 10:59 in category /agile [permalink] [top]

Tue, 09 Nov 2004

Adding a basement to the house

Agile methods people claim that changed requirements late in a project are not a disaster. Skeptics claim that's impossible, that it's like finishing the first story of a house and then deciding you want a basement.

That's a misguided analogy. The reason putting in a basement after the walls are up is hard is that almost no one does it. If it were done to every house during construction, you may be sure that homebuilders would have learned to do it as cheaply as is physically possible.

Agile projects don't think ahead: in iteration N, they don't pay much attention to what's coming in iteration N+1, much less iteration N+5. That means that every iteration brings with it a whole slew of what are, in effect, changed requirements. That trains both the software and the team to handle change as cheaply as is softwarically possible.

It's like the way that just-in-time inventory management forces factories to improve their production process. Because they cannot buffer asynchronies with stock on hand, they are forced to remove them. (See The Machine That Changed the World.)

## Posted at 23:53 in category /agile [permalink] [top]

The cost of change curve

Everyone knows the canonical cost of change curve (first image), where the cost of a change rises exponentially throughout the project. Let's pretend it's a law of nature, as many, many project planners before us have done.

Now suppose someone notices they need to change a requirement in the middle of the project. The cost of change curve says that the change would be far too expensive. So it's not made.

Fine. But we know that almost every product that's actually used gets re-released with new features added. Usually, the next project starts as soon as the previous one finishes. According to the cost of change curve, that's exactly the point where the costs are highest (second image).

The decision to postpone changes can only make sense if the cost of change resets to something much closer to the far left of the curve (third image). What's supposed to make that happen? And why doesn't that operate in the middle of the first project's curve?
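
Here's a toy sketch of what I mean. The doubling rate and the phase labels are my own invented assumptions, not anyone's measured data:

```python
# The canonical cost-of-change curve as a toy exponential: the cost of
# making a change doubles with each phase. Numbers are illustrative only.

PHASES = ["requirements", "design", "code", "test", "release"]

def change_cost(phase, base=1.0, growth=2.0):
    """Cost of making a change during the given phase."""
    return base * growth ** PHASES.index(phase)

if __name__ == "__main__":
    for p in PHASES:
        print(f"{p:12s} {change_cost(p):5.1f}")
    # By this model a change at release costs 16x one at requirements --
    # yet version 2 starts right there, where the curve says cost peaks.
```

The puzzle, restated: nothing in this model says why the cost should snap back to 1.0 when the calendar flips to version 2.
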

These are serious questions, even though I suspect I'm being stupid. I'm writing a book chapter that explains conventional and agile software development to an audience of sociologists, and this occurred to me. Mail me, and I'll summarize.

## Posted at 09:18 in category /misc [permalink] [top]

Mon, 08 Nov 2004

Voting machine bug reports

Tim Van Tongeren has compiled an interesting list of voting machine bugs reported in the recent election.

## Posted at 09:49 in category /misc [permalink] [top]

Sat, 06 Nov 2004

That ol' "n degrees of separation" thing

When you're asking people to do work for you, especially unpaid work, it helps a lot if you already have a personal relationship. In staffing next year's OOPSLA essays track, I want to find committee members who are well known, interdisciplinary, and like novelty and change. I know some - never enough - people like that in software, but I know fewer outside software. So I'm going to do something quirky: diffuse invitations through a network. Here's how:

  1. On one of my unused wikis, I've placed a list of people who I think would fit well with our plans and might find the opportunity interesting or even useful. (Right now, they're Malcolm Gladwell, Rodney Brooks, Lucy Suchman, Etienne Wenger, Cory Doctorow, Lawrence Lessig, Edward Felten, and Eszter Hargittai.)

  2. If you (a) know someone who is more likely to know the person named than you are, and (b) think that intermediate person would find the idea of the essays track interesting enough to forward it on, send them the text you find here.

  3. But incorrigible optimist that I am (heh!), I'm afraid of a sorcerer's apprentice situation. I don't want intermediate people or the final recipient to get an annoying amount of email. So before you send mail headed for a particular person, check whether four people already have. If they have, don't send mail. Otherwise, send it and leave a note on the wiki.

  4. If you know one of the targeted people, contact me so that we can arrange an introduction.

Thanks. Let's see what happens...

## Posted at 14:37 in category /oopsla [permalink] [top]

Mon, 01 Nov 2004

OOPSLA essays track

The program chair for OOPSLA 2005, Richard P. Gabriel, wants to shake things up. As part of that he's going to institute an Essays track, and I will be program chair for that track. I'm hunting for people to serve on the committee.

The essays don't have to be original research, the usual OOPSLA fare. Instead, they'll be of two types.

  • Richard describes one as "the first draft of your Turing Award lecture". The Turing Award, as the highest honor in computer science, comes with the obligation to produce a speech, usually of the sweeping nature expected from an elder in the field. We're looking for essays of that sort: a survey of breadth and experience, telling the field something about itself, making tacit assumptions and habits explicit.

  • Essays from outsiders who are deeply experienced in a different field, have some knowledge of ours, and can come to us and say, "It's so odd that you do X, because we in field Y do that sort of thing completely differently". These essays should shake us out of our ruts.

To that end, I'd like to get committee members from both inside and outside the field. When they come from inside, I'd like them to have serious knowledge of some outside field. I welcome suggestions.

## Posted at 09:38 in category /oopsla [permalink] [top]

Show, don't tell

As an editor for Better Software magazine, I sometimes give authors the old fiction-writers' advice "show, don't tell". The writer Robert J. Sawyer has written a nice, short essay on it (though I think the first example overdoes it).

I particularly like this essay because it itself shows how the maxim applies to nonfiction writing. Sawyer begins with an introduction to the idea, sketching out the rule. Then he shows a series of negative and positive examples, presenting both and then offering commentary. He shows, then tells.

P.S. As always, I need writers for some department articles. They are:

  • From the Front Line: a story of a (software development) problem you faced, what you did about it, and what you generalize from the experience. This is an especially good slot for novice writers. I enjoy helping such people, and I believe them when they say I do a good job of it.

  • Bug Report: the story of some software failure. The prototypical Bug Report describes the failure and delves down into its root cause.

  • Tool Look: your experience using some tool. This isn't a full-fledged tool evaluation. The idea is to pique the reader's interest in a tool you think useful.

For more, see my magazine FAQ.

The official timing for the next open slot has a first draft due November 15, but I have some slack to slip that.

## Posted at 08:50 in category /misc [permalink] [top]

Fri, 29 Oct 2004

Three talks

(Or, "Just Another Boring Romantic, That's Me")

In one day at OOPSLA, I saw three keynote-ish talks.

The first was by the head of Microsoft Research. I'm sure he's a good fellow - most everyone I've met from Microsoft is - but it was just like a talk from every other high profile Microsoft presenter I've seen, down to that odd Gatesian way they have of gazing raptly at the person they bring on stage to demo something or other. It struck me as mostly a litany of More: more storage, more bandwidth, more cameras attached to more bodies, more visual editors to handle more complexity, more RFID chips in more places. More, more, more.

I found it profoundly depressing, the more so for the answer when someone asked about privacy: "That's a hard problem" (repeated an uncomfortable number of times). No doubt so, but perhaps researchers ought to tackle such hard problems. I do not anticipate more privacy.

Ward Cunningham gave the best talk I've heard him give - a set of stories about the many Big Things that he's helped create: CRC cards, design patterns, wiki, XP. There were threads running through all the stories. Active waiting for flashes of insight. Building on luck. Simplicity. Communication. Courage. Attention to the physical world and the emotional world. Active awareness of others.

The Microsoft Research talk was mostly about piling things on top of things. Ward's was a story of things supporting people who change things that change people that.... Ward's is a story that, pound for pound, dollar for dollar, is more world-changing.

Alan Kay's Turing Award lecture was about three things: the power of a simple idea pursued relentlessly (the meme trail from Sketchpad and Simula to Smalltalk to Squeak), the as-yet-untapped potential of the computer, and our responsibility to our children to give them learning opportunities we couldn't have had.

His demos were cooler than the Microsoft ones. For a bit, that puzzled me. A searchable catalog of sky pictures and galaxies is cool: I plan to show it to my children. Being able to view most any location in the United States down to incredibly fine granularity is cool. So why is an ancient video of flickery black-and-white Sketchpad cooler? Why are Fun Manipulations of Two-Dimensional Objects in Squeak cooler? Why are not very detailed three dimensional moving objects so awesomely cool that I called my wife just to babble to her about it and say we had to teach our children Squeak?

I think it's because in the Microsoft Research world, we're observers, consumers, secondary participants in an experience someone else has constructed for us. In Alan Kay's vision, we're actors in a world that's actively out there asking us to change it. A world like the one Cornel West says Ralph Waldo Emerson's was:

  1. Emerson held that "the basic nature of things, the fundamental way the world is, is itself incomplete and in flux" (p. 15). Moreover, the world and humans are bound up together: the world is the result of the work of people, and it actively solicits "the experimental makings, workings, and doings of human beings" (p. 15).

  2. Emerson believed that this basic nature makes the world joyous. It gives people an opportunity to exercise their native powers with success, because the world is fundamentally supportive of human striving.

  3. And finally, Emerson believed that human powers haven't yet been fully unleashed, but they can be through the "genius of individuals willing to rely on and trust themselves" (p. 16).

(My summary of what West says about Emerson in his The American Evasion of Philosophy.)

P.S. I feel bad saying this about the Microsoft Research guy. He can't help it that he doesn't have the vision of people like Ward and Alan Kay (or, if he does have it, can't express it). We're all just people, mostly muddling along the best we can. But Lord, I wish the genial humanists like Ward and the obsessive visionaries like Alan Kay had more influence. I worry that the adolescence of computers is almost over, and that we're settling into that stagnant adulthood where you just plod on in the world as others made it, occasionally wistfully remembering the time when you thought endless possibility was all around you.

## Posted at 13:30 in category /misc [permalink] [top]

Thu, 28 Oct 2004

Still more burndown charts

Two more charts, both burnup charts instead of burndown.

One from Wayne Allen. I like it because I like area graphs more than bar charts.

Ron Jeffries has updated his very nice article on Big Visible Charts with a burnup chart like Wayne's, though pleasingly hand-drawn instead of in Excel. (I'm serious: I'd do hand-drawn if I could get away with it. For one thing, the extension of the arc could be part of an end-of-iteration ritual. And crudity of presentation reinforces the uncertainty of the prediction.)
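For those who do reach for a tool anyway, the arithmetic behind a burnup chart is simple: plot cumulative work completed against total scope, iteration by iteration. Here's a minimal sketch; the iteration counts and point values are invented, not from Wayne's or Ron's charts:

```python
# Burnup chart data: cumulative completed work vs. total scope.
# Unlike a burndown, added scope shows as the top line rising,
# not as the bottom line mysteriously jumping back up.

def burnup_series(completed_per_iteration, scope_per_iteration):
    """Return (cumulative completed, scope) pairs, one per iteration."""
    series = []
    total_done = 0
    for done, scope in zip(completed_per_iteration, scope_per_iteration):
        total_done += done
        series.append((total_done, scope))
    return series

# Invented numbers: five iterations, scope grows mid-project.
completed = [8, 10, 9, 11, 10]
scope     = [60, 60, 70, 70, 70]

for i, (done, total) in enumerate(burnup_series(completed, scope), start=1):
    print(f"Iteration {i}: {done}/{total} points")
```

Feed the pairs to a spreadsheet or charting library, or (better, per the above) copy them onto a hand-drawn wall chart.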

## Posted at 09:37 in category /agile [permalink] [top]

"Methodology work is ontology work" posted

Now that I've presented my paper at OOPSLA, I can post it here (PDF).

Here's the abstract:

I argue that a successful switch from one methodology to another requires a switch from one ontology to another. Large-scale adoption of a new methodology means "infecting" people with new ideas about what sorts of things there are in the (software development) world and how those things hang together. The paper ends with some suggestions to methodology creators about how to design methodologies that encourage the needed "gestalt switch".

I earlier blogged the extended abstract.

This is one of my odd writings.

## Posted at 09:37 in category /ideas [permalink] [top]

Mon, 25 Oct 2004

Help for customers

I'm at OOPSLA. Today, I was at a workshop on the Customer role in Agile projects. A group of us tried to write down problems and solutions we've seen customers having and using. I like the results. Here they are.

Note: I fancied up the problems and solutions with a running narrative. Of the rest of the group, only Jennitta's seen even a fraction of what you see. So what I say may not be an accurate record of what someone meant. But I have deadlines to meet (and miles to go before I sleep), so this is going to go into hardcopy without their review. We may fix it up later.

Nickieben Bourbaki Is a Customer on an Agile Project.
Boy, does he have problems…

Problem: Nickieben was originally consumed with fear. The project seemed far too much work to complete in the time allowed, and he would be responsible when it failed.

Solution: Time was the solution. As iterations delivered visible business value, he showed some of it to his Lords and Masters. They were pleased with the progress, so he grew calmer. As the business environment shifted, they changed the product direction, and Nickieben and the team showed they could change with it, which further pleased the L&M's.

It would have helped Nickieben a lot if he had had a support group of other Customers who could tell him what being a Customer was like, but he didn't.

Problem: Planning meetings lasted way too long. Many people were uninvolved for big chunks, and the meetings seemed to drain the energy out of the whole team. And, for all that, the resulting estimates were not very good.

Solution: Nickieben started having "preplanning" meetings the iteration before. In them, he, a tester, and a programmer would discuss a story, write some test sketches, and make an initial estimate. People came to the planning meeting prepared for a short, focused discussion that informed the rest of the team and asked them to look for errors in the estimate.

Nickieben's since discovered that other teams also do preplanning. The meetings vary in form, membership, the thoroughness of the discussion, etc. For example, one team had some analysts who spent the iteration ahead of the programmers predigesting the requirements, learning how to explain the domain (which was very complex), and writing tests. Nickieben doesn't think there's one right way to do it, but he does now have a motto: "Meetings must be snappy."

Problem: Early on, it seemed that the programmers focused much more on the technical tasks that made up a story than they did on the story itself. It seemed that the tasks were therefore inflated: the programmers did what "a complete implementation of task X" meant, rather than just enough to make the story work.

Solution: Nickieben kept harping on the stories as the thing that mattered to him, not tasks. He learned to pare stories down into small "slices" that stretched from the GUI, through the business logic, down to the database. As the programmers got used to making one slice at a time work, they learned that they didn't have to write lots of infrastructure up front.

Problem: At first, Nickieben was indecisive about prioritizing stories. He couldn't decide among the different stories that might go into the next iteration.

Solution: Short iterations helped after one of the programmers pointed out that any scheduling mistake he made could be corrected in less than two weeks. So the cost of getting something wrong wasn't too big.

He forced himself to prioritize by writing down the cash benefit of each feature. Now he didn't have to decide which of two features was worth more; instead, he independently decided on worth, then used the cash benefit to pick.

He'd started out using a spreadsheet to track the backlog of stories, only writing them on cards when he'd decided on what should go in the iteration. Later, he switched to writing everything on cards. When it came time to think about planning, he'd spread the cards out on a table and push them around. Important cards went "up" (farthest from him), and the lesser cards went down. He clumped related cards together, and sometimes a batch of cards made a theme for the iteration. He also found that he could sequence cards so that an iteration's set of stories all supported a particular business process.

Problem: Early in the project, Nickieben often found himself frustrated that "finished" stories weren't what he thought he was going to get. It was hard to think of everything he needed to tell the programmers; so much of what he did automatically had to be remembered and put into words. And he'd explain things, and the programmers would think they understood, and he'd think they understood, but it would turn out they hadn't.

Solution: He sketched tests up front. Instead of just explaining in words, he found himself writing more and more concrete examples on the whiteboard. Discussing those seemed to prompt him to remember steps or issues he'd otherwise forget.

During the iteration, he also spent more time checking in with the programmers, instead of waiting for them to come to him with questions. He especially spent more time with the "GUI guy", talking about what he wanted the GUI to do, and how it did it, and sketching out examples of usage as tests.

Problem: As he moved toward more examples, Nickieben started making the examples too complicated. He produced one example that illustrated all the inherent complexities of its story's bit of the business.

Solution: He learned to start with the simplest possible example. Then he added one scenario or business rule at a time. In a way, he used the examples to progressively teach the programmers, and they used them to progressively teach the code.

Problem: He was sometimes surprised by the technical implications of his ideas. Once, a simple "let's put a Cancel button on the progress bar" led to all sorts of scary talk about transactions and undoing. He was uncomfortable not knowing whether something would be simple or hard.

Solution: For a time, he got the help of an analyst who bridged the business and technical worlds. That person helped him understand how big a decision was. But more: her technical knowledge and experience with similar applications allowed her to suggest considerations he would never have thought of.

He also enlisted the programmers for lightweight training. He had short conversations about what they had to do to implement a story. (Some of the programmers were much better at this explaining than others.) Over time, those short conversations added up to a decent enough high-level understanding of the system.

The programmers also got better at coping with change. As they worked more with the system, it got more pliable, so the "internal bigness" of the change more often - but not always! - corresponded to its "external bigness." Programmers also learned more about the business domain, so they could say, "Are you going to need X, Y, or Z? Because if you do, it would probably be better to schedule those things early."

Eventually, the team didn't need the analyst any more. All of them were analysts, a little.

Problem: There was a time when Nickieben felt cleanup was taking over control of the project. Parts of the system were old legacy code. When he started giving stories for that, it seemed like every story led to some technical task that was more than an iteration long. Everything seemed to lead to a huge refactoring.

Solution: Nickieben learned how to write stories in small slices, about one day's work or so each. And the programmers learned how to do the big refactoring one slice at a time, such that each story led to somewhat better code and enough stories would lead to really good code.

They also made information radiators to track "technical debt". Sometimes the programmers couldn't see a way to make an improvement in the time they had - even with their greater experience, it seemed like the refactoring had to be a big chunk. Whenever a programmer left the code worse than she thought she should, she wrote it up on a card and put it on the Refactoring Board. At some point, Nickieben would start getting nervous that the messiness would start slowing the team down, so he would sanction some specific cleanup time. Nevertheless, they tried to tie each refactoring to something useful, like a small feature or a bug fix.

The programmers' editor also let them visually track the number of "todo" items they'd left in the code, which was another stimulus to clean up.
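A crude stand-in for that editor feature is a small script that counts debt markers in the source. This is only a sketch, not the tool Nickieben's programmers used; the marker strings and file suffixes are assumptions:

```python
# Count "TODO"/"FIXME" markers as a rough proxy for the technical-debt
# count the programmers' editor displayed. Marker names and file
# suffixes are guesses; adjust to the team's conventions.
import os

def count_markers(lines, markers=("TODO", "FIXME")):
    """Count marker occurrences (one per line per marker) in an iterable of lines."""
    counts = {m: 0 for m in markers}
    for line in lines:
        for m in markers:
            if m in line:
                counts[m] += 1
    return counts

def count_markers_in_tree(root, suffixes=(".rb", ".java")):
    """Walk a source tree and total the markers across every source file."""
    totals = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.endswith(suffixes):
                with open(os.path.join(dirpath, name), errors="ignore") as f:
                    for marker, n in count_markers(f).items():
                        totals[marker] = totals.get(marker, 0) + n
    return totals
```

Run against the project root at the end of each iteration, a rising total is a cue to schedule some sanctioned cleanup time.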
In the end, Nickieben's project was a big success. The date did slip a bit, and the Lords and Masters didn't get everything they'd wanted from the release. But they'd changed business direction right in the middle, and the team had coped well and still produced a solid, salable product. Looking back, Nickieben is amazed at the difference between him then and him now. He'd started out floundering, practically on the edge of a nervous breakdown. While he still wouldn't call his job easy, he knows he can do it. The only problem is that he knows there are people just like he was nine months ago. And just like he had no support group, they still don't. So they get to learn it all again, the painful way.
Maybe this page will help.
One last thing: Nickieben has to serve multiple masters: there are different interest groups who care about what the product does. There are two different classes of users, one very demanding buyer, operations, customer support, and so on. He has a lawyer friend who says it's common knowledge among lawyers that someone trying to represent multiple interest groups usually gets trapped by one or two and under-represents the others. Nickieben worries that he's doing that. He isn't really sure what to do about it. He thinks that linking each interest group to a persona (as used in some styles of user-centered design) might help. He imagines putting big pictures of the personas up in the bullpen would keep them in his (and everyone's) mind.

But he wishes he had better ideas.

Jennitta Andrea,  Richard P. Gabriel,  Brian Marick, and Geoff Sobering

Update: Nickieben Bourbaki is a composite of various customers we've known, a pseudonym that marks this as a collective effort, and the regular pseudonym of one of us (but I'm not telling who).

## Posted at 17:38 in category /agile [permalink] [top]

Thu, 21 Oct 2004

Another burndown chart

Kelly Weyrauch has posted another variant of a burndown chart. Here's my version of it.

What I like about Kelly's chart is that it marches steadily down to a release date that stays on the X axis. But it's easy to see whether work's added or removed by looking at how the bars change height.

My chart differs from Kelly's in that I removed some extrapolated lines he uses. I like to get away with as few predictions as possible. To emphasize that, I hand-drew the one prediction. That makes the line seem less authoritative and believable than one Excel draws, which is appropriate.

(One of my proudest moments back when I had a real job was in 1985 or so, when I was first charged with scheduling a project. I had to predict out about nine months, using an early version of Microsoft Project. In response, I invented Schedu-Sane®. It was a sheet of plexiglas you would lay over a printed schedule. The left side would be clear. But, as your eye travelled further to the right, forward in time, the plexiglas would become cloudy and warped, making it hard to see the predictions underneath - as is appropriate. Schedu-Sane was never constructed, but I told managers about it whenever I showed my schedule, thus reinforcing my reputation for eccentricity.)

I don't know whether I like Mike Cohn's or Kelly's chart better. I certainly don't have the experience to speak with authority about burndown charts.

## Posted at 09:26 in category /agile [permalink] [top]

What? - Gut? - So what? - Now what?

Esther Derby ran a BoF on retrospectives. She's written up her notes.

One of the things she talked about was a model of communication that she uses. It's summarized by the four questions in this posting's title. At a recent meeting, I compared something I'd just heard to those four questions and was inspired to do something I haven't done in many years: get so uncomfortable speaking to a group that I had trouble finishing.

Believe it or not, that's a recommendation. I have a fear of spending so much time in my comfort zone that I turn into one of those dinosaur consultants I scorned when I was young and feisty. (Still do, actually.) I get uncomfortable whenever I notice I've been comfortable for a while.

## Posted at 09:00 in category /links [permalink] [top]

Wed, 20 Oct 2004

A variant burndown chart

Mike Cohn has a variant on the burndown chart that I like.

## Posted at 20:11 in category /agile [permalink] [top]

Sun, 17 Oct 2004

Voting machine protocols

Ed Felten has two posts on Diebold voting machine protocols (here and here). Unless there's a misunderstanding somewhere down the line, well... I think that even I could have done a better job, and I'd sure never hire me to do security design.

## Posted at 14:40 in category /misc [permalink] [top]

Sat, 16 Oct 2004

Cross-functional teams

I'm in Boulder, Colorado, at a Scrum Masters meeting. Yesterday, I was in a session on how to deal with problems around cross-functional teams (such as teams with programmers, testers, technical writers, and interaction designers). It didn't go so well, largely because of my inept moderation. But I have synthesized some of our ideas about the dynamics of successful cross-functional teams into this abstract diagram:

Here's the way it works. You have people from various disciplines who you need to work together toward a common goal. In many ways, disciplines are cultures: they have shared values, goals, languages, self-images, and such. Cultures are resistant to change. So melding these people into a team can be hard.

It's first important to provide the team with a shared goal, which is to provide external value. People should always be able to articulate how their current task ties into the delivery of specific value to those paying for the project. I suspect that a properly running team will develop a shared language that they are all comfortable using when talking about their goal. (This would be one of Galison's creoles.)

But I don't think the shared goal and shared language are enough. I think they will also develop tools (in the broad sense) that are used across disciplines. I think of these as boundary objects in Star and Griesemer's sense: things that people can use to further a common purpose while not having to agree on their meaning. Test-first customer tests are a good example: a customer might think of them mainly as an explanatory device, a tester might think of them as a tool that covers the whole breadth of a problem and lets no important detail go undiscussed, and a programmer might think of them mainly as a way to break programming down into small chunks.
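To make "test-first customer test" concrete, here's an invented example: the domain (a volume-discount rule) and every number in it are mine, not from the workshop. The customer supplies the table of examples before the code exists; the programmers make the code agree with it.

```python
# A customer test as a table of examples. The customer writes the rows
# first; the programmers implement discount() to match. The discount
# rule here is a made-up illustration, not a real client's policy.

def discount(order_total):
    """Volume discount rate for an order (invented example domain)."""
    if order_total >= 1000:
        return 0.10
    if order_total >= 500:
        return 0.05
    return 0.0

# The customer's examples, including the boundaries she cares about.
examples = [
    # (order total, expected discount rate)
    (499,  0.0),
    (500,  0.05),
    (999,  0.05),
    (1000, 0.10),
]

for total, expected in examples:
    assert discount(total) == expected, (total, expected)
print("all customer examples pass")
```

The same table serves each discipline differently: the customer uses it to explain, the tester to probe the boundaries, the programmer to know when the slice is done.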

Such "unshared tools" work if they allow the different people who use them to achieve what they value. Until the glorious day when disciplines wither away and everyone knows enough about everything (which I do think would be a glorious day), members of a discipline must feel valued in the terms that their discipline defines. When a tester and a programmer work together, the tester must feel valued as a tester, and the programmer must feel the tester delivers value as a tester.

Where this exchange often falls down is in the recipient's feelings. The programmer might not see value coming from the tester: she might see an imposition. A team needs to have a shared ethic that each person acts to serve others - not through "tough love" or while thinking "this is for your own good, and you'll thank me when you're grown up", but in terms the recipient values.

It seems to me that this would work best in a gift economy of the sort described by Marcel Mauss, one where a person's status depends on how much she gives to others. A Scrum Master would want to show the team a way to an internal gift economy.

But perhaps the most important thing is positive feedback, feedback that reinforces desirable behavior and attitudes. That feedback comes from shared activities: two or three people sitting down to accomplish some task. People from one discipline will help people from others, thus building respect and a common language. As is typical of Agile projects, we want such tasks to be frequent and completed quickly. That maximizes the number of feelings of shared accomplishment and allows lots of room for adjustments and experiments. For help to be valued, the tasks have to be real ones, the kind that are valued outside the team (presumably because they deliver business value).

Special thanks to Christian Sepulveda, Jon Spence, Michele Sliger, and Charlie Poole, who (except for Jon) are not to blame for the odder parts of the above. The rest is Chet's fault.

## Posted at 14:30 in category /agile [permalink] [top]

Sun, 03 Oct 2004

My testing metaphor

I don't think I have a firm grasp of XP's metaphor. Nevertheless, I am taken with the idea of using a guiding metaphor to encourage cohesive action. So what about a guiding metaphor for testing in Agile projects?

[Illustration: a dartboard with labeled darts]

The picture on the right is my idea of conventional testing's dominant metaphor. The tester stands as a judge of the product. She acts by tossing darts at it. Those that stick find bugs. The revealing of bugs might cause good things to happen: bugs fixed, buggy releases delayed, a project manager who sleeps soundly at night because she's confident that the product is solid.

When talking to testers, two themes that often come up are independence and critical thinking. Critical thinking, I think, means healthy skepticism married to techniques for exercising it. Independence is needed to protect the tester from being infected with group think or being unduly pressured. A fair witness must be uninvolved to get at the truth.

There's no question in my mind that human projects often need people governed by this metaphor. But what about one type: Agile projects? Some say conventional testing exists, it's understood, and Agile projects need to import that knowledge.

I agree. We need the knowledge. But maybe not the metaphor and the attitudes that go with it. Independence and - to some extent - error-seeking are not a natural fit for Agile projects, which thrive on close teamwork and trust. Is there an alternate metaphor that we can build upon? One that works with trust and close teamwork, rather than independently of them? Can we minimize the need for the independent and critical judge?

I think so. Other fields have ways of harnessing critical thinking to trust and teamwork.

  • Latour and Woolgar describe scientific researchers as paper-writing teams (Laboratory Life). As the paper goes through drafts, coworkers critique it. But the critiques are of a special sort. They are often couched as defenses against later attack: "Griswold is obsessive about prions. If you leave this part of the argument like it is, he's going to go after it." They often include trades of assistance: "The last JSTVR reported a new assay. You need to run that, else you'll be dated right out of the gate. I can loan you my technician."

    My wife, who is a scientist, confirms this style. In her team, certain people have semi-specialties - one professor is good at statistics, so he wields SAS for other people, and I get the impression that my wife is strong at critiquing experimental design - but they all work together in defense of the growing work.

    I think it's interesting and suggestive that all of these people are producers. Despite being a referee for a seemingly infinite number of journals, my wife isn't just a critic. She's also a paper writer.

  • Richard P. Gabriel describes writers' workshops in Writers' Workshops and the Work of Making Things. A writers' workshop is an intense, multi-day event in which a group of authors help each other by closely reading and jointly discussing each others' work. Note that, like the scientific papers, the work is not complete. The goal is to make it the best it can be before it's exposed to the gaze of outsiders. Trust is important in the workshop; there are explicit mechanisms to encourage it. For example, the group discusses what's strongest about the work before talking about what's wrong. And, again, everyone in the workshop is a producer (at least in the traditional workshops, though not necessarily as they've been imported into the software patterns community).

[Illustration: a circle of dinosaurs protecting a book]

So, with those examples in mind, I offer this picture of an Agile team. They - programmers, testers, business experts - are in the business of protecting and nurturing the growing work until it's ready to face the world. (To symbolize that, I placed a book in the center, which is where herd animals like elephants, bison, and musk oxen place their young when danger threatens.)

Notice that the dinosaurs are all of the same species. Even though some of them might have special skills (perhaps one of them knows SAS, or was once a tester), they are more alike than they're different. They're all involved in making the weak parts strong and the strong parts stronger. (The latter is important: it's not enough to seek error; one must also preserve and amplify success, and spread understanding of how it was achieved throughout the team.)

P.S. Malcolm Gladwell wrote an interesting discussion of the virtues of group think.

Illustrations licensed from clipart.com.

## Posted at 23:22 in category /agile [permalink] [top]

Mon, 27 Sep 2004

Visiting the Bay Area

I'm visiting the SF bay area for the next two weeks. I'm free over the weekend of the 2nd if anyone wants to hang out together, talk shop, or code. Let me know. Only catch is that I'm thinking of never renting a car, partly so I blend in with the native Californians, but mostly because it seems wasteful to have my client pay for a car I'll hardly use. So you'd have to pick me up.

## Posted at 08:53 in category /misc [permalink] [top]

Thu, 23 Sep 2004

Chapter two is available

The second chapter of my draft book, Driving Projects With Examples, is now available. For some reason, it was tough to write. Hope it's not too tough to read. Let me know if it is, and if I've missed something in this topical summary. Here's the abstract:

We're about to launch into details. Those details will make sense if you understand the key themes of example-driven development. The Introduction was supposed to highlight those themes and give you an understanding of how a project operating according to them should feel. In this chapter, I cover them more explicitly.

## Posted at 16:14 in category /examplebook [permalink] [top]

Tue, 21 Sep 2004

Co-teaching

A long time ago, I gave up teaching conventional courses. You know the type: two or more days, 25 people in a room, lecture + lab + discussion. That works OK for learning a programming language or tool, I guess, but it doesn't work for what I do. My people have to go back and apply something like programmer testing to big problems. The fraction they retain from a course doesn't help them enough. Because they have trouble getting started, they don't, so nothing changes. Money wasted. Time for the next course on some other topic (since programmer testing clearly doesn't work).

So nowadays, when people ask me for training, I give them consulting. The actual lecture is short: I prefer half a day. After that, I sit down and work closely with real people on real problems. My goal is to get a core set of opinion leaders properly started, then let them train everyone else.

I've added a new twist, inspired by some co-consulting I've done with Ron Jeffries. We visit the client together, sometimes work with people together, sometimes go our separate ways, but always get together in the evening to talk about what we've seen and what we would best do the next day. Our sum is greater than our parts.

The new twist is that I now want to co-teach my consulting-esque courses. To that end, I announce two courses:

For more details, you know where to click. I imagine we won't teach many of these, what with persuading people to pay for two instructors and our varying travel schedules, but I bet the ones we do teach will be good.

## Posted at 18:01 in category /misc [permalink] [top]

Sun, 19 Sep 2004

Thoughts on Andy Schneider's comments

Andy Schneider commented on my post on cybernetics and Agile methods. I've finally gotten a spare moment to respond.

Andy's right that agile teams are too often inward-looking. But I don't think that's a reason to avoid using the team as a unit of analysis. One way to talk about a system is to find useful components and look at their interactions. (That's not to say that the team is the only useful unit of analysis; it might well be instructive to slice things along a different dimension.)

I agree with Andy that Agile teams err when they think the only feedback that matters is instructions coming in and software going out. That's one of the reasons why I was so taken with Pickering's descriptions of devices reaching out, actively exploring their environments, and adapting to them. I think that's what Andy wants, and I'm suggesting cybernetics might have learned something we can steal.

I wasn't at all clear in my description of "teams succeeding in their own terms." By that, I meant to suggest that the team is delivering what some representative of the business said was business value, but that either the representative was wrong or someone with more power wasn't interested in that business value. So the project gets canned because it wasn't adapted to its real environment, only to an economic fairy tale: the corporation run by profit-maximizing economic actors.

## Posted at 16:59 in category /agile [permalink] [top]

Wed, 15 Sep 2004

Role, Schmole

Here's my position paper for the OOPSLA 2004 workshop on the Customer Role in Agile Projects. I'm a little dubious about the position I take, but what the heck. It'll lead to a better one.

 

Here's a summary of 3394 recent mailing list threads:

Skeptic: How can one person *possibly* do everything expected of the XP Customer!?

Ron Jeffries: The XP Customer isn't a person, it's a Role.1

By that, he means that the Customer Role can be realized by a mix of people who work things out amongst themselves and then speak with One Voice. He's right. Let me say that again: He's right. But.

But doesn't it work ever so much better when it is a person? And when it's the right person?

My contribution to the workshop will be to ask us to pretend, for a short time, that we can require that every Agile project have a bona fide business expert sitting in the bullpen with the rest of the team. That person is the focal point of the programmers' work. They are oriented toward making her smile in the same way that a compass needle is oriented north: forces may sometimes push it away, but it wants to swing back.

The objection remains: that one person can't do it alone. So the first question is: what kind of skills is she likely to lack? Three related ones come immediately to my mind:

  1. the skill of explaining herself well.

  2. the skill of introspection, of realizing what part of her tacit knowledge has to be made explicit. (This is the skill of knowing what she's forgotten she knows.)

  3. the skill of creating the telling example, that being one that both helps her explain what she wants and is also readily turned into an automated test.

It would be nice to have a more complete list. It'd also be nice if we collectively had some way of teaching those skills, other than stamping "Customer" on some poor accountant's head and tossing her into a writhing mass of programmers. (We might not teach those skills to the official Customer; we might instead surround her with the right support staff.)

Given skills, the next question is: what kind of personality turns programmers, especially, into compass needles? Here are some traits I think a Customer should have: an eagerness to help that leads her to immediately turn aside from her task when a programmer has a question; a respect for others' expertise; a touch of vulnerability; curiosity; patience; enthusiasm.

Perhaps companies could use a list of such traits to choose the most effective person to be Customer, not just the person with the most domain knowledge or the person who cares most about the product (or the person who's easiest to detach from her real job).

I hope we can talk about these things. Customers - the people sitting in chairs, not the Roles - need all the help we can give them.

1 Rumor has it that F7 on his keyboard pastes that sentence into the current window.

## Posted at 10:45 in category /agile [permalink] [top]

Sun, 12 Sep 2004

Andy Schneider comments

Andy Schneider had some interesting comments on my cybernetics post. I'll comment on his comments after I get through various pressing business. The rest of these words are Andy's, reprinted with permission, except where he's quoting me.

I've worked on a bunch of broken and working agile projects and when I was reading your missive it kept taking me back to stuff I've observed.

I'm picking the bullpen as the unit of adaptation because I want to talk myself out of the notion that everything that matters is in the heads of the team members - some of it is "in" the configuration of the room, the Big Visible Charts, the source code, the Lava Lamps, the rituals that people take part in and the rules they follow and reinterpret...

When I read this I read the 'team' as the development team, the people cutting code and probably the customer representative. The definition is very dev-centric (BVC, Lava Lamps, source...). When teams buy into this perspective I think they are already in trouble. These teams often have the following traits:

  • They are not managing the 'group think' that arises from teams that bond and look inwards rather than outwards.

  • They see their team as the 'agile' team surrounded by a hostile, non-agile or problem environment (reinforcing the group think and introversion).

  • They fail to realise that feedback loops need to be in place between all the suppliers/providers and themselves, not just in the ways drawn out in the XP book - i.e. they just don't get the feedback part.

The best agile people I see working view the team as encompassing people involved in the end to end process. Furthermore, they view agile as part of a multi-paradigm approach to the entire construction process rather than as the proverbial hammer. I do a brief and not very profound short talk on scaling agile and I spend 50% of it ramming home the need to make business engagement work and the other 50% discussing how you dovetail your agile dev team into the corporate environment in a way that is acceptable to all. In my mind these are two key cornerstones to an agile project, often missed in the rush to pair programming, lava lamps and A3 charts on the walls.

You then go on to say:

...and others appalled by how often successful-in-their-own-terms Agile projects get taken down by organizational struggles and interests are trying to figure out how to convert receiving organizations into something Agile enough to really use their Adaptive Bullpens well.

The fact the Agile projects believe they need to convert other parts of the organisation suggests a few things:

  • they haven't figured out how to adapt their external facing image to the needs of the host.

  • conversion... sounds awfully religious to me (slightly tongue in cheek).

  • The projects that later get taken down don't deliver business value, so their going-in position - that the agile model they established would deliver business value - was probably mistaken, or at least its naive application was.

The fact that some people are even defining success 'in-their-own-terms' seems a bit of an issue with an organisation where success is probably defined in very different ways.

Of course, it can be argued that true success can only be achieved when the entire business is agile, but I think that's an oversimplification. Whilst some agile projects may be too passive, I have found many are too introverted and focussed on the 'one solution'. What we really need is not the establishment of agile development teams with a 'customer on site' but the realisation that agile is an approach, that needs to be tailored to the host environment and that must be driven by people who see the 'team' as the people in the end to end process chain, not just dev and the customer rep. Let's break down this insularity and start to see the bigger picture.

Andy S

## Posted at 14:46 in category /agile [permalink] [top]

Tue, 07 Sep 2004

Usability tricks

On the Agile-Usability list, I asked for tricks:

I'd like to hear some advice to programmers, testers, and others on agile projects about how they could get a bit better at those things that the interaction design (etc.) people are really, really good at. Those things should be absorbable and try-able without a huge investment in time.

Dave Cronin (who says he loves to get mail), responded with this nice list:

  • Make all decisions within the context of one or more specific user archetypes (personas, actors, whatever you want to call them) accomplishing specific things (scenarios, goals, use cases, etc).

  • Express what the user is trying to accomplish in English. For example, if you have a complex form, first try to describe what is being specified in sentences. Then use the sequence of sentences to order fields in the layout and use the nouns and verbs from the sentences to label fields.

  • Focus on goals, not tasks. Goals are the end result that users want to achieve; tasks are the things that get them there. Sometimes being overly focused on the tasks makes you lose the forest for the trees. Even if you can't do the bluesky design where you cut out a bunch of unnecessary tasks, focusing on goals will still help you express things in a way that a user will understand.

  • Use a grid for layout. Seems obvious, but it's amazing how often I see screens laid out with no order whatsoever. Look no further than the front page of the Wall Street Journal or any of a number of other newspapers for how to fit a ton of information of varying importance into a compact space.

  • Use color sparingly. A couple colors used judiciously can really make a screen come alive. Using five colors haphazardly makes your screen look like salad.

  • Optimize for the common case; accommodate the edge cases.

  • Rough out a framework before you try to lay out every button and field. Work with the big rectangles and push them around until things start to fit. Test layout with a variety of possible controls, think of the worst case situation, make sure things degrade gracefully. Then when it seems like it will work, go ahead and extend your framework by laying out all the specifics. As you all know, things change all the time. A solid framework can accommodate these changes, meaning you will rarely have to restructure your interface after you refactor.

Thanks, Dave.

## Posted at 10:41 in category /agile [permalink] [top]

Sun, 05 Sep 2004

A series on traits

I've been worrying about testing in agile projects for about three years now. I started by wondering how people like me could fit into an agile project. Then, as I saw more and more programmers and, occasionally, product owners performing testing tasks, I came to focus more on the testing role: what are its goals? what are its components? what skills comprise it? how is the role distributed amongst the team?

Since the beginning of the year, I've been wondering less about what specifically has to be done and more about how a team evolves such that those things just naturally get done - or, if they don't get done, how the team recognizes that and corrects itself.

I've been thinking that a team has to have the right traits - in a way, the right personality. Individuals, I'm thinking, act as "carriers" of those traits. In the right circumstances, a person's traits will "infect" the team. Once that happens, you won't need to worry (so much) about which person should do what or which hats (roles) people should wear when.

In this blog category, I'll start giving capsule descriptions of the traits I think people like me should infect a team with. It's not that I think tester-people uniquely possess these traits; it's just that they're characteristic of testers, so testers make great carriers.

Background reading: Bret Pettichord's "Testers and Developers Think Differently".

## Posted at 08:54 in category /traits [permalink] [top]

Sat, 04 Sep 2004

Cybernetics, agile projects, and active adaptation

I just read the first chapter of Andrew Pickering's forthcoming The Cybernetic Brain in Britain, a history of - and reflection on - the British cyberneticians from the 50's on.

As Pickering tells it, these people were concerned with the brain, but not as an organization of knowledge, a sack of jelly in which representations of the world are stored and processed.

What else could a brain be, other than our organ of representation? ... As Ashby put it in 1948, 'To some, the critical test of whether a machine is or is not a "brain" would be whether it can or cannot "think". But to the biologist the brain is not a thinking machine, it is an acting machine; it gets information and then it does something about it' (1948, 379). The cyberneticians, then, conceived of the brain as an immediately embodied organ, intrinsically tied into bodily performances. And beyond that, they conceptualised the brain's special role to be that of an organ of adaptation. The brain is what helps us to get along in, and come to terms with, and survive in, situations and environments we have never encountered before... the cybernetic brain was not representational but performative, as I shall say, and its role in performance was adaptation.

This gives me a couple of thoughts. First, we can think of the agile bullpen, containing people, furniture, and source code, as an embodied organ of adaptation. The team is something that gets information and does something about it (change the code). The better the team, the more flexibly adaptable it will be, and the better it will survive in the business world.

(I'm picking the bullpen as the unit of adaptation because I want to talk myself out of the notion that everything that matters is in the heads of the team members - some of it is "in" the configuration of the room, the Big Visible Charts, the source code, the Lava Lamps, the rituals that people take part in and the rules they follow and reinterpret. Also, I'm making a bit of reference to Searle's Chinese Room critique of AI, though I'm not sure to what end.)

The second thought ties in with this quote:

Norbert Wiener's basic model for the adaptive brain, the servomechanism, is, in one sense, a passive device. A thermostat simply reacts to unpredictable fluctuations that impinge upon it from its environment. If the temperature in the room happens to go up, the thermostat turns down the heating, and vice versa. In contrast, the distinctive feature of Walter and Ashby's models is that they were active. They interrogated their environments and adapted to what they found. Walter's tortoises literally wandered through space, searching for sources of light. Ashby's homeostats stimulated their environments with electrical currents and received electrical feedback in return. Such cybernetic devices, one could say, enjoyed a relationship with their environment which was both performative - the devices acted in their world, and the world acted back - and experimental: they explored spaces of possibility via these loops of action and reaction.

Agile projects are not as passive as a thermostat. They stimulate the world by releasing software, causing the world-of-the-business to stimulate the project back. But there seems to be an emerging critique of the current stage in Agile that accuses it of being too passive. Tim Lister's keynote at Agile Development Conference charged the listeners to do more than passively accept requirements from users: instead we should exhibit (and develop) expertise in what users need. The new Agile-Usability list kicked off with some blasts at what can happen when a business expert who doesn't understand usability creates a UI feature by feature and the programmers passively do what she wants. Mary Poppendieck's keynote at XP/Agile Universe talked of the need to deliver a whole product to the business, and others appalled by how often successful-in-their-own-terms Agile projects get taken down by organizational struggles and interests are trying to figure out how to convert receiving organizations into something Agile enough to really use their Adaptive Bullpens well.

In so-far-sketchy conversations with Pickering, we've both been struck by similarities between Agility (as I describe it) and the cyberneticians (as he describes them). Perhaps we of today can learn from the cyberneticians of yesteryear - not least, how to avoid their fate: because that polyglot, promiscuous field has pretty much vanished, except from fond memories. The anti-discipline that produced von Foerster's "Act always so as to increase the number of choices" is itself no longer a choice.

## Posted at 17:48 in category /agile [permalink] [top]

Helping Norm Kerth

Norm Kerth is the author of Project Retrospectives, an early proponent of patterns, and a nice guy. He suffered a disabling brain injury in a car accident and can't work for sustained periods.

Karl Wiegers has bunches of shareware process aids (mostly Word and Excel templates for various things). Net proceeds will go to the Norm Kerth Benefit Fund.

A worthy cause. And you ought to buy Norm's book, too. It's a good and useful read, in the Weinberg style.

## Posted at 12:22 in category /misc [permalink] [top]

Thu, 02 Sep 2004

I knew it would be stupid

Read The Fine Manual (or FAQ). Jason Yip and Alan Francis and Ron Jeffries solve my problem:

From http://confluence.public.thoughtworks.org/display/CC/FrequentlyAskedQuestions:

Q: I see just a dark blue screen when I look for build results and my tomcat log has the following exception: org.apache.xml.utils.WrappedRuntimeException: The output format must have a '{http://xml.apache.org/xslt}content-handler' property! What's going on and how do I fix it?

A: JDK1.4 includes an old version of xalan, try installing a new xalan.jar (from http://xml.apache.org/xalan-j/downloads.html) into tomcat_dir/common/endorsed.

I knew I'd be publicly shamed by asking, and this is a particularly embarrassing way, but look: I got the answer three waking hours later. Thank you, Jason, Alan, Ron, and - especially - Eric Bina.

## Posted at 06:30 in category /misc [permalink] [top]

Wed, 01 Sep 2004

OK, I'm stupid

Here I am in Omaha, fiddling with Cruise Control (2.1.6). Once I found that Mike Clark had made available the relevant chapter from his fine Pragmatic Project Automation, things became much easier.

But I still can't get Tomcat to spit out the formatted build log because of this evil exception:

org.apache.jasper.JasperException: The output format must have a '{http://xml.apache.org/xalan}content-handler' property!

I just know that I'm doing something stupid and obvious, maybe even to a newbie like me, definitely to a Control freak who groks XSLT and all the steps that lead from a simple build to a pretty web page. Here's hoping that this posting will lead to a quick mail with the simple'n'obvious fix. Send it and I'll owe you lunch.

(I know: "use the source, Luke". I don't have time just now.)

Full error page here.

## Posted at 20:50 in category /misc [permalink] [top]

Sat, 28 Aug 2004

The upcoming US election

I'm uncomfortable with public talk about politics, but this election is especially important.

I've long been broadly sympathetic to Democratic goals, yet often suspicious of Democratic approaches and reflexes. I've voted for candidates of both major parties. Until this year, I'd never put a political bumper sticker on my car (unless you count "Free the Mouse"), and I'd never donated a dime to any politician or party.

Now I find myself with Kerry bumper stickers on both cars and a big credit card bill for donations.

The current administration is incompetent at policy execution, especially at sweating the details, especially at keeping ideology from interfering with practical results, especially at adjusting when circumstances change.

That's my core objection, pragmatist that I am, but I can also be a stiff-necked moralist. This administration promised to bring honor and dignity back to the White House. They did not. They evade responsibility. They are willing to benefit from the dirty work of others. They resolutely not-quite-lie to create beliefs (like an involvement of Iraq in 9/11), or they withhold information it is their duty to disclose (like the true cost of the Medicare bill). They pander to the country's moral flaws of fearfulness and spite. Condemning others while taking the easy path is not honorable.

They're my employees, they've done a lousy job, and I want to fire them.

I now return to my normal topics.

## Posted at 10:44 in category /misc [permalink] [top]

Mon, 23 Aug 2004

I want to visit your site

After XP/Agile Universe, I stayed in Calgary for a day. Instead of going to Banff like a normal person would, I spent that day visiting two companies that had started to drive their projects with business-facing examples. Although they weren't yet doing the style of development I advocate in the book, I thought I could learn from them. I did. But I learned in a particular way. Instead of interviewing them about their practices with an eye toward filling up the book, I rapidly slipped into my normal consulting role: conversing with an eye toward offering advice and answering their questions. A couple of people worried that I wasn't getting what I came for, that they were just getting free consulting. They were, but I also got what I came for. (And who knows, a consulting gig might come of one of them.)

I'd like to do more of that. If you're interested and in one of the places listed below, contact me. The only rules are that you'll have to drive me between places, and that yours be a project that's making a concerted effort to use examples/tests to understand the problem to be solved and to explain it to programmers. You don't have to consider that effort a smashing success - though some of that would be nice - you just have to be trying.

Here are the places I'll be for less than a whole week, making it easy to extend my trip:

  • San Jose, California. I can come earlier on the week of September 27.
  • Boulder, Colorado. I could visit on October 12 or October 15.
  • Vancouver, British Columbia. Some time during the week of OOPSLA (October 25).

## Posted at 08:19 in category /agile [permalink] [top]

WATIR

WATIR is a project headed by Bret Pettichord and Paul Rogers. They're going to take the existing Web Testing in Ruby framework and push it forward full throttle. This will be a project to watch - and to contribute to.

## Posted at 06:44 in category /testing [permalink] [top]

Weeding out bugs

Dave Thomas has a masterful blog entry titled Weeding Out Bugs. Check it out.

## Posted at 06:44 in category /coding [permalink] [top]

Sun, 22 Aug 2004

Books I mentioned

In my XP/Agile Universe keynote, I cited a bunch of books. Someone asked that I post them, so here they are.

One bit of evidence that test-driven design has crossed the chasm and is now an early-mainstream technique is JUnit Recipes, by J.B. Rainsberger. This is a second-generation book, one where he doesn't feel the need to sell test-driven design. He's content (mostly) to assume it.

I asked how it could be that Agile projects proceed "depth-first", story-by-relatively-disconnected-story, and still achieve what seems to be a unified, coherent whole from the perspective of both the users and programmers. My answer was the use of a whole slew of techniques.

At the very small, programmers are guided by simple rules like Don't Repeat Yourself. Ron Jeffries' Adventures in C# is a good example of following such rules. Martin Fowler's Refactoring is the reference work for ways to follow those rules.

At the somewhat higher level, we find certain common program structures: patterns, as described in Design Patterns. Joshua Kerievsky's new Refactoring to Patterns shows how patterns can be the targets of refactoring. You can think of patterns as "attractors" of code changes.

There are even larger-scale structures as described in Eric Evans's Domain-Driven Design, Fowler's Patterns of Enterprise Application Architecture, and Hohpe & Woolf's Enterprise Integration Patterns.

I was resolutely noncommittal about whether the larger-scale patterns can emerge from lower-level refactorings or whether pre-planning is needed. I tossed in Hunt and Thomas's The Pragmatic Programmer both because it (and they) speak to this issue and because it seems to fit somehow in the "architectural" space.

Finally, I addressed an issue that's been weighing on my mind for more than a year. Dick Gabriel has said (somewhere that I can't track down precisely) that there are two approaches to quality. The one approach is that you achieve quality by pointing out weaknesses and having them removed. The other is that you build up all parts of the work, including especially (because it's so easy to overlook) strengthening the strengths.

Testers too often take the first approach as a given. They issue bug reports, become knowledgeable discussants about the weaknesses of the product, and think they've done their job. And they're right, on a conventional project. I've decided I don't want that to be their job on an Agile project; instead, I want them to follow the second approach. I want them to apply critical thinking skills to the defense and nurturing of a growing product.

But what could that mean? How can it be done? I proposed mining the knowledge of other skilled laborers for ideas. Latour and Woolgar's Laboratory Life: the Construction of Scientific Facts describes how teams of scientists collaborate to produce their primary work product: published and widely-cited scientific papers. (I've written about that earlier.)

I mentioned another source: writers' workshops as used in the creative writing community and imported into the patterns community. Richard P. Gabriel's Writers' Workshops and the Work of Making Things is the reference work here. (See also James Coplien's online "A Pattern Language for Writers' Workshops".)

## Posted at 09:13 in category /2004conferences [permalink] [top]

The purpose of a keynote

Back when I didn't give keynotes, only attended them, I thought they had two purposes. The first was to let me get near some godlike figure - your Herb Simon, your Guy Steele - and have wisdom flow from their head into mine. The second was to be inspiring.

I now realize that neither of those is to the point, though the second is closely tied up with the real purpose of the keynote, which is to provide a theme for the conference that threads through all the conversations that make it up. People should refer to the keynote in hallway talk. Other presenters should refer to the keynote's theme as they speak their piece. Questioners should ask presenters questions informed by the keynote. Collectively, the theme gets amplified and people resolve to do things about it.

## Posted at 09:13 in category /2004conferences [permalink] [top]

XP/Agile Universe 2004

I had a very good conference last week, and it seemed that everyone I talked with did too. Thinking back on it, I'm reminded of a book by Randall Collins, The Sociology of Philosophies: a Global Theory of Intellectual Change. As a teensy part of that book, he talks about how important coming together is for intellectuals and members of schools. It's from that coming together, that personal contact, that people get the emotional and creative energy to push the school forward. I'm fired up.

The local folk in Calgary did a fantastic job. They set a friendly and welcoming tone. Whatever glitches there may have been were hidden from the participants. Plus they gave me a cowboy hat.

As was announced at the Agile Development Conference, it and XP/Agile Universe will be merging next year. The united conference will be named Agile United, and it will be held in Denver.

## Posted at 07:30 in category /2004conferences [permalink] [top]

Wed, 11 Aug 2004

The story of advancers

An author wrote a story for Better Software about the usefulness of complex tests. I was struck by a conversation I'd once had with Ward Cunningham about how his team came up with Advancers. I saw a nice way to complement a test design article with code design content. So I wrote a sidebar. In the event, it didn't get used, so I'm putting it here.

Complex tests can find bugs in complex code. What then? Usually, the result is an unending struggle against entropy: a continuous effort to fix bugs in unyielding code, hoping that each fix doesn't generate another bug. Once upon a time, Ward Cunningham was mired in entropy, but what happened next makes an unusual story.

His team was working on a bond trading application. It was to have two advantages over its competition. First, input would be more pleasant. Second, users would be able to generate reports on a position (a collection of holdings) as of any date.

The latter proved hard to do. Many bug fixes later, one method in particular was clearly a problem. It was the one that advanced a position to a new date by processing all related transactions. It had grown to a convoluted mess of code, one that proved remarkably hard to clean up.

The solution was to convert the method into a Method Object. You can find a fuller description of method objects in Martin Fowler's fine Refactoring, but the basic idea goes like this:

  1. Suppose you have a big method that contains many interrelated temporary variables. You can't untangle it by extracting little methods because each little method would use temporaries also used in other little methods. The need to coordinate sharing of variables makes it too hard to see any underlying structure.

  2. Therefore, turn the method into an object, one that has a single method - perhaps called compute. Code that used to call the original method will have to make one of the new objects and call its compute method.

  3. What does this gain you? Now you can change temporary variables into instance variables of the object. Since these are automatically shared among any methods of the object, you can now extract little methods without worrying about coordination.

  4. As the little methods get extracted, the object's structure and responsibilities become clearer. Clearer code means fewer bugs.
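Here's a minimal Ruby sketch of those four steps. The names (Position, Advancer, Transaction) and the toy arithmetic are mine for illustration, not Cunningham's actual code; the point is the shape of the refactoring, not the bond math.

```ruby
# Illustrative data: a transaction has a date and an amount.
Transaction = Struct.new(:date, :amount)

class Position
  attr_reader :transactions

  def initialize(transactions)
    @transactions = transactions
  end

  # The once-convoluted method now just delegates to the method object.
  def value_as_of(date)
    Advancer.new(self, date).compute
  end
end

# Step 2: the big method becomes an object with a single compute method.
class Advancer
  def initialize(position, target_date)
    @position = position
    @target_date = target_date
    @total = 0  # Step 3: former temporaries become instance variables,
                # shared freely among the little extracted methods.
  end

  def compute
    relevant_transactions.each { |t| apply(t) }
    @total
  end

  private

  # Step 4: extraction reveals structure, with no temp-variable juggling.
  def relevant_transactions
    @position.transactions.select { |t| t.date <= @target_date }
  end

  def apply(transaction)
    @total += transaction.amount
  end
end
```

Each little method now reads as a sentence about the domain, which is what lets something like Advancer graduate from coding convenience to design vocabulary.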

It's common to treat method objects as just a coding convenience. But Cunningham's team found themselves treating this one as a design tool. They gave it a name - Advancer - that sounded like one from the domain (though none of the domain experts had a corresponding notion). Once Advancers were part of their design vocabulary, thinking about how to satisfy a new requirement meant, in part, thinking about whether a special kind of Advancer might fit nicely. By changing the way they thought about the domain, the team was able to write better code faster.

Advancers later helped out in another way. The program calculated tax reports. What the government wanted was described in terms of positions and portfolios, so the calculations were implemented by Position and Portfolio objects. But there were always nagging bugs. Some time after Advancers came on the scene, the team realized they were the right place for the calculation: it happened that Advancers had instance variables that contained exactly the information needed. Switching to Advancers made tax reports tractable.

It was only in later years that Cunningham realized why tax calculations had been so troublesome. The government and traders had different interests. The traders cared most about their positions, whereas the government cared most about how traders came to have them. It's that latter idea that Advancers captured, but conversations with domain experts couldn't tease it out - even tax experts didn't know how to express it that way. It only came out through a conversation with the code.

In some circles, it's said that programmers + the code + rules for code cleanliness are smarter than programmers alone. That is, programmers who actively reshape or mold the code as it changes will cause new and unexpectedly powerful design concepts to emerge. The story of Advancers shows massaging the code as a learning tool.

## Posted at 14:28 in category /coding [permalink] [top]

Tue, 10 Aug 2004

If statements and duplication

I'm a big fan of hearing little stories that lead to little lessons. Being good at a craft requires knowing a whole slew of little lessons and having your mind be primed to pull them into action. Here's a nicely put little lesson from Kevin Rutherford.

Therefore it seems to me that there are two kinds of conditional statement in a codebase: The first kind tests an aspect of the running system's external environment (did the user push that button? does the database hold that value? does that file exist?). And the second kind tests something that some other part of the system already knows. Let's ban the second kind...
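Rutherford's second kind of conditional - the kind that re-tests something another object already knows - can often be dissolved by moving the decision to where the knowledge lives. A hypothetical Ruby sketch (the names are mine, not his):

```ruby
# A hypothetical sketch (my names, not Rutherford's). The second kind of
# conditional asks about something another object already knows:
#
#   discount = user.premium? ? 0.2 : 0.0   # repeated wherever discounts matter
#
# Move the decision to where the knowledge lives, and callers stop branching:
class PremiumUser
  def discount
    0.2
  end
end

class BasicUser
  def discount
    0.0
  end
end

# No conditional here - each user already knows its own discount.
def price(user, amount)
  amount * (1 - user.discount)
end

price(PremiumUser.new, 100.0)  # 80.0
price(BasicUser.new, 100.0)    # 100.0
```

The first kind of conditional - did the user push that button? does that file exist? - still has to exist somewhere; the lesson is to make that test once, at the boundary, rather than sprinkling echoes of it through the codebase.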

## Posted at 07:02 in category /coding [permalink] [top]

Mon, 09 Aug 2004

Testing contributions to the Agile Times

I edit the "Testing Tips" section of the Agile Times, the Agile Alliance newsletter. You can find sample copies here:

Issue One
Issue Two
Issue Three
Issue Four

The latest issue is only available to Agile Alliance members.

If you'd like to send a testing-related article to me, please do. Articles are usually short - a few hundred words. I like practical content: "I tried this. You could too." Jeff Patton's and Zhon Johansen's story below would make a swell article.

If it comes before August 25, your article could make it into the next issue, but send articles any time to marick@exampler.com. Expect light and shared editing before publication.

Thanks.

Update: fixed broken link.

## Posted at 21:14 in category /agile [permalink] [top]

An interview

There's an interview with me at whatistesting.com.

I'm rather pleased with this answer:

Q: ... What is your assessment of the impact [the Agile Manifesto] has made?

A: ... the impact? I'm writing this on the plane back from the Agile Development Conference. Do you realize what it's like to talk to people who love their job? Who feel like at last they're allowed to produce at their peak? Without the Agile Manifesto, many people's jobs would be worse.

The emotion behind that answer looms ever larger. Why do I push the Agile methods? One of my two main reasons is the chance to reclaim joy in work.

## Posted at 15:39 in category /misc [permalink] [top]

Out of the side of your eye

A nice story from Jeff Patton, told on the Agile Usability mailing list. Copied with permission.

He was in a very strong XP organization with big investment in unit and acceptance tests. When the acceptance tests ran, the gui popped up and danced around for 15 or 20 minutes as if a really really fast ghost user was running the app. [picture Data on star trek] No one needed to see it, so it often ran on an integration machine in the corner. One day someone who focused on the UI was having a conversation with a developer and caught a view of the acceptance test running out of the corner of their eye. "Hey - that's wrong!" he said. Developer says: "Hmm... it shouldn't be, the acceptance tests pass." UI guy: "No, I saw it as it flashed by, the fields on form X were positioned incorrectly." They went back and checked, and they were indeed wrong.

When I visited their shop, acceptance tests were running on any unused machine in the development environment. "We catch errors this way," my friend said. They'd started to rely on people catching things out of the corner of their eye while doing something else. They'd caught several issues this way. My asking why they did this prompted him to tell this story.

It's amazing to me how fast the eyes and brain can parse a complex image and sense something out of order. For some tasks it's pretty difficult to write code that outperforms the brain.

Update: Jeff queried the original teller, Zhon Johansen, who wrote back. (Copied without permission; don't hurt me, Zhon.)

I liked your telling of the story. Only a couple of differences between the two tellings: 1) Lorin, an AT test writer, noticed the bug; and 2) the bug was a core issue. (We almost had the bug fixed before the ATs finished.)

If Lorin or any of our AT test writers had been concerned with usability, I am sure they would have noticed usability bugs. As this was not an isolated incident, it could easily have happened with a usability guy.

It was a beautifully told story with perfect intent.

This isn't quite a Big Visible Chart, but it's something like that.

## Posted at 14:55 in category /testing [permalink] [top]

Workshop reminder: Tests as Documentation

I've posted before about a workshop that Jonathan Kohl and I are doing at XP/Agile Universe titled "Tests as Documentation". If you were planning to come, let me remind you of this: "We encourage anyone working on a software project who writes tests to [...] bring their tests and code with them."

It'd be a good time to start picking some tests to bring. Thanks.

## Posted at 14:54 in category /2004conferences [permalink] [top]

Sun, 08 Aug 2004

Workshop reminder: Who Should Write Acceptance Tests?

I've posted before about a workshop that David Hussman, Rick Mugridge, Christian Sepulveda, and I are doing at XP/Agile Universe titled "Who should write acceptance tests?" If you were planning to come, let me remind you of this, from the blurb:

Participants will be encouraged to have prepared positions or accounts of experiences regarding the topic. Each participant will be invited to give a brief statement regarding her opinions, experiences or questions related to the workshop topic. The workshop organizers will then facilitate discussions, based on topics derived from the opening statements.

We're not requiring advance submissions or even written positions, but the workshop will be better if people think of their brief statement in advance. Thanks.

## Posted at 16:27 in category /2004conferences [permalink] [top]

Sat, 24 Jul 2004

Methodology work is ontology work

I've had a paper accepted at OOPSLA Onward. I had to write a one-page extended abstract. Although I can't publish the paper before the conference, it seems to me that the point of an abstract is to attract people to the session or, before then, the conference. So here it is. I think it's too dry - I had to take out the bit about bright cows and the bit about honeybee navigation - but brevity has its cost.

(As you can guess from the links above, the paper is a stew of ideas that have surfaced on this blog. I hope the stew's simmered enough to be both tasty and nourishing.)

I argue that a successful switch from one methodology to another requires a switch from one ontology to another. Large-scale adoption of a new methodology means "infecting" people with new ideas about what sorts of things there are in the (software development) world and how those things hang together. The paper ends with some suggestions to methodology creators about how to design methodologies that encourage the needed "gestalt switch".

In this paper, I abuse the word "ontology". In philosophy, an ontology is an inventory of the kinds of things that actually exist, and (often) of the kinds of relations that can exist between those things. My abuse is that I want ontology to be active, to drive people's actions. I'm particularly interested in unreflective actions, actions people take because they are the obvious thing to do in a situation, given the way the world is.

Whether any particular ontology is true or not is not at issue in the paper. What I'm concerned with is how people are moved from one ontology to the other. I offer two suggestions to methodologists:

  1. Consider your methodology to be what the philosopher of science Imre Lakatos called "a progressive research programme." Lakatos laid out rules for such programmes. He intended them to be rules of rationality, but I think they're better treated as rules of persuasion. Methodologies that follow those rules are more likely to attract the commitment required to cause people to flip from one system of thought to another (from one ontology to another) in a way that Thomas Kuhn likened to a "gestalt switch".

  2. It's not enough for people to believe; they must also perceive. Make what your methodology emphasizes visible in the world of its users. In that way, methodologies will become what Heidegger called ready-to-hand. Just as one doesn't think about how to hold a hammer when pounding nails, one shouldn't think about the methodology, its ontology, and its rules during the normal pace of a project: one should simply act appropriately.

Methodologies do not succeed because they are aligned with some platonic Right Way to build software. Methodologies succeed because people make them succeed. People begin with an ontology - a theory of the world of software - and build tools, techniques, social relations, habits, arrangements of the physical world, and revised ontologies that all hang together. In this methodology-building loop, I believe ontology is critical. Find the right ontology and the loop becomes progressive.

## Posted at 13:06 in category /ideas [permalink] [top]

Sun, 18 Jul 2004

Testing aggregator

Antony Marcano has started up an aggregator for testing blogs. Send him recommendations for blogs.

## Posted at 10:45 in category /testing [permalink] [top]

Thu, 15 Jul 2004

I need an article on 'insourcing'

In the November/December issue of Better Software, we're going to have an article on outsourcing. I thought that it would be interesting to have a Front Line article on "insourcing". Many people have outsourced, been disappointed, then chosen to bring development back in house. A good article would tell the story of such an event and offer advice to people contemplating it. What problems can they avoid? What tricks of the trade should they know?

If you have such a story in you, mail me. Deadlines are a bit tight, unfortunately: first draft as late as August 15, but final draft is due September 1.

## Posted at 09:44 in category /misc [permalink] [top]

Mocks aren't stubs

Martin Fowler has a nice summary of the differences between mocks and stubs, or the difference between state-based and interaction-based tests. Way back when, Ralph Johnson had me review the original mock objects paper and I completely missed the distinction, which forced me to abase myself when I met Steve Freeman at Agile Development Conference 2003.
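Fowler's article works through an order-and-warehouse example; here's a hand-rolled Ruby sketch in that spirit (hypothetical classes, no mock library) showing the same behavior verified two ways:

```ruby
# A hand-rolled illustration of the distinction, loosely echoing Fowler's
# order/warehouse example. Hypothetical classes, not a real mock library.
class Order
  def initialize(item, quantity)
    @item, @quantity, @filled = item, quantity, false
  end

  def fill(warehouse)
    if warehouse.has_inventory?(@item, @quantity)
      warehouse.remove(@item, @quantity)
      @filled = true
    end
  end

  def filled?
    @filled
  end
end

# State-based (stub): canned answers; afterward, assert on resulting state.
class StubWarehouse
  def has_inventory?(_item, _qty); true; end
  def remove(_item, _qty); end
end

# Interaction-based (mock): record the calls; afterward, assert the order
# talked to the warehouse in the expected way.
class MockWarehouse
  attr_reader :calls
  def initialize; @calls = []; end
  def has_inventory?(item, qty); @calls << [:has_inventory?, item, qty]; true; end
  def remove(item, qty); @calls << [:remove, item, qty]; end
end

order = Order.new(:whiskey, 50)
order.fill(StubWarehouse.new)
# state check: order.filled? is now true

mock = MockWarehouse.new
Order.new(:whiskey, 50).fill(mock)
# interaction check: mock.calls records has_inventory? and then remove
```

The stub test doesn't care how the order got filled, only that it is; the mock test pins down the conversation itself. That's the distinction I originally missed.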

## Posted at 09:38 in category /testing [permalink] [top]

Strangler Applications

Martin Fowler shows what happens when a thinker goes on vacation:

When Cindy and I went to Australia, we spent some time in the rain forests on the Queensland coast. One of the natural wonders of this area are the huge strangler vines. They seed in the upper branches of a fig tree and gradually work their way down the tree until they root in the soil. Over many years they grow into fantastic and beautiful shapes, meanwhile strangling and killing the tree that was their host.

This metaphor struck me as a way of describing a way of doing a rewrite of an important system...

## Posted at 09:38 in category /misc [permalink] [top]

Wed, 14 Jul 2004

Three bills

Apparently there's something of a concerted push to get U.S. voters to contact their congresspeople about the need for verifiable paper trails in computer voting. I contacted mine today. I asked the senators to co-sponsor Senator Ensign's 'Voting Integrity and Verification Act,' # S 2437 or Senator Graham's amendment to HAVA ( # S 1980), and my representative to co-sponsor Rep. Holt's 'Voter Confidence and Increased Accessibility Act,' # HR 2239.

Phone numbers for legislators are easy to find.

Dutiful one that I am, it seems I should study the text of the bills to make sure I agree, but I don't have time. The general thrust seems right, and the crucial thing is to let our representatives know this is an important issue.

## Posted at 16:32 in category /misc [permalink] [top]

Draft Introduction posted

I've finished a draft of the Introduction to Driving Projects with Examples. The introduction is a fictional story of adding a feature to a product. My goal is twofold: to show the tasks involved in example-driven development, and to give a feel for what it should be like to work in such a project. For those with experience, I'm curious how this story compares to your own.

## Posted at 11:31 in category /examplebook [permalink] [top]

Agile Planet

Ian Davis has created an aggregator for Agile blogs called Agile Planet. One-stop shopping for all your agileoblogospheric needs.

## Posted at 08:14 in category /agile [permalink] [top]

Wed, 07 Jul 2004

Starting a book

I've started work on a book, tentatively titled Driving Projects with Examples: a Handbook for Agile Teams. All that's done to date is the Preface.

Some of you practice the style of development I'm documenting - or variants of it. If you do, I want to talk to you, be it on the phone, or via email, or in person. (I am budgeting travel money to visit worthy sites.) I'm serious about the "handbook" in the title: I want to fill it with tricks, tips, techniques, and stories. The more people I gather them from, the better the book will be.

## Posted at 11:00 in category /examplebook [permalink] [top]

Thu, 01 Jul 2004

Gambling vs. voting

Via Michael Bolton, a comparison of the measures taken to ensure the integrity of slot machines (high) vs. voting machines (low).

Election officials say their electronic voting systems are the very best. But the truth is, gamblers are getting the best technology, and voters are being given systems that are cheap and untrustworthy by comparison. There are many questions yet to be resolved about electronic voting, but one thing is clear: a vote for president should be at least as secure as a 25-cent bet in Las Vegas.

I've written my congressmen and gotten the usual "Thanks for your note, but I have a safe seat" reply from one, the "I'm quitting, so I won't reply" non-note from another, and a satisfactory reply from the third. Perhaps something is amiss.

## Posted at 07:13 in category /testing [permalink] [top]

Mon, 28 Jun 2004

A roadmap for testing on an agile project (2004 edition)

I've published a concise description of how I'd want to do testing on a fresh agile project. I call it the "2004 edition" because I hope and expect that I'll have revised and extended my ideas next year.

## Posted at 16:12 in category /agile [permalink] [top]

Now here's a scary picture

## Posted at 11:15 in category /junk [permalink] [top]

At Agile Development Conference 2004: whither the Agile Alliance (a sketch)

I'm this year's chair of the Agile Alliance board, assisted as vice-chair by Mike "Power Behind the Throne" Cohn.

It was announced at the opening of ADC that the two North American conferences have merged. There will be a single conference next year, name to be selected by a community process. The two separate conferences were a historical accident, really, the kind of thing that happens when an act like the Agile Manifesto succeeds so wildly.

Mike, Todd Little of the Agile Development Conference, and Lowell Lindstrom and Lance Welter of XP Agile Universe have done an excellent job of laying the groundwork for a merged conference that retains what's been special about each. The highest priority of the Agile Alliance is to make the merged conference go well, mainly by staying out of the way except when the organizers tell us to clear away obstacles.

(There may also be smaller local conferences, modeled somewhat after the No Fluff, Just Stuff symposia. I'm especially interested in events that could bring customers (goal donors, product owners, etc.) together for a day. Those people have a tough job, and there's not enough institutionalized support for them.)

The second priority is to use the web site as more of a hub for a community of members. The two goals of the Agile Alliance are to support people already in agile projects and to facilitate the creation of more agile projects. That's mostly about people and interactions, it seems to me.

For example, a long-ago coworker just contacted me and asked if I knew of Agile opportunities in the Madison or Minneapolis areas. I didn't, though I could get him one link closer to people who did. But wouldn't it be convenient if he could go to the Members section of the website and directly find people in an area who would be willing to answer questions about it? People join the Agile Alliance because they want to support a movement, not because they get that much in tangible benefits - so let's enable their altruism.

I've been the webmaster for the Agile Alliance for about two months, so I'll be doing the hands-on work (at least for a while). I'm not awfully qualified, but I see this as an opportunity to learn better website management. Mike Cohn is the Customer for our product backlog, so we'll be having frequent(ish) new releases of features.

Let me give a big thanks to Ken Schwaber, who founded the Agile Alliance and has been the chair until now, and Micah Martin of ObjectMentor, who created the website for us and has been its webmaster until now. (And continued thanks to ObjectMentor for hosting the website.)

## Posted at 10:14 in category /adc2004 [permalink] [top]

Fri, 25 Jun 2004

At Agile Development Conference 2004: ownership

For whatever reason, I was in a scrappy mood this afternoon and evening. And I took offense to the word "own", as used - for example - in this statement "... and the customer owns the acceptance tests".

I have yet to think through my objection, but it's somewhat summed up in the remark I made twice today: "If she owns them, how much can she sell them for?"

Ownership is all about a culture of scarcity and exclusion, not a culture leaning toward abundance and sharing. You own something if you can prevent someone else from using it. Why is ownership a useful metaphor within an agile team?

## Posted at 07:50 in category /adc2004 [permalink] [top]

At Agile Development Conference 2004: the Blindfold Game

My faithful reader will recall that Elisabeth Hendrickson and I were running a tutorial on exploratory testing, but we wouldn't actually test software. Instead, we would have people create a game that they would then test.

Here's an interesting game that one team created. It was intended to demonstrate the value of rapid feedback. It consisted of two teams of two people. A team consisted of a car and a driver. The "car" was blindfolded. He or she moved in a particular direction until commanded otherwise by the driver. The driver could issue commands at fixed intervals, with one exception: if the car was about to damage itself, the driver could say "stop!" Then they would have to wait for the end of the interval for the next command. We didn't want anyone to tumble off the balcony to the ground floor.

Each team had to make it to a gas station. "Gassing up" was represented by a small tented card. The first blindfolded person to pick up the tented card was the winner.

The trick is that one driver could issue commands every twenty seconds, but the second could do it every five seconds. That is, one car had more frequent feedback. Guess which one won?

Yes, it was the one with more rapid feedback. I thought this game worked nicely. For a team whose whimsicality hadn't been beaten out of them, it would be a fun introduction to the virtues of rapid feedback. Give it a try!

There was one interesting twist. We ran a trial of the game in which the intervals were reduced. The five-second car now got instructions every second. The 20-second car got instructions every five seconds.

I'd been the five-second car; now I was the one-second car. Our team noticed two things. One was that the stress level increased dramatically. Both the driver and I got flustered keeping up with the demands of the clock. The other was that we didn't beat the five-second person by much.

Does this tell us anything about iteration length? Beats me, really. But I have heard comments that make me think that one-week iterations are too short for some people, that they feel they don't really get a chance to get into the flow before they have to switch direction. They might well be able to learn to be comfortable with one-week iterations: but is it possible that a given team has a "natural" iteration length that it needs to discover?

## Posted at 07:50 in category /adc2004 [permalink] [top]

Thu, 24 Jun 2004

At Agile Development Conference 2004: verbal explanations

I had a thought at a session yesterday. The agile methods depend a great deal upon the Goal Donor explaining her desires to the development team. And yet, most people are lousy at explaining themselves. They're lousy because they're not taught about:

  • how to use stories (linear narratives) to engage the listener
  • how and when to draw pictures
  • how and when to use examples
  • when to drop into showing vs. just talking
  • the importance of repeating yourself
  • "active speaking": probing to see if you're understood
  • answering the question behind the question
  • ... and so forth (what do I know? I'm a fairly good explainer, but I'm all self-taught.)

As I think about this, I find myself really annoyed. How can people not be taught this broadly useful life skill? It's not as if XP Customers are the only people who have to explain themselves. Where's a writers' workshop for verbal explainers - a forum where an explainer can make an explanation, be videotaped and audiotaped, have listeners offer both due praise and constructive criticism?

If anyone knows of anything like this, please contact me. I want to attend and bring this knowledge back into the agile world.

## Posted at 21:09 in category /adc2004 [permalink] [top]

Wed, 23 Jun 2004

At Agile Development Conference 2004: a LAWST extension

I'm at ADC, and I'll be posting occasional thoughts in this blog category.

Ole Larson did a session on customer collaboration. He ran it in something like the LAWST format. The neat extension was that, in the beginning, the participants broke into pairs. Each person explained her story to her partner, taking exactly five minutes. Then the partner reciprocated. Then the pairs coalesced into quartets, and each person in the quartet again explained her story to the other three. Then the quartets decided which of the four stories would be presented to the whole group, in the conventional tell-the-story, clarifying-questions, open-season format.

The advantages? The story gets more concise and clear each time, and the whole group's time isn't spent on a suboptimal story.

## Posted at 17:13 in category /adc2004 [permalink] [top]

Sun, 20 Jun 2004

Exploratory game design

In our tutorial on exploratory testing at Agile Development Conference, Elisabeth Hendrickson and I will be doing something odd. We'll talk about exploratory testing of software, but we'll demonstrate it by having teams design and test a game. Our view of where exploratory testing fits into Agile is that it's a dandy end-of-iteration activity, during which people give the software a test drive and get ideas that feed into later iterations.

But suppose we wanted to demonstrate that with software. We'd spend half an hour getting people's laptops ready, then they'd do the exploratory testing, then... what? You can't have another iteration - the software is what it is. So that would miss the feel of the process, and the feel is important. So we'll concentrate on the feel - and on four key techniques - and defer the direct experience of software exploration until after the session (perhaps later in the conference).

Coming to the tutorial? Here are the game design notes. Couldn't hurt to read them in advance (but it's not required).

## Posted at 13:13 in category /agile [permalink] [top]

The danger of numbers

From a Washington Post article summarizing the state of Iraq:

Bremer acknowledged he was not able to make all the changes to Iraq's political system and economy that he had envisioned, including the privatization of state-run industries. He lamented missing his goal for electricity production and the effects of the violence. In perhaps the most candid self-criticism of his tenure, he said the CPA erred in the training of Iraqi security forces by "placing too much emphasis on numbers" instead of the quality of recruits. (Emphasis mine.)

In a Wall Street Journal article about the Abu Ghraib scandal, we have this:

"The whole ball game over there is numbers," a senior interrogator, Sergeant First Class Roger Brokaw, told the paper. "How many raids did you do last week? How many prisoners were arrested? How many interrogations were conducted? How many [intelligence] reports were written? It was incredibly frustrating."

From a Christian Science Monitor article on the same topic:

Yet Specialist Monath and others say they were frustrated by intense pressure from Colonel Pappas and his superiors - Lt. Gen Ricardo Sanchez and his intelligence officer, Maj. Gen. Barbara Fast - to churn out a high quantity of intelligence reports, regardless of the quality. "It was all about numbers. We needed to send out more intelligence documents whether they were finished or not just to get the numbers up," he said. Pappas was seen as demanding - waking up officers in the middle of the night to get information - but unfocused, ordering analysts to send out rough, uncorroborated interrogation notes. "We were scandalized," Monath said. "We all fought very hard to counter that pressure" including holding up reports in editing until the information could be vetted.

I am reminded of my paper, How to Misuse Code Coverage (PDF). (I'm a little appalled that I'm comparing bad testing to Abu Ghraib. Thank God I lead so sheltered a life that I can make such comparisons. But onward.)

I have a wary relationship with numbers. On the one hand, you do sometimes have to make decisions, and when two parties disagree, numbers can shorten arguments. On the other hand, numbers do not merely measure some chosen aspect of reality, they also serve to create reality, often with horrifying unintended consequences.

What to do?

  • Cem Kaner has recommended balanced scorecards, the basic idea - I believe - being that it's harder to "game" multiple numbers than one.

  • I often ask people proposing new techniques, "What could go wrong?" That has two sub-questions: "Let's assume that your idea is wonderful in general. But there must be situations for which it's a bad idea. What are they?" And "Even if your idea is wonderful in this situation, it will be implemented by frail, fallible, and probably inexperienced humans. What mistakes are they likely to make?" Those questions can be used when someone proposes a particular measurement. The followup question is "how will we know when things are starting to go wrong?"

  • People know when numbers are being misused, if only through a vague feeling of disquiet. They need time and permission to reflect. Enter the retrospective.

  • Keep pointing out the dangers until the riskiness of numbers becomes common knowledge. Catchy slogans help.

  • Teach people the difference between numbers and reality. Cem Kaner has an article (PDF) on that topic.

But those seem mostly negative, reactive. We also need examples of problems solved through incremental use and adjustment of partial information. It also seems to me we need changed attitudes toward management, subjectivity and objectivity, information flow, problems, and solutions. But those are topics for another day.

## Posted at 11:46 in category /misc [permalink] [top]

Fri, 11 Jun 2004

Maybe something about coaching

In response to an editorial I derived from my posting about William James, a correspondent quotes some Zen:

We use [kong-ans (Korean) or koans (Japanese)] to teach how it is possible to function correctly in everyday life. Sometimes old Zen Masters' answers to a question are not correct, but they used the situation to teach correct function, correct life to others. For example, two monks were fighting over whether a flag was moving or the wind was moving. Hui-neng, the sixth patriarch, who was passing by, said, "It's your mind that's moving." Again, this is not correct, but he used "your mind is moving" to teach correct life.

If I tried that sort of thing, the two monks would stop, look at me, look at each other, roll their eyes in harmony, and walk off together laughing. (See, I do know how to teach correct functioning in everyday life... but since I get quite enough eye-rolling from my children already, thank you, I try to stick to correct answers.)

## Posted at 07:42 in category /misc [permalink] [top]

Thu, 10 Jun 2004

Your heart as a squirming bag of worms

In response to my posting about the kludgy body, Pete TerMaat sent me some entertaining notes about how complex even a straightforward bodily function is. I'm really repeating these because they're cool, but I suppose I need some Grand Metaphorical Lesson, that being my schtick. How about "think of these next time you're tempted to whine about complex business rules"?

Even though it's simple in purpose, the heart is integrated in ways that are tough to duplicate.

Medtronic spends millions trying to come up with a hunk of metal that can replace just the electrical (pacemaking) aspects of the heart. The company's first pacemaker came about when the founder, Earl Bakken, grabbed a metronome circuit from a Popular Electronics magazine, and hooked it up to some leads so that the circuit would provide pacing pulses to the heart. Simple enough...

But not as sophisticated as the heart, which has some tight integration with the rest of the body. For example, when you merely *think* about running, your pulse starts to quicken in anticipation of greater demands on the heart. Also, when you go to sleep, your heart slows down.

How do pacemakers handle the problem of ramping up the pulse in response to exercise? They have motion detectors in them. One of the early models was fooled by a woman who knitted a lot. When she sat in her rocking chair and bobbed forward/backward, the motion detector figured she was out for a jog, and ramped up her heart rate.

There's another story about a rock climber. During an ascent he'd stop for a breather. As soon as he started climbing again, he needed his heart rate to increase. But the pacemaker lagged his needs. His solution was to pound his chest repeatedly, causing vibrations that were picked up by the pacemaker and interpreted as motion caused by exercise. This technique was a crude "remote control" for his pacemaker.

How do pacemakers know when you're sleeping? They check the time of day and compare it against the bedtime that your doc has programmed in for you. Again, not as nicely integrated as the real heart.

Later:

Another tidbit: the "rate response" feature of modern pacemakers, where they detect a person exercising and respond with a faster heart rate, was discovered by accident. Medtronic engineers put a vibration sensor in a pacemaker, hoping to detect a particular condition--possibly ventricular fibrillation (where the heart quivers like a bag of worms), or ventricular tachycardia (an overly fast heartbeat). When they put the device in a canine for testing, it picked up a lot of "noise". They figured out that the "noise" happened when the dog moved. From that came the idea of interpreting the vibrations as body movement, corresponding to exercise.

(Posted with permission.)

## Posted at 09:47 in category /misc [permalink] [top]

Micro-techniques event

Apropos of my posting mentioning micro-techniques, Steve Freeman points to Joe Walnes's Personal Development Practices Map workshop at Agile Development Conference. Sounds rather like what I was asking for (depending on the granularity of the practices they consider).

## Posted at 08:14 in category /misc [permalink] [top]

Wed, 09 Jun 2004

Your body is a gross kludge...

... so how come it works so much better than your software?

It's a commonplace of software that code should do one thing and do it well. Having heard from my wife (a professor of veterinary medicine) numerous stories of the complexity and interconnectedness of bodily systems, I once asked her if there was any organ that did one thing. She hesitated for a bit, said that the heart is mostly a pump, but then she described some completely unrelated thing the heart participates in.

Fact is, the body is a gross kludge. You'd fire anyone who designed software that way.

My favorite story along those lines is about the kidney. The kidney is a processing pipeline. One stage in the pipeline extracts too much water from the urine. A later stage puts it back in. An obvious question from the do-one-thing-and-do-it-well crowd is "Why not extract the right amount in the first place?" Well, it's probably happenstance. Think about a fish. Getting water isn't a problem. So you can afford to waste it. Then you crawl onto land. All of a sudden, water is a problem. So why not kludge on a stage that puts back the water you formerly wasted? And we all live with the legacy of that addition.

I once wanted to do a keynote-style speech on this topic. It would begin with me ambling up to the stage, facing a few hundred people, smiling genially, and saying "Let's talk about urine." Admit it: it'd catch your attention.

The problem is that I didn't really have anywhere to go after that. The obvious retort is, "Sure, if you can spend millions of years watching failure after failure die, you can make any kind of kludge work, but we have to deliver something next quarter." Which I can't really argue with. But I wish I could. I persist in thinking that maybe there's something interesting about the fact that these fantastically successful systems - bodies - so good at absorbing abuse and coping with change, are not the kind of systems that appeal to us computer people.

## Posted at 21:00 in category /misc [permalink] [top]

JUnit Recipes

[Sorry about the repeat post. My ISP does not seem to have been as good about backups as I might have hoped, so I'm restoring some bits piecemeal.]

I've been reading J.B. Rainsberger's forthcoming JUnit Recipes. I quite like it. It's something of a patterns book, with problems and solutions embedded in a context.

One of the things I like about it is the emphasis on micro-techniques. At XP/Agile Universe 2002, Alistair Cockburn gave a keynote in which he said that a lot of the difference between an expert and a mediocrity is that the expert knows a whole host of micro-techniques, little tricks of the trade that add up. It's hard to acquire enough micro-techniques. You can find that mythical perfect project, stuffed full of programmers from a diversity of other projects. Drop yourself into that project, and you'll pick up different micro-techniques from each of them. Or you could bounce from project to project, learning new things in each. (This is the roving consultant's approach. Requires Travel. Lots and Lots of Travel.) Less good - but workable - is to wade through mailing lists, blogs, and wikis, carefully separating the wheat from the chaff. J.B.'s done some - maybe all - of those things and boiled the techniques down nicely into a browseable form.

The other thing I like is the conversational tone. I'm only a half-good programmer, but I do spend time with good programmers. There exists a programmer's community of practice, a community that builds identity and exemplars of good behavior through conversation about work - conversations usually held in break rooms, lunch rooms, blogs, and the like. A lot of programmers are not part of that conversation, and it shows. J.B.'s book, I think, does a nice job of introducing such programmers to "the community givens" without making a lecture of it. They'll learn a lot of design and Agile lore if they study the book. Will they? Dunno. Hope so - hope they're seduced into it by the problem-focused recipe format.

P.S. For the first Agile Development Conference, I proposed a "Micro-Techniques Faire":

Suggest providing rituals for people to break out and talk about micro-techniques. For example, a hallway conversation with Nathaniel Talbott at XP/AU shifted into seven people clustered around a table, looking at two variant ways of doing what I'm now calling "coaching tests". Can that stuff be ritualized? (I think of Open Space as a similar kind of ritualization.)

I envision a big room. One side has tables. One side has lots of open space and blank walls. At the tables, people who have micro-techniques to share are sharing them with small groups, who came together because they knew a particular micro-technique would be shown there at that time.

Over in the open space, people are demonstrating "wall as tool" micro-techniques. For example, Norm Kerth might be showing the project energy timeline he uses.

And so forth.

Nothing came of that, and I rather suspect I'll be too busy at both Agile conferences this summer to organize any such thing under the auspices of Open Space. So anyone intrigued by the idea should take it and run with it, this year or next.

## Posted at 12:49 in category /agile [permalink] [top]

Thu, 27 May 2004

The power to cloud minds

Earlier, I wrote about how tests should contain as little verbiage as possible. Often, the step-by-step nature of ActionFixture, StepFixture, or DoFixture tests obscures the point of the test.

In a conversation about that, Ward Cunningham observed:

I had a client once that produced business facts in nice column fixture compatible tables until the application gui came up and all reasoning about the application turned into sequential steps.

Jim Shore replied:

Interesting... that's what happened with me, too. Before we had a GUI, lots of fantastic domain-specific column and row fixtures with explanatory graphs. After the GUI, nothing but sequential steps.

Something to watch out for, I suppose, even if I can't explain it.

## Posted at 12:19 in category /fit [permalink] [top]

Groundhog day

Today, I became the 100,000th person to set his iPod alarm clock to play "I Got You, Babe".

## Posted at 07:38 in category /junk [permalink] [top]

Wed, 26 May 2004

Agile testing directions: the table of contents

Here is a set of links to my essays on agile testing directions. It's the same as the set on the right, but it's easier to link to.

Introduction
Tests and examples
Technology-facing programmer support
Business-facing team support
Business-facing product critiques
Technology-facing product critiques
Testers on agile projects
Postscript


## Posted at 12:47 in category /agile [permalink] [top]

When is an agile team working up to its potential? (continued)

(See the original.)

Some additional suggestions in email comments. (I know, I know, I should enable comments.)

  • Everyone on the team has an idea of what other people on the team are working on and can say why that is important work to do.

  • Add "overstressed" to "People should not feel unchallenged or frustrated or overworked."

  • The team is aggressive about adjusting to a changing environment. When things change, the team discusses how to address them positively rather than with head in hands, crying "we're all doomed".

  • The team is constantly looking to optimise its performance and (to avoid local minima) is always looking for ways to make the end-to-end process more efficient.

  • If you look around the team, you find most things are being built just in time; little is just sitting on a shelf waiting to be tested, read, or commented on - e.g., little untested code lying around.

  • If you look at the end-to-end process, you see a smooth flow of value-adding activities, all the way from ideas down to delivery. Queue-and-batch is used minimally, and only where needed.

Thanks to Jeffrey Fredrick, Dadi Ingolfsson, and Andy Schneider.

## Posted at 11:06 in category /agile [permalink] [top]

Update to Bang feature of ColumnFixture

Earlier, I described an update to the Fit ColumnFixture that lets you head a column with a name like "calc!", which means that "calc" is an operation that's not expected to produce any result. (Later columns check what the operation did.)

At Rick Mugridge's suggestion, I've updated it so that "error" tags can be used when the operation is expected to fail. That looks like this:


Normally, the column below the "bang" cell has nothing in it. However, the cells may use the "error" notation if the action is intended to fail. The next three columns show different options for what to do once a setter method fails as expected.

| fit.ColumnFixtureBangTester |  |  |  |
|---|---|---|---|
| this will fail! | you can ignore() | you can check() | you can expect errors() |
| error | some ignored value that shows greyed out | an expected value | error |

Source is at the same old place.

## Posted at 08:20 in category /fit [permalink] [top]

Sun, 23 May 2004

No wasted motion in tests

There are a couple of catch phrases in programming: "intention-revealing names" and "composed method". (I think they're both from Beck's Smalltalk Best Practice Patterns, but this plane doesn't have wireless. Imagine that.) The combined idea is that a method should be made of a series of method calls that are all at the same level of abstraction and that also announce clearly why they matter. A good idea.
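As a sketch of the combined idea - all of the names below are invented for illustration, not taken from Beck's book - a composed method in Ruby might look like this:

```ruby
# A composed method: checkout is a series of calls at one level of
# abstraction, each with an intention-revealing name.
class Order
  def initialize(items)
    @items = items
  end

  def checkout
    refuse_empty_orders
    total = price_of_items
    apply_bulk_discount(total)
  end

  private

  def refuse_empty_orders
    raise ArgumentError, "empty order" if @items.empty?
  end

  def price_of_items
    @items.sum { |item| item[:price] }
  end

  # Orders of 100 or more get 10% off.
  def apply_bulk_discount(total)
    total >= 100 ? total * 0.9 : total
  end
end

puts Order.new([{ price: 60 }, { price: 60 }]).checkout   # prints 108.0
```

Reading checkout tells you what happens and why, without forcing you down into the details of any step.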

In my travels, I don't find that tests follow those rules. Tests too often contain superfluous text: it is (or seems) necessary to make the silly things run, but it obscures their intent. Let me give you an example. It's a test I wrote. It's about when students and caretakers in a veterinary clinic perform certain tasks.

Orders are given to caretakers and students. Some orders depend on whether the animal's in intensive care or not.

| AnimalProgressFixture |  |  |  |  |
|---|---|---|---|---|
| new case | Betsy | Rankin |  | Rankin brings in a cow |
| diagnosis | severe mastitis |  |  |  |
| order | intensive care |  |  |  |
| order | soap |  |  | subjective objective assessment and plan |
| check | student does | soap | daily |  |
| check | student monitors temperature |  | 6 hours | because in intensive care |
| check | caretaker does | milking | never | no one milks - milking has to be ordered. |
| check | student does | milking | never |  |
| order | milking |  |  |  |
| check | caretaker does | milking | 12 hours |  |
| etc. | etc. etc. |  |  |  |


Now, this test will drive most any programmer toward a state machine. (The complete test has more complicated state dependencies than you see.) The problem is that it's very hard to tell whether all the relevant sequences of orders have been covered. The relationship between sequences of clinician orders and worker actions is obscured.

I claim the following tables are better:

| TaskFixture |  |  |
|---|---|---|
| check | possible caretaker tasks are | milking |
| check | possible student tasks are | milking, soap, temperature |

| TaskFixture |  |  |  |  |
|---|---|---|---|---|
| when does caretaker milk |  |  |  |  |
| check | ordinarily |  | never |  |
| check | when clinician orders or records | milking | 12 hours |  |
| check | when clinician orders or records | milking, discharge | never |  |
| check | when clinician orders or records | milking, death | never | death always has same effect on orders as discharge does |

I worry that might not be completely clear. Like many descriptions, it depends on previous domain knowledge. For example, the domain expert and I had a lot of discussion about the difference between a clinician "ordering" something and "recording" something. At this point, there's no difference as far as the program's concerned; but there is a clear distinction in the expert's mind, making it worthwhile to preserve both terms. So the very last line says, "check that when a clinician orders milking and then later records a death, the caretaker never milks that (dead) cow."

It is, I think, easier to check completeness in the latter table because it encourages systematic thinking:

  • what's the starting state?
  • what could cause the caretaker to milk?
  • what could cause the caretaker to stop?

That's not to say that the first test was useless. By discussing tasks with reference to the flow of events in a real medical case, I was encouraged to learn and talk about the domain. I learned things (like that ordering and recording have the same effect). So the first test was a good starting point for conversation, but it was not a good summary of what was learned. Nevertheless, it seems to me that people get trapped into picking one test format and sticking to it too long. Step-by-step formats seem particularly sticky.

I'm not sure why we end up that way, but I have two speculations.

(1) It seems that there's often a division of labor. One person writes the test (perhaps a business expert, more often a tester), and another person implements the "fixturing" that makes the test executable. The problem is that a new table format that helps the tester causes more work for the programmer. Given the usual power imbalance in a project - a programmer's time is more valuable than a tester's - reusing old and mis-fitting fixtures is the natural consequence. (I should note that this new table format was actually quite simple - only one support method was more than a couple of lines of obvious code - but I initially hesitated because it looked different enough that it seemed it must be more work than that.)

(2) The testing tradition is one of implementing tests to find bugs, not one of discovering the right language in which to express a problem. The Lisp ethic of devising little, problem-specific languages is missing. It's not intuitive behavior; it's learned. And that approach - and its power - haven't been learned yet amongst test-writers.
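To make that Lisp ethic concrete, here is a tiny problem-specific language for the milking rules above, sketched in Ruby. All the names (CaretakerSchedule, record, milks_every) are mine, invented for illustration - this is not the project's real fixture code:

```ruby
# A little language for the caretaker-milking rules: events are recorded
# in order, and the schedule answers how often the caretaker milks.
class CaretakerSchedule
  def initialize
    @interval = "never"   # the starting state: no one milks unbidden
  end

  # Events are applied in sequence, as a clinician would enter them.
  def record(*events)
    events.each do |event|
      case event
      when :milking_ordered   then @interval = "12 hours"
      when :discharge, :death then @interval = "never"
      end
    end
    self
  end

  def milks_every
    @interval
  end
end

# These read much like the rows of the second table.
puts CaretakerSchedule.new.milks_every                                   # prints never
puts CaretakerSchedule.new.record(:milking_ordered).milks_every          # prints 12 hours
puts CaretakerSchedule.new.record(:milking_ordered, :death).milks_every  # prints never
```

The point isn't this particular implementation; it's that the vocabulary of the test (orders, recording, intervals) is the domain expert's, not the test harness's.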

But I think we need to instill a habit in (at least) business-facing test writers that says that both repetition and verbiage that obscures the intent of the test are bad, are signs that something's amiss.

## Posted at 21:33 in category /fit [permalink] [top]

Speaking their language

So here I am in the Salt Lake City airport. I just finished a couple of days in support of a redesign of the Agile Alliance web site, aiming to make it more supportive of people aiming to sell Agile to executive sponsors.

The people we interviewed brought up a couple of interesting points. One is the need for the whole organization (marketing, etc.) to change in order to take advantage of more capable software development. Otherwise, the benefits of Agile get dissipated by impedance mismatch.

Another was the perennial catchphrase that agile advocates "need to talk the executive's language".

One chance utterance of the latter made me flash to Galison's Image and Logic: A Material Culture of Microphysics, which is all about how scientific subcultures adjust to each other. He uses the metaphor of a "trading zone" between subcultures, in which they communicate through restricted languages that he likens to pidgins and creoles.

Galison is not saying that Wilson (who invented the cloud chamber) didn't speak English to the theorists who used his results. He's saying that they used a restricted vocabulary and invented specialized communication devices like diagrams. Those devices meant something different to each party, but they allowed detailed coordination without requiring anyone to agree on global meaning.

Moreover Galison claims his scientists used objects in particular ways: "... it is not just a matter of sharing objects between traditions but the establishment of new patterns of their use [...] I will often refer to wordless pidgins and wordless creoles: devices and manipulations that mediate between different realms of practice in the way that their linguistic analogues mediate between subcultures in the trading zone." (p. 52)

I hope you can see where I'm going with this. It's not that "we" need to speak "their" language: it's that both groups need to learn a new language that works for our joint purposes. That'll be especially true as executive sponsors see the agile team as a responsive tool they can wield flexibly toward their ends.

Obedient reader that I am, I'm not peeking ahead to Galison's big summary chapter. First, a further 400 pages of exhaustive details about bubble chambers and the like. So any summary of what thinking tools Galison offers us will have to wait. In the meantime, I should point to last year's writeup of Star and Griesemer's boundary objects. Galison's ideas are close to theirs. He's more explicit about the mechanisms of language, and he expands the focus from just objects (perhaps abstract) to include procedures and acts of interpretation.

## Posted at 21:27 in category /agile [permalink] [top]

Mon, 17 May 2004

Seeking victim or perpetrator

For an upcoming Better Software issue on computer security, I'd like a "front line" piece written by a victim or perpetrator of some sort of interesting security event. Possible titles: "Confessions of a Virus Writer", "I am a White Hat Hacker", "My Day in DDoS Hell", etc.

Such articles typically start off with a story, then expand out to a few generalizations. They're usually pretty easy to write: just write down what you'd tell someone over beers. (Here's a PDF sample.) 1200 to 1500 words. First draft deadline is June 12.

If you're interested, mail me. Thanks.

## Posted at 12:49 in category /misc [permalink] [top]

Fri, 14 May 2004

When is an agile team working up to its potential?

While writing the trip report for last week's client visit, I ended up thinking about how an external observer would know that a team had made it to the fabled Land of Agility. What would you look for?

Here's the list I put together. I welcome corrections and additions from those with better insight.

  • People are pleasantly tired at the end of the day. The mental feeling should be like the physical feeling of finishing a long hike: you've really worked, but it feels good, it's an accomplishment. People should not feel unchallenged or frustrated or overworked.

  • There's a lot of chatter around, and it's focused on the work, the product, and the craft (rather than, say, company gossip).

  • Learning is happening extremely fast - and as an inseparable part of getting the job done.

  • People are optimistic about difficulties. For example, if unit tests are slowing the team down, there's no resignation to an ugly reality. Instead, someone is doing something about it.

  • People are comfortable and proficient at proceeding in small steps. For example, consider a big mess of legacy code. An ideal agile team would be able to clean it up a bit at a time, with each bit tied to some story that delivers business value. A team that wasn't there yet would feel the need to rewrite in one big gulp.

  • The whole team is monomaniacal about business value and pleasing the customer.

My reaction on rereading this is that only the last two have anything particular to do with agility per se. I think that's OK.

## Posted at 14:14 in category /agile [permalink] [top]

Thu, 13 May 2004

Exploratory testing with Ruby

I've written an article that shows novice programmer/testers how to use Ruby for exploratory testing of Google's web services interface. I hope it convinces some tester somewhere that programming is easy and fun.

## Posted at 16:32 in category /ruby [permalink] [top]

Situational knowledge

Ben Hyde generalizes from assembly language programming:

It's always good to get close to the situation; and I suspect that any situation will do.

## Posted at 08:02 in category /misc [permalink] [top]

Fit-users mailing list

Jim Shore has started the fit-users mailing list.

This list is for general discussion of Fit. Appropriate topics include how to use Fit on your project, fixtures, and approaches to test-driven development using Fit. Direct discussion of Fit development to fit-devl@lists.sourceforge.net.

## Posted at 07:36 in category /fit [permalink] [top]

Wed, 12 May 2004

Strong opinions

My beloved wife is a veterinarian. I may have been the first computer programmer she'd ever met. While we were courting, we hung out with a variety of my programmer friends. One day, we were walking along, hand in hand, and she said, "Your friends are nice, [pause] but [pause] they have really strong opinions [pause] about everything".

That was probably 15 years ago, but it still comes back to me occasionally. It came back to me once, in 1999, when a veterinarian at a party explained to me how simple the Y2K problem was. I suddenly realized that I'd never before had a veterinarian tell me - from a position of shallow knowledge - how easy my job is, but that I'd been guilty many times of assuming that I could, with only brief exposure, master and correct someone else's job.

I wonder how often agile projects fail because the programmers take over from the customers?

## Posted at 15:32 in category /misc [permalink] [top]

Mon, 10 May 2004

Agile customer mailing list

Inspired by a couple of consulting trips, I've started a new mailing list: Agile-Customer-Today. Here's the blurb:

This group is to serve those people whose role on an agile team is to guide the project. They're called, variously, "customers", "product owners", "business experts", "Goal Donors", etc. It's a really hard job, so this group is for those people to ask questions, share ideas, and describe their experiences.

A typical conversational thread might start with one customer saying, "I am having real trouble explaining my stories clearly enough that they're really 'done' when the programmers think they're finished. For example, [...]. What do other people do?" Then other people on other projects will share tricks of the trade.

This group isn't only for customers. Having worked with customers on past projects, programmers, testers, and managers can help other customers today. But this is not a group *about* customers; it's a group *for* them. The acid test for posting should be: "Will my words, read today, help some customer on some project do her job better?"

You can sign up by sending mail to agile-customer-today-subscribe@yahoogroups.com or visiting: http://groups.yahoo.com/group/agile-customer-today.

## Posted at 12:30 in category /agile [permalink] [top]

Sat, 01 May 2004

The continuity of practice subcultures

Testing and programming are two independent technical subcultures. I think they need to talk to each other more. Within agile projects, I favor moving toward a single blended technical culture.

To that end, I'm reading Peter Galison's Image and Logic: A Material Culture of Microphysics, a modest pamphlet (955 pages) on two subcultures in experimental particle physics. He talks about how those subcultures evolved, how they interacted, how they competed for attention, and how they blended in the end.

He tags one subculture with "Image". It comprises, roughly, those people who take cool pictures of elementary particle tracks. Their goal is "the production of images of such clarity that a single picture can serve as evidence for a new entity..." (p. 19) The second subculture is "Logic" (after electronic logic). In it are the people who build things like Geiger counters: "These counting (rather than picturing) machines aggregate masses of data to make statistical arguments for the existence of a particle..." (p. 19)

So. Here's a gleaning. Subcultures have continuity. They change in response to the outside world (changes in physical theory, for example), but they also persist and evolve according to their own internal logic.

What sources of continuity does Galison find? And are there similar sources in the two subcultures I care about?

  1. Pedagogical continuity: When it came to cloud chambers, Wilson taught Millikan taught Anderson taught Glaser... Moreover, there's relatively little overlap between the Image and Logic "teaching lineages".

    Like many worthwhile professions, much of testing and programming is taught through apprenticeship. That's especially true for testing, whose practice hasn't (until recently) had much of a toehold in academia.

  2. Technical continuity: Image people needed to know about track analysis, photography, and micrometry. Logic people needed to know about high voltages, logic circuit design, and gas discharge physics. Skills did not translate easily across the divide.

    I'd like to think the divide between testing and programming is more bridgeable, but it nevertheless exists. It's a problem that the technical skills one side pridefully brings to bear seem not that important to the other.

  3. Demonstrative or epistemic continuity: This is the kicker. Roughly, the Image people believed strong arguments rested on pictures, whereas the Logic people believed they rested on statistics. Each found the other's evidence (and arguments about evidence) somewhat unpersuasive.

    I say it's the kicker because it seems to me the big divide between testing and programming is what counts as valid and noteworthy work. To programmers - again, roughly - it's a product that does progressively more stuff. It's new business value. To testers, it's a product that's progressively more capable of surviving attack. It's lack of negative business value.

Since blending subcultures is my goal - rightly or wrongly - I should be attentive to each of these three kinds of continuity. The blend needs each of them.

## Posted at 16:42 in category /agile [permalink] [top]

Fri, 30 Apr 2004

Cyborgs

A programmer's reaction to agile sometimes includes something like this: "I got into computers because I didn't want to work with people!" That's a reason to avoid bullpens, pair programming, and all that.

I am that kind of person. If you'd told my coworkers 20 years ago that I'd be pushing agile, they'd be flummoxed. I was the person who wanted an interior office, not one with a window, because I liked disappearing into my cave and working alone. I came and left at odd hours, sometimes quite out of sync with the rest of the world. I'm the classic introvert, someone who gets rejuvenation in solitude. (This notwithstanding my habit of speaking happily in front of hundreds of people, which is a character quirk for another day.)

But let me suggest that "I'm not a people person" is an oversimplification, at least for me. I have radically different ways of working with people.

There is working with people in relation to the machine and there is working with people in relation to people. I do hefty amounts of both of those things. I help people use Fit, I pair program, I pair test, I teach those things one-on-one. I also sometimes try to facilitate agreement within a group. Even worse, I try to facilitate agreement between groups about each other. That kind of thing is hard, harder than pairing on code. It's draining. The former is invigorating (though at the end of the day, you know you've worked).

It's hard for me to express the key difference. I don't think it's the number of people. I don't think it's that pair programming is more structured than group meetings - in many ways, I think it's more free-form than a well-run meeting. I don't think it's that when I moderate a meeting I have to be attentive to the process and the people - because I do that too when pair programming.

Somehow, the difference seems to me bound up with the focus of attention: the thing itself - the program, the observable change in something that acts material.

So my answer to people who object "I got into computers because I didn't want to work with people!" will be... well, something like this, but succinct, catchy, and convincing.

## Posted at 08:37 in category /agile [permalink] [top]

Wed, 28 Apr 2004

Interviewee needed for "What Is Quality?"

Better Software magazine (for which I am an editor) is running an occasional feature this year, "What Is Quality?" It's an interview with some person with an opinion on that question. Issue 6 is going to be an agile theme issue, so we'd like:

... to interview someone from the agile world whose view of quality has changed now that his company has taken a more lightweight approach to things. (Quality doesn't have to be a drag. Quality doesn't have to be heavyweight. There's room for quality in an agile universe... that sort of thing).

If you're interested, have a compelling story to tell, or a fervent opinion to share, drop me a line.

## Posted at 15:28 in category /agile [permalink] [top]

Thu, 15 Apr 2004

More Fit indentation

Part of a series on FIT extensions and stylistic tricks.

The ColumnFixture and RowFixture are good at checking lots of values in a small space, but they're awkward for describing test steps. The ActionFixture and my StepFixture are good at describing steps, but I find row after row of checks cumbersome and distracting.

People tend to interweave the two, using each for what it's good at. In keeping with my tendency to set off checks from test steps, I tend to indent the checking fixtures, like this:

| AnimalProgressFixture |  |  |
|---|---|---|
| new case | Susie | Rankin |
| diagnosis | therapeutic truck ride |  |
| charge | 30 |  |

> | CurrentAccountFixture |  |  |  |  |  |  |
> |---|---|---|---|---|---|---|
> | owner | patient | owner charges | owner balance due | status | state charges | clinic charges |
> | Rankin | Susie | 30 | 30 | in clinic | 0 | 0 |

The animal remains on the books until it's both paid for and discharged.
Here's the normal case: the owner comes in, pays his bill, and receives the animal.

| ContinuedAnimalProgressFixture |  |
|---|---|
| payment | 30 |

> | CurrentAccountFixture |  |  |  |  |  |  |
> |---|---|---|---|---|---|---|
> | owner | patient | owner charges | owner balance due | status | state charges | clinic charges |
> | Rankin | Susie | 30 | 0 | in clinic | 0 | 0 |

| ContinuedAnimalProgressFixture |  |
|---|---|
| record | discharge |

> | CurrentAccountFixture |  |  |  |  |  |  |
> |---|---|---|---|---|---|---|
> | owner | patient | owner charges | owner balance due | status | state charges | clinic charges |

...

## Posted at 14:44 in category /fit [permalink] [top]

Wed, 14 Apr 2004

Gift exchange

[Update: added link to Marcel Mauss's The Gift. Thanks to Finlay Cannon for correcting the misspelling that made me unable to find it.]

The gift exchange, as opposed to barter or contractual exchange, is particularly well suited to social systems in which great reliance is placed on the ability of well-socialized persons to operate independently of formal controls.

W.O. Hagstrom, The Scientific Community, p. 21

An agile team is such a social system. A company within which agile projects flourish might be one too. What kind of gifts are exchanged?

Given my niche, I'm interested in gift exchange between testers and other team members. It seems to me that the gift the testers (at least those on the left side of the matrix) give is to increase the velocity of the team. They help the programmers produce more features per iteration, enhancing their credibility, giving them more scope to do the kinds of things they want to do. They help the customer demonstrate more features per iteration to the interest groups hovering over her shoulder, thus enhancing her credibility etc.

But what gift do the testers get in return? Testability support. A feeling of being more central to the team. Influence into the shape of the product. And...? I'm giving three talks this year about the role of testers on agile projects. I want to concentrate on the team support half of that role (as opposed to the product critique half). Thinking about how testers fit into a gift economy may help me.

## Posted at 07:20 in category /agile [permalink] [top]

Mon, 12 Apr 2004

Ruby automocks

Michael Feathers asks for a way for tests to substitute their own mock classes for any class used, however indirectly, by the class under test. That would be simpler than refactoring to pass mocks around in the code under test, though perhaps, in the long run, less desirable. As Michael says, "I'm not convinced that is great thing. Abused it would just let code get uglier without bound. But, used well, it could really help people out of some jams."

I undid a bad mood this morning by hacking up something like that for Ruby. Here's an example:

require 'test/unit'
require 'automock'

module DBI
  # This is a utility used by the class under test.
  class BigHairyDatabaseConnection
    def name
      "true database"
    end
  end
end

module SeparateAppPackage
  # This class will use the big hairy database connection.
  # How to mock that out without changing this code?
  class ClassUnderTest
    def initialize
      @database = DBI::BigHairyDatabaseConnection.new
    end

    def which_database
      @database.name
    end
  end
end

class TestSeparatePackage < Test::Unit::TestCase
  include SeparateAppPackage

  # Mock out the big hairy database connection.			    
  mock DBI::BigHairyDatabaseConnection

  # Here's what it gets mocked out with.
  class MockBigHairyDatabaseConnection
    def name
      "mock database"
    end
  end
  
  def test_mock
    # Notice that the ClassUnderTest magically uses the mock,
    # not the class it was coded to use.
    assert_equal("mock database",
                 ClassUnderTest.new.which_database)
  end
end

This is just a spike, though I don't offhand see any impediments to a solid implementation. I can finish it up or hand it off if anyone would actually use it.
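For the curious, here's a minimal sketch (my guess at one possible mechanism, not the actual automock.rb) of how a `mock` helper could swap one class for another without touching the code under test: Ruby lets you remove and reassign a class constant inside its module, so any code that looks the constant up at call time gets the substitute.

```ruby
# Hypothetical automock core: replace +name+ in +mod+ with +replacement+.
# Returns the original class so a teardown method could restore it.
module AutoMock
  def self.substitute(mod, name, replacement)
    original = mod.const_get(name)
    mod.send(:remove_const, name)   # remove_const is private, hence send
    mod.const_set(name, replacement)
    original
  end
end

module DBI
  class BigHairyDatabaseConnection
    def name
      "true database"
    end
  end
end

class MockBigHairyDatabaseConnection
  def name
    "mock database"
  end
end

AutoMock.substitute(DBI, :BigHairyDatabaseConnection,
                    MockBigHairyDatabaseConnection)

DBI::BigHairyDatabaseConnection.new.name   # => "mock database"
```

This works because Ruby resolves `DBI::BigHairyDatabaseConnection` each time it's evaluated, so even code compiled long before the substitution picks up the mock.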

## Posted at 18:59 in category /ruby [permalink] [top]

Sat, 10 Apr 2004

Standards and defaults

Jason Yip takes on the word "standard". He's changed my vocabulary.

## Posted at 13:50 in category /misc [permalink] [top]

Fri, 09 Apr 2004

"Bang!" went the ColumnFixture

Part of a series on FIT extensions and stylistic tricks.

ColumnFixtures are useful when you want to vary data a lot but vary processing not at all. Having each row be a separate test is very tidy.

Sometimes the sequence of events is this:

  1. provide data (which varies)
  2. do something that changes state (always the same thing)
  3. check that the state was changed correctly (always the same way)

Notice that, even though there are three steps, the ColumnFixture has only two types of columns: input values, and methods whose results should be checked. Those correspond nicely to the first and third steps, but there's nothing that exactly matches the second.

Now, it's not hard to use the second type of column for that: just have the state-changing method return some random value, like true, and have FIT "check" for it. Here's an example:

fit.ColumnFixtureBangTester
| number 1 | number 2 | calculate() | sum() | difference() |
| 1        | 1        | true        | 2     | 0            |
| 1000     | -1000    | true        | 0     | 2000         |

Still, it bugs me that there's no visual cue that calculate is a different kind of thing than sum. So, taking a leaf from Scheme, I propose that columns used for side-effect be distinguished by ending their name with "!". This eliminates the need for anything to be in the column cells, which visually separates the input from the expected results:

fit.ColumnFixtureBangTester
| number 1 | number 2 | calc! | sum() | difference() |
| 1        | 1        |       | 2     | 0            |
| 1000     | -1000    |       | 0     | 2000         |

I have written a version of ColumnFixture that makes the above test pass. (Here's a zip file that includes it, the test, and the test source. If people want me to, I can include it in my StepFixture jar file.)

This version of ColumnFixture does one other thing. I like space-separated names at the heads of columns, so I added code to "camel case" them. (That is, in the test, "number 1" names method number1. And "do calculation" would name method doCalculation.) To my mind, this makes ColumnFixture consistent with ActionFixture.
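The camel-casing rule is simple enough to sketch. Here's a rough Ruby rendition of the transformation just described (the real extension lives in Java, inside the modified ColumnFixture):

```ruby
# Rough sketch of the header-to-method-name rule: lowercase the first
# word, capitalize the rest, leave words that start with a digit alone,
# and join everything together.
def camel_case(header)
  words = header.strip.split(/\s+/)
  rest = words[1..-1].map { |w| w =~ /\A\d/ ? w : w.capitalize }
  ([words.first.downcase] + rest).join
end

camel_case("number 1")        # => "number1"
camel_case("do calculation")  # => "doCalculation"
```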

## Posted at 16:58 in category /fit [permalink] [top]

Thu, 08 Apr 2004

FIT: highlighting checks

Part of a series on FIT extensions and stylistic tricks.

Here are two tests with identical contents (exact same words). Which do you prefer?

Orders are given to caretakers and students. Some orders depend on whether the animal's in intensive care or not.

AnimalProgressFixture (a StepFixture)
| new case | Betsy | Rankin | | Rankin brings in a cow (a comment) |
| diagnosis | severe mastitis | | | |
| order | intensive care | | | |
| order | soap | | | subjective objective assessment and plan |
| check | student does | soap | daily | |
| check | student monitors temperature | 6 hours | | because in intensive care |
| check | caretaker does | milking | never | no one milks - milking has to be ordered. |
| check | student does | milking | never | |
| order | milking | | | |
| check | caretaker does | milking | 12 hours | |
| check | student does milking | 3 hours | | between student and caretaker, cow gets milked every three hours |
| ... | | | | now finish treatment |

Orders are given to caretakers and students. Some orders depend on whether the animal's in intensive care or not.

AnimalProgressFixture
| **new case** | **Betsy** | **Rankin** | | Rankin brings in a cow |
| **diagnosis** | **severe mastitis** | | | |
| **order** | **intensive care** | | | |
| **order** | **soap** | | | subjective objective assessment and plan |
| check | student does | soap | daily | |
| check | student monitors temperature | 6 hours | | because in intensive care |
| check | caretaker does | milking | never | no one milks - milking has to be ordered. |
| check | student does | milking | never | |
| **order** | **milking** | | | |
| check | caretaker does | milking | 12 hours | |
| check | student does milking | 3 hours | | between student and caretaker, cow gets milked every three hours |
| ... | | | | now finish treatment |

I prefer the second. If I want a quick idea of what the test's about, it's easy to scan the rows and just read the bold ones. It's comparatively hard to figure out what's going on if the steps in the test don't stand out from the checks. Other people might use color to distinguish the two types of rows. I find font changes less distracting than color changes. (As you can tell from this page or my main page, I'm a pretty colorless person.)

A test is two things: an example of use and a check of correct results. I've always been bothered when I can't see the examples for the checks, and I've gone to silly lengths at times to keep them separate. Here's a test written in Ruby:

  def test_a_normal_day
    start_background_job
                                assert_states([@misc], [])
    start 'stqe'
                                assert_states([@stqe], [@misc])
    start 'timeclock'
                                assert_states([@timeclock], [@misc, @stqe])


    pause_without_resumption
                                assert_states([], [@timeclock, @misc, @stqe])

I don't go this far as a rule, but I wanted to talk about these tests with my Customer. This format let us step through the commands one by one. For each, I'd translate the assertion into human language. She didn't have to pay attention to anything on the right.

## Posted at 13:44 in category /fit [permalink] [top]

Wed, 07 Apr 2004

Inline comments in Fit

Part of a series on FIT extensions and stylistic tricks.

One of the nice things about FIT is that you can interweave explanatory text with tables that serve as both examples and tests:

A clinician can have the clinic absorb some charges. Perhaps a test was done only for educational purposes, or the animal stayed an extra day.

AnimalProgressFixture
Note: this is a StepFixture.
| new case | 2353 | Peoria |
| order | intensive care | |
| charge | 100 | |
| charge | 900 | clinic |
| check | balance | 1000 |
| check | balance | clinic | 900 |
| check | balance | Peoria | 100 |
| ... | | |


Certain charges are always paid by the state.

AnimalProgressFixture
...


I love that. (See also Mark Miller's Updoc, which is used with a scripting-ish language called E.)

I also find it useful to put "inline" comments in my tests. I do that by taking advantage of the way that Fit stops looking at table cells when it sees the first empty one in a left-to-right scan. So I leave a blank cell and write a comment in the next one. Here's the above test, so annotated:

AnimalProgressFixture
| new case | 2353 | Peoria | | |
| order | intensive care | | | |
| charge | 100 | | | Normal diagnostics yield nothing, but the clinician wants to press on... |
| charge | 900 | clinic | | Absorb excess diagnosis charge. |
| ... | | | | |



I think this sort of thing is useful, particularly for step-by-step tests, because it helps the test tell a story, helps it make sense in business terms to both the reader and the writer.
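That blank-cell rule is easy to model. Here's a sketch in Ruby (not Fit's actual Java parsing code) of the left-to-right scan:

```ruby
# Cells up to the first blank one count; everything after the blank is
# free for commentary, which the fixture never sees.
def significant_cells(row)
  row.take_while { |cell| !cell.to_s.strip.empty? }
end

significant_cells(["charge", "900", "clinic", "", "Absorb excess diagnosis charge."])
# => ["charge", "900", "clinic"]
```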

## Posted at 14:26 in category /fit [permalink] [top]

PLoP 2004 reminder

The PLoP 2004 submission deadline is rapidly approaching.

## Posted at 07:02 in category /misc [permalink] [top]

Using XML for Ant

The Ant inventor reflects:

If I knew then what I know now, I would have tried using a real scripting language, such as JavaScript via the Rhino component or Python via JPython, with bindings to Java objects which implemented the functionality expressed in today's tasks. Then, there would be a first class way to express logic and we wouldn't be stuck with XML as a format that is too bulky for the way that people really want to use the tool.

Or maybe I should have just written a simple tree based text format that captured just what was needed to express a project and no more and which would avoid the temptation for people to want to build a Turing complete scripting environment out of Ant build files.

Both of these approaches would have meant more work for me at the time, but the result might have been better for the tens of thousands of people who use and edit Ant build files every day.

(Via Keith Ray.)

## Posted at 06:57 in category /misc [permalink] [top]

Big Visible Things

Alberto Savoia describes some clever and fun feedback devices. Scroll through to look at the pictures, then read the text.

## Posted at 06:51 in category /agile [permalink] [top]

Tue, 06 Apr 2004

Fit extension: the StepFixture

I've been reviewing a forthcoming book on FIT, so I've been thinking about it a lot. Hence this blog category. I'm going to spew forth both some extensions to FIT and some stylistic quirks I've developed. I hope they'll provoke some blogospheric or wikispheric talk - or at least pointers to places where that talk's already happening.

Since a lot of my examples will involve my StepFixture, I'll talk about that first. (You can get it here. It has both my source and a jar file containing StepFixture plus the standard FIT fixtures.)

StepFixture is my alternative to ActionFixture.

ActionFixture depends on three keywords: enter, press, and check. The first two ask you to model your application as a device with a GUI-ish interface. My first difficulty with ActionFixture is that I'm afraid people will think the FIT tests should test through the actual GUI, though I know this is not Ward's intention. He would prefer the tests to bypass the GUI and test the business logic directly. The pretend device interface is just a convenient way to think about performing a series of steps. Still - to be overbearingly paternalistic about it - I think it's an "attractive nuisance."

I think that the problem could be alleviated by using cause as a synonym for press. Then you "probe the device" by entering data, entering data, ..., entering data, then causing some action. That's easily done by adding this code to ActionFixture.java:

    public void cause() throws Exception {
        press();
    }

But my second difficulty is that the tests are verbose:

| enter | Betsy    |
| enter | Rankin   |
| cause | new case |

I think long sequences like that are hard to read. I would rather see this kind of terseness:

| new case | Betsy | Rankin |

It might be objected that this looks too much like a program's method call; that non-programmers will find the device model more friendly. I have experience showing StepFixture tests to only two Customers, but neither of them seemed to have problems with the format.

Notice that, whereas press describes a method with no arguments, new case takes several. I've occasionally found it useful to have methods with the same name but different signatures. The following is a bit of a test that describes how sometimes the owner of an animal being treated doesn't pay all the charges. If a clinician does extra work for teaching or research purposes, the veterinary clinic will pick up part of the charge.

| diagnosis | clinical mastitis |        |
| charge    | 100               |        |
| charge    | 400               | clinic |

It might seem that it would be better to always have charge take both the amount and the entity-being-charged. But consider: there were already a bunch of tests that used the one-argument form. It would have been a hassle to change them. Moreover, we'd already established a vocabulary where charge 100 meant "charge the normal payer" - I want to build a vocabulary, not always be revising it.

I've also found arguments to be useful with the check keyword. For example, after the above, I might use these check statements:

| check | balance | 100 |
| check | balance | clinic | 400 |

They invoke, respectively, these methods:

    public int balance()...
    public int balance(String payer)...

So the expected value is always the last non-blank cell in the row. (That's a little awkward - I kind of wish they all lined up.)
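In other words, the row itself carries enough structure to pick the right method. A sketch (my reading of the convention, not the actual StepFixture source) of how such a row might be decoded:

```ruby
# Hypothetical decoding of a `check` row: drop blank cells, then the
# second cell names the method, the last cell is the expected value,
# and anything in between becomes arguments. The argument count picks
# the overload: balance() vs. balance(String payer).
def interpret_check(row)
  cells = row.reject { |c| c.to_s.strip.empty? }
  { method: cells[1], args: cells[2...-1], expected: cells[-1] }
end

interpret_check(["check", "balance", "100"])
# => {:method=>"balance", :args=>[], :expected=>"100"}
interpret_check(["check", "balance", "clinic", "400"])
# => {:method=>"balance", :args=>["clinic"], :expected=>"400"}
```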

Sometimes the use of arguments on check isn't necessary, but I find it more pleasing. (I think text that exists to promote conversation - as customer-facing tests do - should be pleasing to the eye.) For example, these check who does what how often:

| check | student does                 | soap    | daily |
| check | student monitors temperature | 6 hours |       |
| check | caretaker does               | milking | never |
| check | student does                 | milking | never |

There's no particular reason for "student does" and "soap" to be in separate cells, but I find the vertical alignment makes the set of checks easier to scan.

The FIT ActionFixture has the test explicitly start an application. I found myself drawn to a "FIT-first" style of coding, in which I started building the application within the fixture, factoring out pieces as it became convenient. So I did away with start. Nevertheless, an important part of FIT is the ability to have a test that includes more than one kind of table. So state has to persist across tables. As with the ActionFixture, that happens automatically with the StepFixture. If you want to start fresh, you use restart in the table:

fit.StepFixture
| restart |

That clears out all state and starts you with a new object. I'm not sure I like that, so...

Most of my tests have been a single StepFixture. So what I've found myself doing is having two subclasses of it for an app. The "AppFixture" fixture starts fresh. The "ContinuedAppFixture" signals that the current table is a continuation of a previous one. So a multi-table test looks like this:

AnimalProgressFixture
| new case | Susie | Rankin |
| diagnosis | therapeutic truck ride | |

... another type of table ...

ContinuedAnimalProgressFixture
| payment | 30 | |

In the code, the ContinuedAnimalProgressFixture is derived from StepFixture:

  public class ContinuedAnimalProgressFixture extends fit.StepFixture {
      private Case myCase;
      static public Accountant accountant; // Used by other fixtures
      protected Clinic clinic;
      ...

      public void newCase(String animal, String owner) {
        ...
      }

      public void diagnosis(String diagnosis) {
        ...
      }
  }

AnimalProgressFixture extends ContinuedAnimalProgressFixture, but reinitializes things:


  public class AnimalProgressFixture extends ContinuedAnimalProgressFixture {
      public AnimalProgressFixture() {
          super();
          restart();
          accountant = new Accountant();
          clinic = new Clinic(accountant);
      }
  }

I'm not sure I like that either.

If you want to use StepFixture and need more help, let me know.

## Posted at 20:35 in category /fit [permalink] [top]

Mon, 05 Apr 2004

Big visible charts

A blog about Big Visible Charts (via Jason Yip).

## Posted at 17:14 in category /agile [permalink] [top]

Poke-inviting code

Michael Feathers:

You think your design is good? Pick a class, any class, and try to instantiate it in a test harness. I used to think that my earlier designs were good until I started to apply that test. We can talk about coupling and encapsulation and all those nice pretty things, but put your money where your mouth is. Can you make this class work outside the application? Can you get it to the point where you can tinker with it, in real time, build it alone in less than a second, and add tests to find out what it really does in certain situations? Not what you think might happen, not what you hope might happen, but what it really does.

That made me think: wouldn't it be nice to be able to point at a class (or some similar unit) and tell it, "You! Make me an object that I can poke at." It would create representative objects (or mocks) that it depended on, then sit there waiting for you to send messages to it via the interpreter.

Somehow, this seems different to me than running the unit tests under a debugger, hitting a breakpoint sometime after the setup method, then poking away. It puts the responsibility for comprehensibility exactly in the code, not one step removed. (Something like the difference between explaining code with a comment and making the code intention-revealing.)

Of course, you'd want the interpreter to be able to generate a unit test from your poking. Any such test would need some cleaning up. But, perhaps a generator that tried to filter out unneeded commands (assuming cooperation from the programmer about side-effects) could make that effort reasonable. Anyone want a Masters' project?

Even if this idea is dumb, maybe there's something to the notion of code that goes out of its way to be explored. Is that different from simply being decoupled? simply being well tested?

## Posted at 09:19 in category /coding [permalink] [top]

Sat, 03 Apr 2004

Situated software

Clay Shirky has an important piece about what he calls situated software, "software designed in and for a particular social situation or context". Some representative quotes:

We've been killing conversations about software with "That won't scale" for so long we've forgotten that scaling problems aren't inherently fatal. The N-squared problem is only a problem if N is large, and in social situations, N is usually not large. A reading group works better with 5 members than 15; a seminar works better with 15 than 25, much less 50, and so on. [...]

These projects all took the course's original dictum -- the application must be useful to the community -- and began to work with its corollary as well -- the community must be useful to the application. [...]

We constantly rely on the cognitive capabilities of individuals in software design -- we assume a user can associate the mouse with the cursor, or that icons will be informative. We rarely rely on the cognitive capabilities of groups, however, though we rely on those capabilities in the real world all the time. [...]

(Via Ralph Johnson on the uiuc-patterns mailing list.)

This makes me think of all the custom software that small development teams build around the product they're developing. They're a small social group. Can thinking in Shirkyesque terms help? Would it help them pay attention to things they overlook now?

Also: zillions of people are building support code for their small group, then putting it out into the world. Keeping Shirky in mind, what might they do differently? One thing that comes to mind is Michael Feathers' notion of stunting a framework. He suggests small frameworks that are intended to be owned and changed, not merely used by subclassing or plugging.

See also Ben Hyde's comments here and here:

Another point to make about situated software is this balance between a forgiving environment and a strong signal that helps the software to adapt.

The challenge in making a thing survive over time is getting it to adapt.

It's not the software alone that will adapt. From a manglish perspective, what's especially interesting is the possibility that the social situation that supports the software will change in response to it, which will change the software, which...

And more than that might change. Shirky talks about how software creators used the physical environment:

[...] take most of the interface off the PC's dislocated screen, and move it into a physical object in the lounge, the meeting place/dining room/foosball emporium in the center of the ITP floor. Scout and CoDeck each built kiosks in the lounge with physical interfaces in lieu of keyboard/mouse interaction.

(Information radiators, again.) As this goes on, will the lounge and the way people use it evolve?

So suppose we want to start paying more attention to how our secondary, "quick hack", project support work interacts with the micro-social, the micro-cultural, and the physical environment - and to how all those things evolve together. How do we do that? How do we notice gradual changes in the air we breathe? How does a project's network of support evolve in the steady sort of hill-climbing way that we hope the product itself evolves?

Hmm... I sense an Agile Development Conference or XP Universe Open Space session coming on.

Coming down to earth... prior to attentiveness, there must be skill. Fortunately, Mike Clark's writing the book on that one.

## Posted at 10:54 in category /misc [permalink] [top]

Tue, 30 Mar 2004

Better Software Magazine: writers needed

[Update: tried to clarify "upgrading testing skills".]

I'm a technical editor for Better Software magazine, formerly Software Testing and Quality Engineering. In its older incarnation, the target audience was testers, test leads, process people, and managers. In the newer incarnation, the emphasis will broaden somewhat to capture more people (while hanging on to the original audience). We hope everyone who spends time thinking about quality (in its various manifestations) will subscribe.

That especially includes programmers. To that end, for example, we've recently had an article on remote work (from Andy Hunt, Christian Sepulveda, and Dave Thomas). It had a programmer slant, even though it contained advice for all types of people. We'll soon be having an article on FIT-first development from Dave Astels. Our Tool Look section will soon take a look at the popular IDEA IDE. Etc.

It's foolish that I've never solicited authors on this blog. Here are topics we're looking to cover. And here are instructions about how to get started writing for us.

We'll be having an editorial planning meeting April 1 and 2. We'll add more topics to the list. Mail me if you have suggestions.

In particular, here are topics we need covered in the September issue, with a first draft deadline of May 12:

  • Development realities for managers. What do managers need to know about development today? What significant changes have happened? In what ways are managers out of touch, technically?

  • Usability design.

  • Upgrading testing skills. Dave Thomas writes to programmers:

    The industry is changing underneath us, and most developers seem oblivious, preferring to blame the recession rather than face an awkward fact: we're never going back to the easy life of the late 90's. Instead, every developer is increasingly going to have to fight to stay attractive, working hard to develop the skills needed as the industry matures and more and more of our work becomes commoditized.
    This Front Line piece will be written by a tester who followed Dave's advice. For example, the author might describe how she added a scripting language to her manual testing portfolio. Or she might say how she studied anthropological fieldwork and applied it to interviewing users.

## Posted at 10:13 in category /misc [permalink] [top]

A debug dump

[Update: Spolsky's article, mentioned below, is here. Thanks, Joel.]

Dave Liebreich on debug dumps:

Ask for a debug dump command to be included in the system, and make sure programmers and testers know how to invoke it, grab and decode the information, and expand the command to include more information.

The most debuggable system I ever had a hand in writing was the virtual machine for an implementation of Common Lisp. My motto was that no bug should be hard to find the second time. So part of fixing each tricky VM bug was adding code that would make bugs like it shout out "Here! I'm here!" Over time, the system got quite debuggable.

Inspired in part by that, I later wrote a set of patterns for ring buffer logging. Joel Spolsky wrote an article for Volume 5, Number 5 of STQE on remote crash reporting. The most interesting bit was that his programs "phone home" with very little data. He finds he doesn't need enormous detail.
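The core of ring buffer logging fits in a few lines. Here's a toy sketch (mine, for illustration - not taken from the patterns paper):

```ruby
# Toy ring-buffer log: remember only the last +capacity+ entries, so a
# long-running program can afford to log verbosely and still dump a
# useful recent history when a bug finally announces itself.
class RingBufferLog
  def initialize(capacity)
    @capacity = capacity
    @entries = []
  end

  def log(message)
    @entries << message
    @entries.shift if @entries.size > @capacity
  end

  # On a crash, the dump holds the most recent history only.
  def dump
    @entries.dup
  end
end

log = RingBufferLog.new(3)
%w[connect query retry timeout crash].each { |event| log.log(event) }
log.dump  # => ["retry", "timeout", "crash"]
```

The point Spolsky's experience underlines is the same one the buffer size encodes: a little recent context is usually all the detail you need.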

## Posted at 07:24 in category /misc [permalink] [top]

Mon, 29 Mar 2004

Open source test tools

Open source test tools are hot these days. The two best sources I know are Bret Pettichord and Danny Faught.

## Posted at 17:00 in category /testing [permalink] [top]

Testers, TDD, and disruption

Jonathan Kohl has a couple of important posts on pairing with programmers: here and here.

What Jonathan writes reinforces my feeling that managing the human dynamics of the tester/programmer interaction is a key issue. As I wrote after the Agile Fusion workshop:

I've done limited pair programming with "pure programmers". When I have, I've noticed there's a real tension between the desire to maintain the pacing and objectives of the programming and the desire to make sure lots of test ideas get taken into account. I find myself oscillating between being in "programmer mode" and pulling the pair back to take stock of the big picture. With experience, we should gain a better idea of how to manage that process, and of what kinds of "testing thinking" are appropriate during coding.

My sense is that I'm a fire hydrant of ideas for examples: "Ah, the program will have to be able to handle cases like this... and like this... and like this..." My urge is to spill them out as I think of them. But, as my soaring instructor once told me when I was porpoising on tow, "Don't just do something, sit there." My job is to dole my ideas out at a rate, and in an order, that contributes to forward momentum (without forgetting the ones I hold back).

In his second post, Jonathan quotes James Bach: "... testers are critical thinkers -- and critical thinkers are disruptive." I don't think that need always be true. One can be a critical thinker but not a critical speaker. One can post-process the thoughts and use them to guide progress in a non-disruptive way. That's not traditionally been the role of testers: it's been their job to have critical thoughts, to evaluate the product against them, to report the results clearly and dispassionately, and to let someone else decide what to do about it. In an agile project, it seems to me that the tester's responsibility has to go a bit further.

I'm drawn to the analogy of writers' workshops as they are used in the patterns community. Considerable care is taken to prepare the author to be receptive to critical comments. At its worst, that leads to nicey-nice self-censorship of criticism. But at its best, it allows the author to really hear and reflect on critical ideas, freed of the need to be defensive, in a format where teaching what should change doesn't overwhelm teaching what works and should stay the same.

I suspect testing in agile projects will also have to develop rituals. Agile projects are a specialized context. The rituals will aim to optimize how testing skills fit in.

See also Christian Sepulveda's idea that I think of as "oscillating testers".

## Posted at 08:07 in category /testing [permalink] [top]

Sat, 27 Mar 2004

Living in the material world

[Update: added a title for fear the lack of one would break some aggregator somewhere.]

Ben Hyde writes:

Tools do seem to be making a difference here. A good bug database, the optimistic concurrency design pattern used in CVS, even something as amazingly simple as SubEthaEdit. It gives one hope that some of what seems so hard may only be a misunderstanding about how to frame up the work.

There's technological determinism, vividly displayed in the 90's proclamations that the internet would "route around" all political blockages and empower people all over the world and make participatory democracy inevitable.

There's social determinism, the notion that, say, social class drives everything, or that managers need not be expert in the business they manage, or that problems in software projects can be understood with reference only to the people involved: what universal behaviors are they exhibiting?

The view I subscribe to is that the world is a big feedback-ey mess. The technological affects the social affects the material affects the conceptual affects the technological... One of the attractive things about the Agile methods is the emphasis they place on the material world. People are not just conceptual entities processing information. They are embodied, living in a physical space that affects what they do and how they think. Hence the emphasis on the tactility of 3x5 cards, on the open workspace, on information radiators.

Perhaps the agile methods will not just produce more satisfying software quicker. Perhaps they'll shift us geeks away from a too abstract - a too unidimensional - view of the world.

(See also this post from Michael Hamman.)

## Posted at 08:20 in category [permalink] [top]

Fri, 26 Mar 2004

Another definition of agile

From Ron Jeffries, on the agile testing mailing list:

That's what agility might be said to be about: encountering all the problems so early and so often that the effort to fix them is less than the pain of enduring them.

## Posted at 07:03 in category /agile [permalink] [top]

Mon, 22 Mar 2004

Links

A blizzard of ideas this time, all suggestive, none that I've been able to comment on sensibly.

## Posted at 16:59 in category /links [permalink] [top]

Thu, 18 Mar 2004

Jolt Awards

Congratulations to Andy and Dave for winning a Jolt Productivity award for the Pragmatic Starter Kit. Buy copies now.

## Posted at 15:31 in category /misc [permalink] [top]

PLoP 2004

CALL FOR SUBMISSIONS

PLoP'04
Pattern Languages of Programming Conference

Allerton Park, Monticello, Illinois
September 8-12, 2004
The Fellowship of Patterns: The Second Decade of Patterns

A full program, including Writers Workshops, Focus Groups, and Special Presentations to help us look at the last ten years of patterns and plan for the next decade.

This is going to be an exciting PloP!

May 1
Papers Due
Focus Group Proposals Due

For more information: http://hillside.net/plop/2004/

## Posted at 15:31 in category /misc [permalink] [top]

Wed, 17 Mar 2004

Links

Jonathan Kohl has posted a copy of an article of his on developer/tester pairing.


Jason Yip suggests losing the term "agile":

"Agile" is not test-driven development, continuous integration, simple design, whatever; it's any feature, any order, most valuable feature first.

Part of the purpose of the workshop that led to the Agile Manifesto was to find a replacement for the term "lightweight processes." People felt that saying "I'm a Lightweight!" somehow didn't get the right message across to Gold Owners.

"Agile" was certainly a success in those terms. Jason suggests it might have been too successful now that everyone's piling on.

Perhaps, though I do think there are other unfortunate terms to go after first.

However, Jason's definition doesn't capture two of the values the Manifesto documents:

  • individuals and interactions...
  • customer collaboration...

I think those two human elements are important. But including them would make any phrase too long, leading back to the need for a word.


Jeffrey Fredrick on confidence due to tests.

And related: Alberto Savoia with some history of a testing tool company that's eaten its own dogfood. (Disclosure: I've done consulting for Agitar.) I was particularly interested in the effects of their dashboard, both good and bad. I remain wary of numbers. My normal preference is for dashboards and Big Visible Charts that are perceptual, fuzz out the precision in the data, and present the results of exploratory data analysis. But lots of numbers make it harder to game the system (consciously or unknowingly), and using the numbers in a system filled with human feedback ought to make them self-correcting.

## Posted at 11:56 in category /links [permalink] [top]

Tue, 16 Mar 2004

Meta-agility, oh my

As I write, I've bogged down on a paper I was going to submit to OOPSLA Onward! Its hook was talk about ontologies, a term I've semi-literately borrowed from philosophy. An ontology is an inventory of the kinds of things that actually exist, and (often) of the kinds of relations that can exist between those things. Everyone's got one. I abusively extended the term to make ontologies active, to make them drive people's actions.

Consider, for example, what I'll call the agile ontology. That ontology believes that software can be soft. Given the right practices, social organizations, workspace organization, and tools, software and the team that works on it can be trained to accept changing requirements. That contrasts with what I'll call the software engineering ontology, which holds that it is in the nature of software to devolve into a Big Ball of Mud. In the SE ontology, entropy is an inevitable and looming part of the world. In the agile ontology, entropy is avoidable, not central.

Then I was going to say that ontologies are important for two reasons. First, they work. They are self-fulfilling prophecies. The agile ontology constructs software and teams in a way that makes software soft; the SE ontology constructs software in a way that makes change (on average) expensive.

Second, I was going to claim that a person's ontology is malleable. Not infinitely malleable, mind you, but an awful lot of people can be shifted from the SE ontology to an agile ontology, and vice versa. (I myself have arguably made the shift twice over 20+ years.)

Finally, I was going to descend from the Heights of Abstract Argumentation and talk about somewhat pragmatic ways to shift people's ontologies. So the whole paper was to give a way for methodologists and consultants like me to think about what they're doing and get better at doing it.

I didn't want the essay to be yet another piece about the Agile methods, but instead a general statement about methodologizing. I bogged down when I realized both of my bolded claims above are meta-agile. By saying that people's ontologies are malleable, I'm being as optimistic about people as I am about software. But many of my readers would not be. By saying that methodologies work, I'm making a social constructivist statement. I'm saying that agile ontologists can arrange their world so that what they want to be true becomes true. But an SE ontologist would think that absurd: you cannot avoid the cost-of-change curve just by "having an optimistic attitude."

So I would be preaching to the converted about how to convert the nearly converted.

The hell with it. I'm going to install Zope, do something where I can feel the bits between my toes.

## Posted at 22:06 in category /agile [permalink] [top]

Testing in the Agile Manifesto

I often joke that I was the "token tester" at the workshop that created the Manifesto for Agile Software Development. I wasn't a token, but it was clear to me that the Agile story about testing was markedly weaker than the story for programming, business relations, and management. The one exception was unit testing in XP. (And, even then, it seemed there was something different about XP's approach - that was the first time I heard Ward Cunningham say, "Maybe we shouldn't have called it 'testing'," which I suspect is true).

Things have moved along quite nicely since then, to the point where Mike Beedle (another of the Manifesto's authors) suggested last week that something about testing be added:

we have come to value...

  • Tests over requirements definition and traceability

My response was that I would prefer this:

we have come to value...

  • Explanation by example over explanation by abstraction

...or even...

we have come to value...

  • Talking and pointing over just talking

These echo my long, slow change from someone who believed abstraction was the highest calling to someone who wishes he could only be backed reluctantly into abstraction.

It also marks a shift in my institutional center of gravity. It seems silly for a tester not to leap on a chance to insert the magic letters t-e-s-t into something as influential as the Agile Manifesto. But I've long thought of testing as a service function. Back in 1997, I wrote about the testing team's motto:

We are a service organization whose job is to reduce damaging uncertainty about the perceived state of the product.

In 1998, I wrote a paper, "Working Effectively With Developers," that provoked praise from a religious fellow who told me he was pleased to hear a conference talk about the spiritual virtues of service (which I was happy to hear, though I hadn't been thinking in those terms), as well as criticism from people, less Panglossian than I, who pointed out that there were scant career rewards in my approach.

So my career arc has been from the 80's, in which I thought the testers served the end users by protecting them from sloppy programmers and crazed managers; to the 90's, in which I thought the testers served the project manager (and, indirectly, the end users) by giving her information that would lead to better decisions; to today, where I think of serving the whole team (and, indirectly, the project managers and end users) by producing telling examples that cause people to learn.

But that shift toward a particular service - supporting learning - by a particular means - telling examples - has led me less and less to think of myself as having a distinct role: tester. It's not that I want to eliminate testers, but that I see greater opportunity in using and remolding testing habits in radically different ways. And there will be some discarding, too.

## Posted at 09:35 in category /agile [permalink] [top]

Not iLife, my life

My life would be ever so much better if I could type control-shift-meta-cokebottle while reading mail and have that message added to an iCal todo list such that clicking on something (the associated URL?) would bring up Mail and show the message-that-prompted-me-to-want-to-do-something. Anyone know if this is possible? Mail me. Thanks.

## Posted at 08:59 in category /mac [permalink] [top]

Fri, 12 Mar 2004

Exploratory testing and empathy

Exploratory testing is the time where team members get to put on their customer persona and try to use what they have created. I don't know of a better way to develop empathy for the end users.

-- Jeffrey Fredrick

## Posted at 15:11 in category /2004conferences [permalink] [top]

Thu, 11 Mar 2004

Text files defended

Carlos E. Perez defends loosely-coupled text files against structured data stores.

Very tangentially, I'm reminded of those who claim that the failure of Lisp and Smalltalk was partly due to their being hermetic - taking all data on their own terms, not the world's. They failed to embrace the string and the regular expression.

(Not just to pick on Lisp and Smalltalk, two fine languages: how long did it take for a regex class to make it into Java? And a regex class is one significant affordance notch below regexps built into the language.)

Despite tending to agree with Carlos, I gotta say: it's just so much finer to define functions on the fly in Lisp than in Ruby. For that particular (but important!) purpose, having the parse tree as a first-class data type is just a total win. Within programs, strings are a second-class way to represent code.
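To make the contrast concrete, here's a small Ruby illustration (my own invented example, not code from any of the linked posts). Runtime code generation in Ruby is either a string pasted together for eval or an opaque block; neither is a parse tree you can manipulate as data.

```ruby
# Two ways Ruby defines code on the fly.
class Point
  # The string route: glue fragments of source text together and eval them.
  # Flexible, but the "code" is just characters until it's parsed.
  %w[x y].each do |field|
    class_eval "def #{field}; @#{field}; end"
  end

  # The block route: no text-pasting, but the body is an opaque closure,
  # not a data structure the program can inspect or rewrite.
  define_method(:initialize) do |x, y|
    @x, @y = x, y
  end
end
```

In Lisp, by contrast, the same definition would be an ordinary list that the program could build, inspect, and transform before handing it to the evaluator.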

## Posted at 20:34 in category /misc [permalink] [top]

Telling the code to tell you something

Two related ideas:

  1. Chad Fowler builds on an idea from Dave Thomas to show dynamically-typed code that documents the types it probably wants. Dave's idea was that the first type a method's called with is probably the right type. So, in an aspectish way, you point at the method, tell it to remember how it's called and to complain if later calls are different. Chad's idea is that a method can just as easily record the information in a way that lets you create documentation about the types of method arguments.

    Chad's idea of using the results to inform an IDE is particularly clever.

  2. Agitator is a new testing tool that semi-intelligently floods a program with data and records interesting facts about what happens. (Full disclosure: I've received some money from the company that sells it.) Kevin Lawrence tells a story of how the desire to simplify the results Agitator produced for some code resulted in a small class hierarchy replacing much more code smeared all over the place. The end result and the feel of the process is the standard test-driven refactoring story, but the "impulsive force" was interestingly different.

    (See also a different story, from Jeffrey Fredrick. To me, it's a story about how Agitator's large number of examples hinted that there's a bug somewhere and that maybe an assertion right over there would be a good way to trap it.)

The common thread is quickly instrumenting your program, running a lot of ready-to-hand examples through it, getting some output that's not too noisy, and deriving value from that. Not a new idea: how long have profilers been around? But now that "listening to what the code is trying to tell us" is marginally more respectable (I blithely assert), we should expand the range of conversation. It shouldn't be only about the code's static nature, but also about its dynamic nature. And perceiving the dynamic nature should be as simple, as semi-automatic, even as rawly perceptual as, say, noticing duplication between two if statements.
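The first of those ideas can be sketched in a few lines of Ruby. Everything here (the module name, the method names) is my own invention, not Dave's or Chad's actual code; it just shows the shape: wrap a method, record the argument classes of the first call, and note any later call that differs.

```ruby
module TypeWatcher
  # Wrap method_name so it remembers the classes of its first call's
  # arguments and records a complaint whenever a later call differs.
  def watch_types(method_name)
    original = instance_method(method_name)
    expected = nil
    complaints = []
    define_method(method_name) do |*args|
      classes = args.map(&:class)
      expected ||= classes
      complaints << [method_name, classes] if classes != expected
      original.bind(self).call(*args)
    end
    define_method(:type_complaints) { complaints }
  end
end

class Calculator
  extend TypeWatcher
  def add(a, b)
    a + b
  end
  watch_types :add
end
```

After `calc.add(1, 2)` the wrapper has recorded `[Integer, Integer]`; a later `calc.add(1.0, 2)` shows up in `calc.type_complaints`. The same record could just as easily feed the generated documentation or IDE hints Chad describes.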

## Posted at 20:30 in category /ideas [permalink] [top]

Engineering as iteration

Good engineering is not a matter of creativity or centering or grounding or inspiration or lateral thinking, as useful as those might be, but of decoding the clever, even witty, messages the solution space carves on the corpses of the ideas in which you believed with all your heart, and then building the road to the next message.

-- Fred Hapgood (via Bret)

## Posted at 14:39 in category /misc [permalink] [top]

The phrase 'exploratory testing'

I don't like the word "test" in "unit test" or "customer test". It leads to endless repetition of sentences like "But don't forget that unit tests aren't really about testing, they're about design." The listener is perfectly justified in asking, "Then why not pick a name that doesn't mislead?" To which, the only real answer is historical accident.

I haven't been enormously comfortable with my use of the phrase "exploratory testing" either. Exploratory testing has historically been a fast and flexible and creative way of finding bugs that matter. My goal for it (in Agile projects) is different. It's to expand the range of possible future stories from which the business expert will choose. Some of those stories might be bug fixes like "make it such that the operating system doesn't crash when you lay a manual on the keyboard and the keys repeat more than about 100 times"1. But I'm much more interested in stories that are feature requests, ones that suggest adding positive business value rather than removing negative business value.

For a time, I was calling that "exploratory learning" to emphasize that it will use some of the techniques of exploratory testing while still being a different sort of thing. But the name didn't catch on, so I reverted to "exploratory testing." But I'm still unhappy about it.

In the Agile Testing mailing list, Randy MacDonald wrote:

Don't build a system where exploratory testing is possible. Do build a system where exploratory design is possible. In fact, build the system via exploratory design.

I'm thinking that "exploratory design", perhaps with another adjective tacked on front, is more what I'm looking for. So, if I start using that, this will remind me to give Randy credit for the term.

1 Not apocryphal. I actually crashed Gould's real-time OS that way sometime in the 80's.

## Posted at 07:53 in category /2004conferences [permalink] [top]

Wed, 10 Mar 2004

Where's the creativity in test-driven design?

Michael Hamman has a couple of posts on creativity. In the first, he defines creativity as, in part, the desire to create a problem. In the second, he speaks of generating creativity by inventing friction.

Michael made me think of creativity and test-driven design. Consider my story of a refactoring episode. In it, I claim that the end result was surprising. And it certainly felt like something like creativity was going on. But where did the creativity lie? After all, what I was doing seems to be fairly straightforward rule-following: I saw two if statements that test the same thing, so I removed the duplication by creating subclasses. Then I made some code more intention-revealing. I iterated until there were no more "impulsive" rules telling me of code to clean up.

Is the creativity somewhere else in the TDD micro-iteration? Is it in the tests? Maybe, but not enormously. The tests are mostly responses to outside impulses (at the highest level, from customer desires). And the coding doesn't seem hugely creative, either, since it's mainly a matter of getting the next test to pass in a straightforward way.

I'm not even quite sure how to pose the issue, but it goes something like this: the end result appears to be the product of a creative process. However, the process, when examined, doesn't seem creative. It seems fairly mechanical, fairly rote rule-following. However, it doesn't feel mechanical from the inside: coding the dullest code test-first is nothing like the experience of sweeping the floor, though to the outsider they might not look intrinsically different.

Part of what's going on, I think, is that the creativity is distributed. It's more a matter of a series of small aha moments than of a few big AHA! moments.

But a bigger part, perhaps, is that the creativity lies more in retrospective discovery than invention. I said something like the following to myself: "Oh, you know that thing I created to get rid of duplication back then? Now I see that changing its name turns it into a potentially sensible - even suggestive - object in the domain model." Discovery. Or consider Ward Cunningham's story of Advancers.

An Advancer is a MethodObject. I first created an Advancer to hold all of the temps used in the otherwise unrefactorable advance method in WyCash.[...]

We came to think of computations in terms of the Advancers we would need. [...] The mere existence of Advancers shaped the way we thought. We got more done. We did it faster. We wrote less code. We had fewer bugs. [...]

We enjoyed the benefits of Advancers for many months before we discovered their real calling. We had been having persistent problems with a few tax reports. [...] Finally, out of frustration, we began to look around for other objects that might help, and there was Advancer.[...]

I don't know if Ward's talking about the same sort of thing I am. I shall have to ask him. In any case, understanding a bit more where my feeling of creativity comes from might help me get it more often - or more justifiably.

## Posted at 21:26 in category /agile [permalink] [top]

Mon, 08 Mar 2004

Unit of measurement elected head of standards board

Sometimes I forget that the world is also a place of delightful whimsy. (Via Ben Hyde)

## Posted at 08:47 in category /junk [permalink] [top]

More on standards

I went all alarmist about work beginning on a standard titled "Recommended Practice for Establishing and Managing Software Development Efforts Using Agile Methods". Since then, the Agile Alliance board has been in contact with the guy heading it up, Scott Duncan. One board member writes:

[Scott] explained to me that it was initiated by government procurement folks who are interested in acquiring software developed with agile methods. They are looking for guidance about how to organize the acquisition process to do this. My sense is that this represents a growing acceptance of agile methods, so it is a good thing. [...]

Scott says he wants to avoid an 'Agile Software Development' standard, but instead wants to give guidance to the customers of agile software. They need to realize what kind of commitment they, as a customer, must make. They also need to know also what kinds of disciplines to look for in an organization claiming to be agile [...]

Since I am interested in how agile software development can be done under contract, I decided to join the standards committee. You are invited to join as well, since Scott would like good representation from the agile community. Work will mainly be done via e-mail or discussion group to begin; you do not have to be an IEEE member to work on the standard (only to vote), and membership from outside the US is solicited. Contact Scott at nacnuds@tsys.com if you are interested. [Bem note: I reversed the letters before the at sign to irk spammers.]

I asked Scott if I could mention this on my blog, and he wrote:

That would be wonderful as long as people understand the commitment to actively participate in the work, i.e., review materials, provide feedback, participate on conference calls, research issues, etc. There are always more people who express "interest," i.e., are willing to get copies of draft standards and emails keeping them up to date on status, than there are people who actually devote time to the work. If people just want to look over the standard in a reasonably final draft form, then being a part of the ballot pool, not the Working Group, is the best approach for them. As long as that is made clear to folks, I'd be glad to have a lot more folks from the agile community involved.

Thanks for asking and offering to help distribute the information. As the effort to get this started has been in the works for a long time, I do want to begin actual work on the content ASAP, so please let people know that as well so they contact me right away.

I'm still a tad nervous about the "what kinds of disciplines to look for in an organization claiming to be agile", since I'm rather a fan of Ken Schwaber's notion of agile epiphanies (annoying registration required; scroll down when you get through) as opposed to hard-and-fast rules. But, on balance, I suspect I was needlessly alarmist.

## Posted at 08:38 in category /agile [permalink] [top]

Sun, 07 Mar 2004

Sensors; and what to sense

Yesterday I sat in on a few sessions of the W. Ross Ashby Centenary Conference. Two papers presented might tie into some of my recent themes.

Peter Cariani spoke on Epistemology and Mechanism: Ashby's Theory of Adaptive Systems. I was taken by this picture:

This is an elaboration of an ordinary feedback loop. Following the outside arrows from the bottom, we see there's an environment. Sensors detect things about the environment, then a controller makes decisions based on them. Those decisions lead to actions, which affect the environment. The elaboration is that the organism can change the set of sensors that it uses if tests reveal that it's not doing well enough. Cariani says of such organism that it is "capable of learning new perceptual categories". That ties into my recent writeup of tacit knowledge, in which I said of a veterinary student who's learned an important diagnostic category:

"Cows simply are either bright or dull, the way the student herself is either alert or sleepy, or the way a joke is either funny or lame. Any explanation of how she knows seems contrived and after the fact. It's as if the student's perceptual world has expanded."

Hold that thought - malleability of perception - for a moment.

Peter Asaro spoke on Ashby's Embodied Representations: Towards a Theory of Perception and Mind. I was struck by something he said about bees. Suppose you're a bee navigating down a tunnel. You don't want to crash into either side. How can you do it? Well, consider what happens as you drift toward the right side. Features on the right wall will appear to go by you faster, features on the left slower. So just have that specific perception trigger changes in your wing flapping that shift you to the left. You don't need a "world model" with any accuracy; rather, to be a successful bee, you need a diverse set of perceptions that are well tuned to the tasks you need, as a bee, to perform.
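The bee's rule is simple enough to write down. This is my cartoon of it, not Asaro's model: compare how fast features flow past on each side and steer toward the slower side.

```ruby
# A cartoon of the bee's tunnel rule: steer away from whichever wall's
# features stream past faster.
def steer(left_flow, right_flow)
  if right_flow > left_flow
    :left      # the right wall is looming, so drift left
  elsif left_flow > right_flow
    :right
  else
    :straight
  end
end
```

No map of the tunnel, no estimate of position: one task-specific perception triggers one correction.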

Since I'm Mr. Analogy Guy, I of course thought of software projects as needing to be more like bees, especially projects that seem to me to be stuck. The problem is not so much that they don't know what to do. Rather, it's that they don't perceive their problem in a way that allows them to quickly turn it into some action that brings their environment more in alignment with their goals.

So I've recently been trying to think of Big Visible Charts and project dashboards that practically impel action. So, rather than talk about how there are too many meetings, instead put up a chart like this in a public place:

Each bar represents a day, measured in burdened dollars. The red portion is that amount of that day's project expense devoted to meetings. What I hope will happen is illustrated by Friday's bar, in which people's visceral reaction to red provokes the action of fewer meetings.

Putting up such a chart is a little gutsy, of course, but what're they going to do, fire you? In this economy? Oh wait, it's not 1997...

P.S. for the gadget enthusiast: Cariani showed a picture of a device that could grow its own sensors, constructed in the late '50s. There's something very cool about it. (Scroll down for bigger pictures.)

## Posted at 16:48 in category /agile [permalink] [top]

Fri, 05 Mar 2004

A draft paper on disciplinary agency

Faithful readers of this blog category will remember that I'm writing a paper applying Andrew Pickering's The Mangle of Practice to agile methods.

I had hoped that I might use the paper both for a seminar Pickering's running and as a submission to Agile Development Conference. I gave up on the latter idea a couple of weeks ago - I couldn't think of a slant that seemed at all likely to get accepted.

I have finished a draft. It kind of got out of control, partly because I'm rushing to fill in a last-minute gap in the seminar schedule. Partly it's that I'm trying to do way too much in the paper.

Here's the abstract:

Andrew Pickering's The Mangle of Practice is about how practice - doing things - causes change to happen in science and technology. He uses "the mangle" to name the way that machines and instruments, scientific facts and theories, goals and plans, skills, social relations, rules of evidence, and so forth all come together and are changed through practice.

In this paper, I present a detailed case study of the programming practice usually called "test-driven design." I show how Pickering's analysis, particularly his notion of "disciplinary agency," applies well to that practice. However, the flavor of this case study differs from those in his book. Its "dance of agency" gives the lead to disciplinary agency, which acts less as a source of resistance and more as a causal force in modeling and goal setting.

Why the difference? Pickering's book presents an ontology. I suggest that ontologies, too, are mangled in practice.

The paper is in three parts. The first is a rather long story of a refactoring. It's not momentously different than every other story of refactoring you've read: I notice code smells, I change the code, I'm sometimes surprised by where I end up. The only two novelties are that the refactoring happens after I make a FIT test pass, and that I'm coding in the FIT-first style where you write the test-passing code in the fixture and pull out domain objects a bit at a time.

After that I look at the differences between my story and Pickering's story of Hamilton's discovery of quaternions. Where Pickering talks about the world resisting human effort, I talk about the world alternately pushing me around and attracting me.

Finally, I suggest that all this talk about "what the world is doing" isn't purely idle. The Agile worldview ("ontology") is built up through experience and it affects practice. If you believe software can be soft if only you approach it right, you're more likely to figure out how to approach it right. If you believe that software is inevitably ruled by entropy, then you concentrate your effort on damping entropy. That is, I don't believe that methodologies are inherently either right or wrong; I believe they're made right by people who believe in them.

That section closes with a wild speculation: we wouldn't be where we are today if Smalltalk hadn't failed. Its failure led a bunch of bright people to a new place, the one place Smalltalk had a significant toehold: IT. That very different context forced them to invent. (Because of my deadline, I didn't have time to have the people who were actually there explain to me how completely bogus this idea is... but I'm going to be talking to them soon. After all, it's only a draft, so why not go wild, then backtrack?)

I have no real idea who my intended audience could be (other than a bunch of sociology and history majors just dying to learn about Factory Method and Composed Method). But if you're one of them, I would indeed like to hear your comments.

Here's the draft. Since the only readers I know I'll have are nonprogrammers who've read Pickering's book, I define programming jargon and not Pickeringish jargon. You can find enough explanation (I hope) in earlier postings here and here.

## Posted at 13:54 in category /mangle [permalink] [top]

XP/AU events

There'll be a lot going on, testing-wise, at this year's XP/Agile Universe. I'll be helping out with three events on the program.

  • Jonathan Kohl and I will be doing a workshop on tests as documentation. We want to look at examples and talk about how tests can be better documentation.

  • Bret Pettichord, Paul Rogers, Jonathan, and I will be doing a tutorial called "Approaches for Web Testing". It will revolve around the open source WTR framework, which is a set of Ruby classes that drive IE through COM. One of the things I find most interesting about WTR is the way it lends itself to exploration through the Ruby interpreter.

    This will be a hands-on tutorial. No starting knowledge of Ruby required, and we expect that what you learn can be applied to other, lesser, languages.

  • Finally, I'll be giving one of the keynotes. A keynote should be provocative, both in the normal sense and in the sense of "provoking conversation". I like keynotes that people keep referring back to all through the conference. That's the kind of keynote I'll try to come up with.

    But I also like talks that give people something they can apply the next week when they return to their job. I want to do that too.

Watch this space for more info.

## Posted at 12:54 in category /2004conferences [permalink] [top]

Tue, 02 Mar 2004

Two hopeful stories

My parents immigrated from Germany. They arrived in the US in 1958. My dad worked in construction, originally building large commercial buildings, later building houses as an independent contractor.

He wasn't an easy boss to work for. (He fired me once.) He tended to work alone, with occasional helpers, subcontracting out the jobs he couldn't do. There was one big exception while I was in high school, though. He kept hiring two Puerto Rican guys. He really took them under his wing, basically set them up in business.

Once I asked him why. He said something like "I know what it's like to come to a strange country as an immigrant, barely speaking the language, and having to work with people who don't think you deserve a job." (Working in construction with a bunch of WWII vets wasn't the easiest thing.)

I almost said, "Dad - you're practicing affirmative action!" For once, I was smart enough to keep my mouth shut. He was very conservative then, dead set against affirmative action, and he would have thought I was being stupid. Affirmative action was something they did without his consent. Simple decency was not at all the same thing. I remember thinking that if more people behaved decently, like him, we wouldn't need formal programs of affirmative action.

Why am I telling this story? I was reminded of it by a hopeful story of someone acting against type. My country is becoming divisive and bitter, all about the Right Positions and willfully blind to their effects on actual breathing people. So I crave such stories.

P.S. Last I checked, more than a decade ago, those two guys were successful independent contractors, solid parts of the middle class. One of them said they owed it all to my dad.

## Posted at 08:37 in category /misc [permalink] [top]

Mon, 01 Mar 2004

Ossifying fluidity

IEEE-SA also approved the start of work on IEEE P1648, "Recommended Practice for Establishing and Managing Software Development Efforts Using Agile Methods." This new standard will give those who purchase software a process for establishing, contracting and managing Agile development projects and for working with Agile software developers. It will apply to both technical and project management personnel and will focus on defining and controlling feature development.

For someone like me, who has been - to put it mildly - underwhelmed by the IEEE's desire to standardize software development practices, this announcement is rather alarming. The IEEE's track record has been one of either standardizing prematurely or standardizing things that don't work well. In both cases, IEEE standards have been an impediment to progress. (I am quite fond of 802.11b, though - I'm using it right now.)

If, indeed, this standard is really about how outside contracting organizations might interface with the teams doing agile development, I'm perhaps not so concerned. At least they'll be leaving the teams alone to figure out their own practices. And the fact that the standard's proposer is the Director of Standards for Computer Sciences Corporation's Defense Group is even cause for optimism: Agile becomes mainstream in a universe that is not notoriously aligned to the values of the Agile Manifesto. But I fear mission creep. And the name is really bad.

[Update: deleted unseemly whining.]

[Further update: later news is grounds for cautious optimism.]

## Posted at 15:47 in category /agile [permalink] [top]

I dub this Yip's Law

Anything can be made measurable in a way that is superior to not measuring it at all. -- Tom Gilb

Corollary: Anything can be made measurable in a way that is inferior to not measuring it at all. -- Jason Yip

## Posted at 13:45 in category /misc [permalink] [top]

Quick tests and slow tests

On the agile-testing mailing list, Jeffrey Fredrick writes:

At my previous company I side-stepped both issues -- defining "unit test" and having them get too slow -- by having two suites of tests, the "quick tests" (QTs) and the "build verification tests" (BVTs). We didn't enforce what sorts of tests people wrote or which suite they added their tests to, but we did require that the QTs had to all execute in under 5 minutes.

In practice, of course, that meant that most "component isolation tests" ended up as QTs while most "component un-isolation tests" ended up as BVTs.

We've adopted something similar at Agitar where we have a "quick cc build" and a "cc build", where the important measure is the feedback cycle, not a philosophical definition of what is _really_ a unit test. http://www.developertesting.com/developer_testing/000023.html

What Jeffrey describes is, I think, an organization of tests according to their virtues as change detectors. When you cannot get all possible change detection feedback fast enough, you arrange things so that you get a lot of the value in a little time.

This is completely orthogonal to other issues like whether the tests are technology-facing or business-facing, written in programmer-ese or customer-ese. How useful are our brains, that they allow us to think about the same thing in more than one way!
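One mechanical way to get such an arrangement (a toy sketch with invented names, not Jeffrey's actual setup): record each test's runtime and greedily pack the fastest ones into the quick suite until the time budget is spent.

```ruby
QUICK_BUDGET = 5 * 60  # seconds, per Jeffrey's five-minute rule

TimedTest = Struct.new(:name, :seconds)

# Greedily fill the quick suite with the fastest tests; everything that
# doesn't fit under the budget falls into the build verification suite.
def partition(tests)
  quick, bvt = [], []
  spent = 0
  tests.sort_by(&:seconds).each do |t|
    if spent + t.seconds <= QUICK_BUDGET
      quick << t
      spent += t.seconds
    else
      bvt << t
    end
  end
  [quick, bvt]
end
```

In practice a well-isolated test runs in milliseconds, so this packing naturally pushes the "component un-isolation tests" into the slow suite, just as Jeffrey observed.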

## Posted at 10:13 in category /testing [permalink] [top]

Risks of quantitative studies

Jakob Nielsen has an article on risks of quantitative studies. A nice checklist of the way numbers can mislead. My favorite bit:

Even when a correlation represents a true phenomenon, it can be misleading if the real action concerns a third variable that is related to the two you're studying.

For example, studies show that intelligence declines by birth order. In other words, a person who was a first-born child will on average have a higher IQ than someone who was born second. Third-, fourth-, fifth-born children and so on have progressively lower average IQs. This data seems to present a clear warning to prospective parents: Don't have too many kids, or they'll come out increasingly stupid. Not so.

[I'll let you read the article to find out why.]

Note that this ties into my earlier lament on exploratory data analysis. EDA is, in part, a way of persuading numbers to alert you to how you might misinterpret them.

## Posted at 09:29 in category /misc [permalink] [top]

Wed, 25 Feb 2004

William James on differences that make a difference

While I'm on an anti-definition kick, let me quote William James's story of the squirrel:

SOME YEARS AGO, being with a camping party in the mountains, I returned from a solitary ramble to find every one engaged in a ferocious metaphysical dispute. The corpus of the dispute was a squirrel - a live squirrel supposed to be clinging to one side of a tree-trunk; while over against the tree's opposite side a human being was imagined to stand. This human witness tries to get sight of the squirrel by moving rapidly round the tree, but no matter how fast he goes, the squirrel moves as fast in the opposite direction, and always keeps the tree between himself and the man, so that never a glimpse of him is caught. The resultant metaphysical problem now is this: Does the man go round the squirrel or not? He goes round the tree, sure enough, and the squirrel is on the tree; but does he go round the squirrel?

[Stop now and answer the question. When I did, I thought the answer was bleeding obvious. Dawn thought so too, but I was surprised that she thought it was the other answer that was obvious. Since then, I've asked others. It's not evenly split, but neither Dawn nor I are alone.]

In the unlimited leisure of the wilderness, discussion had been worn threadbare. Every one had taken sides, and was obstinate; and the numbers on both sides were even. Each side, when I appeared therefore appealed to me to make it a majority. Mindful of the scholastic adage that whenever you meet a contradiction you must make a distinction, I immediately sought and found one, as follows: "Which party is right," I said, "depends on what you practically mean by 'going round' the squirrel. If you mean passing from the north of him to the east, then to the south, then to the west, and then to the north of him again, obviously the man does go round him, for he occupies these successive positions. But if on the contrary you mean being first in front of him, then on the right of him, then behind him, then on his left, and finally in front again, it is quite as obvious that the man fails to go round him, for by the compensating movements the squirrel makes, he keeps his belly turned towards the man all the time, and his back turned away. Make the distinction, and there is no occasion for any farther dispute. You are both right and both wrong according as you conceive the verb 'to go round' in one practical fashion or the other."

That's from his second lecture on Pragmatism (and I'm so glad James died before Mickey Mouse was born, so that I can link to the whole thing).

When I get involved in definitional debates, I often think of James's pragmatic method, which is to ask:

What difference would it practically make to any one if this notion rather than that notion were true? If no practical difference whatever can be traced, then the alternatives mean practically the same thing, and all dispute is idle. Whenever a dispute is serious, we ought to be able to show some practical difference that must follow from one side or the other's being right.

Quite often, there is a practical difference. Then, if I'm lucky, I can focus on which practical difference people want, thus sneaking away from arguing about the definition.

## Posted at 20:07 in category /mangle [permalink] [top]

Tue, 24 Feb 2004

Our slant on exploratory testing

For our tutorial on exploratory testing at Agile Development Conference, Elisabeth Hendrickson and I have to answer this question: "What's exploratory testing for, anyway? In an agile project, I mean."

The answer we'll use in the tutorial will be something like this:

We advocate Exploratory Testing as an end-of-iteration ritual in which the Customer, programmers, and other interested stakeholders have the opportunity to take the new Business Value for a test drive to discover New Information.

What we'll want to do is simulate that experience in the tutorial. Our first thought was to do it on (surprise!) software. But there are two problems. First, getting everyone's machine working, getting the software running on it, dealing with configuration problems, etc. - that all takes a good half an hour. That is, maybe 15% of our available time. Ouch.

Second, we can't show the results of the exploratory testing. We can't show a Customer saying, "Oh! What a good idea! Let's put that in right now" - and then having it put in, and then having the Customer see it in action. We can't go through a full iteration, much less more than one. Yet, if exploratory testing's role is largely - as I believe - about shortening the project's feedback loop, we'd be doing a bad thing if we didn't close the loop.

So what we're planning on doing is to have teams of people design a game. The game will be one that demonstrates some property of Agile development. When we tried this out ourselves, we wanted to devise a game that puts across how and why test-driven design feels slower, starts out slower, but catches up in the end.

After the game is designed, groups will divide. Two people will stay, the others will go to join other groups. The changed groups will then playtest the game. They'll use a couple of exploratory techniques we'll describe. They'll come up with new information, things like:

  • Wow, so that's what that feels like.

  • Hmm... I've changed my mind.

  • Oh, this is missing something.

  • Gee, that was unexpected.

  • That didn't work.

  • Maybe what we deferred was more important than we thought.

Then that group will stay together and add something new to the game. Then there'll be another round of playtesting, with a couple of new exploratory techniques.

We're aiming for three things. First, a "Wow, so that's what that feels like" reaction applied to exploratory testing itself. Second, a desire to try it out at a home company. Third, to provide some concrete techniques that apply to software.

And, if we're lucky, we'll get some nice games that really make points well.

Our plans may change.

## Posted at 21:14 in category /2004conferences [permalink] [top]

Definitions get ever more slippery

I've long been fascinated by the notion of incommensurability. It's a term in science studies made popular (sic) by Kuhn's Structure of Scientific Revolutions and Feyerabend's Against Method. Two theories are incommensurable if neither can be fully stated in the vocabulary of the other. Feyerabend argued that incommensurability means that we have no rational (context-independent) way of judging between rival theories.

Terminology can also be incommensurable, since theories are built from terminology (and vice versa). A good example is "velocity". When Galileo was arguing with the Aristotelians about his new world view, both sides used the word "velocity". But Galileo meant something like what we today call "instantaneous velocity", and the Aristotelians meant something like what we call "average velocity". So if they both watched the same experiment they would likely get different answers to the question "What's the ball's velocity?" We can, with hindsight, say they ought to define their terms better. But that's part of the problem. They could define velocity in terms of motion, but "motion" also meant something different to an Aristotelian. A theory of motion must necessarily say something about growth, since the growth of a tree is the same phenomenon as the falling of a ball. And what exactly is the point of a theory of falling balls that can't even begin to explain why fire rises? - the Aristotelian theory could.

You can see a conversation going nowhere.

Kuhn writes (at least tentatively) as if such conversations must go nowhere. People with incommensurable theories live in different worlds. It's as if they have different perceptions. Incommensurability is a gulf that can't be bridged by talk or definitions, only by experience (what Kuhn likens to a gestalt shift).

Because of incommensurability, I accept frustration when communicating between the agile and non-agile worlds. Words like "test" and "design" come freighted with different world views. That's one of the reasons I've tried to talk about tests as "checked examples". Maybe if we use different words, talk will be easier.

But wait - it gets worse.

Last night, I read the first two chapters of Esther-Mirjam Sent's The Evolving Rationality of Rational Expectations: An Assessment of Thomas Sargent's Achievements for a seminar I'm sitting in on. Given that I have nowhere near the economics background the book assumes, I can only give a thumbnail sketch. There was this economist, Sargent. He worked on something called "rational expectations" for many years. Rational expectations holds that people's predictions don't err systematically. They err randomly. That assumption has all sorts of consequences, none of which I understand.

What struck me on page 19 was this sentence: "Sargent's different interpretations of rational expectations were temporally specific." Although that's vaguely worded enough that I'm not sure what Sent was thinking, it made me think this: It's likely that Sargent would say all along that he was working on a single thing named "rational expectations," but what he meant by that term changed over time.

So imagine: not only do Galileo and the Aristotelians face an incommensurability barrier, the Aristotelians have to track the changing connotations and denotations of "velocity". We, today, can say Galileo was always talking about instantaneous velocity, just getting ever better at figuring out what that meant. But that's probably not at all what the story looks like from the inside as it happens, even if it looks like that to Galileo after the fact (since Galileo is doubtless as good at telling stories to himself as we are at telling stories about him).

It's a wonder we can communicate at all about important things. That we do, I humbly submit, has a lot to do with talking about examples, not about definitions. And, perhaps more important, with doing things together. And with imagining what it would be like to do things like someone else does them.

## Posted at 20:39 in category /mangle [permalink] [top]

Sat, 21 Feb 2004

Tacit knowledge

I've posted my January editorial from Better Software magazine. It's about tacit knowledge, and it commits to the web the "bright and dull cows" story that I've told to countless generations of software people.

## Posted at 12:47 in category /misc [permalink] [top]

Children were created by viruses to make replication easier

Actually, this posting has nothing to do with that, but think about it. Why contend with adult immune systems when you can use immature ones? I imagine two viruses brainstorming a billion years ago, when one of them says, "Sexual reproduction!" And the other says, "Yes! And while we replicate within the hosts that have incompetent immune systems, the other hosts will nurture the cells we're using! Boris, you are brilliant!"

That interlude brought to you by Sophie "Why?" Marick. And, soon, no doubt, her brother.

Now that Dawn's home, I should be doing the things I should be doing. But I need a break, so I read some blogs. Here are two nice pieces from Dave Thomas. Both contain coolly idiomatic Ruby.

P.S. The disciplinary agency piece is coming along, but the walking-through-refactoring example is 17 pages of test tables and code and pictures and text. This seems a problem.

And the Powerbook mysteriously stopped ticking three days after I whined about it. Magic.

## Posted at 10:12 in category /links [permalink] [top]

Mon, 16 Feb 2004

The tick-tick-tick of Panther

OS X Panther, recently installed, is driving me insane. Right now, my TiBook's drive is going tick tick tick tick tick tick tick tick tick tick tick tick tick tick tick tick tick tick tick tick tick tick tick tick tick tick tick at slightly less than one second intervals.

It does not seem to be paging, since there's 171ish meg of free memory and the pagein/pageout numbers do not change. And yet if I exit a program, the ticking sometimes stops.... There... It did.

But it doesn't seem to be that program that's at fault. I often exit Word and the ticking stops. But sometimes it doesn't, so I exit Mozilla and the ticking stops. But exiting Mozilla alone doesn't always stop it. Etc.

And sometimes starting a program makes the ticking stop.

I've also gotten it into a mode where it's normally not ticking, but exposing the Dock by moving my mouse to the edge of the screen makes it start and continue as long as the Dock is visible.

(I should be systematically investigating, but my time is tight. So I'm not doing things like waiting to see how often the ticking stops on its own.)

Anyone have any insight into this? Mail me. I fear I'll soon be running down the street screaming "Make them stop! The sounds in my head! Make them stop!" That's probably a bad career move.

## Posted at 10:39 in category /misc [permalink] [top]

Exploratory testing at Agile Development Conference

Elisabeth Hendrickson and I will be giving a tutorial titled "Exploratory Testing for Agile Projects" at Agile Development Conference, which will be held June 23-26 in Salt Lake City, USA. We'll be using the time until then to bash together my ideas about how exploratory testing will mutate in the specific context of agile projects, Elisabeth's ideas about the same, and Elisabeth's vastly greater experience with exploratory testing. I'll be using this blog to talk about some of what we're doing, partly to help me figure things out, partly to help me learn how to articulate ideas. (I worked with Elisabeth on the tutorial last Saturday, and I was remarkably inarticulate.)

## Posted at 07:36 in category /2004conferences [permalink] [top]

Fri, 13 Feb 2004

Mangle: Disciplinary agency

[In this blog category, I'll be explaining my understanding of Andrew Pickering's The Mangle of Practice, toward the end of helping me think through a paper.]

In this essay, I'll describe Pickering's notion of "disciplinary agency". I'll use the same mathematical example Pickering uses. In the next essay, I'll use a coding example I stumbled over while happily hacking on the plane to San Francisco. My hope is that Pickering's notion gives some insight into the semi-common idea that "the code is telling us where it wants to go".


Let's define agency as "the capacity to do things". People have agency. I use my agency to write this essay. You have the ability - the agency - to read it or to turn aside.

Generally, we think of agency as being something that humans have. In his book, Pickering proposes that it's useful to think of nonhumans as having agency. For example, in an experiment a scientist creates a machine, turns it on, and watches it. While creating it, the scientist is exhibiting agency. But when watching the machine, the scientist is passive while the machine does whatever it is that it does. We can say the machine exhibits material agency. It's no longer manipulated by the human; instead, it's in control while the human sits back.

Another type of agency is disciplinary agency, in which a human gives up control to a routine way of reacting to patterns in the world. We can say that that routine has agency. It has the ability to do things in the world that the human doesn't expect, that the human can only observe and then react to.

This pattern of humans exercising agency, then sitting back and watching while something else exercises agency, Pickering rather fancifully calls the dance of agency. He claims it's a common pattern in intellectual practice.

Pickering uses Hamilton's quaternions as a case study. In the 1800's, mathematicians had established a correspondence between algebraic equations involving imaginary numbers, like x+iy, and 2D geometry. Such an equation corresponded to a vector from the origin to the point (x,y). (x+iy)+(u+iv) corresponded to the vector (x+u, y+v). Multiplication of algebraic equations corresponded to a two-part rule: multiply the lengths of the two vectors, and add their angles.

That's fine for two dimensions. What a mathematician named Hamilton wanted to do was figure out how to link algebra and geometry in three dimensions. How can we talk about a point (x, y, z) algebraically?

Hamilton first modeled 3D after 2D. If a point (x,y) corresponds to x+iy, perhaps (x, y, z) corresponds to x+iy+jz. The z axis is perpendicular to the x axis the way the y axis is, so he said that j is an imaginary number like i. That is, jj=-1 as well as ii=-1.

That's fine, but does it work? One way to tell is to repeat what you already know how to do in the new context. That is, you apply - rotely - a discipline you already know. Hamilton knew the rules for algebra, so he started manipulating equations and seeing if the results corresponded to any sensible extrapolation (from 2-space to 3-space) of multiplying two vectors.

A first thing he did was consider the square of a point in 3-space. According to the normal rules of algebra, the square of x+iy+jz is:

x² - y² - z² + 2ixy + 2jxz + 2ijyz

This is an example of disciplinary agency. Having decided on his representation, Hamilton had no choice about how to proceed: the rules of algebra controlled, and they produced the result that they produced. Everything is straightforward, except there is a bit of a question: what does ij mean? He knew that ii=-1 and he'd assumed that jj=-1, but what was ij? He wasn't forced to answer that question yet, so he could move on.

Now Hamilton turned to geometry. He needed to extrapolate the rules for 2D into 3D, keeping fixed the idea that multiplication means multiplying lengths and adding angles. A problem arises in adding angles. When you square, there are multiple points that satisfy the addition constraint. Which to choose? Hamilton chose the one that lay on the plane connecting the original point and the x-axis.

Hamilton has here taken back agency from the discipline of algebra. But once he makes his decision, he surrenders again to the discipline of geometry. You can now square a point and, moreover, translate the result back into algebraic notation:

x² - y² - z² + 2ixy + 2jxz

Since this formula and the previous one represent the same point (just gotten at by two different disciplines of multiplication), we know that:

2ijyz=0

Having given discipline free rein to get this result, Hamilton entertained two possibilities. The first was that ij=0. The other was bolder, abandoning commutativity so that ij=-ji. (If you go back and multiply out the algebraic formula under that assumption, you'll see that you get the same result as the geometric multiplication.)

I'll cut the story short here. The abbreviated version is that he assumed either possibility held and kept multiplying. This time he multiplied two different points rather than squaring the same point. He ran into another problem that required him to accept that ij=0 couldn't work, that it had to be that ij=-ji. The problem further helped him to make the leap to thinking that ij was equal to a new imaginary k. Once he had worked out all the combinations of multiplications of i, j, and k, he'd succeeded: he had a description that allowed algebraic and geometrical multiplication to come to the same answer... except that the new imaginary k meant that he was working in four dimensions rather than the three he'd intended. He'd had one goal, but his assumptions and the results of disciplinary agency had pushed him away from it toward what he came to call quaternions. (The way goals mutate in practice is a theme of Pickering's.)
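Hamilton's finished multiplication rules can be checked mechanically. A sketch in Ruby (my representation, not Hamilton's notation): a quaternion w + xi + yj + zk as a four-element array, with the product worked out from ii = jj = kk = -1, ij = k, ji = -k.

```ruby
# The Hamilton product of two quaternions, each represented
# as [w, x, y, z] for w + xi + yj + zk.
def qmul(a, b)
  w1, x1, y1, z1 = a
  w2, x2, y2, z2 = b
  [w1*w2 - x1*x2 - y1*y2 - z1*z2,
   w1*x2 + x1*w2 + y1*z2 - z1*y2,
   w1*y2 - x1*z2 + y1*w2 + z1*x2,
   w1*z2 + x1*y2 - y1*x2 + z1*w2]
end

I = [0, 1, 0, 0]
J = [0, 0, 1, 0]

qmul(I, I)  # => [-1, 0, 0, 0], i.e. ii = -1
qmul(I, J)  # => [0, 0, 0, 1],  i.e. ij = k
qmul(J, I)  # => [0, 0, 0, -1], i.e. ji = -k: order matters
```

The last two lines are the surrender of commutativity made visible: swap the operands and the sign flips.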

So we see a pattern:

  • Use your human agency to make some extension of what you know.
  • Yield to disciplinary agency to see where that takes you.
  • When you run into resistance - something isn't working - start thinking. Make a new extension.
  • Yield to disciplinary agency again, and see where it takes you now.
  • Repeat until you're happy.

That's a dance of agency.

## Posted at 06:19 in category /mangle [permalink] [top]

Sat, 07 Feb 2004

All pairs of parameters

Interesting note from Tim van Tongeren:

Based on this information we can determine: Of the 383 medical devices recalled by the FDA between 1983 and 1997, 106 (27.6%) of the bugs could have been caught by testing all pairs of parameter settings.
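For the curious, here's a sketch in Ruby of what "testing all pairs" means (the device parameters are hypothetical): every value of every parameter must appear together with every value of every other parameter in at least one test. With three two-valued parameters, four well-chosen tests cover all pairs, versus eight for all combinations.

```ruby
# All-pairs coverage check for a made-up device configuration.
params = {
  mode:    [:manual, :auto],
  voltage: [:low, :high],
  alarm:   [:on, :off],
}

# Every pair of (parameter, value) settings that must co-occur:
names = params.keys
required = names.combination(2).flat_map do |p1, p2|
  params[p1].product(params[p2]).map { |v1, v2| [[p1, v1], [p2, v2]] }
end

# Four tests suffice here, versus 2*2*2 = 8 for all combinations:
tests = [
  { mode: :manual, voltage: :low,  alarm: :on  },
  { mode: :manual, voltage: :high, alarm: :off },
  { mode: :auto,   voltage: :low,  alarm: :off },
  { mode: :auto,   voltage: :high, alarm: :on  },
]

covered = required.all? do |(p1, v1), (p2, v2)|
  tests.any? { |t| t[p1] == v1 && t[p2] == v2 }
end
puts "all pairs covered: #{covered}"
```

The savings grow quickly: with many parameters, an all-pairs suite is a tiny fraction of the full cross product, yet (per the FDA study above) it would have caught over a quarter of those bugs.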

## Posted at 14:54 in category /testing [permalink] [top]

Mangle: What the book's about

[In this blog category, I'll be explaining my understanding of Andrew Pickering's The Mangle of Practice, toward the end of helping me think through a paper.]

The book is about how practice - doing things - causes change to happen in science and technology.

When you do things, you do things to things. You tweak, twiddle, or frob them. Pickering is concerned with a wide range of "the made things of science", a category that includes machines and instruments, scientific facts and theories, skills, social relations, rules of evidence, and so on. Part of his point is that scientists who do things put every thing in science up for grabs, up for revision. For example, one response to a line of enquiry that isn't fitting the rules is to change the rules. (Echoes of Feyerabend here.)

Pickering claims that you can see a regular pattern in the change of science. People start with some goal - to create another "made thing", be it a mathematical theory or a bubble chamber. They model their goal after some existing made thing. A theory of the three-dimensional correspondence between algebra and geometry is modeled on the existing two-dimensional correspondence. The bubble chamber is modeled on the cloud chamber.

In the course of moving toward the goal, people encounter resistance. It's not just people pushing at the world (broadly construed); it's the world pushing back. That sounds pretty trivial, but Pickering is asking us to take Kent Beck seriously when he says (as he does in Smalltalk Best Practice Patterns), "Since the code seems to be telling us to do this, let's try it." (p. 3)

Resistance is accommodated by adjusting any of the made things available. Sometimes the accommodations work; sometimes they don't. You have to keep on trying. Here's a picture I drew of how change happens. The big blob is the made things you start from. The little blobs are tentative extensions. They keep hitting resistance, in the form of the T shapes. The blobs change color to show that the extensions are flexible - they are not where you were planning on going when you started. The final location is a funny shape to suggest that you should expect to get somewhere unexpected.

Pickering emphasizes the role of chance, the degree to which your choices are dictated by the specific resistances you encounter. Those resistances are not predictable in advance. This undercuts the feeling of the inevitable progress of science; it gives more of a role for the accidents of history. For example, Pickering allows for a chemistry as competent as ours - as capable of doing things in the world - that never happened to come up with the periodic table of the elements.

That's a pretty scary notion when it comes to science. Can it be the periodic table isn't real? It's interesting that I read an article in Science News about some specialist (geophysicist, I think) who'd created a completely different periodic table. Elements appear in more than one place, for example, because that makes sense for his field. One could get into long arguments about which of the two tables is more true to nature, but I'm not gonna. One could speculate that a world in which geophysics was more important than chemistry would have invented his table first and maybe never bothered with Mendeleev's - but I'm not gonna do that either.

I'm not going to because I'm a crass pragmatist, mostly interested in building software and in the evolution of agile methods. For software projects, chance and history so clearly play a role that you won't get embroiled in the equivalent of the science wars for saying "the feature set of Microsoft Word was not inevitable". I even hope that, in my paper, I'll be able to say that the composition of Extreme Programming isn't inevitable - that its state today depends on chance happenings at Xerox PARC, Tektronix, University of Oregon, Chrysler, and Ward Cunningham's office (where one day he decided to make a wiki).

So when does this process of change stop? When do you say you have a bubble chamber, quaternions, the periodic table of the elements, the methodology called Extreme Programming? It stops when good enough associations are made between distinct made things in the culture. "Good enough" means those things serve to stabilize each other. For example, in Morpurgo's search for free quarks, he worked until he had a machine that produced consistent effects, and he could explain those effects with a theory of how the machine worked, and the effects supported one of two theories about free quarks. His changing machine, his changing theory of apparatus, and a preexisting theory of quarks hung together.

(For a related take on how change stops, see actor network theory.)

## Posted at 10:59 in category /mangle [permalink] [top]

Tue, 03 Feb 2004

Investigative Journalist

[Update: corrected misspelling]

Dave Liebreich talks about people who are not business experts interviewing people who are. He uses investigative journalism as a model. What struck me most was this:

The interviewee *believes* the interviewer has a perspective different from their own, or has an incomplete understanding of some of the areas.

That strikes a chord with me, as I'm interested not just in person X's job, or in person Y's job, but in how they think about each other's job, and in how they attempt to structure their interactions with respect to those thoughts. I'm interested in what's between roles. That accounts for my interest in boundary objects and in some of the literature on "marginal" or "liminal" people - those who live on the boundaries between cultures and cross-pollinate them. (The Constant Reader will guess that I fancy myself one of those people.)

In another entry, Dave writes:

But I still run scripts with tracing turned on, every now and then, just to get a feel for the rhythm of the system and maybe discover something that is off a beat or two.

Part of what I conceive of as the promise of the Agile methods is that they will help make phrases like "the rhythm of the system" or "what the code is trying to tell us" not seem wacko. They're a groping way of expressing something that has real consequences.

## Posted at 07:11 in category /misc [permalink] [top]

Mon, 02 Feb 2004

More on exploratory data analysis

Kind reader Alexandre Drahon writes in response to my EDA lament:

Although Tukey's book is expensive or/and difficult to find, there are some resources available to get a first insight into EDA. A general introduction is available on the Web http://www.itl.nist.gov/div898/handbook/eda/eda.htm (I'm not qualified to say if it is a good introduction, but it was useful for me). There are some references to EDA in the field of data visualization and visual data-mining (for instance http://www.cis.hut.fi/~sami/thesis/thesis_tohtml.html). For french speaking people there is a swiss researchers association dedicated to the subject http://www.unige.ch/ses/sococ/mirage/

I hope you can use one of this resources to introduce EDA to other people. Edward Tufte speaks a lot about John Tukey, maybe his website's forum is a good place to search http://www.edwardtufte.com/bboard/q-and-a?topic_id=1

## Posted at 20:29 in category /testing [permalink] [top]

Sat, 31 Jan 2004

Exploratory Data Analysis

Well, this is depressing.

Some background: Dawn spent one month early in our marriage visiting the mastitis research labs of the U.S. I tagged along. Now, mastitis research labs are often found in less exciting places. What was I to do with my time?

Part of what I did was work through problems from John Tukey's Exploratory Data Analysis. John Tukey invented many ways of visualizing and exploring data. For example, he invented the box and whisker chart.

His book basically created a subfield of statistics, dubbed EDA.

I was attracted to EDA because of my long-standing love/hate relationship with metrics. Tukey seemed to me to have a love of numbers, of the detail they reveal, of the insights they can spark. Yet this dean of statisticians was at the same time wary of how easy it is to misuse numbers. Rather than jumping right to means, standard deviations, and curve-fitting, he emphasized pondering outliers and shapes of curves as a first way to get insight into the process under the data. Tukey seemed so much more sensible than so many software metrics people.

So today I was reading a blog entry about metrics by Alberto Savoia, someone who I think has a pretty sensible attitude toward numbers. (Full disclosure: I've received consulting dollars from Alberto's company, and I plan to receive more in the future. But I chose him to write three articles on load testing for what is now Better Software magazine in part because of his attitude, not because I foretold he'd give me money years later.) While I was thinking my boringly habitual cautionary thoughts - "How are the bad managers out there going to abuse this?" - I suddenly remembered EDA. I thought I would recommend on this blog that people get the book. I envisioned conference discussions about incorporating shapes and outliers into Big Visible Charts and intranet dashboards.

Then I noticed the price of the book. US$118 on bn.com. Completely unavailable at Amazon. Jeez. Other EDA books I remember seem to be out of print; one is $100. Both books used to be priced for undergraduate courses, and now they're priced for niche readers.

Back then, I'd bought Stata, a stats package, because it emphasized EDA. It still has the graphs and the stats, but Google and I could find only one reference to "exploratory" or "EDA" on the site (in the $100 book's blurb).

So that's what's depressing: a promising subfield that I'd hoped to turn my betters on to... seems to have practically vanished. Bummer.

Readers might want to check Tukey's book out of a university library. The book is pre-computer, so you get text on how to tally accurately by hand (probably not the way you do it). And Tukey's writing and typography styles are idiosyncratic. I found them kind of charming; you might not.

## Posted at 14:26 in category /testing [permalink] [top]

Bindings in Ruby

Jim Weirich has a nice little writeup on bindings in Ruby. Though he's writing in a different style, his piece's feel of progressive revelation is very much what I'm aiming at in my A Little Ruby, a Lot of Objects. I need to find some deadline mechanism to make me restart that book.
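If you haven't met bindings, here's a toy example (mine, not Jim's): a Binding captures the local variables of the place where it was made, and eval with that binding can read and change them long after the method has returned.

```ruby
# A Binding keeps a method's locals alive after the method returns.
def make_counter
  count = 0
  binding   # captures this method's local variables
end

b = make_counter
eval("count += 1", b)
eval("count += 1", b)
eval("count", b)   # => 2
```

That the captured `count` survives and mutates is the same closure-like behavior Jim builds on.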

## Posted at 09:34 in category /ruby [permalink] [top]

Thu, 29 Jan 2004

Order of tests

Suppose you have a set of tests, A through Z. Suppose you had N teams and had each team implement code that passed the tests, one at a time, but each team received the tests in a different order. How different would the final implementations be? Would some orders lead to less backtracking?

I decided to try a small version of such an experiment at the Master of Fine Arts in Software trial run. Over Christmas vacation, I collaborated with my wife to create five files of FIT tests that begin to describe a veterinary clinic. (She's head of the Food Animal Medicine and Surgery section of the University of Illinois veterinary teaching hospital - that's her in the top middle picture at the bottom of the page.) I implemented each file's worth of tests before we created the next file.

  1. 000-typical-animal-progress.html
  2. 1A-orders.html
  3. 1B-when-payment-ends.html
  4. 1C-state-pays.html
  5. 1D-accounting.html

An interesting thing happened. When I got to the fifth file (1D), I had to do a lot of backtracking. One key class emerged, sucking code out from a couple of other classes. I think a class disappeared. After cruising through the first four files, it felt like I'd hit a wall. I'd made some bad decisions with the second file (1A), stuck with them too long, and was only forced to undo them with the fifth file. (Had I been attentive to the small, still voice of conscience in my head, I might have done better. Or maybe not.)

At the trial run, we spent four or five hours implementing. Sadly, only one of the teams finished. They did 1D before 1A. (Their order was 000-1D-1C-1B-1A.) What was interesting was that they thought 1D was uneventful but 1A was where they had to do some serious thinking. I got the feeling that their reaction upon hitting 1A was somehow similar to - though not the same as - my reaction upon hitting 1D. That's interesting.

Here are some choice quotes:

Brian: Am I right in remembering that D was no problem, but that things got interesting at A (which is the opposite of what I observed while taking them in the other order)?

Avi: That's right.

'A' changed some of the "ground rules" that we had been assuming about the system. I think the biggest deal was that, up to that point, all "orders" had been linear transitions from one status to another - from intensive care to normal boarding to dead, for example. Suddenly, there were all different kinds of orders that interacted in complex ways, some of them could be active simultaneously, and they had an effect on far more things than just the daily rate. At this point, both the state of the system and the conditional behavior based on the current state, became complex enough that many more things needed to be modelled as classes that previously had gotten away with being simple data types. It was the first time the code was threatening to become anything like the kind of OO design you would have done if you had sat down and drawn UML diagrams from the start.

Chad: It felt to me like that feeling I get when I'm doing something in Excel and I run into a scenario where pivot tables just aren't cutting it. Suddenly, I need a multi-dimensional view of the data, and I realize that the tool I have isn't going to work. So, it was kind of a flat to multi-dimensional transition.

Since we were intentionally avoiding the creation of new classes or abstractions of any kind (as an experiment), we were facing a rewrite to move further.

Given the fact that our brittle code was starting to take the shape of classes that *wanted* to spring into existence, I wonder how much better the code would have been if we would have done classic test-driven development without the forced stupidity. Unfortunately, it's impossible to conduct a valid experiment to test this without a prohibitively large sample size. Who knows--you may have found an example that will generally cause developers to box themselves into a corner.

If Avi and I could forget the exercise completely, it would be fun to go back and try to do TDD while overly abstracting everything to see if we ran into the same issues.

Another pair had an experience slightly similar to mine. They did 000-1C-1B-1A and then started on 1D. One of them says:

The only discontinuity we felt was at D where we realised we needed to have an enhanced accounting mechanism. The rest of the tests exhibit the expected feeling of tension and then release as we added stuff to the fixture and then refactored it out. D felt different to me because unlike the others (in our ordering) D did two things:

  • It was a significant increment in requirements above and beyond the simple balance model. It was a larger step from a code complexity level than the others.

  • It broke an assumption that was woven through the accounting code.

What occurred to me at the time was that this is an example of change that you'd like not to happen in a real system. We didn't finish D but it would have been easy to fix. If that had happened in the last iteration before UAT it would have been a lot scarier.

Interestingly I didn't feel we had made a mistake, we had decided to not look ahead and do the trivialest thing, we had just learnt something new and needed to deal with it.

What do I conclude from this? Well, nothing, except that it's a topic I want to pay attention to. I don't think we'll ever see a convincing experiment, but perhaps through discussion we'll develop some lore about ways to get smoother sequences of tests.

If anyone wants to play with the tests, you can download them all. You'll also want the FIT jar file; it has a fixture I use in the tests. Warning: you will need to ask clarifying questions of your on-site customer with expertise in running a university large animal clinic. Oh, you haven't got one? Mail me.

## Posted at 11:32 in category /mfa [permalink] [top]

Tue, 27 Jan 2004

A paper to write

I'm sitting in on a sociology of science seminar at the University of Illinois. It's about Andrew Pickering's The Mangle of Practice: Time, Agency, and Science. The idea is that participants will write a paper that may be included in an edited volume of follow-ons to Pickering's book.

I'll be using this blog category to summarize Pickering's ideas as a way of getting them clear in my mind. The basic idea is one of emergence, that novelty and change arise from collisions. A scientific fact might arise from the collision of a theory of the world, a theory of instrumentation, and the brute fact of what the instrument does when you use it. A mathematical fact might arise from bashing together algebra and geometry in an attempt to achieve certain goals.

What attracts me to Pickering's work is what attracts me to the agile methods: an emphasis on practice, on doing things in the world; the idea that the end result is unpredictable and emergent; the idea that everything is up for grabs, up for revision, including the goals you started with; and a preference for the boundaries of fields over their purest forms.

The paper I'm thinking of writing is "The Mangling of Programming Practice from Smalltalk-80 to Extreme Programming". I think it's fairly well-known that the agile methodologists were disproportionately involved in Smalltalk and patterns way back when. What was the trajectory from Kent Beck's and Ward Cunningham's early days at Tektronix to the development of XP as it is today? It's a historical oddity that Smalltalk found a niche in the IT/insurance/business world. What was the effect of bashing the untidiness and illogicality of that world against the elegance of "all objects, all the time"? We see CRC cards being described in 1989. Today, the 3x5 card is practically a totem of XP. So are colocation, big visible charts, and so forth. Here we see an ever-increasing recognition of how the material world interacts with what is so easy to think of as a purely conceptual world. What's going on? Etc.

Does Pickering's account of change shed any light on how we got from Xerox PARC to where we are today? And, more important to my mind, does it give us any ideas about where we really are and how we might proceed?

## Posted at 21:03 in category /mangle [permalink] [top]

Types of bugs

Jonathan Kohl writes about the superbugs that remain after test-driven design. That inspired this ramble.

Long ago, I was much influenced by mutation testing, an idea developed by DeMillo, Hamlet, Howden, Lipton, and others. (Jester is an independent reinvention of the idea.) "Strong" mutation testing goes like this:

  1. Suppose you created a huge number of "mutants" of a program. A mutant is created by changing one token in the original program. For example, you might change a < to a <= or you might replace a use of variable i with a use of variable j. These one-token changes are the mutant transforms.

  2. Run your existing test suite against the original and the mutants. A mutant is killed if it gives a different answer from the original. The test that kills the mutant has the power to discover the bug in the mutant. (Assuming that the original is correct; alternately, the test might discover that the original is wrong in a way that the mutant corrects.)

  3. Add tests until you kill all the mutants (but see note 3 below). What do you now know? You know that the program is free of all bugs that can be caused by your set of mutant transforms.

  4. So? What about the other bugs? Early mutation work made two explicit assumptions:

    • The competent programmer hypothesis: Most bugs are such one-token bugs.

    • The coupling hypothesis: A test suite adequate to catch all one-token bugs will be very very good at catching the remainder.

    Given those hypotheses, a mutation-adequate test suite is generally adequate. That is, it will catch (almost) all bugs in the program.
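The scheme in the steps above can be sketched in a few lines of Ruby. This toy (my illustration, not code from any of the tools mentioned) mutates one token of a `max` method's source and checks which mutants the test inputs can kill:

```ruby
# Toy "strong" mutation testing: make one-token mutants of a method's
# source, then see which ones the test inputs distinguish from the
# original.
ORIGINAL = "def max(a, b); (a > b) ? a : b; end"

MUTATIONS = [          # one-token transforms: from-string, to-string
  ["a > b", "a >= b"],
  ["a > b", "a < b"],
  ["? a :", "? b :"],
]

TESTS = [[1, 2], [2, 1], [3, 3]]   # the "test suite": argument pairs

# Evaluate a version of the source and call it on one test input.
def run(source, a, b)
  Object.new.instance_eval("#{source}; max(#{a}, #{b})")
end

results = {}
MUTATIONS.each do |from, to|
  mutant = ORIGINAL.sub(from, to)
  killed = TESTS.any? { |a, b| run(ORIGINAL, a, b) != run(mutant, a, b) }
  results[to] = killed ? :killed : :survived
  puts "#{from} -> #{to}: #{results[to]}"
end
```

Note that the `>=` mutant survives no matter what tests you add; it's an equivalent mutant, which turns out to be one of the problems with the whole scheme.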

I quickly grew disillusioned with mutation testing per se.

  1. The number of mutants is enormous (it's often O(N²) in the size of the code under test). Most are easy to kill, but some aren't. So mutation testing is a lot of work. Worse, in my experiments, I didn't get the feeling that those last hard-to-kill mutants told me anything profound about my program.

  2. Some mutants are equivalent to the original program. Compare

    	def max(a, b)
          if a > b
            a
          else
            b
          end
        end
          
    to
    	def max(a, b)
          if a >= b
            a
          else
            b
          end
        end
          
    No test can distinguish these programs, so you have to spend time figuring out whether an as-yet-unkilled mutant is unkillable. When you do - which isn't necessarily easy - it's a let-down.
  3. I don't believe the competent programmer hypothesis because we know that faults of omission are a huge percentage of the faults in fielded products.

  4. I find it hard to believe that mutation-adequate tests are adequate to catch enough faults of omission, so I don't buy the coupling hypothesis either.

Nevertheless, I was - and am - enamored of the idea that after-the-fact testing should be driven by examining the kind of bugs that occur in real programs, then figuring out specific "test ideas" or "test requirements" that would suffice to catch such bugs. Back then (and sadly still sometimes today), test design was often considered a matter of looking at a graph of a program and finding ways to traverse paths through that graph. The connection to actual bugs was very hand-wavey.

It was my belief that programmers are not endlessly creative about the kinds of bugs they make. Instead, they code up the same kinds of bugs that they, and others, have coded up before. Knowledge of those bug categories leads to testing rules of thumb. For example, we know to test boundaries largely because programmers so often use > when they should use >=.

It was my hope back then that, by studying bugs, we could come up with concise catalogs of test ideas that would be powerful at finding many likely bugs. I published a book along those lines.

(I was by no means the only person thinking in this vein: Kaner, Falk, and Nguyen's Testing Computer Software had an appendix with a similar slant. And Kaner has students at Florida Tech extending that work.)

What test ideas were novel in my book were based on bugs I found in C programs. For some time, I've thought that different programming technologies probably shift the distribution of bugs. Java programs have many fewer for loops walking arrays than C programs do, so there'll likely be fewer off-by-one bugs and less need for boundary testing. Languages with blocks/closures/lambdas encourage richer built-in collection functions, so are likely to have even fewer implementation bugs associated with collections. Etc.

As I've gotten more involved in test-driven design, it seems to me that micro-process will probably also have a big effect. Jonathan's note increases my suspicion. So now I'm thinking of things we might do at the Calgary XP/Agile Universe conference, which looks set to be a hub of agile testing activity.

  • Test-first programmers should learn to write better tests as they get feedback from missed bugs. Yet we don't have the sorts of catalogs that we have for refactorings, patterns, or code smells. Nor do we have a folklore of bug prevention. What can we do in Calgary to kick-start things?

  • How should people collaborate to reduce the bugs that slip past code reviews? Jonathan is pushing hard to understand tester-programmer collaboration. Since he'll be at Calgary, maybe we should do something - have programmers adopt testers and vice versa? - so that everyone can accelerate their learning.

It's too late to submit formal proposals to XP/AU, but there's lots of scope for informal activities.

## Posted at 21:03 in category /agile [permalink] [top]

Sun, 25 Jan 2004

Code-reading practices

My first event in the Master of Fine Arts in Software trial run was a lecture on code-reading in the style of the literary critic Stanley Fish. His "affective stylistics" has one read a poem (say) word-by-word, asking what each word does for the reader. What expectations does it set up? or overturn? What if that word were omitted or moved elsewhere? (I've written on a similar topic earlier, and I drew my examples from that entry.)

I compared idiomatic Lisp code, "C-like" Lisp code, idiomatic C code, and Lisp-like C code to show how expectations and membership in "interpretive communities" influence readability. In the process, I learned something unexpected.

I presented code like this to Dick Gabriel, expecting he would think it an idiomatic recursive implementation of factorial.

(defun fact (n &optional (so-far 1))
  (if (<= n 1)
      so-far
      (fact (- n 1) (* n so-far))))

Note: because of my presentation's structure, I originally named the function f so as not to give away immediately that it was factorial. I don't think that's germane to this note, so I'm giving it the clearer name here.

He didn't think it was idiomatic, not really. He found it somewhat old-fashioned, preferring an implementation that replaces the optional argument with an internal helper function (introduced by the labels form).

      (defun fact (n)
        (labels ((f (n acc)
                   (if (<= n 1) acc (f (- n 1) (* n acc)))))
            (f n 1)))
    

Now, I always hated labels. What's the difference between Dick and me? It appears to be reading style. As I understand it from him, truly idiomatic Lisp reading style goes like this:

  1. Look for a key name (fact(n)).

  2. Quickly skip down to the code of maximum density.

               (if (<= n 1) acc (f (- n 1) (* n acc)))))
          
    That's the important code. If that's not clear, find the declarations that clarify it by scanning upward. The most important ones will be nearby.

The labels version of the code fits that. The reading style and writing style are "tuned" to each other. It does not fit my reading style, which is to read linearly through functions (though I do bounce around among functions). So the labels verbiage at the front slows me down. I expect the interior names to be more intention-revealing than they need to be when they're just placeholders to make interesting ideas invokable. Because I don't know the visual cues that say "Pay attention here!", I may do more memorization of facts that turn out to be unimportant.

It's arguable that my reading style is just flat-out worse, but I do think that tuning reading to writing is a more useful way to think about it.
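Since this blog leans Ruby: here's my rough translation of the two styles, purely to make the contrast concrete (the names and code are mine, not from the post's Lisp):

```ruby
# Accumulator threaded through an optional argument, like the
# &optional version above:
def fact(n, so_far = 1)
  n <= 1 ? so_far : fact(n - 1, n * so_far)
end

# Accumulator hidden inside an internal helper, like the labels
# version: the dense conditional sits in one local lambda.
def fact2(n)
  f = nil
  f = lambda { |m, acc| m <= 1 ? acc : f.call(m - 1, m * acc) }
  f.call(n, 1)
end
```

Both compute the same thing; the difference is only where a reader's eye has to land first.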

All this may seem small, but it reinforces my idea that attending closely to the act of reading will yield some Aha! moments to improve our practice.

## Posted at 10:41 in category /mfa [permalink] [top]

Fri, 23 Jan 2004

Community

As someone who wants us all to join together under the banner of Agility, I'm a big fan of the rhetoric of community. And I do feel a sense of community, a sense of belonging. Nevertheless, I'm nervous about Community as a metaphor. What does it suggest that's misleading?

I've not studied the social science literature on community, but I should. Ben Hyde has read more than I have. I recommend two postings:

I'm particularly taken by the first posting's description of rituals and traditions. Something clicked when I read that. My goal is to merge some part of the testing community into the agile community. In the process, the former will be transformed more than the latter. Some of the rocky experiences I've been having can perhaps be explained by my inattention to rituals and traditions.

## Posted at 17:27 in category /misc [permalink] [top]

GUI testing prejudices

Someone on the agile testing mailing list asked about testing GUIs in agile projects. Here's my reply, somewhat edited.

My prejudices today, January 22 2004, go like the following. I hope that in a year they're different. It would be sad if neither I nor the field changed.

  • Make sure you have good automated tests for the domain model / guts before worrying about automating the GUI. This automation is done by programmers, testers, and the business expert, each contributing according to skill and need.

  • You will always do exploratory manual testing, so get good at it. I especially like to use exploratory testing to seek out omissions in the original story, what Kevin Lawrence calls "testing around the edges of a story". But exploratory testing should also concentrate on what the stories don't. If stories are about features, make exploration be about user tasks. If the stories assume expert users, do exploratory testing that favors novices.

    I like having the whole team do exploratory testing as part of an end-of-iteration ritual. You may need other exploratory testing besides; let experience drive you to it.

For testing web GUIs:

  • In some styles of implementation, the GUI is a transform from objects (the model) to text (HTML). Sounds like a job for unit testing. I'd rely heavily on that. The GUI implementor really ought to poke manually at the GUI after changing it; I'd see if there are ways to have her do it exceptionally skillfully. If poking is too hard (too many screens to navigate through), I'd make it easy.

  • I'd use one of the freeware drive-IE-via-COM tools (like WTR) to make some smoke tests, but this would be lowish priority.

  • Javascript is a pain for testing. I'd use the above freeware tools to find some way to let me unit test it. But, again, I'd think carefully about when manual testing suffices.

  • Configuration testing (IE vs. Mozilla vs. Safari vs. Netscape 4.71 with cookies turned off vs. ...): Oh Lordy. I don't know enough about it, so I would seek the advice of someone who does.
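The first bullet's "transform from objects to text" idea can be as plain as this sketch. `ItemView` and its markup are hypothetical, invented here just to show the shape:

```ruby
# Treat the GUI as a pure function from model objects to HTML text;
# then it's unit-testable without a browser. ItemView is a made-up
# example class, not part of any framework.
class ItemView
  def initialize(item)
    @item = item
  end

  def to_html
    %{<li class="item">#{@item[:name]} ($#{@item[:price]})</li>}
  end
end

html = ItemView.new(name: "widget", price: 3).to_html
puts html
```

A test can then assert directly on the string, with no browser or GUI driver involved.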

For testing the other type of GUIs:

  • Exploratory manual testing and programmer manual testing is my first defense. Actually, my first defense is making the GUI thin: get that business logic where it belongs!

  • I've heard good things from people who drive the GUI from "underneath", in the sense that they send instructions to the app (via something like XML-RPC), then the testability code in the app inserts events in whatever event queue the GUI framework provides, then the app does something, then state (GUI and perhaps model) is queried via the XML-RPC route.

  • Probably before that, I'd do some serious hunting for writeups by those people who hang out on TestFirstUserInterfaces@yahoogroups.com.

For both types:

  • It's easy to get sloppy or mindlessly repetitive in manual testing. Pairing is a way of keeping attention and discipline up. I'd try to make sure to apply it.

I refer to these as "prejudices" to reinforce to myself that they're just a starting point - in two ways. First, they're a starting point for my thinking as I approach a new project. Second, whatever approach a project starts with must change as the project progresses.

The word "prejudices" also reminds me that I haven't done enough thinking, nor enough practice.

I think I picked up this explicit use of the word from Bret Pettichord.

## Posted at 07:50 in category /testing [permalink] [top]

Thu, 15 Jan 2004

A new science metaphor for testing

On p. 166 of Laboratory Life, Bruno Latour and Steve Woolgar discuss how a group of researchers armored their factual claims against objection.

Parties to the exchange thus engaged in manipulating their figures [L&W don't mean that in the dishonest sense], assessing their interpretation of statements, and evaluating the reliability of different claims. All the time they were ready to dart to a paper and use its arguments in an effort not to fall prey to some basic objection to their argument. Their logic was not that of intellectual deduction. Rather, it was the craft practice of a group of discussants attempting to eliminate as many alternatives as they could envisage. [Italics mine.]

One common metaphor for software testing is drawn from the description of science most associated with Karl Popper. A theorist proposes a theory. Experimentalists test it by seeing if its consequences can be observed in the world. If the theory survives many tests, it is provisionally accepted. No theory can ever be completely confirmed; it can only be not refuted.

There's a natural extrapolation to testing: the programmers propose a theory ("this code I've written is good") and the testers bend their efforts toward refuting it.

I find both the science story and the testing story arid and disheartening: a clash of contending intellects, depersonalized save for flashes of "great man" hero worship. ("He can crash any system." "Exactly two bugs were found in his code in five years of use.")

Meanwhile, in Latour and Woolgar's book, a team is working together to create an artifact - a paper to submit - that's secure against as many post-submittal attacks as they can anticipate.

For a variety of reasons, I think that's a better metaphor for testing. Testers and programmers work together to create an artifact - a product to release - that's secure against as many post-delivery attacks as they can anticipate. Here, an "attack" is any objection to the soundness of the product, any statement beginning "You should have done..." or "It's not right because...".

Consequences?

  • Just as scientists both review drafts and help each other in the writing, it's natural for testers to both test after and test first.

  • We needn't cling to a harsh separation of roles. In my wife's lab, there are no pure critics. Everyone does experiments, everyone critiques drafts, everyone collaborates on drafts. Could the same kind of thing happen in software? I suspect not. The consumers of my wife's papers are researchers like her. That makes it easy for people like her to anticipate the attacks of people like her. In software, the consumers are different than the producers, which complicates things. Still, the theorist/experimenter analogy makes a split between testers and programmers fundamental and, in a sense, unquestionable. I'd rather see it as undesirable, something to minimize.

  • The Popperian metaphor allows testers to think their responsibility is only to find bugs. They succeed if they find bugs. But many products also need to be malleable. They need to survive an unending series of "attacks" in the form of feature requests. Since testers have no stake in making the product malleable (though they do have a stake in making their automated tests malleable), they will not help the team succeed in those terms and they are likely to decrease malleability. In the Latour/Woolgar metaphor, testers share responsibility for armoring the product against all objections to its transcendental goodness. So they're more likely to push for a product-wide optimum than merely a testing-wide one.

## Posted at 07:59 in category /testing [permalink] [top]

Sun, 11 Jan 2004

Master of Fine Arts in Software trial run results

The MFA for Software trial run ended yesterday. It was quite an experience. It was a success, in these senses:

  • Is the idea of a low-residency crafts/arts-based postgraduate degree in software a good one? Would people be justified in spending their money on it? Yes.

  • Can the idea be implemented? Is there a path from what we did this past week to a smoothly running and well-tuned program? Yes. I think we know what such a program would look like, and we have solid ideas about ironing out the glitches.

  • Do the students want to come back and help fix the glitches? Yes, all of them. (!)

  • Can the path be traversed reasonably quickly? Yes. I think one more trial run should be enough to let us offer the program for real.

The event triggered a lot of ideas in my head. Over the next couple of weeks, I'll be documenting the ones that stick. Other people at the event have blogs, too. And there's also an MFA wiki and mailing list.

## Posted at 09:00 in category /misc [permalink] [top]

Thu, 01 Jan 2004

Tests as documentation for later readers: an example

I think it's hard for people to understand what I mean when I talk about product-level tests driving coding. It's easy to think that the actual executable tests are the only thing the programmer uses to understand the problem to be solved. That's not true. There's conversation with other people. And there are annotations attached to the tests. Thinking about what it would mean for tests alone to drive coding gives little or no understanding of what I really mean.

Conversation is hard to show in a blog, but I can show what I mean by annotations. What follows are the actual tests I used to add a new type of fixture to FIT (a table-driven testing framework). Twelve people will soon be using this fixture, and I wanted these tests also to help them understand how to use it. But what I mainly used them for was guiding me, step-by-step, through the coding.

The example is as I wrote it, except that I removed the egregious glop that Word insists on putting in HTML files. Looking back on it from a few days' remove, it has obvious weaknesses when you approach it expecting user documentation. But I think such flaws are typical, so I will not fix them just to save myself embarrassment.

It's likely the tests will be confusing to people who don't know FIT. One of the characteristics of Agile documentation, I think, is that it does not attempt to educate the reader into the tacit and explicit knowledge that can be had in other ways. In this case, the other way is experience with FIT (which, I think, usually means reading the FIT code to answer questions). More commonly, the knowledge is had by being part of the development team.

This is an odd case, because what follows uses a tool to test an addition to itself. The Java code that lies behind the tables may help, so that's included afterwards.

NOTE: If you're attending the MFA for software trial run - this means you, Chad - don't read these tests. I need fresh reactions for my first lecture.

 

These tests are about the StepFixture

| fit.StepFixture |

A StepFixture alone is not useful. Instead, it's subclassed. We'll use StepFixtureTestFixture.

| fit.StepFixtureTestFixture |

Steps are designated by the first cell of a column. Arguments are in following cells. check is a special name: it calls the no-argument method in the second column and compares the result to the value in the third column.

| fit.StepFixtureTestFixture |
| check | stringIsEmpty  | true |
| add   | one string arg |      |
| check | stringIsEmpty  | false |
| check | currentString  | 1:/one string arg/ |
| add   | arg 1          | arg 2 |
| check | currentString  | 1:/one string arg/2:/arg 1/+/arg 2/ |
| add   |                |      |
| check | currentString  | 1:/one string arg/2:/arg 1/+/arg 2/3:// |

State is maintained across tables, but you can reset it.

| fit.StepFixtureTestFixture |
| check   | currentString | 1:/one string arg/2:/arg 1/+/arg 2/3:// |
| restart |               |      |
| check   | stringIsEmpty | true |

Because StudlyCaps style is not fit for humans, check arguments and other steps can be written in a more friendly style.

| fit.StepFixtureTestFixture |
| check        | string is empty | true |
| add a string | a string        |      |
| check        | current string  | 1:/a string/ |
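The friendly-name trick amounts to StudlyCaps conversion. Here's a sketch of the idea in Ruby (mine; the real fixture is the Java at the end of this post):

```ruby
# Convert a human-friendly step name into its StudlyCaps method name,
# e.g. "string is empty" -> "stringIsEmpty". First word stays
# lowercase; later words are capitalized and joined.
def method_name(friendly)
  first, *rest = friendly.split
  ([first] + rest.map { |w| w.capitalize }).join
end
```

So "add a string" finds a method named addAString, matching the Java fixture below.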

Arguments can be any type that FIT knows how to convert.

| fit.StepFixtureTestFixture |
| set a date     | January 19, 1960 |      |
| check          | date             | January 19, 1960 |
| set an integer | 45               |      |
| check          | integer          | 45   |

You shouldn't use two methods with the same name and same number of arguments. The results are undefined, but doubtless not good.

A check step can also take extra arguments. The last cell remains the expected value, but cells before it are passed to the method.

| fit.StepFixtureTestFixture |
| set a date | February 9, 1906 |     |    |    |
| check      | date             | day | 9  |    |
| check      | plus             | 1   | 20 | 21 |


package fit;

import java.util.Date;
import java.util.GregorianCalendar;
import java.util.Calendar;

public class StepFixtureTestFixture extends StepFixture {

    private String result = "";
    private int count = 0;


    private String countPrefix() {
        count++;
        return count+":";
    }

    private String delimited(String s) {
        return "/"+s+"/";
    }

    public void add() {
        result += countPrefix() + delimited("");
    }

    public void add(String s) {
        result += countPrefix() + delimited(s);
    }

    public void add(String s, String s2) {
        result += countPrefix() + delimited(s) + "+" + delimited(s2);
    }

    public void addAString(String s) {
        add(s);
    }

    public String currentString() {
        return result;
    }

    public boolean stringIsEmpty() {
        return currentString().equals("");
    }

    private Date someRandomDate;
    private int someRandomInteger;

    public Date date() {
        return someRandomDate;
    }

    // Only "day" is supported here; the argument is otherwise ignored.
    public int date(String day) {
        GregorianCalendar cal = new GregorianCalendar();
        cal.setTime(someRandomDate);
        return cal.get(Calendar.DAY_OF_MONTH);
    }

    public int integer() {
        return someRandomInteger;
    }

    public int plus(int first, int second) {
        return first+second;
    }

    public void setADate(Date date) {
        someRandomDate = date;
    }

    public void setAnInteger(int integer) {
        someRandomInteger = integer;
    }
}


## Posted at 18:14 in category /testing [permalink] [top]

About Brian Marick
I consult mainly on Agile software development, with a special focus on how testing fits in.

Contact me here: marick@exampler.com.

 
