Fri, 25 Jul 2003
At Agile Fusion, the team I wasn't on built some "change detectors" for Dave Thomas's weblog. If I understand correctly, they stored snapshots of sample weblogs (the HTML, I suppose). When programmers changed something, they could check what the change detectors detected. (Did what was supposed to change actually change? Did what was supposed to stay the same stay the same?) I can't say more because I didn't really pay any attention to them.
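If I have the idea right, the core of such a change detector can be sketched in a few lines: keep a stored snapshot of a page's HTML, regenerate the page, and report any lines that differ. This is only my guess at the mechanism; the function names here are hypothetical, not the team's actual code.

```python
import difflib
from pathlib import Path

def check_against_snapshot(snapshot_path, current_html):
    """Return the lines that differ between the stored snapshot and
    the freshly generated page, so a programmer can see whether
    what was supposed to change changed -- and nothing else did."""
    snapshot = Path(snapshot_path).read_text().splitlines()
    current = current_html.splitlines()
    return list(difflib.unified_diff(snapshot, current,
                                     fromfile="snapshot",
                                     tofile="current",
                                     lineterm=""))
```

An empty result means nothing changed; a non-empty diff shows exactly which lines moved.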
Christian Sepulveda has a blog entry that finishes by paying attention to them. He writes:
I have started using the term "change detection" when describing automated testing. It has allowed me to "convert" a few developers to embrace (or at least really try) the technique. It has also been a good way to explain the value of it to management.
This terminology switch made me think. It's now a commonplace that test-driven design isn't about finding bugs. Instead, it's a way of thinking about your design and interface that encourages smooth, steady progress. And that thinking produces long-term artifacts (be they xUnit tests or FIT tests or whatever) that aren't really tests either - they're change detectors. They too exist to produce smooth, steady progress: make a little change to the code, ask the change detectors to give you confidence you affected only the behavior you intended to affect, make the next little change...
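That loop can be made concrete with an xUnit-style sketch. The function under change here (format_price) is hypothetical; the point is that the tests pin down current behavior, so rerunning them after each small change shows whether only the intended behavior moved.

```python
import unittest

def format_price(cents):
    # The code being changed, one small step at a time.
    return "$%d.%02d" % (cents // 100, cents % 100)

class PriceChangeDetector(unittest.TestCase):
    # These tests exist less to find bugs than to detect change:
    # they record what the code does now, and complain if a later
    # edit alters behavior the programmer didn't intend to alter.
    def test_whole_dollars(self):
        self.assertEqual(format_price(500), "$5.00")

    def test_cents_are_zero_padded(self):
        self.assertEqual(format_price(105), "$1.05")
```

Make a little change, run the detectors, make the next little change.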
For the past year or so, I've been consciously trying to facilitate a convergence of two fields, agile development and testing. So I ask questions like these:
Today - a depressing day - these questions remind me of this one (from Hacknot): "What can you brush your teeth with, sit on, and telephone people with? Answer: a toothbrush, a chair, and a telephone." The implication is that straining after convergence leads to ridiculous and impotent monstrosities. As it becomes clear how different the assumptions, goals, and values are that the two communities attach to the word "test", I must ask whether I'm straining after such a convergence.
I don't believe so, not yet. Today's tentative convergence seems to work for me. I hope it works for people like me. But it's worth worrying about: will it work for enough people?
In recent weeks, I've been reading and hearing about code coverage. It may be coming back into vogue. (It was last in vogue in the early '90s.) It may be time to publicize my How to Misuse Code Coverage (PDF).
Code coverage tools measure how thoroughly tests exercise programs. I believe they are misused more often than they're used well. This paper describes common misuses in detail, then argues for a particular cautious approach to the use of coverage.
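The raw thing a line-coverage tool measures can be shown with a toy tracer - this is only an illustration of the measurement, not how real tools work. The number it yields (which lines the tests executed) is exactly the number that gets misused when it's treated as a goal rather than a clue.

```python
import sys

def traced_lines(func, *args):
    """Run func(*args) and return the set of line offsets (relative
    to the def line) that were actually executed inside it."""
    hits = set()
    code = func.__code__

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            hits.add(frame.f_lineno - code.co_firstlineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return hits

def classify(n):
    if n < 0:                  # offset 1
        return "negative"      # offset 2
    return "non-negative"      # offset 3
```

A call like traced_lines(classify, 5) executes offsets 1 and 3 but never 2: a test suite can touch most lines of a program while still missing the case that matters.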