Wed, 06 Jul 2005
I started to proudly write up that trick. But the process of putting words down on pixels makes me think I was all wrong and that it's a bad idea. Read on to see how someone who's supposed to know what he's talking about goes astray. (Or maybe it's a good idea after all.)
The picture is of a standard layered architecture. The green star at the top level is a desired user-visible change. Let's say that supporting that change requires a change at a lower level: updating that cloud down at the bottom. But other code depends on that cloud. If the cloud changes, that other code might break.
The thing I sometimes do is deliberately break the cloud, then run the tests for the topmost layer. Some of those tests will fail (as shown by the upper red polygon). That tells me which user-visible behaviors depend on the cloud. Now that I know what the cloud affects, I can think more effectively about how to change it. (This all assumes that the topmost tests are comprehensive enough.)
I could run the tests at lower layers. For example, tests at the level of the lower red polygon enumerate for me how the lowest layer's interface depends on the cloud. But to be confident that I won't break something far away from the cloud, I have to know how upper layers depend on the lowest layer's to-be-changed behaviors. I'm hoping that running the upper layer tests is the easiest way to know that.
But does this all stem from a sick need to get it right the first time? After all, I could just change the cloud to make the green star work, run all tests, then let any test failures tell me how to adjust the change. What I'm afraid of is that I'll have a lot of work to do and I won't be able to check in for ages because of all the failing tests.
Why not just back the code out and start again, armed with knowledge from the real change rather than inference from a pretend one? Is that so bad?
Maybe it's not so bad. Maybe it's an active good. It rubs my nose in the fact that the system is too hard to change. Maybe the tests take too long to run. Maybe there aren't enough lower-level tests. Maybe the system's structure obscures dependencies. Maybe I should fix the problems instead of inventing ways to step gingerly around them.
I often tell clients something I got from The Machine that Changed the World: the Story of Lean Production. It's that a big original motivation behind Just In Time manufacturing was not to eliminate the undeniable cost of keeping stock on hand: it was to make the process fragile. If one step in the line is mismatched to another step, you either keep stock on hand to buffer the problem or you fix the problem. Before JIT, keeping stock was the easiest reaction. After JIT, you have no choice but to fix the underlying problem.
So, by analogy, you should code the way you know in your heart you ought to be able to. The places where you fall through the floor and plummet to your death are the places to improve. Not by you, I guess, in that case. By your heirs.
Test Driven Development is becoming a mainstream practice. However, it is a step along the way, not a final destination. This workshop will explore what the next steps and side paths might be, such as Behavior Driven Development, Example Driven Development, and Story-Test Driven Development.
The idea is that we'll spend up to an hour having participants give brief pitches for what they think "lies beyond," then split into focus groups to discuss particular topics. The deliverable is a short paper outlining the various approaches discussed, together with people involved in each.
You need not submit a position paper to attend. If you'd like to present your idea (briefly! which rules out fumbling with a projector), send us your topic. Thanks.