Fri, 14 Feb 2003
Yesterday, I did a long demo of test-first programming in Cem Kaner's Software Testing 2 course at Florida Tech. I had a great time. I think the students had a somewhat less great time, though still OK.
Here's the thing I did that I probably won't do again. My style of doing test-first programming is a touch odd. Suppose I'm given a task. (In this case, the task was changing the way a time-recording program allows the user to "lie" about when she started, stopped, or paused a task.)
I begin by writing one or a couple tests from the user's perspective. (In this case, I wrote a test that showed a simplified day's worth of tasks. At one point, the hypothetical user started working on a task, but forgot to tell the program, so she later tells it, "Pretend I started working on task X half an hour ago." Then she forgot to stop the clock at the end of the day, so the next day begins by having her tell the program, "Pretend I stopped the day at 5:15 yesterday.")
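To make that concrete, here's a minimal sketch of what such a user-level test might look like. All the names (`Recorder`, `pretend_started_ago`, and so on) are hypothetical, invented for illustration, not taken from the actual time-recording program:

```python
# Toy sketch of the user-level "pretend" scenario: the user forgot to
# tell the program she started a task, so she backdates the start.
from datetime import datetime, timedelta

class Recorder:
    """Toy time recorder: tracks one task at a time."""
    def __init__(self):
        self.log = {}          # task name -> accumulated timedelta
        self.current = None    # (task, start_time) or None

    def start(self, task, now):
        self.current = (task, now)

    def pretend_started_ago(self, minutes, now):
        # "Pretend I started working on this task half an hour ago."
        task, _ = self.current
        self.current = (task, now - timedelta(minutes=minutes))

    def stop(self, now):
        task, started = self.current
        self.log[task] = self.log.get(task, timedelta()) + (now - started)
        self.current = None

# The acceptance-ish test: start late, then correct the start time.
now = datetime(2003, 2, 13, 9, 30)
r = Recorder()
r.start("task X", now)           # user finally tells the program...
r.pretend_started_ago(30, now)   # ...and backdates the start by 30 min
r.stop(now + timedelta(hours=1)) # an hour of clock time passes
assert r.log["task X"] == timedelta(hours=1, minutes=30)
```

The point of the test is the user's story, not the implementation: the correction command is expressed in her terms ("pretend I started half an hour ago"), and the bookkeeping is whatever makes that story come true.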
As I write one test, I get ideas for others, which I usually just note in a list.
After finishing the test, I'll run it and make some effort to get it to pass. But I quickly reach a point where I choose to switch gears. (In this case, test failures walked me through the steps of creating the new command, but when I got to the point where the command had to do something, there was no obvious small step to take.)
When I get stuck, I switch gears to more conventional fine-grained programmer tests. (I said, "OK, now it's time to create a new object to manage the desired time. What's a good name for its class?... We really want only one instance. Will client code want to create it in one place and hand it to each relevant command object? Or should command objects fetch a singleton? How about just using class methods?" - all questions posed by writing the first test for the new class.)
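Here's a sketch of what that first fine-grained test might look like if the answer to the design question were "just use class methods." Again, every name (`PretendClock`, `pretend_start`, `offset_for`) is hypothetical, chosen for illustration:

```python
# Sketch of a first programmer test, assuming the class-method design:
# command objects call the class directly instead of being handed a
# shared instance or fetching a singleton.
from datetime import timedelta

class PretendClock:
    """Holds the user's 'pretend' time corrections per task."""
    _offsets = {}

    @classmethod
    def reset(cls):
        cls._offsets = {}

    @classmethod
    def pretend_start(cls, task, minutes_ago):
        cls._offsets[task] = timedelta(minutes=minutes_ago)

    @classmethod
    def offset_for(cls, task):
        return cls._offsets.get(task, timedelta())

# Writing this test is what forces the design questions above.
PretendClock.reset()
PretendClock.pretend_start("task X", 30)
assert PretendClock.offset_for("task X") == timedelta(minutes=30)
assert PretendClock.offset_for("other task") == timedelta()
```

The test is deliberately tiny; its job is to force one design decision (how client code reaches the shared state) before any command object depends on the answer.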
That all went well, from my point of view. The problem is that it didn't convey well the tight test-code loop that is essential to test-driven design. We spent a lot of time on that first user-level test, and it took us - I think - too long to get to what I bet the students considered the real code.
So next time I do such a thing (at Ralph Johnson's software engineering course), I think I'll start after I've already defined my first whack at the user experience (through the acceptance-ish test). I'll start in the middle of the task, rather than at the beginning.
Pat McGee and I were talking about whether you can use test-first design to grow systems with good security. I've had some contact in the past with an alternative approach to security, capability security. I wondered whether it would be easier to grow a secure system on top of this better foundation (hoping that Pat would go off and try it).
This led us to this hypothesis: if you find it hard to build something using test-first development, then you're fundamentally looking at the wrong problem (or building on the wrong foundations).
Anyone have examples that are less speculative? (The common difficulty starting with test-first against a huge monolithic chunk of legacy code? Testing GUI systems without a clean separation between presentation and model?)
See this links page for more about capability security.