Sun, 29 Jan 2006
It's just shy of five years since the Agile Manifesto was written. I've often said that I dread the day when I look back on the me of five years ago without finding his naivete and misconceptions faintly ridiculous. When that day comes, I'll know I've become an impediment to progress.
So what about the me of 2001? I do find him a bit ridiculous, though not enough for comfort. During a shortish plane ride, I came up with this list of what I didn't know then:
Tools are important. I'm flying back from working a week at a Delphi shop. Doing... anything... in... Delphi... is... just... so... tedious... that... it... makes... you... want... to... scream. I think it no coincidence that so many of the Agile Manifesto authors had past experience with Smalltalk (or, in my case, Lisp). That kind of background makes it easier to think of software as something you could readily change. I don't think Agile would have taken off without semi-flexible languages like Java and the fast machines to run them.
Moreover, each new tool—JUnit, CruiseControl, refactoring IDEs, FIT—makes it easier for more people to go the Agile route. Without them, Agile would be a niche approach available only to the ridiculously determined.
People get stuck. What I seem to see often is a team making a big leap. They become more productive, they become happier, the business becomes happier with them. Then they plateau. Now, I know from my weightlifting days that plateaus are a part of growth, but it seems surprisingly hard to make the next leap.
Sometimes I find other Agile consultants surprisingly wistful. The projects they're working with are doing better than they ever did before, but somehow they're not making it to that peak experience the consultant remembers.
The customer role is far harder than I'd anticipated. Five years ago, I wouldn't have said the customer role is the hardest on the project. Now I say it all the time. I also greatly underestimated how central the role is. Sometimes I tell people that good Agile teams are like compass needles, with the customer as the magnetic pole. You can divert their actions away from the customer, but they'll always push to orient themselves that way. It's an unusually personal relationship.
Testers aren't translators. My image—only half conscious—was of the tester taking business-speak and translating it into tests for the programmers to pass. Now I think of the tester much more as someone whose nudges encourage and streamline direct conversations. The translation out of business-speak should happen in the code.
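One way to picture "the translation should happen in the code" is a test written entirely in the business's vocabulary, with a thin layer of code beneath it doing the translating. This is my sketch, not anything from the post—the names and the late-fee rule are invented:

```python
from datetime import date

# Invented business rule for illustration: 0.1% of face value per day late.
DAILY_LATE_RATE = 0.001

class Invoice:
    def __init__(self, amount, due):
        self.amount = amount
        self.due = due
        self.paid_on = None

# The test reads in the business's own vocabulary...
def test_overdue_invoice_accrues_late_fee():
    invoice = invoice_for(100.00, due=date(2006, 1, 1))
    pay(invoice, on=date(2006, 1, 16))        # fifteen days late
    assert late_fee(invoice) == 1.50

# ...and the translation out of business-speak lives here, in code:
# small helpers mapping business words onto the implementation.
def invoice_for(amount, due):
    return Invoice(amount, due)

def pay(invoice, on):
    invoice.paid_on = on

def late_fee(invoice):
    days_late = max(0, (invoice.paid_on - invoice.due).days)
    return round(invoice.amount * DAILY_LATE_RATE * days_late, 2)

test_overdue_invoice_accrues_late_fee()
```

The point of the shape: when the business rule changes, the conversation happens over the test's vocabulary, and only the helper layer—code, maintained by programmers—absorbs the translation.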
Making business-facing tests is difficult and subtle. I pretty much thought I knew how to write "black box" tests, and that the tester's job would be to write those same tests, just earlier and based on much more intensive conversation with the customer. But the tests I advocate today are quite different from the ones I remember thinking about back then, and I'm still coming up with what appear to be important twists.
The interaction between testing and design complicates things. Five years ago, I viewed "test infected" programmers as an uncomplicated good. Programmers, I said, were so enthusiastic about testing that they'd willingly add the hooks testers have always wanted. I'm now thinking it's more complicated. Test-first unit testing leads to small-scale changes in design. Test-first large-scale testing seems to require similar changes in architecture. (See my recent interminable series for hints along those lines.)
Back then, I thought of testers as getting technical stories added to the mix: a tester who could do tests of type X much more easily if the programmers did Y would make the business case to the customer, who could decide to add a story to do Y. Or I thought of testers as writing particular tests in a particular format. When the programmers made those tests pass, the usual rules about minimizing duplication and the like would cause the architecture to emerge naturally.
I now think that the interaction between tests and architecture will require much closer and sustained conversation than that (will be much less of a waterfall)—unless we're content to rest on a plateau.
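The small-scale design change that test-first exerts can be sketched concretely. This example is mine, with invented names: a test that wants to control time pushes the clock out of a hard-wired call and into an injected parameter—exactly the kind of hook testers have always wanted.

```python
import time

class Session:
    """A session that expires after `timeout` seconds.

    Because the first test wanted control over time, the design grew a
    seam: the clock is passed in rather than hard-wired to time.time().
    """
    def __init__(self, timeout, clock=time.time):
        self.clock = clock
        self.started_at = clock()
        self.timeout = timeout

    def is_expired(self):
        return self.clock() - self.started_at > self.timeout

# The test drives a fake clock it can advance at will.
fake_now = [0]
session = Session(timeout=60, clock=lambda: fake_now[0])
assert not session.is_expired()
fake_now[0] = 61
assert session.is_expired()
```

Scale that pressure up from one class to whole-system tests and you get the architectural version of the same demand: large-scale hooks, which is why the conversation between testers and programmers can't be a one-time handoff.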
Exploratory testing isn't an obvious fit. Back then, I was very taken with how the exploratory coding you see in Agile shops feels like exploratory testing. At a workshop I organized, Michael Feathers also remarked on that. I still think there's a strong connection, and I still talk to teams about exploratory testing, but it remains an obscure practice. When it's done, it still seems to be mostly about bugs, not—as I used to say—about exploring the business domain and design space. I wish I knew why.