Tue, 19 Dec 2006
Tests are better than requirements documents because they're more lively. Not only do they describe what the system is to do, they give strong hints about whether it does it. Requirements documents just sit there. The liveliness of tests makes up for the occasional awkwardness of their descriptions. (It's harder to write for two audiences—the human and the test harness—than it is to write for one.)
In a series of talks I gave earlier this year, I described three types of business-facing tests: ones based on business logic, ones based on workflow, and ones based on wireframe mockups of a user interface. I talked about wireframes last, and what I had to say compared poorly to the previous two. Those tests had been simultaneously executable and OK-to-good at communicating. But, when it came to wireframes, the best I could do was draw one on a flipchart and say, "I wish I could lift that off and put it in the computer. The closest I can come is this..."
That's bad because we have two separate representations, each of which is lousy for one of the two audiences. I now think I have something better. Here's a wireframe:
It's a drawing created with OmniGraffle Pro (using a stencil from John Dial). That kind of wireframe is easy for a whole team to talk about, but it's too ambiguous for a testing tool. (How would it know whether a given rectangle is a text box, a text field, or the decoration at the bottom of the window?) Fortunately, OmniGraffle allows you to attach notes to graphics. The yellow tooltip-ish rectangle shows annotations to a text field that remove ambiguity.
Here's a test that uses that wireframe:
The image is just there for human consumption. In real life, I'd want the human to work exclusively on the Graffle document and not think about PNG files at all. Instead, I'd have a script watch for changes to Graffle files and regenerate all the PNG images.
The actual test ignores the image. Instead, it parses the Graffle file ("normal-run.graffle"), hooks the program up to a fake window
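To show the shape of the parsing step: real .graffle files are Apple property-list XML (sometimes gzipped) with a schema more involved than this, so here I use a simplified stand-in document. The point is just that each graphic carries a note that removes the wireframe's ambiguity:

```ruby
require 'rexml/document'

# Map each graphic's label to its disambiguating note.
# (The XML layout here is an invented simplification, not
# OmniGraffle's actual schema.)
def annotations(xml)
  doc = REXML::Document.new(xml)
  result = {}
  doc.elements.each('wireframe/graphic') do |g|
    result[g.attributes['label']] = g.attributes['note']
  end
  result
end

sample = <<XML
<wireframe>
  <graphic label="Name" note="text field: name"/>
  <graphic label="OK"   note="button: ok"/>
</wireframe>
XML

annotations(sample) # => {"Name"=>"text field: name", "OK"=>"button: ok"}
```

The fixture would then walk that map, asking the fake window system whether each described control exists.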
The error messages could do a better job of pointing to the right control, and it's a shame that the image doesn't appear in the output. (Fit swallows it along with any other HTML tags in the test input. No doubt I could work around that.) However, this output is only for programmers already deep in the code. It doesn't have to be as friendly as output aimed at a wider audience.
I still have two big open questions.
The next installment ties this into the Atomic Object style of model/view/controller, as described here (PDF) and in a forthcoming Better Software article. But first, I have to figure out how to parse canvases out of Graffle files. And there's that whole vacation thing.
Wed, 13 Dec 2006
Plucked from the bottom of mail from Pete McBreen:
"For a list of all the ways technology has failed to improve the quality of life, please press three." - Alice Kahn
Tue, 12 Dec 2006
This is an editorial that expanded on an offhand earlier post. It was rejected. While it does have two potentially offensive analogies, I figure I have more leeway in what I publish here. It's had a postscript added and is now filled with hyperlink goodness.
A recent UN report states that "New explosive devices are now used in Afghanistan within a month of their first appearing in Iraq." (Reuters, September 27, 2005). Compare that to the rate of diffusion of technology in our field. I'll use continuous integration as an example. It's a well-established technology that's easy to deploy, is practically without risk, has considerable benefit, was first widely described in 2000, and has had a solid open source tool supporting it for at least three years. But there's a reasonable chance you've never heard of it. (Note: true of original audience; likely not true of this blog's audience.) If you tried to deploy it, it might well take months and months to get permission, to round up a build machine, and to get the first people using it.
Something is desperately wrong with this picture. Why is it that people living in isolated harsh conditions where people are trying to kill them can move faster than we can in our offices?
John Robb, a software executive and former Air Force counterterrorism operative, describes what the guerrillas do as open-source warfare, and he's developed a rather elaborate theory of how that works. One underpinning of the theory is what he calls primary loyalties. "A primary loyalty is a connection to a non-state group that is greater than loyalty to a state. These loyalties include those to clan, religion, tribe, neighborhood gang, etc. These loyalties are reciprocated through the delivery of political goods [...] by the group that the state cannot or will not deliver."
Professional-class employees like you and me once had something like a primary loyalty to our employer, especially if it was a large company. In the US and elsewhere, that employer delivered "goods" to us like steady employment, guaranteed pension, medical care, a career path, and the training we needed to advance along it. Under Anglo-American capitalism, at least, corporations no longer deliver many of those things. Instead, as is described in Jacob Hacker's The Great Risk Shift, companies have given us the opportunity and responsibility to provide those things for ourselves. For example, instead of being given a guaranteed pension, we're given money to invest. If we invest well, we'll end up with more retirement money than the pension a company would have given us; if not, well, tough luck.
Whether that's a good or bad shift, employees have acted like people in Iraq and other failed states: they've shifted their primary loyalty elsewhere. In the US, we've seen rising nationalism, increased devotion to religious groupings, and more loyalty to political "tribes" (though not increased formal party membership). None of those loyalties have anything to do with work. Therefore, according to Robb, we're missing a key part of the infrastructure that supports fast diffusion and implementation of technologies at the office.
I think that's bad. We need groups that deliver the goods and are deserving of loyalty. Existing structures (unions, professional societies) aren't working, and I'm loath to wait for them to start. The best I can offer is the autonomous team. I'm not talking about collections of individuals who've sleepwalked through "team-building exercises," but actual teams that work together very closely (often in pairs), learn together quickly, and provide cover for each other. When a team is working, the business comes to view it as a single specialist, a unit, with authority over what happens within itself. If the team decides to try continuous integration, it will deploy it without ever thinking to ask permission.
I acknowledge that it's offensive, at some gut level, to suggest emulating killers. But if this decade has a notable example of the "learning organization", it is—sadly—groups of insurgent cells with high internal loyalty and loose connections to both each other and also to the overarching sources of goals and funding.
P.S. John Robb's ideas haven't convinced me yet—sometimes his analogies seem more than a bit strained—but you may find his site worth a read. Hacker's notion of a risk shift has also drawn some scorn, though that particular link misses the point that matters to me. If you're an investor in the stock market, you expect stocks with higher volatility to pay higher returns over time. The higher returns are your payment for accepting higher volatility, usually tagged as "risk". What I take from Hacker is that a career today has higher volatility than in the past, but that higher risk has not come with significantly higher returns—instead, the US real median income has increased by 31% from 1967 to 2005 (source, PDF, p. 5). That's an annual real return of 0.6%. For comparison, that's a bit less than the real return on short-term US Treasury bills, historically the world's least risky investment.
Mon, 11 Dec 2006
My wife is writing the chapter on mammary gland health and disorders for Large Animal Internal Medicine, a standard reference. Her current draft is 119 double-spaced pages. It has 532 citations. The scary thing is how much she remembers—off the top of her head—about the contents of the papers. She is truly a fox.
Thu, 30 Nov 2006
Scripting for Testers has been renamed Everyday Scripting in Ruby because a couple of reviewers argued that pretty much all that was required to make it suitable for a larger audience was changing the title and the bit of Introduction that says who the book is for. So we did.
I hope testers still pick it up. The subtitle says "for teams, testers, and you", which helps Google find it when you type in "scripting for testers." (It's the top hit.)
Sadly, the scheduled ship date is a bit after Christmas. Since it would be sad if testers didn't get the book under their tree, we've decided to delay the holiday.
Thanks to those who helped me on it: Mark Axel, Tracy Beeson, Michael Bolton, Paul Carvalho, Tom Corbett, Bob Corrick, Lisa Crispin, Paul Czyzewski, Shailesh Dongre, Gunjan Doshi, Danny Faught, Zeljko Filipin, Pierre Garique, George Hawthorne, Paddy Healey, Andy Hunt, Jonathan Kohl, Bhavna Kumar, Walter Kruse, Jody Lemons, Iouri Makedonov, Chris McMahon, Christopher Meisenzahl, Grigori Melnik, Sunil Menda, Jack Moore, Erik Petersen, Bret Pettichord, Alan Richardson, Paul Rogers, Tony Semana, Kevin Sheehy, Jeff Smathers, Daniel Steinberg, Mike Stok, Paul Szymkowiak, Dave Thomas, Jonathan Towler, and Glenn Vanderburg.
UPDATE: People have pointed out the lack of links. I am a master of Marketing.
I've started using OpenOffice (in its Mac-ified NeoOffice form) for writing Fit tables. It's working considerably better than Word. Not only does it produce decent HTML (valuable when you're trying to figure out exactly what's going on), it does a better job of producing an HTML file that looks similar to the original WYSIWYG editor view, both when displayed through a browser and when read back into the editor.
I should note that I'm still using Word X for the Mac, so others might have better luck with Word than I've had. But if Word isn't working well for you, check out OpenOffice.
Fri, 10 Nov 2006
These are all mentioned in Crypto-Gram.
First, the recent torture-lite bill boiled down to C code.
There's more than one relevant bug. More here.
Wed, 01 Nov 2006
IEEE Software will have a special issue on test-driven development (May/June 2007). I'm a reviewer, and I've been asked to spread the word. The Call for Papers is here. The deadline is December 1.
My job is to give a thoughtful reaction to this essay, to describe what it means to a person with my perspective. Here goes.
You've just heard a description of a fall from a Golden Age when the world allowed us our values—to a world where the people and structures that hold power over us are indifferent to us, immune to our influence, and unwilling to leave us to putter in peace, despite our heartfelt claims that we'd all be better off if they did.
We're not the first people in this situation—many have been in far worse—and as I reread Mr. Waldo's essay one day, I thought it might be instructive to see how those others have handled it.
One response, the default perhaps, is despair and retreat from engagement. I think we're all familiar with that feeling, and with those who've succumbed to it, so I won't discuss it further.
The next two responses come from the Hellenistic period of Greek history, which followed the Classical period and was a time of turmoil, during which you might easily and uncontrollably go from great wealth to poverty or from power to slavery. This raised the practical question: how do you make yourself happy in a hostile world?
Zeno of Citium's answer has come to be called Stoicism. In this tradition, happiness comes from the possession of the genuinely good, and the only things that are genuinely good are the characteristic virtues of humans: wisdom, justice, temperance, courage, and so forth. We might include the desire to apprehend elegance in design as a virtue.
The wise person—the happy person—makes decisions based on how they align with the genuinely good. The results of those decisions have nothing to do with happiness: the Stoic would prefer they lead to wealth, health, and life, but is ultimately indifferent if they lead instead to poverty, sickness, and death. Epictetus puts it this way:
Our opinions are up to us, and our impulses, desires, aversions--in short, whatever is our doing. Our bodies are not up to us, nor our possessions, our reputations, or our public offices... if you think that [those] things ... are your own, you will be thwarted, miserable, and upset, and will blame both the gods and men.
From this, we get the popular image of the Stoic as someone who does what's right, because it's right, and is immune to attempts to sway her through non-rational emotions like fear of death. Marcus Aurelius, a later Stoic, put it this way:
Say to yourself in the early morning: I shall meet today ungrateful, violent, treacherous, envious, uncharitable men... I can [not] be harmed by any of them, for no man will involve me in wrong.
The Stoic approach to our problem would be to do thoughtful design because it is a good, and to be indifferent to the consequences. We would, for example, not care if the only company that would allow us to design well pays poorly, builds mundane software, and has no free soda in the kitchen. Stoicism is, I believe, what Mr. Waldo advocates.
But Stoicism was not the only philosophy that sprang from the chaos of the Hellenistic period. Epicureanism was another.
This is Epicurus, the founder of Epicureanism. In Epicureanism, happiness means having your desires satisfied and pain avoided. The virtues—courage, wisdom, and the like—are useful because they lead to the satisfaction of desires, not in and of themselves (as in Stoicism).
The best strategy toward happiness is to pare your desires down to the minimum, which are then easily satisfied. One should avoid desires that are inherently unlimited, such as those for wealth, power, fame, and the like, in favor of desires that can be readily satisfied—by, say, filling your stomach when hungry. Moreover, simple food is easier to obtain than fancy food and fills the stomach just as well; therefore, you should strive to be happy with simple food, though equally happy to eat fine food when it's there.
When I think of Epicureanism today, I think of the open source programmer who comes home from an unsatisfying job and spends part of the evening working on Firefox plugins or Ruby packages, designing them to meet the highest standards. Since, to Epicurus, current pain is outweighed by the mental pleasure of remembering past pleasures and anticipating future ones, the next day at work is made tolerable.
A third reaction is, to a Western audience, most associated with the period after the stability of the Roman empire collapsed. It is a negotiated retreat from the field of battle.
Here is a monastery. I suspect that it was built on the edge of a cliff not because of the view but because that's a defensible location.
Monasteries had defenses because they were liable to attack. Many of the attacks were like those of the Vikings on England, Ireland, Scotland, and elsewhere. Those attacks came from outside the existing, fragile social order. But there were also attacks from closer to home. A record that spans the 1400s shows that monasteries in Ireland had troubles long after the Vikings ceased to be a threat:
This was roughly a thousand years after Christianity reached Ireland, but before Martin Luther, so I speculate that these attackers were Catholic Christians. Yet they were not deterred by the presumed anger of the Christian God at attacks on His monks. Hence: walls, cliffs, and towers to which the monks could retreat while the raiders plundered.
Still, the monks did not simply disappear from society behind walls. They provided value to those they'd left behind.
For example, they would pray for the souls of your departed relatives.
And monasteries were a convenient place to stash the still-living bodies of inconveniently undeparted relatives. The picture is of Sophia, inconvenient to Peter the Great, in a nunnery.
And, of course, in Belgium there was beer.
I am sure these services gained them some protection.
When I think of monasticism today, I think of Agile projects. In Agile projects that are running well, there is an implicit or explicit deal between the team and the business. The team promises to deliver shippable business value at frequent intervals and not to whine when the business changes its mind about what it wants. In return, the business leaves the team alone to build the product as they like. That allows people who crave good design to do it—provided they can mesh it with the need to deliver frequently. In practice, that means that code becomes the whiteboard on which the design is discussed, rediscussed, and refined. This—in the best cases—seems to me exactly the same process Mr. Waldo describes. By that, I mean that the attitudes of people toward the design are the same, the conversations have the same air, the values informing the conversations are the same, and the code—in roughly the same time frame—comes to have as satisfying a design.
I claim the monasticism of the Agile project is a more sustainable model than Stoicism or Epicureanism. It requires less of us because we get to lean on each other. Even programmers, notoriously not team players, gain strength from each other.
Perhaps that's a claim we can discuss.
For my part, I've recently become obsessed with the weakness of Agile Monasticism. Here is a story I heard from an ex-employee of a company I'll call Frex:
[That] year came dreadful fore-warnings over the land of [Frex], terrifying the people most woefully: these were immense sheets of light rushing through the air, and whirlwinds, and fiery dragons flying across the firmament. [By this, he refers to the acquisition of Frex by a larger company.] These tremendous tokens were soon followed by a great famine [the new head of marketing moved the Customer out of the project team room] and not long after, on the sixth day before the ides of January in the same year, the harrowing inroads of heathen men [a new VP of Development] made lamentable havoc in the church of God in Holy-island by rapine and slaughter. [The imposition of a "more mature" development process caused all but one of the team to quit.]
Such stories are common. Agile projects have no real defensive walls; all they can do is deliver return on investment and hope the business values it. But we all know that ROI is only a part of what moves businesses. Those in the Agile world all know of resistance to Agile from middle managers who see it as a threat to their power to command and control. Telling such a person that her sabotage endangers the company's ROI is like an abbot standing in the path of Christian raiders and threatening them with loss of their immortal souls: sometimes it works, but nowhere near often enough. And it never works with the worshippers of Odin.
The universe of Agile teams is like a school of fish. Every once in a while, a predator sweeps through us, grabs a team in its mouth, and destroys them. We flail around in panic for a few moments, talk about the stupidity of it all with our nearest neighbors, then reform as before, ready for the next predator.
This is—I repeat—still better than before. Teams do tend to protect their members. Testers are less likely to be offshored. Those who obsess about design can do it without justifying themselves to the unsympathetic. But the teams themselves, as wholes, have no structure of protection.
Mr. Waldo's essay, paradoxically, is leading me to seek answers to the current problems of Agility in collective action exactly because its focus on individual courage calls attention to our biggest blind spot: we believe that each of us must alone contend against aggregates possessing decades of institutional power. We don't even think about standing shoulder to shoulder.
What path we should take, I don't know. Unionism is so foreign to the professional class in the US that I'm nervous about admitting I've ever even had the word in my mind. The ACM appears to me an organization for extracting money from people in return for papers printed in 9-point type, papers placed in bibliographic categories that don't seem to have changed since the seventies. Neither it nor the IEEE has enough spunk. The Agile Alliance, on whose board I sit, doesn't seem to have the right leverage. So I don't know what we should do, together, but I'll be thinking on the problem, and that's because of On System Design.
Special thanks to Donnchadh Ó Donnabháin, who tutored me in Gaelic pronunciation.
The photo of Despair is copyright by Carl Robert Blesius and was retrieved from http://blesius.org/gallery/photo?photo_id=1061. It is distributed under the Creative Commons Attribution-NonCommercial-ShareAlike 1.0 license.
The picture of the dinosaurs is used by permission of clipart.com.
The other photographs did not have copyright notices.
Tue, 24 Oct 2006
My life would be ever so much better if I had a snippet of the ka-chunk sound that a 35mm slide projector makes when changing slides. Anyone got one or can record one?
Update: My life is ever so much better. There are at least three people in the world much better at Google than I am. Thanks, all.
Update: I've gotten requests for what I used. Here it is: http://freesound.iua.upf.edu/tagsViewSingle.php?id=4868. It uses a Creative Commons license.
For a mini-talk I'm giving at OOPSLA and possibly part of later talks, I probably need acceptable pronunciations of Gaelic words. I tried Gaelic pronunciation guides I found on the net, but what I'm coming up with can't be right. Here are the words. Do you know how they're pronounced? (Send a description or, better, a sound file.)
Update: A Gaelic speaker will be teaching me the right pronunciation at OOPSLA before my talk. Isn't that cool?
Thu, 19 Oct 2006
A recent UN report states that "New explosive devices are now used in Afghanistan within a month of their first appearing in Iraq." (Reuters, September 27, 2005). How long does it take your organization to put a new technology or technique into use? How does that make you feel?
Offensive though it is, this decade's most notable examples of "learning organizations" are groups of insurgent cells with high internal loyalty and loose connections to both each other and also to the overarching sources of goals and funding.
Tue, 17 Oct 2006
I've implemented the fixture described earlier. It takes a table in a particular format, generates a new ColumnFixture table, and causes that table to be executed. You can see the Fit output a programmer works with here.
The source and jar file are at http://www.exampler.com/testing-com/tools/fitlibrary-extensions-0.1.zip. The README.txt file will tell you about examples.
I believe it works correctly, and I put it to the test at a client's on Monday. Nevertheless, it is an early version: I made no attempt to handle malformed input gracefully. I haven't made it work with DoFixture yet. I need to clean up the source directory structure. (JUnit tests are intermingled with source files.) I realize that the fixture knows almost enough to generate much of the ColumnFixture code for you, so I'm going to add that.
The version in the zip file was compiled under Java 1.4, though it is likely to compile under earlier versions.
Mon, 09 Oct 2006
If you're going to OOPSLA, I recommend you attend the tutorial Programmers are from Mars, Customers are from Venus: A Practical Guide to Working with Customers on XP Projects. I haven't taken it myself (never been in the right place at the right time), but I've talked to the lead presenter, Angela Martin, at length about the topic, and she's given me the complete notes. For those who don't know of her, Angela is one of the first names that springs to mind when you think about customers / product directors / product owners. Not only does she have practical experience, but she's also done some extremely interesting anthropological-ish research. Her co-presenters are Robert Biddle and James Noble, who have quite a good reputation as presenters. (Their postmodern programming presentation a few years back is a classic, a St. Crispin's Day event.)
I mention this because the description at the OOPSLA site has two flaws. "Working with Customers on XP Projects" in the title makes it seem that it's not for Scrum projects, but it is. And the blurb does not say, "This is for you too, programmers." From Angela's description, it most emphatically is.
Thu, 28 Sep 2006
I'm too sick to write what I should be writing, and I can't sleep, so I decided to collect my thoughts and references about a current political topic I've been studying as I have time. The normal sort of posts will return shortly, but for the moment I'll use whatever reputation I have for careful-but-sympathetic thought to push back against an all-but-inevitable failure.
My understanding is that habeas corpus is a method by which prisoners can challenge their imprisonment before a judge. The idea has worked pretty well for 700 years. It fits with John Adams's phrase "a government of laws, not men": no one has exclusive power; everyone is subject to being checked and balanced.
It is now due to be removed in a hastily-considered bill. Despite what some say, the idea of habeas is not to "give terrorists rights"; it is to preserve the rights of those wrongly accused of being terrorists or unlawful combatants. There have been many such people already. Sometimes people are just detained; some are sent to Syria and tortured.
The bill allows for review of detentions by military commissions, but to date only ten have been held. People can be held forever without any recourse. (Some people have continued to be held even after review found them innocent, though a large number have been released.) In newer versions of the bill, "people" can include US citizens. Unlike the military's current definition of unlawful combatant, which covers only "those who engage in acts against the United States or its coalition partners in violation of the laws of war and customs of war during an armed conflict," the new one covers anyone who "has engaged in hostilities or who has purposefully and materially supported hostilities against the United States" or its military allies. Who are our allies? What does "material support" mean? I guess some of us might just find out.
During detention, what? It is simply not the case, as the President stated, that Geneva Convention Common Article 3 is impossibly vague about the treatment of prisoners. Ironically, on the same day as that statement, the US military released its new procedures, which explicitly conform to the Geneva Conventions. It's not surprising that, in over fifty years, we've been able to come to agreement about what the Conventions require. But now we're going to replace them with new language that will have to be freshly interpreted. Everyone, save the people who'll actually be making the decisions—who refuse to commit themselves—is running around saying "waterboarding is allowable" or "waterboarding is not allowable", but that's silly. In the absence of court review, what's allowable is whatever's done. The legal principle is that "there is no right without a remedy." (Stories about what's being done will leak, I suppose, as they always do. That could lead to some sort of remedy. I wonder if leaking, receiving, or reporting leaks counts as "material support"?)
But there's no institutional check on the non-professionals or the rogue professionals. We'll just have to rely on the moral character of everyone involved. That's of course entirely opposed to the American tradition of rule by laws, not men, but <sarcasm> apparently we face a threat more grave than the 45,000 nuclear warheads the Soviet Union had at its peak and a struggle more threatening than World War II, and so cannot afford the traditions that have worked for more than two hundred years </sarcasm>.
We certainly face threats—always have, always will—but I don't see any reason to give in to the "this time it's different" fallacy.
All this matters to me because my parents grew up in Nazi Germany. I grew up knowing that cultures can descend into madness, and that it can happen without the majority ever really explicitly willing it or being really conscious of it. No, I'm not saying that America is just like Nazi Germany; I'm saying that men like my grandfather—not politically involved, just trying to live their lives—somehow, through fear or anger or depression or just passivity, let decency slip out of their grasp.
It also matters because I grew up knowing that the Americans were the good guys. My father (in the German Navy) was captured near Marseille. He didn't mind; he and his fellows didn't fight back. They wanted out of the war, and they wanted to surrender to the safest force: the Americans. Prisoner of war camp (American and French) was no picnic—my father weighed 130 pounds when he got out—and there was abuse, but it was not institutionalized (except in one camp, for a short time). He got what he expected, and he believes he has no cause for complaint.
In contrast, my Uncle Paul was captured by the Soviets on the eastern front. I imagine he fought harder than my father to avoid capture, because everyone knew what happened to Russian prisoners. And it did happen: it was 1950 before he even knew the war was over, and he came home broken for life.
There's practical value to being seen as the good guys, the just guys, the humane guys. That's not just true when fighting Germans; it works in the Middle East, too.
Out of fear or anger or depression or just passivity, we're letting our elected representatives—our employees—reinforce hysteria to no effective end. If that bothers you, here is your Senator's contact information, and here is your Representative's.
Although this bill is being pushed by Republicans, I believe it should not be a partisan issue. The bill does not square with the conservative tradition of Chesterton's gate. It's being rushed through because being a Republican politician today is all about winning at domestic politics. (Just as being a Democratic politician appears to be all about not losing.) I can echo the author of this fantastic essay: I miss Republicans. I miss Eisenhower; he'd surprise you.
Wed, 27 Sep 2006
My examples below use a simple rule for deciding what values of a boolean expression to test. I should probably describe it and justify it.
Given an expression with all
So the table for
The case for
The reasoning behind these rules is based on mutation testing, the name for a long thread of academic research on testing. The way I state it (which is different in an unimportant way from how it's usually put) is that mutation testing involves assuming that the code is incorrect in some definable way, then asking for a test suite that can distinguish the incorrect code you have from the correct code you should have.
Now, for any given program, there are an infinite number of variants, so mutation testing depends on picking a definition-of-incorrectness that (a) lets you generate a reasonably small set of alternatives, but (b) gives you confidence that you've caught all the plausible errors. The usual approach is to assume one-token errors.
For example, suppose you are given
One-token errors aren't the only ones you could make. For example, you might completely forget that
Suppose you have the original
Trying all the possible combinations of variable values will either find a one-token error or kill all the mutants. But you never have to try all of them. There will be some test inputs that don't add anything: any mutant they kill will be killed by some other test input. So you can construct a minimal set for any given expression.
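To make that concrete, here's a toy version in Ruby. The expression and the mutant list are my own illustrations, covering just a few one-token slips; an input "kills" a mutant when the mutant disagrees with the original on that input:

```ruby
# The original expression and a few one-token mutants of it.
original = ->(a, b) { a && b }
mutants = {
  'a || b'  => ->(a, b) { a || b },   # operator replaced
  '!a && b' => ->(a, b) { !a && b },  # first variable negated
  'a && !b' => ->(a, b) { a && !b },  # second variable negated
}

# Which mutants does this input distinguish from the original?
def killed_by(input, original, mutants)
  a, b = input
  mutants.select { |_, m| m.call(a, b) != original.call(a, b) }.keys
end

[[true, true], [true, false], [false, true], [false, false]].each do |input|
  puts "#{input.inspect} kills #{killed_by(input, original, mutants).inspect}"
end
# [false, false] kills none of these mutants: every test it could fail
# is failed by some other input. That's why the rule for && tests only
# the other three combinations.
```

Running it shows that true/true, true/false, and false/true together kill all three mutants, so false/false adds nothing and can be dropped from the minimal set.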
If you look at the table below, you can see that the rule
Remember all this assumes that tests powerful enough to catch
one-token errors will catch more complicated (but still plausible)
errors. A way to convince yourself is to try and find a variant
These rules are easy to memorize. The cases for expressions that
When using the style I described earlier, I don't think you need
multi, because I'm tentatively advocating always breaking tables that
1 I can't remember if the transformations I used when
working all this out included substituting one variable for another
2 I'm leaving what it means to "run a program" vague. That gets to the difference of whether the mutation is "weak" or "strong". See this post by Ivan Moore. I didn't find much in the online literature about mutation testing; if you want to know more, you'll have to go to the library. There are some starting references at the end of this paper (PDF).
Sun, 24 Sep 2006
Using Fit to describe boolean (yes/no) decisions can be much clearer if you just insist that all decisions be expressed in multiple, uniform, simple tables. No boolean expressions in the code may mix
Suppose you're given a jumble of three packs of cards. You are to pick out every red numbered card that's a prime, not rumpled, and is from either the Bicycle pack or the Bingo pack (but not from the Zed pack). Here is a way you could write a test for that using CalculateFixture:
I bet you skimmed over that, reading at most a few lines. The problem is that the detail needed to make the test executable fights with the need to show what's important. This is better:
That highlights what's important: any card must successfully pass a series of checks before it is accepted. This test better matches what you'd do by hand. Suppose the cards were face down. I'd probably first check if it were rumpled. If so, I'd toss it out. Then I'd probably check the back of the card to see if it had one of the right logos, flip it over, check if it's black or a face card (two easy, fast checks), then more laboriously check if it matches one of the prime numbers between 2 and 10 (discarding Aces at that point).
The code would be slightly different because it has different perceptual apparatus, but still pretty much the same:
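As a sketch of that guard-clause shape (the Card structure and every name here are my invention, not the original code):

```ruby
# Invented Card structure for illustration. The checks run in the same
# order as the by-hand procedure: cheap checks first, the laborious
# primality check last.
Card = Struct.new(:pack, :color, :rank, :rumpled, keyword_init: true)

PRIMES = [2, 3, 5, 7]  # the primes between 2 and 10

def acceptable?(card)
  return false if card.rumpled                                 # cheapest check first
  return false unless ["Bicycle", "Bingo"].include?(card.pack) # logo on the back
  return false unless card.color == :red
  return false unless card.rank.is_a?(Integer)                 # tosses aces and face cards
  PRIMES.include?(card.rank)                                   # the laborious check, last
end

acceptable?(Card.new(pack: "Bicycle", color: :red, rank: 7, rumpled: false))  # => true
acceptable?(Card.new(pack: "Zed", color: :red, rank: 7, rumpled: false))      # => false
```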
It does bug me that the table looks so much more complex than the code it describes. It still contains a lot of words that don't matter to either the programmer or someone trying to understand what the program is to do. How about this?
From this, the Fit fixture could generate a complete table of all the given possibilities, run that, and report on it. (Side note: why did I pick Queen as a counterexample instead of Jack or King? Because if the program is storing all cards by number, the Queen will be card 11. Since I'm not going to show all non-primes—believing that's more trouble than it's worth—I should pick the best non-primes.)
The same sort of table could be created for cases where any one of a list of conditions must be true.
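A minimal sketch of how such a fixture might expand an "all of" rule into a complete table (the Condition structure and the two sample conditions are invented for illustration): it emits one row where every condition holds, plus one row per condition where only that condition fails.

```ruby
# Invented fixture sketch. Each condition carries a name, a value that
# passes its check (example), and one that fails it (counterexample).
Condition = Struct.new(:name, :example, :counterexample, :check)

def complete_table(conditions)
  rows = [[conditions.map(&:example), true]]  # all checks pass: accepted
  conditions.each_index do |i|
    values = conditions.each_with_index.map do |c, j|
      i == j ? c.counterexample : c.example   # exactly one check fails,
    end
    rows << [values, false]                   # so the card is rejected
  end
  rows
end

conditions = [
  Condition.new("red",   "hearts", "spades", ->(v) { v == "hearts" }),
  Condition.new("prime", 7,        12,       ->(v) { [2, 3, 5, 7].include?(v) }),
]
complete_table(conditions).each { |row| p row }
```

For n conditions this yields n+1 rows, which is the same economy as the minimal boolean test sets described in the later (well, earlier-posted) mutation-testing entry.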
Now, many conditions are more complicated than all of or none of or any one of. However, all conditions can be converted into one of those forms. Here's an example.
Suppose you're allowed to pay a bill from an account if it has enough money and either the account or the "account view" allows outbound transfers. That would be code like this:
However, that could also be written like this:
I claim that code is just as good or even better. It's better
because there's less of a chance of a typo leading to a bug
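A hedged reconstruction of the contrast (Account, View, and the all_of?/any_of? helpers are my names, not the original's):

```ruby
# Invented structures for illustration.
Account = Struct.new(:balance, :allows_outbound, keyword_init: true)
View    = Struct.new(:allows_outbound, keyword_init: true)

# The mixed-expression version:
def can_pay_bill?(account, view, amount)
  account.balance >= amount &&
    (account.allows_outbound || view.allows_outbound)
end

def all_of?(*conditions)
  conditions.all?
end

def any_of?(*conditions)
  conditions.any?
end

# The same rule with no mixed boolean expression: every decision is an
# "all of" or "any of" over named conditions, matching the tables.
def can_pay_bill_in_uniform_style?(account, view, amount)
  all_of?(account.balance >= amount,
          any_of?(account.allows_outbound, view.allows_outbound))
end
```

The two are equivalent; the second just makes the table structure visible in the code.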
The corresponding tables would be like this:
In this particular case, I left off the Example and Counterexample columns because they're obvious. I'd expect the fixture to fill them in for me. I didn't include a table about the balance being correct because I wouldn't think the programmers would need it, nor would others need it to believe the programmers understand it.
One thing that worries me about this is that the table doesn't rub your nose in combinations. A table of explicit combinations is more likely to force you to discover business rules you'd forgotten about, that you'd never known about, or that no one ever knew about. (Well, it does that for a while - until the tedium makes your mind glaze over.) In a way, this fixture makes things too easy.
On the other hand, there's something to be said for protecting later readers from the process through which you convinced yourself you understood the problem.
I'm tempted to launch into implementing this, but I have other things to work on first.
Thu, 21 Sep 2006
I read Tom Wolfe's The Right Stuff a zillion years ago. One passage hit me then, and it's stuck with me. The time is somewhere in the beginning of the Mercury program:
Asking Gus [Grissom] to "just say a few words" was like handing him a knife and asking him to open a main vein. But hundreds of workers are gathered in the main auditorium of the Convair plant to see Gus and the other six, and they're beaming at them, and the Convair brass say a few words and then the astronauts are supposed to say a few words, and all at once Gus realizes it's his turn to say something, and he is petrified. He opens his mouth and out come the words: "Well... do good work!" It's an ironic remark, implying "... because it's my ass that'll be sitting on your freaking rocket." But the workers start cheering like mad. They started cheering as if they had just heard the most moving and inspiring message of their lives: Do good work! After all, it's little Gus's ass on top of our rocket! They stood there for an eternity and cheered their brains out while Gus gazed blankly on them from the Pope's balcony. Not only that, the workers—the workers, not the management but the workers!—had a flag company make up a huge banner, and they strung it up high in the main work bay, and it said: DO GOOD WORK.
That came to mind when I read this abstract:
This paper presents a fully independent security study of a Diebold AccuVote-TS voting machine, including its hardware and software. We obtained the machine from a private party. Analysis of the machine, in light of real election procedures, shows that it is vulnerable to extremely serious attacks. For example, an attacker who gets physical access to a machine or its removable memory card for as little as one minute could install malicious code; malicious code on a machine could steal votes undetectably, modifying all records, logs, and counters to be consistent with the fraudulent vote count it creates. An attacker could also create malicious code that spreads automatically and silently from machine to machine during normal election activities—a voting-machine virus. We have constructed working demonstrations of these attacks in our lab. Mitigating these threats will require changes to the voting machine's hardware and software and the adoption of more rigorous election procedures.
Since this is by no means the first report, I feel safe in saying Diebold is not DOING GOOD WORK.
I wish people who could matter—that especially means you, Fourth Estate—cared. We're all on top of the freaking rocket. (Not just the US, since the size of our military and economy puts much or all of the world on the rocket too.)
I'm sure there are people at Diebold who feel embarrassed or even humiliated by what their company is selling. If any one of them wants to throw caution and good sense to the winds and hang up a DO GOOD WORK banner, I'll buy it for you. Seriously.
I've been invited to the Software Practice Advancement Conference. The idea appeals: expense-paid trip to London, opportunity to rouse the rabble along some lines I'll be previewing here as I have time, and a conference that's said to be good (I've never been). On the other hand, I hate overseas flights because I can't sleep on planes, and Dawn almost certainly can't come with.
Here's what would tip me over the edge. There are lots of people I could learn from in London. If there are teams there who do something really well (making small stories, writing FIT tests, release planning, etc. - anything), I would like to come work with you for several days. Not just visit and watch, but act as much like a team member as I can. Let me know.
P.S. The idea of visiting practice is part of what I want to rouse the rabble to, something that lives in the same space as the MFA for Software, something that's part of my formal discussion of Jim Waldo's OOPSLA essay On System Design, which will be titled something like Surviving in a World of Ever-Looming Malignity: Or, Monasticism for the Married.
UPDATE: Yes, I'm not expecting to be paid for the visits.
Mon, 11 Sep 2006
A couple of years ago, at the Agile conference in Calgary, a big topic of discussion was whether Agile was poised to cross the chasm from visionary early adopter types to the early mainstream. This year at Agile2006 it sure seemed to me that we had.
If I recall the high-tech adoption curve correctly, a big difference between the Visionary early adopters and the Pragmatist early mainstream is who they talk to. The Visionaries talk to the Technology Enthusiasts to find ways to have big wins. The Pragmatists talk to other Pragmatists, especially ones in the same industry, to find ways to have safe wins.
My main client these days is a good example of a Pragmatist. Before adopting Scrum, they methodically went to visit other companies that had been using Scrum successfully. That's the first time I've seen that.
Agile in the mainstream is definitely a good thing, but every silver lining comes with a cloud. I worry that the clear sunshine of innovation will be obscured by the mists of scale. (Sorry about that...)
If you believe Moore, the mainstream market naturally shakes out into a single dominant "gorilla" and several "chimps" that scrabble for the leavings. He uses Oracle as an example of the gorilla, companies like Sybase as examples of chimps. Or you could think of the relational model in general vs. other ways of organizing and accessing persistent data.
On the one hand, that's good for innovation: the chimps have to find some angle to distinguish themselves from the safer gorilla choice. On the other hand, the innovation is constrained: it can't be too wildly different from the gorilla or else you're no longer in the mainstream market. (The distinction here might be between object databases—never made it in the mainstream—and adding object-ish features to relational databases or just figuring out how to make object-relational mapping work.)
But more important, to me, is a redirection of talent. The gorilla of Agile is Scrum + a selection of XP practices (perhaps most often the more technical ones like continuous integration or TDD). Consultants and consultancies can make more money, grow their practice faster, and have more influence by helping new teams start with Scrum+XP and by taking steps to make Scrum+XP more palatable to large segments of the mainstream market (the later mainstream, what Moore calls Conservatives). People doing that don't have time to do other things.
We saw that at Agile 2006, where the proportion of novices perhaps reached some sort of tipping point that made it more like a conventional conference. That's not a criticism: the Agile Alliance is there to help Agile projects start and Agile teams perform—says so right on the website—and making sure the beginner is served is absolutely necessary to those goals.
So that's all good. But I'm not comfortable unless I've got the feeling that there's something just beyond the horizon poised to surprise me. I'm not usually the one to find it: I'm more of a synthesizer, amplifier, or explainer than an innovator. So I selfishly need people out there searching, not teaching Scrum+XP.
I'm getting a sense that some significant chunk of people are ready for Agile to take a surprising jump forward. See, for example, what Ron Jeffries has recently written. Some part of my next year will be spent in support of that. I have at least one whacky idea, a bit related to the MFA in software.
I'll be poised to spring into action soonish. Just let me get this book done, please let me get it done, without any of the changes in response to reviewer comments introducing a nasty bug.
Tue, 29 Aug 2006
Give a thought to going to the second Continuous Integration and Testing Conference in London on October 6-7. I went to the first one and liked it. I'd go to this one, but I understand you can't take water on planes now and I'm mostly water.
I will be going to the Simple Design and Test conference near Philadelphia (USA), on October 27-29.
And RubyConf is sold out already. Rats. That'll teach me.
Mon, 28 Aug 2006
Agile depends critically on programmers keeping the code
clean. Lots of us know important steps in making code cleaner:
rename methods and classes as their purpose changes, be wary
I make two claims.
I wonder how I could learn more about that? The best way would be to work with other people on several disparate systems for a long time—which is not in the cards.
Sat, 12 Aug 2006
I have finished the review draft of Scripting for Testers. I am going on holiday.
Fri, 04 Aug 2006
Posted at the request of Ross Collard, organizer.
Mon, 31 Jul 2006
The Gordon Pask Award recognizes two people whose recent contributions to Agile Practice demonstrate, in the opinion of the Award Committee, their potential to become leaders of the field. The award comes with a check for US$5000.
Our criteria are evolving (and, starting with this second year, they're mainly in the hands of the past recipients). We are looking for people who provide both ideas and actions. We want people who are advancing the state of the practice. But we also want people who are spreading knowledge of the existing state of the practice, so that Agile teams know what more there is to learn. And we also want people who are helping people on a personal level, not just at the abstract level of ideas.
Sun, 30 Jul 2006
This trend is one I had trouble explaining at Agile 2006, so bear with me. (Or skip the whole thing - might be the best use of your time.)
Imagine telling the story of how the bicycle evolved. You could tell it as a story of technology. In it, the bicycle evolved from a crude prototype to today's designs because of improvements in materials technology, a greater understanding of applying human power to spinning wheels, and changing "ecological niches" (from unpaved or poor roads to both roads that allow greater speed and also steep paths ridden purely for recreation).
You wouldn't really include people in that story. Yes, tires got wider because people all of a sudden chose to ride down mountains, but once that niche was chosen, the form of the bicycle can be seen as inevitable. Or you might note that the frames of some bicycles are shaped differently because (first) women riders wore skirts and (later) because of tradition. But, allowing for that, the form of the woman's frame follows function.
In such a story, one of technological determinism, it would be absurd to say that a mountain bike would look different if, say, society's class structure were different.
But there's another kind of story, one of social determinism, where human relations play a driving role. A socially deterministic story of ethernet might point out that squirting packets into the ether, checking for collisions, and possibly resquirting isn't an inevitable design. After all, at one point, token ring networks were a pretty serious contender, and they were much more orderly: you wait until you get a token, then you talk. No collisions allowed. A socially determinist story would point out that ethernet was developed at a deliberately freewheeling, relatively unstructured laboratory, not too many miles from one of the most try-it-and-see-what-happens cities in the world (San Francisco). The story would try to work through how the design of ethernet reflected the overlapping societies of the actual humans participating in its creation.
A true socially determinist story sounds weird to me (and, I suspect, you). After all, surely Ethernet was a better design than token ring: no complexity of worrying about a machine crashing while it has the token, for example. And, therefore, someone would have invented it anyway, and it was just happenstance that they worked at Xerox Palo Alto Research Center.
But we technologists tell pretty weird stories, too. Remember "information wants to be free" and "the Net interprets censorship as damage and routes around it"? Those are pure technology determinism, and they seem at least a tad less plausible today than they did around the time of the Netscape IPO.
As something of an instinctive middle-of-the-roader, stories that combine the human/social and the technological make the most sense to me. Agile is noteworthy for telling such stories. For example, the story of an XP project is not the story of a progression of work artifacts (as many processes are); instead, it's a story that includes people sitting in particular physical configurations and deliberately not replicating the ownership relations of the society around them (when it comes to code and expertise).
But at the same time, XP isn't a story you can tell well without talking about technology. It's not a story of a surgical team or a squad of soldiers: it's a story of working software, changed frequently in behavior-preserving and behavior-adding ways.
So, for example, continuous integration is partly about a social reaction to a shifting technological practice. Suppose you're working alone on a machine. You write code that passes the test that motivated it. You also run a whole bunch of other tests that take a few seconds to run. When one of them fails, that's no big deal, so there are no social pressures to be extra careful to avoid them.
In contrast, failing nightly builds disrupt the project much more, so—often—peer pressure is used to prevent them. (In one company, anyone who broke the build had to keep the Frog of Shame on their monitor for all to see.)
Jeffrey Fredrick's article on continuous integration shows how a particular technology—semi-fast notification of semi-substantial test runs—requires a social contract different from both the super-fast local build and the unbearably-slow nightly build:
Fredrick's article demonstrates a nice back-and-forth between the technical and social. It's that integrated story that I worry is slipping away. One way it will happen is for those with a technologist bias (most people on our teams) to vote with their feet. The dominant methodology today is "Scrum plus some of XP." The parts of XP that often seem to get left out of the "some" are the human ones: pairing, shared code ownership. Whatever you may think about the merits of XP's particular practices, they do tend to make it obvious that a team has to form some sort of a social contract. Maybe the habit sticks. Maybe it won't when the team chooses from a buffet of practices, picking the sweet corn of refactoring over the brussels sprouts of shared code ownership.
Perhaps because I have a technologist bias, I'm more alarmed by social stories that include no technology. These are stories that involve how Placating people interact with Blaming people, or how INTJs interact with ENFPs—but don't involve what they're interacting about. Such models apply as well to a surgical team as to a software team, despite the fact that "crash" has a profoundly different meaning to each of them.
I'm not denying value to pure-technology or pure-social discussions. I just think they're seductively easy. I want more discussions like one that was had in Jeff Grover's and Zhon Johansen's wonderful discovery session at Agile 2006. They began with exercises demonstrating particular human quirks, but the talk afterward seemed to zero in on specific practices.
One exchange sticks out in my memory. There was an exercise about people's personal space. That, in itself, is nothing special (if you already know about it), but I thought the resulting discussion of pairing went in a nice direction. Personal space surely matters in pairing, but someone observed that sitting side by side is different than sitting face-to-face, and that the focus on a shared object (something external to gesture at) allows a smaller personal space. Someone else then noted that personal space is why he so wants chairs with wheels in pairing environments. That way, when people need to have a longer discussion, they can turn toward each other and simultaneously scoot back to maintain comfort. I thought that was cool. It's about social organization of people in a particular physical environment doing a particular task.
Wed, 26 Jul 2006
At Agile 2006, I'm seeing or inventing several unhappy trends that I want to call out.
At the first Agile Development Conference (the predecessor conference), I noted with surprise how often the word "trust" came up. At this conference, the surprisingly common word is "leadership." As in: "what's needed to make Agile succeed is executive leadership." Noteworthy: what was once called the Executive Summit is now the Leadership Summit.
As an inveterate champion of the little guy, I've always hated the Great Man theory of business. That's the idea that it all depends on the brilliance and Will of the Jack Welches and Chainsaw Als. I'm seeing that theory accepted as a matter of course in Agile, and it bugs me. It's part of the domestication of Agile: the fitting of something potentially disruptive into the comfortable patterns of life.
Imagine, if you will, the Great Man theory of the Scrum Master: "a team needs the leadership of their Scrum Master to excel." That's the opposite of the truth: the Scrum Master is not a master of the team; she's a master of Scrum: she knows best how the team can use Scrum to succeed. The team leads her, rather than vice versa. As both Mike Cohn and Ken Schwaber have said to me, one of the hardest parts of being a Scrum Master is not leading: keeping your mouth shut and insisting that the team solve their problem rather than depending on someone else to tell them what to do.
I view executive leadership in the same way. We know how to do software better. It's the executive's job to support us in doing that—to clear obstacles out of the way of our practice—and not to lead us. We already know where to go. We know how to do our job. We need to be assisted, not led.
Thu, 20 Jul 2006
Someone from the NOSQAA is being relentless about getting me to do something at their annual Quality Expo in Cleveland, Ohio, USA, in early November. (It happens that I have a client in Cleveland these days.)
That ties in with some thoughts about the long-overdue Scripting for Testers book. (Which is getting close, honest!) I'm not a fan of two- or three-day 60-people-in-a-room training courses. Even if there are lots of exercises, most of the course doesn't stick. It doesn't cause the kind of change that I want to cause.
So, when people call me and tell me they want me to train their testers in Ruby, I'm not planning on offering them such a course. Instead, I'm going to pattern my offering on the way I do consulting, which is to fly in for a week per month, sit down with people at computers and do work on their product, repeating the trips until they decide I'm no longer worth the money.
The Ruby variant would go like this: I won't train the testers in Ruby. I wrote a book that's supposed to allow them to self-train. So I want the company and testers to demonstrate that it won't all be a waste of time by working through parts 1 through 3 of the book on their own and starting to apply Ruby to their own projects. I'll come in, once or more, to help them with those projects, make observations, give impromptu mini-courses on topics I think they should know. That will be more expensive and time-consuming than a stand-up course, but it will have a much higher chance of working.
But I can do more, tying Ruby into my normal consulting. Suppose I'm flying to a city once a month anyway. What I'd like to do is organize something akin to a flash mob: a flash user group of testers (and others) who want to learn scripting. They'd learn it on their own, in concert or individually. When I'm in town, we'd have dinners devoted to the topic. At some point, we'd cap it off with a one-day mini-conference on Ruby and testing. I'm envisioning that the morning would be devoted to enticing beginners. Again, I'd downplay the lecture. What I'd want is the members of the existing flash user group to pair up with newbies and show them the Wonders of Ruby. In the afternoon, we'd have advanced topics. Perhaps something like RubyConf would work: have people present how they've used Ruby in their job. That way people would get ideas, hook up with people doing similar things.
Then, having gotten things going, I would ride off into the sunset.
To see if that works, I'd like to do a dry run in Cleveland. The question is whether there's interest. If you're near Cleveland and interested, drop me a line. Forward this URL to people in Cleveland. Let's see if we can get a critical mass going. If so, I'll tell Ms. Persistent-Far-Beyond-the-Call-of-Duty-They're-Lucky-to-Have-Her that she's won me over.
Wed, 19 Jul 2006
Here are things on my mind these days. If you're at Agile 2006, and you have experience to offer, please let me listen to your story.
Thanks. I should be easy to spot. I still look roughly like the picture at the top of the page, though with a mustache and dorky goatee now. Something like this:
Tue, 18 Jul 2006
A while back, I sat in while Ralph Johnson gave a dry run of his ECOOP keynote. Part of it was about refactoring: behavior-preserving transformations. The call was for research on behavior-changing transformations that are like refactorings: well-understood, carefully applied, controlled.
Ralph mentioned that persistent question: what does "behavior-preserving" mean? A refactoring will change memory consumption, probably have a detectable effect on speed, etc.
My reaction to that, as usual, was that a refactoring preserves behavior you care about. Then I thought, well, you should have a test for behavior you care about. ("If it's not tested, it doesn't work.") That, then, is my new definition of refactoring:
A refactoring is a test-preserving transformation.
If you care about performance, a refactoring shouldn't make your performance tests fail.
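As a toy illustration (all names invented): a refactoring may change how a function computes, say from a loop to Array#sum, but it counts as a refactoring precisely because the tests we care about still pass.

```ruby
# Invented example. The assertions at the bottom are "the behavior we
# care about"; any transformation that keeps them passing is, by this
# definition, a refactoring.
def total(prices)
  sum = 0
  prices.each { |p| sum += p }
  sum
end

# The test-preserving transformation: same observable behavior, tidier code.
def total_refactored(prices)
  prices.sum
end

# The tests both versions must preserve:
[[[], 0], [[1, 2, 3], 6]].each do |prices, expected|
  raise "behavior changed" unless total(prices) == expected
  raise "behavior changed" unless total_refactored(prices) == expected
end
```

If the suite also included a performance test, the rewrite would only qualify as a refactoring if that test, too, stayed green.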
Tue, 11 Jul 2006
I stumbled across a bunch of graphs about the US national debt, courtesy Mark Wieczorek. Keeping in mind my suspicion of simplistic use of numbers, one graph is still pretty interesting for someone who grew up, as I did, hearing all about "tax and spend" liberals of the Lyndon Johnson variety. It's below, showing the yearly increase in the debt in constant dollars. I've overlaid color. Republican Presidential administrations are red, Democratic blue. The bars across the top show control or near-control of Congress.
Note: the vertical lines are approximate. If I were truly serious, I'd make some sort of effort to determine if the lines should be shifted to the right (since Presidents don't have an instantaneous effect). However, I'm mainly doing this because I'm stuck on something I'm supposed to be writing, I need a break, and I'm in a hotel room in Cleveland.
The current situation is worse than that graph makes it appear, as the high level of yearly debt under Bush is projected to continue, as shown on the right (non-constant dollars).
The final picture is my family. The small ones get to pay it off—the "bridge to nowhere", an overpriced prescription drug plan, Paris Hilton's tax break on unearned income, sloppy accounting in Iraq, a culture of corruption that's far beyond what Democrats achieved in their days of power, all of it. That's shameful. I expect my children's generation will look at the adults of today and call us lazy, feckless, self-centered, and stupid. With justice.
Sat, 08 Jul 2006
Background: On the Agile Testing list, someone wrote:
This statement ignores context, and its application breeds contempt not only for context but for nurses.
I was in one of those moods, so I wrote this:
I've been talking about scrubbing for surgery with my wife (who both does it, and has a grant proposal out to study something related to it). What strikes me about it is something that's been said here before about testing in Agile projects, but I think needs to be said again.
One thing about scrubbing is there is universal agreement about the goal: minimize the amount of "trash" (bacteria, etc.) that gets into the wound.
Even though, in a non-emergency, you do always always always scrub, I was surprised at how much variation there is. Some people have a rule that you scrub each of four sides of each finger ten times. Some people think you don't have to count; you just have to scrub for ten minutes. Some scrub for five. People scrub with different things. And so on.
Although the rules vary, they are rules, rather than judgment calls. People do not scrub according to today's context. They scrub the way they always scrub, which is likely the way they were taught or the way their colleagues do it. It's not really possible for them to judge context -- there's just too much noise in the causal chain from scrubbing to surgical outcome. That also makes experimental justification of scrubbing techniques hard. Still, if pressed, a surgeon could make an argument for her style in terms of the agreed-on goal.
The other thing that struck me is the degree to which the (rich) world has been constructed around the goal of sterility.
Testing in Agile projects:
Those are the extremes, of course. I'm sure Michael takes advantage of opportunities to change the context, and I've seen Ron adapt to the context. However, the founding document of the context-driven school (Kaner et al.'s Testing Computer Software) says, right on page vii, in bold italic font, "This book is about doing testing when your coworkers don't, won't, and don't have to follow the rules."
I switched from the context-driven approach to what I saw as a different approach because I saw Agile as making two key shifts with respect to testing:
If I am right and the debate is really about emotional comfort and personal identity, I don't expect argumentation per se will resolve it. Of the people who talk about idea change in a convincing (to me) way, only Feyerabend gives much of a role to argumentation. His Against Method is (in large part) about how Galileo argued in favor of the Copernican system in Dialogue Concerning the Two Chief World Systems. According to Feyerabend, Galileo cheated. He misrepresented the opponents' arguments, ridiculed their conclusions by surreptitiously substituting his own assumptions for theirs, studiously avoided the weaknesses behind his favored theory, and appealed to his readers' desire to hang with the cool kids.
Tue, 27 Jun 2006
At an Agile Alliance board meeting, some of us were fretting that Agile 200X might go the way of a lot of conferences: the vast bulk of the attendees would be novices to the field, there would be a fixed set of experienced constant attendees (mostly the presenters), and the middle layers of experience would be missing. The middle layers wouldn't come because so much of the content would be tailored to novices.
There's nothing wrong with novices. More: a conference should cater to novices. However, that middle layer is necessary to advance the field and keep the conference lively and changing.
Jeff Patton said something—I forget what—and I spun his idea into the idea of the Agile Fringe. It's based on the Edinburgh Fringe, which "surrounds" the Edinburgh Arts Festival (and, in fact, dwarfs it). My understanding of the Fringe is that anyone willing to rent space can present anything they want. Fringe events can be more avant-garde than would fit in the regular Festival.
My idea is for an Agile 2006 Fringe. People willing to donate the proportional cost of a room to the Agile Alliance (or do something else that indicates they're serious) can have it for that time to do something of their choosing. They may throw it open to the public—post notices all over the conference—or they may confine it to a secretive cabal of insiders. Whatever they want.
My preference would be for something that involves doing, rather than only talking, since there are Open Space sessions for group discussions. But it'll be your space and your time: whatever you want is fine with me. For example, I could imagine continuing an Open Space discussion with a subset of like-minded participants.
As has been the case all year, I'm too overwhelmed to do an adequate job at any of my wild-eyed (or even staid) volunteer activities. I'm pretty sure we have the room. I don't know the cost yet. I've put little thought into it. It's up to you. If you want to make something of the opportunity, feel free. Contact me to tell me what should happen.
P.S. The Agile 2006 hotel is full. I believe attendance is already well over last year's, and another sell out would surprise no one.
The Agile Alliance is hiring a part-time operations manager:
More here. Pass it on.
Sun, 25 Jun 2006
I have a client that has many, many mainframes. Every project I might coach involves mainframes to a much greater extent than I've experienced before. I'd like to help the mainframe people with their programming and, especially, testing. If anyone has experience reports for me to read or stories to tell me, please do. I've already ordered Agile Database Techniques and Refactoring Databases.
I will set something up on the topic at Agile2006, both in the Open Space sessions and in the Agile 2006 Fringe (to be explained later).
This week, I gave seven (!) presentations of a live demo of testing and design in an Agile project. I started with a product director's idea for a story; showed the business-facing tests used to nail down that idea for the programmers; demonstrated how a programmer can use testing to make every step a small, safe, checked one; and ended (in some versions) with a working feature to be demoed and then manually tested (in an exploratory style). The idea was to get across a gut feel for how development feels, plus show some key principles in action.
Here's something that really came into focus as I (at first) kept radically changing the presentation and (later) tweaked it:
I expect product directors to read these documents collaboratively, sitting down with at least one programmer or tester. So the product director has to be semi-comfortable with the notation. (I also like it if that notation lends itself to looking at the feature in a different way. For example, a tabular notation for state machine designs encourages you to think through more cases than a node-and-arc notation does. That's also why Fit tests are good for business rules.)
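The state-machine aside can be made concrete with a small sketch (the door machine and its events are invented for illustration, not from any real project): when the transitions are laid out as a table, every uncovered (state, event) combination becomes visible, whereas a node-and-arc diagram lets you silently skip them.

```ruby
# Invented door state machine, written as a transition table.
STATES = [:opened, :closed, :locked]
EVENTS = [:open, :close, :lock, :unlock]

TRANSITIONS = {
  # [current state, event] => next state
  [:closed, :open]   => :opened,
  [:opened, :close]  => :closed,
  [:closed, :lock]   => :locked,
  [:locked, :unlock] => :closed,
}

# The tabular form invites the question a diagram doesn't force:
# what about all the other combinations?
uncovered = STATES.product(EVENTS).reject { |pair| TRANSITIONS.key?(pair) }
puts uncovered.size   # 8 of the 12 combinations still need an explicit decision
```

Four transitions are written down; the table immediately shows the other eight cases someone has to decide about.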
So we want readability by a non-technical audience. However, the need for the documents to be executable pushes the notation in the direction of the product's implementation language.
It's balancing those two forces that's the trick.
There are two other sets of forces to balance:

Fragility vs. comprehensiveness
The more detail there is in the test, the more fragile it becomes. That means a change to a single fact about the program will break many tests, and the breaking of a particular test may tell you nothing new about the program. That's wasteful.
And yet, detail that is not tested may not be gotten right in the first place. If it is right, but then goes wrong, you may well not notice it.
Excess detail seems to cause the most problems in the user interface. Today, my solution is to have the tests describe intermediate results from user-experience design (as I have glancingly learned it, mainly from Jeff Patton). Today's two types are:
I sometimes refer to myself as a "recovering abstracter." I used to jump to abstractions way too fast. Now I believe in building them gradually by implementing examples.
Nevertheless, abstractions are important. In many programs, the real value comes from the business logic. Those are abstractions (of what's already worked for the business, I hope). All of my tests above abstract out detail. More importantly, the story of a project's ubiquitous language is one of developing shared abstractions.
But the majority of business people, it seems, are not practiced at thinking in abstractions (at least, our kind of abstractions). Notoriously, they want to see the user interface right away, they want it to be pretty (that is, detailed), and they want to talk in terms of what's on a screen rather than the concepts behind it. Their desire to do that conflicts with our desire to abstract away fragile and confusing detail.
We need to strike a balance. Over time, we need to show them that they can get what they want from us more easily if they tolerate our need to write things down in weird and hard-to-visualize notations. (It worries me that I don't see what we're giving up in exchange.)
Thu, 15 Jun 2006
I'm practicing for a set of five demos I'm doing next week. In each, I'll work through a story all the way from the instant the product director first talks about it, through TDDing the code into existence, and into a bit of exploratory testing of the results. Something interesting happened just now.
Step one of the coding was to add some business logic to make a column in a Fit table pass.
In step two, I worked on two wireframe tests that describe how the sidebar changes. These tests mock out the application layer that sits between the presentation layer and the business logic.
What remained was to change the real application layer so that it uses the new business logic. That, I said (imagining my talk), is so simple that I'm not going to write a unit test for it. Even if I do mess it up (I claimed), I have end-to-end tests that will exercise the app from the servlets down to the database, so those would catch any problem.
You can guess the results. I made the change and ran the whole suite. It passed. Then I started up the app to see if it really worked, and it didn't. The problem is in this teensy bit of untested code:
The problem is that I have an extra
From this, we can draw two lessons:
I'm still not inclined to write a unit test.
This is the neatest thing to happen to me today. But nothing like it better happen in the real demo.
Earlier, I wrote about sentence style tests for rendered pages. Based partly on conversations about that entry with Steve Freeman and partly on bashing against reality, I've changed the style of those tests.
Since they are about the part of the app that the user sees and since I'd like them to be readable by the product director, I found myself asking where they would come from in a business-facing-test-first world and how the product director would therefore think about them. I imagined that, sometime early on, someone makes a sketch, paper prototype, or a wireframe diagram. So I came to think that this test ought to be a textual, automatically-checkable wireframe diagram. Like this:
One interesting thing is that I put the setup for the test after the checking code. That's because the page layout seems more important.
How well does that test describe this page? (The sidebar is described in tests of its own.)
I'll let you be the judge.
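Since the test and the page above were shown as images, here is an invented Ruby sketch of the general shape only (the names, the tiny Widget struct, and the faked "rendered page" are all stand-ins, not the original notation): the expected layout is declared first as a checkable structure, and the setup that produces the page comes afterward.

```ruby
# Invented stand-ins for a textual, checkable wireframe. A real version
# would compare the expectation against the renderer's actual output;
# here the "rendered page" is faked so the sketch runs on its own.
Widget = Struct.new(:kind, :name)

def text_field(name); Widget.new(:text_field, name); end
def button(name);     Widget.new(:button, name);     end

# The check comes first: the page should lay out like this.
expected_layout = [
  text_field(:animal_name),
  button(:choose),
]

# The setup comes last: produce the page (faked here).
rendered_page = [
  Widget.new(:text_field, :animal_name),
  Widget.new(:button, :choose),
]

puts expected_layout == rendered_page   # structural equality stands in for real checking
```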
Tue, 13 Jun 2006
It really gripes me when people argue that their particular approach is "agile" because it matches the dictionary definition of the word, that being "characterized by quickness, lightness, and ease of movement; nimble." While I like the word "agile" as a token naming what we do, I was there when it was coined. It was not meant to be an essential definition. It was explicitly conceived of as a marketing term: to be evocative, to be less dismissable than "lightweight" (the previous common term).
Discussing the characteristics of Agile software development by reference to the dictionary is akin to discussing the product characteristics of Glade Air Freshener according to the definition of "glade" as "an open space in a forest". There is some limited use: I can imagine an S.C. Johnson and Son executive objecting to a proposed new scent by saying a user smelling it is more likely to think of a day at the beach - "that briny smell" - than Bambi at the edge of the forest. In the same way, I can imagine someone saying of a development team that since it doesn't respond nimbly to changes in the business environment, it sure doesn't seem to be "agile" from the perspective of those paying the bills.
But it would be unreasonable for our executive to object to chemists adding tri-nitro-benzo-dawnocaine because it's extracted from sea water, not meadow earthworms. By the same token, the nimbleness of the Agile methods from the point of view of the business may be achieved by being inflexible about frequent releases of shippable software. Or the project might insist on a path toward faster feedback (like unit testing) even if that path's short-term costs are higher than some alternative and the long-term benefits of feedback aren't clear in this case.
In a way, context-driven testing may be more agile than Agile testing in that it relies on individual rationality and choice in cases where XP and even Crystal would at least begin by following rules and precedents.

Richard P. Gabriel is reported to have used the scrapheap metaphor in a 1986 talk about "Used Software".
Robert Chatley and Tom White have been working on sentence style for tests in Java:
Fri, 09 Jun 2006
Steve Freeman and Nat Pryce will have a paper titled "Evolving an Embedded Domain-Specific Language in Java" at OOPSLA. It's about the evolution of jMock from the first version to the current one, which is something of a domain-specific language for testing. It's a good paper.
I've been doing some work recently on an old renderer-presenter project, and I was inspired by the paper to rip out my old tests of a rendered page and replace them with tests in their style. Here's the result. It first has flexmock sentence descriptions of how the renderer uses the presenter. Then come other sentence descriptions of the important parts of the page structure.
I rather like that, today at least. It's much more understandable than my previous tests. After only a few months, I had to go digging to figure them out, but I doubt I'll have to do that for these. Moreover, I think these tests would be more-or-less directly transcribable from a wireframe diagram or sketch of a page on a whiteboard. They're also, with a little practice, reviewable by the product director.
(I'm still very much up in the air about how much automated testing how close to the GUI we should do, but this has nudged my balance toward more automated tests.)
I also remain fond of workflow tests in this style:
These workflow tests can be derived from interaction design work as easily as Fit tests are. They're less readable than Fit tests, but not impossibly code-like. These workflow tests are end-to-end. They go through HTTP (using my own browser object, rather than Watir or Selenium), into the renderer/presenter layer, down into the business logic, and through Lafcadio into MySQL.
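For flavor, here is an invented Ruby sketch of that sentence style. The browser object below is a fake defined inline so the sketch runs by itself; the real one drives the application over HTTP. Every name is made up for illustration.

```ruby
# Fake browser object standing in for one that would drive the app over
# HTTP. Each step returns self so the steps chain into a sentence.
class FakeBrowser
  attr_reader :trail

  def initialize
    @trail = []
  end

  def logs_in_as(user)
    @trail << [:login, user]
    self
  end

  def creates_case_titled(title)
    @trail << [:create, title]
    self
  end

  def sees_case_titled?(title)
    @trail.include?([:create, title])
  end
end

browser = FakeBrowser.new
browser.logs_in_as("betsy").creates_case_titled("lame heifer")
puts browser.sees_case_titled?("lame heifer")   # true
```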
Thu, 08 Jun 2006
Update to the previous article from Andy Schneider.
Pete Windle was the coauthor on the paper of Andy's I cited. Sorry, Pete.
Tue, 06 Jun 2006
Let's pretend there have been three ages of programming: the Age of the Library, the Age of the Framework, and the Age of the Scrapheap. They correspond to three ages of documentation: the Age of Javadoc, the Age of Javadoc (plus the occasional tutorial), and the Age of Ant.
The first substantial program I ever wrote was a reimplementation of Plato Notes (think USENET news) for the TOPS-10 operating system. To do that, I only had to learn two things: Hedrick's Souped Up Pascal and the operating system's API. I don't remember the documentation for Hedrick's Pascal - probably I mainly used Jensen and Wirth. If you've read most any book defining a programming language, you'd recognize the style. The operating system was documented with a long list of function calls and what they do. Anyone who's seen Javadoc would find it unsurprising—and vice-versa.
This style of documentation says nothing in particular about how to organize your program or how the pieces should fit together. The next Age provides more structure in the form of frameworks. JUnit is a familiar example: you get a bunch of classes that work together but leave some unfilled blanks, and you construct at least a part of your application by filling in those blanks. A framework will usually come with Javadoc (or the equivalent for the framework's language). There's likely to be some sort of tutorial cookbook that shows you how to use it, plus—if you're lucky—a mailing list for users.
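That "fill in the blanks" structure can be sketched in a few lines of Ruby. This is a toy xUnit-shaped framework invented for illustration, not JUnit's real API: the framework owns the control flow (finding, running, tallying) and calls the blanks you override.

```ruby
# A toy xUnit-shaped framework: it finds, runs, and tallies tests.
# The application programmer fills in the blanks: setup and test_* methods.
class MiniTestCase
  def setup; end   # blank: override to build a fixture

  def run
    test_names = self.class.instance_methods(false).map(&:to_s).grep(/\Atest_/)
    test_names.sort.map do |name|
      setup
      begin
        send(name)
        [name, :pass]
      rescue StandardError
        [name, :fail]
      end
    end
  end
end

# Only this part is written by the framework's user.
class ArithmeticTest < MiniTestCase
  def test_addition; raise "wrong" unless 2 + 2 == 4; end
  def test_bogus;    raise "deliberately failing";    end
end

puts ArithmeticTest.new.run.inspect
# [["test_addition", :pass], ["test_bogus", :fail]]
```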
The third age is the age of Scrapheap Programming (named after a workshop run by Ivan Moore and Nat Pryce at OOPSLA 2005). In this style, you weave together whole programs and large chunks of whole programs to solve a problem. (See Nat's notes.) The scraps have a sideways influence on structure: unlike frameworks, they are not intended to shape the program that uses them. But they have a larger influence on the structure than the APIs do. APIs still allow the illusion of top-down programming, where you match the solution to the problem and don't worry about the API until you get close to the point where you use it. In Scrapheap programming, it seems you rummage through the scrapheap looking for things that might fit and structure the solution around what you find.
What of documentation? Programming has always benefited from a packrat memory. One of the first things I did in my first Real Job was to read all the Unix manpages, sections 1-8, and just last year I surprised myself by remembering something I'd probably learned in 1981 and never used since. But I'm not so good at learning by using, which seems more important in scrapheap programming than in the previous ages.
There are two parts to that learning. You need to somehow use the world to direct your attention to those tools that will be useful someday: Greasemonkey, Cygwin, Prototype, and the like. Next, you have to play with them efficiently so that you quickly grasp their potential and their drawbacks.
There's a variant of dump picking that plays to my strengths. Once last month, I was faced with a problem and I said "Wait - I remember reading that RubyGems does this. I wonder how?" A short search of the source later, and I found some code to copy into my program. Last week I used something Rake does to guide me to a solution to a different problem.
Which raises another issue of skill. I'm halfway good at understanding Ruby code, even at figuring out why a Ruby app isn't working. As I've discovered when looking for a Java app to demonstrate TDD, I'm much worse at dealing with Java apps. When I download one, type 'ant test', and see about 10% of the tests fail (when none should), I don't know the first obvious thing a Java expert would do.
I liken this to patterns. There was a time when the idea of Composite was something you had to figure out instead of just use. There was a time when Null Object was an Aha! idea. As happened with small-scale program design, the tricks of the trade of learning code need to be (1) pulled out of tacit knowledge, (2) written down, (3) learned how to be taught, and (4) turned into a card game. I don't know who's working on that. A couple of sources come to mind: Software Archaeology, by Andy Hunt and Dave Thomas, and Software Archaeology, by Andy Schneider.
Fri, 26 May 2006
The economies of scale that favor large corporations come with diseconomies for many of the people who work within them. It's kind of like agriculture that way.
But large corporations are not closed systems. The customers of the large corporations get the benefit in lower prices (though not without hidden costs).
The people who win out are the economic hunter-gatherers who live on the fringes of the Large. People like me. We get the benefit of economies of scale without paying our share of the price. Sorry about that.
Thu, 25 May 2006
Any time you write code that sits on top of a third party library, your code will hide some of its behavior, reveal some, and transform some. What are the testing and cost implications?
By "cost implications," I mean this: suppose subsystem USER is 1000 lines of code that makes heavy use of library LIB, and NEW is 1000 lines that doesn't (except for the language's class library, VM, and the operating system). I think we all wish that USER and NEW would cost the same (even though USER presumably delivers much more). However, even if we presume LIB is bug free, we have to test the interactions. How much? Enough so that an equal-cost USER would be 1100 lines of unentangled code? 1500? 2000? It is conceivable that the cost to test interactions might exceed the benefit of using LIB, especially since it's unlikely we're making use of all of its features.
More likely, though, we'll under-test. That's especially true because I've never met anyone with a good handle on what we're testing for. Tell me about a piece of fresh code, and I can rattle off things to worry about: boundary conditions, special values like zero and null, and so on.
The result of uncertain testing is a broken promise. Given test-driven design, bug reports should fall into two categories:
The TDD promise is that there should be few type 2 real bugs. But if we don't know how to test the integration of LIB and USER, there will be many of what I call fizzbin bugs: ones where the programmer fixing them discovers that, oh!, when you use LIB on Tuesday, you have to use it slightly differently.
Since fizzbin bugs look the same to the product director or user, greater reuse can lead to a product that feels shaky. It seems to me I've seen this effect in projects that make heavy use of complex frameworks that the programmers don't know well. Everyone's testing as best they can, but end-of-iteration use reveals all kinds of annoyances.
I (at least) need a better way to think about these problems. More later, if I think of anything worth writing.
Wed, 24 May 2006
Here's an addition to my earlier hints for revising. What a reader sees as a digression often seems central to an author. To see how important it really is, try removing it. Then ask what text later in the piece has to be changed because of that. If the answer is "not much," you've got a digression.
The trick for an author alone is to tell which paragraphs to check. (After all, the whole problem is she's blind to what the reader sees.) Checking them all would be proportional to the square of the number of paragraphs—ick. All I can think of is to focus attention on changes of topic.
Once you've found a removable paragraph, you can either remove it (probably the safest choice) or make the rest of the piece depend upon it.
I've been working as product director for a project. As many do, I find that I ought to be spending more time at it than I can. I've written only a few business-facing tests (as examples). Would things have gone better if I'd written more? In some cases, yes. In other cases, no. It's actually worked fine to have the programmer implement his understanding of what I mean, then have me point at the mis-fits and describe tweaks. That's true even though he's remote and I'm doing the describing mainly by email and IM (with some voice).
This is a special case: reimplementation of an existing system, nothing exotic about the domain, etc. etc. What I'd like is a better understanding of when to use each of the following development tactics (and blends between them):
Note—and I think it's important—that I am assuming a full set of rigorous TDD-style tests. So the issue here has little to do with untested code; it has more to do with tradeoffs between styles of explanation.
Wed, 17 May 2006
When teaching TDD, what I like best is to work with people on real changes to their own code. Sometimes that doesn't work. There may be logistical problems. The code may have such a legacy nature that progress is way too slow to give them any feel for what a day in the life of a test-driven programmer is like (which is a big part of my goal).
When their code doesn't work out, demoing with toy applications is an unsatisfying alternative all around. I'd like to demo with some substantial open source Java application that was built test-first (so is testable). Does anyone have a recommendation? If so, mail me.
Sun, 14 May 2006
I used to teach the occasional class at the University of Illinois. One summer, I taught "CS397BEM: Being Wrong." The idea of the class was that any solution to a problem brings its own problems. The first example I gave in the class was the body's immune system. It's a solution to a problem: bacteria that want to eat us. So the body has neutrophils that eat the bacteria. But there's a problem: neutrophils exude antimicrobial crud. When they swarm to a site of infection as part of inflammation, the crud damages the body. The solution to that problem is to make the neutrophils short-lived. Once the bacteria are eaten, no more neutrophils are attracted and the existing ones die off before too much damage is done. (My resident expert says this explanation is "simplistic, but OK.")
I thought this class was important because we too often solve the problem in front of us, then stop. We don't try, even casually, to predict the accompanying problems. More importantly, we don't attend to the problems when they surface, so we let the inflammation get worse for too long.
Here's a problem I've noticed but ignored: Big Visible Charts lose their effectiveness over time. They cease providing the same pressure to improve or maintain. Part of it is that they become invisible; the eye ignores what it's seen a zillion times before. Another part, I think, is that people are bad at maintaining a level pace. We randomly jitter, sometimes in the worse direction, sometimes in the better. It's always easier to stay worse than to get better, so eventually one jog worseward isn't corrected with a jog betterward. Now you're at a worse level, and the fact that you've tolerated that makes tolerating the next jog worseward easier.
That's by way of explaining why my weight kept creeping up until the scale said 180.2 pounds. It's not just a disgusting lack of willpower: it's a universal law!
At some point in the decline, you need to stop, take serious stock of things, remind yourself of what you're trying to accomplish, adjust yourself, and return to the task with renewed energy. That's what I've done. It's back to the 2 pounds lighter per week regime, which I think is sustainable down below my previous low. Then the trick will be not to let the supposedly steady state get quite so out of hand next time.
The Big Visible Blog did help me take stock, especially once I crossed a multiple-of-ten threshold. Because other people were watching, I eventually couldn't stomach showing the trend without explanation. But an explanation would be too lame unless it were part of a description of a correction. Hence this post.
(All this might be sophistry, though. The fact is that it's been a truly lousy two months in most all of the spheres I care about—my family, my wife's job, the exhibited character of my nation, and parts of my work life. While Clif Builder Bars are not the junkiest of food, I overdo them as comfort food in black times.)
Tue, 25 Apr 2006
I don't know what the US should do in Iraq. I believe we're morally obligated to make it come out the best we can. I don't know if we're now doing that, if we could do better by changing course, or if our presence irretrievably does more harm than good. Since the Administration is unwilling to be truthful with the citizenry, since the press is unable to travel in Iraq and is in any case broken as an institution, and since I'm certainly not competent to collect and judge the data myself, I expect I won't know for twenty years, if ever.
However, it is wrong for me to sit here, fat and happy, paying no price while Iraqis, the US military, the UK military, and their families suffer. The least I can do is not add to the trouble of others by expecting my children to pay for all this.
Using figures from the US Internal Revenue Service and the Congressional Research Service, I figure my family's share of the Iraq+Afghanistan wars to date is around US$2749, and our share of ongoing costs is US$630 per year. Since we are a two-income family, make rather more than the average, and I believe in a progressive income tax, my rough guess is that we should pay a lump sum somewhere between US$5000 and US$7500, and then between US$1000 and US$1500 yearly. I urge the Congress to raise my taxes accordingly.
I'm serious. Who's with me?
Calculations based on 131,301,697 individual returns (2004), US$6.9 billion per month cost of operations and US$361 billion spent to date (October 2005).
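Those inputs do reproduce the per-family figures quoted above. The arithmetic, with the article's own numbers (nothing else assumed):

```ruby
returns        = 131_301_697          # individual returns, 2004
spent_to_date  = 361_000_000_000.0    # US$ spent through October 2005
monthly_cost   = 6_900_000_000.0      # US$ per month, ongoing operations

share_to_date  = spent_to_date / returns       # per-return share of the total so far
share_per_year = monthly_cost * 12 / returns   # per-return share of a year of operations

puts share_to_date.floor    # 2749
puts share_per_year.floor   # 630
```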
I believe in a progressive income tax because a dollar is worth less to me than to someone who makes minimum wage.
Fri, 21 Apr 2006
No doubt because I'm a self-proclaimed skeptic about definitions, I was asked:
[...] can you supply dictionary-like definitions for:
I've finished a first draft of "How to be a Product Director." PDF, 17 pages (with pictures and screen shots and pull quotes and sidebars). Comments welcome.
Wed, 19 Apr 2006
I will put the finished product up on the web.
An interesting story about pairing with users to implement new stories. I found it intriguing because it repeats some of my hobbyhorses from a different perspective (that of an APL programmer). Notice that he gives up on English in favor of examples. Notice also the adaptation of a programming language into a more-ubiquitous cross-domain language. (This is made easier by the close fit APL starts out with.)
The part of the story about moving processing in from a distant site reminds me of what I've started calling a service-oriented development strategy. The idea is to forget about the program as the object of the work and instead think of the project team as providing a repetitive service to the user base. In the company described in the link, someone—probably the product director—would funnel claims to process into the team. One person on the team would be an expert in manually processing claims. She'd be a bottleneck, so she'd enlist other available people—the programmers—to handle the simpler claims. Programmers being lazy, they'd quickly write code to make repetitive parts of the task go away. They'd also (and thereby) learn the domain. As they did, they could handle more complex claims, which would lead to more capable code. Lather, rinse, repeat.
Now, the truly lazy service person won't want to even type information into the program; she'll want the users to do it themselves. That means seducing the users into trading the ease of just forwarding a claim to the team for the benefit of higher throughput. So now—and only now—the programmers have to focus on making the UI usable by normal humans. First, they'll make it good enough for those few technology enthusiasts among the users. Then they'll improve it enough for the pragmatists and even the conservatives. (Thus, the standard high-tech adoption lifecycle is followed within a single project.)
At some point, you run out of claims that the product director thinks should be automated. So you stop and send the programmers off to do something else.
I've not convinced anyone to actually try this. I probably never will.
(Note: This strategy doesn't really match the story, since the users do nothing but claims processing. It's probably better suited to situations where the software is a necessary but non-central part of the job.)
One immediate objection is that this will lead to a lousy, patched-on-after-the-fact UI. For what it's worth, Jeff Patton (Mr. Agile Usability) doesn't think that's necessarily so. In fact, when I talked to him about it, he said that committing to a UI too early can hamper the project if there's not been time yet to incrementally make a decent model of the users' world(s).
Tue, 18 Apr 2006
We're going to try to gear this toward advanced practitioners, while recognizing there will probably be a majority of novices in the room. What we'll do first is divide into groups, each with a Customer (preferably someone who's held that role on a real project). That Customer will repeatedly describe a story, and the team will turn it into tests. After that, we'll reconvene, discuss, identify 2-4 areas of interest, break into smaller discussions, address them, and produce a poster summary for each.
Because of the italicized bit above, I'd like you to encourage all your Real Customers to come to the conference. Even if they don't come to our workshop, this conference needs to be a place where Customers can learn from each other.
Wed, 12 Apr 2006
I'm not one to quibble over definitions. If someone points at something that's obviously a cow and says "deer", I usually don't argue the point. While we're arguing about what it is we're about to feed, the poor beast will starve.
Still, it creeps me out when people refer to tests (aka examples) as specifications. There's an important distinction:
A specification describes a correct program, while a test provokes a correct program.
In math geek terms, specifications are universally quantified statements, ones of the form "for all inputs such that <something> is true of them, <something else> is true of the output." Tests are constant statements, ones with no variables.* They look like this: "given input 5, the output is 87."
This matters because, while both kinds of statements can be true or false, the only way to deduce the truth of a universally quantified statement from a set of constant statements is to exhaustively list all possible inputs. That's rarely possible.
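One way to see the gap, with an invented function (the implementation and the "17n + 2" rule are mine, purely for illustration): a test is a single constant statement about the function, while the specification quantifies over every input, and even a large sample of constant statements never adds up to the quantified claim.

```ruby
# Invented implementation; nothing from the original post.
def compute(n)
  17 * n + 2
end

# A test: a constant statement with no variables.
# "Given input 5, the output is 87."
puts compute(5) == 87   # true

# The specification: "for all n, compute(n) == 17n + 2."
# Sampling is as close as a test suite can get to the universal claim:
puts (0..1000).all? { |n| compute(n) == 17 * n + 2 }   # true
```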
To make the point concrete, a set of tests allows the programmer to write this:
if (the input is that of test 1)
    produce test 1's expected output
else if (the input is that of test 2)
    produce test 2's expected output
...
else
    do something (anything at all)
Given that code, the tests say absolutely nothing about the correctness of the something that's done for all remaining cases.
Absurd example? An employee of a beltway bandit once told me his project had done exactly that. Proudly told me, no less.
But let's pretend we live in an ethical culture. There, the tests combine with certain habits and memories to provoke particular actions. Consider a programmer faced with two tests:
Those tests could be passed with this code:
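(The original tests and code were shown as images; this Ruby pair is an invented stand-in that makes the same point.)

```ruby
# Two invented tests, stated as constant statements:
#   add(1, 2) must be 3
#   add(3, 4) must be 7
#
# Code that passes both without computing anything general:
def add(a, b)
  return 3 if [a, b] == [1, 2]
  7
end

puts add(1, 2)   # 3
puts add(3, 4)   # 7
```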
But a programmer who's been raised well has a fastidious distaste for code like that.
The assertions themselves are like two pebbles rolling downhill. Whether they start an avalanche depends on what they roll into: the hill has to be ready. For the test-driven, the avalanche is a procedural assurance that the program computes the general rule, not just the tested cases.
That's why I don't like calling sets of tests a specification. In practical terms, I don't like it because it always, always, always leads to someone making the argument about universal quantification vs. tests or quoting Dijkstra to the effect that "program testing can be used very effectively to show the presence of bugs but never to show their absence." The ensuing discussion is rarely, in my opinion, a good use of time. So what am I doing having it?
* Actually, a test statement can be seen as having variables, being a quantified statement like this:
For all a, b, c, ..., x, y, z: given input 5, the output is 87
where each of the variables is something you hope is irrelevant to the output. The trick is to capture all the relevant variables, pin them down, and feed them into the process.
Sun, 09 Apr 2006
At yesterday's successful Continuous Integration and Testing Conference, it occurred to me that the aim of continuous integration is to addict people to particular feelings. When they don't feel them, they'll do things to produce them. Those actions are good ones; they'll solve or head off problems. Those feelings are:
Thu, 06 Apr 2006
During one of the days of Agile2006, Bill Wake and I will be hosting a set of "extreme test makeovers." Throughout the day, we'll have makeover artists who are experts in unit and acceptance testing, with tools like Fit, JUnit, NUnit, Watir, and more. Some of the people who've expressed interest in helping touch up tests are Brian Button, Ward Cunningham, Janet Gregory, Ron Jeffries, Rick Mugridge, Bret Pettichord, Charlie Poole, and Jim Shore.
The idea is that people will bring their laptop, already loaded with tests that can be run. Best would be tests for real product code; that way, you can go back to work and justify the conference trip by slapping your laptop down on your boss's desk and showing improved tests. Tests for any substantial chunk of code are OK, though.
Did I mention that people should bring tests that can be run? That's really important.
Sessions will be 90 minutes each, with five minutes for expert speechifying at the beginning and ten minutes to record lessons learned and stick them up on the wall.
For at least some of the sessions, we'll provide some way for observers to see what's happening (a projector and a microphone).
We'll have a signup sheet at the conference, but I've also started a mailing list where people with tests (that can be run at the conference) can hook up with makeover artists. It's http://groups.yahoo.com/group/test-makeover. People who want to help out can also announce that there.
There may also be informal sessions after the formal ones.
The Gordon Pask Award recognizes two people whose recent contributions to Agile Practice demonstrate, in the opinion of the Award Committee, their potential to become leaders of the field. The award comes with a check for US$5000.
Last year's recipients were:
You can see that we are looking for people who provide both ideas and actions. We want people who are advancing the state of the practice. But we also want people who are spreading knowledge of the existing state of the practice, so that Agile teams know what more there is to learn. And we also want people who are helping people on a personal level, not just at the abstract level of ideas.
Send nominations making the case for a particular person to firstname.lastname@example.org The deadline for nominations is May 31.
Thu, 16 Mar 2006
From the Dept. of Painstaking Even-Handedness: Yes, but the credit card company's explanation that it's not as bad as it seems does have some weight. The author didn't in fact simulate a meth addict rooting through someone's trash.
Wed, 15 Mar 2006
Why is it that I so stubbornly believe that code can get more and more malleable over time? — Two early experiences, I think, that left a deep imprint on my soul.
I've earlier told the story of Gould Common Lisp. The short version is that, over a period of one or two years, I wrote thousands of lines of debugging support code for the virtual machine. Most of it was to help me with an immediate task. For example, because we were not very skilled, a part of implementing each bytecode was to snapshot all of virtual memory,* run the bytecode in a unit test, snapshot all of virtual memory again, diff the snapshots, and check that the bytecode changed only what it should have.
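That snapshot-and-diff style of checking can be sketched in miniature (a hypothetical reconstruction in Ruby, nothing like the original Lisp VM code): treat "memory" as a hash, snapshot it before and after an operation, and insist the diff contains only the cells the operation was allowed to touch.

```ruby
# Return the keys whose values differ between two snapshots.
def diff(before, after)
  (before.keys | after.keys).reject { |k| before[k] == after[k] }
end

# Run a block, then complain if it changed anything outside the allowed set.
def assert_touches_only(memory, allowed)
  before = memory.dup
  yield
  illegal = diff(before, memory) - allowed
  raise "illegally changed: #{illegal.inspect}" unless illegal.empty?
end

memory = { accumulator: 0, stack_pointer: 100, scratch: nil }
assert_touches_only(memory, [:accumulator]) do
  memory[:accumulator] += 5   # the "bytecode" under test
end
```

The real version diffed all of virtual memory rather than a hash, but the shape of the check is the same.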
The program ended up immensely chatty (to those who knew how to loosen its tongue). There are two questions any parent with more than one child has asked any number of times: "all right, who did it?" and "what on earth was going through your mind to make you think that was a good idea?" The Lisp VM was much better at answering those questions than any human child.
I was only three or so years out of college, still young and impressionable. Because that program had been a pleasure to work with, I came to think that of course that was the way things ought to be.
Later, I worked on a version of the Unix kernel, one that Dave
Mystery Kernel, so I became well aware that not all programs
were that way. But at the time, I was also a maintainer of GNU Emacs
PowerNode. With each major release of Emacs, I made whatever
changes were required to make it run on the PowerNode, then I fed
those changes back to Stallman. Part of what I worked on
That experience also made an impression on me, and probably accounts for a tic of mine, which is to hope that each change will give me an excuse to learn something new about the way the program ought to be.
I've been lucky in the experiences I've had. A big part of my luck was being left alone. No one really cared that much about the Lisp project, so no one really noticed that I was writing a lot of code that satisfied no user requirement. GNU Emacs was something I did on my own time, not as part of my Professional Responsibilities, so no one really noticed that Stallman pushed harder for good code than those people who were paid to push hard for good code.
I'm not sure whether people on the Agile projects of today have it better or worse. On the one hand, ideas like the above are no longer so unusual, so it's easier to find yourself in situations where you're allowed to indulge them. On the other hand, people's actions are much more visible, and they tend to be much more dedicated to meeting deadlines—deadlines that are always looming. I'm wondering these days whether I'm disenchanted with one-week iterations. I believe that the really experienced team can envision a better structure and move toward it in small, safe steps that add not much time to most every story. I'm not good enough to do that. I need time floundering around. To get things right, I need to be unafraid of taking markedly more time on a story than is needed to get it done with well-tested code that's not all that far from what I wish it were (but makes the effort to get there one story bigger and so one story less likely to be spent). It's tough to be unafraid when you're never more than four days from a deadline.
So I think I see teams that are self-inhibiting. When I work with programmers (more so than with testers), I find it difficult to calibrate how much to push. My usual instinct is to come on all enthusiastic and say, "Hey, why don't we merge these six classes into one, or maybe two, because they're so closely related, then see what forces—if any—push new classes out?" But then I realize (a) I'm a pretty rusty programmer, (b) I know their system barely at all, (c) they'll have to clean up any mess we make, not me, and (d) there's an iteration deadline a few days away and a release deadline not too far beyond that. So I don't want to push too hard. But if I don't, someone's paying me an awful lot of money to join the team for a week as a rusty programmer who knows their system barely at all.
It ought to be easier to focus just on testing, but the same thing crops up. There, the usual pattern goes like this: I like to write business-facing tests that use fairly abstract language (nowadays usually implemented in the same language as the system under test). My usual motto is that I want to see few words in the test that aren't directly relevant to its purpose. Quite often, that makes the test a mismatch for the current structure of the system. It's a lot of work to write the utility routines (fixtures) that map from business-speak to implementation-speak. Now, it's an article of faith with me that one or both of two things will probably happen. Either we'll discover that the fixture code is usefully pushed down into the application, or a rejiggering of the application to make the fixtures more straightforward will make for a better design. But..., (a) if I'm wrong, someone else will have to clean up the mess (or, worse, decide to keep those tests around even though they turned out to be a bad idea), and (b) this is going to be a lot of work for a feature that could be done more easily, and (c) those deadlines are looming.
I manage to muddle through, buoyed—as I think many Agile consultants are—by memories of those times when things just clicked.
* The PowerNode only had 32M of virtual (not physical) memory, so snapshotting it was not so big a deal.
A question to ask every time you finish a story: "What's now easier to do?" Be ever so slightly disappointed unless each story contributes, in some small way, to making the system more malleable.
Tue, 07 Mar 2006
Mon, 06 Mar 2006
One of the things that interests me is how a team gets into alignment with its Product Director (sic). The path is often rocky, especially with someone who's never been a Product Director before.
Something I hear a lot of complaints about—and have griped about myself—is Product Directors who are too attached to the UI:
I think the cause is one part lack of experience, one part fear, one part necessity, and three parts reasons I don't know yet. The Product Directors are not used to talking about programs in abstract terms, in conversations where they're not pointing at a UI element but instead pointing at (say) nodes in business workflows. The fear is that crudeness of interface extends all the way down through the code, so that it represents everything being tackily done, not just the top layer. Will the programmers ever be able to put a good GUI on?
Those are (I claim) bad causes. The good cause is that the Product Director is the project's representative outward to the business. She will be showing the product to lots of people even more likely to judge a book by its cover, a product by its UI. A snazzy UI may be the path of least resistance.
Still, I think we'd be better off if we knew how to make a persuasive case for growing the UI as gradually as we grow the feature list. When the product is halfway to release, it should have half the features and half the UI glitz, not 1/4 of the features and all of the glitz.
If you have a track record at persuading Product Directors to hold off on the GUI, or if you have anything profound to say about any aspect of aligning the Product Director and the team*, I'd like you to write me an article for the princely sum of US$500 and undying fame.
* I don't want to give the impression that I think the Product Director does all the shifting of perspective and the rest of the team does none. It's not a matter of whipping the Product Director into shape. The alignment is a matter of trust going in all directions; I just happen to focus on having the Product Directors trust the programmers and process because lack of that trust makes it harder for me to get example-driven development going.
Tue, 28 Feb 2006
In some hints for revising, I wrote:
I hardly ever read my text aloud without remembering an incident from my days as an English major. In one class, we had to write a poem. Other people read them aloud. When someone read mine, I discovered that what sounded OK when I read it sounded awful when he did. There were places where I slowed down, sped up, or placed emphasis and he did not. He didn't because there were no cues in the text to tell him to do that. All the cues were in my auditory memory or imagination.
Recently I've been experimenting with having my Powerbook read the text to me (Program -> Services -> Speech). "Vicki's" rather odd intonation helps me find awkwardnesses that I don't otherwise notice. She's not a replacement for my own reading, but I think it's worth listening to her reaction.
Mon, 27 Feb 2006
The responsible among you will be using software with "a number of security vulnerabilities [...]. Although the vulnerabilities are serious, they are all easily fixable." A cynical person—not me!— might take "a serious flaw in the key management of the crypto code" which "was openly published two and a half years ago in a famous research paper, and is now known by anyone who follows election security, and can be found through Google"—but is not yet fixed—to suggest that the bugs, including that one, might not be allocated the "few hours [required] to do the whole job" any time soon.
Have no fear, though, since "the security issues are manageable by a reasonably careful combination of short-and long-term approaches." I'm sure that everyone involved is reasonably careful at all important times. Have fun!
Wed, 22 Feb 2006
My "working your way out of the GUI testing tarpit" series really ought to be put into a single paper with the rough transitions smoothed over. Until that happens, if ever, what I've got will have to serve. Here's the table of contents.
Mon, 20 Feb 2006
The Big Visible Chart of my weight at the top of the blog has been red for too long. This has been a stressful month and a lousy, lousy week, with stress coming in from multiple directions and time to exercise coming from nowhere. The added stress of falling short of the two-pounds-per-week goal every week is working against meeting it. It's the wrong kind of feedback. Therefore, from next week until things get better, green will mark any weight below 170, red any weight over 173, and grey the range between. I expect no better than grey.
The American Dialect Society voted "truthiness" the 2005 word of the year. It "refers to the quality of preferring concepts or facts one wishes to be true, rather than concepts or facts known to be true." For me, 2006 is turning into the year of replacing end-to-end tests with unit tests. One risk to face is that unit tests can play into truthiness. This picture illustrates the problem:
Everything seems fine here. The tests all pass. What the picture doesn't show is that the Widget tests require the strings from the Wadget to be in ascending alphabetical order. The fake Wadget dutifully does that. The Wadget tests don't express that requirement, so the real Wadget isn't coded to satisfy it. The strings come back in any old order.
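Here's a hypothetical sketch of that mismatch in code (my example; the names come from the picture, the bodies are invented):

```ruby
# The fake Wadget obligingly hands back sorted strings...
class FakeWadget
  def strings
    ["apple", "berry", "cherry"]
  end
end

# ...but nothing in the real Wadget's own tests demands that order.
class RealWadget
  def strings
    ["cherry", "apple", "berry"]
  end
end

# The Widget silently assumes ascending order.
class Widget
  def initialize(wadget)
    @strings = wadget.strings
  end

  def first_alphabetically
    @strings.first   # only correct if the strings arrived sorted!
  end
end

# Passes against the fake; the requirement never reaches the real Wadget.
raise unless Widget.new(FakeWadget.new).first_alphabetically == "apple"
```

Both unit test suites pass, yet wiring the real objects together gives the wrong answer.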
Truthiness would be wishing that unit tests add up to a working system. But the truth is that those two units would add up to a system like this:
We know that those sorts of mismatches happen in real life. So we should fear unit tests.
More tests are a proper response to fear. Hence the desire to wrap the entire chain above in an end-to-end test that 'sees what the user sees'. However, such tests tend to be slow, fragile, etc. So I want to replace them with smaller tests or other methods that are fast, robust, etc., thus reducing the need for end-to-end tests to a bare minimum.
Two such methods are:
I expect there are a host of other tricks to learn (but I'm not at this moment aware of places where they're written down). What seems key to me is to take the strategy of "something could go wrong somewhere, so here's a kind of test with a chance of stumbling over some wrongness" and replace it with (1) a host of tactics of the form "this could go wrong in places like that, so here's a specific kind of test or coding practice highly likely to prevent such bugs" and (2) a much more limited set of general tests (including especially manual exploratory testing).
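One such specific tactic (my example, not necessarily one the post had in mind) is a shared "contract" check run against both the fake and the real collaborator, so the fake can't silently promise more than the real thing delivers:

```ruby
# A contract both the fake and the real Wadget must honor. Running it
# against both keeps the fake from drifting away from reality.
def assert_honors_wadget_contract(wadget)
  strings = wadget.strings
  raise "strings must be ascending" unless strings == strings.sort
end

class FakeWadget
  def strings
    ["apple", "berry"]
  end
end

class RealWadget
  def strings
    ["berry", "apple"]   # real order is arbitrary: the contract exposes it
  end
end

assert_honors_wadget_contract(FakeWadget.new)    # passes
begin
  assert_honors_wadget_contract(RealWadget.new)  # fails, as it should
  raise "contract check should have failed"
rescue RuntimeError => e
  raise unless e.message == "strings must be ascending"
end
```

The point is that the ordering requirement, which lived only in the fake before, is now stated once and enforced on both sides of the seam.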
P.S. I don't like the word "truthiness." It seems statements should have truthiness, not people. A question for you hepcats out there who are down with the happening slang: which is more copacetic, "that's a truthy statement" or "that's a truthish statement"?
Sat, 18 Feb 2006
A client and I were talking over how Model-View-Presenter would work for web applications. The sequence diagram to the right (click on it to get a bigger version in a new window) describes a possible interpretation. Since the part that corresponds to a View just converts values into HTML text, I'm going to call it the Renderer instead. The Renderer can be either a template language (Velocity, Plone's ZPT, Sails's Viento) or—my bias—an XML builder like Ruby's Builder.
I did a little Model-Renderer-Presenter spike this week and feel pretty happy with it. I'm wondering who else uses something like what I explain below and what implications it's had for testing. Mail me if you have pointers.
(Prior work: Mike Mason just wrote about MVP on ASP.NET. I understand from Adam Williams that Rails does something similar, albeit using mixins. So far handling the Rails book hasn't caused me to learn it. I may actually have to work through it.)
Here's the communication pattern from the sequence diagram:
What good is this? Classical Model-View-Presenter is about making the View a thin holder of whatever controls the windowing system provides. It does little besides route messages from the window system to the Presenter and vice versa. That lets you mock out the View so that Presenter tests don't have to interact with the real controls, which are usually a pain.
There's no call for that in a web app. The Renderer doesn't interact with a windowing framework; it just builds HTML, which is easy to work with. However, the separation does give us four objects (Action, Model, Renderer, and Presenter) that:
The second picture gives a hint of the kinds of checks and tests that make sense here. (Click for the larger version. Safari users note that sometimes the JPG renders as garbage for me. A Shift-Reload has always fixed it.)
More later, unless I find that someone else has already described this in detail.
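For concreteness, here's a minimal Ruby sketch of that separation (my reconstruction; the class bodies and names are assumptions, not the spike's actual code):

```ruby
# The Model holds application state.
class Model
  def cases
    ["betsy"]
  end
end

# The Renderer is a pure function from values to HTML text: trivially
# testable, no windowing framework, no mocking required.
class Renderer
  def case_list(names)
    "<ul>" + names.map { |n| "<li>#{n}</li>" }.join + "</ul>"
  end
end

# The Presenter decides what to show and delegates the HTML itself.
class Presenter
  def initialize(model, renderer)
    @model = model
    @renderer = renderer
  end

  def show_cases
    @renderer.case_list(@model.cases.sort)
  end
end

html = Presenter.new(Model.new, Renderer.new).show_cases
raise unless html == "<ul><li>betsy</li></ul>"
```

Each object can be tested in isolation: the Renderer with plain value-to-string checks, the Presenter with a stub Renderer if you care only about its decisions.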
Thu, 16 Feb 2006
Over at the Agile-Testing list, there's another outbreak of a popular question: are testers needed on Agile projects? To weary oldtimers, that debate is something like the flu: perennial, sneakily different each time it appears so that you can't resolve it once and be done with it, something you just have to live with.
After skimming the latest set of messages on the topic, I returned to editing a magazine article and then I had a thought that might just possibly add something.
Editors are supposed to represent readers (and others), just as testers are supposed to represent users (and others). To an even greater extent than testers, editors do exactly what the represented people do: they read the article. And yet, you can't take J. Random Reader and expect her to be a good editor. Why not?
It seems to me that as readers we're trained to make allowances for writers. We're so good at tolerating weak reasoning, shaky construction, and muddled language that a given reader will notice only a fraction of the problems in a manuscript. A good editor will notice most of them. How?
Some of it is what "do we need testers?" discussions obsessively circle: perspective. Editors didn't write the manuscript (usually...), so their view of what it says is not as clouded by knowledge of what it should have said. Editors also do not have their ego involved in the product.
But that perspective is shared by any old reader. What makes editors special is, first, technique. I put those techniques into two rough categories:
But there's something else that editors and testers have that programmers don't have: leisure. When I'm acting as a pure reader, I intend to get through it and out the other side quickly. As an editor, there's no guilt if I linger. There's guilt if I don't. One problem that Agile projects have is a lack of slack time, down time, bench time. There's velocity to maintain—improve—and the end of the iteration looms. Agile projects are learning projects, true, but the learning is in the context of producing small chunks of business value. There's no leisure for the focus to drift from that. (I'm using "leisure" rather than "permission" because so much of the pressure is self-generated.)
My hunch is that perspective is less important than technique and leisure for producing good products. If the testing and programming roles are to move closer together (which I would like to see), the real wizards of testing technique need to collaborate with programmers to adapt the techniques to a programmer's life. (I tried to do that a few years ago. It was a disaster, cost me two friendships. Someone else's turn.) And projects need some way to introduce leisure. (Gold cards?)
Wed, 15 Feb 2006
Jeffrey Fredrick wrote about a conference:
Author's note: I know that at most five people want to read my thoughts on the traditional separation of powers. It's just that public discourse in the US is so broken, unserious, and partisan that I sometimes get this image of my college-age children, ten years hence, asking what I did about it. And then I write something so that I can tell them then that back in 2006 I commanded the tide to stop. Take heart, though. Coming up are a few postings on model-view-presenter, web applications, and the testing implications.
So here's the way I understand it.
Congress established a court and a law, FISA, governing the wiretapping of foreign intelligence agents. That court has rarely denied a warrant, though they've modified some larger number. Warrantless surveillance is allowed for fifteen days (after declaration of war), three days (to gather evidence to be used for a warrant application), or one year (but only of foreign nationals).
The Administration has steadfastly refused to describe limitations on its powers. When signing new laws (such as the recent torture ban), the President has expressly reserved the right to bypass them because of his commander-in-chief power. Other presidents have used "signing statements" in the same way, but this one uses them far more often. (As far as I know, the legal force of signing statements has yet to be decided.)
Further, to my knowledge, the Administration has not proposed any bills to remedy the claimed defects in FISA. (In the searching that led to all these links, I found claims that the Republican majority had offered such, but were rebuffed. I didn't find primary sources, though.) By going the legislative route, they would involve all three branches of government in these important decisions.
The Executive branch is not showing the caution that Washington called for. It's unconservative, since conservatism—if it is to mean anything—ought to mean a healthy distrust of messing with what works in hope of something better. I'm a strange mixture of conservative and what (in the US) is called liberal. But when it comes to the American presumption that you need a system designed to work despite being run by knaves and scoundrels, not because it's run by wise men, I'm conservative.
Until the Administration demonstrates that they are being cautious about encroaching on the other branches (by, say, giving examples of potential wartime powers they do not claim), or argues publicly that they require more powers, the people of the US should urge their representatives in Congress to push back against any appearance of usurpation.
P.S. I know from Google searching that many people can not distinguish an argument about separation of powers from a desire to leave Al Qaeda's phones untapped. So just let me say that I don't have enough information to have an opinion about the specific surveillance in question.
Tue, 07 Feb 2006
Update: I forgot that Kevin Rutherford also suggested the word "director". Great minds, etc.
Mark Smeltzer has come up with an alternative to Appraisers: Product Directors. I like that. It puts the focus on the product, not on managing the team. It connotes movement and responsiveness. Unlike the common metaphor of driving projects, it doesn't imply that other people are passive passengers. Instead, they're active participants in a joint project. If the word makes you think of a movie director, it also brings to mind that person charged with having the clearest idea of the end product during production. It also ought to have connotations of balancing features and cost, of producing the most you can within a given budget. Sometimes it does, though directors get notorious for the opposite.
We also talked a bit about "ScrumMaster" and what might be a more business-friendly term. Mark points out:
Making an analogy to the film industry, the AD (Assistant Director) role embodies many of the ideas and responsibilities associated with ScrumMasters. In the end, that may be what I go with: Assistant Product Director.
One thing I learned from the smattering of email: if told to link a name to one property of the role, different people have very different ideas of what that one property should be. Trying to pick a name within the project might lead to a useful discussion (reminiscent of Gause and Weinberg's heuristic for naming projects in Exploring Requirements). Or, in the wrong hands, it might lead to the most tedious and pointless discussion possible.
Sun, 05 Feb 2006
I'm pleased to have been part of the inspiration for bellygraph.com. As you can see above, my own cruder bellygraph has not recovered from the holidays. Too much to do + more travel + winter + general stress = more eating & less exercise. Humbug.
... so someone else should.
Tue, 31 Jan 2006
The Pacific Northwest Software Quality Conference is one of my favorite conferences. I think it usually runs about 200 people, so it's small enough to meet people. As a regional conference always in the same place (Portland, OR, USA), there's a continuity of attendees that allows some papers to be less introductory than in other conferences.
They tell me:
Deadline is March 31.
The idea of essays is one of those oddities that have made OOPSLA so interesting and productive over the years. You should submit. By March 18.
I'm tired of having to write "Customers (product owners, business experts, etc.)" when talking about the particular project role XP calls "customer" (or "Customer," in a largely fruitless effort to short-circuit the association with someone buying something in a store).
We don't have this problem with "programmer" or "tester", so what's up with that other role? Maybe it's that its name is not based on a verb. It's kind of clear what the central activity of a programmer or tester is—to program or to test—but what is it that a Customer does? Customate? A product owner presumably owns, but "to own" is a pretty passive concept.
Maybe things would be clearer if (a) the noun we used for the Customer role was linked to a verb, and (b) that verb had something to do with the central activity of a Customer (product owner, etc.).
And what is that central activity? I think it's to determine the value of a particular proposed change. The verb that comes to my mind is "appraise." So the role would be named Appraiser. Here's a definition:
1: one who estimates officially the worth or value or quality of things
I like the word "officially," which hints at the making of a final judgment. I also like "authenticity" and "validity." They have connotations of determining whether something is real or not. In software, the Appraiser determines whether something that could become real should become real.
The only active-verb-based alternative in semi-common use is Goal Donor. I think it's inferior to Appraiser because it's about what that role does from the perspective of a programmer. From the perspective of the business, the judging of value is more important than the giving of goals.
Therefore, unless I get a better suggestion by February 15, 2006, on that date all references to "Customer" in XP books or "Product Owner" in Scrum books will retroactively change to "Appraiser," in exactly the same way that "test-driven" became "example-driven" in late 2003.
Sun, 29 Jan 2006
It's just shy of five years since the Agile Manifesto was written. I've often said that I dread the day when I look back on the me of five years ago without finding his naivete and misconceptions faintly ridiculous. When that day comes, I'll know I've become an impediment to progress.
So what about the me of 2001? I do find him a bit ridiculous, though not enough for comfort. During a shortish plane ride, I came up with this list of what I didn't know then:
Tools are important. I'm flying back from working a week at a Delphi shop. Doing... anything... in... Delphi... is... just... so... tedious... that... it... makes... you... want... to... scream. I think it no coincidence that so many of the Agile Manifesto authors had past experience with Smalltalk (or, in my case, Lisp). That kind of background makes it easier to think of software as something you could readily change. I don't think Agile would have taken off without semi-flexible languages like Java and the fast machines to run them.
Moreover, each new tool—JUnit, Cruise Control, refactoring IDEs, FIT—makes it easier for more people to go the Agile route. Without them, Agile would be a niche approach available only to the ridiculously determined.
People get stuck. What I seem to see often is a team making a big leap. They become more productive, they become happier, the business becomes happier with them. Then they plateau. Now, I know from my weightlifting days that plateaus are a part of growth, but it seems surprisingly hard to make the next leap.
Sometimes I find other Agile consultants surprisingly wistful. The projects they're working with are doing better than they ever did before, but somehow they're not making it to that peak experience the consultant remembers.
The customer role is far harder than I'd anticipated. Five years ago, I wouldn't have said the customer role is the hardest on the project. Now I say it all the time. I also greatly underestimated how central the role is. Sometimes I tell people that I think of good Agile teams as like a compass with the magnetic pole being the customer. You can divert their actions away from the customer, but they'll always push to orient themselves that way. It's an unusually personal relationship.
Testers aren't translators. My image—only half conscious—was of the tester taking business-speak and translating it into tests for the programmers to pass. Now I think of the tester as much more someone who makes nudges that encourage and streamline direct conversations. The translation out of business speak should happen in the code.
Making business-facing tests is difficult and subtle. I pretty much thought I knew how to write "black box" tests, and that the tester's job would be to write those same tests, just earlier and based on much more intensive conversation with the customer. But the tests I advocate today are quite different than the ones I remember thinking about back then, and I'm still coming up with what appear to be important twists.
The interaction between testing and design complicates things. Five years ago, I viewed "test infected" programmers as an uncomplicated good. Programmers, I said, were so enthusiastic about testing that they'd willingly add the hooks testers have always wanted. I'm now thinking it's more complicated. Test-first unit testing leads to small-scale changes in design. Test-first large-scale testing seems to require similar changes in architecture. (See my recent interminable series for hints along those lines.)
Back then, I thought of testers as getting technical stories added to the mix. A tester could do tests of type X much more easily if the programmers did Y, makes the business case to the customer, who can decide to add a story to do Y. Or I thought of testers as writing particular stories in a particular format. When the programmers made those tests pass, the usual rules about minimizing duplication, etc. would cause the architecture to emerge naturally.
I now think that the interaction between tests and architecture will require much closer and sustained conversation than that (will be much less of a waterfall)—unless we're content to rest on a plateau.
Exploratory testing isn't an obvious fit. Back then, I was very taken with how the exploratory coding you see in Agile shops feels like exploratory testing. At a workshop I organized, Michael Feathers also remarked on that. I still think there's a strong connection, and I still talk to teams about exploratory testing, but it remains an obscure practice. When done, it seems still to be mostly about bugs, not—as I used to say—about exploring the business domain and design space. I wish I knew why.
Sat, 28 Jan 2006
Where do we stand?
I want to end this series by closing one important gap. We know that links go somewhere, but we don't know that they go to the right place, the place where the user can continue her task.
We could test that each link destination is as expected. But if following links is all about doing tasks, good link tests follow links along a path that demonstrates how a user would do her work. They are workflow tests or use-case tests. They are, in fact, the kind of design tests that Jeff Patton and I thought would be a communication tool between user experience designers and programmers. (At this point, you should wonder about hammers and nails.)
Here's a workflow test that shows a doctor entering a new case into the system.
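The original test listing hasn't survived here, but a hypothetical Ruby sketch in the spirit described might look like the following. Everything in it is invented for illustration (the `WorkflowSession` class, the method names, the page names); the point is the sentence-like messages, with the page checks set off to the side like parenthetical remarks.

```ruby
class WorkflowSession
  attr_reader :page

  def initialize
    @page = "login page"
  end

  def logs_in_as(name)
    @page = "main page"
    self
  end

  def adds_a_new_case(description)
    @page = "case summary page"
    self
  end

  # Parenthetical check: did the action land us on the right page?
  def is_on(expected)
    raise "expected #{expected}, got #{@page}" unless @page == expected
    self
  end
end

a_doctor = WorkflowSession.new
a_doctor.logs_in_as("dr. dawn")               .is_on("main page")
a_doctor.adds_a_new_case("heifer with bloat") .is_on("case summary page")
```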
I've written that with unusual messages, formatted oddly. Why?
Unlike this test, I think my final declarative tests really are unit tests. According to my definition, unit tests are ones written in the language of the implementation rather than the language of the business. My declarative tests are about what appears on a page and when, not about cases and cows and audits. They're unit tests, so I don't mind that they look geeky.
Workflow tests, however, are quintessential business-facing tests: they're all about asserting that the app allows a doctor to perform a key business task. So I'm trying to write them such that, punctuational peculiarities aside, they're sentences someone calling the support desk might speak.
I do that not so much because I expect a user to look at them as because I want my perspective while writing them to be outward-focused. That way, I'll stumble across more design
Sending all messages to an object representing a
Similarly, I'm using layout to emphasize what's most important. That's what the user can do and what, having done that, she can now do next. The actual checks that the action has landed her on the right page are less important—parenthetical—so I place them to the side. (Note also the nod to behavior-driven design.)
The methods that move around (like
As you can see, I don't check much about the page. I leave that to the declarative page tests.
That's it. I believe I have a strategy for transforming a tarpit of UI tests into (1) a small number of workflow tests that still go through the UI and (2) a larger number of unit tests of everything else.
Thanks for reading this far (supposing anyone has).
The tests I was transforming didn't do any checking of pure business logic, but in real life they probably would. They could be rewritten in the same way, though I'd prefer to have at least some such tests go below the presentation layer.
There are no browser compatibility tests. If the compatibility testing strategy is to run all the UI tests against different browsers, the transformation I advocate might well weaken it.
There are no tests of the Back button. Should they be part of workflow tests? Specialized? I don't know enough about how a well-behaved program deals with Back to speculate just now. (Hat tip to Seaside here (PDF).)
Can you do all this?
The transformation into unit tests depends on there being one place that receives HTTP requests (WEBrick). Since WEBrick is initialized in a single place, it's easy to find all the places that need to be changed to add a test-support feature. The same was true on the outgoing side, since there was a single renderer to make XHTML. So this isn't the holy grail—a test improvement strategy that can work with any old product code. Legacy desktop applications that have GUI code scattered everywhere are still going to be a mess.
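To make the single-seam point concrete, here's a minimal sketch (not the sample app's actual code; the class and method names are invented). Because every request funnels through one dispatch point, a test-support feature—here, recording each request—needs to be added in only one place, and a test can drive the dispatcher directly with a fake path:

```ruby
class FakeApp
  def respond_to_request(path)
    "<html>page for #{path}</html>"
  end
end

class RecordingDispatcher
  attr_reader :requests_seen

  def initialize(app)
    @app = app
    @requests_seen = []
  end

  # In the real server, something like a WEBrick mount_proc would call
  # this; a test can call it directly, no HTTP required.
  def dispatch(path)
    @requests_seen << path          # the test-support feature, added once
    @app.respond_to_request(path)
  end
end

dispatcher = RecordingDispatcher.new(FakeApp.new)
dispatcher.dispatch("/cases/new")
```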
See the code for complete details.
Sun, 22 Jan 2006
The previous solution to my copy-Unicode problem turns out not to work for non-Unicode characters, at least not for the sort of screwy characters testers like to paste into apps. So I had to solve it right. I put the solution here in hopes that it'll be found in a web search someday and save someone some time.
For the Windows version and for copying non-Unicode, look here: http://www.exampler.com/testing-com/review-copies/test-strings-0.1.zip. That's an alpha version of a collection of utility methods oriented toward helping testers mess with text fields. They're inspired by James Bach and Danny Faught's perlclip. They work on both the Mac and Windows. The source will eventually live on the Scripting for Testers site.
Tue, 17 Jan 2006
See, just explaining the problem and sleeping on it makes the solution wave to attract your attention:
One way to put Unicode on the Mac OS X pasteboard is to use FUJIMOTO Hisakuni's rubyaeosa to execute AppleScript.
(The hex characters are Mac-Roman "chevrons" that vaguely look like ‹‹ and ››. AppleScript doesn't use 7-bit ASCII. The glop after "utf8" is sigma and phi in the Greek alphabet.)
I could dig further into rubyaeosa to find a Ruby message send equivalent to "set the clipboard", but I think that might be a bad idea. This is an example for Scripting for Testers, and I think the message of getting the job done with baling wire and twine and moving on is a good one.
Now on to Windows...
Mon, 16 Jan 2006
Speaking of screwy character encodings, I have a theory about the origins of theAbominationThatIsCamelCase.
In the late 70's, I was a computer operator for a PDP-10. We had spiffy VT100-compatible terminals. But there was this odd CRT off in a corner that we referred to as "the European terminal". On it, the character code that we know and love as ASCII underscore displayed as a left arrow. I remember being disconcerted by a program that used underscore for assignment, and I was told that the language (whatever it was) assumed European terminals. More to the point, I think I remember being told that theCamelCaseNamingStyle was used either because it would look silly to have names like a←variable←name or because such names would be syntax errors in the Mystery Language.
I have since then assumed that this once-necessary convention stuck in people's heads after it became unnecessary or even harmful, like that pop song you loved when you were 14 or the English units of measure. (I'll spare you any pop economics about path dependence.)
(This story is similar to Wikipedia's Alto Keyboard Origin, though it would seem to put the origin closer to ASCII-63 (which had the left arrow and no underscore), ASCII-67 (which might have perpetuated the arrow), or the early ECMA standards (ditto).)
I am blissfully ignorant of Unicode.
Nevertheless, I want to write a Ruby script that puts Unicode characters (the Greek alphabet, say) onto the Mac OS X pasteboard. It has to be pure Ruby (no writing in C). 7-bit ASCII I can do, and 8-bit Mac-Roman, both using pbcopy. However, I can't see a way to do Unicode.
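The part that already works—the 7-bit ASCII and Mac-Roman cases—can be sketched by piping text into pbcopy. This is a sketch, not the Scripting for Testers code; the command is factored out into its own method so it can be inspected without actually touching a Mac pasteboard (or requiring a Mac at all):

```ruby
# The command array is its own method so a test can inspect it
# without running pbcopy.
def build_pbcopy_command
  ["pbcopy"]
end

def copy_to_pasteboard(text)
  IO.popen(build_pbcopy_command, "w") { |pipe| pipe.write(text) }
end

# copy_to_pasteboard("plain old ASCII")   # Mac OS X only
```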
Please let me know if I'm wrong.
I don't care about the encoding the Ruby code works with. UTF-8, UTF-16, Punycode, whatever.
P.S. Interesting how much more understandable the Wikipedia pages on Unicode are than the official site is.
P.P.S. Seeming bug in TextEdit on 10.4.3: if I create a file full of Greek characters and save it as UTF-16, I can open it and see the same characters. If I save it as UTF-8, when I reopen it, it looks like it's full of Mac-Roman characters.
Tue, 10 Jan 2006
I'm working on code in which a particular object's
It's reasonable to suppose I have fewer than 400,000 hours left in my life, and I spent one of them finding that out.
Who knew HP's source material was code?
Mon, 09 Jan 2006
Here, I dispose of another reason to run tests through the GUI: bad links and other ways of getting to pages. These bugs can be found with unit tests instead. The mechanism fits in well with business-facing test-driven design.
Let's start with a bug. In build 343, an Activity Summary page is added to the app. Links to that page are added to thirteen other pages. In build 582, someone changes the URL of the Activity Summary page and dutifully changes twelve of the thirteen pages that link to it. It's a user who finds that the thirteenth link wasn't updated.
A link-checking program won't find all such bugs because it probably can't get to all the pages of the program. So, the claim is, you should have a GUI testing tool traverse every link. Here, I'll change the sample app to show a better way.
Because I was frightened by DTML as a small child, I lean away from template languages with embedded code and toward code that generates XHTML. (We can argue the merits of the two approaches another day.)
My Renderer class is nothing fancy. A bunch of core methods generate simple XHTML. From them, I've built up more complicated methods, such as the ones used here:
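The original listing isn't reproduced here, but a hypothetical reconstruction of the shape described—a few core methods that emit XHTML, with fancier methods built on top of them—might look like this (all method names invented):

```ruby
class Renderer
  # Core methods: dead-simple XHTML generation.
  def element(tag, content)
    "<#{tag}>#{content}</#{tag}>"
  end

  def link_to(action, text)
    %Q{<a href="#{action}">#{text}</a>}
  end

  # A more complicated method built up from the core ones.
  def titled_page(title, body)
    element("html",
            element("head", element("title", title)) +
            element("body", element("h1", title) + body))
  end
end

page = Renderer.new.titled_page("Activity Summary", "nothing to report")
```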
Now suppose I want to add a help link to that page, using a method
The test doesn't explicitly check the help link, but it doesn't have to: the renderer assertion will nevertheless check it for us. Here's what will happen if the link is bad:
(Note: I later added an explicit assertion that the help link exists because I consider it an essential part of the page. The implicit check only fails if the link exists but is bad; the explicit assertion fails if it doesn't exist at all.)
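One way such an implicit check could work—a sketch only, with an invented known-action list standing in for asking the real app—is for the renderer assertion to scan the generated XHTML and complain about any href that names an unknown action:

```ruby
KNOWN_ACTIONS = ["help", "case_display"]  # hypothetical; the real app would supply this

# Scan rendered XHTML; fail if any link points at an unknown action.
def assert_no_bad_links(xhtml)
  xhtml.scan(/href="([^"]+)"/).flatten.each do |action|
    unless KNOWN_ACTIONS.include?(action)
      raise "Bad link to unknown action '#{action}'"
    end
  end
end

assert_no_bad_links(%Q{<a href="help">Help</a>})    # passes silently
# assert_no_bad_links(%Q{<a href="hlep">Help</a>})  # would raise about 'hlep'
```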
The link-creation routine checks that the particular help topic exists, but it doesn't check that "help" is the right action to get to the help pages. It's easy to ask if the app responds to an action named
Here's the code that adds the button to the page:
The renderer could ask the app before generating
And, since I'm changing the method anyway, I might as well have it make sure
But that's starting to bug me. I'm asking the App more and more, not telling it. Is this Feature Envy? Do I want to worry that other methods that generate this action will have to duplicate the knowledge of which checks are appropriate?
It seems to me that the renderer should hand a potential presentation to the app and ask it to apply whatever rules are relevant, but in a way that insulates the app from any knowledge of the presentation (that it'll be in XHTML, etc.). That can be done using a closure as a callback:
The App would look like this:
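The original listings are gone, but the idea might be sketched like this (class, method, and rule names all invented). The renderer hands the App the facts about a would-be link plus a block to call with any complaint; the App applies its rules while knowing nothing about XHTML:

```ruby
class App
  KNOWN_ACTIONS = ["help"]
  HELP_TOPICS   = ["adding-cases"]

  # Apply whatever rules are relevant; report problems through the
  # callback so the App stays ignorant of the presentation.
  def approve_help_link(action, topic)
    yield "no such action: #{action}" unless KNOWN_ACTIONS.include?(action)
    yield "no such help topic: #{topic}" unless HELP_TOPICS.include?(topic)
  end
end

class HelpRenderer
  def initialize(app)
    @app = app
  end

  def help_link(topic)
    @app.approve_help_link("help", topic) do |complaint|
      raise "Cannot render help link: #{complaint}"
    end
    %Q{<a href="help?topic=#{topic}">Help</a>}
  end
end

link = HelpRenderer.new(App.new).help_link("adding-cases")
```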
(Note: the renderer could
This division of responsibility works well with test-driven design.
(As usual, I should note that I have not seen these ideas applied at the scale of a real app. If I ever have time to create a Giant Microbes fan site for my kids, I'll explore them further.)
At long last returning to the help popup, I can change the code that generates the link to this:
The App code that would rule on the template would be:
Any unit test that generated a help link would auto-check for a bad action or bad topic. It would not check whether the
One final note: we are still working our way out of the tarpit. I haven't stressed it in this installment, but both of the old-format tests continue to work. As always, the goal is to gradually reduce the need for slow and fragile tests.
See the code for complete details.
Thu, 05 Jan 2006
It really gripes me when people like me are accused of not supporting troops overseas when in fact it's the government's poor planning and execution of post-war reconstruction that we don't support. (Not to mention the stinginess when it comes to troop and veteran benefits.) So when I heard about a program to donate frequent-flier miles to benefit the troops and their families, I did. Unfortunately, of the three airlines I fly, the only one still accepting donations (Northwest) is the one I had the least miles on. Apparently everyone else heard about this long ago. If you haven't, now you have, and I urge you to donate. Get a jump on next Christmas's charitable rush.
When I finally upgraded to Mac OS X Tiger, my old Emacs broke again. I hunted around for a replacement, tried a couple, and settled on Aquamacs. It has a few glitches, but it not only works like Emacs should, it also does a surprisingly decent job of acting Maclike. Some things I like:
It's good enough that I dropped a donation on its author.