Archive for the 'testing' Category

An alternative to business-facing TDD

The value of programmer TDD is well established. It’s natural to extrapolate that practice to business-facing tests, hoping to obtain similar value. We’ve been banging away at that for years, and the results disappoint me. Perhaps it would be better to invest heavily in unprecedented amounts of built-in support for manual exploratory testing.

In 1998, I wrote a paper, “When should a test be automated?”, that sketched some economics behind automation. Crucially, I took the value of a test to be the bugs it found, rather than (as was common at the time) how many times it could be run in the time needed to step through it manually.

My conclusions looked roughly like the following:

[Figure: test tradeoffs in general]

Scripted tests, be they automated or manual, are expensive to create (first column). Manual scripts are cheaper, but they still require someone to write steps down carefully, and they likely require polishing before they can truly be followed by someone else. (Note: height of bars not based on actual data.)

In the second column, I assume that a particular set of steps has roughly the same chance of finding a bug whether executed manually or by a computer, and whether the steps were planned or chosen on the fly. (I say “roughly” because computers don’t get bored and miss bugs, but they also don’t notice bugs they weren’t instructed to find.)

Therefore, if the immediate value of a test is all that matters, exploratory manual testing is the right choice. What about long-term value?

Assume that exploratory tests are never intentionally repeated. Both their long-term cost and value are zero. Both kinds of scripted tests have quite substantial maintenance costs (especially in that era, when testing was typically done through an unmodified GUI). So, to pull ahead of exploratory tests in the long term, scripted tests must have substantial bug-finding power. Many people at that time observed that, in fact, most tests either found a bug the first time they were run or never found a bug at all. You were more likely to fix a test because of an intentional GUI change than to fix the code because the test found a bug.

So the answer to “when should a test be automated?” was “not very often”.

Programmer TDD changes the balance in two ways:

[Figure: test tradeoffs for TDD]

  1. New sources of value are added. Extremely rapid feedback reduces the cost of debugging. (Most bugs strike while what you did to create them is fresh in your mind.) Many people find the steady pace of TDD allows them to go faster, and that the incremental growth of the code-under-test makes for easier design. And, most importantly as it turns out, the need to make tests run fast and reduce maintenance cost leads to designs with good properties like low coupling and high cohesion. (That is, properties that previously were considered good in the long term—but were routinely violated for short-term gain—now had powerful short-term benefits.)

  2. Good design and better programmer tools dramatically lowered the long-term cost of tests.

So, much to my surprise, the balance tipped in favor of automation—for programmer tests. It’s not surprising that many people, including me, hoped the balance could also tip for business-facing tests. Here are some of the hoped-for benefits:

  • Tests might clarify communication and avoid some cases where the business asks for something, the team thinks they’ve delivered it, and the business says “that’s not what I wanted.”

  • They might sharpen design thinking. The discipline of putting generalizations into concrete examples often does.

  • Programmers have learned that TDD supports iterative design of interfaces and behavior. Since whole products are also made of interfaces and behavior, they might also benefit from designers who react to partially-finished products rather than having to get it right up front.

  • Because businesses have learned to mistrust teams who show no visible progress for eight months (at which point the team asks for a slip), they might like to see evidence of continuous progress in the form of passing tests.

  • People often need documentation. Documentation is often improved by examples. Executable tests are examples. Tests as executable documentation might get two benefits for less than their separate costs.

  • And, oh yeah, tests could find regression bugs.

So a number of people launched off to explore this approach, most notably with Fit. But Fit hasn’t lived up to our hopes, I think. The things that particularly bother me about it are:

  • It works well for business logic that’s naturally tabular. But tables have proven awkward for other kinds of tests.

  • In part, the awkwardness is because there are no decent HTML table editors. That inhibits experimentation: if you don’t get a table format right the first time, you’re tempted to just leave it.

    Note: I haven’t tried ZiBreve. By now, I should have. I do include Word, Excel, and their OpenOffice equivalents among the ranks of the not-decent, at least if you want executable documentation. (I’ve never tried treating .doc files as the real tests that are “compiled” into HTML before they’re executed.)

  • Fit is not integrated into programmer editors the way xUnit is. For example, you can’t jump from a column name to the Java method that defines it. Partly for this reason, programmers tend to get impatient with people who invent new table formats—can’t they just get along with the old one?

With my graphical tests, I took aim at those sources of friction. If I have a workflow test, I can express it as boxes and arrows:

[Figure: a workflow test]

I translate the graphical documents into ordinary xUnit tests so that I can use my familiar tools while coding. The graphical editor is pretty decent, so I can readily change tests when I get better ideas. (There are occasional quirks where test content has changed more than it looks like it has. That aspect of using Fit hasn’t gone away entirely.)

I’ve been using these tests, most recently on wevouchfor.org—and they don’t wow me. While I almost always use programmer TDD when coding (and often regret skipping it when I don’t), TDD with these kinds of tests is a chore. It doesn’t feel like enough of the potential value gets realized for the tests to be worth the cost.

  • Writing the executable test doesn’t help clarify or communicate design. Let me be careful here. I’m a big fan of sketching things out on whiteboards or paper:

    [Figure: a whiteboard]

    That does clarify thinking and improve communication. But the subsequent typing of the examples into the computer is work that rarely leads to any more design benefits.

  • Passing tests do continuously show progress to the business, but… Suppose you demonstrate each completed story anyway, at an end-of-iteration demo or (my preference) as soon as it’s finished. Given that, does seeing more tests pass every day really help?

  • Tests do serve as documentation (at least when someone takes the time to surround them with explanatory text, and if the form and content of the test aren’t distorted to cram a new idea into existing test formats).

  • The word I’m hearing is that these tests are finding bugs more often than I expected. I want to dig into that more: if they’re the sort of “I changed this thing over here and broke that supposedly unrelated thing over there” bugs that whole-product regression tests are traditionally supposed to find, that alone may justify the expense of test automation—unless I can find a way to blame it on inadequate unit tests or a need to rejigger the app.

  • (This is the one that made me say “Eureka!”) Tests alone fail at iterative product design in an interesting way. Whenever I’ve made significant progress implementing the next chunk of workflow or other GUI-visible change, I just naturally check what I’ve done through the GUI. Why? This checking makes new bugs (ones the automated tests don’t check for) leap out at me. It also sometimes makes me slap my forehead and say, “What I intended here was stupid!”

But if I’m going to be looking at the page for both bugs and to change my intentions, I’m really edging into exploratory testing. Hmm… What if an app did whatever it could to aid exploratory testing? I don’t mean traditional testability features like, say, a scripting interface; I mean a concerted effort to let exploratory testers peek and poke at anything they want within the app. (That may not be different than my old motto “No bug should be hard to find the second time,” but it feels different.)

So, although features of Rails like not having to restart the server after most code changes are nice, I want more. Here’s an example.

The following page contains a bug:

[Figure: an ordinary web page]

Although you can’t see it, the bottom two links are wrong. They are links to /certifications/4 instead of /promised_certifications/4.

  1. Unit tests couldn’t catch that bug. (The two methods that create those types of links are tested and correct; I just used the wrong one.)

  2. One test of the action that created the page could have caught the bug, but did not. (To avoid maintenance problems, that test checked the minimum needed to convince me that the correct “certifications” had been displayed. I assumed that if they were displayed at all, the unit tests meant they were displayed correctly. That was actually almost right—every character outside the link’s href value was correct.) A sketch of that too-minimal test follows this list.

  3. I missed the bug when I checked the page. (I suspect that I did click one of the links, but didn’t notice it went to the wrong place. If so, I bet I missed the wrongness because I didn’t have enough variety in the test data I set up—ironic, because I’ve been harping on the importance of “irrelevant” variety since 1994.)

  4. A user had no trouble finding the bug when he tried to edit one of his promised certifications and found himself with a form for someone else’s already-accepted certification. (Had he submitted the form, it would have been rejected, but still.)
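Here, roughly, is the shape of that too-minimal test from point 2. The names and the :show action are invented, and the details differ from the real test; the point is that nothing in it pins down an href:

  def test_promised_certifications_are_displayed
    get :show, :id => users(:quentin).login
    assert_response :success
    # Enough to convince me the right certifications appear at all...
    assert_select "li.certification", :minimum => 1
    # ...but nothing like the following, which is what would have caught
    # a link to /certifications/4 instead of /promised_certifications/4:
    # assert_select "a[href=?]", "/promised_certifications/4"
  end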

That’s my bug: a small error in a big pile of HTML the app fired and forgot.

Suppose, though, that the app created and retained an object representing the page. Suppose further that an exploration support app let you switch to another view of that object/page, one that highlights link structure and downplays text:

[Figure: the same page, highlighting link hrefs]

To the eyes of someone who just added promised certifications to that page, the wrong link targets ought to jump out.

There’s more that I’d like, though. The program knows more about those links than it included in the HTTP Response body. Specifically, it knows they link to a certain kind of object: a PromisedCertification. I should be able to get a view of that object (without committing to following the link). I should be able to get it in both HTML form and in some raw format. (And if the link-to-be-displayed were an object in its own right, I would have had a place to put my method, and I wouldn’t have used the wrong one. Testability changes often feed into error prevention.)
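As a very rough sketch of the sort of exploration support I mean (every name below is invented, and a real version would hook into the framework and the retained page object rather than scraping strings), the page object might offer a link-structure view along these lines:

  # Hypothetical exploration helper: given the HTML the app just rendered,
  # pull out the link structure and present hrefs prominently, text second.
  class LinkStructureView
    def initialize(html)
      @html = html
    end

    # [[href, link text], ...] in page order.
    def links
      @html.scan(%r{<a\s[^>]*href="([^"]*)"[^>]*>(.*?)</a>}m)
    end

    def to_report
      links.collect { | href, text | "#{href}    <==  #{text.strip}" }.join("\n")
    end
  end

  # rendered_html stands in for whatever page the app just produced. Each
  # report line reads "href  <==  link text", so a link that says
  # /certifications/... where /promised_certifications/... was intended is
  # hard to miss.
  puts LinkStructureView.new(rendered_html).to_report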

And so on… It’s easy enough for me to come up with a list of ways I’d like the app to speak of its internal workings. So what I’m thinking of doing is grabbing some web framework, doing what’s required to make it explorable, using it to build an app, and also building an exploration assistant in RubyCocoa (allowing me to kill another bird with this stone).

To be explicit, here’s my hypothesis:

An application built with programmer TDD, whiteboard-style and example-heavy business-facing design, exploratory testing of its visible workings, and some small set of automated whole-system sanity tests will be cheaper to develop and no worse in quality than one that differs in having minimal exploratory testing, done through the GUI, plus a full set of business-facing TDD tests derived from the example-heavy design.

We shall see, I hope.

Google talk references

One thing I meant to say and forgot: Just as the evolution of amphibians didn’t mean that all the fish disappeared, the creation of a new kind of testing to fit a new niche doesn’t mean existing kinds are now obsolete.

Context-driven testing:

Testing Computer Software, Kaner, Falk, and Nguyen
Lessons Learned in Software Testing, Kaner, Bach, and Pettichord
http://www.context-driven-testing.com
“When Should a Test Be Automated?”, Marick

Exploratory testing:

James Bach
Michael Bolton
Elisabeth Hendrickson
Jonathan Kohl

Left out:

The undescribed fourth age

Embedded vs. independent testers

Bruce Daley posts on how most humans are biased to think they’re less error-prone than they are. As far as I know, that’s a claim solidly based in empirical research. (See also Bruce Schneier’s The Psychology of Security.) From this, he concludes:

Given the nature of their work, software developers and software programmers suffer more from the illusion of knowledge and the illusion of control than most other professions, making them particularly subject to over-looking mistakes in their own code. Which is why software needs to be tested independently.

However. Consider the graph below.

Here, the programmer and independent tester start testing at the same time. (Bad programmer! Bad!) The programmer starts out with more knowledge of the app than the tester (the line marked P/+), but she also has a large amount of cognitive bias (P/-) and lacks testing skill. That makes her miss bugs her knowledge would otherwise allow her to find (the area under the red line). Moreover, her biases seem to be pretty impervious to evidence.

The tester starts out with less knowledge, but has no (relevant) cognitive biases at all. Also, his testing skill lets him ramp up his bug finding pretty fast—but it still takes him a while to overcome her advantage.

Which do you want doing the testing? If you’re shipping at time A, it looks like the programmer has the edge. (Compare the shaded areas under the curve.)

We could expect that advantage to erode over time. If the ship date is farther out, the independent tester would have an advantage, as this graph shows:

Even when all that matters is bug count, the decision is not straightforward, especially since it’s based on information you can’t know until after you’ve decided. (How long will it take the tester to get up to speed? How many and what kind of bugs will the programmer miss?)

On most projects, there are lots of other factors to consider.

So I encourage people not to make the assertion the post’s author does.

Project testing growth path

In response to a potential client, I wrote something very like the following. The interesting thing is that I’m placing more emphasis on manual exploratory testing. It’s not so much that I suddenly realize its importance as that automated business-facing tests continue to be hard to implement and adopt. More on that anon.

A short sketch of a reasonable growth path would go like this:

  1. Get the programmers sold on test-driven design. How difficult that is depends mainly on how much legacy code you have (where legacy code is, as Michael Feathers says, code without unit tests). Legacy code is hard to test, so programmers don’t see the benefits of testing as quickly, so it requires that much more discipline to get over what’s always a higher hump than with greenfield code. (Michael Feathers’ Working Effectively with Legacy Code is the gold standard book, though there’s an important strategy—“strangler applications”—that’s not covered in depth. Also, I’m the track chair for a new Legacy Code track at Agile2008; I just asked Feathers to give the keynote, and he says he has “a number of surprising proposals about how to make things better”.)

    I’ve come to feel that the most important thing to get across to programmers is what it’s like to work with code built on a solid base of tests. If they understand that early on, they’ll have a clear idea of what to shoot for, which helps with the pain of legacy code. I wrote a workbook to that end.

  2. At the same time, move testers away from scripted manual tests (if that’s what they’re doing) and toward a more exploratory style of manual testing. The people who are strongest on exploratory testing in Agile are Jonathan Kohl, Elisabeth Hendrickson, and Michael Bolton.

  3. As programmers do more unit testing, they will become accustomed to changing their design and adding code in support of their own testing. It becomes more natural for them to do the same for the testers, allowing them to do “automation-assisted exploratory testing”. (Kohl writes about this.) I like to see some of the testers learn a scripting language to help with that. Ruby is my favorite, for a variety of reasons. I wrote a book to help testers learn it.

  4. Over this period, the testers and programmers should shed most animosity or wariness they have toward each other. They’re working together and doing things to help each other. It helps a lot if they sit together.

  5. Once the programmers are sold on test-driven design, they will start wishing that the product owners would supplement what they say about what they want with clear, concrete, executable examples of what they want. That is: tests, written in the language of the business. That isn’t as easy to do as we thought it would be five years ago, but it can be done more or less well. Often, the testers will find a new role as helpers to the product owners. For example, they should get involved early enough to ask questions that lead to tests that prevent bugs (which is better than discovering the bugs after you’ve paid some programmers to implement them).

  6. Throughout this, some kinds of testing (like performance testing) don’t change all that much. For performance testing, I trust Scott Barber.

As a side note: I’m quite fond of the new The Art of Agile Development by Shore & Warden: enough to publicly declare that I’ll bring a copy to every team I work with. Lots of good from-the-trenches experience summarized there.

An occasional alternative to mocks?

I’m test-driving some Rails helpers. A helper is a method that runs in a context full of methods magically provided by Rails. Some of those methods are of the type that’s a classic motivation for mocks or stubs: if you don’t want them to blow up, you have to do some annoying behind-the-scenes setup. (And because Rails does so much magic for you, it can be hard for the novice to have a clue what that setup is for helpers.)

Let’s say I want a helper method named reference_to. Here’s a partial “specification”: it’s to generate a link to one of a Certification's associated users. The text of the link will be the full name of the user and the href will be the path to that user’s page. I found myself writing mocks along these lines:

mock.should_receive(:user_path).once.
     with(:id=>@originator.login).
     and_return("**the right path**")
mock.should_receive(:link_to).once.
     with(@originator.full_name, "**the right path**").
     and_return("**correct-text**")

But then it occurred to me: The structure I’m building is isomorphic to the call trace, so why not replace the real methods with recorders? Like this:

  def user_path(keys)
    "user_path to #{keys.canonicalize}"
  end

  def link_to(*args)
    "link to #{args.canonicalize}"
  end

  def test_a_reference_is_normally_a_link
    assert_equal(link_to(@originator.full_name, user_path(:id => @originator.login)),
                 reference_to(@cert, :originator))
  end
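(For orientation, the implementation this test drives toward would be something like the following sketch; the real helper may differ in detail.)

  def reference_to(certification, role)
    user = certification.send(role)   # e.g., @cert.originator
    link_to(user.full_name, user_path(:id => user.login))
  end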

This test determines that:

  • the methods called are the right ones to implement the specified behavior. There’s a clear correspondence between the text of the spec (”generate a link to”) and calls I know I made (link_to).

  • the methods were called in the right order (or in an order-irrelevant way).

  • they were called the right number of times.

  • the right arguments were given.

So, even though my fake methods are really stubs, they tell you the same things mocks would in this case. And I think the test is much easier to grok than code with mocks (especially if I aliased assert_equal to assert_behaves_like).

What I’m wondering is how often building a structure to capture the behavior of the thing-under-test will be roughly as confidence-building and design-guiding as mocks. The idea seems pretty obvious (even though it took me forever to think of it), so it’s probably either a bad idea or already widely known. Which?

Alternately, I’m still missing the point of mocks.

P.S. For tests to work, you have to deal with the age-old problems of transient values (like dates or object ids) and indeterminate values (like the order of elements in a printed hash). I’m fortunate in that I’m building HTML snippets out of simple objects, so this seems to suffice:

class Object
  def canonicalize; to_s; end
end

class Array
  def canonicalize
    collect { | e | e.canonicalize }
  end
end

class Hash
  def canonicalize
    to_a.sort_by { | a | a.first.object_id }.canonicalize
  end
end
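Why is sorting by object_id good enough? Both the expected value (built by calling the fake methods directly in the assertion) and the actual value (built inside reference_to) canonicalize their arguments in the same run, so any order that’s stable within a run will do. For instance, with a made-up hash:

  {:id => "sam", :role => :admin}.canonicalize
  # => something like [["id", "sam"], ["role", "admin"]]
  #    (pair order is arbitrary, but the same on both sides of assert_equal)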

A tagging meme reveals I short-change design

There’s one of those tagging memes going around. This one is: “grab the nearest book, open to page 123, go down to the 5th sentence, and type up the 3 following sentences.”

My first two books had pictures on p. 123.

The next three (Impro: Improvisation and the Theatre, AppleScript: the Definitive Guide, and Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life) didn’t have anything that was amusing, enlightening, or even comprehensible out of context. So I kept going, which is cheating I suppose. The last, How Designers Think, had this:

The designer’s job is never really done and it is probably always possible to do better. In this sense, designing is quite unlike puzzling. The solver of puzzles such as crosswords or mathematical problems can often recognize a correct answer and knows when the task is complete, but not so the designer.

That’s a hit. It made me realize a flaw in my thinking. You see, it reminded me of one of my early, semi-controversial papers, “Working Effectively With Developers” (referred to by one testing consultant as “the ‘how to suck up to programmers’ paper”). In its second section, “Explaining Your Job”, I explicitly liken programmers to problem solvers:

A legendary programmer would be one who was presented a large and messy problem, where simply understanding the problem required the mastery of a great deal of detail, boiled the problem down to its essential core, eliminated ambiguity, devised some simple operations that would allow the complexity to be isolated and tamed, demonstrated that all the detail could be handled by appropriate combinations of those operations, and produced the working system in a week.

Then I point out that this provides a way for testers to demonstrate value. I show a sample problem, then write:

Now, I’d expect any programmer to quickly solve this puzzle - they’re problem solvers, after all. But the key point is that someone had to create the puzzle before someone else could solve it. And problem creation is a different skill than problem solving.

Therefore, the tester’s role can be likened to the maker of a crossword or a mathematical problem: someone who presents a good, fully fleshed-out problem for the programmer to master and solve:

So what a tester does is help the programmer […] by presenting specific details (in the form of test cases) that otherwise would not come to her attention. Unfortunately, you often present this detail too late (after the code is written), so it reveals problems in the abstractions or their use. But that’s an unfortunate side-effect of putting testers on projects too late, and of the unfortunate notion that testing is all about running tests, rather than about designing them. If the programmer had had the detail earlier, the problems wouldn’t have happened.

Despite this weak 1998 gesture in the rough direction of TDD, I still have a rather waterfall conception of things: tester presents a problem, programmer solves it, we all go home.

But what that’s missing is my 2007 intellectual conception of a project as aiming to be less wrong than yesterday, to get progressively closer to a satisfactory answer that is discovered or refined along the way. In short—going back to the original quote—a conception of the project as a matter of design that’s at every level of detail and involves everyone. That whole-project design is something much trickier than mere puzzle-solving.

I used the word “intellectual” in the previous paragraph because I realize that I’m still rather emotionally attached to the idea of presenting a problem, solving it, and moving on. For example, I think of a test case as a matter of pushing us in a particular direction, only indirectly as a way of uncovering more questions. When I think about how testing+programming works, or about how product director + team conversations work, the learning is something of a side effect. I’m strong on doing the thing, weak on the mechanics of learning (a separate thing from the desire to learn).

That’s not entirely bad—I’m glad of my strong aversion to spending much time talking and re-talking about what we’ll build if we ever get around to building anything, of my preference for doing something and then taking stock once we have more concrete experience—but to the extent that it’s a habit rather than a conscious preference, it’s limiting. I’ll have to watch out for it.

When to write test helper code

On the agile-testing list, I answered this question:

So for the first time in many, many years I’m not in a test management position, and I’m writing tests, automating them, etc. We’re using a tool called soapUI to automate our web services testing–it’s a handy tool, supports Groovy scripting which allows me to go directly to the DB to validate the results of a given method, etc. One feature of soapUI is centralized test scripts; I can create helper scripts to do a bunch of stuff–basically, I write the Groovy code I need to validate something and then I often find I’m moving it into a helper function, refactoring, etc. My question is, how do you know the right balance between just mashing up the automation (i.e., writing a script as an embedded test script) vs. creating the helper function and calling it?

Bret Pettichord suggested I blog my answer, so here it is:

I use these ideas and recommend them as a not-bad place to start:

  1. At the end of every half-day or full-day of on-task work (depending on my mood), I’ll spend half an hour cleaning up something ugly I’ve encountered.

  2. I’ll periodically ask myself “Is it getting easier to write tests?” If it’s not, I know I should slow down and spend more effort writing helper code. I know from experience what it feels like when I’ve got good test support infrastructure—how easy writing tests can be—so I know what to shoot for.

  3. I have this ideal that there should be no words in a test that are not directly about the purpose of the test. So: unless what the test tests is the process of logging in, there should be no code like:

    create_user "sam", "password"
    login "sam", "password"
    ...

    Instead, the test should say something shorter:

    using_logged_in_user "sam"
    ...

    That tends to push you toward more and smarter helper functions (a sketch follows this list).

    Unsurprisingly, I often fall short of that ideal. But if I revisit a test and have any trouble at all figuring out what it’s actually testing, I use that as a prod to make the extra effort.
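One sketch of where that ideal leads, assuming create_user and login helpers like the ones in the first snippet already exist:

  def using_logged_in_user(name)
    # The password is incidental to almost every test, so bury it here.
    create_user name, "password"
    login name, "password"
  end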

What was something of a breakthrough for me was realizing that I don’t have to get it right at this precise moment. Especially at the beginning, when you don’t have much test infrastructure, stopping every twelve seconds to write that helper function you ought to have completely throws you out of the flow of what you’re trying to accomplish. I’ve gotten used to writing the wrong code, then fixing it up later: maybe at the end of the task, maybe not until I stumble across the ugliness again.

Code emphasis for tests that teach

In product code, not repeating yourself is almost always a good idea. In tests, it’s not so clear-cut. Repeating yourself has the same maintenance dangers as it does for code, but not repeating yourself has two additional downsides:

  • A common knock against well-factored object-oriented code is that no object does anything; they all just forward work on to other objects. That structure turns out to be useful once you’re immersed in the system, but it does make systems harder to learn.

    One purpose of tests is to explain the code to the novice. Remove duplication too aggressively, and the tests do a poor job of that.

  • Another purpose of tests is to make yourself think. One way to do that is to force yourself to enumerate possibilities and ask “What should happen in this case?” That’s one of the reasons that I, when acting as a tester, will turn a state diagram into a state table. A state diagram doesn’t make it easy to see whether you’ve considered the effect of each possible event in each possible state; a state table does (a made-up fragment follows this list). (It’s not as simple as that, though: it’s hard to stay focused as you work through a lot of identical cases looking for the one that’s really different. It’s like the old joke that ends “1 is a prime, 3 is a prime, 5 is a prime, 7 is a prime, 9 is a prime…”)

    If you factor three distinct assertions into a single master assertion, it’s easy to overlook that the second shouldn’t apply in some particular case. When you factor three distinct setup steps into one method, you can more easily fail to ask what should happen when the second setup step is left out.
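To make the state-table point concrete, an entirely made-up fragment might look like this; the question marks are exactly the cells a diagram makes it easy never to think about:

    state \ event   log in           log out          session timeout
    Logged out      -> Logged in     ?                ?
    Logged in       ?                -> Logged out    -> Logged out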

So as I balance the different forces, I find myself writing test code like this:

  # Guard against manufactured URLs.
  def test_cannot_update_a_user_unless_logged_in
    new_profile = NewProfile.for('quentin').is_entirely_different_than(users(:quentin))

    put :update,
        {:id => users(:quentin).login, :user => new_profile.contents}
        # Nothing in session
    assert_redirected_to(home_path)
    assert_hackery_notice_delivered
    new_profile.assert_not_installed
  end

  def test_cannot_update_a_user_other_than_self
    new_profile = NewProfile.for('quentin').is_entirely_different_than(users(:quentin))

    put :update,
        {:id => users(:quentin).login, :user => new_profile.contents},
        {:user => users(:aaron).id}
    assert_redirected_to(home_path)
    assert_hackery_notice_delivered
    new_profile.assert_not_installed
  end

There’s duplication there. In an earlier version, I’d in fact reflexively factored it out, but then decided to put it back. I think the tests are better for that, and I’m willing to take the maintenance hit.

Nevertheless, there’s a problem. It’s not obvious enough what’s different about the two tests. What to do about that?

Consider explaining the evolution of a program over time in a book. Authors don’t usually show a minimal difference between before and after versions. Instead, they show both versions with a fair amount of context, somehow highlighting the differences. (When I write, I tend to bold changed words.) I wish I could highlight what’s special about each test in my IDE, so that it would look like this:

  # Guard against manufactured URLs.
  def test_cannot_update_a_user_unless_logged_in
    new_profile = NewProfile.for('quentin').is_entirely_different_than(users(:quentin))

    put :update,
        {:id => users(:quentin).login, :user => new_profile.contents}
        # Nothing in session               <== what's special about this test
    assert_redirected_to(home_path)
    assert_hackery_notice_delivered
    new_profile.assert_not_installed
  end

  def test_cannot_update_a_user_other_than_self
    new_profile = NewProfile.for('quentin').is_entirely_different_than(users(:quentin))

    put :update,
        {:id => users(:quentin).login, :user => new_profile.contents},
        {:user => users(:aaron).id}        # <== what's special about this test
    assert_redirected_to(home_path)
    assert_hackery_notice_delivered
    new_profile.assert_not_installed
  end

Something for IDE writers to implement.

Links

Jason Gorman
“And that, folks, is how enterprise-scale reuse works. It is, I tell you. It’s true!”
Ben Simo

“We can’t stop the conversation at ‘I just did that and I’m a user.’”

The “no user would do that” retort is the bane of testers. Ben talks well about moving the conversation past that. But a step further: any project I’d want to work on is a learning project, one that wants to be less wrong than yesterday, one that likes finding out about mistakes. Get past this particular conversation: fine. Maybe testers could even train programmers to swallow that particular reflexive retort. But the defensiveness about having partial understanding will still leak out other places.

Now, I once sat down with Elisabeth Hendrickson while she tested an app of mine. I’d built it with TDD to the max: business-facing tests all the way down to unit tests. It took her about ten minutes to find a high-priority bug. I immediately slipped right into the defensive programmer stance. It took me a few minutes to snap out of it. But if we worked together for longer, I’d like to think I’d get past that.

I aspire to be like Mark “capabilities” Miller, a programmer I once worked with. When someone found a bug in his code, he’d write a long email about it, praising the person, attributing all sorts of cleverness to her, and explaining how he’d come to make that mistake.

Bret Pettichord

“People often recommend that you treat a bug as a story. […] I think this approach is incorrect. We’ve found a better way to handle [bugs].”

I want to disagree with Bret, but I haven’t come up with a counterexample that convinces even me.

Milton Mayer

“What happened here was the gradual habituation of the people, little by little, to being governed by surprise; to receiving decisions deliberated in secret; to believing that the situation was so complicated that the government had to act on information which the people could not understand, or so dangerous that, even if the people could not understand it, it could not be released because of national security….

“And one day, too late, your principles, if you were ever sensible of them, all rush in upon you….The world you live in — your nation, your people — is not the world you were born in at all. The forms are all there, all untouched, all reassuring, the houses, the shops, the jobs, the mealtimes, the visits, the concerts, the cinema, the holidays. But the spirit, which you never noticed because you made the lifelong mistake of identifying it with the forms, is changed.”

Test design links (biased toward exploratory testing)

Here are some links I will point to when people ask me about test design. Add more in the comments and I’ll promote them to the main posting.

Mnemonics

  • Michael Bolton on:

    • SF DePOT (Structure, Function, Data, Platform, Operations, and Time) here and here

    • CRUSSPIC STMPL (capability, reliability, usability, security, scalability, performance, installability, compatibility, supportability, testability, maintainability, portability, and localizability)

    • HICCUPPS (History, Image, Comparable Products, Claims, Users’ expectations, the Product itself, Purpose, Statutes)

    See also various of Bolton’s articles.

  • Adam Goucher on SLIME (Security, Languages, requIrements, Measurement, Existing)

  • Jonathan Kohl on MUTII (Market, Users, Tasks, Information, Implementation).

  • Michael Kelly on test reporting with FCC CUTS VIDS (Feature tour, Complexity tour, Claims tour, Configuration tour, User tour, Testability tour, Scenario tour, Variability tour, Interoperability tour, Data tour, Structure tour)

  • Ben Simo on testing failure handling with FAILURE (Functional, Appropriate, Impact, Log, UI, Recovery, Emotions)

  • Scott Barber on designing model workloads for performance testing with FIBLOTS (Frequent, Intensive, Business Critical, Legal, Obvious, Technically Risky, Stakeholder Mandated). (He has others that are outside the scope of this posting.)

Other reminders

Mind maps

Online course materials

Books

Things that ought to be in books

Videos

Thanks to Michael Bolton, Adam Goucher, Matthew Heusser, Jonathan Kohl, and Chris McMahon for links.