Position statement for functional testing tools workshop

Automated functional testing lives between two sensible testing activities. On the one side, there’s conventional TDD (unit testing). On the other side, there’s manual exploratory testing. It is probably more important to get good at those than it is to get good at automated functional testing. Once you’ve gotten good at them, what does it mean to get good at automated functional testing?

There is some value in thinking through larger-scale issues (such as workflows or system states) before diving into unit testing. There is some value (but not, I think, as much as most people think) in being able to rerun larger-scale functional tests easily. In sum: compared to doing exploratory testing and TDD right, the testing we’re talking about has modest value. Right now, the cost is more than modest, to the point where I question whether a lot of projects are really getting adequate ROI. I see projects pouring resources into functional testing not because they really value it but more because they know they should value it.

This is strikingly similar to, well, the way that automated testing worked in the pre-Agile era: most often a triumph of hope over experience.

My bet is that the point of maximum leverage is in reducing the cost of larger-scale testing (not in improving its value). Right now, all those workflow statements and checks that are so easy to write down are annoyingly hard to implement. Even I, staring at a workflow test, get depressed at how much work it will be to get it just to the point where it fails for the first time, compared to all the other things I could be doing with my time.

Why does test implementation cost so much?

We are taught that Agile development is about working the code base so that arbitrary new requirements are easy to implement. We have learned one cannot accomplish that by “layering” new features onto an existing core. Instead, the core has to be continually massaged so that, at any given moment, it appears as if it were carefully designed to satisfy the features it supports. Over time, that continual massaging results in a core that invites new features because it’s positively poised to change.

What do we do when we write test support code for automated large-scale tests? We layer it on top of the system (either on top of the GUI or on top of some layer below the GUI). We do not work the new code into the existing core—so, in a way that ought not to surprise us, it never gets easier to add tests.

So the problem is to work the test code into the core. The way I propose to do that is to take exploratory testing more seriously: treat it as a legitimate source of user stories we handle just like other user stories. For example, if an exploratory tester wants an “undo” feature for a webapp, implementing it will have real architectural consequences (such as moving from an architecture where HTTP requests call action methods that “fire and forget” HTML to one where requests create Command objects).
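To make the architectural consequence concrete, here is a minimal sketch of that shift, with all names (Command, AddItemCommand, Session) invented for illustration: instead of a request handler that "fires and forgets" HTML, each request builds a Command object that records how to reverse itself.

```python
class Command:
    """Base class: every user action knows how to do and undo itself."""
    def execute(self):
        raise NotImplementedError
    def undo(self):
        raise NotImplementedError

class AddItemCommand(Command):
    def __init__(self, cart, item):
        self.cart = cart
        self.item = item
    def execute(self):
        self.cart.append(self.item)
    def undo(self):
        self.cart.remove(self.item)

class Session:
    """Keeps a history of executed commands, which is what makes
    the exploratory tester's 'undo' request possible at all."""
    def __init__(self):
        self.history = []
    def handle(self, command):
        command.execute()
        self.history.append(command)
    def undo_last(self):
        if self.history:
            self.history.pop().undo()
```

The point is that "undo" is not a feature you can layer on top of fire-and-forget handlers; the request-handling core itself has to change shape to support it.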

Why drive the code with exploratory testing stories rather than functional testing stories? I’m not sure. It feels right to me for several nebulous reasons I won’t try to explain here.

8 Responses to “Position statement for functional testing tools workshop”

  1. David Peterson Says:

    I think the real problem is a lack of separation of the “what” from the “how”.

    Workflow tests are essentially scripts containing a series of steps to take (do this, do that, check this, check that). They describe HOW to test something. When you talk about “how” you’re locking yourself into a specific implementation (e.g. a particular arrangement of the GUI) and that makes your tests brittle and awkward to change.

    It is possible to write tests in a way that keeps the “what” and “how” separate using frameworks like Concordion or Fit.

    If the “how” is written by the developers in code, along with the rest of the code, then it can be refactored and kept free from duplication. This means that you can drastically alter the implementation and enjoy the kind of external-behaviour-oriented automated safety-net that neither unit tests nor exploratory testing can give you.

    Exploratory testing is important as an ongoing activity but to drive code by exploratory testing seems a bizarre idea. I think you need to try harder to put those “nebulous reasons” into words because currently I’m not getting it.
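[Editor's aside: Peterson's "what"/"how" split can be sketched in a few lines, with plain Python standing in for a Concordion or Fit fixture; every name here is made up for illustration.]

```python
# The WHAT: a declarative spec with no UI or implementation detail.
# Each row is (action, argument, expected cart size).
SPEC = [
    ("add item", "book", 1),
    ("add item", "pen", 2),
]

# The HOW: fixture code owned by developers, refactored along with
# the rest of the system. It could drive a GUI, an HTTP layer, or
# plain objects; the spec above never needs to know which.
class CartFixture:
    def __init__(self):
        self.items = []
    def do(self, action, arg):
        if action == "add item":
            self.items.append(arg)
    def size(self):
        return len(self.items)

def run_spec(spec):
    fixture = CartFixture()
    results = []
    for action, arg, expected in spec:
        fixture.do(action, arg)
        results.append(fixture.size() == expected)
    return results
```

Because only the fixture knows how steps are carried out, a drastic change of implementation leaves the spec untouched.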

  2. Michael Bolton Says:

    “Exploratory testing is important as an ongoing activity but to drive code by exploratory testing seems a bizarre idea.”

    It doesn’t to me, but I suspect that’s because I interpret two things differently.

    1) E.T. would be A driver of code, not THE driver, because…

    2) E.T. is simultaneous test design, test execution, and learning. If one thinks of E.T. in terms of test execution, then the idea of E.T. driving code might indeed seem bizarre. But if one were to think more of the design and learning legs of E.T., then see (1).

    —Michael B.

  3. Bob Corrick Says:

    Interesting.

    If I imagine some system, which maintains state (the WHAT?) and is subject to various presentations and interactions in a workflow user interface (the HOW?), are you proposing that the architecture be evolved under exploratory testing? Would that be an analogy to TDD?

    Perhaps an example of the pain you have experienced in workflow tests would help me to understand better.

  4. Jim Knowlton Says:

    Brian,

    Don’t you think there is a fourth category in between automated testing through the UI and unit tests? For example, my current job description is “White Box QA Engineer”. In this role, I don’t write tests against individual methods, but I don’t write UI-facing tests either. I test at the component level, writing a script (currently in Groovy) to send a POST command to a web application, for example, then checking the database through a JDBC query to verify that the database updated correctly.
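[Editor's aside: the component-level test Jim describes can be sketched like this, in Python rather than Groovy, with an in-memory SQLite database and a made-up handler standing in for the real web application and JDBC query.]

```python
import sqlite3

def handle_post_order(db, payload):
    """Stand-in for the web application's POST endpoint."""
    db.execute("INSERT INTO orders (sku, qty) VALUES (?, ?)",
               (payload["sku"], payload["qty"]))
    db.commit()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (sku TEXT, qty INTEGER)")

# Drive the system one layer below the UI (the "POST")...
handle_post_order(db, {"sku": "ABC-1", "qty": 2})

# ...then verify the database directly (the JDBC-style check).
row = db.execute("SELECT sku, qty FROM orders").fetchone()
assert row == ("ABC-1", 2)
```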

  5. Lisa Crispin Says:

    Brian, I have to disagree. We could not be productive without the vast set of automated functional tests that we have. We wouldn’t get enough exploratory testing done without automation, because we’d have to make up manually for not having automated customer-facing tests. I hate to see people decide that things like FitNesse or Watir tests aren’t really needed. I’m afraid the testers will get stuck doing manual tests again, because the programmers will decide that exploratory testing is something for testers to do, and that they don’t need to help by providing a way to do it easily.

  6. Brian Marick Says:

    Jim: Most likely, I’d consider your tests “automated functional”. (I prefer “business-facing” to “functional”, but I was keeping to the title of the workshop.) That is, if the point of the test is that, after a user posts an order, the accounting department knows X, the shipping department knows Y, etc., that’s — to me — a typical functional test. The fact that it’s implemented by sending POSTs with Groovy vs. with a browser, and by peeking into the database to see if the right thing happened, is not particularly important.

    More to the point of this note: your kind of tests are the ones I do. Almost none of my tests will drive the browser, for example. So your kind of tests are the kind I want to see get cheaper.

  7. Brian Marick Says:

    Bob: there’s an example of the kind of pain that motivated this proposal here:
    Google keynote

    It’s toward the end of the post.

  8. Exploration Through Example » Blog Archive » Send me bugs that are caught in end-to-end testing Says:

    […] For some time now, I’ve been skeptical of the ROI of end-to-end automated tests and of the value of automating the kind of business-facing examples that drive development. […]
