Exploration Through Example

Example-driven development, Agile testing, context-driven testing, Agile programming, Ruby, and other things of interest to Brian Marick

Fri, 30 Dec 2005

Working your way out of the automated GUI testing tarpit (part 5)

part 1, part 2, part 3, part 4

In the last installment, I made an automated GUI test faster, but it still takes three seconds to run. In this installment, I'll bring it down to unit-test speeds. In fact, I'll argue that it really is a unit test. The next-to-last step out of the GUI testing tarpit is to convert existing GUI tests into unit tests of rendering and of the business logic behind what's rendered. (The final step is to create workflow tests that really do have something to do with the GUI.)

The existing tests call enter and press methods on a Browser object (after going through differing types of indirection). That Browser object turns presses into HTTP requests. They're sent to localhost:8080 and received by a Server that's a separate process. The server picks apart the HTTP Request and sends commands like login and new case to an App. The App manipulates a Model, then returns the name of the next page to display. The server renders that page and sends it back to the browser.
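That round trip can be sketched in a few lines of Ruby. All names below are stand-ins of mine, not the application's real code; the point is only the shape of the dispatch-then-render sequence.

```ruby
# A stand-in sketch of the round trip described above: the server
# dispatches a command to the App, the App updates the Model and names
# the next page, and the renderer produces that page's markup.

class Model
  attr_reader :cases
  def initialize; @cases = []; end
end

class App
  attr_reader :model
  def initialize; @model = Model.new; end

  # Each command manipulates the Model, then returns the next page's name.
  def new_case(args)
    model.cases << args
    :case_display_page
  end
end

class Renderer
  # One method per page, found by name at dispatch time.
  def case_display_page_for(app)
    "<html><title>Case #{app.model.cases.last['clinic_id']}</title></html>"
  end
end

class Server
  def initialize(app, renderer)
    @app, @renderer = app, renderer
  end

  # The sequence the post describes: dispatch the command, then render
  # whatever page the App says comes next.
  def handle(command, args)
    page_name = @app.send(command, args)
    @renderer.send("#{page_name}_for", @app)
  end
end
```

With these stubs, `Server.new(App.new, Renderer.new).handle(:new_case, 'clinic_id' => 19600219)` returns the rendered case-display markup.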

We can speed up the declarative test by cutting out the network. NullBrowser has the same interface as Browser, but it calls the App directly. The test now runs in around 0.5 seconds. Almost all of that time is spent in XML parsing and XPATH searching. I wish the test were faster, but not enough just now to find a different XML parser.
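A minimal sketch of what such a NullBrowser might look like (the names and details here are my assumptions, not the post's code): it keeps the enter/press interface but calls the App and renderer in-process, skipping HTTP entirely.

```ruby
# Hypothetical NullBrowser: same interface as the real Browser, no network.
class NullBrowser
  def initialize(app, renderer)
    @app, @renderer = app, renderer
    @form_data = {}
  end

  # Accumulate form fields, just as the real Browser would...
  def enter(field, value)
    @form_data[field] = value
  end

  # ...then "press" a button. Instead of issuing an HTTP request,
  # dispatch the command straight to the App and render the page it names.
  def press(command)
    page_name = @app.send(command, @form_data)
    @form_data = {}
    @renderer.send("#{page_name}_for", @app)
  end
end
```

Because only the transport changes, a test written against the Browser's interface runs unmodified against NullBrowser.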

(You can skip this section unless you care about what power the rewritten test loses.)

Have I weakened the test? This sequence of the Server's code (spread among several methods) is now unexercised:

      @dispatched << [command, args]
      @current_page_name = @app.send(command, args)
      @current_xhtml = @renderer.send("#{@current_page_name}_for", @app)
      response.body = @current_xhtml
      raise HTTPStatus::OK

But how many tests do I need to be confident this code works? And does this test need to be one of them? I think not, so we can live with this weakening, but I'll make a note to later ensure that some test checks the sequence.

Some server setup now also goes untested. It looks like this:

  def install_UI_servlets
    install_generic_proc('/') { | request, app |
      # ... body elided ...
    }
    install_command(:login, 'login', 'password')
    install_command(:record_case, 'client', 'clinic_id')
    install_command(:record_visit, 'diagnosis', 'charges')
    install_command(:record_audit, 'auditor', 'variance')
  end

If, for example, record_audit were misspelled in the next-to-last line, our changed test would no longer detect that. So we need at least one test that exercises each application command through HTTP. It could be a separate test for each command, or one test for all the commands together, or anything in between—but this test no longer has anything to do with that. I'll defer the issue of those tests until what I think will be part 7. (Note that exercising each command will check the dispatching and rendering code shown three paragraphs ago, so I can erase my earlier reminder.)
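Until those HTTP tests exist, one cheap interim guard against this misspelling risk (my own sketch, not something from the post) is to assert that every command the server installs names a real App method:

```ruby
# Hypothetical App stub with the four commands the server installs.
class App
  def login(args); end
  def record_case(args); end
  def record_visit(args); end
  def record_audit(args); end
end

# The command symbols passed to install_command, collected in one place.
INSTALLED_COMMANDS = [:login, :record_case, :record_visit, :record_audit]

# Returns the commands that don't correspond to any App method.
def misspelled_commands(app, commands)
  commands.reject { |command| app.respond_to?(command) }
end
```

This says nothing about the dispatching or rendering code, so it doesn't replace the through-HTTP tests, but it catches the typo immediately.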

The real HTTP server renders a page for each command, so the earlier version of this test did as well. The new version only renders the one page it cares about. So certain bugs in rendering might not be caught by this test. (They'd have to be very unsubtle bugs, since even the earlier version never actually checked any of the HTML along the way to the page-under-test. Only something like a thrown exception would be noticed.) Still, we need at least one test that checks each rendered page. I'll keep that in mind as I continue.

I next did a little cleanup, removing the fake browser object from the execution path since it really adds no value. I'll skip the details. Suffice it to say that the effort surfaced some duplication hidden behind this surface:

  def test_cannot_append_to_a_nominal_audit
    as_our_story_begins {
      we_have_an_audit_record_with(:variance => 'nominal')
    }
    # ... elided ...
    assert_page_title_matches(/Case \d+/)
  end

The duplication made me wonder: what's this test really about? Does it have anything at all to do with movement through pages? No, it's about the rendering of pages in the presence of model state that ought to affect what gets rendered. This kind of test is better described like this:

Given an app with particular state,
when rendering a particular page:
    I can make certain assertions about that page.

Or, in code:

  def test_nominal_audit_prevents_the_add_audit_action
    given_app_with {
      audit_record('variance' => 'nominal')
    }
    when_rendering(:case_display_page) {
      # ... assertions elided ...
    }
  end

This is a business-facing test in that it describes a business rule: if you've got one nominal audit, there should be no way to add any more audits. It's also like a unit test in that it gives very specific instructions to a programmer. In my case, the fact that this test fails instructs me to change a particular localized piece of code:

  def case_display_page(app)
    # ... elided ...
                        submit('Add an Audit Record'))))

(I'll talk about my rendering peculiarities in some later installment.)

A lot of Fit tests share this property of being about localized business rules (or business rules that should be localized). It seems to be a distinct category of business-facing test, one that often gets overlooked because of the assumption that a customer/acceptance/functional test must be end-to-end and must go through the same interface as the user does.

My test here should be one of a file-full of tests that describe what's most important—from a business point of view—about the presentation of a particular place (or interaction context) in the application. Another test of that sort would be this one:

  def test_typical_case_display_page
    given_app_with {
      case_record('clinic_id' => 19600219)
    }
    when_rendering(:case_display_page) {
      assert_page_title_matches(/^Case 19600219/)
      # ... assertions about the add-visit and add-audit actions elided ...
    }
  end

This test describes three facts about the Case Display page's default appearance that must survive any fiddling with how it looks: it must have a title that includes the case's clinic ID, and there must be a way to cause the add-visit and add-audit actions in the App. (This test passes, by the way, though the previous one continues to fail.)

Consider this test something like a wireframe diagram in code.

Most tarpit GUI tests are addressing, explicitly and implicitly, several issues all jumbled together. If you separate them, you get something that's both faster and much clearer. Here, I've addressed the particular issue of what must be true of a page. Later, I'll address the particular issue of what must be true of navigation among pages. But first, I'll make my test pass and see what that suggests about hooking business rules into rendering.

See the code for complete details.

## Posted at 20:30 in category /testing [permalink] [top]

About Brian Marick
I consult mainly on Agile software development, with a special focus on how testing fits in.

Contact me here: marick@exampler.com.






Working your way out of the automated GUI testing tarpit
  1. Three ways of writing the same test
  2. A test should deduce its setup path
  3. Convert the suite one failure at a time
  4. You should be able to get to any page in one step
  5. Extract fast tests about single pages
  6. Link checking without clicking on links
  7. Workflow tests remain GUI tests