Exploration Through Example

Example-driven development, Agile testing, context-driven testing, Agile programming, Ruby, and other things of interest to Brian Marick

Mon, 19 Dec 2005

Working your way out of the automated GUI testing tarpit (part 4)

part 1, part 2, part 3

The story so far: One of my main goals for tests is that they contain no excess words. That means that a GUI test should not describe the path by which it gets to the page under test. In part 1, I described a declarative format. With it, the test writer specifies all and only the facts that should be true of the app at the point the test begins. Part 2 gives a simple implementation that figures out a path through the app that makes those facts true. Part 3 recommends that you migrate tests to this format only as they fail.

The new-format tests, though, run as slowly as they did before being migrated. Now it's time to make them faster. I'll do that in two steps. The first doesn't even double their speed. That's hardly sufficient, but the implementation has a side effect that helps the programmer and exploratory tester.

Previously, I only pretended the app talked across the network. Since that fakery would make any timings useless, it's now running on a real server (WEBrick), fielding real live HTTP. So localhost:8080 shows this stunningly attractive UI:

Welcome to the Case Management System

Authorized Users Only


In part 1, I wrote three versions of a test. All three of them communicate with the server in exactly the same way: they send eight different HTTP GET commands (just as a browser would if you visited the app and then pressed seven buttons on seven pages).

To speed up the test, I've made it remember all eight of the commands the first time it runs. (That all happens behind the scenes; there are no changes to the test.) Now later runs can send the commands in a big glob via a side channel. That avoids seven of the round trips.
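The recording might look something like this sketch (the class and method names here are my guesses for illustration, not the article's actual code): the app appends each dispatched command to a list, and a later run feeds that whole list back through the same dispatch method.

```ruby
# Hypothetical sketch of the command cache. The App records every
# (command, args) pair it dispatches; replaying the recorded list
# through the same dispatch method rebuilds the state in one glob.
class App
  attr_reader :dispatched

  def initialize
    @dispatched = []
  end

  def dispatch(command, args)
    @dispatched << [command, args]
    send(command, *args)
  end

  # Replay a cached command list, e.g. one read back from a cache file.
  def replay(command_descriptions)
    command_descriptions.each { |one| dispatch(*one) }
  end

  # Stand-ins for real command handlers.
  def login(name, password)
    @user = name
  end

  def record_case(owner, number)
    @case_number = number
  end
end

first_run = App.new
first_run.dispatch(:login, ["unimportant", "unimportant"])
first_run.dispatch(:record_case, ["unimportant", "213"])

later_run = App.new
later_run.replay(first_run.dispatched)   # seven round trips become one
```

Because the replay goes through the same `dispatch` method as live traffic, the rebuilt state should be indistinguishable from the recorded one.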

The results are underwhelming. The original test takes 5.1 seconds. The version that sends the big glob takes 3.2 seconds. The more complicated the test setup path, the greater the speedup would be, but still—is this worth the trouble?

Not so far, but it will be after the next speedup. I hope. In the meantime, there's a useful spinoff feature. One of the reasons I hate anything to do with improving a UI is that every time you tweak a page, you have to navigate to it to check whether the change looks right. Having to do that four or five times in a row drives me wild. So I wish this were a universal law:

You can get to any page in an app in one step.

Now that we can remember application state, that's possible here. Imagine the following:

You have to tweak a particular page in the UI. You navigate to that page, then type this:

  ruby hyperjump.rb --snapshot myfix

You go into the code, make a change, reload the app, and return the app to its previous state like this:

  ruby hyperjump.rb myfix --open

The --open tells hyperjump to open localhost:8080/refresh in the browser. That shows the page corresponding to the saved state, which is the page you're tweaking.
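The article doesn't show hyperjump's internals, but the snapshot file itself could be as simple as the inspect/eval round trip the cache files already use. A guessed sketch, in which the directory layout and function names are my assumptions:

```ruby
# Guessed sketch of snapshot save/restore. --snapshot would fetch the
# app's recorded command list and save it under a name; a later run
# would load it, replay it, and open /refresh in a browser.
require 'fileutils'

def save_snapshot(name, command_descriptions, dir = 'path-cache')
  FileUtils.mkdir_p(dir)
  File.write(File.join(dir, name), command_descriptions.inspect)
end

def load_snapshot(name, dir = 'path-cache')
  # Same trick the cache files use: the file's text is a Ruby literal.
  eval(File.read(File.join(dir, name)))
end
```

The HTTP and XMLRPC plumbing is omitted here; the point is only that a snapshot is nothing more exotic than a named command list on disk.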

This jump-to-page feature would also be useful for exploratory testing. It's common to go to the same place in the program multiple times during a bout of exploratory testing. Perhaps you're trying to learn more about the circumstances in which a bug occurs (a kind of failure improvement). Or you're trying different paths through the program, each of which starts some distance into it.

There's nothing new about using captured commands to accelerate tasks. People have been using GUI capture/replay tools for this kind of thing since the dawn of time. But it's nice that the feature fell out of a different goal.

For more about the implementation, refer to part 4b. The code has the complete details.

## Posted at 22:49 in category /testing [permalink] [top]

Working your way out of the automated GUI testing tarpit (part 4b)

Here are some (decidedly optional) details about the implementation described in part 4.

Consider this test:

  def test_cannot_append_to_a_nominal_audit
    as_our_story_begins {
      we_have_an_audit_record_with(:variance => 'nominal')
    }

    assert_page_has_no_button_labeled('Add Audit')
  end
as_our_story_begins sets up the application state by deducing a sequence of commands to send to the browser. After that's done the first time, the sequence is stored in a file devoted to a single test method. The one for the test we've been using is path-cache/declarative-test.rb/test_cannot_append_to_a_nominal_audit. Its contents look like this:

[[:login, ["unimportant", "unimportant"]],
  [:new_case, []],
  [:record_case, ["unimportant", "213"]],
  [:add_visit, []],
  [:record_visit, ["unimportant", "100"]],
  [:add_audit, []],
  [:record_audit, ["unimportant", "nominal"]]]

The next time declarative-test.rb is run, as_our_story_begins notices there's a cache file, and sends its contents over an XMLRPC connection. The server turns it into an array:

  command_descriptions = eval(command_string)
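Round-tripping the cache file through inspect and eval looks like this (fine for a local test tool, though it would be dangerous with untrusted input):

```ruby
# The cache file's text is a Ruby array literal, so eval restores the
# original structure directly.
command_string = '[[:login, ["unimportant", "unimportant"]],
  [:new_case, []],
  [:record_case, ["unimportant", "213"]]]'

command_descriptions = eval(command_string)
```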

Then each command is dispatched to the App object:

  command_descriptions.each { | one |
    @current_page_name = dispatch(*one)
  }

  def dispatch(command, args)
    @dispatched << [command, args]
    @app.send(command, *args)
  end

That dispatch method is exactly the same method used to react to requests from the browser:

    @current_page_name = dispatch(command,
                                  values(request, *required_args))

By doing that, I reduce the suspicion that the restored state is somehow different from the state the app had at the moment the snapshot was taken.

The only difference between the two routes into the app is what happens after dispatching. dispatch returns the name of the next page to send to a browser. When the request comes from a browser, the page is rendered and sent back. When it comes by the XMLRPC side channel, nothing is rendered, but the most recent page name is stashed away. When the browser visits localhost:8080/refresh, that name is used to render the page:

  install_generic_proc('/refresh') { | request, app |
    # render the page named by the most recent dispatch
  }


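The shared-dispatch idea can be sketched end to end. All the names below are mine, invented for illustration; the real code differs:

```ruby
# Sketch of the two routes into the app sharing one dispatch method,
# differing only in what happens to the returned page name.
class FrontEnd
  def initialize(app)
    @app = app
  end

  # Both routes funnel through here; dispatch returns the next page name.
  def dispatch(command, args)
    @app.send(command, *args)
  end

  # Browser route: render the returned page immediately.
  def handle_browser_request(command, args)
    @current_page_name = dispatch(command, args)
    render(@current_page_name)
  end

  # XMLRPC route: dispatch everything, render nothing, stash the name.
  def handle_side_channel(command_descriptions)
    command_descriptions.each { |one| @current_page_name = dispatch(*one) }
  end

  # GET /refresh: render whatever page the stashed name points to.
  def refresh
    render(@current_page_name)
  end

  def render(page_name)
    "(rendered #{page_name})"
  end
end
```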
  • There's a way in which all of my tests could be broken (even before caching). My test doesn't drive a real browser. Instead, I use a Browser object that sends GET requests directly to the server. As a result, nothing in the test will fail if the wrong pages are rendered. The tests would appear to work perfectly fine if every GET request returned a blank page instead of any of the correct forms.

    That wouldn't be a problem in real life. In real life, my tests would be issuing commands to a browser via Watir or Selenium. I should probably use one of them for this demo but (1) Watir only works with Windows IE and I use a Mac, and (2) I'm too lazy to learn Selenium right now.

  • The list of commands is stored in the app, not in the test. When it's time to cache the state, the test asks the app for the list. No thought went into the decision to do it that way. Maybe some should have.

  • Previously, the tests bogusly succeeded. They fail now, so that I can later write the code to make them pass.

  • The tests launch the server in a subprocess (see test-util.rb). They use fork(), kill(), and wait(). I don't know if those work on Windows.
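On a Unix-like system, that launch pattern is roughly the following (a sketch under my own names, not test-util.rb itself):

```ruby
# Sketch of running a server in a subprocess for the duration of a
# block; Unix-only, since fork is unavailable on stock Windows Ruby.
def with_server_process
  pid = fork do
    # In the real tests this would start WEBrick; sleep stands in here.
    sleep
  end
  sleep 0.1          # crude wait for the child to come up
  yield pid
ensure
  Process.kill('TERM', pid) if pid
  Process.wait(pid) if pid
end
```

The ensure clause guarantees the child is killed and reaped even when the block raises, so a failing test doesn't leave a stray server holding the port.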

Credit: The idea of replaying server-level commands just popped into my head. It might have been put there by Michael Silverstein's "Logical Capture/Replay".

## Posted at 22:49 in category /testing [permalink] [top]


Working your way out of the automated GUI testing tarpit
  1. Three ways of writing the same test
  2. A test should deduce its setup path
  3. Convert the suite one failure at a time
  4. You should be able to get to any page in one step
  5. Extract fast tests about single pages
  6. Link checking without clicking on links
  7. Workflow tests remain GUI tests