
This is a book review posted in comp.software.testing in November, 1994.


Brian Marick, The Craft of Software Testing, Prentice Hall, 1995, 553 pages, $47, ISBN 0-13-177411-5.

People on this list have seen occasional references to this book, which is at long last finished and available. I thought I would try to write a reasonably unbiased description. I'll mention weak points in the book as a way of forcing me to write -- and distribute -- improvements.

My main goal for the book was to solve a problem I saw with existing books: they were not specific enough. They required too much invention and "filling in the blanks" from the reader. I wanted to fix that, to present testing techniques in complete detail and to tie them together into a sensible "cookbook" process that would work reasonably well in most situations.

A secondary goal was to publicize two semi-original ideas of mine: making a clear distinction between deciding what needs testing (test requirements) and designing tests to satisfy those requirements; and using test requirement catalogs.
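
To give the flavor of both ideas, here's a small example I've made up for this post (the function, the catalog entry, and the requirement names are all invented; they're not taken from the book). The test requirements record what must be exercised; test design then packs several requirements into each concrete test case:

    #include <assert.h>
    #include <string.h>

    #define NOT_FOUND -1

    /* Toy code under test: linear search for a key in a string table. */
    static int lookup(const char **table, int size, const char *key)
    {
        int i;
        for (i = 0; i < size; i++)
            if (strcmp(table[i], key) == 0)
                return i;
        return NOT_FOUND;
    }

    /* Test requirements, from an imagined "searching" catalog entry:
     *   R1: key absent           R2: key present exactly once
     *   R3: match in first slot  R4: match in last slot
     *   R5: table is empty
     * Test design: four concrete cases, each satisfying several
     * requirements at once. */
    int main(void)
    {
        const char *t[] = { "alpha", "beta", "gamma" };

        assert(lookup(t, 3, "alpha") == 0);          /* R2, R3 */
        assert(lookup(t, 3, "gamma") == 2);          /* R2, R4 */
        assert(lookup(t, 3, "delta") == NOT_FOUND);  /* R1 */
        assert(lookup(t, 0, "delta") == NOT_FOUND);  /* R1, R5 */
        return 0;
    }

The separation is the point: the requirements are worth writing down before any test case exists, and a catalog means you don't re-derive R1 through R5 from scratch for every searching routine you test.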

Audience: the book is subtitled "subsystem testing". The subtitle is serious. This book does not attempt to describe all types of testing. It concentrates on "testing in the medium": moderate-sized subsystems such as device drivers, class libraries, optimization phases in compilers, individual protocol modules, etc. A typical reader will be a developer testing his or her own code. Independent testers who can look at the code are a second audience. These techniques and this process can be adapted to system testing (I teach a course that does that), but there are many important blanks a system tester would have to fill in.


The book is divided into six parts:

The Basic Technique

This section describes the basic subsystem testing process in gory detail. It uses as a running example a single subroutine from L. Peter Deutsch's Ghostscript program. That's a small example, but it's at least real code, even somewhat tricky code, with a real bug.

Adopting Subsystem Testing

This section describes how to adopt these techniques. The assumption is that trying to adopt everything all at once is a mistake. You're better off starting small -- spending a little effort and money, getting modest improvements, and being encouraged to continue -- than trying to do everything at once, getting frustrated, and giving up.

Subsystem Testing in Practice

This section extends the basic technique in more realistic directions: testing big subsystems, what to do when you don't have all the information used in Part 1, testing bug fixes, and what to throw out when you don't have time to do everything.

Examples and Extensions

A grab-bag of chapters extending the basic ideas to particular situations: syntax testing, testing consistency checkers, state machines and statecharts, and object-based and object-oriented software. Oops -- the publisher would want me to say, "testing object-based and ==>OBJECT-ORIENTED<== software".

Multiplying Test Requirements

Some more advanced wrinkles on the basic techniques. Optional.

Appendices

These are the parts I use often: reference catalogs and checklists for day-to-day testing.


Does the book succeed at its goals? Reasonably well, I think, though better at the secondary goals. People who take my courses like the idea of test requirements and are usually enthusiastic about the idea of test requirement catalogs.

And the primary goal? Testing really is a craft, just like carpentry or mathematics. It's hard to learn a craft from a book. It's better to learn it in person, by watching someone else do it and then having that someone comment on your attempts. So, if you buy the book, you'll find the occasional blank to fill in, and it may take you some time to puzzle out something that I could clarify in a minute if I were sitting beside you.

The final test is whether the time spent reading the book pays off in smoother testing and bugs found earlier. I believe that, for most people, it will, so I'm content.

There are three larger blanks that I regret and promise to fill:

  1. Anyone who knows me knows that I think faults of omission are the most important kind of code-level faults. Those are the faults where, as Robert Glass says, "the code isn't complicated enough for the problem": omitted special cases, usually. The dilemma is that the same person who caused the fault will not be terribly good at finding it. (That's one of the reasons we have system testers.) But sometimes that person does find such faults. And one of my goals is to make that happen as often as possible. A main way is to "trick" the developer into looking at the code in a different way. (A made-up example of such a fault follows this list.)
  2. After you're a comfortable and experienced tester, a good way to trick yourself is to summarize the code into what I call an abbreviated specification. However, it takes time to get comfortable and experienced. Until then, creating an abbreviated specification is a hurdle, especially for developers. They want to test from the code itself, not spend time first creating an abstraction that describes it. In my developer testing course, I've recently capitulated and described how to apply the same technique directly to the code. That's not in the book. The extrapolation isn't terribly difficult, but it's a pretty large blank in a book that was supposed to have only small ones. (A sketch of the abbreviated-specification idea also follows this list.)
  3. Integration test requirements are derived from function calls. The discussion is too sketchy. In particular, there's a particular type of integration test requirement that I do a poor job of explaining. (I've tried to explain it in several different ways over the years. Only within the last month do I think I've finally succeeded.)
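
To make the first of those concrete, here's a made-up fault of omission (again my example, not one from the book):

    #include <stdio.h>

    /* Average of n scores.  Nothing in the code that *is* here is
     * wrong; the fault is code that was never written.  Deriving test
     * requirements only from the loop will never suggest n == 0, but
     * a catalog entry for counts (zero, one, many) forces the
     * question. */
    double average(const int *scores, int n)
    {
        int i;
        long sum = 0;
        for (i = 0; i < n; i++)
            sum += scores[i];
        return (double) sum / n;  /* omitted special case: n == 0 */
    }

    int main(void)
    {
        int s[] = { 90, 80 };
        printf("%g\n", average(s, 2));  /* 85 */
        printf("%g\n", average(s, 0));  /* nan: the case nobody wrote */
        return 0;
    }

No amount of staring at the code that exists points at the missing guard; a requirement that arrives from outside the code is the kind of "trick" that does.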
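
And the flavor of the second: an abbreviated specification is a terse restatement of what the code does, written so that test requirements fall out of its clauses instead of out of the code's branches. In my own invented notation (not the book's), the lookup() routine from earlier might be summarized as:

    lookup(table, size, key):
        Search table[0..size-1] for key.
        IF key occurs THEN return the index of its FIRST occurrence
        ELSE return NOT_FOUND.

The word "FIRST" is doing work the code never advertises: it suggests testing a key that occurs more than once, a requirement that reading the loop alone would probably not prompt.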

The chapter on test implementation has no discussion of maintainability of test suites, which is critically important.

I have some other minor quibbles.

  1. Some people will be disappointed by the sketchy treatment of the "big picture", how testing integrates with the rest of development. That was a deliberate decision. This book is not everything you need to know. A decent discussion would have inflated the page count even more.
  2. The bookkeeping that subsystem testing requires cries out for automated assistance. There isn't any.
  3. The discussion of object-oriented testing is hard to follow. It should have been divided into two parts: object-oriented testing when you design as Bertrand Meyer and others say you should (easy), and object-oriented testing when you don't (complicated and messy). The section is speculative, without a solid base of experience testing big object-oriented systems, so it's bound to be incomplete.

I'm sure you'll have your own objections if you read the book. Let me know: marick@exampler.com/testing-com
