by Tom Gilb and Dorothy Graham
(Addison-Wesley, 1993, ISBN 0-201-63181-4)
reviewed by Brian Marick on October 31, 1996
From the preface:
"The word 'inspect' is an ordinary English verb meaning 'to look at or examine.' Software Inspection (with a capital 'I'), as described in this book, is an extra-ordinary technique which has been proved successful again and again - in so far as it is properly applied...
"There was (until now) no definitive book which described the Inspection process clearly and in its most advanced, complete and productive form.
"The authors have extensive experience in many and varied software engineering quality improvement techniques, and in particular in Inspections, and a particular feature of this book is the numerous small tricks, insights and practical observations gathered since we began to spread this method to our international clients in 1975."
Dave Gelperin said something at a conference that struck me as utterly profound. It went roughly like this: "People don't like Inspections. They never have. The fact that Inspections have survived in the face of universal dislike for over two decades must be proof of their value." Something so disliked, and yet so valued, is worth knowing about.
Inspections (with the capital 'I') are at one end of a continuum of formality and public accountability. At the other end is you, alone, double-checking your work. Slightly more formal is "buddy checking", where you and a friend check your work together. More formal is a group of people who meet to review work. Such reviews often produce formal reports in which the group takes collective responsibility for the work. Finally, there are Inspections, which have several variants. All share these characteristics:
There is a group of "checkers" who individually examine the work alone at their desks. Often the checkers have specific roles (checking hardware interfaces, checking for standards compliance, etc.). The "work", by the way, may be anything: code, user documentation, requirements, etc.
The checkers gather, together with a moderator and a recorder, in a logging meeting. The checkers report potential problems ("issues"), which the recorder records on the board. The moderator keeps the meeting on track.
The logging meeting is only to report issues. The moderator squelches any attempt to resolve them.
During the logging meeting, checkers are expected to discover and report new issues. The synergy of the meeting encourages them; without it, they might as well email their issues to the recorder.
The original author (or someone else) resolves the issues after the Inspection. There is some degree of double-checking of that work, ranging from buddy checking by the Inspection leader to a full-blown re-inspection.
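The workflow above can be sketched as a small data model. This is my own illustrative sketch, not anything from the book; the class names, fields, and the sample issues are all invented for the example. The one rule it encodes is the separation of phases: the logging meeting only records issues, and resolution happens afterwards, when the editor must address every logged issue.

```python
from dataclasses import dataclass, field

@dataclass
class Issue:
    """A potential problem logged during the meeting -- not yet a confirmed defect."""
    reporter: str
    description: str
    resolved: bool = False
    resolution: str = ""

@dataclass
class Inspection:
    """Tracks issues from the logging meeting through the edit phase."""
    work_product: str
    issues: list = field(default_factory=list)

    def log(self, reporter, description):
        # During the logging meeting, issues are only recorded;
        # the moderator defers all discussion of how to fix them.
        self.issues.append(Issue(reporter, description))

    def resolve(self, index, action):
        # After the meeting, the editor must address every issue --
        # even if "addressing" it means recording why it is not an error.
        self.issues[index].resolved = True
        self.issues[index].resolution = action

    def unaddressed(self):
        return [i for i in self.issues if not i.resolved]

insp = Inspection("some module")  # hypothetical work product
insp.log("checker A", "comment too terse for maintainers")
insp.log("checker B", "hardware interface assumption undocumented")
insp.resolve(0, "rewrote comment; added glossary entry")
print(len(insp.unaddressed()))  # → 1
```

The point of the `unaddressed` check is that the edit phase is not finished while it returns anything; "ignore it" is the one resolution the process forbids.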
Gilb and Graham augment this common practice with a strong and valuable emphasis on feeding inspection data into process improvement.
This book has a rare quality. It is complete and consistent. It is "whole" - not sketches of a solution, not leaving the "obvious" steps to be supplied by the reader, not very dependent on the reader's judgement at all. In fact, withholding judgement may be the best way to read it. More than once, I found myself saying, "That can't possibly work", only to realize later that it does in fact work - partly because of something explained later, partly because I had to make sure I thought about software development in their terms. (The book is somewhat leadenly written, which makes key assumptions easier to miss.)
You will like this book (and stand a better chance of adopting their Inspection process) if you have these characteristics:
You must be obsessed with the efficiency of micro-tasks.
Take these quotes from page 192:
This emphasis is reminiscent of nothing so much as the time and motion studies of Taylorism. ("If we position the bricklayer at this height relative to the new row of bricks, and the brick supply and mortar at this height, and provide one runner per twelve bricklayers, we will optimize bricklaying efficiency.") They differ from Taylorism in that the optimizing procedures are only initially given by the experts from on high; thereafter, they're tailored through metrics kept by, and improvement suggestions made by, the workers themselves.
You must be comfortable with a generative, step-by-step software production process.
Their description of software development is reminiscent of transformational grammars in linguistics. You know the drill: you start by deciding what type of sentence you want, perhaps a noun phrase followed by a verb phrase. You can then decide what type of noun phrase you want, perhaps a noun followed by a prepositional phrase. You can then... (Apologies to linguists out there if I've botched the details.) According to Gilb and Graham, software development is similarly rule-driven:
You must believe that complete and sufficient rules and procedures can be captured and written down, and that maintaining consistency between documents is appropriate.
You must work in a stable organization with widespread respect, a decent minimum level of competence, and good communication.
If you do not work in such an organization, complete and sufficient rules can't be captured. Why do I say this? Page 426 gives rules for code, one of which is:
How could such rules be useful? Surely they are far too vague? They won't lead to pitched battles about relative complexities of commentary and code during the logging meeting (because the moderator won't allow it), but surely the battles will start just outside the door?
Not if you have widespread respect. An essential feature of their Inspection process is that the person editing the document after the Inspection must address all issues. So if anyone thinks a comment is too terse, it has to be dealt with. Perhaps it will be rewritten, perhaps a glossary will be added, perhaps a request will be made to change upstream design documentation, whatever. The key notion is that the checker is always right. As they say on p. 224:
"The one thing you [the editor] are not allowed to do is to ignore any logged issue. A logged issue is not necessarily an 'error' on your part. It is, however, proof that the real organization out there did have, and thus will probably continue to have, some sort of problem in the future unless you act to prevent it now."
This requires respect and minimum competence (you can't think the checker is a bozo - and the checker must not in fact be a bozo). It requires communication, because you have to seek out the checker afterwards to discuss the issue. If you have these things, a consensus will emerge about what terms like "complete" and "relevant" mean. If the organization is stable (low turnover), this consensus can be preserved.
If your organization doesn't have these properties, the process is likely to devolve until only nit-picking, bookkeeping, undebatable issues are raised, the sort shown on page 55:
(Either the authors picked a poor example - all these issues are found by mechanical consistency checks - or I've completely missed the point of the book.) Further, the rules themselves are likely to become exclusively bureaucratic, like this subset from page 425:
For all its completeness, I have two gripes with this book: