The Craft of Software Testing
by Brian Marick (Prentice Hall, 1995, ISBN 0-13-177411-5)
reviewed by Brian Marick on September 27, 1996
From the preface:
"This book is about 'testing in the medium'. It concentrates
on thorough testing of moderate-sized components of large systems.
I call these components subsystems. Good testing of subsystems is
a prerequisite for effective and efficient testing of the
integrated system."
(This is my own book, so don't expect an impartial review. I'll leave it to you to judge whether I'm too harsh or too kind.)
The doctrine used to be that developers can't find bugs in their own code. In fact, there are some kinds of bugs that reasonably well-trained and motivated developers can find reliably enough. There are other kinds that almost all developers will do a lousy job of finding. It makes economic sense to have developers find (or prevent) the former and have product testers concentrate on the latter. This book is primarily about how developers should test.
I made a terrible mistake when writing the book. Its first part discusses how to design tests from an external description of a program's behavior (a specification). I chose to use a rigid format for specifications (reminiscent of pseudocode or some formal notations). It's unlikely a developer will ever use such a specification. In fact, many won't have any specification at all. They will all have code, so I should have explained how to design tests directly from the code. As it is, the book advocates converting the code into a form of abbreviated specification, then designing tests from that. While that is likely more effective than designing tests directly from the code, I no longer believe it's cost-effective. More to the point, no one will do it, so why teach it?
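To give the flavor of that intermediate form, here is a sketch of my own, invented for this review and much simpler than the book's SREADHEX example. The abbreviated specification is written as clauses in a comment, and each clause yields test requirements:

    #include <stddef.h>

    /* Abbreviated specification, the intermediate form the book teaches:
       PRECONDITIONS: s points to a NUL-terminated string; c != '\0'.
       POSTCONDITIONS:
         1. Result is the index of the first occurrence of c in s.
         2. If c does not occur in s, result is -1.
       Test requirements derived from the clauses:
         - c occurs exactly once; c occurs more than once (clause 1)
         - c is the first character; c is the last character (boundaries)
         - c does not occur; s is empty (clause 2) */
    long first_index_of(const char *s, char c)
    {
        for (long i = 0; s[i] != '\0'; i++)
            if (s[i] == c)
                return i;
        return -1;
    }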
Hidden beneath the incorrect presentation is useful information. The book makes two major contributions. The first is a description of a form of error-based testing. It is based on the observation that much of what programmers do is the creation and use of clichés. Some are common to all programming; others are specific to particular products or application domains. Moreover, programmers tend to make clichéd mistakes when using clichés. As simple examples, think of off-by-one errors or failing to check the error return from a function call. Useful ways to find errors associated with particular groups of clichés can be stored in catalogs that are economical to use. A general-purpose catalog is given with the book, together with a long discussion of how to use it.
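As a made-up illustration (again, mine rather than an example from the book), the C fragment below contains two such clichéd mistakes, left in deliberately. Catalog entries of the sort the book collects, for the "loop over a buffer" and "allocate memory" clichés, point a tester straight at them:

    #include <stdlib.h>

    /* Copy the first 'n' bytes of 'src' into a freshly allocated buffer.
       Both bugs below are deliberate; they are the cliched mistakes
       a test requirement catalog is designed to flush out. */
    char *copy_prefix(const char *src, size_t n)
    {
        char *dest = malloc(n);          /* cliched mistake 1: the error
                                            return (NULL) is never checked */
        for (size_t i = 0; i <= n; i++)  /* cliched mistake 2: off-by-one;
                                            writes n+1 bytes */
            dest[i] = src[i];
        return dest;
    }

    /* Catalog-style test requirements that expose them:
       - n == 0 (degenerate loop)
       - src exactly n bytes long (boundary)
       - allocation failure forced (malloc returns NULL) */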
The book's second contribution is a reaction to a problem common in testing books: they sketch the idea behind a solution but leave the reader to "fill in the blanks". When I was a beginning tester, I often found that filling in the blanks was hard (and that trying frequently revealed the idea was neither as simple nor as useful as the sketch made it sound); that's still true today. The book presents a detailed test design process that integrates several techniques, leaving relatively few details to the reader.
Writing Solid Code, by Steve Maguire, has a less thorough discussion of test design but a better discussion of adding test support code to the product (a sketch of what I mean by that follows these notes). It describes ways to prevent or detect bugs that complement those in The Craft of Software Testing.
"Testing Made Palatable", by Marc Rettig, can convince developers that testing is worthwhile and not intolerable.
Table of Contents
1. The Specification
2. Introduction to the SREADHEX Example
3. Building the Test Requirement Checklist
4. Test Specifications
5. Test Drivers and Suite Drivers
6. Inspecting Code with the Question Catalog
7. Using Coverage to Test the Test Suite
8. Cleaning Up
9. Miscellaneous Tips
10. Getting Going
11. Getting Good
12. Using More Typical Specifications (Including None at All)
13. Working with Large Subsystems
14. Testing Bug Fixes and Other Maintenance Changes
15. Testing Under Schedule Pressure
16. Syntax Testing
17. A Second Complete Example: MAX
18. Testing Consistency Relationships
19. State Machines and Statecharts
20. Testing Subsystems that Use Reusable Software
21. Testing Object-Based Software
22. Object-Oriented Software 1: Inheritance
23. An Example of Testing Derived Classes
24. Object-Oriented Software 2: Dynamic Binding
25. Simpler Test Requirement Multiplication
26. Multiplying Operation Test Requirements
A. Test Requirement Catalog (Student Version)
B. Test Requirement Catalog
C. POSIX-Specific Test Requirement Catalog (Sample)
D. Question Catalog for Code Inspections
E. Requirements for Complex Boolean Catalog
F. Checklists for Test Writing