Testing Foundations

Testing Computer Software
by Cem Kaner, Jack Falk, and Hung Quoc Nguyen
(Van Nostrand Reinhold, 1993, ISBN 0-442-01361-2)
reviewed by Brian Marick on August 23, 1996
From the preface:
"Testing Computer Software provides a realistic, pragmatic introduction to testing consumer and business software under normal business conditions. We've tested software and managed testers for well known, fast-paced Silicon Valley software publishers. We wrote this book as a training and survival guide for our staffs."
This is, simply, the best book for the product tester or the testing lead. The reason for that is exemplified by chapter 5, "Reporting and Analyzing Bugs". I don't know of any other testing book that devotes a full chapter to this essential topic. Problem reports are the main way testers have an effect on the product, yet - save for this book - you'd get the impression that the sum total of human knowledge on this topic is captured by the single sentence "Make sure bug reports are repeatable by someone else." In contrast, this chapter opens with "The point of writing Problem Reports is to get bugs fixed." The authors of course emphasize reproducibility. But they also warn you how easy it is to antagonize programmers to the point that bugs don't get fixed. They describe the information programmers need to fix bugs. And they help you present problem reports so that important ones don't get shuffled into the "fix it if we end up with extra time" pile.
There are people in the world who have technical knowledge but are somehow nevertheless ineffective at their jobs. I don't know how to characterize what they lack. Some of it is people knowledge. Some of it is a sense for priorities. Some of it is a feel for how things fit together. Whatever "it" is, this book was written by people who have it, and any tester who wants it could benefit by reading Testing Computer Software.
And then there are people who are ineffective at their jobs because they lack technical knowledge, which in this context mainly means the ability to design good tests. Testing books traditionally discuss test design from the wrong perspective. They treat it as a systematic exploration of certain aspects of the product, somewhat assisted by a nebulous process called "error guessing". In fact, error guessing - the use of past experience and an understanding of the frailties of human developers - makes the difference between a systematic process that finds tons of bugs and one that systematically accomplishes little.
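To make that contrast concrete, here is a minimal sketch of my own, not an example from the book: routine tests for an imaginary parse_quantity routine, followed by the extra cases a tester might add by guessing at the mistakes programmers commonly make.

    # Hypothetical illustration (mine, not the book's): tests for an imaginary
    # parse_quantity() routine that should accept whole numbers from 1 to 999.
    def parse_quantity(text):
        value = int(text)
        if not 1 <= value <= 999:
            raise ValueError("quantity out of range")
        return value

    # Straightforward cases a purely systematic pass might stop at.
    routine_cases = ["1", "500", "999"]

    # Error-guessing cases: chosen because experience says programmers
    # forget them, not because any procedure generated them.
    error_guessing_cases = [
        "",                      # empty input, often never handled at all
        " 42 ",                  # stray whitespace from a copy-and-paste
        "0042",                  # leading zeros
        "1000",                  # one past the upper limit
        "-1",                    # sign handling
        "2.5",                   # a decimal where an integer is expected
        "999999999999999999",    # absurdly large input
    ]

    for case in routine_cases + error_guessing_cases:
        try:
            print(repr(case), "->", parse_quantity(case))
        except ValueError as error:
            print(repr(case), "-> rejected:", error)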
Testing Computer Software is still somewhat weighed down by the traditional approach. The chapter on test design has the conventional text about equivalence classes, boundaries, and the like. There are nuggets on error guessing scattered throughout the book, most notably in an appendix containing some 400 errors, but there is no systematic treatment. I may be asking for too much here - I'm certainly asking for more than I provided in my own book. Nevertheless, I've heard too many people dismiss chapter 8, Testing Printers (and Other Devices), as "just about testing printers on PCs". It's actually a fascinating case study that, upon reflection, illustrates a number of general points, including how error guessing can be used effectively. But the authors don't make this evident.
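For readers who haven't yet met that conventional machinery, here is a hedged sketch of equivalence-class and boundary-value selection. It is again my own illustration rather than the book's; the discount rule and its thresholds are invented purely for the example.

    # Hypothetical illustration of the conventional approach: partition the
    # input domain of an invented discount rule into equivalence classes,
    # then test the boundary of each class. The rule is not from the book.
    def discount_rate(order_total):
        """0% below 100, 5% from 100 up to (not including) 500, 10% from 500 on."""
        if order_total < 0:
            raise ValueError("order total cannot be negative")
        if order_total < 100:
            return 0.00
        if order_total < 500:
            return 0.05
        return 0.10

    # One test at each boundary, plus the invalid class of negative totals.
    boundary_cases = [
        (-0.01, ValueError),   # invalid class: negative totals
        (0.00, 0.00),          # lowest legal total, "no discount" class
        (99.99, 0.00),         # just below the first threshold
        (100.00, 0.05),        # exactly at the first threshold
        (499.99, 0.05),        # just below the second threshold
        (500.00, 0.10),        # exactly at the second threshold
    ]

    for total, expected in boundary_cases:
        if expected is ValueError:
            try:
                discount_rate(total)
                print(total, "-> FAILED: negative total was accepted")
            except ValueError:
                print(total, "-> rejected, as expected")
        else:
            actual = discount_rate(total)
            print(total, "->", actual, "ok" if actual == expected else "FAILED")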
Bottom line: in 1981, my first post-college boss handed me a copy of Glenford Myers's The Art of Software Testing and told me I had three months to test the HFPM (don't ask). Myers's book helped. But if I could go back in time, I'd give myself Testing Computer Software and tell myself to read it at least three times at yearly intervals.
"Negotiating Testing Resources: A Collaborative Approach", by Cem Kaner, addresses the universal situation of having more testing tasks than time to do them in. It describes how to agree on a schedule. It gives more detail than the corresponding sections of the book.
Table of contents:
1. An example test series
2. The objectives and limits of testing
3. Test types and their place in the software development process
4. Software errors
5. Reporting and analyzing bugs
6. The problem tracking system
7. Test case design
8. Testing printers (and other devices)
9. Localization testing
10. Testing user manuals
11. Testing tools
12. Test planning and test documentation
13. Tying it together
14. Legal consequences of defective software
15. Managing a testing group
Appendix: common software errors