
"Practical Priorities in System Testing"

an article by Nathan H. Petschenik, in IEEE Software, September 1985
reviewed by Brian Marick on September 24, 1996

From the article:
"The charter of the PICS/PCPR system test group is to provide an extra level of testing on our releases beyond developer testing, to advise management of the quality of the release, and, after management decides how to handle any outstanding problems, to give our customers an accurate truth-in-packaging statement."


In this article, Petschenik describes rules for targeting a system testing effort. The rules let his group find three times as many defects as customers report, and nine times as many high-priority defects. The system under test is a large database application for the Bell Operating Companies. Testing is done in the two calendar months before release (assuming no show-stopping bugs are found), a schedule typical of commercial software development. The test suite consists of 40,000 test cases for about 2 million lines of code.

Their priority rules are:

  1. Design tests based on user scenarios that span system components, not on the components themselves. In today's trendy terminology, Petschenik is saying you should base tests on use cases, not product architecture. This rule is based on the observation that "developers will tend to check individual components more carefully than they check how these components are supposed to work together." System testing should fill in the gaps of developer testing, not replicate it.

    In my experience, this rule is commonly broken by system test organizations. Too often testers test features in isolation, simply working through the reference manual (which is organized alphabetically or by feature group). For example, one tester might test the editor. Another might test the printing code. No one discovers that the product crashes when you edit, print, use the editor to revise some pages, then print again - something that millions of users will do. (A scenario test of that shape is sketched after this list.)

  2. Retesting old features is more important than testing new features. Petschenik justifies this rule with two observations. The first: "The old capabilities [...] are of more immediate importance to our users than the new capabilities." Existing users already depend on the old features; they can't yet depend on the new ones, because they don't have them. The second: developers test whether their new code works; they are much worse at checking whether the old code still works. (A sketch of scheduling regression tests first also follows this list.)

  3. "Testing typical situations is more important than testing boundary cases." There are two justifications for this rule. First, developers do an adequate job of testing boundary cases. Second, boundary cases are uncommon in normal use. A failure in a boundary case will be seen by few users; a failure in typical use will affect many users (by definition).

    Typical use is hard to discover. Petschenik's group infers it by analyzing bug reports from users, while admitting that's a weak method. On-site interviews, videotaping, and instrumented executables are more direct and reliable. Some organizations are blessed by being users of their own product (though such users are often not entirely typical). (The last sketch after this list shows one way to weight test selection toward typical use.)
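
To make rule 1 concrete, here is a minimal sketch of a scenario-based system test in Python. The Document, Editor, and Printer classes are hypothetical stand-ins for product components; nothing here comes from Petschenik's system. The point is the shape of the test: it walks a user scenario across components instead of exercising any one component alone.

    # A scenario-based system test (sketch). Document, Editor, and
    # Printer are hypothetical stand-ins for product components.

    class Document:
        def __init__(self, pages):
            self.pages = list(pages)

    class Editor:
        def revise(self, doc, page_index, new_text):
            doc.pages[page_index] = new_text

    class Printer:
        def render(self, doc):
            # Join pages with form feeds, as a print spooler might.
            return "\f".join(doc.pages)

    def test_edit_print_revise_print():
        # The scenario from the review: edit, print, revise some
        # pages, then print again.
        doc = Document(["page one", "page two"])
        editor, printer = Editor(), Printer()

        editor.revise(doc, 0, "page one, revised")
        assert "page one, revised" in printer.render(doc)

        editor.revise(doc, 1, "page two, revised")
        assert printer.render(doc) == "page one, revised\fpage two, revised"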
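
Rule 2 translates directly into test scheduling. The sketch below, again hypothetical, uses pytest markers to separate regression tests of old features from tests of new ones, so the regression suite can run first and most often.

    # Sketch: prioritizing regression tests with pytest markers.
    # The functions under test and the marker names are invented;
    # register the markers in pytest.ini to silence warnings.
    import pytest

    def monthly_total(amounts):
        return sum(amounts)  # an old, shipped capability

    def csv_export(rows):
        # New in this release.
        return "\n".join(",".join(map(str, row)) for row in rows)

    @pytest.mark.regression
    def test_monthly_total_still_works():
        # Old capability: users already depend on this behavior.
        assert monthly_total([10, 20, 30]) == 60

    @pytest.mark.new_feature
    def test_csv_export():
        assert csv_export([[1, 2], [3, 4]]) == "1,2\n3,4"

    # Run the old-feature suite first and most often:
    #   pytest -m regression
    # then, as the schedule allows:
    #   pytest -m new_feature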
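
Rule 3 amounts to weighting test effort by an operational profile: how often users actually exercise each scenario. The sketch below invents its usage counts; in practice they would stand in for whatever bug reports, interviews, or instrumented executables reveal about typical use.

    # Sketch: selecting test scenarios in proportion to typical use.
    # The scenario names and usage counts are hypothetical.
    import random

    usage_counts = {
        "create order": 5200,
        "edit order": 3100,
        "cancel order": 400,
        "bulk import": 12,  # rare; closer to a boundary case
    }

    def pick_scenarios(n, counts, seed=0):
        rng = random.Random(seed)
        names = list(counts)
        weights = [counts[name] for name in names]
        # Common scenarios are drawn often; rare ones only occasionally.
        return rng.choices(names, weights=weights, k=n)

    if __name__ == "__main__":
        print(pick_scenarios(10, usage_counts))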


This article will help you with the following tasks:

Designing a task-based test case
    Bug reports are one way to decide what tasks to test.

Planning the testing project
    Many people already plan their testing this way. Those who don't will find this article useful. Be warned that the article doesn't give much more detail than is contained in this review.
