Brian Marick


Notes on Object-Oriented Testing

Part 2: Scenario-Based Test Design

July, 1995

Contents
1. Recapitulation
2. What's Missing?
3. Scenario-Based Test Design
3.1 An example of finding an incorrect specification
4. Effects of Object-Oriented Programming
4.1 Surface structure
4.2 Deep (architectural) structure
5. Summary

1. Recapitulation

In Part 1, I described one type of test design that targets plausible faults. Testing techniques search the code or specification for particular clues of interest, then describe how those clues should be tested. The clues in an object-oriented program are the same as in a conventional program. Divining what to test requires either more work or more bookkeeping, because there are more places where relevant information might live. But the approach is essentially the same.

I am posting these notes so that people can tell me where I'm mistaken, oversimplifying, or whatever. Most of my experience is with fault-based testing, so problems are more likely here.

The concluding part (perhaps parts) will talk about test implementation and test evaluation.

2. What's Missing?

Part 1's fault-based testing misses two main types of bugs:

  1. Incorrect specifications: the product faithfully does what the specification says, but the specification doesn't describe what the customer actually needs.
  2. Interactions among subsystems: each piece of the product works well enough on its own, but a customer's task that exercises several pieces together goes wrong.

Fault-based testing, which is local in scope and driven more by the product than by the customer, finds such problems only by chance. (There are ways to organize testing to increase that chance, but still not enough.) So another type of testing is needed.

3. Scenario-Based Test Design

This new type of testing concentrates on what the customer does, not what the product does. It means capturing the tasks (use cases, if you will) the customer has to perform, then using them and their variants as tests. Of course, this design work is best done before you've implemented the product. It's really an offshoot of a careful attempt at "requirements elicitation" (aka knowing your customer). These scenarios will also tend to flush out interaction bugs. They are more complex and more realistic than fault-based tests often are. They tend to exercise multiple subsystems in a single test, exactly because that's what users do. The tests won't find everything, but they will at least cover the higher-visibility interaction bugs.

3.1 An example of finding an incorrect specification

Suppose you were designing scenario-based tests for a text editor. Here's a common scenario:


Fix the Final Draft

Background: it's not unusual to print the "final" draft, read it, and discover some annoying errors that weren't obvious from the on-screen image.

  1. Print the entire document.
  2. Move around in the document, changing certain pages. As each page is changed, it's printed.
  3. Sometimes a series of pages is printed.

This scenario describes two things: a test and some user needs. The user needs are pretty obvious: an easy way to print single pages and a range of pages. As far as testing goes, you know you need to test editing after printing (as well as the reverse). You hope to discover that the printing code causes later editing code to break, that the two chunks of code are not properly independent.
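
To make the sequencing concrete, here's how the scenario might be written as an automated test, in Python. The Editor class and its methods are invented stand-ins for a scriptable editor, not any real product's interface; the point is the print-edit-print ordering and the check that the edit survives the earlier printing.

    # Sketch only: "Editor" and its methods are hypothetical stand-ins for a
    # scriptable text editor; they exist just to give the scenario a shape.
    class Editor:
        def __init__(self, pages):
            self.pages = list(pages)      # page contents, in order
            self.printed = []             # everything sent to the printer

        def print_all(self):
            self.printed.append(list(self.pages))

        def print_page(self, n):
            self.printed.append([self.pages[n]])

        def edit_page(self, n, new_text):
            self.pages[n] = new_text

    def test_fix_the_final_draft():
        editor = Editor(["page one", "page too", "page three"])
        editor.print_all()                  # 1. Print the entire document.
        editor.edit_page(1, "page two")     # 2. Change a page after printing...
        editor.print_page(1)                #    ...and print just that page.
        # The scenario's real question: does printing leave the editing code
        # in a state where later edits are lost or printed stale?
        assert editor.pages[1] == "page two"
        assert editor.printed[-1] == ["page two"]

    test_fix_the_final_draft()

Nothing here is specific to any real editor; what matters is that the test reproduces the user's sequence instead of exercising printing and editing separately.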

No big deal here. So here's another scenario:


Print a New Copy

Background: someone asks you for a fresh copy of the document. You print one.

  1. Open the document.
  2. Print it.
  3. Close the document.

Again, relatively obvious. Except that this document didn't appear out of nowhere. It was created in an earlier task. Does that task affect this one?

In the FrameMaker editor, documents remember how they were last printed. By default, they print the same way next time. After the "Fix the Final Draft" task, just selecting "Print" in the menu and clicking the "Print" button in the dialog box will cause the last corrected page to print again. So, according to FrameMaker, the correct scenario should look like this:


Print a New Copy

  1. Open the document.
  2. Select "Print" in the menu.
  3. Check if you're printing a page range; if so, click to print the entire document.
  4. Click on the Print button.
  5. Close the document.

But that's surely wrong. Customers will often overlook that check. They'll be annoyed when they trot off to the printer and find one page when they wanted 100. Annoyed customers signal specification bugs.

You might miss this dependency in test design, but you'd likely stumble across it in testing (just like a normal user would). As a tester, you would then have to push back against the probable response: "that's the way it's supposed to work".
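
If you did decide to probe the dependency deliberately, the test would just run the two tasks back to back and check what comes out of the printer. A sketch, again in Python, with a made-up document model that remembers its last print settings the way FrameMaker does:

    # Sketch only: a made-up document model that remembers how it was last
    # printed, imitating the FrameMaker behavior described above.
    class Document:
        def __init__(self, page_count):
            self.page_count = page_count
            self.last_range = None        # persists with the document

        def print_dialog(self, page_range=None):
            # Default: reuse whatever range was used last time.
            if page_range is None:
                page_range = self.last_range or (1, self.page_count)
            self.last_range = page_range
            first, last = page_range
            return list(range(first, last + 1))   # pages the printer produces

    def test_print_a_new_copy_after_fixing_the_final_draft():
        doc = Document(page_count=100)
        doc.print_dialog()                     # Fix the Final Draft: whole document...
        doc.print_dialog(page_range=(57, 57))  # ...then reprint one corrected page.
        # Print a New Copy: the user just selects Print and expects a full copy.
        pages = doc.print_dialog()
        assert pages == list(range(1, 101)), "got a stale one-page range"

    # Against this model the assertion fails -- which is the point: the
    # failure is the specification argument in executable form.
    test_print_a_new_copy_after_fixing_the_final_draft()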

4. Effects of Object-Oriented Programming

Object-oriented programming can have two effects. It certainly changes the architecture of the product (its deep structure). It may have some effect on what the customer sees (the surface structure).

4.1 Surface structure

Object-oriented programming arguably encourages a different style of interface. Rather than performing functions, users may be given objects to fool around with in a direct manipulation kind of way. But whatever the interface, the tests are still based on user tasks. Capturing those will continue to involve understanding, watching, and talking with the representative user (and as many non-representative users as are worth considering).

There will surely be some difference in detail. For example, in a conventional system with a "verbish" interface, you might use the list of all commands as a testing checklist. If no test scenario exercises a command, you've perhaps missed some tasks (or the interface has useless commands). In a "nounish" interface, you might use the list of all objects as a testing checklist.

However, remember that a basic principle of testing is that you must trick yourself into seeing the product in a new way. If the product has a direct manipulation interface, you'll test it better if you pretend functions are independent of objects. You'd ask questions like, "Might the user want to use this function - which applies only to the Scanner object - while working with the Printer?" Whatever the interface style, you should use both objects and functions as clues leading to overlooked tasks. (Note: I use the word "clue" carefully. You should not say, "Oh - I don't have any task that uses the 'printer' object", then write a quick and dirty test that prints. You should discover the various ways in which real users would use the printer object to get real work done.)
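
If you want bookkeeping to go with those checklists, it can be as simple as recording which objects and functions each scenario touches and listing the combinations nothing touches. A small Python sketch; the object, function, and scenario names are invented for illustration:

    # Sketch only: invented names, showing checklist bookkeeping, not a tool.
    objects = {"Document", "Printer", "Scanner"}
    functions = {"print", "scan", "configure"}

    # Which (object, function) pairs each captured scenario happens to exercise.
    scenarios = {
        "Fix the Final Draft": {("Document", "print"), ("Printer", "configure")},
        "Print a New Copy": {("Document", "print")},
    }

    covered = set().union(*scenarios.values())

    # An untouched combination is a clue, not a test to dash off: either a
    # real user task was overlooked, or the combination makes no sense.
    for obj in sorted(objects):
        for fn in sorted(functions):
            if (obj, fn) not in covered:
                print(f"No scenario uses {fn!r} with {obj!r} -- overlooked task?")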

4.2 Deep (architectural) structure

Test design based on the surface structure will miss things. User tasks will be overlooked. Important variants that should be tested won't be. Particular subsystem interactions won't be probed.

Looking at the deep structure might reveal those oversights. For example, if Blob of Code A depends on Blob of Code Z, but no test seems to exercise A's use of Z, that's a hint. You may have overlooked a user task.

What is the difference due to OO programming? To answer, look at the ways deep structure can be described. I'll use Booch diagrams as an example; in my admittedly limited experience, the different styles of OO design description are roughly equivalent for my purposes. Among other things, such diagrams show the inheritance structure, which Part 1 already treated as a source of fault-based clues.

For example: routine CALLER's only argument is a reference to some base class. What might happen when CALLER is passed an object of a class derived from that base class? What are the differences in behavior that could affect CALLER?

But you can also use the inheritance structure in scenario-based testing. If some class BackupDestination has subclasses "Floppy" and "ReallyFastStreamingTapeDrive", you should ask what the differences are between floppies and really fast streaming tape drives. How will users use them differently?
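
The same piece of structure yields both kinds of clue. Here's a Python sketch; the classes and their behavioral differences are invented for illustration:

    # Sketch only: invented classes showing how one inheritance relationship
    # feeds both fault-based and scenario-based questions.
    from abc import ABC, abstractmethod

    class BackupDestination(ABC):
        @abstractmethod
        def write(self, data: bytes) -> None: ...

    class Floppy(BackupDestination):
        CAPACITY = 1_440_000                 # bytes per disk

        def __init__(self):
            self.volumes_used = 0

        def write(self, data):
            # Small capacity: the user gets prompted to swap disks, a
            # behavioral difference the base class says nothing about.
            self.volumes_used += -(-len(data) // self.CAPACITY)   # ceiling

    class ReallyFastStreamingTapeDrive(BackupDestination):
        def __init__(self):
            self.bytes_written = 0

        def write(self, data):
            self.bytes_written += len(data)  # effectively unlimited, no swapping

    # CALLER sees only the base class (the fault-based clue from Part 1: which
    # derived behaviors could break it?). The scenario-based clue asks how the
    # users differ: who swaps disks by hand, who starts a backup and goes home?
    def backup(destination: BackupDestination, data: bytes) -> None:
        destination.write(data)

    backup(Floppy(), b"x" * 3_000_000)
    backup(ReallyFastStreamingTapeDrive(), b"x" * 3_000_000)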

5. Summary

Object-oriented programming changes the bookkeeping of test design, not the approach. Bookkeeping changes for scenario-based testing are minor compared to those required for fault-based testing.
