In Part 1, I described one type of test design that targets plausible faults. Testing techniques search the code or specification for particular clues of interest, then describe how those clues should be tested. The clues in an object-oriented program are the same as in a conventional program. Divining what to test requires either more work or more bookkeeping, because there are more places where relevant information might live. But the approach is essentially the same.
I am posting these notes so that people can tell me where I'm mistaken, oversimplifying, or whatever. Most of my experience is with fault-based testing, so problems are more likely here.
The concluding part (perhaps parts) will talk about test implementation and test evaluation.
Part 1's fault-based testing misses two main types of bugs:

1. Specification bugs, where the product does what the programmer intended but not what the customer actually needs.
2. Interaction bugs, where separately correct chunks of code go wrong in combination.

Fault-based testing, which is local in scope and driven more by the product than the customer, finds such problems only by chance. (There are ways to organize testing to increase that chance, but still not enough.) So another type of testing is needed.
This new type of testing concentrates on what the customer does, not what the product does. It means capturing the tasks (use cases, if you will) the customer has to perform, then using them and their variants as tests. Of course, this design work is best done before you've implemented the product. It's really an offshoot of a careful attempt at "requirements elicitation" (aka knowing your customer). These scenarios will also tend to flush out interaction bugs. They are more complex and more realistic than fault-based tests often are. They tend to exercise multiple subsystems in a single test, exactly because that's what users do. The tests won't find everything, but they will at least cover the higher visibility interaction bugs.
Suppose you were designing scenario-based tests for a text editor. Here's a common scenario, call it "Fix the Final Draft":
Background: it's not unusual to print the "final" draft, read it, and discover some annoying errors that weren't obvious from the on-screen image. You fix each error, then print just the corrected page to check the fix.
This scenario describes two things: a test and some user needs. The user needs are pretty obvious: an easy way to print single pages and a range of pages. As far as testing goes, you know you need to test editing after printing (as well as the reverse). You hope to discover that the printing code causes later editing code to break, that the two chunks of code are not properly independent.
No big deal here. So here's another scenario:
Background: someone asks you for a fresh copy of the document. You print one.
Again, relatively obvious. Except that this document didn't appear out of nowhere. It was created in an earlier task. Does that task affect this one?
In the FrameMaker editor, documents remember how they were last printed. By default, they print the same way next time. After the "Fix the Final Draft" task, just selecting "Print" in the menu and clicking the "Print" button in the dialog box will cause the last corrected page to print again, not the whole document. So, according to FrameMaker, the correct scenario would add an extra step: check the page range in the Print dialog before printing.
But that's surely wrong. Customers will often overlook that check. They'll be annoyed when they trot off to the printer and find one page when they wanted 100. Annoyed customers signal specification bugs.
You might miss this dependency in test design, but you'd likely stumble across it in testing (just like a normal user would). As a tester, you would then have to push back against the probable response: "that's the way it's supposed to work".
Object-oriented programming can have two effects. It certainly changes the architecture of the product (its deep structure). It may have some effect on what the customer sees (the surface structure).
Object-oriented programming arguably encourages a different style of interface. Rather than performing functions, users may be given objects to fool around with in a direct manipulation kind of way. But whatever the interface, the tests are still based on user tasks. Capturing those will continue to involve understanding, watching, and talking with the representative user (and as many non-representative users as are worth considering).
There will surely be some difference in detail. For example, in a conventional system with a "verbish" interface, you might use the list of all commands as a testing checklist. If you had no test scenarios that exercise a command, you perhaps missed some tasks (or the interface has useless commands). In a "nounish" interface, you might use the list of all objects as a testing checklist.
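The checklist cross-check can be sketched as a simple set difference. The command names and scenarios below are hypothetical illustrations, not from any real editor:

```python
# A minimal sketch of a "verbish" testing checklist.
# Command and scenario names are hypothetical.
commands = {"open", "print", "spell-check", "revert"}

# Which commands each test scenario exercises:
scenarios = {
    "fix the final draft": {"open", "print"},
    "fresh copy": {"open", "print"},
}

exercised = set().union(*scenarios.values())
unexercised = commands - exercised
# Commands no scenario touches hint at overlooked tasks
# (or at useless commands).
print(sorted(unexercised))  # prints ['revert', 'spell-check']
```

A "nounish" checklist works the same way, with the set of objects in place of the set of commands.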
However, remember that a basic principle of testing is that you must trick yourself into seeing the product in a new way. If the product has a direct manipulation interface, you'll test it better if you pretend functions are independent of objects. You'd ask questions like, "Might the user want to use this function - which applies only to the Scanner object - while working with the Printer?" Whatever the interface style, you should use both objects and functions as clues leading to overlooked tasks. (Note: I use the word "clue" carefully. You should not say, "Oh - I don't have any task that uses the 'printer' object", then write a quick and dirty test that prints. You should discover the various ways in which real users would use the printer object to get real work done.)
Test design based on the surface structure will miss things. User tasks will be overlooked. Important variants that should be tested won't be. Particular subsystem interactions won't be probed.
Looking at the deep structure might reveal those oversights. For example, if Blob of Code A depends on Blob of Code Z, but no test seems to exercise A's use of Z, that's a hint. You may have overlooked a user task.
What is the difference due to OO programming? To answer, look at the ways deep structure can be described. I'll use Booch diagrams as an example. In my admittedly limited experience, the different styles of OO design description seem roughly the same for my purposes.
Suppose the class diagram shows that routine CALLER's only argument is a reference to some base class. What might happen when CALLER is passed an object of a class derived from that base? What are the differences in derived-class behavior that could affect CALLER?
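As a sketch of that question, assuming Python and class names of my own invention:

```python
class Document:                 # hypothetical base class
    def page_count(self):
        return 1

class EmptyDocument(Document):  # derived class with different behavior
    def page_count(self):
        return 0

def caller(doc):
    # Plays the role of CALLER: written against the base class, but a
    # derived class's differences (here, zero pages) change its answer.
    pages = doc.page_count()
    return "empty" if pages == 0 else f"{pages} page(s)"

print(caller(Document()))       # prints "1 page(s)"
print(caller(EmptyDocument()))  # prints "empty"
```

The fault-based question is whether CALLER behaves sensibly for each derived class it might be handed, not just the base class the programmer had in mind.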
But you can also use the inheritance structure in scenario-based testing. If some class BackupDestination has subclasses "Floppy" and "ReallyFastStreamingTapeDrive", you should ask what the differences are between floppies and really fast streaming tape drives. How will users use them differently?
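One way those differences might surface in a scenario, sketched with illustrative capacities that are guesses, not real specifications:

```python
class BackupDestination:        # base class from the text
    capacity_mb = None          # unlimited by default
    def will_fit(self, size_mb):
        return self.capacity_mb is None or size_mb <= self.capacity_mb

class Floppy(BackupDestination):
    capacity_mb = 1.44          # illustrative: users must swap disks

class ReallyFastStreamingTapeDrive(BackupDestination):
    capacity_mb = 4000          # illustrative: one tape holds it all

# Scenario-based question: a 100 MB backup forces the "split across
# several disks" task on a Floppy, but not on a tape drive.
print(Floppy().will_fit(100))                        # prints False
print(ReallyFastStreamingTapeDrive().will_fit(100))  # prints True
```

The subclasses differ not just in code but in the user tasks they imply, so they call for different scenarios.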
Object-oriented programming changes the bookkeeping of test design, not the approach. Bookkeeping changes for scenario-based testing are minor compared to those required for fault-based testing.