Exploration Through Example

Example-driven development, Agile testing, context-driven testing, Agile programming, Ruby, and other things of interest to Brian Marick

Mon, 01 Dec 2003

Debugging, thinking, logging: how much of each?

Charles Miller quotes two authors on opposite sides of the debate over whether programmer tests eliminate the need for a debugger. I'm on the "I hardly ever use a debugger" side, but perhaps that's only because programmers perk up when I say I don't even know how to invoke the debugger in my favorite language (Ruby). Since it's my job to make programmers perk up, it's not in my interest to be a debugging fiend.

The debate made me think of a third approach, or perhaps a complement to the other two, that doesn't get the press it deserves. Let's step into the Wayback Machine...

It's 1984, the height of the AI boom. Expert systems are all the rage. The company I worked for hatched the idea that what builders of flight simulators (the kind that go inside big domes filled with hydraulics) really wanted was... Lisp. (I love Lisp, but this was not the savviest marketing decision.)

They cast around for someone who knew Lisp. I'd played around with it for a week. That qualified me to be technical lead. Dan was quite a good C programmer, happy working on strange, out-of-the-way projects. Sylvia was a half-time graduate student who knew Fortran and Prolog.

In the end, we produced the best Lisp in the world, if by "best" you mean "quality of the final product divided by how much the team started out knowing about Lisp".

Actually, it wasn't bad. I'm pleased with what we accomplished.

It wasn't really that momentous an accomplishment, though. There was a free Lisp-in-Lisp implementation from Carnegie-Mellon. It ran on a machine called the Perq, whose main feature was user-programmable microcode. CMU had microcoded up a Lisp-machine-like bytecode instruction set, and their compiler produced bytecodes for it to execute. So we got a good start by coding up an interpreter (virtual machine) for the same instruction set. We just used C instead of microcode. I did the infrastructure (garbage collector, etc.) and Sylvia did most of the bytecodes.
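
To give the flavor of what "an interpreter (virtual machine) for the instruction set" means, here's a toy sketch in Ruby. The opcodes and the little stack machine are invented for illustration; the real thing was written in C and executed CMU's Lisp bytecodes.

    # A toy bytecode interpreter: fetch an instruction, dispatch on its opcode.
    # The opcodes here are made up for illustration.
    class ToyVM
      def initialize(code)
        @code = code      # array of [opcode, argument] pairs
        @stack = []
        @pc = 0           # program counter
      end

      def run
        while @pc < @code.length
          op, arg = @code[@pc]
          @pc += 1
          case op
          when :push  then @stack.push(arg)
          when :add   then @stack.push(@stack.pop + @stack.pop)
          when :print then puts @stack.last
          else raise "unknown bytecode #{op.inspect}"
          end
        end
      end
    end

    ToyVM.new([[:push, 1], [:push, 2], [:add], [:print]]).run   # prints 3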

I now get to the point...

Early on, I decided on a slogan for my code: "no bug should be hard to find the second time". Whenever a bug was hard to find, I wrote whatever debug support code would have made that kind of bug easy to find. Over time, the system turned into something that was eminently debuggable. Snap your fingers, and it told you what was wrong.
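
For instance (a hypothetical Ruby sketch, not that project's C code), one hard-to-find garbage-collector bug might justify an on-demand consistency check, so the next corruption announces itself where it happens instead of as a mysterious crash much later:

    # Hypothetical debug-support code: a heap consistency check you can call
    # whenever you suspect something has just gone wrong.
    class Heap
      VALID_TAGS = [:cons, :fixnum, :symbol].freeze

      def initialize
        @cells = []                # each cell is a [tag, value] pair
      end

      def allocate(tag, value)
        @cells << [tag, value]
      end

      # Call after any operation suspected of corrupting the heap.
      def check!
        @cells.each_with_index do |(tag, _value), i|
          unless VALID_TAGS.include?(tag)
            raise "heap corrupted at cell #{i}: bad tag #{tag.inspect}"
          end
        end
      end
    end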

The things I did were very situated: they depended on the bug. But one thing I did was add a lot of logging. By letting the bugs drive where I put in logging statements, I avoided cluttering up the code too much. I remain a big fan of logging, and I'm distressed that the logging you see is so often so useless to anyone but the original author.
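
In today's terms, the habit might look like this (a Ruby sketch using the standard library's Logger; the method and the message are invented):

    require 'logger'

    LOG = Logger.new($stderr)
    LOG.level = Logger::DEBUG

    # A log statement placed exactly where a bug was once hard to track down,
    # rather than sprinkled uniformly through the code.
    def apply_function(fn, args)
      LOG.debug { "applying #{fn.inspect} to #{args.inspect}" }
      fn.call(*args)
    end

    apply_function(lambda { |a, b| a + b }, [3, 4])   # logs the call, returns 7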

Resources:
  • I wrote a set of patterns for ring buffer logging for PLoP 2000. They could be a lot better (and a lot more complete), but they don't seem to be rewriting themselves, and I'm not gonna. (A sketch of the basic idea follows this list.)

  • I also wrote a logging package for Ruby that does all the things I want. I don't think anyone else uses it, alas, due at least in part to its lame installation procedure. People are probably better off with logger (built into 1.8) or log4r (more popular).
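
Roughly, the ring-buffer idea is to keep only the most recent messages in memory and dump them when something goes wrong, so you get detail near the failure without an enormous log file. A simplified Ruby sketch (not the pattern language or the package itself):

    # Keep only the last N messages; print them when something goes wrong.
    class RingBufferLog
      def initialize(capacity = 100)
        @capacity = capacity
        @entries = []
      end

      def log(message)
        @entries << message
        @entries.shift if @entries.length > @capacity   # discard the oldest
      end

      # Dump the recent history, e.g. from a rescue clause or an exit handler.
      def dump(io = $stderr)
        @entries.each { |entry| io.puts(entry) }
      end
    end

    log = RingBufferLog.new(3)
    5.times { |i| log.log("step #{i}") }
    log.dump    # prints only steps 2, 3, and 4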

The use of logging makes debuggers less necessary. Instead of single-stepping to figure out how on earth the program got to a point, you look at the log. If the logging is well-placed, and you have decent logging levels, you don't get mired in detail.
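
"Decent logging levels" just means the detail stays out of the way until you ask for it. With the standard Ruby Logger, for example (the messages are invented):

    require 'logger'

    log = Logger.new($stdout)
    log.level = Logger::INFO                  # day to day: broad strokes only

    log.debug "entered the dispatch loop"     # suppressed at INFO
    log.info  "started scenario 'checkout'"   # shown

    log.level = Logger::DEBUG                 # chasing a bug: turn detail back on
    log.debug "entered the dispatch loop"     # now shown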

Having said that, logging doesn't seem all that useful to me in programmer tests: I don't need to know how the tests got somewhere. It's more useful in acceptance tests, where more is happening before the point of failure. Still, I rarely find myself looking at the log. It's most useful when diagnosing a bug that an automated test didn't find. Such bugs could be found by users or by exploratory testing. (Because exploratory testing is rather free-form, the log can help remind you of what you did when it's time to replicate a bug.)

One logging tip for large systems: I had a great time once doing exploratory testing of a big Java system that had decent logging. I'd dink around with the GUI, but have the scrolling log open in another window. Every so often, something interesting would flash by in the log: "Look! The main event loop just swallowed a NullPointerException!" That would reveal to me that I'd tickled something that had no grossly obvious effect on the external interface. It was then my job to figure out how to make it have a grossly obvious effect.
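
In Ruby terms, the pattern that made this possible looks something like the following (a hypothetical sketch; the actual system was in Java): the event loop catches everything so the application stays up, but it logs what it caught instead of silently discarding it.

    require 'logger'

    # Hypothetical sketch: swallow exceptions so the application keeps running,
    # but log them so a tester watching the log can notice trouble that never
    # shows up in the GUI.
    def event_loop(events, log)
      events.each do |event|
        begin
          event.call
        rescue => e
          log.error "event loop swallowed #{e.class}: #{e.message}"
        end
      end
    end

    event_loop([lambda { 1 + 1 },
                lambda { raise "boom" },
                lambda { :still_running }],
               Logger.new($stderr))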

Posted at 21:18 in category /misc
