Archive for June, 2010

Release 0.1.1 of Midje, a Clojure mocking tool

The devoted reader will recall that I earlier sketched a “little language” for mocking Clojure functions. I’ve finished implementing a usable (perhaps even pleasant) subset of it. I invite you to take a look. In keeping with my whole “work with ease” thing, I’ve made it dirt simple to try it out: one click and two shell commands. You don’t even have to have Clojure installed.

A small note about collaborators

It’s often the case that a class has one or more collaborators that are used by many of its methods. When you want to override one for testing, you can either pass the test double into the constructor (probably as a default argument) or you can smash the double into an instance variable you happen to know points to the collaborator.

I used to use the constructor approach, but I increasingly find myself using the Hulk! Smash! approach. It looks like this…

The object-to-be-tested names its collaborators and gives them the values they’ll use in production:
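
A sketch, with invented names (a Courier that depends on a Distributor and a log):

    class Courier
      def initialize
        collaborators_start_as
      end

      def collaborators_start_as
        @distributor = Distributor.new
        @log = DefaultLog
      end
    end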

I do this because I think of the collaborators as a static, declaration-like property of the class. (If I were ambitious, I’d convert collaborators_start_as into a class-level declaration like this:
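
…perhaps along these lines, where collaborators would be a class method that arranges for the instance variables to be set at construction time (the method doesn’t exist; this is the ambition):

    class Courier
      collaborators :distributor => proc { Distributor.new },
                    :log         => proc { DefaultLog }
    end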

I haven’t been that ambitious yet, as it turns out.)

Each test declares which collaborators it overrides:
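
Something like this (flexmock is my mocking library of choice here):

    require 'test/unit'
    require 'flexmock/test_unit'

    class CourierTests < Test::Unit::TestCase
      def setup
        @courier = Courier.new
        # Hulk! Smash! the double into the instance variable:
        @log = flexmock("log")
        @courier.instance_variable_set("@log", @log)
      end
    end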

Note that (in Ruby) I can even mock out classes, so I don’t have to go through contortions to pass in instances that an outside caller really would have no particular interest in:
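
In sketch form (fetch_thing and handle are invented; the point is that the production code stores the class ThingSource itself in an instance variable, so the test can smash in a class-like fake):

    # Production code:   @thing_source = ThingSource
    def test_fetching_uses_the_thing_source
      fake_class = flexmock(:new => flexmock(:handle => "a fake thing"))
      @courier.instance_variable_set("@thing_source", fake_class)
      assert_equal "a fake thing", @courier.fetch_thing
    end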

With several of the mocking packages available to me, I wouldn’t even have to go through the indirection of putting the class ThingSource into an instance variable. I could do this:
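
Roughly:

    def test_fetching_uses_the_thing_source
      flexmock(ThingSource).should_receive(:new).
        and_return(flexmock(:handle => "a fake thing"))
      assert_equal "a fake thing", @courier.fetch_thing
    end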

I’m not bold enough to do that yet.

That $20 Billion BP deal

Various Republicans have characterized the recently announced $20 billion deal between the government and BP as a “shakedown” for a slush fund, one that Obama doesn’t have the constitutional authority to set up. From what I’ve read, it strikes me as a pretty straightforward deal, not wildly different from the way I used an escrow company as an independent party when selling testing.com. Or from that unpleasant incident with my neighbor’s grill a few years back…

Me: You careless oaf! You burned up my lawn!

Him: I admit it. I take full responsibility. Sorry.

Me: You’re going to pay for this damage, you know. All of it.

Him: I will do that.

Me: I bet that big old maple that overhangs the house is going to have to come out. That’s going to cost you a bundle.

Him: Oh now, it doesn’t look that bad.

Me: I had a tree hit by lightning once. Didn’t look much worse than that, but carpenter ants got into it and the tree had to come out. The guys who took it out told me I should have done it right away, that the tree was worse off than it looked. I’m not waiting this time.

Him: Uh… Don’t know all that much about trees.

Me: Neither do I. Look. Let’s simplify this, keep at least some of it out of court. Let’s get someone with experience at judging these kinds of damages and have him decide, shrub by shrub, tree by tree, what you owe.

Him: Like the independent mediator written into lots of contracts.

Me: Right.

Some time passes as we dicker over who we both trust.

Me: Now, I’m pretty sure this is going to cost you over $2000…

Him: Aw, that seems high.

Me: That maple is four stories tall!

Him: Hey, I still think it’ll be alright. But OK—so long as we get that mediator guy, the one I trust.

Me: And I want you to put the money in escrow.

Him: Aw, c’mon. You know I’m good for it.

Me: Yeah, well, my trust in you is not super-high right now.

Him: Well…

Me: Look. Back out of the deal if you want. You’re going to pay one way or the other. You can do it quickly or drag it out. Your choice.

Him: OK, OK: it’s a deal.

TDD in Clojure, part 3 (one wafer-thin function; conclusions)

Part 1
Part 2

All that remains is to add the locations of bordering cells to a given sequence of locations. A small wrinkle is that a border location may be next to more than one of the input locations, so duplicates need to be prevented. I could write the test this way:
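
Something like this sketch, with a two-location input and every expected border location spelled out by hand:

    (example
     (add-border-to [[5 5] [5 6]])
     => [[4 4] [5 4] [6 4]
         [4 5] [5 5] [6 5]
         [4 6] [5 6] [6 6]
         [4 7] [5 7] [6 7]])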

I don’t like that. When the test fails, it’ll likely be hard to discover how the expected and actual values differ. And I bet that a failure would more likely be due to a typo in the expected value than to an actual bug. The test is just awkward.

I like this version better:
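
In sketch form, using the already-defined neighbors to state the relationship (sets make the comparison order-independent):

    (example
     (set (add-border-to [[5 5] [5 6]]))
     => (set (concat [[5 5] [5 6]]
                     (neighbors [5 5])
                     (neighbors [5 6]))))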

It is a clearer statement of the relationship between two concepts: a location’s neighborhood and the border of a set of locations. More pragmatically, it’s less typing. (When I first started coding in this style, it surprised me how much test setup clutter went away. I had much less need for “factories” or “fixtures” or “object mothers”.)

Given either test, the code is straightforward:
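
The obvious shape:

    (defn add-border-to [locations]
      (distinct (concat locations (mapcat neighbors locations))))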

Onward

No matter how satisfying the individually-tested pieces, the whole has to work, which is why everything ends by running at least one end-to-end test. A test like the one we started with:
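
Sketched (the expected value is the horizontal blinker):

    (example
     (next-world *vertical-blinker*) => [[0 1] [1 1] [2 1]])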

Because I’m making up my test notation as I go, I’ve been running all the tests manually in the REPL. Now I can run the whole file:

If you look carefully, you’ll see that the test would fail because the locations are in the wrong order. But that’s a quick fix:
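
One way is to compare sets instead of sequences (reordering the expected value would do just as well):

    (example
     (set (next-world *vertical-blinker*)) => #{[0 1] [1 1] [2 1]})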

Done. Ship it!

Development order - Test-down or REPL-up?

It’s often said that the Lisps are bottom-up languages, that you test out expressions in the REPL, discover good functions, and compose them into programs. A lot of people do work that way. A lot of people who use TDD to write object-oriented code also work that way: when implementing a new feature, they start at low-level objects, add whatever new code the top-level feature seems to demand of them, then use those augmented objects in the testing of next-higher-level objects.

For I guess about a year now, I’ve been experimenting with being strictly top-down in some projects. I find that leads to less churn. Too often, when I go bottom-up, I end up discovering that those low-level changes are not in fact what the feature needs, so I have to revisit and redo what I did.

I get less disrupted by rework when I go from the top down (or from user interface in). It’s not that I don’t blunder—we saw one of those in the previous installment—but those blunders seem easier to recover from.

That’s not to say that I don’t use the REPL. We saw that a little bit in this program, when I was writing neighbors. It’s perfectly sensible to do even more in the REPL. I think of the REPL as a handy tool for what XP calls a spike solution and the Pragmatic Programmers call tracer bullets. When I’m uncertain what to do next, the REPL is a tool to let me try out possibilities. So I might be stuck in a certain function and go to the REPL to see how it feels to build up parts of what might lie under it. After I’m more confident, I can continue on with the original function, test-driven, reusing REPL snippets when they seem useful.

I don’t claim everyone should work that way. I do claim it’s a valid style that you’d be wise to try.

Notation

I’m something of an obsessive about test notation, and I’ve been endlessly fiddling with a Clojure mock notation. I implemented its first version as a facade on top of clojure.contrib.mock. As I experimented, though, I found that keeping my facade up to date with my notational variants slowed me down too much, so I put the code aside until the notation settled down.

I’m pretty happy with what you’ve seen here. Are you? If so, I may start on another mock package. I’ve got the most important parts: a name and a sketch of a logo. (“Midje” and someone flying safely between the sun [of abstraction without examples] and the sea [of overwhelming detail].)

Tests and code together

You can see the completed program here. It mixes up tests and code. I’ve tried that on-and-off over the years and always reverted to separate test files. This time, it’s seemed to work better. I probably want an Emacs keystroke that lets me hide all tests, though. I’d also want alternate definitions of the test macros so that I can compile them out of the production system.

What next?

I’ll write a web app in this style, using Compojure.

TDD in Clojure, part 2 (in which I recover fairly gracefully from a stupid decision)

Part 1
Part 3

I ended Part 1 saying that my next step would be to implement a function that counts the number of living neighbors a cell has. Given that we’re already pretending (through stubbing) that a living? function exists, living-neighbor-count is pretty trivial if we also pretend we’ve got a neighbors function:
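
The obvious one-liner:

    (defn living-neighbor-count [cell]
      (count (filter living? (neighbors cell))))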

Following my “mapping, like accessors, is too simple to test” guideline, I almost didn’t write a test. But what the heck:
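
A sketch in the notation from Part 1 (metavariable names invented):

    (example
     (living-neighbor-count ...cell...) => 2
     (provided
       (neighbors ...cell...) => [...alive1... ...alive2... ...corpse...]
       (living? ...alive1...) => true
       (living? ...alive2...) => true
       (living? ...corpse...) => false))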

Once the test passes, we need to write neighbors. To implement it, we’re going to have to take cells apart (to get x and y coordinates) and put them back together (to create neighbors). So I don’t see any point to using stubs and dummy variables like ...cell... in this test:

Boldly, I will here use one test to define both cell-at and neighbors (as well as the test helper have-coordinates that checks a list of cells against a list of coordinates).
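
A sketch (the layout mimics the neighborhood):

    (example
     (neighbors (cell-at 1 1))
     => (have-coordinates [[0 0] [1 0] [2 0]
                           [0 1]       [2 1]
                           [0 2] [1 2] [2 2]]))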

(If I were more sensitive to that small voice in my head that warns I’m going astray, I would have heard something around now, but I ignored it. So we will too.)

Enter the REPL

My thought about how to implement neighbors has three steps, so I’ll try them out in the REPL. First, I’ll make (x,y) pairs to add and subtract from the original cell’s coordinates:
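
In the REPL (I bind the result to product so I can reuse it):

    user=> (def product (for [x [-1 0 1] y [-1 0 1]] [x y]))
    #'user/product
    user=> product
    ([-1 -1] [-1 0] [-1 1] [0 -1] [0 0] [0 1] [1 -1] [1 0] [1 1])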

That’s good, except (0, 0) shouldn’t be in there. (A cell can’t be its own neighbor.) So I need to delete that:
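
Like so:

    user=> (remove #{[0 0]} product)
    ([-1 -1] [-1 0] [-1 1] [0 -1] [0 1] [1 -1] [1 0] [1 1])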

(remove #{[0 0]} product) is a Clojure idiom. remove returns its second (sequence) argument, omitting any element that the first argument (a function) returns truthy for. #{x} is the set containing x. In Clojure, sets act as functions that return something truthy iff their single argument is in the set. That is:
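
A quick REPL demonstration:

    user=> (#{[0 0]} [0 0])
    [0 0]
    user=> (#{[0 0]} [3 4])
    nil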

Finally, I need a function that shifts a cell by an offset. For the REPL, I’ll pretend the cell is just an [x y] vector. (We have yet to define what it really is.)
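
A throwaway version (the name shift is mine):

    user=> (defn shift [cell offset] (vec (map + cell offset)))
    #'user/shift
    user=> (shift [1 1] [-1 1])
    [0 2]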

I can build neighbors from what I’ve tried out. To make the test pass, I’ll continue to use vectors for cells, hiding them behind a simple functional interface of cell-at, x, and y.
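
A sketch of the result (cell-at, x, and y are the functional interface; cells are secretly vectors):

    (defn cell-at [x y] [x y])
    (defn x [cell] (first cell))
    (defn y [cell] (second cell))

    (defn neighbors [cell]
      (let [offsets (remove #{[0 0]}
                            (for [dx [-1 0 1] dy [-1 0 1]] [dx dy]))]
        (map (fn [[dx dy]] (cell-at (+ (x cell) dx)
                                    (+ (y cell) dy)))
             offsets)))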

The concrete representation of the cell — and disaster

Here are the functions as yet undefined:

There’s no more escaping it. I’m going to have to decide what kind of thing border produces. That thing has to be a sequence for tick to map over:
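
So far that constrains only tick’s shell, something like:

    (defn tick [bordered-world]
      (map successor bordered-world))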

border's result is also stored in world where living? will use it to decide whether a given cell is alive or dead.

My first thought was that I could use the set idiom I used above—the bordered world could just be the set of all living coordinates. Sneakily, any location not in the set would represent a dead cell. That would be great for implementing living?, but it wouldn’t work for tick, which has to process not only living cells, but also the dead cells that make up the border.

So my fallback was for border to produce a map, something like this:
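
That is, a map from location to liveness, roughly:

    {[0 0] :dead,  [0 1] :living, [0 2] :dead,
     [1 0] :dead,  [1 1] :living, [1 2] :dead, ...}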

Maps are seqable, so you can map over them. But I don’t think I’ve ever actually tried it. What happens?…
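
A REPL check:

    user=> (map identity {[0 0] :dead, [0 1] :living})
    ([[0 0] :dead] [[0 1] :living])

Each element comes out as a key-value pair.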

OH GREAT. If I go down this route, we’ll have three different ways of representing cells:

  • as the original location in inputs like *vertical-blinker*: [0 1]
  • as part of a living/dead map: {... [0 1] :dead ...}
  • as a living/dead vector: [ [0 1] :dead ]

That’s intolerable. And yes, I bet at least half of my two readers thought I was mistaken not to think about data structures at the very beginning. However, my strategy with Clojure TDD has been to put off thinking about data structure as long as I can, and I’ve been surprised and pleased by how often I ended up with simpler data than it seemed I would. I’ve found that, given the use of globally-available immutable “background” data, much of what might have been explicit data structure (vectors of maps of vectors of…) ends up in the implicit structure of the computation. More about that, though, will have to wait for another post.

A recovery plan

The problem is here:

When I wrote that, I remember that the still small voice of conscience objected to the way I was both stashing the bordered-world away as background and simultaneously picking it apart with map. That just felt weird, but I argued myself into thinking it was harmless. It was not.

Really, since my whole program takes input [x y] pairs (such as *vertical-blinker*) and turns them into a different set of [x y] pairs, most of my work ought to be done with those pairs. I should be thinking about locations of cells, not cells themselves. In that way of thinking, border shouldn’t produce “cells”. It should take locations of living cells and produce locations that point to both living cells and adjacent dead cells.

Further, I shouldn’t repeat those locations in a world function. Instead, I need something that can answer questions about cells, given their locations. It should be a… (I’m bad with names)… an oracle about cells. I first imagined this:
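
Roughly:

    (defn next-world [world]
      (using-cell-oracles-from world
        (-> world add-border-to tick unborder)))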

using-cell-oracles-from should produce any wise and oracular functions we need. So far, that’s just living?.

I realized something more. Locations are flowing into the pipeline, locations are flowing out, and in this version, locations won’t be transformed into cells anywhere within the pipeline. That makes unborder, which was originally supposed to convert a mixture of living and dead cells into only living locations, seem kind of stupid. If tick produces only living locations, unborder can go away. (The name unborder always bugged me, because it didn’t really describe what the function would have to do. Once again, I should have paid attention.)

That leads to this top-level function:
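
With unborder gone:

    (defn next-world [world]
      (using-cell-oracles-from world
        (-> world add-border-to tick)))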

That wasn’t so bad…

As it turns out, changing my mind about such a fundamental decision was easy.

What did I have to do to the code? I had to write using-cell-oracles-from. Here’s a test.
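
A sketch (the exact checkers may have differed):

    (example
     (using-cell-oracles-from [[1 1]]
       (living? [1 1])) => truthy
     (using-cell-oracles-from [[1 1]]
       (living? [0 0])) => falsey)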

I won’t show the code that passes this test—it’s a somewhat grotty macro (but a simple transformation of the earlier against-background). You can see it in the complete source for this post.

I did a quick global-replace of “cell” with “location” and tweaked a couple of the resulting names. Although both you and I know that locations are just pairs, I retained the functions make-location (formerly cell-at), x, and y to keep the code insulated from the potential of another change of mind.

I had to convert the successor function to dead-in-next-generation?. That was pretty simple. I had to change two lines in the test. Here’s one:
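
In sketch form, the kind of change involved:

    ;; formerly:
    ;;   (successor ...cell...) =means=> (killed ...cell...)
    ;; now:
    (dead-in-next-generation? ...location...) => truthy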

To make that test pass, I had to rewrite successor. It used to be this:
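
In rough form (killed and vivified existed only as mocks):

    (defn successor [cell]
      (cond (not (living? cell))
            (if (= 3 (living-neighbor-count cell)) (vivified cell) cell)

            (#{2 3} (living-neighbor-count cell))
            cell

            :else
            (killed cell)))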

Now it’s this:
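
Something like:

    (defn dead-in-next-generation? [location]
      (if (living? location)
        (not (#{2 3} (living-neighbor-count location)))
        (not (= 3 (living-neighbor-count location)))))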

That was just a matter of inverting the logic and deleting killed and vivified. (Before I ever got around to writing them!)

The ease of this change makes me happy. Even though I blundered at the very beginning of my design, the way stub-heavy TDD lets me defer decisions—and forces me to encapsulate them so that I have something to stub—made the blunder a not-catastrophe. I wish I could say that I blundered deliberately to demonstrate that property of this style of TDD, but that would be a lie.

Enough for today

Only one function remains: add-border-to. That’ll be pretty easy, but this post is already too long. The next one will finish up the implementation and add whatever grand summary I can come up with.

Effective tax rates over the years

One of my pet peeves is how people get all obsessed with marginal tax rates when no one actually pays them. For more-or-less random reasons, I decided to look at effective rates over the years. After all, what with all the yelling and screaming about taxes over the past thirty years, you’d think they’d been wildly gyrating. Not so:

I picked data for the top quintile of people, figuring that you, dear reader, were most likely to fall into that category.

Now, changes in tax rates aren’t the only way people’s incomes change. So I decided to plot three lines:

  • The after-tax income of the prototypical top-quintiler.

  • The after-tax income assuming that, starting with 1979, effective tax rates never increased. That is, every decline to a new low was allowed, but not the reverse. So the effective rate remained at its 1986 low of 23.8% until 2003 (the last year for which I could find data).

  • The after-tax income assuming the reverse: that only increases happened. So the rate remained at 27.5% until the 1995 increase to 27.8% and got pegged at 28% the next year.

The result:

When it comes to taxes, the difference between a Reagan and a Clinton is just not that huge. Choosing who to vote for based on tax zealotry is probably silly. Lots of other things the government does have a larger effect on your income and your children’s.

I could be missing something.

TDD in Clojure: a sketch (part 1)

Part 2

I continue to use little experiments to help me think through TDD in Clojure. (I plan to begin a realistic experiment soon.) Right now, I’m mainly focused on three questions:

  • What would mocking or stubbing mean in a strict(ish) functional language?

  • What’d be a good mocking notation for Clojure?

  • How do you balance the outside-in style associated with mocks and the bottom-up style that the REPL (interpreter) encourages?

Here’s an example from Conway’s Game of Life. It begins with an implementation suggestion from Paul Blair and Michael Nicholaides at the Philly Code Retreat. Instead of thinking of the board as a two-dimensional array of cells, with some of them dead and some alive, think instead only of living cells, each of which knows its coordinates. Here’s an example that shows how “blinkers” blink from generation to generation.
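
A sketch of that example (the defs and layout are approximate):

    (def *vertical-blinker*   [[1 0] [1 1] [1 2]])
    (def *horizontal-blinker* [[0 1] [1 1] [2 1]])

    (example
     (next-world *vertical-blinker*)   => *horizontal-blinker*
     (next-world *horizontal-blinker*) => *vertical-blinker*)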

A couple of things have happened here:

  • This is my notation for a straightforward non-stubbing test. The value on the left is executed and it’s compared (for equality) to the value on the right.

  • I’ve started coding outside-in, and I’ve named the first function I need: next-world.

The Blair/Nicholaides approach advances the “world” to the next generation by (conceptually) adding dead cells around the edge of all the living cells, running the normal life rules that govern how cells change because of their neighbors, and then throwing away all the cells that end up dead. In other words:
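
In code (a sketch; pending’s exact call syntax may have differed):

    (pending border tick unborder)

    (defn next-world [world]
      (-> world border tick unborder))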

  • The pending bit is just there because (sadly) Clojure makes you declare functions before mentioning them. pending just creates functions that print that they’ve not yet been implemented.

  • The rest of the code flows the world argument through a pipeline of three functions. If you’re not familiar with the -> macro, the result is the same as the expanded version shown just below this list.

    I don’t feel the need to test this code now because it’s really declarative—it says what it means to produce a next world under this approach. (It will be tested in the very end by the “integration test” that shows a blinker working.)
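
The expansion:

    (defn next-world [world]
      (unborder (tick (border world))))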

I can now implement any of the three new functions. I’ll pick tick because it seems to be the heart of the matter. Here’s a first implementation:
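
A sketch of its shape (a stubbing test plus a one-line definition):

    (example
     (tick [...cell...]) => [...new-cell...]
     (provided
       (successor ...cell...) => ...new-cell...))

    (defn tick [cells]
      (map successor cells))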

There are two odd things going on here.

First, stubbing function calls.

In object-oriented languages, I think of mock-driven-design as a way of teasing out collaborators for the object I’m building. I push responsibilities for work onto objects that I’ll implement later. Mocking lets me defer the implementation of those objects until I’m ready, and creating some examples of the API teaches me the (implicit) specification for the new object.

I’ve found that with pure functional programs that don’t modify state, it makes more sense to think of a function like (f 2) => 4 as a fact. What I’m doing as I test-drive a function is describing how facts about its inputs and outputs depend on other facts, in an almost Prolog-like way. For example, consider this code:
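
A sketch (whether h takes the cell doesn’t matter for the point; I’ve written it as if it does):

    (example
     (f ...cell...) => 10
     (provided
       (g ...cell...) => true
       (h ...cell...) => 2))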

That says that, for any cell you care to provide, f of that cell will be 10, provided g of that cell is true and h is 2. If either of those latter two facts doesn’t apply to the cell, I’m not saying what f’s value is.

I use the funny ...cell... notation in the way that mathematicians use n to talk about any integer. (They call that universal quantification.) I don’t want to create a particular cell because I might need to specify properties that have nothing to do with the function I’m working on. This notation says that nothing about the cell is relevant except for what comes after the provided.

Here’s one way to write a Life rule in this notation:
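
Sketching the loneliness rule (a living cell with one living neighbor dies):

    (example
     (living? (successor ...cell...)) => falsey
     (provided
       (living? ...cell...) => true
       (living-neighbor-count ...cell...) => 1))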

The falsey bit in the first line is because Clojure has two distinct values that can mean “false”. falsey is a function that takes the result of the left-hand side and fails the test if that result is anything other than one of the two false values. I’m using it because I don’t want to overspecify living?. There’s no reason to care which of the two “false” values it returns.

There’s a problem with this test, though. Remember what I said above: the left-hand side gets evaluated and handed to falsey. That means living? has to have a definition—which means I’d have to settle on how the code knows whether a cell is alive or dead. I like doing one thing at a time and putting off decisions as long as I can, and right now I’d rather be focused on successor instead of cell representations.

Here’s a way to defer that decision:
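
Something like:

    (example
     (successor ...cell...) =means=> (killed ...cell...)
     (provided
       (living? ...cell...) => true
       (living-neighbor-count ...cell...) => 1))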

Here I’m saying something subtly different than before. I’m saying that the result of successor is specifically that cell produced by calling killed on the original cell. The =means=> notation tells the framework to create a mock instead of evaluating the right-hand side for its value. In a more familiar mocking syntax (for Ruby), the whole test is equivalent to:
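
A rough flexmock rendering (the Life module and test name are inventions for illustration):

    def test_a_living_cell_with_one_living_neighbor_dies
      cell = flexmock("cell")
      killed_cell = flexmock("the cell, killed")
      flexmock(Life).should_receive(:living?).with(cell).and_return(true)
      flexmock(Life).should_receive(:living_neighbor_count).with(cell).and_return(1)
      flexmock(Life).should_receive(:killed).with(cell).once.and_return(killed_cell)
      assert_equal killed_cell, Life.successor(cell)
    end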

OK. The next figure gives the whole set of Life rules, expressed as executable tests. (Well, executable as soon as I implement the testing framework.) Notice that I called the outer wrapper know (a fact) instead of example. know seems more appropriate for rules. The two forms mean the same thing.
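
A reconstruction of the figure’s shape (loneliness, survival, overcrowding, birth):

    (know
     (successor ...live-cell...) =means=> (killed ...live-cell...)
     (provided
       (living? ...live-cell...) => true
       (living-neighbor-count ...live-cell...) => [0,,,1]))

    (know
     (successor ...live-cell...) => ...live-cell...
     (provided
       (living? ...live-cell...) => true
       (living-neighbor-count ...live-cell...) => [2,,,3]))

    (know
     (successor ...live-cell...) =means=> (killed ...live-cell...)
     (provided
       (living? ...live-cell...) => true
       (living-neighbor-count ...live-cell...) => [4,,,8]))

    (know
     (successor ...dead-cell...) =means=> (vivified ...dead-cell...)
     (provided
       (living? ...dead-cell...) => false
       (living-neighbor-count ...dead-cell...) => 3))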

Notice also that I implemented a notation for saying “run this test for each value in a sequence”. The use of commas, as in [4,,,8], indicates that—conceptually—the fact is true for all values four through eight. Only the ones listed are actually tried. (Commas count as whitespace in Clojure.)

This isn’t the tersest possible format—a table would be better—but it’ll do. I think it’s reasonably readable. Do you?

Here, for reference, is code that passes the test:

We now have an expanded choice of functions to write:
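
Reconstructed, grouped the way the next sentence assumes:

    (pending border unborder
             living? living-neighbor-count killed vivified)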

I could go breadth-first—with border and unborder—or go depth-first with one of the functions on the second line. In this particular case, I’d rather go depth first. I’ve avoided deciding on a representation, so I don’t know yet what border should do.

If this installment meets your approval, I’ll add another one that begins work on—oh—probably living-neighbor-count is the most complicated, so it’s a good one to chip away at.

Why free market enthusiasts should love trial lawyers

[I am not an expert in these fields.]

Most people who self-identify as enthusiasts of the free market loathe trial lawyers. I believe they do so out of tribal loyalties, not from reason. In this essay, I’ll only address the “not from reason” part.

Some people are faith-based free-marketeers. They believe that in our current economic organization the market would make the correct decisions in the absence of regulation. I distinguish these people from those who believe that, given a different economic organization, the market would make the correct decision in the absence of regulation. The difference between the two is that the faith-based free-marketeers pretend that externalities do not exist.

Externalities are cases where part of the cost of a good or service is not paid by participants in a market transaction.

Example: If your factory is upstream of me, and you dump sewage in the river I get my drinking water from, you damage me but (in our society minus government) I have no recourse. Your customers get cheaper goods than they should.

Example (a somewhat famous one): You run a railroad on your property. Sparks from your trains ignite my wheat field. Who should suffer? How much?

Example: As a fisherman, I gain a direct benefit from overfishing; the cost is spread amongst all other fishermen, and there is no way for them to get compensation from me.

The free-marketeers I’m concerned with solve this problem by extending property rights to everything. I have property rights to the water I drink from, so I can charge you for the right to pollute. In a pure market situation, it makes as much sense for me to bill you for my burned wheat as it does to force you to install spark arrestors. If fish are proportionately owned by fishermen, my overfishing is in effect stealing someone else’s property.

So, in what’s sometimes called pure market anarchism, the BP Gulf oil spill is handled by the owners of the gulf and adjacent lands demanding compensation from BP. BP, as a rational economic entity, will have already adjusted its operations so that its gain from drilling in the gulf is more than the total of what it will pay owners in a spill. (Or, equivalently, BP buys the appropriate amount of insurance.)

However, there’s a problem even in the case of perfect information and friction-free litigation: How does BP know how much to adjust its operations? That must be done by understanding the cost (and benefit) of previous spills. But how is the cost arrived at? By what was previously paid out. (Yesterday’s weather, for you Extreme Programmers out there.) But BP (or its insurance company) and any patch-of-Gulf-owner are likely to disagree about the degree of damage and thus the amount of compensation due. For any given spill, its cost is disputed. But a single number is required. How can this be handled?

One way is by our existing system of litigation: BP lawyers and the owner’s lawyers (or lawyers for a collective of owners) fight it out in front of some impartial authority (a judge or jury). If you agree with that, you love trial lawyers because they’re required to make that system work.

However, many free marketeers find trial lawyers unsavory, so they envision some more dispassionate entity. People of my generation often read Heinlein in their formative years and long for the role of “Fair Witness” - someone who, due to deep training, does not let her own preferences sway her judgments.

Suppose there exists a Fair Witness-like adjudicator. There are two possibilities: (1) she gains all knowledge needed to decide the case herself, or (2) she relies on proxies who feed her information. In a market world, I suppose that different Fair Witnesses would offer both approaches. I claim, though, that the proxy approach (2) would win out.

Why? If we assume the market works, we must assume that salesmanship, marketing, and advertising work. They are not significantly regulated, so are not affected by market distortions, yet they persist. (Man do they persist.) If you believe the market works, you must believe they lead to better decisions by consumers. (Else the company that eschewed them would be able to deploy the cash saved to positive ROI activities.)

However, what are marketing, salesmanship, and advertising but an argument that your competitor’s counter-argument is weak? It is an inherently adversarial relationship, using the consumer as the highly-interested judge of competing arguments.

This is precisely parallel to the situation of two advocates arguing in front of a Fair Witness. Therefore, the skills of an advocate—a trial lawyer—are essential for the market process of determining how much BP should spend on safety (or, equivalently, how much its insurance provider should insist it spend on safety).

I should note that I am not a believer in the wisdom of the market—due in part to my readings in behavioral economics and asymmetric information—but as a software person I run into a ton of people who claim to believe in the free market but are in effect supporters of America’s peculiar system of crony capitalism, which favors unnaturally-large (according to their expressed beliefs) businesses. Or they are glibertarians. This essay is for them.

Send me bugs that are caught in end-to-end testing

For some time now, I’ve been skeptical of the ROI of end-to-end automated tests and of the value of automating the kind of business-facing examples that drive development.

I’ve walked the walk. The Critter4Us app that’s being used at the University of Illinois vet school does not have these kinds of tests. I’m doing contract programming on another app. I make heavy use of Growing Object-Oriented Software-style tests, but I don’t have any that are larger than unit-sized.

What I’ve discovered with Critter4Us is this: if I do what I consider good TDD, run through end-to-end tests by hand, and follow up with some not-wonderful exploratory testing*, then I do not have bugs that escape to production but would have been caught by full end-to-end tests. (* It’s not-wonderful exploratory testing because I’m a not-wonderful exploratory tester.)

I have written some partial end-to-end tests that exercise the route through the server from HTTP Request to HTTP Response*. Even those are probably not justifiable if the question you care about is “Do they find bugs manual testing would have found, only faster?” However, I write them for two reasons. First, they make me feel better about the pacing of my work, and my own ease-of-work is important to me. The second reason is that I believe a lot of progress in Agile has come through people wanting something, being so naïve they didn’t realize it was impossible for them to have it, so they changed their context to make it possible. So I’m edging toward writing end-to-end tests as a way to force myself to figure out how to make them cost-effective to me.

(* These are very partial end-to-end tests because most of the code lives in the browser front end.)

However, these apps—while “real”—are relatively small, and I do see occasional tweets saying “Having those automated end-to-end tests really saved our butts today!” I’d like to examine some of those bugs in detail so that I can (preferably) discover what kinds of bugs make end-to-end tests worthwhile and thus what specific kinds of end-to-end tests are worthwhile or (less exciting) figure out what unit-style tests were missing.

So email me if you have a juicy bug. But please be aware that “in detail” likely means NDA-level detail and possibly a fair amount of email back-and-forth. And I will want to describe the bugs and systems (in sanitized form) to a worldwide audience.

“Why” documentation - in code? in tests?

With one major exception, I’m one of those “if your code needs documentation, it’s not written right” people. The exception is code that answers the question “Why do that instead of this other obvious thing?” If I’m answering a “why” question with a comment, I always wonder whether the comment should go in the code or in the tests.

As an example, consider an app I’m working on. In response to a user request (picking a menu item), a potentially long-running network transfer starts. What the user sees while that’s going on is that the app switches to the tab that normally displays the table of data she’s asked for, that table has been erased, and there’s a progress indicator spinning there.

Getting that to work is more complicated than you might think. Let’s pretend you’re trying to understand it.

The code behind the menu item posts a notification:
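
In sketch form (all names here are stand-ins for the real app’s):

    def directory_menu_item_chosen(sender)
      OSX::NSNotificationCenter.defaultCenter.
        postNotificationName_object("DataIsNeeded", self)
    end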

The DirectoryController’s initialization declared that this method gets that notification:
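
Roughly (selector spelling approximate):

    # In DirectoryController's init:
    OSX::NSNotificationCenter.defaultCenter.
      addObserver_selector_name_object(self, "data_is_needed:",
                                       "DataIsNeeded", nil)

    # The method itself:
    def data_is_needed(notification)
      switch_to_directory_tab
      erase_table
      start_progress_indicator
    end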

The RemoteDirectory also fields the notification. It looks like this:
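
The gist, with the comment paraphrased:

    class RemoteDirectory
      def data_is_needed(notification)
        # Why kick off the fetch here, when the UI is still busy
        # erasing the table? Because the fetch below is synchronous:
        # once load starts, nothing else happens until it finishes,
        # so anything the user is to see during the load must be set
        # in motion before this line.
        load
      end
    end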

First question: do you think the comment is helpful? If not, should I also explain that the Savon SOAP library doesn’t (seem to) allow asynchronous operation? Should I explain that RubyCocoa doesn’t allow threading, so I can’t invent asynchrony myself?

Second question: Would you rather see the comment here, in the code, or in the tests? Here’s the test of the load method:
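
A sketch of that test (helper and method names invented):

    def test_load_blocks_the_app_so_listeners_must_be_warned_before_it_starts
      flexmock(@directory).should_receive(:fetch_from_network).once.
        and_return(:canned_response)
      @directory.load
      assert_equal :canned_response, @directory.contents
    end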

Does the test name reduce or eliminate the need for the comment? (I would normally be using shoulda and assert2.0 here. I’m not because the project was originally in MacRuby, which didn’t handle them at the time I switched to RubyCocoa.)

If you think the name helps, does that suggest I should change the name in the product code to something other than load? What?