Mon, 24 Nov 2003
Element of the Art: Triggers
Here are some thoughts about my topics and exercises for the Master of
Fine Arts in Software trial run. They are tentative.
The topics are driven by a position I take: requirements do not
represent the problem to be solved or the desires of the customers.
Further, for practical purposes, designs are not a refinement of the
requirements. Neither is code. And tests don't represent anything
either.
Rather, all artifacts (including conversation with domain experts) are
better viewed as "triggers" that cause someone to do something (with varying degrees of
success). Representation and refinement don't enter into it (except in
the sense that we tell stories about them). So both requirements and
system-level tests are ways of provoking programmers to do something
satisfying. And code is something that, when later programmers have
to modify it, triggers them to do it in a more useful or less useful
way.
In practical terms, I am thinking of covering these topics:
- How conversation triggers tests, text, and more conversation
Goal: to increase understanding of, and skill at, interviewing domain
experts and feeding the resulting information into programming.
Exercise: My wife is a large animal veterinarian at Illinois. Their
medical records system is wretched. I told her I wanted to practice
programming by writing a new one (without any expectation that it
would really be used). I'll have a bit of a start on that by the time
of the MFA trial.
She and her graduate students are domain experts. We can interview
them to see what they do and what they want. We can use two variant
interviewing styles: just talking, and talking augmented by writing
tests. (We will compare and contrast the two. We'll also see what
questions arise as interviewers try to explain what they learned to
people not involved in the interviews.) (I'm also hoping to get some
sociology students to watch the interviews and comment on them.)
Thereafter, we will flesh out sets of tests and keep track of
questions that arise. Why didn't they come up earlier? We'll also
think about what's missing from the tests. Is there anything we feel
the need to write down? Why?
Then we'll do some coding. What questions arise?
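To make "talking augmented by writing tests" concrete, here is a
sketch of the kind of test an interviewer might write down mid-conversation.
Everything in it is invented for illustration (the `Case` class, the
sign-off rule, the names); the point is only that turning a domain
expert's statement into an executable check tends to surface the next
question.

```python
# Hypothetical: during an interview, the expert says "a case stays
# open until a clinician signs it off." The interviewer sketches it
# as a test. All names here are invented, not from the real project.

class Case:
    """A minimal medical-record case, just enough for the interview."""
    def __init__(self, patient):
        self.patient = patient
        self.signed_off_by = None

    @property
    def open(self):
        return self.signed_off_by is None

    def sign_off(self, clinician):
        self.signed_off_by = clinician


def test_case_stays_open_until_clinician_signs_off():
    case = Case(patient="Misty the horse")
    assert case.open
    case.sign_off("the attending clinician")  # who counts as a clinician?
    assert not case.open
```

Writing even this much forces questions ("who counts as a clinician?
can a sign-off be undone?") that plain talking might let slide.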
- The order of coding
Goal: to learn how the order in which tests are developed affects
the final code.
Exercise: Pairs of people will do test-driven development. Each pair
will be given a small set of tests to pass. They're to follow YAGNI,
writing as little code as they can. After the tests pass, they'll come
get a new set of tests, which will (I hope) provoke them to implement
a more elaborate state machine pattern. Iterate several times.
Each pair will get the same set of tests, but in different orders.
After each pair is finished, they'll join up with another finished
pair. First question: how different is the code (and why)? Second
question: were any of the sequences better than the others (and why)?
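As a sketch of what "provoked into a state machine" might look like
(this is my illustration, not the exercise's actual code): early tests
might be satisfiable with an if-statement, while a later test set,
with more states and events, nudges the pair toward a transition
table. The `Document` workflow below is invented.

```python
# A hypothetical end state of the exercise: once enough tests exist,
# a table of (state, event) -> next_state transitions becomes simpler
# than accumulating conditionals.

class Document:
    TRANSITIONS = {
        ("draft", "submit"): "in_review",
        ("in_review", "approve"): "published",
        ("in_review", "reject"): "draft",
    }

    def __init__(self):
        self.state = "draft"

    def handle(self, event):
        try:
            self.state = self.TRANSITIONS[(self.state, event)]
        except KeyError:
            raise ValueError(f"cannot {event!r} while {self.state!r}")
```

Whether a pair arrives here, and how early, is exactly what varying
the test order should reveal.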
- Learning from tests
Goal: to learn to write or organize tests so they're more useful to
later readers.
Exercise: Each person will bring some code+tests that they are
familiar with. They will also bring a set of questions for someone
else to answer about the code. Another person will try to answer the
questions by first looking at or running the tests, then (if
necessary) looking at the code, running it, etc. The two will then
discuss how the tests could have been more informative (new tests?
different organization? better names?).
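A small before/after, invented for illustration, of what "better
names" can buy a later reader: the same assertions, reorganized so the
test names state the rules a reader would otherwise have to infer from
the code.

```python
# Hypothetical code under test.
def normalize(name):
    return " ".join(name.split()).title()

# Before: the name tells a later reader nothing.
def test_1():
    assert normalize("  ada   lovelace ") == "Ada Lovelace"

# After: each name answers a question the reader might bring.
def test_collapses_internal_runs_of_whitespace():
    assert normalize("ada   lovelace") == "Ada Lovelace"

def test_strips_leading_and_trailing_whitespace():
    assert normalize("  ada lovelace ") == "Ada Lovelace"

def test_title_cases_each_word():
    assert normalize("ada lovelace") == "Ada Lovelace"
```

The "after" tests are no stronger as checks; they are better as
triggers for the next reader.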
- How code triggers code readers
Goal: to become more skillful at writing code that targets a
particular kind of reader.
Exercise: We'll begin with a set of code that is stylistically
idiomatic for one audience. Working in pairs, people will identify
what makes it idiomatic and rewrite it to match the expectations of
another audience.
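A toy illustration (mine, not from the exercise) of the same
computation written for two audiences: one for readers who want every
step spelled out, one for readers fluent in the language's idiom. Both
functions behave identically.

```python
from collections import defaultdict

# For readers who think in explicit steps:
def totals_by_category_explicit(items):
    totals = {}
    for category, amount in items:
        if category not in totals:
            totals[category] = 0
        totals[category] += amount
    return totals

# For readers fluent in Python idiom:
def totals_by_category_idiomatic(items):
    totals = defaultdict(int)
    for category, amount in items:
        totals[category] += amount
    return dict(totals)
```

Identifying what makes each version feel "natural" to its audience is
the point of the rewrite exercise.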
## Posted at 08:05 in category /misc