Exploration Through Example

Example-driven development, Agile testing, context-driven testing, Agile programming, Ruby, and other things of interest to Brian Marick

Mon, 24 Nov 2003

Element of the Art: Triggers

Here are some tentative thoughts about my topics and exercises for the Master of Fine Arts in Software trial run.

The topics are driven by a position I take: requirements do not represent the problem to be solved or the desires of the customers. Nor, for practical purposes, are designs a refinement of the requirements. Neither is code. And tests don't represent anything either.

Rather, all artifacts (including conversations with domain experts) are better viewed as "triggers" that cause someone to do something (with varying degrees of success). Representation and refinement don't enter into it (except in the sense that we tell stories about them). So both requirements and system-level tests are ways of provoking programmers to do something satisfying. And code is something that, when later programmers have to modify it, triggers them to do so in a more useful or less useful way.

In practical terms, I am thinking of covering these topics:

How conversation triggers tests, text, and more conversation
Goal: to increase understanding of, and skill at, interviewing domain experts and feeding the resulting information into programming.

Exercise: My wife is a large animal veterinarian at Illinois. Their medical records system is wretched. I told her I wanted to practice programming by writing a new one (without any expectation that it would really be used). I'll have a bit of a start on that by the time of the MFA trial.

She and her graduate students are domain experts. We can interview them to see what they do and what they want, using two variant interviewing styles: just talking, and talking augmented by writing tests. We will compare and contrast the two, and we'll see what questions arise as interviewers try to explain what they learned to people not involved in the interviews. (I'm also hoping to get some sociology students to watch the interviews and comment on them.)
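To make the second style concrete, here's the kind of test that might get written down mid-interview. Everything in it is invented for illustration: the MedicalRecord class, the RecordClosed error, and the discharge rule are my guesses at the sort of claim a domain expert might make, not anything from the real records system.

    require 'test/unit'

    # Invented stand-ins, just enough to let the interview-style test run.
    class RecordClosed < StandardError; end

    class MedicalRecord
      def initialize(patient_name)
        @patient_name, @notes, @open = patient_name, [], true
      end
      def discharge; @open = false; end
      def add_treatment_note(text)
        raise RecordClosed unless @open
        @notes << text
      end
    end

    class InterviewNotes < Test::Unit::TestCase
      # "Once a patient is discharged, nobody should be adding treatment
      # notes to its record" -- a claim we'd read back to the expert.
      def test_no_new_treatment_notes_after_discharge
        record = MedicalRecord.new('Dobbin')
        record.discharge
        assert_raise(RecordClosed) { record.add_treatment_note('flunixin IV') }
      end
    end

The interesting part isn't the assertion; it's the "except when..." conversations a test like this provokes.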

Thereafter, we will flesh out sets of tests and keep track of questions that arise. Why didn't they come up earlier? We'll also think about what's missing from the tests. Is there anything we feel the need to write down? Why?

Then we'll do some coding. What questions arise?

The order of coding
Goal: to learn how the order in which tests are developed affects the final code.

Exercise: Pairs of people will do test-driven development. Each pair will be given a small set of tests to pass. They're to follow YAGNI ("you aren't gonna need it"), writing as little code as they can. After the tests pass, they'll come get a new set of tests, which will (I hope) provoke them to implement a more elaborate state machine pattern. Iterate several times.

Each pair will get the same set of tests, but in different orders.

After each pair is finished, they'll join up with another finished pair. First question: how different is the code (and why)? Second question: were any of the sequences better than the others (and why)?
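For a feel of the mechanics, here's a sketch of a first iteration. The Door example and its tests are invented, not the actual exercise materials; the point is that the first tests don't yet justify a state machine, so a boolean flag is as little code as you can write. A pair whose sequence led with, say, a locked-door test might reach for state objects right away, and their final code would show it.

    require 'test/unit'

    # The simplest thing that passes the first tests: a flag, not a
    # state machine. Later tests (locked, ajar, alarmed...) are what
    # should pressure this toward real state objects.
    class Door
      def initialize; @open = false; end
      def open!;  @open = true;  end
      def close!; @open = false; end
      def open?;  @open;         end
    end

    class DoorTest < Test::Unit::TestCase
      def test_a_new_door_starts_closed
        assert_equal(false, Door.new.open?)
      end

      def test_opening_a_door_makes_it_open
        door = Door.new
        door.open!
        assert_equal(true, door.open?)
      end
    end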

Learning from tests
Goal: to learn to write or organize tests so they're more useful to later readers.

Exercise: Each person will bring some code+tests that they are familiar with. They will also bring a set of questions for someone else to answer about the code. Another person will try to answer the questions by first looking at or running the tests, then (if necessary) looking at the code, running it, etc. The two will then discuss how the tests could have been more informative (new tests? different organization? better names?).
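As a small invented instance of "better names": the two tests below check the same thing, but a reader skimming for the whitespace rules learns them only from the second.

    require 'test/unit'

    # A stand-in for the code under test.
    def parse(line)
      line.split
    end

    class ParseTest < Test::Unit::TestCase
      # A later reader gets nothing from this name...
      def test_parse_2
        assert_equal(['a', 'b'], parse('  a   b '))
      end

      # ...and the whole whitespace rule from this one.
      def test_parse_ignores_leading_trailing_and_repeated_whitespace
        assert_equal(['a', 'b'], parse('  a   b '))
      end
    end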

How code triggers code readers
Goal: to become more skillful at writing code that targets a particular kind of reader.

Exercise: We'll begin with a body of code that is stylistically idiomatic for one audience. Working in pairs, people will identify what makes it idiomatic and rewrite it to match the expectations of another audience.
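As an invented example of what "idiomatic for one audience" means: both methods below total patients' weights, but the first reads naturally to someone arriving from C, the second to a seasoned Ruby programmer. The exercise is to notice those expectations and rewrite deliberately.

    Patient = Struct.new(:name, :weight)

    # Written for readers who expect explicit indexes and accumulators:
    def total_weight_c_style(patients)
      total = 0
      i = 0
      while i < patients.length
        total = total + patients[i].weight
        i = i + 1
      end
      total
    end

    # Written for readers who expect blocks and Enumerable:
    def total_weight_ruby_style(patients)
      patients.inject(0) { |total, patient| total + patient.weight }
    end

    herd = [Patient.new('Dobbin', 550), Patient.new('Bess', 620)]
    puts total_weight_c_style(herd)     # => 1170
    puts total_weight_ruby_style(herd)  # => 1170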

## Posted at 08:05 in category /misc
