# Exploration Through Example

Example-driven development, Agile testing, context-driven testing, Agile programming, Ruby, and other things of interest to Brian Marick
Thu, 21 Aug 2003
At XP Agile Universe, two people - perhaps more - told me that I'm not doing enough to aid the development of Agile testing as a discipline, as a stable and widely understood bundle of skills. I spend too much time saying I don't know where Agile testing will be in five years, not enough pointing in some direction and saying "But let's see if maybe we can find it over there". They're probably right. So this is the start of a series of notes in which I'll do just that.

I'm going to start by restating a pair of distinctions that I think are getting to be fairly common.

If you hear someone talking about tests in Agile projects, it's useful to ask if those tests are business facing or technology facing. A business-facing test is one you could describe to a business expert in terms that would (or should) interest her. If you were talking on the phone and wanted to describe what questions the test answers, you would use words drawn from the business domain: "If you withdraw more money than you have in your account, does the system automatically extend you a loan for the difference?"
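A question like that can also become an automated test. Here's a minimal Ruby sketch of the overdraft example; the Account class and its methods are hypothetical stand-ins for whatever the real banking code would provide.

```ruby
require 'test/unit'

# Hypothetical account object, written just fully enough for the sketch to run.
class Account
  attr_reader :balance, :loan_balance

  def initialize
    @balance = 0
    @loan_balance = 0
  end

  def deposit(amount)
    @balance += amount
  end

  def withdraw(amount)
    if amount > @balance
      @loan_balance += amount - @balance  # extend a loan for the difference
      @balance = 0
    else
      @balance -= amount
    end
  end
end

class OverdraftTest < Test::Unit::TestCase
  # The phone-call question, restated as an executable check.
  def test_overdrawing_extends_a_loan_for_the_difference
    account = Account.new
    account.deposit(100)
    account.withdraw(150)
    assert_equal(50, account.loan_balance)
  end
end
```

The Ruby is incidental; what makes the test business facing is that its vocabulary - deposit, withdraw, loan - stays in the business domain.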
A technology-facing test is one you describe with words drawn from the domain of the programmers: "Different browsers implement Javascript differently, so we test whether our product works with the most important ones."

(These categories have fuzzy boundaries, as so many do. For example, the choice of which browser configurations to test is in part a business decision.)

It's also useful to ask people who talk about tests whether they want the tests to support programming or critique the product. By "support programming", I mean that the programmers use them as an integral part of the act of programming. For example, some programmers write a test to tell them what code to write next. By writing that code, they change some of the behavior of the program. Running the test after the change reassures them that they changed what they wanted. Running all the other tests reassures them that they didn't change behavior they intended to leave alone. (A small Ruby sketch of this style of test appears after the matrix below.)

Tests that critique the product are not focused on the act of programming. Instead, they look at a finished product with the intent of discovering inadequacies.

Put those two distinctions together and you get this matrix:

|                       | Support programming | Critique the product |
|-----------------------|---------------------|----------------------|
| **Business facing**   |                     |                      |
| **Technology facing** |                     |                      |
In future postings, I'll talk about each quadrant of the matrix. What's my best guess about how it should evolve?