Testing Reveals Requirements
Moving on from a placeholder
Let's agree that requirements aren't perfect. Sometimes problems get spotted only when there's a deliverable to test. Sometimes the deliverable is a working system, sometimes it's a sketch. If you're not looking for trouble, you're missing your chance.
When we test, we are often the first to feel the truth about what we're building. Sometimes, what we've found shows us that what we've delivered needs to change.
Example: our requirements tell us to aggregate all transactions. We have 20 transactions, but we've aggregated only 17.
When the demonstrable truth runs counter to a clear requirement, there's an easy bug to log.
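As a minimal sketch of that mismatch, with stand-in data and no real system behind it, the check is a single assertion that fails:

```python
# Illustrative stand-in data: 20 transactions go in, but the
# (hypothetical) aggregation only returns 17 of them.
transactions = list(range(20))   # 20 transactions in
aggregated = transactions[:17]   # 17 transactions come out

# The requirement says "aggregate all transactions", so this fails:
assert len(aggregated) == len(transactions), \
    f"aggregated {len(aggregated)} of {len(transactions)} transactions"
```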
Sometimes, what we've found reveals that we need to change something deeper than the deliverable.
Example: we've aggregated all the transactions. We notice that when we include a reversed transaction, we count the transaction and its reversal as two transactions, and we see that the values cancel out. Both behaviours match expectations. We investigate averages for sets that include reversals, and they seem low because we're counting reversed transactions. We make an example, with one transaction of value 100 and two pairs of reversed transactions: the average transaction value is 20, for each of 5 transactions. Do we need to change how we count transactions, change how we calculate averages, or change our expectations of what the counts and averages should be? We want to investigate what happens to daily averages when the transaction and its reversal fall on different days. What happens when the reversal is reversed?
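Here's a sketch of that arithmetic, assuming (for illustration only) that each transaction is a signed value and a reversal is a separate transaction that negates its original:

```python
# One transaction of value 100, plus two reversed pairs.
transactions = [100, 50, -50, 30, -30]

count = len(transactions)   # 5: each reversal counts as a transaction
total = sum(transactions)   # 100: the reversed pairs cancel out
average = total / count     # 20.0: dragged down by counting reversals

print(count, total, average)  # 5 100 20.0
```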
We make systems for people. Perhaps some people want a count of all transactions including reversals, while others want an average value of all unreversed transactions. If we discover this in a deliverable, then we need the business expertise to recognise the inconsistency, the organisational knowledge to understand who cares, the explanatory skills to take our scenario to both parties in a way that makes sense, and the facilitation skills to help them understand each other's point of view – and maybe to be part of their decision. And then someone needs to be persuaded to put up the money to pay for the change that is needed and agreed.
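To make the inconsistency concrete, here is a hypothetical sketch of those two views over the same data; the pair-matching is deliberately naive, for illustration only:

```python
transactions = [100, 50, -50, 30, -30]

# One view: a count of all transactions, reversals included.
count_all = len(transactions)  # 5

# Another view: an average over unreversed transactions only.
# Naively drop each reversal together with its original, assuming
# exact value matches - a simplification for this sketch.
remaining = list(transactions)
for value in transactions:
    if value < 0 and -value in remaining and value in remaining:
        remaining.remove(-value)
        remaining.remove(value)

average_unreversed = sum(remaining) / len(remaining)  # 100.0

print(count_all, average_unreversed)  # 5 100.0
```

The count says 5; the average says 100. Both are defensible, and neither satisfies everyone.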
The interesting thing here is that we need to change our decisions about the system, before we work out how to change the system. Something unexpected has been revealed about the problem. Testing has revealed a requirement.
If we do what we might hope for, and discover this before the system is built, we'll still need all those skills, but perhaps the change is swifter, cheaper, and simpler to deliver. We're testing the requirements, not the deliverable – but still purposefully exploring those requirements and judging what we find.
Did I say we? Do you do that? Do all testers? Are testers expected to do that, in your organisation? Or are testers expected to make a known set of simple measurements to check off requirements?
Surprises come from all directions. You'll find them in the code, in the data, in system behaviours, in people's reactions. Their roots are in our assumptions and expectations, in our requirements, forced on us by our architecture or bought in by a dependency.
If we look for trouble, we'll stand a better chance of reacting productively to surprises.