Open Miro frame. Return to central page.
None. The tester tells you that the room is dark.
If exploratory testing looks for bugs, then diagnosis seeks to understand those bugs.
Diagnosis is not exploratory testing – exploratory testing looks for unknown problems, while diagnosis seeks to explore known problems. This distinction may be useful for perspective. In practice, diagnosis is a necessary and vital skill for exploratory testers. On we go.
Sharing Stories of Diagnosis
Let's talk about recent diagnoses, in our working lives as software testers.
(JL: File not found on server / Any button)
When working through a diagnosis, a tester needs to take into account architecture, system integration and potential bugs. This depth may be at odds with how the problem looks, or with a tester's limited scope of observation and influence. Complex symptoms can arise from tiny mistakes; fundamental mistakes can show innocuous symptoms. Software diagnosticians need to consider proximate and ultimate cause, chains of cause and effect, necessary and sufficient causes.
However, they must also acknowledge that cause and effect are not necessarily discrete, linked in an obvious way, or even separable. In systems where there is feedback between cause and effect, diagnosis is easily biased towards the interests of the diagnostician. A given diagnosis may be useful, but may not be right.
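To make the proximate/ultimate distinction concrete, here is a minimal Python sketch – an invented example, not drawn from Machine M or the workshop materials – in which a one-character mistake made early only shows up as a failure much later, and somewhere else:

```python
# Hypothetical illustration (not part of Machine M): a one-character mistake
# made early produces a confusing symptom much later, and somewhere else.

def load_config(text):
    # Ultimate cause: "[:-1]" silently drops the last character of every
    # value - a leftover from stripping a newline that is no longer there.
    config = {}
    for line in text.splitlines():
        key, value = line.split("=")
        config[key] = value[:-1]
    return config

def connect(config):
    # Proximate cause: the port is 808, not 8080, so the failure surfaces
    # here - far from the line that actually introduced the mistake.
    port = int(config["port"])
    if port != 8080:
        raise ConnectionError(f"nothing listening on port {port}")

connect(load_config("port=8080"))
# ConnectionError: nothing listening on port 808
```

A tester who sees only the ConnectionError will naturally start from the port; working backwards along the chain of cause and effect is what leads to the much smaller mistake that actually caused it.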
Machine M
Machine M sets up several artificial situations. Each situation illustrates a different principle of diagnosis, and each comes with a bug report – those reports may be unreliable, but they are a worthwhile starting model.
- You'll only see one crashing bug at a time. For example: you won't trigger the crash for bug 2 when working on bug 3.
- The exercise is, as far as I've been able to make it, honest. Explicitly, this means that reset always takes you back to the same starting position, and prev and next do the same.
- We'll all work on the same bug at the same time. We'll stop and talk about those bugs before we move to the next. We'll start with Bug 1, and go up from there – use prev / next to change.
- You need to look for a crash – the bug reports should get you there. You'll need to reset after a crash to regain control.
- Try to reproduce that crash in different ways. When you have a sense of what is sufficient and necessary to a crash, you've got a model. You can, if the group is open to it, tell the group your model.
- You should test that model by looking for evidence to support it, and by looking at situations which might challenge it (see the sketch after this list). Telling the group will probably give you unexpected insights into challenges, as your model and theirs will be different.
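To show what "having a model" can look like when written down, here is a toy stand-in for Machine M in Python. The real machine's rules are not known to me, so the crash condition below is invented purely for illustration; the point it makes is that a crash can depend on the sequence of presses rather than on which lights are lit, and that reset restores the starting position:

```python
# A toy stand-in for Machine M. The real machine's rules are unknown to me,
# so the crash condition below is invented purely to illustrate the exercise.

class ToyMachine:
    def __init__(self):
        self.reset()

    def reset(self):
        # Always returns to the same starting position, as the exercise promises.
        self.lights = {"red": False, "green": False, "blue": False, "yellow": False}
        self.history = []
        self.crashed = False

    def press(self, button):
        if self.crashed:
            return  # unresponsive until reset, like the real machine after a crash
        self.history.append(button)
        self.lights[button] = not self.lights[button]
        # Invented rule: the crash depends on the *sequence* of presses, not on
        # which lights happen to be lit, so two identical-looking states can
        # behave differently.
        if self.history[-3:] == ["blue", "yellow", "blue"]:
            self.crashed = True

m = ToyMachine()
for b in ["red", "blue", "green", "blue", "yellow", "blue"]:
    m.press(b)
print(m.crashed)  # True - the same shape as the Bug 2 report
m.reset()
print(m.crashed)  # False - reset regains control
```

Testing a model like this means trying sequences it predicts will crash and sequences it predicts will not – the second kind is where challenges, and surprises, tend to come from.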
When diagnosing behaviours whose cause is obscure, we have to consciously let go of our current model of the system. In the real world, we can turn to fault models and to broad observation – here, you may notice when you have a fixed idea. Use this workshop to explore the challenge of letting go.
Exercise: Bug 1
The machine crashes if I press all the buttons
Exercise: Bug 2
I pressed red blue green blue yellow blue and the machine crashed
Exercise: Bug 3
I turned each light on and off in turn. The machine crashed
Exercise: Bug 4
Turning all the lights on in the order yellow blue red green makes the machine crash. No other order has this problem.
Exercise: Bug 5
The machine seems unstable after I turn everything off and on
Exercise: Bug 6
The machine still seems unstable after I turn everything off and on
Principles of Diagnosis – for software
Reduce to parts (see the sketch after this list)
Start with the proximal cause, and work backwards
Consider sequence / history as well as state
Consider factors which are outside your direct control
If your model is too complex, you may need another kind of model
Models are hard to let go
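As a concrete illustration of the first principle, "Reduce to parts", here is a small Python sketch – hypothetical, not part of the workshop's materials – that takes a crashing button sequence and greedily drops presses that are not needed to reproduce the crash. The crashes predicate is invented; on the real Machine M you would play that role yourself by replaying candidate sequences:

```python
# A small sketch of "reduce to parts": given a button sequence that crashes the
# machine, greedily drop presses that are not needed to reproduce the crash.
# `crashes` is a stand-in predicate - replaying on the ToyMachine above, or on
# the real Machine M by hand, would play the same role.

def crashes(sequence):
    # Invented rule, for illustration only: blue, yellow, blue in a row crashes.
    recent = []
    for button in sequence:
        recent = (recent + [button])[-3:]
        if recent == ["blue", "yellow", "blue"]:
            return True
    return False

def reduce_sequence(sequence):
    # Remove one press at a time; keep the removal only if the crash survives.
    i = 0
    while i < len(sequence):
        candidate = sequence[:i] + sequence[i + 1:]
        if crashes(candidate):
            sequence = candidate   # press i was not necessary
        else:
            i += 1                 # press i seems necessary; keep it
    return sequence

report = ["red", "blue", "green", "blue", "yellow", "blue"]
print(reduce_sequence(report))  # ['blue', 'yellow', 'blue'] - a smaller repro
```

This greedy pass gives a smaller reproduction, not a guaranteed minimal one, and it only works because reset makes every replay start from the same position – which is also why sequence and history matter as much as state.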
Over the Break
As you test, please take time to notice:
- when the system changes you – revelations, new models, diagnoses
- how you spread that change β notes, bug reports, system insights