Teeny Tiny Test Harness
The more JavaScript I write, the more I use console.assert to test my code. It's my go-to teeny tiny test harness.
I use it in two main ways:
Friction-free TDD – I use CodeRunner to explore short, self-contained ideas. These proofs of concept fit onto a page or two. As these ideas are new toys, and as I want them to be self-documenting, I build with TDD to keep me on the straight and narrow.
Here's an example of how I use it on the result of a test call to a target function (the setup line below is a hypothetical stand-in for that call):
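// hypothetical setup: findLinkedPages stands in for whichever function is under test
const result = findLinkedPages(pages);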
console.assert(Array.isArray(result), "should return array");
console.assert(result.length === 1, "should return a single value");
console.assert(result[0].slug === "thisOne", "should return an object with right slug attribute");
console.assert(result[0].title === "this one", "returned object should have a title");
console.assert(result[0].updated_at === "somedate", "returned object should have updated_at");
If I choose to take this forwards, I'd expect to adjust it into something clearer. I'll expand on that below.
Parameter Validation – I miss typed languages, sometimes. So I write routines to validate the input to functions, especially when I've just wasted my own day debugging something I should have avoided.
A validation looks like:
const check = console.assert;
check(Array.isArray(inboundList), "buildListOfLinkedPages has not received an array");
check(inboundList.length > 0, "buildListOfLinkedPages expects non-empty content");
check(inboundList[0].hasOwnProperty("slug"), "buildListOfLinkedPages expects first element of array to have property slug");
check(inboundList[0].hasOwnProperty("title"), "buildListOfLinkedPages expects first element of array to have property title");
check(inboundList[0].hasOwnProperty("updated_at"), "buildListOfLinkedPages expects first element of array to have property updated_at");
Do these two examples look similar? Why yes, they do, and that is because the output of the routine tested in the first is the input checked in the second.
Enhancements to the Harness
If I'm moving from teeny-tiny to just tiny, I'll grab something that makes things clearer and easier. I've done it, a bit, in the validation above: I use check as a synonym for console.assert.
I'll use the synonym example = console.assert, too; this helps me understand which of my tests are the fundamental examples I want to keep. I might let startNewTest = console.log, as syntactic sugar to help me see what I'm testing.
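Here's a minimal sketch of those synonyms in use, continuing the hypothetical example from above:

const example = console.assert;
const startNewTest = console.log;

// findLinkedPages and pages are the hypothetical stand-ins from the first example
startNewTest("findLinkedPages: slug lookup");
example(findLinkedPages(pages)[0].slug === "thisOne", "fundamental example: returns the right slug first");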
Once I'm using synonyms rather than the native commands, I might put more into those calling functions, to let me keep counts or toggle suites on and off. At this point, I recognise I'm building a test harness, and so turn to Jasmine or Tape or my own libraries.
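Just before that tipping point, the wrapper might look something like this (a sketch; the counter and toggle are assumptions about what I'd want, not code from the spike):

// a grown-up check, one step short of reaching for a real harness
let failCount = 0;
let suiteEnabled = true;

function check(condition, message) {
  if (!suiteEnabled) return;        // toggle a whole suite off without deleting its checks
  if (!condition) failCount += 1;   // keep a running count of failures
  console.assert(condition, message);
}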
The point is that I can start building tests with no imports, no plumbing, no necessary infrastructure to explain to myself or others once memory has faded.
Enhancements to the Tests
These examples aren't part of polished code; I'm working in a spike. However, if this idea seems useful, I'll need to move on. My test code will need to change to support that.
I'd typically take a hard look at my TDD scaffolding, keep whatever seems scaffolding-like and might help me as/when I make changes, and bin most of the rest of the cruft.
I'd rewrite some non-scaffolding cruft, and add more, to act as examples. I'd write (or at least note down) some failing edge cases, to help me remember limitations that I've left in. My aim in doing this would not be to check the code's behaviour, but to use working examples to clarify what I intended to make.
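A noted failing edge case can be as light as a commented-out check (this particular limitation is illustrative, not taken from the original code):

// known limitation (hypothetical), left in deliberately; a note to my future self, not a passing test
// check(buildListOfLinkedPages([]).length === 0, "should return an empty list for empty input");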
I'd also take a look at dependencies – perhaps I can see a way to write new tests to join a few components. My aim is to lay the groundwork to explore unexpected interactions. When coding, I make unit testing easy by favouring [[pure functions]] if possible, so I don't spend much time on dependencies for my tests.
I might write probes and measures. Probes to hit whatever I'm testing with generated ranges, and to look programmatically for simple stuff, or to plot the output in some way that I can eyeball. Measures to capture response times and more. My aim in doing this is to understand what I actually made.
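A probe-and-measure loop might be sketched like this (makePages is an assumed input generator; the sizes and target are illustrative):

// probe: run the target across generated input sizes and eyeball the output
for (const size of [1, 10, 100, 1000]) {
  const input = makePages(size);              // hypothetical generator of test pages
  const start = performance.now();
  const out = buildListOfLinkedPages(input);
  const elapsed = performance.now() - start;  // measure: response time
  console.log(`size ${size}: ${out.length} pages in ${elapsed.toFixed(2)}ms`);
}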
More manually, I might keep track of those measures and probe outputs, so that I would have a chance of observing changes in behaviour. I might use an existing or generated test set and capture the results as an approval test. My aim here is to support my flaky memory as I make changes over time, or as I use my stuff in different contexts.
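An approval-style capture can stay teeny tiny too (knownInput and storedSnapshot are assumptions: a fixed test set and its previously approved result):

// compare today's output against a result I approved earlier
const snapshot = JSON.stringify(buildListOfLinkedPages(knownInput));
check(snapshot === storedSnapshot, "output no longer matches the approved snapshot");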