
Making the RasterReveal Exercise

Articles · Mar 16, 2022 (updated Mar 17, 2022)

A roughly-chronological story of how I ended up with the RasterReveal exercise.

This is an example of what I do when making an exercise. Here, where most of the tech is taken from (and written by) someone else, you might imagine that I do more configuration and diagnostic work than usual (because I don't know the thing I'm using), or less (because my own code is built by one gadfly mind over years). I think it's about the same, though the work is perhaps more transferable in this case. I've called this work wrangling and debugging (the link will take you to the series I wrote on it in late 2021). I seem to spend ages wrangling and debugging, yet I don't find much about it in the testing or coding literature.

None the less, testers do it, coders do it, and we seem to spend ages on it, whether we're building a new thing from the ground up, or integrating some massive collection of legacy kit. Let's acknowledge that, and talk about how we can do it.


Stumbling across an exercise

The exercise turned up as I was looking at charting tools. I was playing with Paper.js's demos. The division raster example was interesting because it was nice to do, and because that pleasure came from revealing something. A part of my mind is – unconsciously and constantly – on the lookout for things which might help me to explain something. I thought that I could use the demo to help people get a swift feel for discovering something. I felt that it would be a better exercise if the image was less well-known, and better still if it was a random image. I wanted to integrate it with this site, too.

I let the idea brew for a couple of days, then put a swift demo of my own together. I chose to work in Tumult Whisk – I reckoned the demo wouldn't work straight away, and Whisk gives instant feedback, so I could see the effect of each change as I discovered what was possible and learned how to make it work.

Building my own out of parts

To find out about the assets and dependencies, I started with the browser's DevTools, and got confused. I turned instead to Working with Paper.js, and dropped in the script exposed by the demo. I was glad to see that the demo site's dependencies on jQuery and on CodeMirror were not needed to get the thing running locally.
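If you want to do the same, a minimal local page looks something like this – a sketch, with the file paths and the canvas id as my placeholders rather than anything taken from the demo:

<!DOCTYPE html>
<html>
<head>
  <!-- paper-full.js brings its own PaperScript parser; no jQuery, no CodeMirror -->
  <script type="text/javascript" src="js/paper-full.js"></script>
  <!-- the demo's script, saved locally and run as PaperScript against the canvas below -->
  <script type="text/paperscript" canvas="rasterCanvas" src="js/division-raster.js"></script>
</head>
<body>
  <canvas id="rasterCanvas" resize></canvas>
</body>
</html>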

I changed the image to a different local image. The experience of revealing the image remained pleasurable and a little surprising even when I knew the image, so I reckoned that it was worth pressing ahead with randomness. Unsplash will serve you a random image at https://source.unsplash.com/random/. For more depth (you can specify quite a lot about your random image) take a look at https://awik.io/generate-random-images-unsplash-without-using-api/.
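The swap itself is small. A hedged sketch in PaperScript – the variable name is mine, but a paper.js Raster will take a URL as its source:

// point the image at a random Unsplash photo instead of a local file
// (note the trailing slash – more on that shortly)
var raster = new Raster('https://source.unsplash.com/random/');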

0th go – security matters

Random images failed initially and consistently with an error that seemed, CORS-like, to regard an image from a different base URL as inherently insecure.

Initially and consistently? I've since re-tried it and it worked fine, though ...um... demonstrating that one needs to curate photos for a positive learning experience. If you're doing it yourself, note that within paper.js, the URL needs a / on the end to avoid a generic "no photo" image from Unsplash, even though the un-slashed address works in the browser.
And in a further update, I tried to repro a working random image only to find, once again, that I'm stymied by "The canvas has been tainted by cross-origin data". I suspect a hidden dependency, so perhaps there's a diagnostic exercise here.
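For anyone digging into that: drawing a cross-origin image onto a canvas without CORS approval "taints" the canvas, and the pixel-reading that paper.js then does gets blocked. One avenue I'd check – a guess, not something I've verified against this demo – is asking for a CORS-enabled copy of the image, which paper.js supports on the Raster, and which only helps if the image server sends the right headers:

var raster = new Raster({
    crossOrigin: 'anonymous',  // request a CORS-enabled image so the canvas isn't tainted
    source: 'https://source.unsplash.com/random/'
});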

I chose to move on with a curated selection of locally-served images – which also addressed my concern about putting a randomly-chosen image in front of people I didn't know while they were at work – so I picked out a set of images which seemed delightful and tried them out.

First go – pictures matter

My experience was... not great. I learned that I needed images with large contrasting areas – flowers in the snow were no good. Details necessary to make sense of the image needed to be easy to find – images which recede to the centre and images with a gradient and a small point of attention were out, so I dropped a woodland tunnel and a drone shot. Images needed to play to our reality/TV-honed visual sense, so I dropped a couple which were more abstract or surreal.

Having tried it out locally on my laptop, I put the thing live on exercises.workroomprds.com/rasterReveal. I put it there because I've got more-direct control over JavaScript and over local media there – it's my own server, rather than this service. From there, I used an iframe to embed it on this page. I tried it out on a real person (that's what kids are for), and it seemed OK.

Then I tried it on an iPad, and it was awful.

Second go – devices matter

On the iPad, the image was huge. Worse, the hardware responded by scrolling the huge image. Scrubbing with a finger, as an analog to scrubbing with the mouse, led only to nausea. Setting the iframe to no scroll made a difference, as did adjusting the frame size. Yet the experience was still awful; once the scrolling/sizing grimness was cleared, it became clear that revealing the image worked very differently, and I felt confused and disconnected as things I touched revealed nothing, but image details appeared elsewhere.
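For the record, the embed ended up as something like the following – the URL is real, but the dimensions and the particular attributes are my rough reconstruction of "no scroll" and "frame size", not the exact markup:

<iframe src="https://exercises.workroomprds.com/rasterReveal"
        width="600" height="400"
        scrolling="no" style="border: none; overflow: hidden;"></iframe>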

Despair – have I got an exercise after all?

Wondering whether this was to do with touch events, rather than mouse events, I went to the issues list, to find several issues from a decade ago which turned on the non-equivalence of touch events and mouse events. Similar problems had been fixed more recently, in 2018, but there were still problems. I couldn't see anything to do with event position, though. Aware that I was using an old library in new tech, and that I was stepping away from more-common use cases by dropping it into an iframe, I was worried that I might have hit a bug in the library. I reinforced my dismay when I checked the original example at http://paperjs.org/examples/division-raster/ – it, too, showed the same behaviour on the iPad.

Investigation as an antidote to despair

However, studying what was actually happening showed me that the area being acted on was consistently not where I was touching: it seemed to be displaced proportionally to the top-left of the picture. For example, banging away at the centre tended to divide up the lower right quadrant, and anything much outside the top left quadrant made no difference at all. Within paper.js' site, comparing their Voronoi example on desktop and iPad made it even more clear. Oddly, this gave me some comfort; perhaps it was less likely that the iFrame was getting in the way. The origin seemed OK – perhaps the scales were wrong. My laptop is an old non-retina MacBook, and my iPad has much greater pixel density...
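To put numbers on that hypothesis (an illustration of the arithmetic, not code from the project): if touch positions are being scaled by the device pixel ratio once too often, every action lands proportionally further from the top-left corner, and anything much outside the top-left quadrant falls off the canvas entirely.

// illustration only: an 800 × 600 canvas on a screen where devicePixelRatio is 2
var scale = window.devicePixelRatio;   // 1 on my old non-retina MacBook, 2 on the iPad
var touch = { x: 400, y: 300 };        // a finger at the centre of the canvas
var acted = { x: touch.x * scale, y: touch.y * scale };   // (800, 600): the bottom-right corner
// a touch at (500, 400) maps to (1000, 800) – beyond the canvas, so nothing appears to happen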

The experimenter in me wanted to write a small, controlled bit of code as an example. The wrangler wanted to change the stuff I already had. The wrangler won. I made the following things the same size: the source image, the canvas on workroomprds.com, and the iframe on this site. Didn't help.

I took a step back; a new experiment would be long, and I had no guarantee of learning anything of long-term value. Was there anything else to change? Could code help, rather than aligning numbers? The paper.js code looked alright, as far as I could tell. Would I need to write a new set of code and introduce a scaling factor by device? Could I write it? What devices would I need to write it for? As I was thinking of this, I went looking in the HTML and CSS for other places to influence size, and noticed the following in the HTML for the canvas element.

<canvas ... resize ...></canvas>

I searched in the HTML5 docs for canvas. Finding that resize was not a documented attribute, I looked (there is no search) in paper.js's docs – and in the tutorial section, I found Canvas Configuration, which said of resize="true": "Makes the canvas object as high and wide as the Browser window and resizes it whenever the user resizes the window." Worth a go.

Resolution

I removed resize, and found to my relief that, finally, revealing the image by scrubbing the iPad with my finger worked in the same way as it worked on a laptop, hovering with the pointer.
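So the canvas element went from the demo's version to something like this – the id and dimensions are placeholders, and explicit width/height is my assumption about how the size gets set; the point is only the missing resize:

<!-- before: copied from the demo -->
<canvas id="rasterCanvas" resize></canvas>

<!-- after: no resize; explicit dimensions instead -->
<canvas id="rasterCanvas" width="600" height="400"></canvas>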

I'll look at more devices when I dare.

I note from the same Canvas Configuration tutorial that I could also (sketched roughly below):

  • sort out the CSS for the frame with `canvas[resize] { width: 100%; height: 100%; }`,
  • investigate the option to manage high DPI screens,
  • dig into an onResize event which is called on resize – and which I guess calls the onResize event in the paper.js code I nicked from their demo.
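Sketched roughly, those notes-to-self look something like this – I believe hidpi is the tutorial's option for high DPI screens, and the rest is untested exploration rather than a configuration I've used:

<!-- the CSS the tutorial pairs with a resizing canvas -->
<style>
  canvas[resize] { width: 100%; height: 100%; }
</style>

<!-- hidpi="off" skips the hi-res backing store on high DPI screens -->
<canvas id="rasterCanvas" resize hidpi="off"></canvas>

// and in PaperScript, a handler that runs whenever the view is resized
function onResize(event) {
    // re-fit or re-centre the raster here
}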

Building the software into an exercise

I need to remind myself, on every exercise, that people need clear instructions. I'm not there yet – I've made a good second attempt, but the instructions will get better once I've taught it more.

Good instructions tend to be pithy (verbs, sequence, answering what to do), to set constraints (time, tech, answering how long / how to do it), and to give purpose (answering why). In something like this, there are layers to purpose: the purpose someone might have as they poke at a picture is different from their purpose as they think about their work (my instructions need to cover both), and different again from the purpose they might have as they enter the exercise, or as they bring in their colleagues (I might seek to inspire rather than instruct for that context).

I imagine that people will generally be learning without me on hand, which is ideal. I kept track of questions, and of revelations which beg a question, as I explored various pictures. I've sorted those questions, and added them to the exercise.


Emergent Behaviours

Here are a couple of behaviours which emerged unexpectedly as I put working code into unfamiliar contexts:

  • the location where I acted on the screen was not, on at least one system, the point where the system acted on the image
  • working code was stymied by built-in security measures

I should also acknowledge that the exercise depended on helpful images. That may not be an emergent behaviour of the software system, but it certainly emerges from studying the system of brain-and-hand-and-context-and-software. If we test the artefacts we build to the exclusion of how they are really used, we leave part of the job un-done.
