One way we achieve our mission of supporting life through art is by asking artists to solve problems. The impact of Covid-19 on our planned programme meant we needed to be resourceful. Often we turned to our Education Coordinator and artist Stuart Moore, who in turn looked to nature for solutions.
I had the task of making some machine-vision devices to count visitors and to control installations, and ended up with a set of cyber interpretations of different creatures’ eyes. The process started with some standard AI techniques, but the interesting part came after that approach didn’t actually work very well.
In order to adhere to the latest and ongoing public health guidance as we safely reopened our In Steps of Sundew exhibition at The Arches, Fineshade Wood, in July 2020, the first device needed to count visitor numbers and populate a spreadsheet that our team could access remotely to assess visitor behaviour throughout opening hours. The twitchy, digital quality of this initial system gave much lower accuracy than I needed. It occurred to me that the camera working as the eye was far more sophisticated than the computer working as the brain. In a real creature that would never happen – the eye and brain develop together as a single entity.
By way of a hasty rethink, I removed the lens from the camera to reduce its image to a single coloured blob. That drastically reduced the amount of information the computer had to deal with, bringing the complexity of the camera more in line with the capability of the computer brain. After promising results, I realised that it was exactly analogous to the eye of a clam or scallop.
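The reduction itself is almost trivial. A minimal sketch of the idea, assuming frames arrive as NumPy arrays (the function name is my own illustration, not the device code):

```python
import numpy as np

def frame_to_blob(frame):
    """Collapse a full camera frame (H x W x 3) into one average
    colour -- roughly what a lensless sensor reports anyway."""
    return frame.mean(axis=(0, 1))  # a single (R, G, B) triple

# A mostly dark frame with one warm bright patch still reduces
# to just three numbers for the computer brain to consider.
frame = np.zeros((120, 160, 3))
frame[40:80, 60:100] = [255, 128, 0]
blob = frame_to_blob(frame)
```

Everything downstream – thresholding, counting – then works on three numbers per frame instead of tens of thousands of pixels.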
A clam has a number of very simple eyes along the opening of its shell. Each eye really just senses light levels – it can’t focus an image. In a way, a single eye sees a single pixel. Each eye is shadowed from the others a little by the wrinkles in the shell, so they don’t see exactly the same thing as one another. (If they saw identically to each other, then there would be no need to have more than one of them). It’s possible to tell a great deal about the environment by such simple means. The clam can sense the approach of predators and my machine could count passing people. To my surprise, the cyber clam eye could also tell the temperature of the building! Another very useful thing about simple eyes is that it’s fairly obvious how their neurons must wire together because there are so few options.
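Counting with such an eye can be as simple as counting shadows. A toy version of the idea (the names and threshold are my illustration, not the actual gallery code): each passer-by shows up as a dip in one sensor’s light level, and counting the dips counts the people.

```python
def count_passes(light_levels, threshold=0.5):
    """Count visitors passing a single simple 'eye'.

    Each passer-by casts a shadow, so the light level dips below
    `threshold`; each falling edge counts as one person.
    """
    count, was_dark = 0, False
    for level in light_levels:
        dark = level < threshold
        if dark and not was_dark:  # a fresh shadow has arrived
            count += 1
        was_dark = dark
    return count

# Two people walk past: two separate dips in the light level.
levels = [0.9, 0.9, 0.2, 0.1, 0.8, 0.9, 0.3, 0.2, 0.9]
```

With several such eyes, the order in which the dips arrive even tells you which way a person was walking – the clam trick of shadowing each eye slightly differently.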
To play Ikran Abdille’s 2016 film, Kenya, on a continuous loop, we used a self-timed monitor set to switch on and off shortly before and after opening hours. However, on occasion the monitor’s standby feature would kick in prematurely, leaving the uninvigilated gallery space with a blank screen. The first device had to count people. The next device had a trickier task – it needed to watch TV and turn it back on whenever the monitor slipped into standby mode.
A clam can’t see well enough for that. It needs an eye that can focus an image. This got me thinking about how a more complex creature would handle the task. A fly of course has a faceted eye. Each facet, or “ommatidium”, is in fact a long tube with a group of light-sensitive cells at the bottom. The long tube shadows each group of cells significantly from the others, so that the overall effect is the same as a lens. My logic, if you’re happy to call it that, was that each collection of light sensors at the bottom of an ommatidium is the same as the collection of eyes on a clam, just shadowed by stronger wrinkles. One ommatidium is a whole clam. The fly has a lot more neurons, but if the pattern of the fly eye is just a multiplication of the clam structure, then it seems reasonable to think the pattern of connecting neurons is multiplied up in the same way. I have no way of telling if that’s literally true of a fly, but it is true that the resulting device worked rather nicely. It watched TV for months and never missed a beat. I admit, though, I’m a little anxious about the ethics of building slaves doomed to watch TV forever. I think there may be a Tales Of The Unexpected episode a bit like that.
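The TV-watching job itself boils down to a very fly-like loop: keep sampling overall screen brightness, and if it stays dark during opening hours, the monitor must have dropped into standby. A sketch under my own naming – how the real device actually woke the screen (e.g. an HDMI-CEC command) is abstracted away here:

```python
def standby_detector(brightness_stream, threshold=10.0, patience=3):
    """Yield a 'wake' event whenever the screen has been dark for
    `patience` consecutive readings, i.e. it has entered standby."""
    dark = 0
    for level in brightness_stream:
        dark = dark + 1 if level < threshold else 0
        if dark == patience:  # confidently dark: time to wake the monitor
            yield "wake"
            dark = 0

# The screen is bright, goes dark (standby), and should be woken once.
readings = [80, 75, 2, 1, 1, 60, 70]
events = list(standby_detector(readings))
```

Requiring several consecutive dark readings stops a brief shadow – a visitor leaning over the sensor, say – from being mistaken for standby.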
Triple Harvest – The Archives, our 2021 exhibition at The Arches, had a monitor playing eight different films ranging in length from just a few minutes to just under an hour. As this would result in a drastically different visitor experience depending on which film was playing on arrival, we felt the ability for visitors to select the films would enhance the experience and improve engagement. However, a simple remote control was out of the question as it might increase the risk of coronavirus transmission through touch, or simply go missing.
A third device needed a third variation. This time it had to recognise simple hand gestures, like swipe and point so that people could navigate through films without physically touching anything. The fly eye didn’t recognise anything it saw. Its perception, similar to the clam, was more like an ability to tell the ‘amount’ of whatever it was interested in.
Creatures like toads and crabs are a little more sophisticated – but only a little. The relationship of the toad or crab eye to the fly eye seems pretty clear. The separate groups of light-sensitive cells are no longer separate but sit next to one another as a retina, though a grouping structure is still present, known as a “receptive field”. The individual light-guiding facets are fused together into a single lens.
Considering how well toads and crabs get on, their recognition is remarkably basic. Experiments show that if a toad sees a horizontal dark line, it thinks it’s a worm. If it sees a vertical dark line, it doesn’t – and that’s about all there is to it! A crab mostly looks upwards, at the bright surface of the water. If it sees a shadow on one side of its vision, it runs away in the opposite direction.
My third device looked at the ceiling like a crab and watched for lines like a toad. If it saw something long and thin, it looked at which way it moved. That was enough to perceive left and right swipe gestures and a point gesture, for scrolling through or selecting films respectively. Importantly, it needed very little computer power, and enjoyably, it responded with a rather organic vibe – you had to stroke it the way it liked. There are probably a lot of interesting things to explore in there, about a machine that can feel different body language.
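The classification step can be sketched in a few lines. Assuming the device has already tracked the horizontal position (0 to 1 across its view) of a long thin shape over a few frames – the toad’s “worm” – the gesture is just a matter of how far it travelled. The names and threshold below are my own illustration:

```python
def classify_gesture(positions, move_threshold=0.2):
    """Classify a tracked hand from its horizontal positions over time.

    Large rightward travel -> 'swipe_right', leftward -> 'swipe_left',
    and a hand held roughly still -> 'point' (select).
    """
    travel = positions[-1] - positions[0]
    if travel > move_threshold:
        return "swipe_right"
    if travel < -move_threshold:
        return "swipe_left"
    return "point"
```

A hand sweeping across the view, say at positions `[0.2, 0.4, 0.7]`, reads as a right swipe; one hovering in place reads as a point.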
For me, it opens up some interesting questions about the nature of sight. It seems that seeing is particularly shaped by what you are trying to see. It’s quite possible that the toad is not just uninterested in vertical lines – it may actually lack the ability to perceive them. Our vision can tell an astonishing amount. It’s quite probable that, by inheritance from predatory ancestors, we can tell exactly where in space a fast-moving tennis ball is, thanks to wiring on our retina that measures motion blur. On the other hand, what can’t we see? Are there types of pattern that are invisible to us? If so, I just bet there’s some kind of device that can uncover them somehow…
Stuart Moore is Education Coordinator for Fermynwoods Contemporary Art, having taught in secondary and post-sixteen phases. Stuart is a sound artist and microtonal composer who works closely with technology. His work explores the relationship between the experience of the involuntary soundscape and purposeful human composition, by way of studying the roots of the perception of music. Stuart’s work is driven by his belief that sound is the most direct expression of human feeling.
Both of our exhibitions at The Arches were endorsed with Visit England’s We’re Good To Go industry standard consumer mark, to reassure visitors that we adhered to current and ongoing Government and public health guidance.
Images: Scallop eyes, Matthew Krummins, CC BY 2.0 https://creativecommons.org/licenses/by/2.0, via Wikimedia Commons; Film selecting gesture sensor, Fermynwoods Contemporary Art, 2021; They’re all just a change in entropy to me, Stuart Moore, 2021