Looked but FAILED TO SEE!
The most common collision between a motorcycle and another vehicle happens at a junction, when the other vehicle (usually a car) turns across the motorcyclist’s path. This crash type accounts for the majority of collisions in urban areas, and is relatively common on rural roads too.
I’ve already mentioned that a significant proportion of these crashes happen when the driver COULD NOT see the bike in the run-up to the crash – the motorcycle might have been hidden by other vehicles, pedestrians or roadside furniture, or concealed by the driver’s own vehicle.
But in around one-third of all collisions, the bike was in a place where the driver could have seen it, but for some reason FAILED TO SEE the machine. This is a ‘detection error’ – ie, the driver looked in the right place, but for some reason failed to identify the presence of a motorcycle in the moments before making the manoeuvre.
Road safety has always treated this as ‘not looking properly’. This ‘fault’ of the driver is nearly always presented as a simple ‘common-sense’ truth: “if it’s visible, and if you look hard enough, you’ll see it.”
Sadly that’s simply not true, as any stage illusionist or camouflaged soldier knows.
Illusionists and camouflage both exploit human visual perception limitations, so if we’re to understand why drivers might fail to spot a motorcycle that should be in clear sight, we need to understand a little about how the human eye works with the brain to present a representation of the outside world into our conscious mind.
The starting point is to understand that the human eyes and brain are not the equivalent of a camera and film (or digital sensor). If you plonk a bike in front of a camera, the bike is what the camera sees. But if you put a human in front of the same scene there are a number of reasons that something in plain view can go missing.
So if we’re to understand just how invisible we can be whilst on two wheels, we need to look for a genuine understanding of visual perception, not just resort to the tired old blame-game approach by saying “the driver didn’t look properly”. I’ll start by looking at two human visual perception issues, before finishing off this investigation in the next article.
Narrow foveal zone and peripheral blindness
Hold your arm straight out, clench your fist and give a ‘thumbs-up’. Look at your thumb nail. Now shift the focus of your attention to the top knuckle instead. Your eyes just moved. Although the knuckle is only a couple of centimetres below your thumb nail, your eyes had to shift FOCUS because the cone of clear, focused colour vision – the foveal zone – is just a couple of degrees of visual angle deep.
Turn your thumb on its side and repeat. Your eyes moved again.
We have to move our eyes because only a tiny patch of the retina – known as the fovea – actually transmits a sharp, camera-like image to the brain and, to see a particular object in detail, we need to line up the fovea with the ‘fixation point’. The zone where we have this ‘foveal’ clear vision is also just a couple of degrees of visual angle wide.
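To put a rough number on that “couple of degrees”: a thumb nail held at arm’s length is a commonly used yardstick for about two degrees of visual angle. Here’s a quick back-of-envelope check (the 2 cm nail width and 60 cm arm length are my own illustrative figures, not measurements from the article):

```python
import math

# Illustrative assumptions: thumb nail ~2 cm wide, held ~60 cm from the eye.
width_cm = 2.0
distance_cm = 60.0

# Visual angle subtended by the thumb nail, in degrees:
# angle = 2 * atan((width / 2) / distance)
angle_deg = math.degrees(2 * math.atan((width_cm / 2) / distance_cm))
print(f"{angle_deg:.1f} degrees")  # prints 1.9 degrees
```

So the whole zone of sharp, full-colour vision really is about the size of a thumb nail at arm’s length – everything else in the scene is handled by peripheral vision.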
Although the retinas of both eyes combine to give us visual coverage extending slightly more than 180 degrees left-to-right, outside the fovea light falls on a part of the retina with a very different construction. This ‘peripheral vision’ becomes increasingly blurry and lacking in detail, and colour vision fades towards black-and-white the further we move away from the fovea.
Why this limitation? There’s a simple answer – transmitting ALL the visual data that falls on the retina to the brain at the same high fidelity as the fovea would require an optic nerve bigger than the eye – there simply isn’t the capacity to carry, let alone process, the data.
Interestingly, designers of high-definition Virtual Reality goggles have hit much the same problem. Achieving high pixel density – and thus highly realistic imagery – across the entire display would require more computing power than any domestic computer or phone can deliver. So they are trying to exploit this phenomenon by increasing pixel density ONLY where the user is looking – a technique known as ‘foveated rendering’. The screen provides high resolution where it’s necessary and where the eye can actually USE it, rather than attempting to display it across the entire screen and frying the processor.
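The idea can be sketched in a few lines. This toy function (the thresholds and fall-off are my own illustrative numbers, not any real headset’s algorithm) renders at full resolution inside a small foveal window and tapers down to about one-tenth resolution by 20 degrees off the gaze point – mirroring the acuity fall-off of the eye itself:

```python
def render_scale(eccentricity_deg: float) -> float:
    """Fraction of full resolution to render at a given angle from the gaze point.

    Toy model: full detail in the ~2-degree foveal zone, a linear taper out
    to 20 degrees, then a floor of ~1/10 detail in the far periphery.
    """
    if eccentricity_deg <= 2.0:        # foveal zone: full detail
        return 1.0
    if eccentricity_deg <= 20.0:       # near periphery: taper off
        return 1.0 - 0.05 * (eccentricity_deg - 2.0)
    return 0.1                         # far periphery: ~1/10 detail

# Resolution fraction at various angles from where the user is looking:
for angle in (0, 2, 10, 20, 40):
    print(angle, round(render_scale(angle), 2))
```

The pay-off is the same trade the eye makes: most of the pixel budget goes where it can actually be perceived.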
But here’s the remarkable thing. We don’t notice the blur, because the brain creates a seamless illusion of a sharp, full-colour scene. The illusion is so good that few of us ever notice it’s there. The phenomenon has been known to visual science for centuries – the first description is often attributed to Leonardo da Vinci.
Given the tiny coverage of the fovea, the vast majority of the incoming visual data falls into peripheral vision. Just 20 degrees off the line-of-sight, our clarity of vision (or ‘visual acuity’) is about one tenth of that of the fovea.
Nevertheless, peripheral vision does retain some ability to detect light/dark contrast, and it is particularly sensitive to sudden bright stimuli and movement.
Once something in the periphery catches our attention, we automatically turn head and eyes to bring it onto the line-of-sight so we can examine it with the fovea’s high-resolution vision – this is called a fixation.
Depth of field
Just like a camera, the human eye has a depth-of-field. If we focus on something close to us, everything in the background is out of focus – and vice-versa: if we’re focused on a background object, closer objects tend to blur. Combine depth-of-field with the narrow cone of foveal vision and there are consequences not only for detecting (or failing to detect) other vehicles in peripheral vision; it also leads me to question the concept of ‘eye contact’ that’s so frequently proposed in the motorcycle safety literature. It seems a doubtful concept at best. Anecdotally, I have heard (and I’m sure you have too) motorcyclists say many times:
“I had eye contact with the driver and he/she still pulled out.”
I’d suggest this is the explanation: although the driver appears to be looking at us, his actual visual fixation is behind us, and our machine is in his peripheral vision. The best we can say is that if the driver is looking our way, we MIGHT have been seen – but it would be wise to assume the driver hasn’t spotted us.
So here’s this month’s takeaway. Never forget that the eye is not a camera, and no two people see the same scene in the same way. And if there’s one vehicle that’s likely to go missing when drivers search the road environment, it’s a motorcycle.
Don’t assume you’ve been seen… EVER.
…to be continued
Kevin Williams / Survival Skills Rider Training www.survivalskills.co.uk
(c) K Williams 2020
The Science Of Being Seen – the book of the presentation £9.99 plus P&P and available now from: www.lulu.com
The ‘Science Of Being Seen’ is a presentation created in 2011 for Kent Fire and Rescue’s ‘Biker Down’ course by Kevin Williams. Biker Down is now offered by over half the nation’s FRSs as well as the UK military, and many deliver a version of SOBS. Kevin personally presents SOBS once a month for KFRS in Rochester. He toured New Zealand in February 2018 delivering SOBS on the nationwide Shiny Side Up Tour 2018 on behalf of the New Zealand Department of Transport.
Find out more here: https://scienceofbeingseen.wordpress.com