For even the simplest task, like catching a ball or crossing a busy street, our brain’s visual motion system needs to keep up with the fast pace of the dynamic world around us. But unfortunately, the brain’s neural processing can’t match the speed of the changes in our environment. Its computations are carried out by sluggish neurones, embedded in a complex network, with multiple stages and variable transmission times: neural processing lags behind the world that the brain is trying to represent.
This problem is acute when time is short. When Andy Murray returns a serve he needs to respond quickly – too quickly to encode the motion of the ball precisely and then act. If he is going to make contact, he needs to predict the ball’s trajectory. In tennis, the player must use all the cues from an opponent’s body and racket position, together with a best guess from previous behaviour, to predict the shot. This is the kind of prediction and anticipation we are familiar with, and it requires conscious effort. However, prediction might also be inherent in the motion processing system itself, as a means of accommodating the inevitable processing delays.
My research seeks to understand at what point in time and space we see moving objects. In the last decade it has become clear that a moving object appears to be ahead of its true spatial location. The motion does not even have to be real motion. As long ago as 1834, Robert Addams reported that after staring at the Falls of Foyers, which at that time cascaded into Loch Ness, the rock beside the waterfall appeared to move upwards. This motion after-effect is considered paradoxical in that, while the object appears to move, there is no corresponding change in its spatial position. However, the after-effect does induce a small, gradual shift in perceived position in the direction of the illusory motion. This observation shows that the experience of motion can itself induce position shifts; they are not simply a product of early visual processing.
Nevertheless, motion can influence early visual processing. A target probe presented just beyond a patch of moving pattern is easier to detect when the probe is a continuation of the pattern than when it is presented in reversed contrast, like a photographic negative. This phase dependency does not occur at the trailing edge of the motion. The finding suggests that motion induces a forward prediction of the spatial pattern, which sums with the probe to improve detection.
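The phase-dependent summation idea can be illustrated with a minimal numerical sketch. Here the extrapolated pattern and the probe are both modelled as one-dimensional sinusoidal gratings; the function names and the use of summed energy as a stand-in for detectability are illustrative assumptions, not the actual experimental analysis.

```python
import math

def grating(x, phase):
    """Luminance profile of a 1-D sinusoidal grating (illustrative)."""
    return math.sin(x + phase)

def summed_energy(probe_phase, n=100):
    """Mean energy of (prediction + probe) over one spatial period.
    Higher energy stands in for easier detection in this sketch."""
    total = 0.0
    for i in range(n):
        x = 2 * math.pi * i / n
        prediction = grating(x, 0.0)        # pattern extrapolated ahead of the motion
        probe = grating(x, probe_phase)     # probe presented at the leading edge
        total += (prediction + probe) ** 2
    return total / n

in_phase = summed_energy(0.0)               # probe continues the pattern
reversed_contrast = summed_energy(math.pi)  # photographic negative (pi phase shift)

# In-phase probes sum constructively with the prediction;
# contrast-reversed probes cancel against it.
print(in_phase > reversed_contrast)  # True
```

A probe that continues the pattern adds constructively to the predicted signal, while a contrast-reversed probe cancels against it, which is one simple way the summation account could yield the observed detection advantage.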
By using a computational model that combines motion computation with pattern prediction and extrapolation, I hope to gain a better understanding of the mechanisms that underlie these observations, and to explore what motion prediction might offer the visual system. Prediction is one way of generating an error signal, and an error signal is needed to check whether motion has been computed correctly. We can predict what a pattern will look like if shifted according to our motion signal, compare the prediction against the moving input at a later point in space and time, and note any discrepancy. The error signal may allow a fast revision of the motion signal, or indeed a re-calibration of the neural mechanisms underlying motion analysis and the position sense, allowing compensation for processing delays.
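The predict-compare-revise loop can be sketched in a few lines. In this toy version a one-dimensional sinusoidal pattern drifts at a true velocity the system does not know; the system shifts the current frame by its velocity estimate, measures the prediction error against the frame the world actually delivers, and descends that error to revise the estimate. All names, the gradient-descent update rule, and the parameter values are illustrative assumptions, not the author's actual model.

```python
import math

N = 64         # samples per frame
TRUE_V = 3.0   # true drift, in samples per frame (unknown to the system)

def frame(shift):
    """One frame of a sinusoidal pattern translated by 'shift' samples."""
    return [math.sin(2 * math.pi * (i - shift) / N) for i in range(N)]

def prediction_error(v_est):
    """Shift the current frame by the estimated velocity, then compare it
    with the frame the world actually delivers one step later."""
    predicted = frame(v_est)   # current frame extrapolated by the estimate
    actual = frame(TRUE_V)     # next frame delivered by the world
    return sum((p - a) ** 2 for p, a in zip(predicted, actual)) / N

# Revise the velocity estimate by descending the prediction error
# (finite-difference gradient; purely illustrative).
v_est, lr, eps = 0.0, 20.0, 1e-4
for _ in range(100):
    g = (prediction_error(v_est + eps) - prediction_error(v_est - eps)) / (2 * eps)
    v_est -= lr * g

print(round(v_est, 2))  # converges to TRUE_V = 3.0
```

The point of the sketch is simply that a prediction error is sufficient to drive the estimate toward the true motion; in the visual system the same signal could, as argued above, also recalibrate the mechanisms underlying motion analysis and the position sense.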
The process of building a detailed, explicit computational model should allow us to test these ideas and to clarify more generally what problems the visual system must solve to compute motion. The insights gained may also suggest new strategies for computing image motion in computer vision.
Professor Alan Johnston
University College London
Awarded a Leverhulme Research Fellowship in 2013