Intro to Brain Models #2

Wanna see something that’ll make you mistrust your own vision? Stand in front of a mirror and move your gaze left and right. Maybe shift your gaze back and forth between your eyes. Your eyes don’t seem to move at all, right? Now record yourself doing this and watch the video. Your eyeballs move quite vigorously, yet in the mirror you saw them as perfectly stationary! How is this possible?

Historically, researchers believed that our visual system temporarily stops processing signals when we move our eyes (our eyes move in rapid jerks called ‘saccades’) to avoid seeing a blurry image of the world every time we shift our gaze. This was called ‘saccadic blindness’. However, a clever experiment showed that we do process retinal images even during a saccade. A series of vertical lines on a screen was made to move horizontally at speeds fast enough to make them invisible to the naked eye. However, when viewers moved their eyes in the same direction as the lines, they could temporarily see them, since the relative speed between the moving eye and the moving lines was reduced. This showed that the brain processes the images falling on our retina even during a saccade, but somehow suppresses them unless they’re clear – ‘saccadic suppression’. If the brain’s intent is to avoid processing blurry images, though, why doesn’t it suppress blurry images resulting from external motion – when we’re looking out of a moving train, for example?


In a 2016 paper titled ‘Neural mechanism of saccadic suppression’, the authors describe an experiment to identify how our brain’s processing of retinal images changes during a saccade, specifically in the middle temporal (MT) and medial superior temporal (MST) cortical areas. They found that about 66% to 68% of the neurons showed significant differences in how they process retinal image motion induced by a saccade vs motion induced externally (like a train journey). The remaining neurons either didn’t respond to high-velocity image motion at all or responded equally well in both cases. What surprised them was a saccade-induced reversal in the behaviour of a significant fraction of neurons from the former category: neurons that lit up for left-to-right motion of an externally-induced image lit up for right-to-left motion during a saccade! Further, this reversal started about 70 ms before the saccade began, indicating that some top-down intervention linked to the brain’s decision to initiate a saccade contributes to saccadic suppression, in addition to whatever effects the actual movement of the eye may contribute.

What could be influencing this alteration in the way our brain processes the signals from our eyes during a saccade? The clue lies in the observation that the alteration starts about 70 ms before the actual saccadic movement begins. What comes just before a saccadic movement? To answer this question, let’s take a small detour into the fascinating world of electric fish.


Mormyrid fish detect their prey using electroreceptors on their body that sense the small electric fields their prey generate. However, this is tricky business for the Mormyrids because they themselves repeatedly generate large electric pulses for navigation and communication (known as Electric Organ Discharges, or EODs). These EODs activate their own electroreceptors, swamping the much weaker electric fields generated by their prey. How then do they go about detecting prey without confusing themselves all the time? Researchers investigated this question using an elegant and elaborate setup.

First, they figured out how to record electrical activity from neurons in the fish’s electrosensory lobe (ELL), the region of the fish brain where signals from its electroreceptors first get processed. Then, they mimicked the fish’s EODs by placing a small electric dipole in the water near the electroreceptors on its skin. Next, they paralysed the muscles that generate EODs without interfering with the fish’s ability to send commands to those muscles, and they worked out how to tell when the fish was sending a command to discharge an EOD. Lastly, they generated fake prey-like electric fields in the water to see how the Mormyrid ELL processed them.

What they found is that the fish’s ability to detect the fake prey dropped drastically whenever the researchers played an artificially generated EOD that wasn’t synced with the fish’s command. However, the fish regained its prey-detection ability the moment the artificial EODs were synced to predictably follow the fish’s command. Their best guess? Whenever the fish sends out a command to generate an EOD, it also tells the neurons in its ELL to filter out the electrical signature of its own EOD from the total signal received at its electroreceptors. Think of it as a negative image of its own EOD’s electrical signature that gets sent to the neurons in its ELL. When this negative image is added to the total signal coming from the electroreceptors, it cancels out the EOD, leaving behind the electrical signature generated by external activity in the environment (like the presence of prey). This is akin to how your active-noise-cancelling headphones work.
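The cancellation idea can be sketched in a few lines of code. This is my own toy illustration with made-up numbers, not data from the study: the signal at the electroreceptors is the sum of a weak prey field and the fish’s own large EOD, and adding a stored negative image of the EOD leaves just the prey field.

```python
# Toy sketch of the "negative image" cancellation idea
# (hypothetical values, not real recordings).

def cancel_self_signal(received, negative_image):
    """Add the stored negative image to each received sample."""
    return [r + n for r, n in zip(received, negative_image)]

prey = [0.02, -0.01, 0.03, 0.00]   # weak external field (arbitrary units)
eod = [1.50, -2.00, 1.20, -0.50]   # fish's own large discharge

received = [p + e for p, e in zip(prey, eod)]  # what the receptors sense
negative_image = [-e for e in eod]             # inverted copy of the predicted EOD

recovered = cancel_self_signal(received, negative_image)
print(recovered)  # ≈ the original prey signal
```

The key design point mirrors the experiment: the negative image is triggered by the fish’s *command*, not by sensing the EOD itself, which is why desynchronising the artificial EOD from the command broke the cancellation.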


If this is how electric fish cancel out the sensory signals resulting from their own actions, could this be something our brain does too? The answer is yes.

Every time we initiate a movement of any kind, our brain sends a copy of that command (called an “efference copy”) to the relevant regions of the brain involved in sensory perception. Why? To help them predict and adapt to the incoming signals generated by our own actions. This information about our own motor commands plays a different role in prediction-based models of the brain than in extraction-based models.

In extraction-based models, this information helps the brain identify which component of the incoming sensory signal should be attributed to the consequences of one’s own actions. The primary benefit is accurate attribution. This, as we’ll see later, is the precursor to our sense of self – at least one kind of it (we possess several kinds of self).

In prediction-based models, on the other hand, this information helps the brain predict the future sensory signals that are a direct consequence of our own actions. The primary benefit is accurate processing of cause and effect. This is the precursor to our sense of agency: the feeling of being the cause of our actions, i.e., free will.
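The distinction can be made concrete with a rough sketch – again my own toy illustration, not any specific model from the literature, assuming signals are simple numbers and the internal model is perfect. The same efference copy serves both roles: in the extraction view it splits the sensed signal into a self-caused part and a world-caused part; in the prediction view it generates a prediction, and only the prediction error carries forward.

```python
# Toy illustration of the two roles an efference copy can play
# (assumed scalar signals and a perfect internal model, for simplicity).

def attribute(total_signal, expected_self_signal):
    # Extraction-based role: split the signal into self vs world components.
    world_part = total_signal - expected_self_signal
    return expected_self_signal, world_part

def prediction_error(total_signal, predicted_signal):
    # Prediction-based role: only the surprise propagates onward.
    return total_signal - predicted_signal

efference_copy = 0.8   # expected sensory consequence of our own command
sensed = 1.0           # total signal actually received

self_part, world_part = attribute(sensed, efference_copy)
error = prediction_error(sensed, efference_copy)
print(self_part, world_part, error)
```

Notice that the arithmetic is identical in both cases; what differs is the interpretation – labelling a component as “mine” versus discarding it as “already predicted” – which is exactly the distinction the two model families draw.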

Does this mean our sense of self is stronger in systems of the brain that employ extraction-based models – conscious thinking, for example? Along the same lines, is our sense of agency stronger in systems built on prediction-based models (like perception)? Is it possible to have one without the other, if we can find an activity that exclusively uses extraction-based or prediction-based systems? How can we test these predictions? More on this later.
