Brain Models

Have you heard of predictive coding theory? It's a new perspective on how our brain works. In the traditional model, the signals from our sense organs go through repeated analysis by multiple layers of neurons. Each layer deciphers something new about a signal – usually something more abstract than what was known in the previous layer. The combination of all the information extracted from a signal in this way results in our perception of that signal. Predictive coding theory flips this model on its head.
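If it helps to see this traditional picture as code, here is a minimal sketch in Python. The layer sizes, random weights, and the tanh nonlinearity are all illustrative assumptions, not a model of any real neural circuit; the point is only that the signal flows one way, bottom-up, with each layer re-describing it at a more abstract level:

```python
import numpy as np

# A toy feedforward hierarchy standing in for the extraction-based model.
# Layer sizes and random weights are illustrative assumptions.
rng = np.random.default_rng(0)
layer_sizes = [16, 8, 4, 2]   # raw signal -> progressively more abstract features
weights = [rng.normal(scale=0.3, size=(m, n))
           for n, m in zip(layer_sizes, layer_sizes[1:])]

def perceive(signal):
    """Pass the signal up the hierarchy; the top-level activity is the 'percept'."""
    activity = signal
    for W in weights:
        activity = np.tanh(W @ activity)   # each layer extracts something new
    return activity

percept = perceive(rng.normal(size=layer_sizes[0]))
print(percept)   # the combined, most abstract description of the signal
```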

In predictive coding (PC), the primary role of our brain is not to extract information from external signals but to predict the information in future signals. In this model, our brain first tries to predict exactly what signal its neurons are about to receive. Then it uses the actual signals from our sense organs to validate or invalidate this prediction. If the prediction turns out to be correct, the brain strengthens the assumptions on which it was based; if it turns out to be incorrect, the brain weakens those assumptions. The collection of assumptions on which we base our predictions can be called our brain's internal model of the world. As we interact more and more with the world around us, this internal model gets better and better at predicting the features of our world.
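To make this loop concrete, here is a minimal single-layer sketch, loosely in the spirit of classic predictive-coding formulations. All names, sizes, and learning rates are illustrative assumptions: the layer predicts its input from a latent state (its "assumptions"), compares that prediction against the actual signal, and nudges both the latent state and its generative weights in whichever direction shrinks the prediction error.

```python
import numpy as np

# A minimal predict -> compare -> update loop for one predictive-coding layer.
# Dimensions and learning rates are arbitrary choices for the demo.
rng = np.random.default_rng(0)
signal_dim, latent_dim = 16, 4
W = rng.normal(scale=0.1, size=(signal_dim, latent_dim))  # generative weights
latent = np.zeros(latent_dim)   # the brain's current "assumptions"
lr_latent, lr_weights = 0.1, 0.01

def pc_step(signal):
    """One cycle: predict the signal, measure the error, revise the assumptions."""
    global W, latent
    prediction = W @ latent        # top-down: what the brain expects to receive
    error = signal - prediction    # bottom-up: the prediction error
    # A correct prediction (error ~ 0) leaves the assumptions nearly untouched;
    # a wrong one revises both the latent state and the weights.
    latent += lr_latent * (W.T @ error)
    W += lr_weights * np.outer(error, latent)
    return error

# Repeated exposure to the same signal shrinks the prediction error:
signal = rng.normal(size=signal_dim)
for _ in range(50):
    err = pc_step(signal)
print(f"remaining error norm: {np.linalg.norm(err):.4f}")
```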


This distinction between extraction-based and prediction-based models of the brain might seem trivial and mechanical at first. However, it makes a world of difference once we start digging deeper. The root of all those differences is this insight:

An extraction-based model will be limited to perceiving what is already there in the signals coming from the outside world. However, a prediction-based model can generate perceptions that have no basis in reality as long as they help us predict the observable consequences of reality.

Let me give you an example:

In this image, you can see a bunch of 2D circles that appear either to bulge out of the paper or to be dented inwards. If you observe them closely, you'll notice that the dented circles are simply upside-down versions of the bulging ones. Why should our brain perceive a shape drawn on flat paper as extending into the third dimension? Why should this extension into the third dimension flip from outwards to inwards when the image is rotated? More interestingly, why can't we unsee this projection into the third dimension despite knowing that the shape is drawn on 2D paper?


We’ll not see such illusions in a purely extraction-based model of processing signals from the external world. These are the side-effects of a prediction-based model. The prediction of a bulged or dented 3D object in front of our eyes will NOT get invalidated by the signals falling on our eyes. At the same time, since we live in a world where we’re more likely to see 3D objects than 2D objects, such a prediction is less likely to get invalidated by future experiences than the prediction that the signals are coming from a 2D shape artificially darkened in just the right way to produce this illusion. So our brain picks this generic and more likely prediction over a specific and less likely prediction even when we intellectually know that such a prediction happens to be wrong in this case.

But wait, you might stop me: our intellectual knowledge is also produced by the same brain that produces perception. Why, then, isn't it influenced by these quirks of a prediction-based brain design?

It is this question that led me away from pure models of the brain to impure ones – where our brain is neither purely extraction-based nor purely prediction-based, but a combination of systems. Some of these systems happen to be purely prediction-based: all the systems involved in perception, for example. Others, like conscious thinking, seem to be purely extraction-based. Further, it seems that these two kinds of systems don't talk to each other as often as they talk to systems of their own kind. Are there systems that are partly prediction-based and partly extraction-based? Theories proposing such systems are beginning to pop up; the first was published in April 2022.


Despite countless hints that our brain is an amalgamation of both these kinds of systems, almost all researchers focus their efforts on pure models. This probably stems partly from the need to keep our models of the brain simple and clean. However, if there's one thing we know about evolution, it's that it's messy, hacky and opportunistic. Simplicity might be too lofty a burden to place on a brain that is the product of such evolutionary processes. Borrowing Einstein's words to frame my criticism of this bias: "Everything should be made as simple as possible, but no simpler."

A second, stronger reason for ignoring hybrid models might be that there's still so much left to understand in each of the pure models that it's too early to dive into hybrids. To someone holding this point of view, a hybrid model might seem lazy – like settling for a hacky explanation before testing the limits of more elegant alternatives. To be honest, I see the merit in this point of view. However, I'm beginning to silence my own criticisms in this matter, since I believe that, at the end of the day, nature doesn't care about elegance any more than it cares about simplicity.

In the following series of posts, we'll dive deeper into both kinds of models as well as explore what hybrid designs of the brain might look like. Let's get started!
