Our Brain Predicts - And Hallucinates - What We See

An essay by Maija Tammi in conversation with Floortje Bouwkamp

Interviewee: Floortje Bouwkamp, Doctoral researcher at the Donders Institute for Brain, Cognition and Behaviour in the Netherlands. (FB)

Interviewer: Maija Tammi, Artist and doctor of arts. (MT)

MT: Researcher Bouwkamp, you are investigating the predictive brain. What does that mean? 
FB: For a very long time, scientists thought that the brain passively perceives signals that are coming in. For instance, with vision, the idea has been that sensory information from the outside hits our retina and is sent to the back of our brain, where it is processed in a hierarchical way. The first regions of our visual cortex are sensitive to orientation and lines, and the later, more frontal areas are sensitive to more complex features like color or shape, all the way to objects, faces, or whole scenes.

The idea of the predictive brain is very different from the above. Instead of thinking about vision as a passive process, it is active. Our brain is constructing a model of the environment. And from this model the brain makes a prediction, in the case of vision, of what it thinks it is seeing. This prediction can then be compared to the actual incoming signals, i.e. the sensory information registered with your eyes. The difference between the internal model and the signals coming in is called a prediction error, and this generates a response in the brain that we can measure.

This predictiveness of the brain makes us very efficient in dealing with large inputs of information. The brain builds models, filters out what we already know, and concentrates on the new information that does not match the model to be able to make an even better model.  
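The loop FB describes, predict, compare, keep only the mismatch, can be written down as a toy computation. This is purely an illustrative sketch of the predictive-coding idea, with made-up numbers and an invented update rule, not code from the Donders research:

```python
# Toy predictive-processing loop: the internal model predicts the input,
# and only the mismatch (the prediction error) is used to update the model.
def predictive_step(prediction, sensory_input, learning_rate=0.1):
    error = sensory_input - prediction               # prediction error
    prediction = prediction + learning_rate * error  # refine the model
    return prediction, error

prediction = 0.0   # the model's initial guess about the signal
sensory = 1.0      # the actual incoming signal
for _ in range(50):
    prediction, error = predictive_step(prediction, sensory)

# After repeated exposure the prediction matches the input, and the
# prediction error, the "new information", has shrunk toward zero.
print(round(prediction, 3), round(error, 5))
```

The point of the sketch is that after enough exposure the model absorbs the regularity and the error signal, the only thing left to process, approaches zero.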

MT: So, the more we have seen things, the better predictions our brains are able to make?
FB: Yes, exactly, the predictions are based on lived experience. My research topic is how we use pre-existing knowledge when we search for things in our environment. People are much faster at finding things in these environments if they have been exposed to them before, even without realizing it.

MT: What happens when there are competing predictions? When it's unclear what we are seeing, for example. How does the brain decide on what it “sees”?
FB: This is exactly where things become interesting. We have the internal model and the incoming signals, and either or both can be uncertain. In foggy weather, the incoming signals are ambiguous, so your brain will rely more on its model, its foreknowledge. When the internal model is vague, we rely more on the incoming signals, or what we call the perceptual evidence.

For example, if you are in a forest in this foggy weather and you see a shape, you are likely to think that the shape is a deer because the context is a forest. But if you are in a city, you are more likely to think that this same shape is a human. We use an example at the lab, where one sees a few different versions of a street scene with a blurry shape either by the road or on the road. It is always the exact same shape, just flipped horizontally or vertically. When the shape is horizontal and it's on the road, people see a car. And when it's vertical and by the road, people see a human. So the context determines what you perceive.
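This trade-off between model and signal is often formalized as precision weighting: each source counts in proportion to how reliable it is. A minimal numeric sketch, where all the precision values are invented for illustration:

```python
# Precision-weighted fusion of a prior (the internal model) and sensory
# evidence. Each estimate is weighted by its precision (inverse variance):
# in fog the sensory precision drops, so the percept is pulled toward the prior.
def fuse(prior_mean, prior_precision, evidence_mean, evidence_precision):
    total = prior_precision + evidence_precision
    return (prior_precision * prior_mean
            + evidence_precision * evidence_mean) / total

# Clear day: a sharp input dominates the percept.
clear = fuse(prior_mean=0.0, prior_precision=1.0,
             evidence_mean=10.0, evidence_precision=9.0)
# Foggy day: the same input is unreliable, so the prior dominates.
foggy = fuse(prior_mean=0.0, prior_precision=9.0,
             evidence_mean=10.0, evidence_precision=1.0)
print(clear, foggy)  # 9.0 1.0
```

The same sensory value lands at very different percepts depending only on how much the brain trusts it versus its model, which is the fog-versus-clear-weather contrast FB describes.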

MT: What about when the difference between the internal model and the incoming signals is very big? Do we try to gather more information to solve what exactly we are seeing? 
FB: Yes, there are two ways to solve this. One is to change the internal model because it's wrong. The brain can alter the internal model multiple times, so that it minimizes the prediction error and eventually arrives at the best solution. But if the error cannot be solved by changing the model, one can also sample more. For instance, people will take longer to process images that are ambiguous to them. A member of our lab, Dr. Lea-Maria Schmitt, is showing people artificial images where she combines two animals, for example, a rabbit with a duck. This confuses people, and they will need to pay more attention and sample more information to infer what they are looking at. Basically, it takes longer when the information in the brain must travel back and forth, in multiple iterations, to solve the prediction error. This is called iterative inference.
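The back-and-forth FB calls iterative inference can be caricatured as a loop that keeps revising the model until the prediction error is resolved. The tolerance, update rule, and numbers below are invented for illustration; the only claim carried over from the interview is that ambiguous inputs take more iterations:

```python
# Toy iterative inference: revise the internal model until the prediction
# error falls below a tolerance. Inputs far from the model need more
# iterations, mirroring the longer processing times for ambiguous images.
def iterate_until_resolved(model, observation, lr=0.5, tol=0.05, max_iters=100):
    for step in range(1, max_iters + 1):
        error = observation - model
        if abs(error) < tol:
            return model, step     # inference has settled
        model += lr * error        # revise the model to reduce the error
    return model, max_iters

# An input close to the current model resolves quickly...
_, easy_steps = iterate_until_resolved(model=0.9, observation=1.0)
# ...while an input far from the model, like a rabbit-duck hybrid for a
# brain expecting one animal, takes more iterations to settle.
_, hard_steps = iterate_until_resolved(model=0.0, observation=1.0)
print(easy_steps, hard_steps)
```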

MT: In the earliest text on the uncanny (in 1906) Ernst Jentsch describes uncanny as something that defies the intellectual mastery of our physical environment. In one of his examples, one sits on a tree trunk and suddenly the trunk starts to move and shows itself to be a giant snake. Why is uncertainty such a powerful feeling? And does our brain opt for the safest predictions? 
FB: The goal of the brain is to predict the world as accurately as possible, so it does not like uncertainty. The tree trunk example is a bit of a specific case, though, as this kind of stimulus triggers a bypass, a fast route in our brain instead of the normal way of processing information, to keep us safe.

In general, what I like about the concept of “uncanny” is that it deals with something being “strangely familiar”: it evokes a sense of familiarity but also questions whether this thing is part of our natural world: whether something is animate or not, alive or not, alien or not. These kinds of things challenge our internal model in a way that most people find uncomfortable. This might indeed have to do with some evolutionary pressure, in that being able to distinguish animate from inanimate keeps us safe.

MT: Makes sense, and Freud remarked in his essay The “Uncanny” (1919) that while we do not like uncanny things in our physical surroundings, in arts – literature and film, for example – the same things can be pleasurable. But this, of course, requires that the reader/spectator knows that they are physically safe.
FB: This also follows the Goldilocks principle. We do like surprises, a degree of randomness, or a bit of uncertainty when it is at an intermediate level. A lot of uncertainty is uncomfortable, but when things are fully predictable, they can also be boring. There is this niche in between that keeps us intrigued and curious, and makes us enjoy things. 

MT: At MU, there is my new artwork On the Third Day which consists of a video and an animation. At the very beginning of the animation, there are muffled and distorted sounds that do not make sense when one first hears them. It is impossible to make out the words. The words are “Run. Run. Listen. You have to escape now.” Why is it that, when knowing what the words are, one can suddenly hear them very clearly?
FB: I think this is a very powerful experience. What happens is that you first hear the sounds, but you're uncertain what you're dealing with. The incoming signal is super unclear, and you also have no context, no lips moving, for instance. Maybe it is not even words, but just sounds. So that's why we can't interpret the signal and we're left with uncertainty. But if you are told what you're listening to, you insert knowledge into your prediction, and then everything changes. I love this example because it's so clear that when you’ve been told the words, you perceive exactly what they are.

MT: I had a peculiar problem with this part of the work. I knew what I wanted to do with the sounds for the beginning of the animation, but there was no way for me to know if the distortion of the words was enough, because I always heard the words myself. Thus, I had to test it on friends who knew nothing about the piece and ask them if they heard anything. 
FB: Yes, that's the burden of knowledge, right? As soon as you know, you can't undo knowing the words. That makes it very tricky.

MT: This brings to my mind a magic trick where a magician throws a ball in the air, and it vanishes. However, the magician only appears to throw the ball in the air, and our brain “hallucinates” seeing the ball in the air. Is the working of this trick based on our brain’s prediction capabilities?
FB: Yes, that is how the trick works because we are used to seeing a ball moving a certain way. We anticipate the trajectory of the ball. If we didn’t, playing tennis would be completely impossible, for example. This ball example relates to research in our lab. We have a project where we show people a ball moving as blips on the screen. It is a dot changing place and we perceive it as movement. After seeing this multiple times, people have an expectation of the movement of the dot. And when we only show the starting point of the movement and then a black screen, the activity in the visual cortex shows similar activity to when seeing the full trajectory. The brain is “hallucinating” the trajectory of the dot, because it expects it.

MT: Ok, we can hallucinate a ball moving, but we can also imagine a lot of other things, and simulate events in our minds. But what is the relationship between imagination and the predictiveness of the brain? 
FB: There are interesting links between the two, but they are not the same. First, the activity in the brain is weaker when we imagine a ball versus seeing it, but the two also have a different activity profile in the brain.

For instance, we know that when people are imagining a certain picture, the activity in their brain, especially in the part of the brain related to vision, shows very similar activity to when they are actually seeing the image, similar to the moving dot example, but not entirely.

That folded outer layer of your brain, the neocortex, has layers. With advanced MRI machines, we can measure activity not only in the different areas of the brain, but also in the different layers of the neocortex. When we see something, information goes from the back of our brain to the front of our brain, and the information travels through these layers in a specific way. This is what we call bottom-up. But there is also a connection pattern in the layers that relates to the information traveling the other way, which we call top-down. For instance, when we are imagining something, we are driving activity from the front of our brain to the back of our brain. This makes sense, as there is no information coming in from your retina, as we are only thinking about it. When we are anticipating an image, the related activity has the same layer profile as when we are imagining something. In that sense, when our predictions ask us to really hallucinate something, then it is quite like imagining something.

MT: The artworks On the Third Day and Hulda/Lilli involve storytelling. How does storytelling affect our predictions? 
FB: If you think about building internal models, they are based on knowledge, and storytelling is generating context and knowledge. It helps you build a model of whatever world you are dealing with. This will influence how you perceive any further information. 

In our lab, Dr. Eva Berlot is investigating this quite literally. She shows people illustrations that tell a story in a certain order. Either the order of the images keeps the narrative intact, making it meaningful, or the order is scrambled and thus not meaningful. People’s eye movements differ; for instance, they look at different parts of the image depending on whether the order was meaningful or not. Thus, a narrative can influence information sampling strategies.

I think viewers of your work are experiencing this in real life; the experience changes dramatically depending on the story that is told. It reminds us that we are far from neutral, objective beings seeing things “as they are”. Instead, we are actively shaping what we are seeing, and stories play an important role in this.

MT: I showed you a test version of the flower petals moving on a sea of cockroaches without telling you what you were looking at. What were your initial thoughts?
FB: I saw the video clip before you mentioned the word uncanny, but in hindsight, that is exactly how I felt, though not straight away from the beginning. When I first saw the video, I just saw these lovely flower petals. But then I noticed some movements, and that triggered curiosity. Curiosity is an important driver in reducing uncertainty about the world, as it makes us actively explore our environment. Somehow, I thought there was a woman underneath, like it was a floral bath. Perhaps I thought this because of the movie American Beauty, where a man fantasizes about a girl covered in rose petals. But the movement of the petals was peculiar, and then the uncanny feeling hit me. It reminded me of being pregnant myself. Feeling a baby move in your belly is wonderful, but also very, very unfamiliar. But when I saw a little antenna peeking out in the video, everything suddenly changed, and I realized, “Bugs! There are bugs of some kind!”

MT: We have been talking about the predictiveness of the human brain, but how well would this apply to other species? 
FB: If you view humans as agents who need to understand their natural environment, this does not pertain only to humans. Animals also must understand their environment for survival. So, I think a lot of these things that we are researching also hold for animals, but at the same time, we know that the way the human brain has optimized learning is quite distinct from other species. We are masters at learning. If we compare our capabilities of learning, predicting, and having expectations to those of other animals, the abilities differ vastly. For instance, human babies just a few months old already have an idea of the statistical properties of their environment. The babies have expectations about what should happen next, they are sensitive to surprises, and they will actually look with their eyes in a way that will reduce uncertainty and maximize learning. This doesn’t compare to what we can expect from any adult monkey.

MT: Last time we talked, the principal investigator of your lab, professor Floris de Lange, was also present. We talked about this small box he has in his office. It is a small box that has an on-off switch, and if one turns the switch to “on”, a finger appears and pushes the switch back to off. Thus, uncannily, the box just turns itself off. Floris said that a student of his gave the box to him because it resembles the human brain. Why does the human brain want to switch itself off? 
FB: There are two ways to look at this. One way is that our brain is trying all the time to come up with better and better predictions of the world, and then filter the known things out. One could say that the goal is that everything is predictable, we understand everything, and then there is nothing left to perceive. Then, the brain shuts off, so to speak.

Another way to look at this is that one of the reasons we think there is a predictive brain is because it is very efficient in terms of energy consumption. If the brain is predicting the world and it only needs to compare its prediction to the incoming signals, meaning that the brain only has to calculate the prediction error, that's less computationally costly than perceiving everything and calculating all the information. But if this is true, that the brain wants to maximally predict and then spend as little energy as possible, you could also say it actually prefers to be inactive. So that's a bit of a weird idea. Philosophers have posed the question that if this is true, then why don't we just kill ourselves? Because the dead spend no energy. 

To answer this question, neuroscientists have borrowed an idea from thermodynamics: systems try to be stable, and they do this by reducing what is called free energy. The more stable a system is, the less free energy there is. So, similar to the idea of reducing uncertainty, there is also this idea of reducing free energy. If you see the brain as a system that wants to be stable by reducing free energy, then the answer to the philosophers’ question is that dying would release a massive amount of free energy. Thus, we don’t kill ourselves, preferring to be as stable as possible.