Today I finished reading the final pages of Anil Seth's book, "Being You: A New Science of Consciousness." Here's a provocative passage from the Epilogue.
Everything in conscious experience is a perception of sorts, and every perception is a kind of controlled -- or controlling -- hallucination.
What excites me most about this way of thinking is how far it may take us.
Experiences of free will are perceptions. The flow of time is a perception. Perhaps even the three-dimensional structure of our experienced world and the sense that the contents of perceptual experience are objectively real -- these may be aspects of perception too.
The tools of consciousness science are allowing us to get ever closer to Kant's noumenon, the ultimately unknowable reality of which we, too, are a part.
This idea that conscious experience is a perception of sorts fits with what Sam Harris often says in the guided meditations I listen to on his Waking Up app.
For example, he'll ask whether it seems that we're perceiving the world from a place somewhere inside our head. Then Harris will point out that this seeming is itself a conscious experience, just as the objects we perceive in the world are themselves conscious experiences.
In other words, we can't get outside of what the brain is doing. There's no ethereal observer, or self, floating around somewhere in our cranium. There are only many billions of neurons, connected in trillions of ways, which together produce our conscious experiences.
I thought about trying to explain this using what I've learned from Seth's book. Then I recalled that Seth had written an essay for the Boston Globe, "Reality is what you make of it," in which he explains how the brain hallucinates reality.
I've copied the essay in below; it covers the key points of the book.
I open my eyes and a world appears. I’m sitting on a plastic chair on the deck of a tumbledown wooden house, high in a cypress forest a few miles north of Santa Cruz, Calif. It’s early morning. Looking straight out, I can see tall trees still wreathed in the cool ocean fog that rolls in every night, sending the temperature plummeting. I can’t see the ground, so the deck and the trees all seem to be floating together with me in the mist.
There are some other plastic chairs, a table, and a tray arranged with coffee and bread. I can hear birdsong, some rustling around in the back — the people I’m staying with — and a distant murmur from something I can’t identify. Not every morning is like this; this is a good morning. I have to persuade myself, not for the first time, that this extraordinary world is a construction of my brain, a kind of “controlled hallucination.”
Whenever we are conscious, we are conscious of something, or of many things. These are the contents of consciousness. To understand how they come about, and what I mean by controlled hallucination, let’s change our perspective.
Imagine for a moment that you are a brain. Really try to think about what it’s like up there, sealed inside the bony vault of the skull, trying to figure out what’s out there in the world. There’s no light, no sound, no anything — it’s completely dark and utterly silent. When it forms perceptions, all the brain has to go on is a constant barrage of electrical signals that are only indirectly related to things out in the world, whatever they may be.
These sensory inputs don’t come with labels attached (“I’m from a cup of coffee.” “I’m from a tree.”). They don’t even arrive with labels announcing their modality, whether they are visual or auditory or sensations of touch, temperature, or proprioception (the sense of body position).
How does the brain transform these inherently ambiguous sensory signals into a coherent perceptual world full of objects, people, and places? The essential idea is that the brain is a prediction machine, so that what we see, hear, and feel is nothing more than the brain’s best guess of the causes of its sensory inputs. Following this idea all the way through, we will see that the contents of consciousness are a kind of waking dream — a controlled hallucination — that is both more than and less than whatever the real world really is.
Here’s a common-sense view of perception. Let’s call it the “how things seem” view.
In this view, there’s a mind-independent reality out there, full of objects and people and places that have properties like color, shape, texture, and so on. Our senses act as transparent windows onto this world, detecting these objects and their features and conveying this information to the brain, whereupon complex neuronal processes read it out to form perceptions.
A red coffee cup out there in the world leads to a perception of a red coffee cup generated within the brain. As to who or what is doing the perceiving — well, that's the "self," isn't it? The "I behind the eyes," one might say: the recipient of wave upon wave of sensory data, which uses its perceptual readouts to guide behavior, to decide what to do next. There's a cup of coffee over there. I perceive it and I pick it up. I sense, I think, and then I act.
This is an appealing description. Patterns of thinking established over decades, maybe centuries, have accustomed us to the idea that the brain is some kind of computer perched inside the skull, processing sensory information to build an inner picture of the outside world for the benefit of the self. This picture is so familiar that it can be difficult to conceive of any reasonable alternative. Indeed, many neuroscientists and psychologists still think about perception in this way, as a process in which the brain works from the “bottom up” to discern features of things in the world.
Here’s how the bottom-up picture is supposed to work: Stimuli from the world — light waves, sound waves, molecules conveying tastes and smells, and so on — impinge on sensory organs and cause electrical impulses to flow “upwards” or “inwards” into the brain. These sensory signals pass through several distinct processing stages, and at each stage the brain picks out increasingly complex features.
Let’s take vision as an example. At first the brain might detect features like luminance or edges, and later it might detect the parts of discrete objects — such as eyes and ears, or wheels and side-view mirrors. Still later stages of this processing system would respond to whole objects, or object categories, like faces and cars.
In this way, the external world with its objects and people and all sorts of everything becomes recapitulated in a series of features extracted from the river of sensory data flowing into the brain. Signals flowing in the opposite direction — from the "top down" or the "inside out" — serve to refine or otherwise constrain the bottom-up flow of sensory information.
This bottom-up view of perception fits well with what we know about the anatomy of the brain, at least at first glance. Perceptual systems of all modalities are organized in the brain as hierarchies. In the visual system, for example, the primary visual cortex of the brain is close to sensory inputs, while the parietal and frontal cortices, where later stages of processing are believed to occur, are further away.
Studies of brain activity also seem friendly to this bottom-up view. Experiments going back decades — investigating the visual systems of cats and monkeys — have repeatedly shown that neurons at early stages of visual processing respond to simple features like edges, while neurons at later stages respond to complex features like faces. More recent experiments using methods like functional magnetic resonance imaging have revealed much the same thing in human brains.
You can even build artificial “perceiving systems” this way. Machine vision systems based on artificial neural networks are nowadays achieving impressive performance levels, in some situations comparable to those of humans. These systems, too, are frequently based on bottom-up theories.
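To make the bottom-up picture concrete, here is a minimal sketch of such a feature hierarchy. The essay names no particular framework, so PyTorch is my own choice for illustration; the point is only that information flows strictly upwards, with each stage responding to more complex features than the last.

```python
import torch
import torch.nn as nn

# A toy bottom-up "perceiving system": signals flow only upwards,
# from simple features toward whole object categories.
bottom_up = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # early stage: edge-like features
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # middle stage: parts of objects
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                    # late stage: whole object categories
)

image = torch.randn(1, 3, 32, 32)   # a stand-in for input arriving at the eye
category_scores = bottom_up(image)  # nothing flows top-down in this pass
print(category_scores.shape)        # torch.Size([1, 10])
```

Note that nothing in this network predicts its own input; it only reads features out. That is exactly the assumption the rest of the essay overturns.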
With all these points in its favor, the bottom-up “how things seem” view of perception seems to be on pretty solid ground.
Ludwig Wittgenstein: “Why do people say that it was natural to think that the sun went round the earth rather than that the earth turned on its axis?”
Elizabeth Anscombe: “I suppose, because it looked as if the sun went round the earth.”
Wittgenstein: “Well, what would it have looked like if it had looked as if the earth turned on its axis?”
In this delightful exchange between Wittgenstein and his fellow philosopher (and biographer) Elizabeth Anscombe, the legendary Austrian thinker uses the Copernican revolution to illustrate the point that how things seem is not necessarily how they are. Although it seems as though the sun goes around the earth, it is of course the earth rotating on its own axis that gives us night and day, and it is the sun, not the earth, that sits at the center of the solar system.
Nothing new here, you might think, and you’d be right. But Wittgenstein was driving at something deeper. His real message for Anscombe was that even with a greater understanding of how things actually are, at some level things still appear the same way they always did. The sun rises in the east and sets in the west, same as always.
As with the solar system, so with perception. I open my eyes and it seems as though there’s a real world out there. Today I’m at home in Brighton, England, and there are no cypress trees as there were in Santa Cruz, just the usual scatter of objects on my desk, a red chair in the corner, and beyond the window a totter of chimney pots. These objects seem to have specific shapes and colors, and for the ones closer at hand, smells and textures too. This is how things seem.
Although it may seem as though my senses provide transparent windows onto a mind-independent reality, and that perception is a process of “reading out” sensory data, what’s really going on is — I believe — quite different. Perceptions do not come from the bottom up or the outside in, they come primarily from the top down or the inside out. What we experience is built from the brain’s predictions, or best guesses, about the causes of sensory signals.
There is a real world out there, but the way in which that real world appears in conscious experience is always a construction — a writing as much as a reading. As with the Copernican revolution, this top-down view of perception remains consistent with much of the existing evidence, leaving unchanged many aspects of how things seem, while at the same time changing everything.
This notion of perception is best described as “controlled hallucination,” a phrase I first heard from the British neuroscientist Chris Frith many years ago. The essential ingredients of the controlled hallucination view, as I think of it, are as follows.
First, the brain is constantly making predictions about the causes of its sensory signals, predictions that cascade in a top-down direction through the brain’s perceptual hierarchies. If you happen to be looking at a coffee cup, your visual cortex will be formulating predictions about the causes of the sensory signals that originate from this coffee cup.
Second, sensory signals, which stream into the brain from the bottom up, or outside in, keep these perceptual predictions tied in useful ways to their causes — in this case, the coffee cup. These signals function as “prediction errors,” registering the difference between what the brain expects and what it gets at every level of processing.
These bottom-up signals therefore serve to refine and calibrate the top-down predictions. By adjusting top-down predictions so as to suppress bottom-up prediction errors, the brain’s perceptual best guesses maintain their grip on their causes in the world.
The third and most important ingredient in the controlled hallucination view is the claim that perceptual experience — in this case the subjective experience of “seeing a coffee cup” — is determined by the content of the (top-down) predictions, and not by the (bottom-up) sensory signals. We never experience sensory signals themselves; we only ever experience interpretations of them.
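Mixed together, the three ingredients fit in a few lines of code. The sketch below is my own toy illustration, not anything from the book: a single top-down guess tracks a single hidden cause, the mismatch with each noisy sensory sample plays the role of the prediction error, and updating the guess to suppress that error is what keeps the hallucination controlled.

```python
import numpy as np

rng = np.random.default_rng(0)

true_cause = 2.0   # a hidden cause out in the world (say, the cup's brightness)
estimate = 0.0     # ingredient one: the brain's top-down prediction, its best guess
error_gain = 0.1   # how strongly prediction errors update the guess

for _ in range(100):
    sensory_input = true_cause + rng.normal(scale=0.2)  # noisy bottom-up signal
    prediction_error = sensory_input - estimate         # ingredient two: expected vs. received
    estimate += error_gain * prediction_error           # adjust the guess to suppress the error

print(f"final best guess: {estimate:.2f} (true cause: {true_cause})")
```

Ingredient three is reflected in what gets reported at the end: the perceptual content here is the estimate, the prediction itself, never the raw sensory input.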
Mix these ingredients together and we’ve cooked up a Copernican inversion for how to think about perception. It seems as though the world pours itself directly into our conscious minds through our sensory organs. With this mindset, it is natural to think of perception as a process of bottom-up feature detection — a “reading” of the world around us. But what we actually perceive is a top-down, inside-out neuronal fantasy that is reined in by reality. It is not a transparent window onto whatever that reality may be.
And — to channel Wittgenstein once more — what would it seem like, if it seemed as if perception was a top-down best guess? Well, just as the sun still rises in the east and sets in the west, if it seemed as if perception was a controlled hallucination, the coffee cup on the table — the entirety of anyone’s perceptual experience — would still seem the same way it always did and always will.
When we think about hallucination, we typically think of some kind of internally generated perception, a seeing or a hearing of something that isn’t actually there, or that other people don’t hear or see — as can happen in schizophrenia or perhaps during a psychedelic adventure. These associations place hallucination in contrast to “normal” perception, which is assumed to reflect things as they actually exist out in the world.
In the top-down view of perception, this sharp distinction becomes a matter of degree. Both normal perception and abnormal hallucination involve internally generated predictions about the causes of sensory inputs, and the two share a core set of mechanisms in the brain.
The difference is that in normal perception, what we perceive is tied to — controlled by — causes in the world and the body, whereas in the case of hallucination our perceptions have, to some extent, lost their grip on these causes. When we hallucinate, as considerable evidence from psychological and brain imaging studies is now indicating, our perceptual predictions are not properly updated in light of prediction errors.
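As a toy illustration of that loss of grip (again my own sketch, not the book's), take the same update rule and sweep the weight given to prediction errors from full to none. With a high gain the best guess stays locked onto the worldly cause; as the gain falls toward zero, the guess is left to the internal prediction alone.

```python
import numpy as np

rng = np.random.default_rng(0)
true_cause = 2.0  # the worldly cause perception should track

# Sweep the weight on prediction errors from full (1.0) to none (0.0).
for gain in (1.0, 0.5, 0.05, 0.0):
    estimate = 0.0  # the brain's initial internal prediction
    for _ in range(10):
        sensory_input = true_cause + rng.normal(scale=0.2)
        estimate += gain * (sensory_input - estimate)  # error-weighted update
    print(f"error gain {gain:.2f} -> final best guess {estimate:.2f}")
```

There is no single gain at which perception flips into hallucination; the grip simply loosens by degrees.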
If perception is controlled hallucination, then — equally — hallucination can be thought of as uncontrolled perception. The two are different, but to ask where to draw the line is like asking where the boundary is between day and night. You could even say that we’re all hallucinating all the time. It’s just that when we agree about our hallucinations, that’s what we call reality.
Anil Seth, a professor of cognitive and computational neuroscience at the University of Sussex, in the United Kingdom, is the author of “Being You: A New Science of Consciousness,” from which this essay is adapted, with permission from Dutton, an imprint of Penguin Publishing Group. Copyright © 2021 by Anil Seth.