I love it! Partly because I loved the movie The Matrix. There's something wildly appealing about our consciousness being deceived about the nature of reality so thoroughly that it is very difficult to escape the bounds of that deception.
The "it" that I'm loving is a book by Michael Graziano, a professor of psychology and neuroscience at Princeton University. Rethinking Consciousness: A Scientific Theory of Subjective Experience is one of the best books I've read about the nature of consciousness, and I've read a lot of them.
Here's a 13-minute video where Graziano describes the key aspects of his Attention Schema Theory.
And here are some excerpts from his book that provide a pretty good overview of the theory, especially if you watch the video first.
Graziano's approach explains why almost all people believe that they possess, or are, an immaterial consciousness. That just feels so real, so right, so immediately obvious.
Which is exactly what the Attention Schema Theory predicts. A schema isn't the real thing. It is a simplified version of the thing. Evolution hasn't given us the ability to be aware of what the hundred billion or so neurons in the brain are doing.
Instead, the brain presents us with a view of the world, and of ourselves, that is functional because it leaves out unnecessary details, such as how the physical brain produces a metaphysical sense of consciousness and attention.
This passage describes why the Attention Schema Theory is believable. Covert attention refers to "a roving mental focus that can take in information apart from where the senses are pointed, like hearing sirens at a distance or recalling a memory." (from the book jacket)
In my view, the attention schema theory of consciousness has some inevitability to its logic. First, we know that the cortex uses covert attention. Second, we know that it needs to control that attention. Third, we know that the brain must have an internal model of attention in order to control that attention.
Fourth, we know that a detailed, fully accurate internal model is at best wasteful and at worst harmful to the process, and so, this internal model of attention would necessarily leave out the mechanistic details.
Therefore, and fifth, an attention schema would depict the self as containing an amorphous, nonphysical, internal power, an ability to know, to experience, and to respond, a roving mental focus -- the essence of covert attention without the underpinning details.
From first principles, if you had to build a well-functioning brain that had a powerful, cortical style of covert attention, you would build a machine that, drawing on the information constructed within it, would assert that it has a non-physical consciousness.
That cortical machine, of course, would not know that its subjective conscious experience is a construct or a simplification. It would take the nonphysical nature of conscious experience as a reality, because -- somewhat tautologically -- the brain only knows what it knows. It is captive to its own information.
This passage describes why the attention schema in humans differs from that in computers, at least as computers exist currently.
The human attention schema, having been shaped over hundreds of millions of years of evolution, has weird, biologically bumpy content.
It depicts attention as an invisible property, a mind that can experience or take possession of items, a force that empowers me to act and to remember, something that in itself has no physical substance but still lurks privately inside me.
The attention schema is more than a pointer to an object or a couple of lines of code. It builds a rich picture of attention and its predictable consequences. Build a machine with that kind of an attention schema, containing the weird, biologically messy lumps of the real one, and you'll have a machine that can claim to be conscious in the same ways that humans do.
No major technological hurdle stands in the way of an artificial attention schema. It is, arguably, the easiest and certainly the most circumscribed of all the components to build into the machine.
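To make the idea concrete, here's a minimal toy sketch, entirely my own illustration and not Graziano's model: an agent whose covert attention is a simple, fully mechanistic process (picking the strongest signal), but whose internal model of that attention, its "attention schema," omits the mechanism and describes attention only as a private, nonphysical experience. The class name, signal values, and schema fields are all invented for illustration.

```python
# Toy illustration (not Graziano's model): covert attention is a concrete
# mechanism, but the agent's internal model of it leaves the mechanism out.

class ToyAgent:
    def __init__(self):
        self.signals = {}   # sensory inputs by source
        self.focus = None   # the actual mechanism's current target
        self.schema = None  # the simplified internal model of attention

    def sense(self, source, strength):
        self.signals[source] = strength

    def attend(self):
        # Mechanistic detail: "attention" here is just an argmax over signals.
        self.focus = max(self.signals, key=self.signals.get)
        # The schema models the *fact* of attending, not how it works.
        self.schema = {
            "attending_to": self.focus,
            "described_as": "a private, nonphysical experience",
            "mechanism": None,  # the argmax appears nowhere in the model
        }

    def report(self):
        # The agent can only report what its internal model contains,
        # so its self-description is sincere -- and mechanism-free.
        return (f"I am aware of {self.schema['attending_to']}; "
                f"my awareness is {self.schema['described_as']}.")

agent = ToyAgent()
agent.sense("siren", 0.9)
agent.sense("birdsong", 0.2)
agent.attend()
print(agent.report())
# I am aware of siren; my awareness is a private, nonphysical experience.
```

The point of the sketch is the asymmetry: the `attend` method is entirely physical and inspectable, yet `report` can only draw on the schema, which depicts attention as nonphysical. The agent is, in the book's phrase, captive to its own information.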
And this passage gets at the objective nature of the Attention Schema Theory, in principle.
From the perspective of the attention schema theory, we can know, with objective certainty, whether a machine has the same kind of consciousness that people have. And direct personal experience is not the only way -- it's not even a very good way -- to know about one's own consciousness.
"Of course I'm conscious. I know I am, because I have a direct experience of it."
If that isn't the definition of a circular logic loop, I don't know what is. Consciousness is a direct experience. Therefore, the statement is tantamount to, "I know I'm conscious, because I'm conscious."
As I've said before, the machine is captive to the information it contains.
A particular internal model informs the machine that it has consciousness, and therefore it "knows" that it is conscious. The internal model informs the machine that its consciousness is without physical substance and forever private, and therefore it "knows" that its consciousness is unconfirmable by anyone else.
But an internal model is information, and information can be objectively measured. We don't need to rely on personal affirmation.
In the attention schema theory, to go about determining whether a machine is conscious, we should probe its innards to find out whether it contains an attention schema, and we should read the information within the attention schema.
We will then learn, with objective certainty, whether this is a machine that thinks it has a subjective conscious experience in the same way that we think we do. If it has the requisite information in that internal model, then, yes. If not, then no. All of this is, in principle, measurable and confirmable.
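As a rough sketch of what "probing the innards" could mean, here is a hypothetical check of my own devising (the field names and criteria are illustrative assumptions, not anything from the book): read a machine's internal model of attention directly, and ask whether it depicts attention the way the human schema does, as a private, nonphysical power with no mechanistic detail.

```python
# Illustrative only: inspect an internal model rather than asking the
# machine for a verbal self-report.

def claims_human_style_consciousness(internal_model):
    """Return True if the internal model of attention depicts it as a
    nonphysical experience and omits the underlying mechanism."""
    if internal_model is None:
        return False  # no attention schema at all
    return (internal_model.get("mechanism") is None
            and "nonphysical" in internal_model.get("described_as", ""))

# A machine with a human-style schema:
schema_a = {"attending_to": "siren",
            "described_as": "a private, nonphysical experience",
            "mechanism": None}

# A machine whose model exposes its own mechanism, and so makes no such claim:
schema_b = {"attending_to": "siren",
            "described_as": "argmax over signal strengths",
            "mechanism": "argmax"}

print(claims_human_style_consciousness(schema_a))  # True
print(claims_human_style_consciousness(schema_b))  # False
```

The check relies on no self-report at all: it is an objective read of the information in the model, which is the kind of measurement the passage says is possible in principle.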