Here are some passages from the "Future" chapter in The Idea of the Brain: The Past and Future of Neuroscience by Matthew Cobb that I've blogged about recently. (See here and here and here for my previous posts about the book.)
I especially like the passage that begins with "A related view" below. Almost certainly the brain isn't at all like a computer, for reasons Cobb describes.
I also enjoyed the open-ended possibilities for where brain research is heading that conclude Cobb's book -- the last passage I've shared.
With science, research can go in many different directions. That's a big appeal of science: it follows the truth wherever it might lead. Even dead ends aren't failures, since they point researchers toward more productive avenues of knowledge.
Here are the excerpts.
These repeated calls for more theory may be a pious hope. It can be argued that there is no possible single theory of brain function, not even in a worm, because a brain is not a single thing (scientists are even hard put to come up with a precise definition of what a brain is).
As Crick observed, the brain is an integrated, evolved structure with different bits of it appearing at different moments in evolution and adapted to solve different problems.
...The nature of the brain -- simultaneously integrated and composite -- may mean that our future understanding will inevitably be fragmented and composed of different explanations for different parts. After all, as Marr put it, the brain is composed of 'a whole lot' of information-processing devices.
...The metaphors of neuroscience -- computers, coding, wiring diagrams and so on -- are inevitably partial. That is the nature of metaphors, which have been intensely studied by philosophers of science and by scientists, as they seem to be so central to the way scientists think.
But metaphors are also rich and allow insight and discovery. There will come a point when the understanding they allow will be outweighed by the limits they impose, but in the case of computational and representational metaphors of the brain there is no agreement that such a moment has arrived.
...A related view of the nature of consciousness turns the brain-as-computer metaphor into a strict analogy. Some researchers view the mind as a kind of operating system that is implemented on neural hardware, with the implication that our minds, seen as a particular computational state, could be uploaded onto some device or into another brain.
In the way this is generally presented, this is wrong, or at best hopelessly naive. The materialist working hypothesis is that brains and minds, in humans and maggots and everything else, are identical. Neurons and the processes they support -- which somehow include consciousness -- are the same thing.
In a computer, software and hardware are separate; however, our brains and our minds consist of what can best be described as wetware in which what is happening and where it is happening are completely intertwined.
Imagining that we can repurpose our nervous system to run different programs, or upload our mind to a server, might sound scientific but lurking behind this idea is a non-materialist view going back to Descartes and beyond. It implies that our minds are somehow floating about in our brains and could be transferred into a different head or replaced by another mind.
It would be possible to give this idea a veneer of scientific respectability by posing it in terms of reading the state of a set of neurons and writing that to a new substrate, organic or artificial.
But to even begin to imagine how that might work in practice, we would need both an understanding of neuronal function that is far beyond anything we can currently envisage and would require unimaginably vast computational power that precisely mimicked the structure of the brain in question.
...Every cell is not like a binary switch that can be turned on or off, forming a wiring diagram. Instead, the main way that the nervous system alters its working is by changes in the pattern of activity in networks of cells composed of large numbers of units; it is these networks that channel, shift, and shunt activity.
...When we extend this insight to understanding the brain, the key implication is that, as the title of a 1997 article strikingly put it, 'The Brain Has a Body.' And the body has an environment, and both affect how the brain does what it does.
This might seem trivially obvious, but neither the body nor the environment feature in modeling approaches that seek to understand the brain. The physiological reality of all brains is that they interact with the body and the external environment from the moment they begin to develop.
...Animals are not robots piloted by brains; we are all, whether maggots or humans, individuals with agency and a developmental and evolutionary history. Those factors are all involved with how our brains work and need to be integrated into our models.
...The significance of remembering that brains are in bodies can be seen from the way that brains interact with the gut microbiome. 'Germ-free' mice have no microbes living in their gut, and as a result show altered levels of serotonin in the brain and lower levels of anxious behavior.
The unlikely causal link between microbes and behaviour was shown when the introduction of a normal microbiome into the mice reversed both these effects: fundamental aspects of brain biochemistry can be affected by the microbes that live in the gut.
...Perhaps the various computational projects will come good and theoreticians will crack the functioning of all brains, or the connectomes will reveal principles of brain function that are currently hidden from us.
Or a theory will somehow pop out of the vast amounts of imaging data we are generating.
Or we will slowly piece together a theory (or theories) out of a series of separate but satisfactory explanations.
Or by focusing on simple neural network principles we will understand higher-level organisation.
Or some radical new approach integrating physiology and biochemistry and anatomy will shed decisive light on what is going on.
Or new comparative evolutionary studies will show how other animals are conscious and provide insight into the functioning of our own brains.
Or models developed to explain simple brains will turn out to be scalable and will explain ours too.
Or the default mode network identified in humans will turn out to be applicable to other animals and to hold the key to overall function.
Or unimagined new technology will change all our views by providing a radical new metaphor for the brain.
Or our computer systems will provide us with alarming new insight by becoming conscious.
Or a new framework will emerge from cybernetics, control theory, complexity and dynamical systems theory, semantics, and semiotics.
Or we will accept that there is no theory to be found, because brains have no overall logic, just adequate explanations of each tiny part, and we will have to be satisfied with that.