Of course the mind is not a computer: no one seriously thinks it is
Why the Computational Theory of Mind is so persistently misunderstood, and why it still matters
It’s one of those maddeningly persistent misunderstandings, like the myth that we only use 10% of our brains or that memory works like a filing cabinet. Say the words computational theory of mind, and before you’ve even reached the end of the sentence, someone will leap in with: “But the brain’s not a computer!” As if that were the final word. As if that were what you’d actually said.
Guy Claxton is a perfect example. In The Future of Teaching, he takes issue with what he calls the “computer metaphor” of the mind.
“The computer metaphor for the mind… encourages the belief that knowledge is stuff that can be downloaded into the brain, and that the brain is a kind of passive storage device. This leads to a focus on information transmission and recall, rather than on the development of flexible, usable intelligence.”
The Future of Teaching, 2021, p. 15
He ridicules the idea that children are data processors, that knowledge is information to be “uploaded,” that thinking is computation in the sense a laptop might compute. And in so doing, he manages to completely misrepresent the very theory he claims to reject.
As is so often the case, Claxton is ‘not even wrong’. His misrepresentation is a category error. No cognitive scientist or philosopher of mind thinks we have some sort of organic parallel of silicon circuits and a CPU humming away behind our eyeballs. But that isn’t what the computational theory of mind (CTM) is claiming, not even remotely.
A short history of CTM
The idea that the mind might be understood in mechanistic terms isn’t new. Descartes, in the 17th century, imagined animals as clockwork automata and humans as a strange blend of machine and soul. But it wasn’t until the mid-20th century that a serious, technical theory of mental computation emerged, a theory that didn’t rely on metaphysical speculation but on logic, language, and emerging computer science.
It all began, more or less, with Alan Turing. In 1936, Turing introduced the idea of a hypothetical machine that could manipulate symbols on a strip of tape according to a set of rules. This abstract device - what we now call a Turing machine - was capable, in theory, of computing anything that could be computed. Crucially, it showed that symbolic operations could produce intelligent-seeming results without needing any understanding or intention. Although this was primarily a mathematical breakthrough, it had enormous philosophical consequences.
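To make that concrete, here is a minimal sketch of a Turing machine in Python - my own toy example, not Turing's notation: a tape of symbols, a read/write head, a current state, and a lookup table of rules. The rule table here, which simply flips the bits of a binary string, is purely illustrative.

```python
# A minimal Turing machine: a tape of symbols, a read/write head,
# a current state, and a table of rules. Nothing here "understands"
# anything; it just looks up (state, symbol) and acts.

def run(tape, rules, state="start", head=0, blank="_", max_steps=1000):
    tape = list(tape)
    for _ in range(max_steps):
        if state == "halt":
            break
        # Grow the tape lazily in either direction.
        if head < 0:
            tape.insert(0, blank)
            head = 0
        if head >= len(tape):
            tape.append(blank)
        symbol = tape[head]
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += {"R": 1, "L": -1}[move]
    return "".join(tape).strip(blank)

# Illustrative rule table: flip every bit, then halt at the blank.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run("10110", rules))  # -> 01001
```

Nothing in that loop knows what a bit is; it just matches shapes and follows rules - which is exactly the philosophical point.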
Fast forward to the 1950s and the dawn of cognitive science. Behaviourism, which had dominated psychology for decades, was beginning to crack under pressure. It could explain stimulus and response but said nothing about thought, intention, or meaning. No one could explain how a child learned language from scratch using only reinforcement. Enter Noam Chomsky, whose 1959 demolition of B.F. Skinner’s Verbal Behavior exposed behaviourism’s limitations and paved the way for new models of the mind.
At the same time, figures like Herbert Simon, Allen Newell, and Marvin Minsky were using early computers to simulate problem-solving and planning. These weren’t just machines crunching numbers; they were behaving, in some sense, intelligently. This prompted a shift: what if human cognition worked on similar principles? What if the mind itself was an information-processing system?
This idea gained philosophical heft through the work of Hilary Putnam and Jerry Fodor. Putnam’s “machine functionalism” argued that mental states are defined by what they do, not what they’re made of. Fodor, meanwhile, proposed that the mind had a language of thought - or Mentalese - a system of internal representation and symbolic rules, much like the structure of a computer program. Mental processes, on this view, were computations carried out over these mental symbols.
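A caricature of what Fodor had in mind - and it is only a caricature, with predicates invented for the occasion - can be sketched in a few lines of Python: beliefs as structured symbols, and a mental process as a rule that fires purely on the symbols’ shape.

```python
# A toy "language of thought": beliefs as structured symbols,
# inference as a rule that fires purely on the symbols' shape.
# The predicates here are invented for illustration.

beliefs = {
    ("BELIEVES", "john", ("RAINING",)),
    ("BELIEVES", "john", ("IF", ("RAINING",), ("WET", "streets"))),
}

def modus_ponens(beliefs):
    """If an agent believes P, and believes IF P THEN Q, add belief Q."""
    new = set()
    for (_, agent, content) in beliefs:
        if isinstance(content, tuple) and content[0] == "IF":
            _, antecedent, consequent = content
            if ("BELIEVES", agent, antecedent) in beliefs:
                new.add(("BELIEVES", agent, consequent))
    return beliefs | new

print(modus_ponens(beliefs))
# Adds ("BELIEVES", "john", ("WET", "streets")) - derived from form alone.
```

The rule never asks what ‘RAINING’ means; it only checks whether the right shapes are present - which is precisely what ‘computation over mental symbols’ amounts to.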
By the 1980s, this view was mainstream within cognitive psychology. David Marr famously articulated three levels of explanation for cognitive systems: the computational (what the system does and why), the algorithmic (how it does it), and the implementational (what physically realises it). Marr’s model offered clarity, allowing researchers to discuss mental functions without getting bogged down in the mechanics of neurons or neurotransmitters.
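One way to feel the force of Marr’s distinction is to notice that a single computational-level description leaves the algorithmic level wide open, and says nothing at all about implementation. The sketch below uses sorting purely as a stand-in example of mine (Marr’s own case study was vision):

```python
# Computational level: WHAT is computed, and why - a mapping from any
# list to the same items in ascending order (ordered data is searchable).

# Algorithmic level: HOW it is computed. Two very different algorithms
# satisfy the same computational-level description.

def insertion_sort(xs):
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

def merge_sort(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

# Implementational level: what physically realises the algorithm -
# silicon, vacuum tubes, or neurons. Nothing in the code above cares.
assert insertion_sort([3, 1, 2]) == merge_sort([3, 1, 2]) == [1, 2, 3]
```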
Later developments challenged aspects of the theory. Connectionist models, which store what they ‘know’ in the weighted connections between many simple units rather than in explicit rules, proved better than classical symbolic systems at handling perceptual tasks such as recognising faces, understanding accents, or learning irregular verbs. These models were robust, fault-tolerant, and flexible - everything that classical models found difficult. They suggested that cognition could emerge from distributed, pattern-based processes rather than discrete symbolic manipulation (the sketch below gives a flavour). Others, like John Searle, argued that computation alone couldn’t explain consciousness or understanding, most famously through his Chinese Room thought experiment.
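For contrast, here is about the smallest connectionist-flavoured sketch possible: a single unit learning the AND function by having its weights nudged in response to error. It is far simpler than the multi-layer networks connectionists actually built, but it shows the point - what the system ‘knows’ ends up spread across its weights, with no rule written down anywhere.

```python
# Connectionist contrast: no explicit rules, just weights adjusted by
# error. A single unit learning AND - a deliberately tiny stand-in for
# the multi-layer networks connectionists actually used.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

for epoch in range(25):
    for (x1, x2), target in data:
        output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        error = target - output
        # Learning is just nudging weights: the "knowledge" ends up
        # distributed across w and b, not stored as a rule anywhere.
        w[0] += lr * error * x1
        w[1] += lr * error * x2
        b += lr * error

print(w, b)  # weights that realise AND without representing it symbolically
```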
At the same time, thinkers like Andy Clark and Francisco Varela developed embodied and enactive theories of cognition. These models rejected the idea of the mind as an abstract computer altogether. Cognition, they argued, is not computation on representations, but an activity rooted in the body, shaped by our sensory-motor engagement with the world.