The cost of borrowed thought: a response to 'Your Brain on ChatGPT'
AI, cognitive debt, drawing bicycles and the illusion of effortlessness
There’s a peculiar satisfaction in solving a problem unaided. The click of understanding. The felt sense of ownership. A little corner of the world, momentarily tamed by one’s own effort. But what happens when that effort is replaced by fluency without friction? When a generative model offers not just suggestions but seductive, elegant, articulate, plausible answers that arrive in seconds? The recent study Your Brain on ChatGPT invites us to consider not simply whether AI is useful in education (it clearly is), but what we lose when we outsource the work of thinking to machines.
The paper’s core claim is not that AI makes us stupid - though some headlines tried to squeeze that juice - but that it reconfigures the mental landscape. Using AI too early in a task, before one has wrestled with the ideas independently, may produce a kind of cognitive atrophy. Brain scans show lower coherence in key cognitive bands; participants struggle to recall or explain ideas they had ‘written’ with AI help; and subjective feelings of ownership dwindle. None of this should come as a particular shock, but it is revealing.
There’s something distinctly Socratic about the concern. Socrates, you’ll recall, was famously suspicious of writing.1 He feared it would weaken memory and replace true understanding with the illusion of it. “They will appear to be omniscient,” Plato wrote in Phaedrus, “and will generally know nothing.” Writing was one of the earliest examples of cognitive offloading. What writing was to memory, perhaps AI is to cognition more broadly: a prosthetic that risks becoming a crutch. A shortcut so efficient it bypasses not just effort but engagement.
The authors of the study borrow the term cognitive debt to describe the downstream effects of this shortcutting. Like instant credit, it feels painless in the moment. You get what you want - an answer, an essay, a plausible-sounding synthesis - without having to grind it out. But the cost is deferred and will need repaying. Over time, you become less able to do the thing yourself. More dependent, less confident, less cognitively agile. The danger, seen in this way, isn’t that AI will replace thinking, but that it will make the very act of thinking feel unnecessary.
Yet the real subtlety in the study lies not in its critique of AI per se, but in its exploration of sequence. Participants who first drafted independently and then used AI showed no such erosion in neural coherence. In fact, the combination often improved outcomes.2 Like a good editor, the machine refined and clarified without replacing the underlying thought. This is no Luddite tract: the message is not “ban the robots,” but rather, “don’t invite them in too early.”
This matters both pedagogically and philosophically. It suggests that AI is not inherently corrupting, but that its effects are context-dependent. When students are supported to think first - stumble, try, fail, revise - AI becomes a tool of amplification, not substitution. But if it’s used to bypass uncertainty, it becomes a hollowing mechanism that simulates success without the underlying substance.
There’s an epistemological knot to untangle here: AI models like GPT generate outputs that mimic understanding but lack any semantic anchor. They are, in Daniel Dennett’s terms, “competence without comprehension.”3 When students use such tools to produce fluent, accurate-sounding prose, they may unwittingly reproduce this emptiness. Like AI itself, students may appear to know things they really don’t, and worse, they won’t be conscious of the gap. This is the illusion of knowledge.
This illusion - the seductive belief that because we can recognise something, we therefore understand it - is nowhere more starkly illustrated than in psychologist Rebecca Lawson’s famous 2006 study The science of cycology: Failures to understand how everyday objects work. Not wanting to conflate artistic skill with knowledge of bicycles, Lawson asked participants to complete partially drawn diagrams of bicycles by adding the chain and pedals. Given how familiar bicycles are, you might expect near-perfect accuracy. Instead, many responses were structurally impossible. Chains floated unconnected. Pedals appeared on the front wheel. Basic mechanical logic was routinely violated.
The implication is clear: repeated exposure, even daily use, does not guarantee understanding. The sense that because we can use something, we must know how it works is the same illusion that AI-enabled writing encourages: we recognise fluent output and mistake it for our own competence. But recognition is not recall, and familiarity is not understanding.
Lawson’s findings, then, are not just about bicycles. They are about epistemology. They remind us that deep knowledge is generative. And that’s exactly the danger when AI tools offer us polished answers before we’ve done the cognitive lifting ourselves: we risk becoming like those sketchers, confident in our command, until someone hands us a blank page and says, “Now draw.”
Educationally, this challenges one of our fondest myths: that ease is a sign of mastery. We tend to equate fluency with learning, speed with competence, confidence with knowledge. But the desirable difficulty literature reminds us otherwise. ‘Learning’ that feels easy is often shallow. My working definition is ‘Learning is the long-term retention of knowledge and the ability to transfer it to new contexts.’ Retention is about durability - how long knowledge lasts - and transfer is about flexibility - how well we can use it in new circumstances. If knowledge is retained but cannot be transferred, it’s hard to argue that learning has taken place. The point, then, is not to make thinking feel effortless, but to make effort feel meaningful.
Caution required
Of course, this study is not conclusive. Although its design is elegant, its scale is modest. We’re talking about 54 students over four months, with only 18 completing the final crossover phase, which leaves the most provocative results resting on thin ground. We should also bear in mind that this paper is a pre-print and has not been peer reviewed. But it lands a crucial blow against a rising tendency: the belief that access to powerful tools obviates the need for effortful thinking. In fact, it repositions the need for effort and makes its sequencing more critical than ever.
So what should schools do? Well, first, don’t panic. Attempting to ban all access to AI feels like Cnut trying to command the tide not to rise. Instead, we need to teach students how to use AI: when, with what prior knowledge, and to what end. We need to build a curriculum where human thought is scaffolded. We need to invite students to wrestle first, polish later. And we need to produce assessments which require students to expose their thinking, not just rattle off a polished performance. In other words, teach them that the hardest work is often the most valuable precisely because it cannot be automated. And, most urgently, if we design assessments that can be completed without students having to expend effort, we’ve probably not thought hard enough about what assessment should look like.
There’s a metaphor here, buried in the neural data. Brains that had thought first lit up in synchrony, whereas brains that started with AI showed scattered, weaker patterns. Connection, in the deepest sense, requires effort.
“Brain‑to‑LLM participants entered Session 4 after three AI‑free essays. The addition of AI assistance produced a network‑wide spike in alpha‑, beta‑, theta‑, and delta‑band directed connectivity.” What this means - I think - is that when students think first and only afterwards invite AI in, their brains ‘light up’, which, we are asked to accept, means that executive functions, memory systems, and visual processing networks all engage more fully. This may not be a correct inference, but it does sound plausible.
This phrase comes from The Intentional Stance. Dennett uses it to describe systems - like certain animals or machines - that can behave in ways that appear intelligent or purposeful, despite lacking any real understanding or awareness. In the context of AI, he later revisited this idea in more depth in From Bacteria to Bach and Back, arguing that current (2017) AI models (including language models) simulate intelligence through pattern recognition and statistical mimicry, not through understanding. As I understand it, this is where we still are.