Are we getting stupider?
Why the future of freedom depends on schools as gymnasia of the body and mind.
Another week, another episode of It’s Your Time You’re Wasting
For those who fancy hearing Martin and me chunter on, you can listen to the podcast. Or, as ever, you can read the write-up below.
For most of the twentieth century IQ scores rose steadily. Psychologists called it the Flynn Effect: around three IQ points per decade, across multiple countries, explained variously by improved nutrition, schooling, and the demands of modern life. But since the 1990s, the trend has reversed. In many rich nations, IQ scores are flatlining or even declining. At the same time, artificial intelligence is beginning to take over ever more of our thinking.
So: are we getting stupider? Are we living in what Daisy Christodoulou has called a stupidogenic society: a culture that doesn’t just fail to support intelligence, but actively corrodes it?
She reflects on how older generations, with little formal schooling, often possessed impressive skills like rapid mental arithmetic or musical ability, which have become rarer as technology makes such practice unnecessary. She links this loss to two major trends: the reversal of the Flynn Effect and the rise of cognitive offloading, where tasks once done by hand or mind are now outsourced to machines. While offloading makes life easier and societies richer, it also weakens the ‘mental muscles’ that underpinned deeper skills and lifelong capacities. Just as an ‘obesogenic’ society creates physical laziness, today’s digital environment is ‘stupidogenic’, encouraging shallow thinking and dependence on machines. Some people will respond with new forms of ‘intellectual fitness’ (puzzles, tutoring, competitions), but for most, the danger is atrophy. Schools, she argues, must resist this trend by acting as gymnasia for the mind, deliberately training the basic skills that technology no longer requires but which remain vital for real intelligence.
The rise and fall of IQ
The reversal of the Flynn Effect has puzzled researchers for the last two decades. After half a century of steady gains, test scores in countries like Norway, Denmark, Britain and Australia began to decline in the 1990s.1 Was this the first sign that our collective brains were genuinely getting weaker?
Bratsberg & Rogeberg’s 2018 Norwegian Armed Forces study suggests not. Because every 18-year-old male conscript has sat the same intelligence tests for decades, the dataset is one of the largest and longest-running in existence. Looking closely, the researchers concluded that the apparent rise and fall of IQ might not reflect real shifts in general intelligence at all, but rather the quirks of the tests themselves.
Some of the subtests had aged badly. Vocabulary items relied on words that had fallen out of common use; young people weren’t necessarily less intelligent, they were simply unfamiliar with archaic language. Numerical reasoning tasks often depended on long division and pencil-and-paper methods that schools had largely stopped teaching - again, a change in practice rather than in capacity. By contrast, figure-matrix puzzles - the kind where you identify patterns in sequences of shapes - showed dramatic improvements, most likely because such problems now crop up in apps, puzzles, videogames and test-prep courses. In other words, exposure and familiarity, not raw brainpower, may be driving the trends.
The key lesson here is not that humanity has suddenly grown dim-witted, but that how we measure intelligence matters. Test scores are exquisitely sensitive to context, culture and curriculum. Change the test content, and you can produce the illusion of rising or falling intelligence across generations.
A ‘stupidogenic’ society
But even if some of the measured decline in IQ turns out to be a statistical artefact, that doesn’t mean we can relax. Measurement issues may muddy the picture, but they don’t erase the wider cultural concern.
Rather than cultivating attention and memory, today’s environment tends to fragment them. Quick rewards are valued over patient effort, and in education the emphasis has often shifted from building deep knowledge to promoting vaguely defined ‘skills.’ The result is that many students are left with very little substance to think with.
The work of educational evolutionary psychologist David Geary suggests that human cognition is the product of evolutionary pressures. Our brains were shaped for survival in pre-modern environments: reading social cues, navigating landscapes, spotting predators, securing food. In those contexts, the ‘curriculum’ of daily life provided constant, unavoidable practice in the skills that mattered most for survival.
Modern life, however, has severed many of those links. Schools and digital technologies often fail to align with our evolved strengths. They neither demand nor reward the slow accumulation of factual knowledge and the deliberate rehearsal of reasoning. Geary’s point is that higher-order thinking doesn’t emerge spontaneously; it has to be built on secure foundations of memory and practice. Without this scaffolding, potential withers.2
The broader implication is striking: intelligence is not fixed. It rises or falls according to the cognitive diet provided by culture and schooling. A society can be cognitively nutritious (rich in challenge, knowledge, and opportunities for practice) or cognitively impoverished, full of distraction and easy shortcuts. And if Daisy is right, ours may be tending toward the latter.
Offloading and the predictive brain
Andy Clark’s work on the predictive brain reframes what’s happening. Our brains are not passive recording devices but prediction machines, constantly generating expectations about what will happen next and updating them when surprised. This is why perception, thought and action are never simply reactive; they are anticipatory, leaning into the future.
Humans have always found ways to offload parts of this predictive burden onto the environment. Writing stores memory outside the skull. Maps spare us from holding an entire landscape in our head. Calculators relieve us of manual arithmetic. These are extensions of our cognition, scaffolding that lightens the load.
Clark and philosopher David Chalmers illustrated this with the famous Otto thought experiment. Otto suffers from Alzheimer’s and can’t rely on his biological memory. Instead, he carries a notebook where he writes down addresses, facts and reminders. When Otto wants to visit a museum, he consults his notebook, just as another person might consult their brain. In functional terms, the notebook is his memory. Clark and Chalmers argue that the mind extends into the world: the boundary between brain and environment is porous, and external props can become integral parts of thinking.3
But AI represents a leap far beyond Otto’s notebook. A notebook stores information but still requires Otto to interpret and apply it. A calculator gives you an answer but only for the specific sum you enter. Generative AI, by contrast, doesn’t just store or compute: it can generate whole chains of reasoning, essays, arguments, even creative products. What used to require hours of planning, drafting, and revision can now be summoned in seconds.
That is both the marvel and the danger. Offloading arithmetic doesn’t threaten your capacity to reason about politics or write a novel. Offloading reasoning itself risks hollowing out the very skills you need to remain an independent thinker. If Otto had outsourced not just his memory but also his judgement, his creativity, and his problem-solving, would we still say the mind was “extended,” or that it was being replaced?
AI is not just another labour-saving device. It may be eroding the very faculties it promises to serve.
What AI is doing to us
The early evidence is sobering. At MIT, researchers hooked students up to EEGs while they completed essay-writing tasks, sometimes alone, sometimes with ChatGPT. The results were striking: when students leaned on the chatbot, neural activity in brain regions associated with creativity and attention dropped noticeably. Even more tellingly, many struggled to recall quotations from the very essays they had “produced” with AI’s help. It wasn’t just that they were thinking less in the moment — they were also retaining less afterwards.4
A team at Microsoft Research took a broader lens, surveying 319 knowledge workers who used AI at least weekly across nearly a thousand tasks. Most participants described their work as easier and quicker with AI, but only around half of the tasks required any real critical thought: things like revising a poor AI output or fact-checking before passing it on to a client. The rest were essentially mindless: tasks that once required mental effort were reduced to mechanical prompting and forwarding. The report concluded that while AI boosts productivity, it may also be “slowly impairing our critical thinking skills.”5
A third study, led by Michael Gerlich at SBS Swiss Business School, tested 666 British participants on a widely used critical-thinking assessment, after surveying how often they used AI and how much they trusted it. The pattern was clear: heavier users scored lower across the board. Gerlich later said that hundreds of teachers contacted him after the study was published, all reporting the same pattern in their classrooms; students relying heavily on AI, yet struggling when forced to think independently.6
Taken together, these studies suggest a worrying feedback loop. Psychologists sometimes describe humans as ‘cognitive misers’: we naturally conserve effort and reach for shortcuts. The more we offload to machines, the less capable we become of doing the work ourselves; the less capable we are, the more tempting it is to offload again. It’s a self-reinforcing cycle of mental atrophy.7 One participant in Gerlich’s study admitted, tellingly: “I rely so much on AI that I don’t think I’d know how to solve certain problems without it.”
The cost of borrowed thought
Borrowed thought is tempting. It makes hard tasks feel easy and saves precious time. Why wrestle with a blank page when ChatGPT can produce an essay draft in seconds? Why memorise multiplication tables when a calculator is in every pocket? Why concentrate on dense prose when TikTok delivers a drip-feed of instant novelty?
But the very effort these technologies spare us is the effort that builds cognitive strength. Without struggle, without the slow work of rehearsal and recall, the mental “muscles” of attention, memory, and reasoning atrophy. Borrowed thought is like outsourcing all your exercise to a personal trainer: you get the illusion of fitness while your own body weakens.
Schools face this temptation acutely. Engagement tricks are the pedagogical equivalent of fireworks: all fizz and sparkle, then gone, leaving nothing but smoke. AI shortcuts work the same way: they let students churn out fluent text without ever troubling the mind to think. It might look like learning, but it’s superficial gloss.
The anxiety is not new. Socrates warned that writing would destroy memory, turning knowledge into a mere reminder of what we once knew. In the 1980s, critics argued calculators would corrode arithmetic fluency. Each time, the concern was that tools would make us dependent, that once-vital skills would wither. And each time, we found ways to integrate the tool without losing everything - though few today would deny that numeracy has been reshaped, perhaps diminished, by calculator use.
AI is different because of the scale and scope of what it replaces. Writing externalises memory, calculators externalise computation. But AI externalises reasoning itself: the capacity to generate arguments, weigh evidence, synthesise perspectives, and produce language at scale. If students are tempted to skip the struggle of thinking altogether, they may never acquire the fluency they need to judge whether the machine is right or wrong.
Borrowed thought may get us through today’s task. But in the long run, it leaves us intellectually fragile: better at using tools, weaker at being thinkers.
Wider context
We’ve heard these warnings before. Maryanne Wolf worries that the transition from print to screens has rewired us for shallowness. In Reader, Come Home she argues that digital reading fragments attention and undermines the deep comprehension that only slow, sustained engagement with text can cultivate. Nicholas Carr made a similar case in The Shallows: that the internet’s constant hyperlinks and notifications train us to skim, glance and graze rather than reflect.
Long before either, Neil Postman’s Amusing Ourselves to Death (1985) warned that every new medium reshapes not just what we know but how we think. Print fostered linear, logical argument; television, he said, reconfigured public discourse into a spectacle of entertainment. If Postman were alive today, he’d surely point out that social media has taken this logic further still, compressing thought into memes, hashtags and soundbites.
And psychologists remind us that it doesn’t take technology to make us shallow: Daniel Kahneman’s Thinking, Fast and Slow popularised the idea that we are “cognitive misers” by default, always reaching for the easiest, least effortful mode of thought unless forced to do otherwise. Digital platforms and AI simply exploit that tendency.
But there are optimists too. Steven Johnson, in Everything Bad is Good for You, argues that modern media - from complex television dramas to video games - actually stretch our cognitive abilities, demanding we juggle multiple plotlines, systems and rules. Steven Pinker, in Enlightenment Now, points to the long-term arc of human progress: whatever the local blips in IQ tests, our collective problem-solving capacity continues to rise. Perhaps, he suggests, the apparent decline is really a transition, as new literacies emerge alongside the old. Knowing how to check the provenance of a website or manipulate a digital dataset may be just as important in today’s world as being able to recite lines of Milton.
Schools as gymnasia of the body and mind
Everyone understands how physical fitness works. If you go to the gym regularly, you get stronger and fitter. If you stop going, spend your days on the sofa and live off burgers, you soon lose that fitness. No one thinks the effects of exercise are permanent. They only last if you keep working at them.
The nature of cognitive fitness is less obvious. We tend to imagine that once we’ve learned to read, write, calculate or reason, those skills stick for life, regardless of what we do afterwards. But the evidence suggests otherwise. Just as neglected muscles waste away, neglected mental capacities weaken. Memory fades if we don’t rehearse it; fluency falters if we don’t practise; attention dwindles if it is constantly fragmented.
The danger of today’s environment is that it encourages us to assume our intellectual ‘training’ is complete as soon as we leave school, while simultaneously providing endless ways to avoid exercising the skills we once had. If we understand that physical fitness requires continuous effort, we should be no less realistic about cognitive fitness: without ongoing practice, the gains do not last.
If Daisy is right, then perhaps schools need to think of themselves as gymnasia of both the body and the mind.
The ancients understood that strength and resilience are forged through discipline, repetition, and strain. You don’t get fit by watching others run; you get fit by doing the laps yourself. The same is true of intellectual fitness.
In the smartphone age, the basics that once came naturally are now in retreat. We read less, we write less, we calculate less, we remember less. Outsourcing thought is effortless; resisting it takes work. That is precisely why schools must be the place where this work is done.
They must drill students in the cognitive equivalents of push-ups and squats:
Reading whole books — not just skimming extracts or summaries.
Mastering times tables — building automaticity that frees the mind for higher reasoning.
Writing extended arguments — learning to sustain a thought from premise to conclusion.
Committing knowledge to memory — so that ideas are there to be combined, tested, and built upon.
These practices may feel old-fashioned, but they are the muscular foundation on which all higher thought depends.
The alternative vision - that schools should focus only on “what machines can’t do” - is a trap. It allows machines to define the ceiling of human potential. Worse, it risks hollowing out the very skills we need to stay independent of them. Critical thinking, creativity, and judgement cannot float free of knowledge and memory; cut away the roots, and the plant withers.
If schools abandon the basics, society will drift into stupidity: distracted, dependent, unable to tell whether the machines are right or wrong. But if schools embrace their true role - cultivating deep literacy, memory, and reasoning - they become the last, best defence against intellectual decline.
Borrowed thought always comes at a cost. In the age of artificial intelligence, that cost is no longer hidden. If we want to remain thinkers rather than mere users, schools must stand firm.
In the end, the most radical thing education can do in the age of artificial intelligence is to insist on real intelligence, training both body and mind for the discipline of freedom.
Bratsberg & Rogeberg (2018), in a comprehensive study of over 700,000 Norwegian male conscripts, found that IQ gains halted and even declined starting in the mid-1990s, with numerical reasoning subtests showing particular drops. Similarly, Danish military conscript data reveal a slowdown of gains through the 1980s, followed by outright declines in the 1990s, reversing what had previously been steady upward trends (Teasdale & Owen, 2008). In the UK, Shayer, Ginsburg & Coe (2007) compared test results from 1976 and 2003 and found that while younger children saw slight gains, 14-year-olds’ scores dropped by more than two IQ points, with even larger falls among the upper half of testers. Flynn himself reported that British IQ gains (1938–1979) reversed after 1980 on Raven’s Progressive Matrices. Meanwhile, in Australia, Cotton et al. (2005) tested 6–12-year-olds between 1975 and 2003 using Raven’s Coloured Progressive Matrices and observed no increase in IQ scores over that period, suggesting a plateau.
David C. Geary has consistently argued that human cognition evolved for survival in ancestral environments; language, social reasoning, and navigation emerge naturally, but modern academic skills like reading, mathematics, and science are biologically secondary and require explicit instruction. He distinguishes between primary abilities (which develop without schooling) and secondary abilities (which depend on deliberate teaching and practice). Geary stresses that higher-order reasoning only flourishes in structured, knowledge-rich environments that build memory and schemas; without this scaffolding, potential atrophies. See Geary (2005, The Origin of Mind), Geary (2007, Educating the Evolved Mind), Geary (2012, Evolutionary Educational Psychology), and Geary (2021, Cognitive Foundations for Learning).
The “Otto and Inga” case appears in Andy Clark and David Chalmers’ seminal paper, The Extended Mind (Analysis, 58(1), 1998, pp. 7–19). Inga, with intact memory, recalls the address of a museum from her brain; Otto, who has Alzheimer’s, retrieves the same information from his notebook. Clark and Chalmers argue that if Inga’s memory counts as part of her cognitive system, then Otto’s notebook should too. Their broader point is that cognition extends beyond our brains: external artefacts like notebooks, maps, or computers can become constitutive parts of thinking itself.
A controlled study from MIT’s Media Lab examined essay writing across multiple sessions. Participants were split into groups: one using ChatGPT (LLM), another using search engines, and a brain-only (no tool) group. EEG scans revealed that LLM users exhibited the weakest neural connectivity, particularly in regions tied to creativity, attention, and memory. They also struggled to recall their own essays and showed a lack of ownership and originality. The brain-only group, by contrast, showed stronger neural engagement and better recall.
A survey of 319 knowledge workers (from Microsoft Research and Carnegie Mellon University) gathered nearly 1,000 real-world examples of AI usage in workplace tasks. Findings indicated that heavy reliance on generative AI was associated with less perceived critical-thinking effort. Users who trusted AI too much were less inclined to verify or engage deeply with outputs.
Michael Gerlich’s mixed-method study with 666 participants across age groups found a significant negative correlation between frequent AI use and critical-thinking skills. Younger, high-AI users scored especially low, with cognitive offloading (reliance on tools) mediating the effect.
The term “cognitive miser” was coined by Susan Fiske and Shelley Taylor in the 1980s, building on 1970s work by Richard E. Nisbett and colleagues, and was later developed by Daniel Kahneman and others: we conserve effort by default, preferring heuristics and shortcuts to deliberate reasoning. An everyday illustration is the so-called “OK Plateau” described by Joshua Foer in Moonwalking with Einstein. When people learn to type, play an instrument, or even drive, their performance improves rapidly until they reach a level that feels “good enough.” At that point, they stop pushing themselves, performance plateaus, and errors persist indefinitely. The only way to break through is with deliberate practice, forcing attention back onto errors and demanding more effort. The OK Plateau captures perfectly our tendency toward mental economy: left unchecked, we settle for competence rather than mastery.