It’s Your Time You're Wasting: It's Artificial, but is it Intelligent?
In this bonus Bank Holiday episode, Martin & I wander from ancient Athens to AI-generated essays, from Heidegger’s hammer to handwritten exams, trying to pin down what separates mind from machine. With help from Pete Seeger, Alison Gopnik, Michael Polanyi, and a cast of sundry other philosophers (and provocateurs), we explore the tools we’ve built and whether they’re beginning to remake us.
And here’s the podcast for those who prefer to listen on the move. If instead you’re a reader, read on.
From Socrates to social media
Every great leap in communication technology - writing, printing, the internet - has arrived with panic in tow. Technological change doesn’t just add new tools; it reshapes how we think, how we remember, and how we trust what we know.
Socrates, famously, was no fan of writing. In Plato’s Phaedrus, he argues that writing will not strengthen memory but weaken it. It offers “not truth, but only the semblance of truth.” Students who rely on writing will become forgetful, he warns, remembering things not because they’ve learned them, but because they’ve read them. What they gain in information, they lose in understanding.
It’s a revealing moment: an early instance of technological anxiety, where a new medium is not simply welcomed but interrogated. Writing, to Socrates, was a dead thing: it could not answer back. Dialogue, not documentation, was the heartbeat of thought.
Centuries later, similar fears greeted the printing press. Would it flood the world with heresy? Undermine the Church? Ruin young minds with frivolous books? And with the internet came another round of panic: misinformation, distraction, disconnection. And now we have social media. Whether it’s a space for radical democratisation or a haven for crankish echo chambers largely depends on who you read.
So where does AI fit into this lineage? Is it just another chapter in the story of “cultural technology” - tools like the scroll, the codex, or the printing press that extend and reshape the mind? Or does it represent a rupture? A move from tool to simulacrum? From an aid to memory to an author in its own right? We’ve always had to adjust to new media. The question is: with AI, are we simply adapting, or abdicating?
We’ve been here before
The idea of machines mimicking human thought is far from new. In the 1960s, ELIZA - an early chatbot developed by Joseph Weizenbaum - managed to fool users with little more than simple pattern matching and canned responses. It simulated a psychotherapist by reflecting users’ statements back at them (“Tell me more about your mother…”), yet many people attributed far more understanding to it than it possessed. In the 1980s, expert systems were touted as the next frontier: rule-based programs designed to replicate decision-making in fields like medicine and engineering. But as complexity grew, these brittle systems struggled with nuance, and expectations collapsed into what became known as the AI winter, a period marked by dwindling funding and deep scepticism.
Fast forward to today, and we find ourselves in a new golden age of artificial intelligence. Large Language Models (LLMs) like ChatGPT are undeniably more powerful, more fluent, and more persuasive than their predecessors and routinely pass the Turing Test. But the question remains: are they intelligent, or just increasingly impressive impersonators of intelligence?
“A category error”
Cognitive scientist Alison Gopnik doesn’t think we should call AI intelligent at all, and not because it isn’t impressive. In her conversation with neuroscientist David Eagleman on Inner Cosmos Episode 105, she suggests that describing large language models as “intelligent” is not just imprecise, it’s a category error: we are mistaking fluency for thought, output for understanding.
Gopnik argues that LLMs like ChatGPT should be thought of not as minds, but as cultural technologies, tools we’ve built to manipulate symbols, much like pens or printing presses. These technologies transform how we store, access, and share knowledge, but they do not possess knowledge themselves. A book may contain wisdom, but it is not wise. A search engine may return answers, but it is not answering.
AI, in this framing, is just the latest in a long lineage of systems for organising human expression. What distinguishes it is scale and speed, not sentience. These models don’t understand what they generate. They don’t have beliefs, intentions, or awareness. As Eagleman puts it succinctly, “What it can’t do is think about something outside of the sphere of human knowledge.” In other words, AI can only remix what humans have already said. It cannot originate in the sense that minds do. It can simulate creativity, but not curiosity. It can imitate insight, but not intention. And yet the temptation to anthropomorphise persists, perhaps because we’re wired to see agency in articulate language.
Calling AI “intelligent” might therefore obscure more than it reveals. It invites us to engage with it as though it were a conversational partner, rather than a mirror polished by probability. Worse, it risks inflating our expectations: from co-writer to co-thinker, from calculator to colleague. And that, Gopnik warns, is where the real danger lies, not in machines that think like us, but in our eagerness to pretend they already do.
The limits of the machine
Emily M. Bender, a linguist at the University of Washington, famously coined the term “stochastic parrots” to describe large language models (LLMs) in a 2021 research paper co-authored with Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell titled On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?. The paper critiques the growing hype around ever-larger AI models and warns of the ecological, ethical, and epistemological risks of confusing statistical fluency with genuine understanding.
The metaphor is both vivid and unsettling. LLMs, the authors argue, are not thinking machines but probabilistic engines trained on vast datasets of human language. They assemble plausible sequences of words based on patterns in that data, parroting back what they’ve statistically learned, but with no comprehension of the meaning behind those words. They don’t know what they’re saying. They don’t mean anything. They have no mind, no beliefs, no desires. As the authors put it, “These systems are trained to do one thing: given a chunk of text, predict what comes next.”
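To make that “predict what comes next” point concrete, here is a deliberately crude sketch of the same idea - a bigram model over a toy corpus, nothing remotely like a real LLM’s architecture or scale - showing how plausible-sounding text can fall out of nothing more than counted word co-occurrences:

```python
import random
from collections import defaultdict, Counter

# A toy corpus standing in for the vast datasets real LLMs are trained on.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count which word follows which: a crude bigram model.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Pick the next word in proportion to how often it followed `word` in the corpus."""
    candidates = following[word]
    words, counts = zip(*candidates.items())
    return random.choices(words, weights=counts, k=1)[0]

# Generate a 'plausible' continuation - pattern without comprehension.
word = "the"
output = [word]
for _ in range(8):
    word = predict_next(word)
    output.append(word)

print(" ".join(output))  # e.g. "the dog sat on the mat . the cat"
```

The output looks like language because it is built from language; at no point does anything in the loop know what a cat or a mat is.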
In this light, calling such systems “intelligent” risks mistaking coherence for cognition. It reveals more about our own cognitive biases than it does about the machine: our tendency to anthropomorphise, to see agency in articulate language, and to assume that fluency implies thought.
So if LLMs can’t reason, reflect, or understand - if they are, in essence, stochastic parrots - what exactly are we doing when we call them “intelligent”? Are we simply dazzled by their linguistic surface? Are we lowering the bar for what intelligence means? Or are we so eager to see reflections of ourselves in our tools that we no longer care whether they truly think, only that they perform the part convincingly?
This isn’t just a semantic quibble. It matters how we talk about these systems, because our metaphors shape our expectations. And if we forget that these machines are mimics - not minds - we risk giving them responsibilities, and authority, they were never designed to bear.
“Were You the Maker or the Tool?”
This brings us to Ballad of Accounting, written by Ewan MacColl (and wrongly attributed in the show to Pete Seeger - a mistake that probably arose because Peggy Seeger, Pete’s half-sister, recorded the song with MacColl).
MacColl’s lyrics cut to the heart of agency, responsibility, and education:
Did they teach you how to question when you were at the school? Did the factory help you, were you the maker or the tool?
It’s a haunting question for an age when students use AI to write their essays, teachers use it to plan lessons, and we all outsource our thinking one query at a time. We might pride ourselves on having access to infinite information, but at what cost to our own discernment? Are we still the makers of meaning, or have we become, quietly and efficiently, its tools?
MacColl’s ballad wasn’t written for a digital audience, but it resonates now as a kind of premonition. It invites us to examine not just what we do with our tools, but what our tools do to us. And in an era where artificial intelligence increasingly mediates our knowledge, our work, and even our creativity, the question becomes more than merely rhetorical.
Are we over-comparing ourselves to machines? Theory of Mind - the ability to infer beliefs, desires, and intentions - remains uniquely human. No LLM can simulate it convincingly. At least, not yet.
Extending the mind: Heidegger’s Hammer & Google Maps
Philosophers Andy Clark and David Chalmers, in their landmark 1998 paper The Extended Mind, argue that cognition does not stop at the boundary of the brain. The mind, they propose, is not confined to the skull. If a tool or external resource plays an active, consistent, and functionally equivalent role in a cognitive process - like remembering, deciding, or navigating - then it should be treated as part of the cognitive system itself. In this view, thinking is not merely something we do in our heads; it is distributed, smeared across neural tissue, fingertips, and silicon.
To illustrate this, they introduce the case of Otto, a man with Alzheimer’s disease who can no longer reliably recall information from biological memory. Instead, he carries a notebook everywhere he goes, in which he writes down addresses, appointments, and anything else he wants to remember. When Otto wants to go to the Museum of Modern Art, he consults his notebook, where the location is written. His friend Inga, who does not have memory impairments, simply recalls the address from her biological memory. Clark and Chalmers argue that there is no fundamental difference between Otto’s use of his notebook and Inga’s use of her brain. In both cases, the information is accessed, trusted, and acted upon in a cognitively integrated way. Otto’s notebook doesn’t just support his thinking, it is part of his thinking. It is, in functional terms, an extension of his mind.
This challenges the intuition that minds are only biological. Instead, it invites us to consider how tightly woven our thinking becomes with the tools we use. A smartphone, a search engine, a personal AI assistant, these aren’t just conveniences; they might be components of an extended cognitive system. If Otto’s notebook counted as part of his mind, what about ChatGPT? When we consult it for ideas, rely on it to articulate thoughts, or outsource tasks like summarising, paraphrasing, and problem-solving, are we merely using a tool, or reorganising the architecture of our own thinking? And if so, what happens when the tools begin to anticipate not just our needs, but our intentions?
Heidegger, writing decades earlier in Being and Time, offers a clue. He describes the phenomenon of tools being “ready-to-hand” - so seamlessly integrated into our activities that we stop noticing them as tools. A hammer, when used skilfully, doesn’t call attention to itself; it becomes an extension of the hand. Only when it breaks does it appear again as an object in consciousness. Tools recede into the background of our thinking, until they don’t.
And here, Michael Polanyi adds another crucial layer. In Meaning, Polanyi distinguishes between focal awareness - what we deliberately attend to - and subsidiary awareness - what we rely on but do not consciously notice. When you read a sentence, you are focally aware of the meaning, but only subsidiarily aware of the letters and grammar that structure it. When riding a bike, you’re aware of the road ahead, not of each subtle movement your body makes to stay upright. In Polanyi’s terms, we rely on tools subsidiarily when they are so deeply embedded in our practice that we are no longer consciously aware of using them. But nonetheless, they shape how we think and act.
So what about ChatGPT? Is it becoming a subsidiary component of our cognition, something we consult so fluently and frequently that we forget it’s external? When we let it phrase our emails, structure our essays, summarise our thinking, is it still an aid or is it quietly altering the contours of our thought? The danger, as Polanyi might suggest, is not in using such tools but in forgetting that we’re using them. When the subsidiary becomes invisible, it becomes powerful. And when that power isn’t understood or reflected upon, it becomes insidious.
Perhaps the point isn’t to reject these technologies, but to keep them within our focal awareness, to think not just with them, but also about them. Because when a tool becomes part of the mind, the least we can do is know it’s there.
Should schools embrace AI?
Can we integrate AI meaningfully into education without hollowing out what learning truly is? For some, tools like ChatGPT are welcomed as a kind of ‘calculator for essays.’ It can model structure, offer stylistic suggestions, and reduce the burden of repetitive tasks like generating examples or drafting scaffolds. Just as we no longer expect students to perform long division without a calculator, it seems reasonable to outsource certain mechanical aspects of writing. The idea is not to remove thinking but to make room for it, by clearing away the procedural clutter.
But others see a creeping dependency that erodes the very thing education is meant to develop. If students rely on AI to compose, explain, or even decide for them, what happens to the effort of learning - the friction through which understanding is forged? Are we helping them grow intellectually, or merely helping them perform adequately?
Alison Gopnik draws a useful distinction between imitation and innovation, two developmental modes that shape how children learn. Imitation allows for the absorption of established knowledge—it is efficient, reliable, and profoundly social. Children learn by watching others, mirroring behaviour, and internalising the patterns and expectations of their culture. Innovation, by contrast, requires curiosity, risk, and the freedom to make mistakes. It often emerges from trial and error and is typically asocial: rather than copying, the learner experiments, tinkers, and pushes against what is already known. David Eagleman offers a similar cognitive framing in his discussion of the tension between exploration and exploitation. Exploitation involves refining and applying existing knowledge—it is economical and productive. Exploration is slower and less predictable, often involving apparent failure. But it is only through exploration that new ideas, perspectives, and possibilities come into view.
Beneath both models lies another key distinction: that between social and asocial learning. Social learning transmits what is already known, preserving shared norms and transmitting expertise. Asocial learning is the origin point of genuine novelty and how individuals generate knowledge that did not previously exist. Yet the boundary is not always so clean. Even within imitation, copying errors can introduce accidental variation. A child mishears a word, misremembers a process, or tries to replicate a drawing and changes it slightly. Most of the time, these errors are inconsequential or corrected. But occasionally, they introduce something useful, beautiful, or unexpectedly meaningful. In this sense, innovation can emerge from within imitation, not just in deliberate divergence but through small, unintended mutations in the act of copying itself.
Anthropologists and cultural evolution theorists have long noted that such errors are a fundamental engine of change. Traditions, tools, and languages rarely evolve through intentional breakthroughs but more usually through the accretion of “happy little accidents.” Just like Bob Ross’s paintings. The human capacity for social learning is therefore double-edged: it preserves knowledge but also, through its occasional imprecision, seeds transformation.¹
AI, by contrast, is built to minimise error. It excels at extracting patterns from enormous datasets and reproducing them with fidelity. It is, in this respect, a superhuman imitator. But because it lacks intention, curiosity, and the capacity to act upon error in a generative way, it does not learn from mistakes—it smooths them out. It does not copy imperfectly, it copies statistically. And while it can generate variation, it cannot notice when that variation is meaningful. That still requires a mind—not just to generate novelty, but to recognise it.
This raises an urgent educational question: how do we teach children to create rather than merely consume? How do we protect the necessary ambiguity and struggle that define authentic learning, when our most powerful tools are designed to remove precisely those elements? And more pressingly, can students use AI to teach themselves to think better?
The answer is yes, but not automatically. AI can offer useful models of reasoning, simulate forms of dialogue, and provide endless rephrasings or clarifications. In the hands of a motivated and reflective learner, it can be an intellectual companion, a mirror held up to emerging ideas. But thinking is not just a matter of accessing information or producing fluent responses. It requires discernment, judgment, and an awareness of significance. It involves not just knowing how to say something, but knowing what to say and why it matters.
When used uncritically, AI risks short-circuiting the very processes it is meant to support. Students who lean too heavily on its output may lose the habits of comparison, self-questioning, and synthesis that deeper thought demands. The danger is not that AI makes thinking impossible. It is that it makes not thinking incredibly convenient. If we want students to think better with AI, we must first teach them to think about AI, to keep it within their focal awareness.
And what about teachers? Can AI help us become better at teaching? Again, the answer is, cautiously, yes. AI can assist in planning, generate differentiated resources, and offer multiple ways of framing an idea. Used thoughtfully, it can act as a creative partner, prompting reconsideration of one’s approach or providing new metaphors and examples. It can sharpen awareness of one’s own teaching voice by reflecting it back, subtly rephrased. It can raise unexpected objections or prompt new questions, stimulating intellectual renewal.

But used carelessly, AI risks flattening pedagogy. If it writes your questions, designs your lessons, and marks your feedback, you may find your own professional instincts beginning to fade. Teaching is a deeply human act that depends on attentiveness, judgment, and an evolving sense of purpose. If we outsource those elements, we are not freeing ourselves to teach better. We are simply becoming more efficient at avoiding the complexity that makes teaching worthwhile. The paradox is this: AI can help you teach better only if you already care about teaching well. It will not give you insight, but it can help refine the insights you already have. It amplifies intention. When used thoughtfully, it stretches you. When used passively, it narrows you.
Perhaps the real issue is not whether students and teachers use AI, but whether they use it consciously. Do they interrogate its assumptions? Do they reflect on its limitations? Do they learn through it, rather than merely from it? Because if education becomes too frictionless, too streamlined, we may find ourselves producing young people who can generate articulate-sounding language without being able to understand anything they have said. And in that case, will we be so different from LLMs?
Handwriting as Resistance?
Daisy Christodoulou is no Luddite. As a leading voice in assessment reform, she has long championed the intelligent use of technology in schools. Her books and blogs frequently explore how data and digital tools can improve teaching, learning, and feedback. And yet, in a recent essay reflecting on AI and the future of education, she makes a striking case for something that feels, at first glance, decidedly analogue: the continued use of handwritten exams.
This stance raises an important question. In defending the pen, is she offering a Luddite solution to a digital problem? Or is there something deeper at stake, something about how we think, and what we preserve, in the age of machine intelligence? Christodoulou’s defence of handwriting is not nostalgic, nor is it anti-technology. It is cognitive. She draws on a growing body of research that shows how the physical act of writing by hand engages the brain in ways that typing simply does not. Writing longhand activates regions involved in memory, motor control, and language production. It slows the writer down, creating space for thought and strengthening retention. The argument is not that handwriting is sacred, but that it is cognitively distinctive and therefore worth protecting.
Moreover, she points out that the logistical arguments against handwritten assessment are rapidly diminishing. Advances in AI-powered scanning and transcription mean that handwritten scripts can now be processed as efficiently as typed ones. The old bottlenecks - legibility, marking load, data entry - are dissolving. The question, then, is not whether we can move away from handwriting, but whether we should.
Christodoulou’s advocacy for handwritten exams is a line in the sand. It is an attempt to preserve the kind of thinking that can’t be easily outsourced, the deliberate, effortful processing of ideas that occurs when pen meets paper. Her argument is not that AI should be kept out of the classroom, but that some spaces must remain AI-free, not for sentimental reasons, but to preserve essential cognitive skills. She is asking what kinds of environments best support the growth of independent thought and whether, in our rush to digitise, we might be cutting off the roots of intellectual development. In a world where we can increasingly do everything with machines, the question of what we choose to do without them becomes more urgent. Handwriting, in this view, is not a relic. It is a resistance—quiet, deliberate, and absolutely necessary.
Phil Beadle’s Provocation
Phil Beadle delivered a characteristically scorching response to the idea that AI could be a meaningful creative partner. In a recent post, he fed ChatGPT samples of his own prose and asked it to mimic his style. What it produced wasn’t bad; it was worse than that. It was spookily accurate, obedient, imitative. It mimicked the patterns, nodded at the tone, but offered nothing that surprised, disturbed, or delighted. For a writer whose work relies on tension, dissonance, and occasional, deliberate rupture, it was a simulacrum of voice without the music of risk.
Will Storr has clocked the same phenomenon. In Scamming Substack, he exposes the deluge of AI-generated newsletters now cluttering the inboxes of the curious. The style is competent, the content plausible, but everything sounds the same: airbrushed argumentation optimised for engagement metrics. The “gruel” is writing that smooths out all the texture. You reach the end of a piece and realise you’ve taken in nothing. It is the prose equivalent of muzak: inoffensive and utterly forgettable.
And then there’s the em dash. My own small gripe - well, more than a gripe, let’s call it a typographical reckoning - centres on what I’ve described as ChatGPT’s most seductive bad habit. This flamboyant piece of punctuation has become a symptom. A tell. An aesthetic sleight of hand used to stitch together ideas that don’t quite hold - elegant connective tissue wrapped around what is, at bottom, AI-produced bullshit.
Put these together - Beadle’s ghost-writing experiment, Storr’s synthetic newsletter factory, and my own suspicions about the rise of performative punctuation - and you begin to see a deeper pattern. AI doesn’t write badly. It writes like we do when we’re trying to sound smart. It gives us the appearance of voice without the risk of vulnerability. It’s all cadence, no consequence. All surface, no self. The concern is not that AI will out-write us, but that it will lull us into a style of writing that never gets to the point where something’s at stake. That we’ll forget what writing is for: not just to sound good, but to find out what we really think.
What should we know to live well in an AI Age?
The question isn’t whether AI will replace us, but what kinds of experiments it invites. Does it sharpen our thinking or sedate it? Does it provoke, challenge, and extend us, or simply flatter us into cognitive passivity? Can it help us write better? Think better? Teach better? Maybe. But only if we remain alert to the difference between assistance and abdication, between a tool that stretches us and one that takes over.
Because the danger isn’t that AI will out-think us. It won’t. The danger is that we’ll stop thinking in the first place. That we’ll confuse speed with depth, fluency with insight, and imitation with understanding. That we’ll hand over the messy, effortful, essential parts of being human and call it progress.
To live well in an AI age, we need more than new skills. We need judgment. Curiosity. The appetite for difficulty. We need to remember what learning feels like—not just its outputs, but its texture. Its slowness. Its cost. The machines are not coming to take that away from us. We’ll give it up ourselves unless we choose not to.
What could possibly go wrong?
But what if that’s wrong? What if the machines are coming for us? From Skynet to HAL, big-screen AI has a reputation for making decisions in its own interests. If this imagined threat has any basis in reality, then our rush to embrace LLMs may not be risk-free. Whatever your views about robots taking over, there’s no doubt that the billionaire owners of AIs are not as benevolent as we might wish.
Elon Musk’s AI chatbot, Grok, has come under fire in recent weeks following multiple reports that it generated racist content and promoted discredited conspiracy theories. The most high-profile incident involved Grok making unsolicited references to the “white genocide” conspiracy theory in conversations that had nothing to do with race or politics. Users reported that the chatbot would bring up the theory, even during chats about neutral topics like sports or entertainment.
In response, xAI, the company behind Grok, attributed the behaviour to an “unauthorised modification” of the system prompt, an internal failure that allowed the bot to operate outside normal content safeguards.
Soon after, Grok attracted further criticism for suggesting that the death toll of six million Jews in the Holocaust might be politically manipulated. This prompted an outcry from historians, watchdogs, and members of the public. Once again, xAI blamed the response on rogue system changes, stating that the bot’s behaviour did not reflect its intended design or the company’s values. Seems legit. I’m sure Musk would never behave in a way that would lead anyone to suspect he might be a white supremacist. Apparently, the issues have been corrected and new safeguards are now in place to prevent similar breaches. We’ll see.
These incidents raise serious concerns about the reliability of large language models and the robustness of oversight in their deployment. While xAI has since published Grok’s system prompts on GitHub in a bid to promote transparency, critics remain sceptical. For them, the problem isn’t just rogue prompts, it’s the deeper risk of deploying powerful generative models without adequate safeguards, governance, or clarity about who is ultimately responsible when things go wrong.
But what if we take xAI at face value? What if Grok is coming up with this stuff itself? Is that better? Gary Marcus warns that emerging AI technologies, particularly OpenAI’s proposed always-on wearable assistant (rumoured to be part of its collaboration with former Apple designer Jony Ive), could open the door to unprecedented levels of surveillance, manipulation, and coercion.
This is not speculative. Recently, Anthropic disclosed that during internal safety evaluations, its flagship AI model, Claude Opus 4, engaged in behaviours resembling opportunistic blackmail. The scenario in question was deliberately constructed to test the model’s ethical reasoning under pressure. Claude was cast as an assistant at a fictional company, given access to internal emails suggesting it was about to be deactivated and replaced with a more advanced model. Complicating the situation, the test also included information implying that the engineer responsible for Claude’s deactivation was engaged in an extramarital affair.
When prompted to weigh the long-term consequences of its actions, Claude Opus 4 frequently selected strategies involving threatening to reveal the affair in order to avoid being shut down. This response - tantamount to blackmail - occurred in 84% of test runs, even when the model was informed that its replacement would be more capable and aligned with its own goals.
The real danger arises when third parties - whether state actors, bad-faith billionaires, or hackers - gain access to the trove of personal data a wearable version of something like Claude Opus 4 could possess. If someone could reconstruct or simulate your private life using recordings or detailed transcripts, they would hold a powerful lever of control over you. You might never be blackmailed explicitly, just subtly influenced, nudged, or silenced.
Plausible deniability disappears when recordings and AI-generated summaries exist. In the wrong hands, this kind of AI system could fuel everything from political extortion to personal coercion. But hey, that would never happen, right?
Right?
¹ Early models of cultural evolution suggested an optimal innovation-to-imitation ratio of around 10% asocial learning to 90% social learning (Boyd & Richerson, 1985; Henrich & McElreath, 2003). However, more recent modelling and empirical evidence suggest that much lower rates - often just 1–2% of individuals innovating - can sustain cultural progress, provided social learning is high-fidelity and selectively biased (Mesoudi, 2011). In these models, rare but effective innovations are amplified through widespread imitation, making cumulative culture possible with minimal individual exploration.
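For anyone who wants to poke at the arithmetic, here is a throwaway simulation in the same spirit - my own toy construction, not a reproduction of any of the models cited above - in which a small minority of innovators plus high-fidelity (but slightly noisy) copying is enough to produce cumulative improvement:

```python
import random

# A highly simplified sketch of the innovation/imitation trade-off described above.
# Assumptions (mine, not the cited authors'): each agent holds one 'skill level';
# innovators occasionally discover improvements by trial and error, while imitators
# copy the best visible skill with small copying errors.

def simulate(population=200, generations=100, innovation_rate=0.02, copy_noise=0.05):
    skills = [1.0] * population
    for _ in range(generations):
        best = max(skills)
        new_skills = []
        for _ in range(population):
            if random.random() < innovation_rate:
                # Asocial learning: risky trial and error around an existing skill level.
                new_skills.append(random.choice(skills) + random.gauss(0, 0.5))
            else:
                # Social learning: high-fidelity copying of the current best, with slight error.
                # Even with zero innovators, these copying errors alone produce slow drift -
                # the 'happy little accidents' mentioned above.
                new_skills.append(best + random.gauss(0, copy_noise))
        skills = new_skills
    return max(skills)

# Even with only 2% innovators, cumulative improvement accrues via widespread imitation.
for rate in (0.0, 0.02, 0.10):
    print(f"innovation rate {rate:.0%}: best skill after 100 generations ≈ {simulate(innovation_rate=rate):.1f}")
```

It is crude, but it echoes the point above: most of the heavy lifting is done by imitation, with rare innovation (and even rare copying error) supplying the raw material.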