ChatGPT can plan lessons but it can’t think for you
How generative AI is transforming lesson planning and why teaching still depends on what happens inside teachers' heads
It’s tempting to imagine that lesson planning is a straightforward matter: jot down what you want to teach, bung in a few activities, throw in a ‘hook’ or two, and off you pop. And now, with AI, the process can be slicker than ever. Feed in your objectives, key stage, and preferred pedagogical style and you’re handed a lesson plan that ticks all the boxes. The process is seductively simple, superficially competent and very dangerous.
When I started teaching, planning was either an onerous chore in which I’d meticulously estimate timings for activities (always bewilderingly inaccurate), or a panicked post-hoc ritual in which I racked my brains for a learning objective to justify whatever it was I’d just done with my students. I could fill in a planning pro forma with starter, main, pudding, and a hopeful gesture toward assessment for learning, but there wasn’t much thinking behind it. It was, at best, a performance of planning. My problem was that I didn’t understand what I was meant to be thinking about.
AI has now turbocharged that same illusion. It can churn out polished lessons at speed, complete with success criteria and differentiated activities. But the ease is as deceptive as it is counterproductive. Just as I once confused a formatted sheet with a coherent plan, we now risk confusing auto-generated content with intellectual readiness, and that confusion has consequences.
Because here’s the thing: lesson planning is not the same as intellectual preparation. The former is largely clerical: a logistical scaffold of what, when and how. The latter is cognitive. It demands that we think. Not just about lesson activities, but about the conceptual architecture behind them. Without this intellectual work, teachers are left performing lessons they don’t fully understand, no better off than students copying homework from ChatGPT.
The cost of borrowed thought: a response to 'Your Brain on ChatGPT'
There’s a peculiar satisfaction in solving a problem unaided. The click of understanding. The felt sense of ownership. A little corner of the world, momentarily tamed by one’s own effort. But what happens when that effort is replaced by fluency without friction? When a generative model offers not just suggestions but seductive, elegant, articulate, plausible answers that arrive in seconds? The recent study ‘Your Brain on ChatGPT’ set out to answer exactly that question.
This distinction isn’t just a matter of taste. According to Cognitive Load Theory, expertise emerges when we build and automate schema: mental structures that allow us to manage complex information with minimal effort. Teachers who merely follow someone else’s slides are stuck processing everything in working memory, where capacity is limited and error-prone. By contrast, intellectually prepared teachers can anticipate, adapt and explain because they’ve internalised the structure of what they’re teaching, not just the superficial features.
The danger of relying on AI to do the planning is that it risks replacing thinking with compliance. The lesson exists, but the teacher does not. This is not an abstract worry. A report by Holmes et al (2023) found that while generative AI improved teachers’ productivity, it reduced their confidence in delivering content, a warning sign that efficiency was being prioritised over understanding. The problem is cognitive offloading: when external tools do the mental work for us, our own reasoning atrophies. AI tools may “disincentivise critical engagement,” especially in high-pressure environments where ease becomes the goal.
This is precisely what Ball, Thames and Phelps (2008) discovered in their work on Mathematical Knowledge for Teaching. Knowing content isn’t enough; teachers need to know how that content behaves in the classroom: which misconceptions are likely, which representations clarify, which examples work, and which don’t. That knowledge isn’t picked up passively; it’s the fruit of deliberate intellectual work prior to teaching.
And yet AI tools tend to generate fluently written materials that look expert. But fluency is not the same as understanding. Polished outputs can deceive both students and teachers into thinking they grasp the material when they’ve actually bypassed the hard part. When Rosenshine laid out his Principles of Instruction, he made it clear that effective teaching depends not just on routines but on knowledge. His emphasis on modelling, guided practice, and scaffolding all presumes that the teacher has done the hard cognitive graft of sequencing, explanation, and rehearsal. That isn’t something you can just outsource.
Daniel Willingham puts it bluntly: thinking is hard, but it’s also essential. Memory is what remains after we have thought about something, and if teachers haven’t thought carefully about what they’re teaching, they won’t remember it well enough to teach it fluently. Worse, they’ll lack the cognitive capacity to respond to what’s happening in front of them because they’re still trying to decode their own plan.
A recent study by Williamson et al (2024) makes the same point about students: those who relied heavily on generative AI to complete work showed reduced retention, poorer metacognition, and lower performance in follow-up assessments. The problem wasn’t AI per se; it was the absence of struggle. Cognitive effort is not a barrier to learning; it’s what makes learning happen.
This is why David Berliner’s research on expert teachers matters. Berliner found that novices tend to follow plans rigidly, whereas experts adapt fluidly, noticing more, interpreting better, and intervening more strategically. That kind of expertise isn’t built in the moment. It’s built beforehand through the slow, rigorous work of intellectual preparation.
None of this is to dismiss the need to plan. Of course teachers should think about lesson structure, pacing, resources and assessment. But the plan is not the point. It is the thought that goes into the plan that makes effective teaching possible.
AI may soon be able to produce every lesson in a scheme of work. But unless we treat those plans as the starting point for our thinking - not a substitute for it - we’re doing what lazy students have always done: outsourcing effort, mimicking mastery, and hoping no one asks a follow-up.
And like those students, we’ll pay for it eventually.