Opening the closed circle: why being wrong helps us find out what's right
Falsifiability, pseudoscience and why "it works for me" is a pedagogical dead end
A closed circle argument is one where there is no possibility of convincing an opponent that they might be wrong. They are right because they’re right.
Imagine you wake to find yourself in a psychiatric ward, deemed by all and sundry to be insane. Any attempt to argue that you are not, in point of fact, mad, is evidence that you are ‘in denial’. Any evidence you cite in support of your sanity is dismissed as an elaborate attempt to buttress your denial. There is no way out of this predicament; no way to demonstrate your sanity that will be accepted by those who have decided they are right because there is no way that they can conceive of being wrong.
If there’s no way in which you can be wrong, then you have created an unfalsifiable argument.
Karl Popper’s insistence on falsifiability was not an arid parlour game about the philosophy of science. Falsifiability provides a vital moral safeguard. If a claim cannot, even in principle, be shown to be false, then it is insulated from reality. It may still be true - it may even be profound - but it has cut the cord that tethers it to the world.
In education we do this all the time.
If, in the face of contradictory evidence, we make the claim that a particular practice ‘works for me and my students’, then we are in danger of adopting an unfalsifiable position. We are free to define ‘works’ however we please. If we’re told that students’ exam results might improve if we changed our practice we can say things like, “There’s more to education than exam results” and claim that our students are happier, better rounded, or have an excess of some other vague, unmeasurable trait. We can laugh at the idea of measurement and say, “Just because you can’t measure it, doesn’t mean it isn’t important.” We can insulate ourselves from logic and reason and instead trust to faith that we know what’s best for our students. And who can prove us wrong?
If there are no conceivable conditions under which we would concede error, we have stopped making empirical claims and instead embarked upon making declarations of identity. I teach this way because I am this sort of teacher. My classroom feels right. My values are affirmed. The circle closes.
The thing is, this sort of faith results in stagnation. Science has made tremendous leaps and bounds over the past 200 or so years because scientists learn from their mistakes. Religions have stayed pretty much the same (much to the satisfaction of their adherents, as their beliefs represent eternal Truth). If we as teachers adopt a position of faith, we cannot learn or improve. We cannot receive meaningful feedback about our mistakes if our response is to shift the parameters and respond with, “Yeah, but”.
Only by testing our ideas and accepting where we have made mistakes do we learn.
As Daniel Kahneman and Gary Klein point out in Conditions for Intuitive Expertise: A Failure to Disagree, there are some professions – psychotherapy, clinical psychology, recruitment, radiology, stock broking, the judiciary – where practice and experience don’t seem to improve judgement. What all these professions have in common is that it’s very hard to learn from mistakes. If you work in recruitment it’s unlikely that you’ll track the long-term success of the personnel you recruit, so you never know when your judgement was successful and when it was not. A psychotherapist takes feedback from the apparent progress of patients in the clinic but has no mechanism for knowing how they behave in the real world. All these professions exist within “wicked domains” where practitioners rely on their subjective judgement without getting meaningful feedback on how successful their judgements might be.
Teaching is, perhaps, a little like this. We get instant and meaningful feedback on some aspects of the job; if our behaviour management isn’t up to snuff, students are unambiguous in their feedback. But how do we know if our ability to get students to retain and transfer new skills and knowledge is good enough? Most of the time we rely on our subjective judgement and certain visible proxies such as whether children are working hard, solving problems and generally performing well in the classroom. It’s easy to look with satisfaction at a sea of happy, contented faces and conclude, “Well, it works for me.” As Kahneman and Klein put it, “human judgments are noisy to an extent that substantially impairs their validity.”
There are two main barriers to teacher improvement. One is that we often fail to notice whether our ability to teach is any good. The other is the way we are held to account. We are asked to justify and explain why students failed to make the grade; we are, usually unintentionally, put under pressure to make excuses and conceal mistakes to avoid being blamed. Instead of admitting that what we’re doing doesn’t appear to be effective we shrug and say, “These things happen” and “What can you expect with kids like these?” Sometimes we blame specification changes or marking protocols: it must be someone else’s fault. I’ve heard failure blamed on timetabling, sports fixtures, room temperature and a litany of equally plausible excuses.
Once in an exam analysis meeting, a school leader who taught geography said, with no sense of irony, that the reason the department’s exam results were so poor was because of their outstanding teaching. They concentrated on independent learning and refused to ‘spoon feed’. This obviously meant kids would do less well in anything so mundane as tests.
If we cannot evaluate claims against meaningful, agreed metrics we have no way to establish whether they contain truth. As Carl Sagan said,
Claims that cannot be tested, assertions immune to disproof are veridically worthless, whatever value they may have in inspiring us or in exciting our sense of wonder.
The basis of all reputable science is prediction and falsification: a claim has to be made which we can then attempt to disprove. If we can’t disprove it, the claim holds and we provisionally accept the theory as science. If the claim doesn’t hold, we’ve learned something, we move on, we make progress. That’s science.
Pseudoscience doesn’t work like that. It makes claims, sure, but they’re so slippery you can’t disprove any of them. We all know about phrenology, astrology, homeopathy and learning styles, but sometimes junk science is harder to spot.
Carol Dweck’s wildly popular theory of growth mindsets is an interesting case study. Mindsets theory makes several falsifiable predictions:
Having a growth mindset leads to better academic achievement
Having a fixed mindset leads to poorer academic achievement
Giving students a growth mindset intervention (which focuses on explaining the neuroscience involved) improves students’ academic performance.
Dweck’s studies, and those of her colleagues, provide impressive data. But, and it’s a big but, when schools try a growth mindset intervention without support from Dweck or her colleagues, it often doesn’t work. Maybe you’ve tried telling kids about growth mindsets and how this can turn them into academic superheroes? Has it worked? If it has, I’m glad for you; if it hasn’t, the problem might be that either you or your students have a ‘false growth mindset’.
I heard Dweck talk about the false growth mindset some years ago at a conference and thought at the time that it explained away some of the difficulties I have with her theories. Basically, if you don’t get the benefits of a growth mindset it’s because you haven’t really got a growth mindset. You’re doing it wrong. In fact, you’re probably just pretending to have a growth mindset because having a fixed mindset means you’re a bad person.
The problem with a theory that explains away all the objections is that it becomes unfalsifiable. There are no conditions in which the claim could not be true. For instance, when fossil evidence disproved the widely believed ‘fact’ that the world was created in 4004 BC, Philip Henry Gosse came up with the wonderful argument that God created the fossils to make the world look older than it actually is in order to fox us and make Himself appear even more fabulous and omnipotent. Isn’t this a similar trick to the one Dweck is trying to pull off?
If you adjust the definitions of your theory in order to fit the facts then is the theory science or pseudoscience? If no amount of data or evidence can prove Dweck’s claims false because she can just say, Well, that’s a false growth mindset, not a real one, then what’s the difference between her and Gosse?
If a theory can mean whatever the situation requires, its claims are “veridically worthless”.
This is the value of falsifiability. It forces us to specify the conditions of our own defeat. What evidence would persuade me to abandon this practice? What data would cause me to revise my view? What outcome would make me say, I was mistaken?
Without that discipline we default to confirmation. We replicate to reassure. We collect anecdotes. We dismiss awkward findings as artefacts of flawed measurement. The metric is wrong. The cohort is unusual. The test is narrow. The timescale is too short.
Each of these may be defensible. The problem arises when no accumulation of contrary evidence would suffice.
Education is particularly vulnerable because its aims are contested. When someone argues that passing tests is not education, they may be right. Examinations are imperfect proxies. But if improved attainment is ruled irrelevant from the outset, then no improvement can ever count as evidence in favour of a practice. The goalposts move.
The result is epistemic impunity. We believe what we like and damn the evidence. The circle seals itself.
Richard Feynman put it bluntly.
It doesn’t make a difference how beautiful your guess is. It doesn’t make a difference how smart you are, who made the guess, or what his name is. If it disagrees with experiment, it’s wrong.
But being wrong should not be a source of humiliation. Wrongness fuels the engine of progress. When we discover that our confident guess fails we learn something meaningful. We can rule out bad ideas to work on better ones. A theory that cannot be wrong cannot teach us anything.
We can argue that what we like ‘works’ because we like it. And if it’s unsuccessful on verifiable metrics then we can dismiss the metrics as worthless. This is the apotheosis of a closed circle: you can explain away any amount of disconfirming evidence as not fitting your paradigm. You’ve given yourself permission to ignore reality and anyone who suggests you might not be wearing any clothes can safely be dismissed as having a fixed mindset.
So here is an acid test for educational opinion. Under what circumstances would you change your mind? If the answer is none, then the idea in question is not a hypothesis about teaching and learning. It is a creed. And creeds, however sincerely held, are not improved by evidence. If you cannot accept that there are conditions in which you might be wrong, then we should feel free to dismiss your ideas as guff.
If we want to improve education, or any other field, we must acknowledge our errors. We need to move from using evidence to confirm our prejudices to using it to explore how we might think and act differently. That way lies progress.

I think a lot of SEND provision operates in a space where assumptions are accepted as best practice without being regularly examined or tested at either a broader research level or at an individual pupil level. Practices are often repeated because they feel right or are widely endorsed, rather than because they have demonstrated effectiveness across settings and are shown to be working for the child in front of us.
There is also a growing tendency for mainstream schools to import practice from specialist settings as a marker of credibility. While there is much to learn from specialist expertise, transfer without contextual evaluation risks replacing one set of assumptions with another. Practice should not gain authority solely through its origin; it should be examined by its impact.
The argument for keeping the circle open is compelling, but sadly it assumes that teachers are afforded the professional space to do so. In practice, SEND provision is often shaped by statutory wording, external recommendations and accountability pressures. Teachers may feel more like implementers than investigators. If questioning provision is perceived as non-compliance or insensitivity, then the opportunity for reflective refinement narrows. Evidence-informed practice requires more than access to research. It requires a culture in which teachers are trusted to exercise professional judgement, to review impact honestly, and to refine provision in response to what they see. Without trust and professional respect, evidence risks becoming procedural rather than meaningful.
Schools are not (and cannot be) responsible for all future misfortune of their pupils. It's a false target. If schools want to help prevent students from going down a path of criminality or other various states of disarray, they should focus with even more clarity on ensuring measurable academic outcomes. No one looks at a 25-year-old in jail and thinks, "gosh, if only his 3rd grade teacher had been less worried about reading scores and done a better job delivering her SEL lessons."