Many conversations about digital strategy in schools begin with the same questions: “what are the most innovative schools doing?” and “how do we do it too?” This seems reasonable. Benchmarking against successful implementations feels prudent, reassuring, and appropriately humble.
But Rory Sutherland, the advertising executive and behavioural science advocate, has a counterintuitive suggestion: rather than study what your competitors do well, he argues, study what they do badly. Find the gaps they ignore. Improve on the things no one else is bothering to fix. He calls this reverse benchmarking, and once you have heard the idea, you start seeing its logic everywhere.
I had an uplifting conversation recently with the good folks of Willow Learn that brought this concept into sharp focus. Uplifting because they understand that pedagogy comes first, and that listening to sceptics can be the fuel of progress. We were talking about AI adoption in schools, where the standard approach to digital strategy is, almost always, pure forward benchmarking. We look at what the most enthusiastic adopters are doing with AI and aim to replicate it. We package this as best practice, and when it inevitably doesn’t quite work in our context, we usually blame the tech rather than question our own assumptions.
What if the better question is not “what are the evangelists saying?” or “what are the trailblazers doing?” but “what are teachers unhappy about, and why?”
The people who put lifeboats on ships
There is a line I often find myself returning to: listen to pessimists, because they are the ones who put lifeboats on ships. Teacher concerns about AI are usually not irrational fears to be managed on the way to implementation. They are, in many cases, professional judgement doing exactly what it should.
When a teacher says she does not trust a particular tool, she is not being obstructive. She is applying domain expertise. She knows what good practice looks like, and she knows what her students need. And she is noticing, usually correctly, that the tool is not quite there yet.
Behind professional judgement lie years, sometimes decades, of hard-won knowledge and experience, which deserve to be taken seriously.
Disagreement is valuable data
This is why, when I work with school staff on AI, I deliberately survey for disagreement. Not to be provocative for its own sake, but because I think that rooms that produce only agreement are not thinking rooms; they are lecture rooms. The concerns that surface when people feel genuinely heard, about accuracy, about academic integrity, or about what is lost when a student outsources their struggle, are not obstacles to be smoothed away. They are the raw material of a more sustainable, lasting strategy.
Teacher resistance will always exist, and rightly so. The question is whether we treat it as fuel or as fumes.
The only way is through
The temptation, when introducing any new technology, is to treat pedagogical caution as friction. Friction slows things down, so our first instinct is to reduce it. But in a pedagogy-first approach, that friction is exactly what generates energy.
Schools that have embedded technology sustainably, where it has become part of the fabric rather than a recurring event on the inset calendar, have almost always done so by keeping the learning question central. Not "what can this tool do?" but "what do we want learners to be able to do, and does this genuinely help?"
That takes longer, at first. But it is the only route to the real measure of success: you know edtech has been genuinely embedded when nobody calls it edtech anymore. When it has simply become how things are done here. Folks must be bored of hearing me say that the goal isn't adoption; it is invisibility.
You cannot guide if you don’t know the terrain
But there is a factor that is routinely overlooked in the rush towards student-facing AI use: before teachers can make sound judgements about whether and how to allow students to use these tools, they need to understand them. Specifically, I think, they need to understand the limitations of these tools in their own subject domain.
A history teacher who has tested an AI’s understanding of historiography is in an entirely different position to exercise professional judgement than one who has only seen the enthusiastic slide deck at the inset day. The first knows where the tool fails. She can make an informed call. The second is operating on faith, and possibly passing that faith on to students who are almost always not equipped to calibrate it.
This is not a criticism of enthusiasm. Enthusiasm is necessary, but it is not sufficient.
Speed is not a strategy
The schools that will get AI genuinely right are not necessarily those that move fastest, but rather the ones that listen best. That means taking the sceptics as seriously as the evangelists; treating a teacher’s reservations not as an obstacle to navigate but as valuable intelligence to take on board; and recognising that the colleague who asks “but what happens when it goes wrong?” is not standing in the way of progress. She is trying to make it more sustainable.
First, put the lifeboats on the ship; only then are you ready to set sail.
Working on something for schools? Let's think together
If you are developing a product or platform for schools, the arguments in this piece are as relevant to you as they are to school leaders. The teachers who push back on your tool are not your problem. They are your best source of intelligence.
I work with edtech companies to help them think through the pedagogical foundations of what they are building, so that their products are more durable and sustainable. If that sounds like a conversation worth having, I would be glad to hear from you.
In the meantime, whether you are a school leader, educator, or edtech company, there are practical frameworks and resources to support AI adoption and digital strategy on the Free Resources page, all of them free to use.