I am now well into my second decade of leading technology implementation in schools. Much of that work has focused less on devices themselves and more on the leadership judgement required to introduce change at scale in ways that remain faithful to good teaching and learning.
A familiar pattern tends to emerge when new technologies arrive. They are rarely allowed to be merely useful. Instead, they are expected to disrupt, transform or revolutionise education. Incremental improvement struggles to compete with sweeping promise.
In the early 2010s, I led my first one-to-one tablet rollout in a large all-through school in suburban London. It was ambitious, well resourced, and successful. But it taught me an early lesson. Whenever I said tablets *could* be used to do something, many colleagues heard that they *should* be.
The result was a series of well-intentioned but poorly judged proposals: abandoning pen and paper, discarding physical textbooks, repurposing the library, or redesigning lesson planning templates to mandate tablet use. Any one of these would have been damaging in isolation. However, they were often proposed as a package, revealing confusion between possibility and purpose.
What mattered most was not how often the technology appeared, but how well professional judgement was exercised. Our most effective training focused on teaching and learning rather than the devices themselves, so that teachers could decide when technology genuinely supported learning and when it did not.
If this sounds familiar, it should. We are living through a similar moment with artificial intelligence. Once again, capability is being mistaken for suitability. Because AI can be used almost everywhere, it’s easy to believe that it should be.
This is not an argument against AI. Used well, it is one of the most powerful tools education has seen in decades. But maturity does not lie in ubiquity. It lies in discernment.
Five things AI does well
1. Getting people started
AI is an effective starter motor. It helps people begin, and beginning matters. Early success builds motivation and momentum. By generating outlines, offering alternative phrasings or prompting ideas when thinking stalls, AI lowers the barrier to entry. For students facing the intimidation of a blank page, this can turn procrastination into movement. Of course, the risk arises when fluency is mistaken for mastery and the first draft becomes the final one.
2. Pattern recognition at scale
AI is strong at identifying patterns across large volumes of information: trends in survey data, recurring themes in feedback, inconsistencies across documents. Human attention is finite, and as volume increases our capacity to notice regularity or absence declines. AI can extend our perceptual range by surfacing patterns that merit closer scrutiny, allowing us to ask better questions. Interpretation and judgement, however, still sit with the human in charge.
3. Administrative compression
Timetabling, summarising, reformatting and collation absorb time and attention without deepening understanding. Used intentionally, AI can compress this work and give that time back to students, teachers and leaders to use more productively. In schools, where focus and attention can be scarce commodities, this matters. A moral case for AI can begin here: it frees people to pay attention to what really matters.
4. Iterative development
AI doesn’t get tired, take offence, or become defensive. You can explore multiple versions, test ideas and revise language without fear of judgement. Iteration is central to improvement, yet it is often constrained by time and confidence. Used for this purpose, AI supports experimentation and exploration, accelerating the cycle of trying, failing and refining on which improvement depends.
5. Making assumptions visible
AI can help translate half-formed thoughts into explicit propositions. Unarticulated ideas are difficult to challenge or contest. Once thinking is externalised, it becomes discussable and open to refinement or rejection. In leadership work, this shift from implicit to explicit thinking can improve decision-making.
Five things AI does not do well
1. Judgement in context
AI has no lived experience of your school, pupils or community. It does not carry the weight of past conversations or the moral texture of decisions that look straightforward on paper but feel different in practice. Take the hotel doorman: a consultant or an algorithm might suggest removing the role on efficiency grounds. What that misses is the value a doorman adds through welcome, safety, presence and professionalism. Professional judgement works the same way: contextual knowledge and wisdom are hard to optimise.
2. Relational trust
No one feels truly heard by a tool. Feedback, coaching and pastoral conversations depend on presence, tone and timing, and on sensitivity to what is not being said. A wellbeing chatbot may offer a helpful first step, particularly for those reluctant to speak, but it cannot notice hesitation, adjust in response, or carry responsibility for what follows. Simulated empathy is not the same as relational accountability.
3. Moral responsibility
When a decision goes wrong, “the AI suggested it” is not an acceptable answer. In schools, decisions carry pastoral weight and shape pupils’ sense of fairness, safety and belonging. Leadership requires moral ownership: standing behind decisions and being accountable not only for outcomes, but for intent. That kind of responsibility cannot be delegated to a tool.
4. Productive struggle
Not all difficulty is a problem to be removed. Some struggle is a precondition for learning. Cognitive offloading can be helpful, as when we use calculators or spellcheckers, but learning to write, reason and think clearly involves effort. AI is very good at smoothing that away. When learners outsource the thinking they need to do for themselves, outputs may initially appear to improve while knowledge and understanding gradually thin.
5. Calibration of understanding
AI, when used uncritically, is poor at helping learners judge what they actually know. Fluent explanations and polished solutions can create an illusion of understanding. Many learners recognise the experience of rereading notes that make sense, only to discover later that they cannot explain or apply the ideas. AI can intensify this effect by making work look finished before understanding is secure.
So when should we not use AI?
A useful rule of thumb is this: use AI when it helps us slow down and think more carefully; keep decisions human where presence, trust and judgement matter.
AI is most useful when it creates space to explore ideas, test possibilities and think before deciding. It is less suitable where relationships, care and moral clarity are central. The most sophisticated use of AI is not deploying it everywhere, but being able to explain clearly where and why it is deliberately not used.
The task ahead, then, is not to force more AI into our work, but to make thoughtful choices about what we preserve, protect and prioritise. Discerning use of AI lies in knowing when it helps and when it hinders.
“The most mature use of AI is knowing when not to use it.”