
Last Thursday I had the pleasure of speaking at the #EducationInsights event organised by The Teaching Awards Trust at the very fancy Bloomsbury Ballroom in London. The event focused on metacognition, and I’m very grateful for the invitation and the opportunity to contribute to such a thoughtful conversation about teaching, learning, and the evolving role of AI in education. I’ve included a transcript of my talk below.

We tend to talk about technology in revolutionary terms. “It will transform how we teach and learn”, or “it will completely disrupt education as we know it”.

It is hardly surprising, then, that the conversation around AI in education can quickly become hyperbolic: Which jobs will it eliminate? Are schools becoming obsolete?

These are understandable questions. But if history teaches us anything, it is that the most ardent prophets of technological revolution are usually disappointed, and I increasingly think such questions are the wrong place to start.

A more important question in the context of AI and education is this: who is doing the thinking?

Why metacognition matters

The Education Endowment Foundation frames metacognition as learners’ ability to plan, monitor and evaluate their learning. It does not replace subject knowledge or motivation, but rather it works in concert with both.

As I understand it, metacognition is how learners learn when they put their minds to it.

Self-regulated learners do not just complete tasks. They make decisions about how to approach them. They notice when something is not working. They adjust. They reflect. And, over time, these habits become internalised.

Crucially, though, the EEF is clear that metacognition is not a generic skill. It is not something that floats free of subject knowledge. It must be taught explicitly, modelled carefully, and embedded, even embodied, in real learning contexts.

That framing is helpful, because it gives us a lens through which to examine AI.

Where AI can help metacognition

Let’s start with how AI can help. Used deliberately, AI can support several aspects of metacognitive development.

First, planning.

AI tools can help students break down tasks, organise revision, generate study questions, or map out steps in a complex piece of work. For some students, this kind of structured prompting can be genuinely enabling.

Second, monitoring understanding.

One of cognitive psychology’s strongest messages is the importance of self-testing and checking progress. AI can generate quizzes, explain misconceptions in different ways, and provide rapid feedback. Used well, this can strengthen students’ ability to notice what they know and what they don’t.

Third, metacognitive dialogue.

Modelling thinking aloud and purposeful classroom talk are both well-established ways of supporting learning. Used carefully, AI can play a role here too: prompting reflection, asking “why” and “how” questions, or allowing students to rehearse explanations or arguments before sharing them with others.

Used this way, AI is not doing the thinking for students. It is provoking thinking.

Where AI can undermine learning

The difficulty, though, is that AI does not usually operate in this supportive mode by default, and students do not always approach it with that intention.

Its default mode is efficiency. And efficiency is not usually the friend of learning.

The first and most obvious risk is cognitive offloading.

When AI provides drafts, summarises a chapter, or solves a problem instantly, it removes the need for learners to plan, monitor or evaluate. The task is completed, but the learning loop is broken.

Students develop metacognitive strategies through challenge and deliberate effort. If AI removes the struggle entirely, they lose the opportunity to practise regulation.

A second risk is the illusion of competence.

AI outputs often sound confident, fluent, and authoritative. Students can mistake this fluency for understanding. Learners are often poor judges of how well they have learned something, and AI can amplify that misjudgement.

Finally, there is the issue of motivation.

Self-regulated learning depends on effort, persistence, and delayed gratification. When everything becomes instant, frictionless and easy, motivation can dissipate. Students become consumers of answers rather than agents in their own learning.

This is not about assuming students are intentionally trying to bypass learning. It is about understanding how the tools they use can shape their habits.

A more balanced stance

So what does a sensible position look like? I would not ban AI, but neither would I embrace it uncritically. Instead, I would integrate its use through a metacognitive lens.

And to do so, I would suggest three principles:

First, AI should scaffold thinking, not replace it.

AI tools can be used to support planning, monitoring and evaluation, provided they are deliberately structured to reinforce those processes rather than bypass them.

Second, AI literacy is metacognitive literacy.

Students need to learn when AI is helpful, when it is misleading, and when it should not be used at all. That is not a technical skill. It is critical thinking and self-regulation in action.

Third, we must protect desirable difficulty.

Learning requires challenge at the right level. If AI removes cognitive effort altogether, it may be that the tasks themselves need to be reconsidered.

A final reflection

I don’t think AI will replace thinking altogether, but it can reduce opportunities for thinking.

If metacognition is about self-awareness, regulation and purposeful learning, then the real question is whether we can use AI intentionally enough to ensure that human thinking and decision making remain firmly in the loop.

Because, at the end of the day, if you are not doing the thinking, you are not doing the learning.

Photos courtesy of The Teaching Awards Trust.
