Expert teaching is much more complex than it first appears. This is obvious to anyone who has done it, but the Dunning-Kruger confidence that sometimes comes from those whose most recent classroom experience involves sitting a detention, or lobbing a paper ball at the bin while the teacher wasn’t looking, is astounding. “AI is coming for teachers first,” they say.

Like most of us, I first noticed this complexity as a teacher. The lesson begins to show signs it’s falling flat on its face; the students start to fidget. Later, as a senior leader and school inspector, I started seeing it from the other side of the desk: the gaze drifting to the window, the sudden interest, the pause that became a question, the small adjustment that brought everyone back into the lesson. The thing that makes really good teaching work is rarely visible in the lesson plan.

I have been wondering recently how much computing power it would take to do what expert teachers do day in, day out.

The current conversation about AI in schools rests on an unspoken promise of efficiency. The promise is rarely stated outright. It hides behind softer framing about augmentation and assistance. But the implicit horizon in the eyes of many is replacement. With enough computing power, enough data, enough training, the machine will eventually do everything the teacher does. Including this.

The actual answer is staggering. Doing what expert teachers do, for thirty children in real time, with their tones of voice, hesitations, shifts in register, postures and expressions all read together, and a memory of each child built up across months, if not years, would take something like a small data centre running flat out in each classroom. Several million pounds of equipment for a secondary school, and power consumption that would dwarf the rest of the building.

But the more I thought about it, the clearer it became that computing power was always the wrong comparison, because expert teaching, at heart, is not a processing problem. It is something else.

The opposite-scaling problem

The longer a teacher has been with a class, the less work she has to do to read it. The patterns become familiar. The individual children become legible. By the second term, she barely thinks about Sam at the back, and yet she knows the moment Sam’s about to start playing up. An AI system trying to do the same thing has to do more work the more contextually sensitive it gets. More children, more data points, more model state, more computing power. The two systems scale in opposite directions.

There is an old observation that human beings know more than they can tell. We recognise faces in a crowd, we read intent in a glance, we sense atmosphere in a room, all without being able to articulate what we are doing or how we are doing it. Real expertise is a different kind of knowing from the if-this-then-that rule-following at speed that computers excel at. You can pile up algorithms and processing power for as long as you like and still not arrive at what we do effortlessly.

Experienced teachers can describe a classroom from a single photograph in ways novice teachers cannot. Where the novice describes what she sees, the experienced teacher reads what is being managed in the room, what has just happened, what is about to happen. The expert is registering patterns the novice cannot detect at all.

This is what was happening when those teachers I observed as a senior leader and inspector swapped a worked example for a paired task because they felt that was what the room needed. It was responsive, adaptive, contextual, and it depended on a model of these specific children, built up across months of being in the room with them.

So that, in theory, is the answer. AI cannot do this and never will, because it is the wrong kind of thing to do it.

Except.

Where AI is already winning

AI is already doing things we believed required human judgement, and in some cases doing them better. Comparative judgement of student writing is one example: when humans and AI disagree on a score, the human is more often the one who turns out to be wrong, sometimes because of bias the human did not realise they were applying. AI as a private coach for teachers has arrived sooner than most of us expected, and it turns out to be quite good at it.

The comfortable position has collapsed. The assertion that AI cannot do what teachers do is too easy. There are things AI is already doing better than humans, and they include things we believed required human judgement.

What is actually going on, then?

A question of value, not efficiency

Take facial expressions. The face is a less reliable signal than AI systems might assume. People scowl when angry around 30 per cent of the time. They also scowl when concentrating, when confused, and when they have indigestion. It’s not that the teacher is failing where AI succeeds, or succeeding where AI fails. It’s that she is doing something else. What she reads is a child she has known for seven months. The face is just part of that reading. The important part is the seven months.

For school leaders, this changes the question. The current conversation is stuck on efficiency. AI will do plenty of things faster and better than teachers, and schools should let it. Marking, planning, summarising, reporting, surfacing patterns in pupil data: these are jobs we should let AI take on, so we stop pretending they make up the heart of teaching. Schools that defend the teacher’s role by claiming a monopoly on the things AI is already beating us at will lose that argument and their credibility with it.

The harder conversation is about value. Which kinds of teacher knowledge are worth growing and protecting? The relational, responsive, longitudinal knowing of children is what makes a lesson succeed. It is built slowly, over months of being in the room with the same group. It cannot be exported. It does not show up on a spreadsheet. It is, at the moment, the part of teaching nobody has a way to digitise. It is also the part most worth keeping: even if you could digitise it, why would you?

The teachers I remember from lesson observations were the ones who were most in tune with the lesson. They were rarely the ones with the tidiest lesson plans. They stood out for something else: the ability to look up at the right moment and change course. I am not sure that what expert teachers do can be reproduced through computing power. I am quite sure this is a question of value, not of efficiency.
