When I first started talking to students and teachers about AI, the conversation always flickered between panic and possibility. Parents wanted to ban it. Professors wanted detectors and penalties. Employers wanted people who could use it well. The centre of gravity kept shifting, and I realised something important: what we’re really arguing about is not a piece of software but what we value in education.

We are used to thinking of learning as a private contest: you demonstrate what you know, the teacher measures it, and the grade is a signal of your competence. That model made sense in a world where information was scarce and tests were the best proxy for understanding. But the world has changed. The ability to memorise facts matters less than the capacity to sift truth from noise, to synthesise insight, and to ask better questions. And AI is forcing that change into the open.

Why “cheating” feels like the right word (at least to me)

Call it instinct: when someone hands in an assignment that wasn’t written by their own fingers, it looks and feels like deception. We learned a tidy moral language around plagiarism — copying the words of others without attribution. It’s easy to extend that language to AI. If a program writes the essay, that seems like someone else doing the work. Punish the deception, right?

But the instinct conflates authorship with thinking. There are jobs and daily tasks where you don’t craft everything yourself either. Architects use CAD; journalists rely on sources; analysts use spreadsheets with built-in formulas. Nobody says they’re cheating because they use a tool. The difference is that education has traditionally tested the process of knowing, not just the outcome.

The fear of cheating is valid. There are shortcuts that leave students hollow: ready-made essays that a learner cannot defend, code they cannot explain, presentations they can’t answer questions about. That outcome should worry teachers and employers alike. But the technology itself? It’s neutral. The meaningful question is: what do we want to measure and why?

Work vs school: two different value systems

A paradox I couldn’t ignore: in most workplaces, AI use is a mark of competence. If you can use a language model to draft a client brief, summarise a research paper, or generate a first pass of code — and then refine it — you’re seen as efficient and resourceful. Companies reward people who can use the best tools to produce better results faster.

Schools, on the other hand, have been slow to adapt their incentives. Exams favour recall. Essays often reward a polished end product over the messy labour that produced it. So students adapt: they learn the behaviour that wins grades. When the environment changes because a new tool arrives, the test remains the same. The result is a moral panic about cheating rather than a conversation about redesign.

What’s being tested? Skills or outputs?

This is the fork in the road. If we continue to design assessments that prize outputs alone — the finished essay, the final spreadsheet — we will naturally be tempted to ban any technology that helps create them. If we instead design assessments that prize the process, then tools that help with outputs become instruments we can interrogate, criticise, and master.

What does testing the process look like? It could be oral defences where students must explain reasoning. It could be annotated submissions showing how the tool was used, why certain outputs were accepted and others rejected, and which sources were checked. It could be collaborative projects where the ability to direct a tool and integrate its output into a team is the very skill that’s being measured.

The real risk: outsourcing judgement

Here’s the real worry: students who rely on AI without developing the faculties to evaluate its output get a short-term benefit and a long-term deficit. They might pass assignments, but they don’t build judgement. In professional life, that judgement is everything. Tools will give you proposals and next steps; what matters is whether you can tell which proposals are ethical, accurate, and aligned with real human needs.

Education’s job is to cultivate judgement. So when an AI writes an essay for a student, the teacher should ask: can the student explain the choices made? Can they trace the reasoning? Where did the AI hallucinate? Which sources did it use, and were they reliable? Those are the muscles education should develop.

Teaching the use of AI as a core skill

Imagine a classroom exercise where the explicit brief is: “Use an AI tool to produce a draft. Then annotate it, explain where it was right, and where it went wrong. Finally, improve the draft so it’s your own.” That assignment rewards exactly the capabilities the future workplace will need: tool fluency, critical review, synthesis, and authorship.

We forget that many of the tools professionals now rely on were once treated as cheating. Calculators, referencing software, even spellcheckers were once suspect. We adapted. We taught students how to use calculators properly: to estimate, to check plausibility, not just to trust the screen. AI requires the same growing up.

The teacher’s dilemma: fairness and trust

Some teachers worry about fairness. If one student uses AI and another does not, does that create an uneven playing field? Not necessarily. Good teaching practice already accounts for unequal access — by setting expectations and teaching the skill, not by pretending the inequality doesn’t exist.

There’s also a trust problem. The student–teacher relationship rests on trust, and assessment design can either protect it or erode it. If we simply slap punitive rules on AI, we may push the behaviour underground. Honest conversation, and design that makes honesty the better option, are more sustainable.

Preparing students for a future of partnership with machines

The point I always come back to is this: the best preparation we can give young people is not an old exam that values memorisation but a new curriculum that prizes discernment. We need to teach students how to interrogate sources, validate claims, and integrate AI sensibly into their workflows. We need to reward curiosity and the ability to own a product of thought, even when a machine had a hand in it.

And yes, that requires new assessments, teacher retraining, and a cultural shift in how we think about “effort.” Effort that looks like typing for eight hours might be less valuable than a ten-minute decision that saves thousands of pounds or avoids harm. The labour of judgement is often invisible.

A closing provocation: what do you think?

If you ask me whether students should be allowed to use AI, my reflex is not to ban but to scaffold. The better question is: How do we teach students to use AI in a way that makes them more thoughtful, not lazier? That’s the design challenge for educators and policy makers. It’s an opportunity to evolve how we teach rather than a reason to entrench outdated models.

Change is uncomfortable. But learning has always evolved with tools — from pen and paper to the internet. Treat AI as the latest tool worth mastering, and education stops being a gatekeeping ritual and becomes the preparation ground for a much more interesting future.