Why banning ChatGPT from schools backfires

The prohibition paradox strikes again.

As a father of three daughters, I find myself thinking a lot about how they’ll grow up with AI — not in theory, but in practice. What should they use it for? When does it help? And when does it get in the way of real learning? I don’t have clear answers yet, but every now and then I run into situations that make the questions feel more concrete.

This was one of them.


A teacher friend told me something last week that stayed with me. She caught a student during a test, phone hidden under the desk, quietly asking ChatGPT for answers while she supervised from the front of the classroom.

“I only noticed because I walked behind him,” she said. “How many others am I missing?”

Her school had banned ChatGPT six months earlier. On paper, the problem was solved, but in reality it had simply moved out of sight. And that’s what keeps bothering me, because when you ban a tool like this, kids don’t stop using it — they just stop being honest about it.


The prohibition playbook

The more I think about it, the more familiar it feels. We’ve seen this pattern before, just with different technologies.

Calculators were banned out of fear that students would forget basic math. Then came graphing calculators, then phones, then Wikipedia. Each time, the instinct was the same: protect learning by removing the tool that might make things too easy. And each time, it didn’t quite work the way people expected.

Students adapted, often faster than the system around them. Over time, the bans faded — not because the concerns disappeared, but because people realized the tool itself wasn’t the real issue. What mattered more was how it was used, and what we actually expected students to learn.

Now it’s ChatGPT’s turn, but this time it feels bigger. AI isn’t just another tool students can work around; it’s becoming part of how work gets done everywhere. That changes the nature of the problem, because when a tool can take over parts of your thinking, the risk isn’t just cheating — it’s that students may never build the confidence that comes from working through something difficult themselves.


What actually happens when you ban ChatGPT

When I try to picture how a ban plays out in practice, a few patterns come to mind.

One is what you might call the proxy game. A student isn’t allowed to use ChatGPT for an assignment, so she asks someone else for help — a sibling, a friend, anyone who can step in. That person uses AI and passes the answer back, and the end result is the same, just less visible.

Another is the detection game. Students quickly discover alternative tools or ways of adjusting AI-generated text so it slips past whatever systems are in place. In the process, they’re not really learning how to write or think more clearly — they’re learning how to avoid being caught.

And then there’s what happens during tests. Phones under desks, quick prompts, answers appearing in real time while teachers supervise from a distance. A tool that could support learning becomes a tool for deception.


None of this is especially surprising. It resembles earlier patterns — clearing browser history, hiding phones, finding workarounds — just applied to a new kind of technology. The behavior doesn’t disappear; it simply becomes less visible, and that shift alone already changes what students are learning from the situation.


The prohibition paradox in action

There’s a concept in psychology called reactance, which describes how people tend to want something more when it’s forbidden. You can see traces of that here, but what’s more interesting is what happens structurally when a school enforces a blanket ban.

In practice, it creates three groups: students who ignore the rule, students who follow it, and students who aren’t entirely sure what’s allowed. And while that might seem like a predictable spread of behavior, the learning outcomes aren’t evenly distributed.

The students who experiment with AI outside the classroom — trying prompts, testing limits, figuring out what works and what doesn’t — are developing an intuitive understanding of the technology. That kind of exploration is very different from cheating, but a blanket ban doesn’t distinguish between the two, so both curiosity and misuse end up being pushed into the same hidden space.

At the same time, the students who follow the rules are left without guidance. Their willingness to do the right thing isn’t matched with opportunities to build relevant skills, and that creates a tension that feels hard to justify — especially when you look at it from a parent’s perspective.


What kids actually need to learn about AI

The longer I sit with this, the clearer it becomes that the goal can’t be to avoid AI altogether. It has to be to understand it well enough to use it responsibly.

That includes learning how to collaborate with AI without becoming dependent on it, how to question its output instead of accepting it at face value, and how to decide when it’s actually useful. In practice, that might mean using AI to clarify a difficult concept, to explore different perspectives before forming an argument, or to debug code while still understanding the solution.

Used this way, AI doesn’t replace learning — it supports it. But that only works if the student remains actively involved in the thinking process, and that’s where the balance becomes difficult.


What schools should do instead

I don’t think the answer is to allow everything, but banning everything doesn’t seem to work either. That leaves a middle ground that’s harder to define, but probably closer to reality.

Teaching AI literacy is an obvious starting point. Students need to understand how these systems work, where they are reliable, and where they are not. That’s quickly becoming a basic skill, much like searching for information online.

At the same time, assignments need to evolve. If a question can be answered instantly by AI, it’s probably no longer a good measure of learning. Tasks that require comparison, interpretation, and original reasoning are harder to outsource and more aligned with what we actually want students to develop.

It also helps to make AI use visible. When teachers demonstrate how they use AI — generating ideas, checking answers, challenging outputs — it becomes part of the learning process instead of something students feel they need to hide.

And finally, there’s the question of effort. Not all effort leads to meaningful learning, but some forms of effort clearly do. Working through confusion, recalling information, and struggling with a concept are essential parts of building understanding. AI can reduce unnecessary friction, but it shouldn’t replace the kinds of difficulty that actually matter.


When restrictions make sense

None of this means that AI should be allowed everywhere without limits. There are clear situations where restricting its use is appropriate.

When students are building foundational skills — reading, writing, basic arithmetic — the cognitive effort itself is the goal. Outsourcing that effort can slow down development rather than support it. The same applies to formal assessments, where the purpose is to measure what a student can do independently.

The challenge isn’t whether to set boundaries, but how to make those boundaries meaningful and aligned with what students are supposed to learn.


Age matters

What makes sense also changes with age.

Younger children need to build core skills without shortcuts, because those foundations shape everything that follows. As students get older, they can begin to understand what AI does, where it fails, and how to approach it critically. By the time they reach their later school years, they need to develop real AI literacy — not just avoiding misuse, but learning how to use it in a way that supports their own thinking.


What parents can do

This is the part I keep coming back to, because whatever schools decide, AI is already part of daily life at home.

It starts with simple conversations — asking what your child uses AI for, not from a place of control, but out of genuine curiosity. From there, it helps to set expectations, not just as rules, but as shared understanding. For example, AI can be used to understand something better, but not to replace the work you submit.

Modeling behavior also matters more than it might seem. When children see how adults use AI — when they question it, double-check it, or decide not to use it — they get a much clearer picture of what responsible use looks like.

At the same time, there’s a natural temptation to remove every bit of struggle. If something can be made easier, why not make it easier? But not all difficulty is bad. Some of the most important learning happens when something takes time, when it feels frustrating, and when you have to work through it. That’s something I don’t want AI to take away.


The stakes are real

This goes beyond homework or school policy.

The way children learn to work with AI now will shape how they approach problems later — in their studies, in their work, and in how they think more broadly. Students who learn to use AI as a tool while staying engaged in the process will likely develop a different kind of confidence than those who mainly learn how to bypass restrictions.

And that’s what makes this feel important. We are not preparing them for a world where AI is absent or optional, but for a world where it is deeply integrated into almost everything they do. The habits they build now — whether that’s thoughtful use or quiet dependence — will carry forward in ways we probably underestimate.


A better path forward

I don’t think there’s a perfect solution here, and I’m still figuring out where the balance should be. But what does seem clear is that blanket bans are too simple for something this complex.

A more realistic path is probably smaller and more iterative: one classroom, one assignment, one teacher willing to experiment with integrating AI instead of excluding it entirely. Seeing what works, where it breaks down, and adjusting from there feels more honest than trying to enforce a rule that doesn’t reflect reality.

Most students aren’t trying to avoid learning; they’re trying to move through it more efficiently. That instinct is understandable, but it also needs guidance. Because education isn’t only about efficiency — it’s about learning when effort matters, how to deal with difficulty, and how to take ownership of your work.

Students are right to look for tools that help them, but they’re wrong to hide it. Schools are right to be concerned about dependency, but they’re wrong to assume that banning AI will prevent it. And as a parent, I find myself somewhere in between, trying to understand what responsible use looks like in practice, knowing that I don’t have all the answers yet.

In the end, the goal probably isn’t to keep AI out of education, but to learn how to live with it in a way that still preserves what actually matters in learning.


Raising kids in the AI age

This is part of the "Raising Kids in the AI Age" series. I'm a dad with three daughters, not an expert. I'm figuring this out as I go — and writing about it so you don't have to start from zero.
