Your kid trusts ChatGPT more than Google. That's a problem.

A kid asked ChatGPT to name the second-longest river in Europe. It confidently described one that doesn't exist, complete with kilometers and countries. She copied it straight into her assignment. AI hallucinations hit different when kids can't tell the difference.

This is the thing about AI hallucinations that makes them different from a wrong Google result. Google gives you a list of links and you pick one. If the top result looks sketchy, you scroll. There's friction. With ChatGPT, you get one confident answer in a complete sentence, formatted like a fact. No hedging, no "sources vary." Just: here's the truth.

Except sometimes it isn't.


The world's most confident 'liar'

I'm writing a Dutch children's book about AI called "De Slimste Papegaai" ("The Smartest Parrot").

One of the chapters is about a parrot that repeats everything it hears. The parrot sounds incredibly smart because it strings together sentences that sound right. But it has no idea what any of it means. If it has heard the word "kaas" (cheese) a thousand times next to "geel" (yellow), it will tell you cheese is yellow. Ask it what cheese tastes like and it might say "yellow."

That's what language models do. They predict the next word based on patterns in their training data. When the patterns align with reality, you get accurate answers. When they don't, you get made-up European river names.
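If you're curious, you can see the whole mechanism in miniature. The sketch below is a toy bigram model, nothing like a real language model in scale, but the same core move: count which words tend to follow which, then chain confident guesses together. The tiny training text and the sample output are mine, invented for illustration.

```python
import random
from collections import defaultdict

# Tiny "training data". Note it never says what cheese tastes like;
# it only puts certain words next to each other, over and over.
corpus = (
    "cheese is yellow . cheese is tasty . "
    "the moon is yellow . the moon is made of cheese ."
).split()

# Count, for every word, which words have followed it.
follows = defaultdict(list)
for word, next_word in zip(corpus, corpus[1:]):
    follows[word].append(next_word)

def babble(start, length=8):
    """Generate a sentence by always guessing a plausible next word."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(babble("cheese"))
# Prints something like: "cheese is yellow . the moon is made of cheese"
# Fluent, confident, and grounded in nothing but word statistics.
```

The output is grammatical and sure of itself, and it knows nothing. Real models do this with trillions of words and far more context, which is why they're right so often, and why they're so convincing when they're not.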

The technical term is "hallucination." I hate that word because it implies the AI is experiencing something. It isn't. It's just filling in blanks with whatever is statistically likely. A better word would be "confabulation," which is what doctors call it when a brain-damaged patient invents memories without knowing they're fake. The patient isn't lying. They genuinely believe what they're saying. Same energy.


Why kids are especially vulnerable

Adults have a few built-in advantages when dealing with confident nonsense. We've been lied to before. We've seen authoritative-looking websites turn out to be garbage. We have years of "does this smell right?" instinct.

Kids don't have that yet.

Kids use ChatGPT during homework, during tests, sometimes during class. The math questions usually work out fine. But ask it a history question, a biology fact, a specific date, and things get interesting. It might mix up two events that happened three years apart. Not wildly wrong, just wrong enough that it looks right. The kind of mistake you'd only catch if you already knew the answer.

Most teachers don't even realize kids are using it. The errors just end up in notebooks and assignments, unchallenged, looking like facts.

A Brookings report from early 2026 put it clearly: AI tools "prioritize speed and engagement over learning and well-being" and generate "confidently presented misinformation." The hallucination rate across major models still sits around 9% for general knowledge. For questions about specific people? Some models hit 33% to 48% error rates.

Nearly one in ten general answers might be wrong. For specific facts, it could be one in three.
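And per-fact error rates compound. Here's a back-of-the-envelope calculation, assuming each fact in a piece of homework is an independent roll of the dice at the 9% rate above (a simplification, but it gives the right intuition):

```python
# Back-of-the-envelope: if each fact has a 9% chance of being wrong,
# what are the odds a ten-fact assignment contains at least one error?
p_wrong = 0.09   # assumed per-fact error rate for general knowledge
facts = 10       # facts in one assignment

# One minus the chance that every single fact is right.
p_at_least_one = 1 - (1 - p_wrong) ** facts
print(f"{p_at_least_one:.0%}")  # about 61%
```

About 61%. A ten-fact assignment is more likely than not to contain at least one hallucination somewhere.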


How I explain it at home

I've tried three different approaches, matched to my kids' ages. Here's what stuck.

For my eleven-year-old: The confident friend analogy.

"You know that friend who always sounds like they know the answer, even when they don't? And sometimes they're right, which makes it harder to tell when they're wrong? ChatGPT is like that. Really helpful, but you still need to check."

She gets this immediately because she has that friend. Every kid does.

For my fourteen-year-old: The autocomplete explanation.

I opened my phone, typed "the capital of" in a text message, and showed her the autocomplete suggestions. "See how your phone guesses what comes next? ChatGPT does the same thing, but with everything. It guesses the next word, then the next, then the next. Sometimes the guess is right. Sometimes it builds a sentence that sounds perfect but describes something that never happened."

Then I had her ask ChatGPT something she already knew the answer to. A fact from a test she'd just studied for. It got the answer slightly wrong. That three-second experiment did more than twenty minutes of explaining.

For my sixteen-year-old: The statistics angle.

"About 9% of the facts ChatGPT gives you are wrong. For specific people and dates, it's closer to a third. Would you trust a textbook that was wrong a third of the time?"

She's old enough to understand probabilities and to find that genuinely alarming. Good. Alarmed is appropriate.


The dinner table test

Here's something I want to try, and I'd recommend it to any parent. Ask ChatGPT a question during dinner. Then have everyone guess whether the answer is right or wrong before checking.

Turn it into a game. "Is that real or did it hallucinate?" Kids love catching adults being wrong. They'll love catching AI being wrong even more. And over time, they'll develop an instinct for the smell of a hallucination. Answers that are too specific, too clean, too perfectly structured. Real facts are messy. AI facts are suspiciously tidy.

Try it with a made-up-sounding question. Ask ChatGPT to describe a scientific study about something your kids care about. Half the time it'll invent an author, a university, and a publication year for a study that doesn't exist. Let your kids be the ones to figure that out.


What actually helps

Banning AI doesn't work. I wrote about this two weeks ago and I'll keep saying it. The tool is everywhere and it's useful. What works is teaching kids to treat it like a first draft, not a final answer.

Some things that could help:

The two-source rule. If ChatGPT tells you a fact for schoolwork, find it in a second source before you use it. If you can't find it anywhere else, it probably doesn't exist.

Ask it to show its work. Newer models can cite sources. Teach your kids to ask "where did you get that?" and then actually check the source. Half the time the source doesn't say what ChatGPT claims it says. That's a lesson worth learning.

Normalize being wrong. The reason kids trust AI so easily is that they associate confidence with correctness. If you create a home where being wrong is normal, where you as a parent say "I don't know, let me check," they'll extend that same healthy skepticism to their AI tools.

Let them catch you. Sometimes I deliberately leave an AI-generated mistake uncorrected and wait to see if the kids spot it. Builds the muscle.


The real concern

My worry isn't that AI gives wrong answers. My worry is that a generation of kids grows up thinking research means "asking one source and accepting whatever it says." That's not an AI problem. That's a critical thinking problem that AI makes worse because it's so good at sounding right.

New York City schools just announced they're rolling out AI tools across classrooms. The deputy chancellor specifically said they need to make sure "kids can recognize hallucinations" before deploying them widely. That's encouraging. It would be more encouraging if that were the default, not the exception.

You won't fix this with one conversation. You probably won't fix it with ten. But the first time your kid says "wait, let me check that," you'll know something clicked.

That's all you're going for. Not perfect skepticism. Just a pause before they believe.


Raising kids in the AI age

This is part of the "Raising Kids in the AI Age" series. I'm a dad with three daughters, not an expert. I'm figuring this out as I go — and writing about it so you don't have to start from zero.
