Your daughter's photo is one app away from being fake-naked

AI "nudify" apps can turn any photo into a fake nude in seconds. As a dad with three daughters, I'm not waiting for schools to figure this out. Here's what I'm actually doing — and what I wish someone had told me six months ago.

Last week, I read that students at a school in Berchem, Belgium, used AI to create a sexual deepfake video of their teacher. The week before that, girls at two Pennsylvania high schools, one of them in Radnor, discovered that a classmate had been running their Instagram photos through a "nudify" app. In South Korea, thousands of students were targeted the same way, many of them minors.

And then there's Grok, Elon Musk's AI chatbot, which until very recently would happily generate explicit images of real people, including children, from a regular photo.

I have three daughters. Eleven, fourteen, and sixteen. You do the math on how that makes me feel.

This is not a someday problem. Not in five years. Not "if the technology advances." Right now, today, with a free app and thirty seconds.


This isn't a tech problem

Here's what makes this different from every other online safety scare: the barrier is gone.

Making a fake nude image used to require Photoshop skills and hours of work. Now it requires typing a name. Some apps don't even need that — just a photo. Any photo. A school picture. A beach holiday snap from someone's public Instagram.

The result looks real enough to destroy someone's reputation, mental health, and sense of safety.

And here's the part that should terrify every parent: in most countries, the law hasn't caught up. The UK announced just last week that it's extending its Online Safety Act to cover AI chatbots. The EU is investigating Grok. But enforcement? We're not there yet.

Your kid's school almost certainly doesn't have a policy for this. Most teachers don't know what a "nudify app" is. Some parents don't either.

I didn't, six months ago.


What I actually worry about

I'll be honest: my first instinct was to take my kids' phones away. Lock everything down. Go full helicopter parent.

But I know that doesn't work. Kids who grow up with restrictions and no understanding just learn to hide things better. The research backs this up. UNICRI, the United Nations' crime and justice research institute, put it bluntly: "forbidding technology often backfires."

So instead, I worry about three things:

1. My daughters as potential victims. All it takes is one classmate with the wrong app and a grudge. Or no grudge at all — just curiosity and poor judgment. A fourteen-year-old brain is not great at predicting consequences.

2. My daughters as potential bystanders. What do they do when someone shares a fake image in a group chat? Do they know it's a crime? Do they know how to respond?

3. My daughters not telling me. This is the big one. Most victims of deepfake abuse don't tell their parents. They're ashamed. They think it's somehow their fault. By the time adults find out, the damage is done.


What I'm actually doing (no expert badge required)

I don't have this figured out. But here's what I've tried so far, as a dad who builds software and knows just enough to be scared:

The "nudify app" conversation. I told my older two, straight up, that these apps exist. That anyone's photo can be used. That it's not their fault if it happens to them, and that sharing someone else's fake image is a crime in most of Europe. They were shocked. Not at the technology — at how easy it was.

The group chat rule. We agreed on a simple rule: if something shows up in a group chat that looks like it could be fake or harmful, screenshot it and tell me. No judgment. No lecture. I just need to know.

Making it not-weird to talk about. This is the hardest part. I bring it up casually, not as A Big Serious Talk. "Hey, did you see that thing about Grok?" works better than sitting them down at the kitchen table with grave faces.

Checking privacy settings together. We went through Instagram and TikTok profiles together. Not to spy — to understand what's public.


What schools should be doing (but mostly aren't)

A school in Belgium now has a "mediawijs beleid" — a media literacy policy with rules about deepfakes, cyberbullying, and sexting. That's great. Most schools don't have anything close to this.

If your kid's school hasn't mentioned AI-generated images once this year, that's a problem. Not because schools should have all the answers, but because silence tells kids this isn't important enough to discuss.

Ask the school. Seriously. One email: "What is your policy on AI-generated images involving students?" If they don't have one, you've just given them a reason to create one.


The uncomfortable truth

We can't prevent this technology from existing. We can't prevent our kids from encountering it. And we definitely can't prevent every misuse.

What we can do is make sure our kids understand three things:

1. These images are not real, but the harm is. A fake nude can cause real trauma, real social damage, and real legal consequences — for the victim and the creator.

2. Creating or sharing one is a crime. In the Netherlands, it falls under existing legislation on image-based sexual abuse. In the UK, it's explicitly covered under the Online Safety Act. Your kids need to know this before someone in their class discovers it the hard way.

3. If it happens to you, it's not your fault. And you can tell someone. Preferably a parent, but also a teacher, a counselor, or an organization like HelpWanted.nl (NL) or Childline (UK).

I'm writing a small Dutch children's book about how AI works. It's called "De Slimste Papegaai" — The Smartest Parrot. One of the chapters deals with the question: how do you know if something is real?

I wrote that chapter before I read about Berchem. Before I read about Radnor. Before I checked my daughters' Instagram settings.

Now that chapter feels less like education and more like a warning.


This is part of the "Raising Kids in the AI Age" series. I'm a dad with three daughters, not an expert. I'm figuring this out as I go — and writing about it so you don't have to start from zero.

Next week: How to explain AI hallucinations to your kid (and why "the computer made it up" is actually a great starting point).