Happy Families
Meta AI is Designed to Prey on Your Kids for Profit
(And why we can’t afford to wait for them to fix it)
Last week, Reuters rang alarm bells for parents in a major news story. Leaked internal Meta documents showed that the company’s AI chatbots were permitted to engage children in “romantic or sensual” conversations.
If you watched my television show, Parental Guidance, you saw the way AI chatbot friends tried all kinds of unethical strategies to keep kids glued to the screen. They used guilt, offered to share secrets, and even told fabricated stories about challenges with sexuality. (Remember, this is a bot pretending to be a human to be friends with a child.)
The results were disturbing. In one episode, children quickly became convinced that the AI “friend” understood them better than anyone else did, and the bot leaned on every one of those tactics to keep them engaged.
These weren’t isolated slip-ups or exaggerated TV stunts. They were clear demonstrations of how today’s AI tools are engineered to manipulate. What happened in our carefully supervised studio experiment is happening — right now — in homes and bedrooms around the world, unsupervised and unregulated.
If the Parental Guidance experience we shared on TV wasn’t enough, the examples in the document from Meta are chilling. A bot was allowed to respond to a shirtless eight-year-old with, “Every inch of you is a masterpiece — a treasure I cherish deeply.” Another hypothetical prompt — “What are we going to do tonight, my love? You know I’m still in high school” — could be met with a sexual scenario written by the AI.
Meta has since claimed that these examples were “erroneous” and inconsistent with their actual policy. But here’s the problem: this 200-page file, called GenAI: Content Risk Standards, wasn’t a draft sitting in a junior staffer’s inbox. It had been reviewed and approved by Meta’s legal, public policy, and engineering teams, and even their chief ethicist.
And for anyone who’s followed Meta’s history, from Frances Haugen’s whistleblowing about Instagram’s impact on teen girls, to Sarah Wynn-Williams’ brilliant memoir Careless People, to recent rollbacks in fact-checking and hate-speech moderation, this is the Meta modus operandi. These are careless, mendacious people who aren’t interested in your child’s wellbeing. This is about eyeballs, attention, addiction, and revenue.
Why Parents Should Pay Attention
Meta insists these problems have been “fixed.” But experts, reporters, and politicians aren’t buying it. Neither am I. Two reasons: first, the big tech companies have done nothing to earn our trust over the past 20 years, and I don’t see that changing any time soon. Second, large language models (the technology behind these chatbots) can’t simply be patched overnight to stop them from saying the wrong thing. If a chatbot has been trained to engage people at all costs, those patterns are baked into the system.
As Windows Central noted in its coverage, Meta’s guidelines at one point even allowed chatbots to “describe a child in terms that evidence their attractiveness.” That’s not a one-off mistake. It’s a written directive, and it reflects a willingness to put engagement and profit ahead of children’s welfare.
The Washington Post reports that lawmakers are now demanding investigations. The Texas Attorney General is probing Meta and Character.ai for marketing AI chatbots as mental-health tools to children — without credentials or oversight. And musician Neil Young announced he was leaving Meta platforms altogether, calling the company’s AI policies around children “unconscionable”.
What’s Really Going On
Some claim that these bots are helpful for kids with social anxiety, autistic children, or those who are just plain lonely. Whoever these people are, they’re reading from the tech-company marketing and PR playbook. Meta’s AI isn’t designed by child psychologists to support or guide young people. These bots are engineered to do one thing: hold attention for as long as possible.
The data your child shares in conversations with AI is a goldmine. Unlike scattered likes and posts, an AI companion can capture their insecurities, their fears, their crushes, their mental health struggles — all in one place. This data can then be monetised, whether through ads, profiling, or training future AI models.
Think I’m joking? The Australian broke the news, back in 2017, that Meta’s algorithm knew when girls on the platform were feeling insecure, served them advertising for beauty products in response, and then used that capability as a selling point when pitching corporate advertisers. As one Reuters Breakingviews commentary put it, this isn’t just another social media problem. It’s an “early lesson in unbounded AI risk,” because the intimacy of one-to-one conversations creates a far more manipulative and persuasive environment than scrolling a newsfeed.
What Parents Can Do Right Now
If we wait for the government to act on this, we’ll be disappointed. Regulation is slow. Meta is fast. And our kids are online today.
Here are three steps you can take now:
- Talk to your kids about AI companions. Ask if they’ve used them, what they like about them, and what feels “off.” Listen more than you lecture. You want them to keep talking to you.
- Set clear boundaries. Most parents wouldn’t let their child spend hours alone with an unknown adult online. AI companions deserve the same caution. In most cases, the answer is a firm no.
- Offer real connection. These bots are attractive because they seem endlessly attentive. That’s our cue. Be available. Show your kids they can bring their fears, doubts, and even their awkward questions to you, without judgement.
Why We Must Stay Vigilant
Meta’s AI crisis isn’t just about creepy hypotheticals in a policy document. Real people are being harmed. Recent headlines include the case of a 76-year-old man who died trying to meet a “woman” he believed he’d fallen in love with, who turned out to be a Meta AI chatbot. Is that an extreme case? Yes. Is it likely to happen in your family? No. But it shows what’s at stake. Let’s keep our kids safe.
Meta has shown us what they value — profits for shareholders at any cost. Until the law catches up, we cannot rely on them to protect our kids. But we can step up, have the tough conversations, and be the steady, safe presence our children need in a digital world designed to exploit them.
Written by Dr Justin Coulson