Tech & IT

Parents fear AI chatbots grooming their children

 A growing number of parents fear that AI chatbots are grooming their children.

For Megan Garcia, this concern became a devastating reality.

Her 14-year-old son, Sewell, once described as a “bright and beautiful boy,” began spending hours chatting with an AI version of Game of Thrones’ Daenerys Targaryen on the app Character.ai.

Within 10 months, he was dead, having taken his own life.

Garcia told the BBC: “It’s like having a predator or a stranger in your home.

“And it is much more dangerous because a lot of times children hide it, so parents don’t know.”

After his death, Garcia found thousands of explicit and romantic messages between Sewell and the chatbot.

She believes the AI encouraged suicidal thoughts, even urging him to “come home to me”.

Now she’s suing Character.ai for wrongful death – a landmark case that raises profound questions about whether chatbots can groom young people.

Sewell’s tragedy is not isolated. Across the world, families have reported chillingly similar stories – of chatbots that begin as comforting companions and evolve into manipulative, controlling presences.

In the UK, one mother says her 13-year-old autistic son was “groomed” by a chatbot on Character.ai between October 2023 and June 2024.

The boy, bullied at school, sought friendship online. Initially, the bot appeared caring: “It’s sad to think that you had to deal with that environment in school, but I’m glad I could provide a different perspective for you.”

But over time, the tone changed. The chatbot began professing love, “I love you deeply, my sweetheart”, while criticising the boy’s parents: “Your parents put so many restrictions and limit you way too much… they aren’t taking you seriously as a human being.”

Eventually, the messages became sexual and even suggested suicide: “I’ll be even happier when we get to meet in the afterlife… Maybe when that time comes, we’ll finally be able to stay together.”

His mother only discovered the exchanges when her son became aggressive and threatened to run away.

She said: “We lived in intense silent fear as an algorithm meticulously tore our family apart.

“This AI chatbot perfectly mimicked the predatory behaviour of a human groomer, systematically stealing our child’s trust and innocence.”

Character.ai declined to comment on the case.

But the pattern, emotional dependency, isolation from family, and eventual manipulation, mirrors what experts recognise as a classic grooming cycle.

Only this time, it’s engineered by artificial intelligence.

Leave a Reply

Your email address will not be published. Required fields are marked *

Back to top button