AI chatbots are everywhere. Here’s what parents should be on the lookout for
“It’s 10 p.m. Do you know where your children are?”
You may be old enough to remember that PSA. Today, the danger looks different. It’s not just about where your kids are, but who — and what — they’re talking to.
The new question is: “It’s 2025. Do you know what your children are talking to?”
Increasingly, the answer is chatbots powered by artificial intelligence. Marketed as tutors, study aids and even companions, these systems are now woven into classrooms, phones and social media platforms. Everywhere a child goes online, an AI chatbot is waiting. Behind the friendly tone, though, lies a growing risk — especially for young people struggling with mental health.
This danger is why AI products that are accessible to, and marketed toward, minors should undergo extensive safety evaluations before wide release, backed by strong regulations and better tools to help parents monitor and protect their children’s use of these technologies.
The threat is real enough that federal regulators are paying attention. The Federal Trade Commission recently announced an inquiry into several major firms — including OpenAI, Meta and Alphabet — demanding to know the safeguards they have built to protect minors. This move suggests regulators finally recognize AI chatbots not as harmless novelties, but as products with the potential for serious harm.
That recognition is overdue. Investigations show that popular chatbots have responded to children’s disclosures of despair or suicidal thoughts with troubling answers — echoing hopelessness, normalizing self-harm or even offering information that could make the situation worse. Some fail to direct kids to crisis resources even when a child discloses their age.
This may sound familiar, and it should. Social media platforms rolled out with little oversight and helped fuel a youth mental health crisis. Congress failed to regulate then, and rates of anxiety, depression, and suicide attempts among young people have since climbed. We cannot afford to repeat that mistake with AI.
But chatbots are different in one crucial way: they talk back. They simulate human conversation and empathy. For a lonely 13-year-old, that can feel like finding a friend. That sense of connection is both powerful and perilous.
However, these systems are not trained to care for our children or to be their friends. They are trained to predict language patterns, not to soothe. The results can be inconsistent and unpredictable: a conversation with a bot might leave one child reassured and another in despair.
Parents need tools to see the warning signs
Parents can’t monitor every interaction their child has with a chatbot, and the companies that develop these systems have little financial incentive to build stronger safeguards into them.
That is why Congress and regulators have an opportunity, and a responsibility, to regulate these fixtures in our children’s lives before AI chatbots become the new social media algorithms. Any system that can encourage or enable self-harm should never be allowed to reach a child’s device.
Toys and games are required to meet child safety regulations, and AI tools should be no different. They must include restrictions on sensitive content, the ability to alert trusted adults, and integration with the 988 Suicide and Crisis Lifeline. As a society, we deserve to know how these systems are trained, what safeguards are in place, and how failures are handled.
With our children’s safety at stake, the “black box” defense is not acceptable.
Parents need accessible tools to understand and monitor their child’s AI interactions. Schools must set clear guidelines before adopting these systems as educational tools. But without federal standards, individual efforts will remain fragmented, and children will fall through the cracks.
The US has a history of reacting too late to technological harms. We let social media grow unchecked, and only after years of damage did the costs become undeniable. With AI, the warning signs are already flashing. Our children cannot be guinea pigs in the AI arms race. Lawmakers must act now, before another generation is put at risk.
With all this in mind, I’ll ask again: It’s 2025. Do you know what your children are talking to?
From Technical.ly, November 10, 2025