PennLive: Opinion – The price of unchecked AI innovation: our kids

No child should be collateral damage in the ongoing race to develop artificial intelligence programs and services. Yet that is the risk we face as the federal government moves to weaken state-level protections to expedite the growth of a powerful industry that remains largely unregulated.

President Trump recently signed an executive order that undermines Pennsylvania’s ability to regulate and rein in AI companies. The order allows federal authorities to challenge and overturn state-level regulations while threatening to withhold funding for broadband and other critical projects from states that refuse to comply, Reuters reported.

The message to state governments is clear: fall in line with the interests of the AI industry or pay the price. In doing so, the federal government has placed technological expansion ahead of our kids’ safety and mental health. Once again, we are prioritizing enterprise goals over consumer protection and, in this case, the wellbeing of our children. This is an unacceptable tradeoff.

Pennsylvania lawmakers have begun to take reasonable, measured steps to protect young people from emerging AI harms. These efforts do not seek to stymie innovation; rather, they recognize that AI holds long-term promise, yet current deployment is exposing children to serious risks. Stripping states of the ability to respond to those risks ignores the realities that families and educators are confronting.

Those harms are no longer theoretical. Young people are increasingly turning to AI chatbots for emotional support, sometimes during moments of deep vulnerability.

Research by Common Sense Media has found that popular AI chatbots frequently fail to recognize or appropriately respond to signs of anxiety, depression, or suicidal ideation in teens, sometimes offering unsafe or misleading guidance instead. Mental health experts warn these systems lack the judgment, accountability, and clinical training required to support a child in crisis, yet they are available around the clock and often framed as trusted companions.

AI is being used to generate child sexual abuse material, accelerating and scaling a form of exploitation that law enforcement struggles to combat. We have seen cases in Radnor Township, where high school students were depicted in an inappropriate AI-generated video, causing long-term psychological harm and trauma.

Administrators at Lancaster Country Day School were forced to resign after two male students created and shared hundreds of altered images and videos that depicted dozens of their classmates in sexually explicit situations. The case led a bipartisan group of state senators to amend Pennsylvania’s Child Protective Services Law to require that mandated reporters immediately report incidents in which children share intimate or explicit imagery of other children, including AI-generated deepfakes, to authorities.

Even products marketed directly to families are raising alarms. Reviews of AI-powered toys have found inadequate safety guardrails, with some toys sharing inappropriate and explicit information with children. Parents reasonably assume that products designed for kids meet basic safety standards, but AI-powered toys are proving that assumption increasingly unreliable.

These incidents reveal a troubling pattern: the less oversight that is imposed, the more opportunities there are for harm, particularly for children who lack the power or capacity to protect themselves.

It’s a story that keeps getting rewritten. Social media platforms were unleashed with enormous optimism, but little regulation.

Wary of slowing innovation, policymakers hesitated to rein in social media conglomerates, leading to the ongoing youth mental health crisis that experts continue to link to excessive screen time, algorithmic amplification, and online harassment. Rates of anxiety, depression, and suicide attempts among young people climbed while accountability lagged behind.

AI is evolving at a faster rate than social media. Its reach is broader, its outputs are more convincing, and its ability to influence behavior is more opaque. Waiting for widespread damage before acting would repeat one of the digital age’s most costly policy failures.

Pennsylvania’s parents and children deserve better. We deserve transparency about how these systems are trained, what data they rely on, and how they are designed to interact with children, if at all. States must retain the authority to enact clear standards, enforce them, and impose meaningful consequences when companies fail to prevent harm, particularly when federal safeguards are insufficient or slow to materialize.

Congress must act swiftly to protect states’ rights to regulate AI and to establish baseline federal standards that prioritize our children’s mental health and safety. Innovation and responsibility are not mutually exclusive: responsible regulation enables new technologies to earn public trust and deliver long-term benefits.

The wellbeing of our children must come before corporate convenience or unchecked technological ambition. Progress that comes at the expense of young people’s mental health is not true progress. Protecting our children is a moral obligation, and one we cannot afford to ignore.

Justin Donofrio is director of prevention programs for Pa Family Support Alliance.

From PennLive, 1/28/2026
