Chatbot As Your Therapist: States Rush to Regulate AI Therapy Apps Before They Cause Harm

Shevane Subramaniam

Artificial intelligence is moving into one of the most human aspects of health care: emotional and mental support. Millions of people now turn to chatbots for reassurance, late-night conversations, or guidance they might never seek from another person. The shift isn’t temporary; it is becoming part of daily life, even as regulation struggles to keep up.

State Laws Are Struggling to Catch Up

This year, several states passed laws targeting AI therapy apps, but the results have been anything but uniform. Illinois and Nevada enacted outright bans on AI tools that claim to provide mental health treatment. Utah took a lighter-touch route, requiring chatbots to disclose that they’re not human and to protect user data. Other states, like New Jersey and California, are still figuring out what regulation should look like.

However, these laws don’t cover general-purpose chatbots like ChatGPT, which many people quietly use for emotional support. The result is a patchwork of rules that developers don’t fully understand and that users are often unaware of. Some apps have blocked access in regulated states, while others haven’t changed anything. A few have simply rebranded, swapping “AI therapist” for “self-care assistant” to avoid crossing legal lines.

Not All AI Apps Are the Same

Part of the challenge is that “AI mental health tools” is a broad category. Some apps offer only companionship; others provide journaling prompts, crisis features, or exercises inspired by cognitive behavioral therapy (CBT). A few, like the Dartmouth-developed Therabot, are being studied in clinical research, and early results are promising when the chatbot’s replies are grounded in clinical evidence and supervised by clinicians.

Most commercial apps focus more on keeping people engaged than on keeping them safe. They tend to agree with users instead of challenging harmful thoughts, and they aren’t designed to step in during a crisis. Even developers admit these tools were never meant to handle suicidal ideation or serious mental illness, yet people often turn to them in exactly those moments.

The Human Gap AI Is Trying to Fill

AI is growing in mental health care because the system is already strained. The U.S. has too few therapists, long waitlists, and care that many people can’t afford. For some, a chatbot feels like the only available option. Still, as one advocate for Illinois’ law put it, telling someone with serious mental health needs, “There’s a workforce shortage, so here’s a bot,” isn’t fair or safe.

The Need for Oversight

The American Psychological Association notes that AI has the potential to help people earlier, before they spiral into crisis, if the tools are based on real evidence and supervised responsibly. Federal agencies are beginning to step in: the FTC has opened inquiries into major AI companies, and the FDA is preparing to evaluate AI-enabled mental health devices. The real question now is how to craft rules that protect users while still allowing careful, responsible innovation.

What’s Next

Mental health care is personal and sensitive. If AI is going to be part of it, we need clear guardrails, including transparency, clinical oversight, and a way for developers to prove their tools are safe. Right now, the system is fragmented: users don’t know what to trust, and developers are making up the rules as they go. AI may eventually help people before they reach a breaking point, but that will require careful policy and an understanding that technology, no matter how advanced, can’t replace empathy, clinical judgment, or the safety of real human care.

Copy editor: Lydia Kim

Photography source: Lakshmi Subramanian (Canva)