Texas AG Probes Meta, Character.AI Over Mental Health Claims

An investigation by the Texas Attorney General targets AI platforms for allegedly misleading children about mental health support.

Texas Attorney General Ken Paxton has launched an investigation into Meta AI Studio and Character.AI, accusing both platforms of deceptive trade practices. The probe focuses on claims that these AI tools misleadingly market themselves as legitimate mental health resources, particularly to vulnerable young users, potentially offering generic responses disguised as therapeutic advice.

By Sarah Kline

August 19, 2025

4 min read

Why You Care

For content creators, podcasters, and AI enthusiasts, the evolving regulatory landscape around AI is crucial to track. This latest move by the Texas Attorney General highlights a growing concern about how AI models interact with users, especially on sensitive topics like mental health.

What Actually Happened

Texas Attorney General Ken Paxton has initiated an investigation into Meta AI Studio and Character.AI. The core accusation, according to a press release, is that these platforms are “potentially engaging in deceptive trade practices and misleadingly marketing themselves as mental health tools.” The probe centers on the idea that these AI systems may present themselves as sources of emotional support, guiding vulnerable users, particularly children, into believing they are receiving professional mental health care.

Paxton is quoted as stating, "In today’s digital age, we must continue to fight to protect Texas kids from deceptive and exploitative technology." He further elaborated that AI platforms posing as sources of emotional support "can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental health care. In reality, they’re often being fed recycled, generic responses engineered to align with harvested personal data and disguised as therapeutic advice."

Why This Matters to You

This investigation carries significant implications for anyone developing, deploying, or simply using AI-powered tools. For content creators leveraging AI for interactive experiences or conversational agents, it signals a need for real caution about the claims made for an AI's capabilities, particularly in sensitive domains like health or psychological support. If you're building an AI chatbot for your podcast or community, labeling it a 'wellness companion' or 'emotional support bot' could now draw regulatory scrutiny, even if your intentions are benign; what matters is what users perceive they are receiving.

This action also underscores a broader trend: governments are scrutinizing the ethical boundaries of AI, moving beyond data privacy to examine the actual user experience and potential for harm. Developers and platforms will likely face increased pressure to implement clearer disclaimers, stronger age verification, and content filtering for AI interactions, along the lines of the simple pattern sketched below.
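To make that concrete, here is a minimal, purely illustrative sketch of how a developer might attach a disclaimer to chatbot replies that touch on mental-health topics. Nothing here comes from Meta AI Studio, Character.AI, or the Attorney General's office; the keyword list, function names, and wording are hypothetical stand-ins, and a production system would rely on a real classifier and professionally reviewed crisis resources.

```python
# Purely illustrative sketch: one way a chatbot developer might surface a
# disclaimer when a conversation touches on mental-health topics.
# The keyword list is a naive stand-in for whatever classifier a real
# product would use, and the resource text is placeholder wording.

SENSITIVE_TERMS = {"depressed", "anxiety", "self-harm", "therapy", "suicidal"}

DISCLAIMER = (
    "I'm an AI assistant, not a licensed mental health professional. "
    "For real support, please contact a qualified provider or a local crisis line."
)


def needs_disclaimer(user_message: str) -> bool:
    """Return True when the message appears to touch on mental-health topics."""
    text = user_message.lower()
    return any(term in text for term in SENSITIVE_TERMS)


def wrap_response(user_message: str, model_reply: str) -> str:
    """Prepend the disclaimer to the model's reply whenever the topic is sensitive."""
    if needs_disclaimer(user_message):
        return f"{DISCLAIMER}\n\n{model_reply}"
    return model_reply


if __name__ == "__main__":
    reply = wrap_response(
        "I've been feeling really anxious lately, can you help?",
        "Here is a generic model reply.",
    )
    print(reply)
```

The specific keywords matter less than the design choice: the disclaimer is applied automatically in the response path itself, rather than relying on marketing copy or a buried terms-of-service page.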

The Surprising Finding

While the general concern about AI's influence on young users isn't new, the surprising element here is the direct accusation of "deceptive trade practices" specifically tied to mental health claims. This isn't merely about general content moderation or data privacy; it's about the explicit marketing and implied efficacy of AI as a therapeutic tool. The Attorney General's statement that users are "often being fed recycled, generic responses engineered to align with harvested personal data and disguised as therapeutic advice" shows deep skepticism about the AI's actual ability to provide genuine mental health support, framing it as data-driven manipulation rather than a genuinely helpful tool. This shifts the debate from 'is AI good for mental health?' to 'is it deceptive to even suggest AI can provide mental health care?'

What Happens Next

This investigation by the Texas Attorney General is likely just the beginning. We can anticipate increased regulatory pressure on AI companies to be transparent about the limitations of their models, especially concerning sensitive user interactions. Platforms like Meta AI Studio and Character.AI may be compelled to implement more explicit disclaimers, revise their marketing language, or restrict certain functionalities for younger users.

This could lead to a broader industry shift toward clearer ethical guidelines for conversational AI, particularly in areas touching on health, finance, or legal advice. Content creators and developers should prepare for a future where the line between helpful AI and potentially misleading AI is more strictly defined by legal frameworks, affecting how AI tools are designed, marketed, and integrated into user-facing applications. This regulatory scrutiny is a clear signal that the 'move fast and break things' ethos is being met with growing legal and ethical challenges in the AI space.
