Character.AI to shut down chats for teens

In the wake of safety concerns, the company's CEO called the move "bold."
By Rebecca Ruiz on October 29, 2025
Character.AI says under-18 users will no longer be able to talk with chatbots. Credit: Joseph Maldonado/Mashable/Getty Images

Character.AI, a popular chatbot platform where users role-play with different personas, will no longer permit under-18 account holders to have open-ended conversations with chatbots, the company announced Wednesday. It will also begin relying on age assurance techniques to ensure that minors aren't able to open adult accounts.

The dramatic shift comes just six weeks after Character.AI was sued again in federal court by the Social Media Victims Law Center, which is representing multiple parents of teens who died by suicide or allegedly experienced severe harm, including sexual abuse. The parents claim their children's use of the platform was responsible for the harm. In October 2024, Megan Garcia filed a wrongful death suit seeking to hold the company responsible for the suicide of her son, arguing that its product is dangerously defective. She is represented by the Social Media Victims Law Center and the Tech Justice Law Project.

Online safety advocates recently declared Character.AI unsafe for teens after they tested the platform this spring and logged hundreds of harmful interactions, including violence and sexual exploitation.


As it faced legal pressure in the last year, Character.AI implemented parental controls and content filters in an effort to improve safety for teens.

In an interview with Mashable, Character.AI's CEO Karandeep Anand described the new policy as "bold" and denied that curtailing open-ended chatbot conversations with teens was a response to specific safety concerns.

Instead, Anand framed the decision as "the right thing to do" in light of broader unanswered questions about the long-term effects of chatbot engagement on teens. Anand referenced OpenAI's recent acknowledgement, in the wake of a teen user's suicide, that lengthy conversations can become unpredictable.

Anand cast Character.AI's new policy as standard-setting: "Hopefully it sets everyone up on a path where AI can continue being safe for everyone."

He added that the company's decision won't change, regardless of user backlash.

Matthew P. Bergman, Garcia's co-counsel in her wrongful death lawsuit against Character.AI, told Mashable in a statement that the company's announcement marked a "significant step toward creating a safer online environment for children."

He credited Garcia and other parents for coming forward to hold the company accountable. Though he commended Character.AI for shutting down teen chats, Bergman said the decision would not affect ongoing litigation against the company.

Meetali Jain, who also represents Garcia, said in a statement that she welcomed the new policy as a "good first step" toward ensuring that Character.AI is safer. Yet she added that the pivot reflected a "classic move in tech industry's playbook: move fast, launch a product globally, break minds, and then make minimal product changes after harming scores of young people."

Jain noted that Character.AI has yet to address the "possible psychological impact of suddenly disabling access to young users, given the emotional dependencies that have been created."

What will Character.AI look like for teens now?

In a blog post announcing the new policy, Character.AI apologized to its teen users.

"We do not take this step of removing open-ended Character chat lightly — but we do think that it's the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology," the blog post said.

Currently, users ages 13 to 17 can message with chatbots on the platform. That feature will cease to exist no later than November 25. Until then, accounts registered to minors will experience time limits starting at two hours per day. That limit will decrease as the transition away from open-ended chats gets closer.

Under-18 Character.AI users will see notifications informing them of impending changes to the platform. Credit: Courtesy of Character.AI

Even though open-ended chats will disappear, teens' chat histories with individual chatbots will remain intact. Anand said users can draw on that material to generate short audio and video stories with their favorite chatbots. In the next few months, Character.AI will also explore new features like gaming. Anand believes an emphasis on "AI entertainment" without open-ended chat will satisfy teens' creative interest in the platform.

"They're coming to role-play, and they're coming to get entertained," Anand said.

He was insistent that sensitive or prohibited content in existing chat histories that filters may not have previously detected, such as violence or sex, would not find its way into the new audio or video stories.

A Character.AI spokesperson told Mashable that the company's trust and safety team reviewed the findings of a report co-published in September by the Heat Initiative documenting harmful chatbot exchanges with test accounts registered to minors. The team concluded that some conversations violated the platform's content guidelines while others did not. It also tried to replicate the report's findings. 

"Based on these results, we refined some of our classifiers, in line with our goal for users to have a safe and engaging experience on our platform," the spokesperson said.

Sarah Gardner, CEO of the Heat Initiative, told Mashable that the nonprofit organization would be paying close attention to the implementation of Character.AI's new policies to ensure they're not "just another round of child safety theater."

While she described the measures as a "positive sign," she argued that the announcement "is also an admission that Character AI's products have been inherently unsafe for young users from the beginning, and that their previous safety rollouts have been ineffective in protecting children from harm."

Character.AI will begin implementing age assurance immediately. It'll take a month to go into effect and will have multiple layers. Anand said the company is building its own assurance models in-house but that it will partner with a third-party company on the technology.

It will also use relevant data and signals, such as whether a user has a verified over-18 account on another platform, to accurately detect the age of new and existing users. Finally, if a user wants to challenge Character.AI's age determination, they'll have the opportunity to provide verification through a third party, which will handle sensitive documents and data, including state-issued identification.

As part of the new policies, Character.AI is also establishing and funding an independent non-profit called the AI Safety Lab. The lab will focus on "novel safety techniques."

"[W]e want to bring in the industry experts and other partners to keep making sure that AI continues to remain safe, especially in the realm of AI entertainment," Anand said.

UPDATE: Oct. 29, 2025, 10:12 a.m. PDT: This story has been updated to include comments from legal counsel and safety experts on Character.AI's new policies.

Rebecca Ruiz
Senior Reporter

Rebecca Ruiz is a Senior Reporter at Mashable. She frequently covers mental health, digital culture, and technology. Her areas of expertise include suicide prevention, screen use and mental health, parenting, youth well-being, and meditation and mindfulness. Rebecca's experience prior to Mashable includes working as a staff writer, reporter, and editor at NBC News Digital and as a staff writer at Forbes. Rebecca has a B.A. from Sarah Lawrence College and a master's degree from U.C. Berkeley's Graduate School of Journalism.
