Texas Attorney General Ken Paxton on Thursday launched an investigation into Character.AI and 14 other technology platforms over children’s privacy and safety concerns. The investigation will evaluate whether Character.AI — and other platforms popular with young people, including Reddit, Instagram, and Discord — comply with Texas child privacy and safety laws.
Paxton, who is often tough on tech companies, will look into whether these platforms have complied with two Texas laws: the Securing Children Online through Parental Empowerment Act, or SCOPE Act, and the Texas Data Privacy and Security Act, or DPSA.
These laws require platforms to provide parents with tools to manage the privacy settings of their children’s accounts, and hold tech companies to strict consent requirements when collecting data on minors. Paxton claims that these two laws extend to how minors interact with AI-powered chatbots.
“These investigations are a critical step toward ensuring that social media and AI companies comply with our laws designed to protect children from exploitation and harm,” Paxton said in a press release.
Character.AI, which lets users create AI chatbot characters they can text and chat with, has recently become embroiled in a number of lawsuits related to child safety. The company’s AI chatbots quickly caught on with younger users, but several parents have alleged in lawsuits that Character.AI’s chatbots made inappropriate and disturbing comments to their children.
One case in Florida alleges that a 14-year-old boy became romantically involved with an AI chatbot and told it about his suicidal thoughts in the days before his suicide. In another case, out of Texas, a Character.AI chatbot allegedly suggested that an autistic teen try to poison his family. A mother in the Texas case also alleges that one of Character.AI’s chatbots subjected her 11-year-old daughter to sexual content for two years.
“We are currently reviewing the Attorney General’s announcement. As a company, we take the safety of our users very seriously,” a Character.AI spokesperson said in a statement to TechCrunch. “We welcome working with regulators, and we recently announced that we are launching some of the features noted in the release, including parental controls.”
Character.AI on Thursday rolled out new safety features aimed at protecting teens, saying these updates will limit the ability of its chatbots to start romantic conversations with minors. The company also began training a new model specifically for teen users last month — and one day hopes that adults will use one model on its platform, while minors will use another.
These are just the latest safety updates announced by Character.AI. In the same week that the Florida lawsuit became public, the company said it was expanding its trust and safety team, and it recently appointed a new head of that unit.
Predictably, issues with AI companion platforms are emerging as they grow in popularity. Last year, Andreessen Horowitz (a16z) said in a blog post that it saw AI companionship as an undervalued corner of the consumer internet in which it would invest more. A16z is an investor in Character.AI and continues to back other AI companion startups, most recently funding a company whose founder wants to recreate the technology from the movie “Her.”
Reddit, Meta, and Discord did not immediately respond to requests for comment.