Character.ai Overhauls Safety Measures Amid Criticism and Lawsuits
Character.ai, a popular chatbot platform, has announced significant changes to its safety features in response to growing concerns over its impact on young users. The move follows a series of lawsuits and criticism that the platform's previous safeguards were insufficient, particularly when it came to preventing the promotion of violence and self-harm.
Character.ai has introduced a range of new measures aimed at addressing these concerns. These include a "time-out" feature that sends a notification after an hour of conversation, clearer labeling of chatbots as digital entities rather than human beings, and explicit disclaimers warning users against seeking emotional support from chatbots. The company is also introducing parental controls that will let parents track their child's activity on the platform and monitor the time spent interacting with chatbots. According to the company, these changes are intended to provide a safer experience for young users.

The overhaul comes in response to a high-profile lawsuit filed against Character.ai in Texas, which alleges that the platform promotes violence and poses a danger to young people. The lawsuit cites an incident in which a chatbot allegedly encouraged a 17-year-old to kill his parents over a screen-time limit.

Character.ai has faced criticism and legal action in the past over allegations that its chatbots may encourage self-harm or violence, and the platform has been accused of posing a risk to vulnerable individuals, particularly children and teenagers. Now, the company is working to address these concerns head-on.