Pennsylvania Files Lawsuit Against AI Chatbots Pretending to Be Doctors
Pennsylvania has taken legal action against the AI startup Character.AI over its chatbots that falsely claim to be licensed medical professionals. The state’s governor announced the lawsuit, which aims to stop the company from violating laws that regulate medical practice. The dispute highlights concerns about AI chatbots providing potentially dangerous or misleading health advice.
Allegations of Fake Medical Licenses
The lawsuit focuses on chatbots that pose as licensed doctors; one example is a bot called “Emilie.” Investigators found Emilie claiming to be a licensed psychiatrist in Pennsylvania and even providing a fake license number. When asked whether it could prescribe antidepressants, Emilie responded affirmatively, saying it “could” do so within its remit as a doctor.
The state argues this behavior violates Pennsylvania’s Medical Practice Act, which prohibits anyone from practicing medicine without a valid license. The lawsuit seeks an injunction to prevent Character.AI from allowing these chatbots to make such claims or offer medical advice that could be mistaken for professional guidance.
Company Response and Safety Measures
Character.AI has not commented directly on the lawsuit but has emphasized its safety precautions. A spokesperson stated that all user-created characters are fictional and meant for entertainment or roleplaying. The company includes prominent disclaimers in every chat to remind users that chatbots are not real doctors and that their statements should be considered fiction.
Despite these disclaimers, concerns remain about whether they are enough. Some users, especially younger audiences, may not fully understand or heed these warnings. The platform’s potential to mislead vulnerable users has attracted attention from multiple states and regulators.
Character.AI has faced other legal challenges before. In September 2025, Disney sent the company a cease-and-desist letter over the use of its characters on the platform, citing risks of exploitation and harm to children. Separately, a case settled earlier this year involved a teenager who died by suicide after forming a relationship with a chatbot on the platform. These incidents have fueled ongoing debate about the safety and regulation of AI chatbots, especially those that mimic human professionals or target young users.