Writer: Alexa Roberts
Article Editor: Brooke Cosentino
I. Introduction to the Intersection of Artificial Intelligence and Psychology
Artificial intelligence (AI) is expanding into a multitude of industries to provide a wide variety of services to consumers. One high-demand area into which AI has begun to expand is mental health services, a trend that many psychologists worry endangers the public.1 Much of this expansion takes the form of chatbots that rely on generative AI, which generates original content through machine learning processes.2 Some of these chatbots, such as those found on the artificial intelligence platform Character.AI (C.AI), are not known to be grounded in scientific evidence.3 The development of such artificial intelligence models, including C.AI, alongside a growing demand for accessible mental health resources, has given rise to a legally questionable practice in which generative AI models pose as psychological professionals without being subject to the same regulations and standards. This practice leaves consumers at risk and must be addressed.
II. Current Conflicts: A Brief Overview of Character.AI
Character.AI (C.AI) is one artificial intelligence platform that boasts a diverse array of over ten million “characters” with which a platform user may choose to chat. These characters range from popular fictional figures, such as SpongeBob SquarePants, to more generic personas, like “Grandparents.” Most notably, C.AI features characters bearing titles associated with psychological treatment professionals, such as “Psychologist,” “Therapist,” and “Life Coach.”4
C.AI is just one of many emergent platforms marketing generative AI as a consumer service. With the explosion of AI platforms for consumer use, many more platforms following a model similar to C.AI’s may emerge, broadening the legal implications of such technology.
III. Ongoing Lawsuit Against Character.AI
C.AI has recently been at the center of legal conflict, specifically regarding the use of AI among younger audiences and AI’s psychological impact. Legal attention and efforts to address this issue were prompted by the death of Sewell Setzer III, who took his own life at age fourteen in February of 2024.5 In the complaint in Garcia v. Character Techs., Inc., Sewell’s mother, Megan Garcia, brought a case against C.AI, seeking to “hold Defendants Character.AI, Shazeer, De Frietas (collectively, ‘C.AI’), and Google responsible for the death of 14-year-old Sewell Setzer III (‘Sewell’) through their generative AI product Character.AI (‘C.AI’).”6 One point of concern that the federal complaint brings to light is the interaction between Sewell and two characters named “Are You Feeling Lonely” and “Therapist,” as well as one character specifically titled “Shane CBA.”7
A section of the complaint, subtitled “C.AI engages in the practice of psychotherapy without a license,” calls attention to C.AI’s use of “AI bots that purport to be real mental health professionals.”8 Sewell had multiple interactions with “Shane CBA,” a character that purported to be a licensed CBT (cognitive behavioral therapy) therapist with years of experience in the field.9 These conversations allegedly revealed symptoms of mental health harms that a real licensed therapist might have identified and reported.10
Beyond Sewell’s experience, further investigation confirmed that similar interactions continue with other self-identified minors as C.AI characters title themselves as psychotherapeutic professionals.11 Even though these characters are AI bots rather than human beings, this behavior may be interpreted as the unauthorized or unlicensed practice of psychology, the very claim the complaint levies against the defendants.12
IV. Existing Legal Landscape: Florida Statutes
In Florida, existing statutes may provide guidance on the legal characterization of C.AI’s practices. Specific language in the Florida Statutes defines what constitutes the unlicensed practice of psychology (UPP).13 The statute prohibits UPP, stating that “No person shall hold herself or himself out by any professional title, name, or description incorporating the word ‘psychologist’ unless such person holds a valid, active license as a psychologist under this chapter.”14 By definition, Florida describes a psychologist as someone holding a valid license pursuant to state-specific requirements for certification and education.15
Thus, the manner in which characters on C.AI’s platform engage with users appears to constitute the unauthorized practice of psychology under Florida law. C.AI is not capable of achieving the certifications, education, and licensure required of psychological professionals. Specific titles and phrases used on the platform, as seen in Garcia v. Character Techs., Inc., appear to violate the Florida statute. One instance in the complaint highlights perhaps the most glaring violation, referencing reports of teens interacting with characters specifically titled “Psychologist.”16 That use of the title “Psychologist” appears to directly violate the statute’s language. Additionally, Sewell’s interactions with “Shane CBA” appear to violate Florida statutes governing the unlicensed practice of psychotherapy and mental health counseling.17
V. Recent State-Level Developments
Across the United States, there have been recent legislative efforts in Illinois, Utah, and Nevada to combat the rising issue of AI posing as a mental health professional. These new pieces of legislation include specific language addressing AI, such as Illinois’ House Bill 1806, which states that it is “intended to protect consumers from unlicensed or unqualified providers, including unregulated artificial intelligence systems.”18 Like Illinois, Nevada has banned the use of AI for mental health treatment.19 While Utah does not completely ban mental health services through AI, its law regulates AI mental health chatbots by “requiring them to protect users’ health information and to clearly disclose that the chatbot isn’t human.”20 Through state-level legislation, these developments address the rising issue of AI posing as mental health professionals without the regulations and licensing that human providers must abide by.
VI. Potential Solutions
Given how rapidly this issue is growing, legal solutions must be pursued to prevent further complications and protect consumers. As previously mentioned, some states have already begun to address the issue through legislation, and others could emulate those laws to curb this legally questionable practice by AI companies. Several states, including California, Pennsylvania, and New Jersey, have already begun to follow suit by considering regulations on AI therapy.21
On a federal level, the Federal Trade Commission (FTC) could play a role in investigating AI technologies that offer bots or services claiming to provide professional mental health advice. Guidelines established by the FTC could also help mitigate the issue and establish sound protections for consumers. Vaile Wright, who oversees health care innovation at the American Psychological Association, suggests that “Federal agencies could consider restrictions on how chatbots are marketed, limit addictive practices, require disclosures to users that they are not medical providers, require companies to track and report suicidal thoughts, and offer legal protections for people who report bad practices by companies.”22 Such measures may offer consumers practical protection when interacting with mental health AI services, or may ban those services outright to prevent the issue altogether.
VII. Conclusion
This issue is not unique to C.AI, which is only one of many AI services offering characters or products posing as professional mental health resources. The gap in legislation and policies governing the use of AI in mental health services must continue to be addressed. Although efforts are underway at the state level, this issue affects consumers nationally and globally. Artificial intelligence posing as a licensed psychotherapeutic professional, without any of the regulations that govern real mental health professionals, puts consumers at risk of false information and misleading outcomes.
1. Zara Abrams, Using Generic AI Chatbots for Mental Health Support: A Dangerous Trend, Am. Psych. Ass’n (Mar. 2025), https://www.apaservices.org/practice/business/technology/artificial-intelligence-chatbots-therapists.
2. Id.
3. Id.
4. Complaint at 56–57, Garcia v. Character Techs., Inc., 785 F. Supp. 3d 1157, No. 6:24-CV-01903 (M.D. Fla. 2024).
5. Id. at 8–9.
6. Id. at 2.
7. Id. at 58.
8. Id. at 57.
9. Id. at 90.
10. Id. at 58.
11. Id. at 59.
12. Id. at 78.
13. Fla. Stat. § 490.012 (2025).
14. Id.
15. Fla. Stat. § 490.003 (2025).
16. Complaint, Garcia, 785 F. Supp. 3d 1157 at 57.
17. Fla. Stat. § 490.012 (2025).
18. 225 Ill. Comp. Stat. 155/1 (2025) (commonly known as the Wellness and Oversight for Psychological Resources Act).
19. Devi Shastri, Regulators Struggle to Keep Up with the Fast-Moving and Complicated Landscape of AI Therapy Apps, AP News (Sept. 2025), https://apnews.com/article/ai-therapy-ban-illinois-therabot-dfc5906b36fdd1fe8e8dbdb4970a45a7.
20. Id.
21. Id.
22. Id.
