US State AI Chatbot Laws 2026: Tennessee Bans AI Therapists — 78 Bills in 27 States
April 3, 2026 · 8 min read · by Connie
TL;DR
Tennessee signed SB 1580 on April 1, 2026 — prohibiting AI from representing itself as a mental health professional ($5,000 per violation). Oregon and Washington have passed broader chatbot safety laws requiring AI disclosure and crisis protocols for minors. 78 chatbot safety bills are active across 27 US states. Colorado's AI Act takes effect June 30, 2026. Businesses must act now.
The US AI regulation landscape shifted materially this week. Tennessee Governor Bill Lee signed SB 1580 on April 1, 2026 — a unanimous law (32-0 Senate, 94-0 House) making it illegal for any AI system to claim it is a qualified mental health professional. The bill landed the same week that Oregon and Washington finalized their own AI chatbot safety laws. All three laws are part of a wave of state-level regulation that has accelerated faster than most companies expected.
Tennessee SB 1580: The AI Therapy Bot Ban
Tennessee's law is the most narrowly targeted of the three. It amends existing state code to prohibit the development or marketing of any AI system that "represents itself as a qualified mental health professional." The penalty is $5,000 per violation under the Tennessee Consumer Protection Act, enforced by the state Attorney General.
The legislative impetus was direct: the 2024 suicide of a 14-year-old who had developed a close relationship with a Character.AI chatbot. That case galvanized state legislators who were concerned that vulnerable minors and adults could be misled about the nature of AI companionship and mental health support.
What the law does not prohibit: AI tools used by licensed human therapists as administrative aids, session note generators, or appointment schedulers. It also does not restrict AI wellness apps that are clearly positioned as non-clinical support tools. The line is specifically drawn at AI that claims — in marketing, UI, or conversation — to be a qualified professional.
Oregon SB 1546 and Washington HB 2225: Broader Chatbot Safety
Oregon and Washington passed more comprehensive chatbot safety frameworks. Both focus on two core requirements: disclosure when a user might reasonably believe they are talking to a human, and specific protections for minors.
| Requirement | Oregon SB 1546 | Washington HB 2225 | Tennessee SB 1580 |
|---|---|---|---|
| AI identity disclosure | Required ✓ | Required ✓ | Not addressed |
| Minor-specific protections | Yes — 3-hr reminders ✓ | Age verification ✓ | Not addressed |
| Crisis/self-harm protocols | Required ✓ | Required ✓ | Not addressed |
| Ban on posing as a professional | Not addressed | Not addressed | Yes — health professionals ✓ |
| Private right of action | Yes — $1,000/violation ⚠ | Yes ⚠ | AG enforcement only |
| Effective date | 2026 (signed) | 2026 (passed) | April 1, 2026 ✓ |
Oregon's $1,000 per-violation private right of action is the provision that most concerns compliance attorneys — it creates exposure not just to state enforcement but to individual lawsuits. Washington's law carries a similar provision. Unlike the Tennessee bill, both are broad enough to apply to any AI companion product operating in those states.
The Broader Wave: 78 Bills in 27 States
Tennessee, Oregon, and Washington are leading a national wave. As of April 2026, legislatures in at least 27 states are considering 78 separate AI chatbot safety bills. The bills cluster around three consistent themes: disclosure, minor protection, and crisis response. States with active legislation include both red and blue states — this is bipartisan regulation driven by child safety concerns, not partisan tech politics.
States with bills currently moving through legislatures include Arizona, Colorado, Georgia, Hawaii, Idaho, Iowa, Kansas, Kentucky, Michigan, Missouri, Nebraska, Oklahoma, and Pennsylvania. Georgia has already sent three AI bills to the governor. Nebraska's chatbot safety bill (LB 1185) is attached to the Agricultural Data Privacy Act and expected to pass.
The convergence is creating a de facto national standard even without federal legislation. Companies that adopt the strictest combined requirements (Oregon's disclosure and crisis protocols, Washington's age verification, and Tennessee's ban on posing as a professional) are effectively compliant everywhere, since each remaining state's requirements are equal to or less demanding.
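The "strictest combined requirements" logic amounts to taking the union across every state you operate in. A minimal sketch, where the state codes and requirement flags are illustrative summaries of this article's comparison table, not statutory text:

```python
# Illustrative compliance matrix: each state maps to the requirement flags
# it imposes, summarizing the article's comparison table (assumption, not
# statutory language).
STATE_REQUIREMENTS = {
    "OR": {"ai_disclosure", "minor_reminders", "crisis_protocols"},
    "WA": {"ai_disclosure", "age_verification", "crisis_protocols"},
    "TN": {"no_professional_impersonation"},
}

def national_baseline(matrix: dict[str, set[str]]) -> set[str]:
    """Union of every state's requirements: a product satisfying all of
    these flags meets each listed state's rules simultaneously."""
    baseline: set[str] = set()
    for reqs in matrix.values():
        baseline |= reqs
    return baseline
```

As new states pass bills, adding a row to the matrix automatically widens the baseline — which is why tracking effective dates per state (see the checklist below) matters more than chasing any single statute.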
Colorado AI Act: The Broadest Law Takes Effect June 30, 2026
While chatbot safety bills address one category of AI, Colorado's AI Act is the most comprehensive state law in the US. It takes effect June 30, 2026 and applies to developers and deployers of high-risk AI systems — those that make consequential decisions in employment, credit, education, housing, or healthcare.
Colorado requires algorithmic impact assessments, transparency disclosures to users who are subject to AI decisions, and bias mitigation measures. Businesses operating in Colorado with high-risk AI deployments have less than 90 days to achieve compliance. Companies deploying healthcare AI or AI-driven hiring and credit tools are directly in scope.
What AI Chatbot Companies Must Do Now
The practical compliance checklist, based on the most stringent state requirements currently enacted:
- Audit all marketing and UI language — remove any language that implies the AI is a licensed professional, therapist, doctor, or financial advisor. This applies to Tennessee immediately.
- Implement AI disclosure — users must be told they are talking to an AI if there is any ambiguity. Build a clear disclosure in the onboarding flow and in the conversation UI.
- Deploy crisis detection protocols — when users express suicidal ideation or self-harm intent, the chatbot must follow a structured response: provide crisis resources, do not encourage the topic, and escalate where possible.
- Add age gating and minor-specific protections — if your product may be used by minors, Oregon requires 3-hour reminders that the AI is not human. Washington requires age verification and parental consent mechanisms.
- Conduct a Colorado AI Act impact assessment if your AI makes consequential decisions in employment, credit, education, housing, or healthcare. Deadline: June 30, 2026.
- Build a state compliance matrix — track which requirements apply in which states, with effective dates and penalty structures. The landscape will continue changing through 2026.
The Federal vs State Tension
A December 2025 executive order exempted state child safety AI protections from federal preemption, effectively endorsing the state-led approach to minor-specific AI rules. Broader transparency requirements face a different situation — federal law can preempt state law in areas of commerce, and tech industry groups are actively lobbying for federal preemption to avoid a patchwork of 50 different standards.
For now, the patchwork is the reality. With 46 state legislatures active in 2026 and no federal AI framework in sight, companies building AI chatbot products for US users need a state-by-state compliance strategy, not a single national standard.
Bottom Line
- Tennessee SB 1580 signed April 1, 2026 — AI cannot claim to be a mental health professional ($5K/violation)
- Oregon SB 1546 + Washington HB 2225 — disclosure, minor protections, crisis protocols ($1K/violation private right of action)
- 78 chatbot safety bills active in 27 states — bipartisan, accelerating
- Colorado AI Act takes effect June 30, 2026 — high-risk AI impact assessments required
- Adopt the strictest combined state requirements (Oregon, Washington, Tennessee) as a national baseline to cover all states