Georgia Just Signed the AI Chatbot Disclosure Law — 78 Bills, 27 States, One Compliance Crisis
April 6, 2026 · 9 min read · By Happycapy Guide
- Georgia SB 540 was signed into law on April 6, 2026 — chatbots must disclose every 3 hours for adults, every 1 hour for minors
- 78 chatbot-specific bills are now active across 27 states; over 300 broader AI bills across 38 states
- California, Oregon, Washington, and New York have the strictest requirements — penalties up to $25,000 per violation
- The Trump administration's executive order sought to block state AI laws — but it explicitly exempts child safety, the core of most bills
- Businesses deploying any conversational AI have 90–180 days to comply before most laws take effect
Georgia's legislature adjourns today. Governor Brian Kemp is expected to sign SB 540 before the session closes, making Georgia the latest state to enact a binding chatbot disclosure law.
What Georgia SB 540 Actually Requires
Georgia Senate Bill 540, sponsored by Senator Jason Anavitarte, passed both chambers with bipartisan support and now awaits the Governor's signature as the legislature adjourns April 6.
The bill's requirements are specific and operational:
- Disclosure interval — adults: Any conversational AI must remind users it is not human at least every 3 hours during an active session
- Disclosure interval — minors: Every 1 hour for users under 18
- Child content guardrails: Operators must implement technical controls preventing chatbots from sharing sexually explicit content with minors
- Crisis protocols: Operators must establish documented response protocols for detecting and responding to statements about suicidal ideation or self-harm
- Enforcement: The state Attorney General has authority to impose fines on non-compliant operators
The law is structured to take effect in summer 2027, giving businesses approximately 15 months to comply. This delay was deliberate — Georgia lawmakers wanted time for potential federal action before the law kicks in.
The National Wave: 78 Bills, 27 States
Georgia is far from alone. Legal analysts at the Future of Privacy Forum (FPF), which maintains a live tracker, counted 78 chatbot-specific bills across 27 states as of February 2026 — a number that continues to grow as more states advance proposals through committee.
The legislative surge was triggered by a wave of high-profile tragedies — teen suicides linked to companion chatbots like Character.AI — combined with explosive growth in conversational AI deployment. The bills share a common structure, largely modeled on Oregon's SB 1546 (passed February 2026), but differ in three critical ways: definition scope (who counts as a "chatbot"), minor thresholds (age cutoffs vary from 13 to 18), and penalty levels.
State-by-State Compliance Snapshot
| State | Key Law / Bill | Disclosure Rule | Max Penalty | Status |
|---|---|---|---|---|
| California | SB 243 (2025) | Every 3 hrs; minors every 1 hr | $2,500/violation/day | In effect Jan 2026 |
| Oregon | SB 1546 (2026) | When user could reasonably think it's human | $25,000/intentional violation | In effect Feb 2026 |
| Washington | HB 2225 (2026) | At set intervals; stricter for minors | $10,000/violation | Signed March 2026 |
| Georgia | SB 540 (2026) | Every 3 hrs adults / 1 hr minors | AG-determined fines | Signing April 6, 2026 |
| New York | S-3008C (2026) | Prohibits simulating emotional relationships with minors | $10,000/day | Advancing in Senate |
| Utah | HB 1192 (2026) | Disclosure + minor protections | $5,000/violation | Signed Feb 2026 |
| Texas | HB 3884 (2026) | Disclosure + age verification | $15,000/violation | In committee |
| Colorado | HB 1263 (2026) | Companion chatbot disclosure | $20,000/violation | Advancing |
| Idaho | SB 1297 (2026) | Conversational AI Safety Act | TBD | Advancing |
| Florida | S 482 (2026) | AI Bill of Rights — broad scope | TBD | Senate committee cleared |
The Federal vs. State Showdown
The Trump administration has pushed back against the state-level wave. In December 2025, President Trump signed an executive order creating a DOJ AI Litigation Task Force specifically to challenge state AI laws that "burden interstate commerce or impose onerous regulatory requirements."
But there is a critical carve-out: the executive order explicitly exempts child safety protections from federal preemption. Since the core provisions of most state chatbot bills — minor disclosure reminders, content guardrails, self-harm protocols — are child safety measures, the federal challenge has limited reach against the laws that matter most.
A bipartisan coalition of 36 state attorneys general has formally filed opposition to federal preemption, signaling a prolonged legal battle. For businesses, the practical outcome is clear: these state laws will be enforced regardless of the federal posture, at least for child-facing applications.
Three Compliance Pillars Every Business Must Address
1. AI Disclosure
Every state law requires operators to disclose that the user is interacting with AI — not a human. The trigger varies. Oregon requires disclosure "when a reasonable person would believe the chatbot is human." California and Georgia require timed reminders at fixed intervals regardless of context. Companies need to audit every customer-facing AI touchpoint and build disclosure triggers into the session management layer.
2. Minor-Specific Protections
This is where penalties are highest and requirements most complex. Beyond disclosure intervals, businesses must implement age verification (20+ states have advancing bills), content filtering for sexually explicit material, and crisis referral protocols that route users showing self-harm language to human support or crisis resources. Age cutoffs vary — Colorado starts at 13 while Oregon uses 18.
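A crisis-referral protocol needs a detection step and a routing step. The sketch below uses a naive keyword screen purely for illustration — the pattern list, response text, and function names are assumptions, and a production system would use a trained classifier plus human escalation, not regexes:

```python
import re

# Illustrative patterns only; a real system would use a trained
# self-harm classifier, not a hand-written regex list.
CRISIS_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"\bkill myself\b", r"\bsuicid\w*", r"\bself[- ]harm\b",
              r"\bend my life\b", r"\bhurt myself\b")
]

CRISIS_RESPONSE = (
    "It sounds like you may be going through something serious. "
    "You can call or text the 988 Suicide & Crisis Lifeline at any time."
)

def screen_message(text: str) -> bool:
    """Flag a user message that matches any self-harm pattern."""
    return any(p.search(text) for p in CRISIS_PATTERNS)

def route(text: str) -> str:
    # A real deployment would also escalate to human support and log the
    # event, since the state laws require a *documented* crisis protocol.
    return CRISIS_RESPONSE if screen_message(text) else "handle_normally"
```

The documentation point matters as much as the detection: Georgia SB 540 and its peers require written response protocols, so each flagged session should leave an audit record.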
3. Data and Wiretap Liability
A separate surge in litigation targets chatbots that record conversations without consent, under the Electronic Communications Privacy Act (ECPA) and state wiretap laws — particularly California's CIPA. If your chatbot logs any conversation, you need explicit consent captured at session start, not buried in a terms-of-service page. Plaintiff attorneys are already filing class actions.
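Gating the transcript log on an explicit, versioned consent record is the core of the defense here. This is a minimal sketch under assumed names (`TranscriptLogger`, `ConsentRecord`, the notice text): log nothing unless a consent record for that session exists and was granted.

```python
import time
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    session_id: str
    granted: bool
    timestamp: float
    notice_version: str  # which consent text the user actually saw

class TranscriptLogger:
    """Stores conversation turns only after explicit session-start consent."""

    CONSENT_NOTICE = ("This chat may be recorded for quality and analytics. "
                      "Do you consent?")

    def __init__(self) -> None:
        self._consents: dict[str, ConsentRecord] = {}
        self._log: list[tuple[str, str]] = []

    def record_consent(self, session_id: str, granted: bool,
                       notice_version: str = "2026-04") -> None:
        self._consents[session_id] = ConsentRecord(
            session_id, granted, time.time(), notice_version)

    def log_turn(self, session_id: str, text: str) -> bool:
        consent = self._consents.get(session_id)
        if consent is None or not consent.granted:
            return False  # no affirmative consent on file: drop, don't store
        self._log.append((session_id, text))
        return True
```

Storing the notice version alongside the grant matters: in a wiretap claim, you need to show which disclosure language the user saw, not just that a checkbox was clicked.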
Happycapy gives your team a personal AI agent that handles research, drafting, and workflows — with audit trails and session controls built in. Start free, no credit card required.
Try Happycapy Free →
Immediate Compliance Steps
AI chatbot operators in the U.S. must complete six compliance steps before most state disclosure laws take effect in 2026–2027. California (in effect January 2026), Oregon (February 2026), and Washington (March 2026) are already enforceable; Georgia and New York are imminent.
- Audit chatbot definitions. Determine whether your AI falls under "companion chatbot" definitions in your key states. California's definition covers any AI designed to meet social or emotional needs — much broader than most assume.
- Build timed disclosure into session management. Add session timer logic that surfaces "You are speaking with an AI assistant" banners every 3 hours for adults and every 1 hour for minors.
- Implement age verification or age-gating. Age verification is advancing in 20+ states. At minimum, add a "confirm you are 18+" gate or integrate with an age verification service.
- Add crisis detection to your content pipeline. Integrate a content classifier that flags self-harm and suicidal ideation language, then routes those sessions to human support or a crisis hotline redirect.
- Capture consent for conversation recording. Add explicit consent capture at session start for any chatbot that logs conversations, even for internal analytics.
- Document your compliance posture. California and Oregon enforcement is complaint-driven; a documented compliance program significantly reduces enforcement risk and penalty severity.
What This Means for AI Tool Buyers
AI chatbot compliance features are now a mandatory procurement criterion for any platform deployed in the U.S. in 2026. Five states already have enforceable disclosure laws, with penalties up to $25,000 per violation. Before deployment, procurement teams should put these questions to every vendor:
- Does the platform support configurable AI disclosure banners at timed intervals?
- Does it have built-in content filtering for CSAM and self-harm language?
- Can it integrate with age verification services?
- Does it log conversations, and how does it handle consent capture?
- What is the vendor's documented compliance roadmap for the emerging state patchwork?
Happycapy's personal AI agent is designed for professionals — with session controls, exportable logs, and privacy-first architecture. Pro starts at $17/mo.
See Happycapy Plans →
Frequently Asked Questions
What does Georgia SB 540 require?
Georgia SB 540 requires AI chatbot operators to disclose that users are talking to a non-human at least every 3 hours for adults and every 1 hour for minors. It bans sharing sexually explicit content with children and mandates crisis protocols for self-harm language. The state Attorney General enforces with fines. The law takes effect summer 2027.
How many states have chatbot laws?
As of April 2026, 78 chatbot-specific bills are active in 27 states. California, Oregon, and Washington already have laws in effect. Over 300 broader AI bills are active in 38 state legislatures.
Will the federal executive order block these state laws?
No — at least not for child safety. The December 2025 executive order created a DOJ task force to challenge state AI laws, but explicitly exempts child safety provisions. A bipartisan coalition of 36 state AGs is fighting federal preemption. Businesses should treat state laws as enforceable.
What are the penalties for non-compliance?
Penalties range from $2,500/violation/day in California to $25,000/intentional violation in Oregon. New York allows $10,000/day for child-related violations. Wiretap class actions are a separate exposure track.
Which chatbots are covered?
Any conversational AI that could be mistaken for human — customer service bots, companion AI, embedded AI assistants, even AI phone agents. California's definition is especially broad: any AI designed to meet social or emotional needs.
Sources
Future of Privacy Forum — 2026 Chatbot Legislation Tracker (fpf.org, April 2026)
AI2Work — "78 AI Chatbot Safety Bills Across 27 States" (February 2026)
Transparency Coalition — AI Legislative Update April 3, 2026
Atlanta Journal-Constitution — "Georgia lawmakers eye AI laws amid warnings from the White House" (March 2026)
Transformer News — "Six states, one playbook: the chatbot bills raising red flags" (March 2026)
Trump Executive Order on AI Governance (December 2025)