South Korea's AI Act Is Now Being Enforced — What Global Companies Need to Do
April 14, 2026 · 9 min read
TL;DR
- South Korea's AI Act took effect January 22, 2026; active enforcement began Q2 2026
- Extraterritorial: applies to any AI product affecting Korean users above defined thresholds
- Key requirements: AI disclosure labels, risk assessments, Korean-language transparency notices
- High-impact AI systems must register with the Korea Communications Commission (KCC)
- 90 countries now have active AI regulation — the compliance burden is compounding
When most people think about AI regulation, they think of the EU AI Act. But South Korea's AI Framework Act, which took effect January 22, 2026, is now the second-most significant extraterritorial AI law in force, and active enforcement began this quarter. If your product has Korean users, you need to understand what the Act requires.
What the South Korea AI Act Covers
The AI Framework Act (officially the "Act on Promotion of AI Industry and Framework for Establishing Trustworthy AI") establishes a risk-tiered approach to AI governance. It covers AI systems deployed in South Korea, regardless of where the provider is based.
| Risk Tier | Examples | Requirements |
|---|---|---|
| High-impact AI | Credit scoring, hiring screening, medical diagnosis, autonomous vehicles, deepfake generation | Registration + risk assessment + technical standards + ongoing monitoring |
| General-purpose AI | LLM-based chatbots, content generators, AI assistants with 1M+ Korean users | Transparency disclosure + AI-generated content labeling + incident reporting |
| Low-impact AI | Spam filters, recommendation systems, search ranking | Voluntary guidelines only — no mandatory requirements |
The Extraterritorial Trigger
The Act applies extraterritorially under either of these conditions:
- Revenue threshold: Annual Korean market revenue exceeding ₩10 billion (~$7.4M USD)
- User threshold: More than 1 million monthly active users in South Korea
Either condition makes a foreign company subject to the Act's requirements for any AI systems they operate that affect Korean users. This covers ChatGPT, Claude, Gemini, Copilot, and thousands of SaaS products with embedded AI features.
OpenAI, Google, Microsoft, and Anthropic are all directly covered. For smaller AI companies, the 1M MAU threshold is the one to watch — Korea has 52 million people with extremely high smartphone penetration.
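The two triggers can be expressed as a simple coverage check. This is a sketch using the thresholds stated above; the constant and function names are our own, and a real coverage determination belongs with counsel:

```python
# Illustrative sketch of the Act's extraterritorial trigger as described
# above. Thresholds are the ones stated in this article, not legal advice.
KRW_REVENUE_THRESHOLD = 10_000_000_000  # ₩10 billion annual Korean revenue
MAU_THRESHOLD = 1_000_000               # 1M monthly active users in Korea

def is_covered(korean_annual_revenue_krw: int, korean_mau: int) -> bool:
    """Either condition alone brings a foreign provider in scope."""
    return (korean_annual_revenue_krw > KRW_REVENUE_THRESHOLD
            or korean_mau > MAU_THRESHOLD)

# Example: modest Korean revenue, but a large Korean user base
print(is_covered(korean_annual_revenue_krw=2_000_000_000,
                 korean_mau=1_500_000))  # True
```

Note that the conditions are disjunctive: a company well under the revenue bar is still covered once it crosses 1M Korean MAU.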
The 6 Core Requirements
1. AI Interaction Disclosure
Users must be informed when they are interacting with an AI system. This must be presented in Korean, at or before the point of interaction — not buried in terms of service. A "Powered by AI" label or system prompt disclosure satisfies this requirement for general-purpose AI. For automated decision systems (credit, hiring, healthcare), the disclosure must be more prominent and include information about the system's purpose and the right to request human review.
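One way to implement the two-tier disclosure is to key the notice off the system category. The Korean strings below are illustrative sample wording only and should be reviewed by Korean counsel before use; the category names are our own:

```python
# Hedged sketch: choosing a Korean-language disclosure notice by system
# category, per the requirement above. Strings are sample wording only.
DISCLOSURES = {
    "general": "안내: AI 챗봇과 대화하고 있습니다.",  # "Notice: you are talking with an AI chatbot."
    "automated_decision": (
        "이 결정은 AI 시스템이 내린 것입니다. "      # "This decision was made by an AI system."
        "사람에 의한 재검토를 요청할 수 있습니다."    # "You may request human review."
    ),
}

def disclosure_for(category: str) -> str:
    """Return the notice to show at or before the point of interaction."""
    if category not in DISCLOSURES:
        raise ValueError(f"unknown category: {category}")
    return DISCLOSURES[category]
```

The point of the structure is that automated decision systems get the longer notice, including the human-review right, while general-purpose AI gets the lightweight label.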
2. AI-Generated Content Labeling
Any content that is substantially AI-generated — text, images, video, audio — and distributed to Korean users must be labeled as such. The label must be "clear and recognizable" but there's flexibility in implementation: watermarks, metadata tags, on-screen labels, or C2PA provenance credentials all satisfy this requirement. This is aligned with the EU AI Act and the US AI Transparency Act.
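Since metadata tags are one accepted labeling mechanism, a minimal sketch is to wrap generated content in a machine-readable envelope. The schema and field names here are illustrative, not a standard; C2PA provenance credentials or visible watermarks would be the production-grade alternatives:

```python
# Sketch: attaching an AI-generated label as metadata alongside content.
# The schema is illustrative; the Korean label string is sample wording.
import json

def label_ai_content(content: str, model: str) -> str:
    """Wrap content with a machine-readable AI-generated label."""
    return json.dumps({
        "content": content,
        "ai_generated": True,
        "label_ko": "이 콘텐츠는 AI로 생성되었습니다.",  # "This content was AI-generated."
        "generator": model,
    }, ensure_ascii=False)
```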
3. Risk Assessment for High-Impact Systems
High-impact AI systems must conduct documented risk assessments before deployment and annually thereafter. The assessment must cover: accuracy and reliability testing, bias and fairness evaluation, data governance, and security against adversarial inputs. Assessments must be submitted to the Korea Communications Commission (KCC).
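Internally, it helps to track the four assessment areas as a checklist so an assessment is only submitted to the regulator once every area is documented. A sketch with our own field names (the Act itself does not prescribe this structure):

```python
# Sketch: gating submission on the four assessment areas listed above.
from dataclasses import dataclass, field

REQUIRED_AREAS = {"accuracy", "bias_fairness",
                  "data_governance", "adversarial_security"}

@dataclass
class RiskAssessment:
    system_name: str
    completed_areas: set = field(default_factory=set)

    def is_submittable(self) -> bool:
        """True only when every required area is documented."""
        return REQUIRED_AREAS <= self.completed_areas
```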
4. System Registration
High-impact AI systems must be registered with the KCC before commercial deployment in Korea. Registration requires: system description, risk classification rationale, risk assessment summary, and contact information for a designated compliance officer. Foreign companies must appoint a local representative in Korea to act as the regulatory contact point.
5. Incident Reporting
Any significant AI incident affecting Korean users must be reported to the KCC within 72 hours of detection. "Significant incidents" include: AI system failures causing material harm, unauthorized data processing, AI manipulation of user behavior, and security breaches involving AI systems. The 72-hour window is identical to GDPR's data breach notification requirement.
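Because the clock runs from detection, incident tooling should stamp the detection time and compute the deadline immediately. A minimal sketch:

```python
# Sketch: computing the 72-hour reporting deadline from detection time.
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)

def reporting_deadline(detected_at: datetime) -> datetime:
    """The KCC report is due 72 hours after the incident is detected."""
    return detected_at + REPORTING_WINDOW

detected = datetime(2026, 4, 1, 9, 0, tzinfo=timezone.utc)
print(reporting_deadline(detected))  # 2026-04-04 09:00:00+00:00
```

Teams already running a GDPR breach-notification process can reuse the same pipeline, since the window is the same.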
6. Human Review Rights
For AI systems that make or substantially influence decisions affecting individuals (employment, credit, insurance, healthcare), Korean users have the right to request human review of any AI-generated decision. Companies must have a process to handle these requests within a reasonable timeframe (typically 30 days under implementing regulations).
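Operationally, each review request needs an owner and a due date derived from the 30-day window mentioned above. A sketch with illustrative names (the decision ID format is hypothetical):

```python
# Sketch: logging a human-review request with the 30-day handling window.
from dataclasses import dataclass
from datetime import date, timedelta

HANDLING_WINDOW_DAYS = 30

@dataclass
class ReviewRequest:
    decision_id: str
    requested_on: date

    @property
    def due_by(self) -> date:
        """Latest date by which a human reviewer must respond."""
        return self.requested_on + timedelta(days=HANDLING_WINDOW_DAYS)

req = ReviewRequest("loan-4821", date(2026, 5, 1))
print(req.due_by)  # 2026-05-31
```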
Enforcement and Penalties
| Violation | Maximum Penalty |
|---|---|
| Operating unregistered high-impact AI system | ₩100M fine (~$74K) + mandatory registration |
| Failure to conduct required risk assessment | ₩50M fine (~$37K) |
| No AI disclosure label for user interactions | ₩30M fine (~$22K) |
| Missing AI-generated content label | ₩30M fine (~$22K) |
| Failure to report incident within 72 hours | ₩50M fine (~$37K) |
| Repeated or willful violations | Criminal referral + up to 3% of annual Korean revenue |
The Global Compliance Picture in 2026
South Korea is the third major jurisdiction with enforceable AI law (after the EU and the US federal AI Transparency Act). The Stanford 2026 AI Index counts 90 countries with active AI regulation — a 90% increase over 2024. If your product serves users globally, you're now managing overlapping AI compliance obligations across multiple frameworks simultaneously.
The good news: the core requirements are converging. EU AI Act + Korea AI Act + US AI Transparency Act share common elements:
- AI disclosure at point of interaction
- Labeling of AI-generated content
- Risk assessments for high-impact systems
- Incident reporting requirements
- Human review rights for consequential decisions
A compliance program that satisfies all three simultaneously is achievable — the frameworks are aligned enough that there's significant overlap. The challenge is the local implementation details: Korean-language disclosures, KCC registration procedures, and local representative requirements are jurisdiction-specific.
Immediate Action Checklist
- Assess your Korean user count and Korea-attributed revenue against the thresholds
- Classify all AI systems used in Korean-facing products by risk tier
- Add Korean-language AI disclosure labels to all user-facing AI interactions
- Implement AI-generated content labeling for content distributed to Korean users
- Identify any high-impact AI systems requiring KCC registration
- Appoint a Korean local representative if covered by extraterritorial provisions
- Establish a 72-hour incident reporting process
- Add "request human review" functionality for consequential AI decisions
Stay ahead of AI regulation — use tools built with compliance in mind.
Happycapy is transparent about which AI model generates what output, maintains audit logs, and gives you full control over your data. Built for a world where AI accountability matters.
Try Happycapy Free

Frequently Asked Questions
What is South Korea's AI Act?
The AI Framework Act, effective January 22, 2026, regulates high-impact AI systems and general-purpose AI in South Korea, with extraterritorial reach covering foreign companies above defined user/revenue thresholds.
Does it apply to foreign companies?
Yes — any company with over 1M monthly Korean users or ₩10B+ in Korean revenue is subject to the Act's requirements for AI systems affecting Korean users.
What's the compliance deadline?
The law took effect January 22, 2026. Active enforcement began Q2 2026. Companies already operating are expected to be in compliance immediately — there is no further grace period.