Foreign AI providers serving Korean users need a domestic representative. Confirm your contractual chain now if you operate in Korea.
South Korea's AI Basic Act has come into force, establishing the country's first legally binding obligations for AI developers, deployers, and users. The Act creates a national AI governance framework, designates 'high-impact AI systems' (those affecting fundamental rights, public safety, or major economic interests), and imposes transparency, human oversight, and risk management obligations on organisations deploying high-impact systems. Korea becomes the second jurisdiction globally, after the EU, with a comprehensive AI governance law on the statute book.
**What the Act requires of APAC enterprises.** Organisations deploying AI in Korea — whether Korean companies or international firms with Korean operations — face obligations across three tiers. First: all AI systems must disclose to users that they are interacting with AI when the AI has a 'significant impact' on the interaction. Second: high-impact AI systems (defined by Presidential Decree, with initial examples including credit scoring, employment decisions, and healthcare AI) must conduct and document risk assessments prior to deployment. Third: real-time surveillance AI and biometric identification AI in public spaces face additional restrictions requiring ministerial approval.
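The three-tier triage above can be sketched as a simple inventory screen. This is a minimal, hypothetical Python sketch: the category names and the `user_facing` flag are illustrative assumptions for a gap-assessment worksheet, not the Presidential Decree's official classification criteria.

```python
# Hypothetical triage of a deployed-AI inventory against the Act's three tiers.
# Category labels below are illustrative assumptions, not the official Decree list.

HIGH_IMPACT_CATEGORIES = {"credit_scoring", "employment_decision", "healthcare"}
RESTRICTED_CATEGORIES = {"realtime_public_surveillance", "public_biometric_id"}

def classify_obligations(system: dict) -> list[str]:
    """Return the obligations that attach to one deployed AI system."""
    obligations = []
    if system.get("user_facing"):
        # Tier 1: disclose to users that they are interacting with AI.
        obligations.append("disclose_ai_interaction")
    if system.get("category") in HIGH_IMPACT_CATEGORIES:
        # Tier 2: documented risk assessment prior to deployment.
        obligations.append("pre_deployment_risk_assessment")
    if system.get("category") in RESTRICTED_CATEGORIES:
        # Tier 3: ministerial approval for surveillance/biometric use in public spaces.
        obligations.append("ministerial_approval")
    return obligations

inventory = [
    {"name": "loan-scorer", "category": "credit_scoring", "user_facing": False},
    {"name": "support-chatbot", "category": "customer_service", "user_facing": True},
]
for system in inventory:
    print(system["name"], classify_obligations(system))
```

In practice the classification criteria will come from the Presidential Decree; the point of a sketch like this is only to force an explicit, reviewable mapping from each deployed system to the obligations it may carry.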
**Interaction with existing Korean data law (PIPA).** The AI Basic Act operates alongside Korea's Personal Information Protection Act (PIPA), which already governs AI training data and automated decision-making disclosures. The practical result is a two-layer compliance framework: PIPA governs the data inputs and outputs, while the AI Basic Act governs system-level risk management and transparency obligations. Organisations that are already PIPA-compliant must still run an AI Basic Act gap assessment for each deployed AI system that falls into the high-impact categories.
**Timeline for compliance.** The Act's core transparency provisions are immediately in force. The high-impact AI risk assessment requirements follow a phased schedule with detailed obligations published in a Presidential Decree expected in Q3 2026. Organisations should begin identifying which deployed AI systems will fall in the high-impact categories now, rather than waiting for the Decree.
**AIMenta's editorial read.** Korea's AI Basic Act is the most significant APAC regulatory development of 2025 for enterprise AI deployments. For Korean operations specifically, begin the high-impact system inventory immediately and engage Korean legal counsel on the Presidential Decree timeline. For non-Korean APAC operations watching regional trends, Korea's Act signals that other APAC jurisdictions (Taiwan, Thailand, Vietnam) are likely to pass similar framework legislation within 24 months.
Related stories
- **Company** · NAVER HyperCLOVA X Expands APAC Enterprise Offering with Korean and Japanese Language AI Models — NAVER expands HyperCLOVA X to target APAC enterprise markets with Korean- and Japanese-native LLMs, offering an alternative to US providers with in-region data residency. Significant for Korean and Japanese enterprises where English-primary models underperform.
- **Regulation** · MAS confirms AI model risk management guidelines mandatory for Singapore's largest financial institutions by end-2026 — The Monetary Authority of Singapore published its formal response to the AI in Finance industry consultation, confirming that AI model risk management guidelines will become mandatory for D-SIBs (Domestic Systemically Important Banks) and major insurers by Q4 2026, with an expectation of industry-wide adoption for all MAS-regulated entities by mid-2027.
- **Regulation** · Japan METI updates AI governance guidelines: supply chain transparency now required for enterprise procurement — Japan's Ministry of Economy, Trade and Industry updated its AI Governance Guidelines to version 3.0, introducing supply-chain transparency requirements for enterprises procuring AI systems and aligning the framework with G7 Hiroshima AI Process principles. The guidelines are advisory rather than mandatory but carry significant regulatory expectation weight.
- **Regulation** · Korea MSIT releases AI Basic Act implementation guidelines with 2027 compliance timeline — South Korea's Ministry of Science and ICT published detailed implementation guidelines for the AI Basic Act, specifying risk classification criteria, compliance obligations for high-impact AI systems, and sector-specific safe-harbour conditions. Enterprises have until Q1 2027 to achieve full compliance.