For APAC providers serving EU users, the Code is the operational benchmark — start your documentation package now, not in July.
The Code translates the EU AI Act's GPAI articles into concrete implementation expectations. Providers above the 10^25 FLOP systemic-risk threshold face the heaviest burden — model evaluations, adversarial testing, incident reporting, and cybersecurity protections. Providers below the threshold still face documentation and transparency requirements under Article 53.
The Code of Practice covers four main domains:

- **Transparency**: providers must publish model cards disclosing training data sources, known limitations, and benchmark performance.
- **Copyright**: training pipelines must implement opt-out mechanisms for rights holders by the August 2026 deadline.
- **Risk classification**: providers must self-assess whether their model meets the systemic-risk threshold, using a FLOP count methodology the Commission has now clarified.
- **Incident reporting**: confirmed AI-related incidents must be reported to national authorities within 72 hours — the same window as GDPR data breach notifications, which is deliberate.
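To make the risk-classification step concrete, the back-of-envelope check below uses the widely cited 6·N·D heuristic (training FLOPs ≈ 6 × parameters × training tokens) to compare a model against the 10^25 FLOP presumption. This is an illustrative approximation only, not the Commission's official counting methodology, and all figures in the example are hypothetical:

```python
# Rough training-compute estimate via the common 6·N·D heuristic.
# NOTE: an illustrative approximation, NOT the Commission's official
# FLOP-counting methodology under the EU AI Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold for GPAI systemic risk

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6.0 * n_params * n_tokens

def crosses_threshold(n_params: float, n_tokens: float) -> bool:
    """True if the estimate meets or exceeds the 1e25 FLOP presumption."""
    return estimate_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 70B-parameter model trained on 15T tokens
flops = estimate_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs")            # 6.30e+24
print(crosses_threshold(70e9, 15e12))  # False: below the presumption threshold
```

A model that lands anywhere near the threshold under this heuristic should trigger the full assessment the Code expects, since the official methodology may count compute differently (e.g. including fine-tuning runs).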
For APAC enterprise teams that fine-tune or distribute foundation models touching EU users, the practical implication is immediate: begin assembling the technical documentation now. The compliance obligations attach to anyone placing a GPAI model on the EU market, regardless of where the provider is incorporated. A Hong Kong fine-tuning shop distributing a derivative model via EU cloud marketplaces is in scope.
AIMenta advises APAC clients to run a rapid gap analysis against the Code's four domains before the August 2026 deadline. The documentation burden for non-systemic-risk models is manageable — typically 2-4 weeks of legal and engineering time. For models that might cross the FLOP threshold (large multi-modal or domain-specific foundation models), the assessment is more complex and should begin immediately.
Related stories

- **Regulation** · South Korea AI Basic Act Enters Enforcement Phase with Mandatory AI Impact Assessments for High-Risk Systems. South Korea's AI Basic Act enters its enforcement phase, requiring AI impact assessments for high-risk systems in finance, healthcare, and public administration. APAC enterprises with Korean operations must audit AI deployments for compliance before the enforcement deadline.
- **Regulation** · MAS Updates AI Governance Framework for Singapore FSI with Mandatory Explainability Requirements for Credit and AML AI. MAS releases an AI governance framework update for Singapore FSI, with mandatory explainability requirements for credit decisions and trade surveillance AI. APAC financial institutions using AI in lending, fraud detection, or AML must update governance documentation.
- **Regulation** · MAS confirms AI model risk management guidelines mandatory for Singapore's largest financial institutions by end-2026. The Monetary Authority of Singapore published its formal response to the AI in Finance industry consultation, confirming that AI model risk management guidelines will become mandatory for D-SIBs (Domestic Systemically Important Banks) and major insurers by Q4 2026, with industry-wide adoption expected for all MAS-regulated entities by mid-2027.
- **Regulation** · Japan METI updates AI governance guidelines: supply-chain transparency now required for enterprise procurement. Japan's Ministry of Economy, Trade and Industry updated its AI Governance Guidelines to version 3.0, introducing supply-chain transparency requirements for enterprises procuring AI systems and aligning the framework with G7 Hiroshima AI process principles. The guidelines are advisory rather than mandatory but carry significant regulatory-expectation weight.
- **Regulation** · Korea MSIT releases AI Basic Act implementation guidelines with 2027 compliance timeline. South Korea's Ministry of Science and ICT published detailed implementation guidelines for the AI Basic Act, specifying risk-classification criteria, compliance obligations for high-impact AI systems, and sector-specific safe-harbour conditions. Enterprises have until Q1 2027 to achieve full compliance.