Webinar · 4 min read

Webinar Recap: AI Governance in APAC — What the MAS, HKMA, and APPI Frameworks Require in Practice

Seventy-three attendees across six markets joined our February session on operationalising AI governance requirements. Key questions and answers from the session.

By AIMenta Editorial Team

Our February session on AI governance across APAC regulated markets drew seventy-three practitioners. Here is what we covered, along with the questions that generated the most discussion.

This session brought together compliance leads, CIOs, and AI project managers from financial services, healthcare, and public sector organisations across Hong Kong, Singapore, Japan, Malaysia, Korea, and Taiwan. The 90-minute format combined a 45-minute presentation with 45 minutes of live Q&A.

What We Covered

Section 1 — The regulatory landscape as of February 2026

We opened with a market-by-market review of the four frameworks that most directly affect enterprise AI deployment in APAC:

  • MAS Model Risk Management (MRM) Framework — Singapore's framework requires that any AI system used in credit, pricing, or risk assessment decisions must have a documented model validation process, a named model owner, and quarterly performance reviews. The February 2026 update clarified that LLM-based RAG systems used for customer communications are now in scope — a significant change from the prior guidance.
  • HKMA AI Governance Principles — Hong Kong Monetary Authority's principles, issued in 2024, require licensed institutions to designate an "AI system owner" for each production AI application, maintain model cards, and implement human-in-the-loop controls for high-impact decisions. The HKMA has signalled that 2026 examinations will specifically probe AI governance documentation.
  • Japan APPI (2022 amendments, in force) — The amended Act explicitly covers automated profiling and requires disclosure when significant decisions affecting individuals are made solely by automated systems. METI's AI Guidelines, updated December 2025, added specific provisions for generative AI in enterprise settings.
  • Korea AI Basic Act (effective February 2026) — Korea's framework went live in February 2026, establishing a tiered risk classification for AI systems and mandatory conformity assessments for high-risk AI in financial services and healthcare.

Section 2 — Operationalising governance: what good looks like

We walked through four governance components that satisfy requirements across all four frameworks:

  1. Model card per workflow (not per product) — documenting performance on your specific data, known failure modes, and intended use cases
  2. Human-in-the-loop thresholds — defining which decisions require human review and at what confidence score the system defers
  3. Drift monitoring dashboard — tracking model performance against baseline on a weekly cadence with alerting on >5% degradation
  4. Audit trail export — enabling compliance teams to pull a full log of inputs, outputs, and human overrides for any time period
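Component 2 reduces to a simple deferral rule. The sketch below is a minimal illustration only, not a production control: the `0.85` threshold, the `Decision` type, and `route_decision` are hypothetical names chosen for this example.

```python
from dataclasses import dataclass

# Hypothetical confidence threshold below which the system defers to a human.
REVIEW_THRESHOLD = 0.85

@dataclass
class Decision:
    outcome: str             # the model's recommended outcome
    confidence: float        # model confidence score in [0, 1]
    needs_human_review: bool

def route_decision(outcome: str, confidence: float) -> Decision:
    """Flag the decision for human review when model confidence
    falls below the defined threshold."""
    return Decision(
        outcome=outcome,
        confidence=confidence,
        needs_human_review=confidence < REVIEW_THRESHOLD,
    )
```

In practice the threshold itself should be documented in the model card and periodically re-validated, so that the escalation rule and the governance paperwork stay in sync.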

We showed anonymised examples of each document type from live client deployments.
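The alerting rule in component 3 is a one-line comparison. A minimal sketch follows; the 5% tolerance comes from the list above, while the function name and example metric values are placeholders.

```python
def drift_alert(baseline: float, current: float, tolerance: float = 0.05) -> bool:
    """Return True when the current metric has degraded by more than
    `tolerance` (default 5%) relative to the baseline."""
    if baseline <= 0:
        raise ValueError("baseline must be positive")
    degradation = (baseline - current) / baseline
    return degradation > tolerance

# A weekly F1 of 0.80 against a baseline of 0.86 is a ~7% drop: alert fires.
drift_alert(0.86, 0.80)
```

The harder problem, as the gaps below show, is not computing the comparison but making sure the resulting alert reaches someone with authority to act on it.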

Section 3 — Common gaps we see in practice

The four gaps we encounter most frequently in client governance reviews:

  • Governance documentation exists at the product level but not at the workflow level (a single model card for "our AI system" rather than per-workflow cards)
  • Drift monitoring is configured but alerts are going to an unmonitored email address
  • Human-in-the-loop thresholds are defined but never tested — no one has verified that the escalation path actually works
  • Model owners are named but have no actual authority to pause or retrain the model

Selected Q&A

Q: Does the MAS MRM framework apply to AI systems that support decisions rather than make them directly?

A: Yes, as of the February 2026 clarification. If an AI system's output materially influences a human decision in credit, pricing, or risk assessment — even if the human signs off — the framework applies. "Human in the loop" does not create a safe harbour if the human is rubber-stamping AI recommendations without independent judgment.

Q: We are using a third-party LLM via API for a customer-facing application. Who owns the model governance?

A: The deploying organisation. The LLM vendor is responsible for the model's general capabilities; you are responsible for how you deploy it, what prompts you use, what guardrails you implement, and what outcomes you generate. You cannot outsource the governance obligation to the model provider.

Q: Our RAG system retrieves from an internal knowledge base. Does the retrieved content affect our compliance posture?

A: Yes, in two ways. First, if the retrieved content includes personal data, APPI/PDPA/PDPA-MY handling requirements apply to the retrieval step — not just the storage. Second, if the retrieved content is regulatory guidance, financial advice, or medical information, the output may trigger sector-specific disclosure requirements even if the retrieval itself is accurate.

Q: How do we handle model governance for open-source models we self-host?

A: Self-hosted models have the same governance obligations as API-accessed models — in fact, more, because you also own the infrastructure and training (or fine-tuning) process. The advantage is that you have more control over data residency and audit trails.
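The audit trail export described in component 4 can be as simple as a time-bounded filter over structured log records. This is a schematic sketch under assumed names (`AuditRecord`, `export_audit_trail`), not a reference implementation.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class AuditRecord:
    timestamp: datetime
    model_input: str
    model_output: str
    human_override: Optional[str]  # set when a reviewer changed the outcome

def export_audit_trail(records: List[AuditRecord],
                       start: datetime, end: datetime) -> List[AuditRecord]:
    """Return every record with start <= timestamp < end, ready for
    a compliance pull over an arbitrary time period."""
    return [r for r in records if start <= r.timestamp < end]
```

From the exported trail, human overrides can be isolated with a filter such as `[r for r in trail if r.human_override]`, which is typically the first thing an examiner asks to see.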

Resources

The next webinar session is scheduled for Q2 2026. Contact us to join the invitation list.

Where this applies

How AIMenta turns these ideas into engagements — explore the relevant service lines, industries, and markets.

Beyond this insight

If this article matches your stage of thinking, explore our practice depth: the underlying capabilities ship across all six pillars, ten verticals, and nine Asian markets.

Keep reading


Research

Indonesia Enterprise AI in 2026: Digital Economy Leadership, PDP Law Compliance, and the Archipelago Challenge

Indonesia is Southeast Asia's largest AI market by addressable size — but adoption is bifurcated between Jakarta's world-class digital economy and a large provincial enterprise sector that is 3–4 years behind. A practitioner guide to Indonesian enterprise AI: PDP Law, OJK fintech regulation, Jakarta ecosystem dynamics, Bahasa Indonesia NLP advantages, and the infrastructure constraints of a 17,000-island nation.

Research

China Enterprise AI in 2026: Regulatory Complexity, Domestic Model Leadership, and the Hong Kong Gateway

China is the world's second-largest AI market — and the most complex for foreign enterprise AI practitioners. Three overlapping regulatory frameworks (CAC Generative AI rules, PIPL data localisation, Algorithmic Recommendation Provisions), a domestic model ecosystem that has closed the capability gap (Qwen 3, DeepSeek), and a manufacturing AI sector at global scale. A practitioner guide to operating AI strategy in and around Mainland China.

Research

Government and Public Sector AI in APAC 2026: Procurement, Data Sovereignty, and the Three-Tier Market

Government is the largest AI buyer in APAC by aggregate contract value — but the procurement process, data sovereignty constraints, and explainability requirements make it fundamentally different from private enterprise AI. A practitioner guide to APAC government AI: the three-tier market structure, formal tender systems (GeBIZ, KONEPS, MyProcurement), citizen-facing AI governance, and data localisation constraints by jurisdiction.

Want this applied to your firm?

We use these frameworks daily in client engagements. Let's see what they look like for your stage and market.