Seventy-three practitioners attended our February session on AI governance across APAC regulated markets. Here is what we covered and the questions that generated the most discussion.
This session brought together compliance leads, CIOs, and AI project managers from financial services, healthcare, and public sector organisations across Hong Kong, Singapore, Japan, Malaysia, Korea, and Taiwan. The 90-minute format combined a 45-minute presentation with 45 minutes of live Q&A.
What We Covered
Section 1 — The regulatory landscape as of February 2026
We opened with a market-by-market review of the four frameworks that most directly affect enterprise AI deployment in APAC:
- MAS Model Risk Management (MRM) Framework — Singapore's framework requires any AI system used in credit, pricing, or risk assessment decisions to have a documented model validation process, a named model owner, and quarterly performance reviews. The February 2026 update clarified that LLM-based RAG systems used for customer communications are now in scope — a significant change from the prior guidance.
- HKMA AI Governance Principles — Hong Kong Monetary Authority's principles, issued in 2024, require licensed institutions to designate an "AI system owner" for each production AI application, maintain model cards, and implement human-in-the-loop controls for high-impact decisions. The HKMA has signalled that 2026 examinations will specifically probe AI governance documentation.
- Japan APPI (2022 amendments, in force) — The amended Act explicitly covers automated profiling and requires disclosure when significant decisions affecting individuals are made solely by automated systems. METI's AI Guidelines, updated December 2025, added specific provisions for generative AI in enterprise settings.
- Korea AI Basic Act (effective February 2026) — Korea's framework went live in February 2026, establishing a tiered risk classification for AI systems and mandatory conformity assessments for high-risk AI in financial services and healthcare.
Section 2 — Operationalising governance: what good looks like
We walked through four governance components that satisfy requirements across all four frameworks:
- Model card per workflow (not per product) — documenting performance on your specific data, known failure modes, and intended use cases
- Human-in-the-loop thresholds — defining which decisions require human review and at what confidence score the system defers
- Drift monitoring dashboard — tracking model performance against baseline on a weekly cadence with alerting on >5% degradation
- Audit trail export — enabling compliance teams to pull a full log of inputs, outputs, and human overrides for any time period
We showed anonymised examples of each document type from live client deployments.
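To make the human-in-the-loop component concrete, here is a minimal sketch of confidence-based deferral. The 0.85 threshold and the `review_queue` structure are illustrative assumptions, not values prescribed by any of the four frameworks; each workflow should set and document its own threshold.

```python
from dataclasses import dataclass, field

@dataclass
class HITLPolicy:
    """Confidence-based deferral: decisions below the threshold go to a human.

    The 0.85 auto-approve threshold is an illustrative assumption; regulated
    deployments should set it per workflow and document the rationale.
    """
    auto_threshold: float = 0.85
    review_queue: list = field(default_factory=list)

    def route(self, decision_id: str, confidence: float) -> str:
        if confidence >= self.auto_threshold:
            return "auto"          # system acts; outcome is still logged
        # Below threshold: defer to a named human reviewer.
        self.review_queue.append(decision_id)
        return "human_review"

policy = HITLPolicy()
policy.route("loan-123", 0.92)  # "auto"
policy.route("loan-124", 0.61)  # "human_review"
```

Testing this escalation path end to end (does the queue actually reach a reviewer?) is exactly the gap flagged in Section 3.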
Section 3 — Common gaps we see in practice
The four gaps we encounter most frequently in client governance reviews:
- Governance documentation exists at the product level but not at the workflow level (a single model card for "our AI system" rather than per-workflow cards)
- Drift monitoring is configured but alerts are going to an unmonitored email address
- Human-in-the-loop thresholds are defined but never tested — no one has verified that the escalation path actually works
- Model owners are named but have no actual authority to pause or retrain the model
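The drift-monitoring component from Section 2, weekly checks with alerting on >5% degradation, reduces to a small comparison. A sketch, assuming a single scalar performance metric where higher is better; the `alert_channel` callable is a hypothetical hook that, per the gap above, should deliver to a monitored destination rather than an unread inbox:

```python
def check_drift(baseline: float, current: float, threshold: float = 0.05) -> bool:
    """Return True if current performance has degraded more than `threshold`
    (5%) relative to baseline. Assumes higher metric values are better."""
    degradation = (baseline - current) / baseline
    return degradation > threshold

def weekly_drift_check(baseline: float, current: float, alert_channel) -> None:
    # alert_channel: any callable that delivers the alert to a monitored
    # destination (pager, chat channel), not an unmonitored mailbox.
    if check_drift(baseline, current):
        alert_channel(f"Drift alert: baseline={baseline:.3f}, current={current:.3f}")

alerts = []
weekly_drift_check(0.90, 0.83, alerts.append)  # ~7.8% degradation: alert fires
weekly_drift_check(0.90, 0.88, alerts.append)  # ~2.2% degradation: no alert
```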
Selected Q&A
Q: Does the MAS MRM framework apply to AI systems that support decisions rather than make them directly?
A: Yes, as of the February 2026 clarification. If an AI system's output materially influences a human decision in credit, pricing, or risk assessment — even if the human signs off — the framework applies. "Human in the loop" does not create a safe harbour if the human is rubber-stamping AI recommendations without independent judgment.
Q: We are using a third-party LLM via API for a customer-facing application. Who owns the model governance?
A: The deploying organisation. The LLM vendor is responsible for the model's general capabilities; you are responsible for how you deploy it, what prompts you use, what guardrails you implement, and what outcomes you generate. You cannot outsource the governance obligation to the model provider.
Q: Our RAG system retrieves from an internal knowledge base. Does the retrieved content affect our compliance posture?
A: Yes, in two ways. First, if the retrieved content includes personal data, APPI/PDPA/PDPA-MY handling requirements apply to the retrieval step — not just the storage. Second, if the retrieved content is regulatory guidance, financial advice, or medical information, the output may trigger sector-specific disclosure requirements even if the retrieval itself is accurate.
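One way to apply personal-data handling at the retrieval step, rather than only at rest, is to filter retrieved chunks before they are assembled into the prompt. The patterns below are illustrative assumptions only (a simplified ID-like format and card-number-like sequences); a production system would use a dedicated PII-detection service and the handling rules of the applicable regime.

```python
import re

# Illustrative patterns only; real PII detection needs a dedicated service.
PII_PATTERNS = [
    re.compile(r"\b[A-Z]\d{7}\b"),                       # assumed ID-like format
    re.compile(r"\b\d{4}[- ]\d{4}[- ]\d{4}[- ]\d{4}\b"), # card-number-like sequences
]

def redact(chunk: str) -> str:
    """Redact PII-like spans from a retrieved chunk before prompt assembly."""
    for pattern in PII_PATTERNS:
        chunk = pattern.sub("[REDACTED]", chunk)
    return chunk

def safe_context(chunks: list[str]) -> str:
    # Apply handling at the retrieval step, not just at storage.
    return "\n---\n".join(redact(c) for c in chunks)

print(safe_context(["Customer A1234567 asked about fees."]))
# Prints: Customer [REDACTED] asked about fees.
```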
Q: How do we handle model governance for open-source models we self-host?
A: Self-hosted models have the same governance obligations as API-accessed models — in fact, more, because you also own the infrastructure and training (or fine-tuning) process. The advantage is that you have more control over data residency and audit trails.
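The audit-trail export component applies equally to self-hosted and API-accessed models. A minimal sketch, assuming decisions are logged as JSON lines with an ISO-8601 `ts` timestamp; the record shape is an illustrative assumption:

```python
import json
from datetime import datetime, timezone

def export_audit_trail(log_lines, start: datetime, end: datetime) -> list[dict]:
    """Pull all records (inputs, outputs, human overrides) in a time window.

    Each line is assumed to be a JSON object with an ISO-8601 "ts" field.
    """
    records = []
    for line in log_lines:
        rec = json.loads(line)
        ts = datetime.fromisoformat(rec["ts"])
        if start <= ts <= end:
            records.append(rec)
    return records

log = [
    '{"ts": "2026-02-10T09:00:00+00:00", "input": "q1", "output": "a1", "override": null}',
    '{"ts": "2026-03-01T09:00:00+00:00", "input": "q2", "output": "a2", "override": "human edit"}',
]
feb = export_audit_trail(
    log,
    datetime(2026, 2, 1, tzinfo=timezone.utc),
    datetime(2026, 2, 28, 23, 59, tzinfo=timezone.utc),
)
# feb contains only the February record
```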
Resources
- APAC AI Regulation Snapshot (Q2 2026)
- Securing AI Agents in APAC Enterprise
- AI Procurement in Asia 2026
The next webinar session is scheduled for Q2 2026. Contact us to join the invitation list.