About This Roundtable
AIMenta hosted a closed practitioner roundtable in March 2026 with heads of AI, chief data officers, and senior technology leaders from financial institutions across Singapore, Hong Kong, and Australia — collectively representing organisations with over US$4 trillion in assets under management and more than 50 production AI deployments.
This recap synthesises the key themes from a three-hour session. Participants spoke under the Chatham House Rule; no individual or institution is identified. The intent is to surface honest, unfiltered practitioner experience — not to validate vendor claims or promote best-practice narratives.
Theme 1: Where Banks Actually Started
We began with a simple question: "What was your first production AI deployment, and why?"
The responses were more concentrated than expected. Across 11 participants, 9 of the first production AI deployments fell into one of three categories:
Fraud detection / transaction monitoring (5 institutions)
The consistent explanation: "We had a clear business case, the data was already there, and the regulatory pressure made it easy to justify investment." Fraud AI was the most common first deployment not because it's the most exciting use case, but because it has the most quantifiable ROI, the least change management complexity (the AI replaces a system, not a human workflow), and strong regulatory tailwinds (FATF guidance strongly encourages ML-augmented AML monitoring, particularly for systemically important banks).
Document processing / intelligent data capture (3 institutions)
Specifically: invoice processing, KYC document extraction, and trade finance document handling. The pattern was similar — clear cost case, high-volume repetitive process, little subjective judgement required, measurable output quality.
Customer service chatbot (1 institution)
Notably, this was described as "more difficult than we expected" by the participant. The chatbot was deployed relatively quickly, but maintaining quality and managing the multilingual requirement (English, Cantonese, Mandarin) consumed significantly more resources than the initial business case anticipated. "We underestimated the knowledge base maintenance effort. The AI is only as good as what we feed it."
Key insight: The financial institutions that started with fraud or document AI are now running 5–10 production AI deployments. The institution that started with customer service is still primarily on that use case two years later. Starting with the right use case — high data quality, clear ROI, limited change management — accelerated the programme significantly.
Theme 2: The Data Quality Problem (Again)
Every participant cited data quality as either their primary challenge or in their top three. The specific manifestations differed:
Customer data inconsistency "We discovered during our first KYC AI project that we had the same customer spelled six different ways across our core banking systems — across their name, address, and ID documents. Before the AI could help us with KYC, we spent four months just on data standardisation."
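At its core, the multi-system spelling problem described above is an entity-resolution task. A minimal sketch of the first pass using only Python's standard library (the 0.85 similarity threshold and sample names are invented for illustration; production matching would also compare addresses and ID documents, as the participant describes):

```python
from difflib import SequenceMatcher

def normalise(name: str) -> str:
    # Lowercase and strip punctuation so trivial variants collapse.
    cleaned = "".join(c for c in name.lower() if c.isalnum() or c.isspace())
    return " ".join(cleaned.split())

def likely_same_customer(a: str, b: str, threshold: float = 0.85) -> bool:
    # Flag near-identical normalised names for manual review.
    return SequenceMatcher(None, normalise(a), normalise(b)).ratio() >= threshold

# Hypothetical records pulled from different core banking systems.
records = ["Tan Ah Kow", "TAN, AH KOW", "Tan A. Kow", "Lim Wei Ling"]
dupes = [(a, b) for i, a in enumerate(records)
         for b in records[i + 1:] if likely_same_customer(a, b)]
```

Fuzzy matching only surfaces candidates; the four months of standardisation work the participant mentions is largely the human adjudication of what the matcher flags.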
Historical data gaps "Our fraud model needed three years of labelled transaction data to train properly. We had the data, but it was across three different legacy systems, and the fraud labels were in a spreadsheet that a team in operations maintained manually. Getting from raw data to training data took longer than training the model."
Concept drift (the quiet failure) "We deployed our AML model and it performed well for the first nine months. Then fraud patterns changed — specifically a new variant of account takeover using legitimate-looking transactions — and the model's precision dropped from 82% to 61% before our monitoring caught it. We hadn't built adequate model monitoring into the production deployment."
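The drift failure in that quote is detectable with fairly simple production monitoring: track the rolling precision of investigated alerts and raise a flag when it falls below a floor. A hypothetical sketch (window size, floor value, and the simulated outcome stream are all illustrative, not the participant's setup):

```python
from collections import deque

class PrecisionMonitor:
    # Rolling precision of investigated alerts; flags drift below a floor.
    def __init__(self, window: int = 500, floor: float = 0.75):
        self.outcomes = deque(maxlen=window)  # True = alert confirmed genuine
        self.floor = floor

    def record(self, confirmed: bool) -> None:
        self.outcomes.append(confirmed)

    def precision(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def drifting(self) -> bool:
        # Wait for a reasonably full window before alerting, to avoid noise.
        return len(self.outcomes) >= 100 and self.precision() < self.floor

monitor = PrecisionMonitor(window=200, floor=0.75)
for _ in range(100):
    monitor.record(True)    # healthy period: alerts confirmed
for _ in range(60):
    monitor.record(False)   # pattern shift: confirmations fall away
```

The point of the sketch is that this kind of check runs on labels the investigation team already produces; the gap the participant describes was not a hard technical problem, but the absence of any monitoring at all.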
The consistent message: Data readiness is not a pre-AI problem that you solve once before starting. It's an ongoing operational discipline. Institutions that treat data governance as a project (with a start and end date) consistently struggle. Institutions that have made data quality a permanent operational function (with ownership, SLAs, and monitoring) progress faster on AI.
Reference: AIMenta's AI Data Readiness Playbook covers the data infrastructure requirements in detail.
Theme 3: Regulatory Navigation in Practice
APAC financial institutions operate across multiple regulatory jurisdictions, and the regulatory experience around AI varied significantly:
Singapore (MAS) — the most enabling environment "MAS has been genuinely forward-leaning. The FEAT principles are substantive, but they're also practical — they tell you what outcome to achieve (fairness, ethics, accountability, transparency) rather than mandating specific technology approaches. We've found MAS engagement on AI genuinely helpful rather than obstructive."
"The AI in Finance workgroup through MAS has given us access to guidance we couldn't get from our own compliance team. We've developed a practice of bringing proposed AI deployments to the workgroup before implementation — not as a regulatory filing, but as a consultation. It saves significant back-and-forth later."
Hong Kong (HKMA) — increasing depth of scrutiny "HKMA's model risk management guidance has moved AI scrutiny significantly closer to traditional quantitative risk model governance. Our AI models now go through the same validation process as our market risk models — independent validation, challenger model testing, backtesting. This adds 3–4 months to deployment, but it's made our models more robust."
"The hardest question from HKMA examiners is always: 'How do you know the model's decision was correct in a specific case?' We had to build case-by-case explainability into our credit AI before we could satisfy that examination standard."
Australia (APRA/ASIC) — operational resilience focus "APRA's primary concern with our AI deployments has been operational resilience — concentration risk with AI vendors, business continuity if an AI system fails, and third-party AI vendor management. We've had to produce AI system dependency maps and vendor contingency plans as part of standard CPS 234 compliance."
"ASIC's focus is more on consumer outcomes — they want to see evidence that AI-assisted decisioning isn't creating unfair consumer outcomes. We run bias audits quarterly on any AI system that makes decisions affecting individual customers."
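A quarterly bias audit of the kind described typically starts with approval-rate comparisons across customer groups. A hypothetical sketch of a disparate impact check (the groups, counts, and the 0.8 "four-fifths" reference point are illustrative, not a description of any participant's methodology):

```python
def approval_rates(decisions):
    # decisions: iterable of (group_label, approved_bool) pairs.
    counts = {}
    for group, approved in decisions:
        n, k = counts.get(group, (0, 0))
        counts[group] = (n + 1, k + int(approved))
    return {g: k / n for g, (n, k) in counts.items()}

def disparate_impact(decisions, reference_group):
    # Each group's approval rate relative to the reference group;
    # ratios below ~0.8 (the "four-fifths rule") warrant investigation.
    rates = approval_rates(decisions)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Invented example: group B approved at 50% versus group A at 80%.
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)
di = disparate_impact(decisions, "A")
```

A low ratio is a prompt for investigation rather than proof of unfairness; the disparity may reflect legitimate risk factors, which is exactly what a bias audit exists to establish.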
The consistent message across jurisdictions: Engage with regulators before deployment, not after. Every participant who described a smooth regulatory process had engaged the relevant regulator (MAS, HKMA, APRA) during the design phase, not at the production submission stage.
Theme 4: Change Management — The Underrated Variable
The technical AI deployment is frequently the easiest part of an enterprise AI project. The change management is consistently harder.
The fraud analyst resistance pattern "When we deployed our fraud AI, we told the team it would 'enhance' their work. What it actually did was take away the interesting case selection — the AI prioritised the queue, and analysts only saw cases the AI had already determined were high priority. After six months, we had significant attrition in the fraud team because the experienced analysts felt de-skilled."
This pattern — AI removing judgment from experienced practitioners — appeared in several forms across the session. The resolution was not to reduce the AI's scope, but to redesign the human role: experienced fraud analysts became model supervisors, training data curators, and novel pattern escalation specialists rather than queue workers.
The 'gotcha' testing culture "Our clinical teams — in our insurance subsidiary — actively tried to find cases where the AI was wrong. When they found one, they used it as evidence the system wasn't ready for deployment. We had to change the conversation from 'the AI makes mistakes' (which it does) to 'does the AI make fewer mistakes than the alternative?'"
The measurement gap "We said the AI would improve productivity by 30%. Eighteen months later, nobody had actually measured it. The head of operations was certain the AI had improved productivity — but when we ran the numbers, the improvement was 11%. Not bad, but significantly different from the 30% we'd promised the board."
Reference: AIMenta's AI ROI Measurement Framework covers measurement methodology.
Theme 5: What's Working in 2026
After discussing challenges, we asked: "What's actually working well in your AI programme right now?"
Fraud AI at maturity "Our fraud ML system is now three years old and genuinely mature. False positive rate is down 68% from our pre-AI baseline. We've expanded from card fraud to encompass account takeover, push payment fraud, and trade finance anomaly detection. The ROI case is so clear that budget approval for expansions takes two weeks instead of six months."
Document AI for trade finance "We process letters of credit, bills of lading, and commercial invoices using AI. Before, it took a team of 12 to process our daily trade finance document volume. Now that same volume is handled by 4 people, with the AI handling approximately 75% of documents straight-through and the humans handling exceptions. The 25% exception rate sounds high but the documents in that category are genuinely complex."
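The straight-through versus exception split described above is usually driven by a model confidence threshold. A hypothetical sketch (the 0.90 threshold and the per-document scores are invented for illustration; real deployments typically tune the threshold per document type against measured extraction accuracy):

```python
def route(confidence: float, threshold: float = 0.90) -> str:
    # High-confidence extractions go straight through; the rest
    # land in the human exception queue.
    return "straight_through" if confidence >= threshold else "exception_queue"

# Illustrative per-document confidence scores from an extraction model.
batch = [0.99, 0.97, 0.42, 0.95, 0.88, 0.93, 0.99, 0.61]
queues = {"straight_through": 0, "exception_queue": 0}
for confidence in batch:
    queues[route(confidence)] += 1
```

Lowering the threshold raises the straight-through rate but shifts errors from the exception queue into production output, which is why a 25% exception rate can be the right operating point for complex trade finance documents.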
Credit AI for SME lending "We've been able to extend credit to small businesses that our traditional scorecard declined. The alternative data model — particularly the transaction behaviour features from their operating account — identifies creditworthy SMEs that the bureau data alone would have declined. Our SME NPL rate on AI-approved-only credits is lower than our traditional scorecard approvals in the same risk band."
Theme 6: What's Not Working — The Honest Part
We explicitly asked: "What did you try that didn't work, or that you've paused?"
Generative AI in customer-facing compliance roles "We piloted an LLM-based system for answering customer questions about our KYC requirements. The accuracy was about 85%, which sounds high — but in a compliance context, a 15% error rate means 15% of customers get wrong information about their regulatory obligations. We pulled the deployment after three months. We're using it internally for compliance team support, where an expert is always in the loop, but customer-facing is off the table until accuracy is significantly higher."
AI-generated regulatory reports "We tried using LLMs to draft sections of our regulatory reports. The AI could produce plausible-looking report language, but the factual accuracy was problematic — numbers didn't always match source data, and conclusions didn't always follow from the analysis. The QA required to verify AI-generated regulatory reports took as long as writing them manually. We abandoned it."
Unsupervised anomaly detection for AML "We deployed an unsupervised ML model to identify novel AML patterns not covered by our supervised models. The model identified 'anomalies' — but 98% were false positives because the model lacked the business context to distinguish legitimate-but-unusual transactions from suspicious-but-unusual ones. The false positive volume overwhelmed our analysts. We've since rebuilt it with much more constraint and business context baked in."
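The rebuild "with much more constraint and business context baked in" can be illustrated by layering business rules over a statistical flag, so legitimate-but-unusual transactions are suppressed before they reach analysts. A hypothetical sketch (the rule fields, the 10x-median threshold, and the sample transactions are all invented for illustration):

```python
from statistics import median

def unusual(txn, typical):
    # Crude statistical flag: amount far outside the account's typical range.
    return txn["amount"] > 10 * typical

def analyst_worthy(txn, typical):
    # Layer business context on top of the statistical flag: suppress
    # large payments to known counterparties on a recurring schedule.
    return unusual(txn, typical) and not (txn["known_counterparty"] and txn["recurring"])

history = [100, 120, 95, 110, 105, 98, 102, 115]
typical = median(history)
candidates = [
    {"amount": 50_000, "known_counterparty": True,  "recurring": True},   # payroll run
    {"amount": 48_000, "known_counterparty": False, "recurring": False},  # genuinely novel
    {"amount": 130,    "known_counterparty": False, "recurring": False},  # routine
]
raw_flags = [t for t in candidates if unusual(t, typical)]
escalated = [t for t in candidates if analyst_worthy(t, typical)]
```

The same principle applies to a real unsupervised model: the anomaly score supplies candidates, and encoded business context decides which candidates are worth an analyst's time.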
The consistent message: High-stakes, customer-facing, or regulatory-output AI applications demand higher accuracy than most LLMs currently deliver. The practical sweet spot today is internal workflow AI and architectures that keep an expert in the loop.
Key Takeaways
The session produced five durable takeaways for APAC financial institutions:
1. Choose your first use case based on data availability and ROI clarity, not ambition. Fraud detection and document processing are boring choices — and the institutions that made them are 3–5 years ahead of those that started with more ambitious applications.
2. Data quality is a permanent operational function, not a pre-AI project. Build data governance as an ongoing discipline with ownership, SLAs, and monitoring.
3. Engage regulators before deployment. Every smooth regulatory experience in the room involved early, proactive regulator engagement. Every difficult experience involved a post-implementation surprise.
4. Design the human role as carefully as the AI role. Change management failures in financial services AI typically come from not redesigning the human workflow alongside the AI deployment — not from the AI itself failing.
5. Measure ROI from Day 1. Commit to specific, measurable outcomes before deployment. Without measurement, AI programmes lose momentum, budget, and credibility — regardless of actual impact.
Resources
- AI in APAC Financial Services Playbook — full use case guide, regulatory requirements, and 90-day roadmap
- AI Data Readiness Playbook — data infrastructure requirements for AI
- AI ROI Measurement Framework — measuring returns on AI investment
- AI Change Management for APAC Enterprises — workforce adoption framework