AIMenta
Composite AI for Financial Services in Asia

A Singapore wealth manager rebuilds 320 RM workflows with a RAG copilot, recovering 68% of review time

A Tier-1 Singapore private bank cut RM portfolio-review time 68% and added S$420M of net new assets with a RAG copilot. AIMenta led the 7-month build.

Engagement

US$180K-$240K

Timeline

7 months (discovery to hand-over)

Client size

4,000-7,000

Outcomes

68%

Reduction in measured time per portfolio review

11.2

Reviews per RM per week (was 7.4)

51%

Increase in client touchpoints across the desk

S$420M

Attributed net new assets in 3 quarters

+14

NPS point gain on the HNW segment

8.1/10

RM satisfaction (was 5.9)

12

Internal teams now reusing the retrieval index

Context

Your private-banking arm in Singapore covers 480 relationship managers and 14,000 high-net-worth clients. The average client portfolio review took an RM 4 hours: pulling holdings from the core system, screening news, drafting commentary, and producing a client-ready PDF. Each RM ran 6 to 9 reviews a week, roughly 30 hours of admin per RM. Senior management put that capacity bleed at US$36M a year across the desk. The Head of Private Banking wanted RMs back in front of clients, not in front of slide decks.

Challenge

Three constraints. First, MAS guidelines on AI in advisory require explainability of every model-generated recommendation and clear separation between informational and advisory output. Second, holdings data sits in three systems (the core platform, a 2019 portfolio-management overlay, and an Excel-based alternatives tracker for private-credit exposure). Third, RMs in this segment will not adopt a tool that adds clicks. Anything that costs them time at the desk dies in pilot.

Approach

We delivered a RAG-powered RM copilot through a four-phase engagement: discovery, design partnership, scale, and hand-over. Discovery (4 weeks) covered shadowing 22 RMs across three desks, mapping the actual review workflow, and indexing 11 internal data sources behind a retrieval layer. We co-designed the copilot UI with six RMs serving as design partners; they had veto power on every UX decision.
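The retrieval layer's core contract is that every indexed snippet keeps its provenance, so downstream output can cite it. A minimal sketch of that contract, using a toy keyword index and hypothetical source-system names (the real engagement indexed 11 internal sources with embeddings and access controls):

```python
from dataclasses import dataclass

@dataclass
class Doc:
    source_system: str  # e.g. "core_platform", "pm_overlay", "alts_tracker" (illustrative names)
    doc_id: str
    text: str

class RetrievalIndex:
    """Toy keyword index. The point is not the search method but that
    every hit carries provenance (source system + document id) for citation."""

    def __init__(self) -> None:
        self.docs: list[Doc] = []

    def add(self, doc: Doc) -> None:
        self.docs.append(doc)

    def search(self, query: str) -> list[Doc]:
        # Return any document containing at least one query term.
        terms = query.lower().split()
        return [d for d in self.docs if any(t in d.text.lower() for t in terms)]
```

A production build would swap the keyword match for vector retrieval, but the provenance fields would stay the same.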

The design-partner phase ran 10 weeks. The copilot pulled holdings, ran a curated news scan against a vetted source list (47 publications, no social media), drafted commentary in the RM's preferred tone, and produced the PDF. Every output cited the source paragraph. The RM reviewed and edited before sending. We instrumented every keystroke to measure actual time saved, not self-reported time.
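The cite-every-output rule described above can be sketched as a drafting step that drops anything outside the vetted publication list and refuses to emit an uncited line. Domain names and the shortened vetted list below are illustrative stand-ins, not the actual 47-publication roster:

```python
from dataclasses import dataclass

# Illustrative stand-ins for the vetted 47-publication list (no social media).
VETTED = {"straitstimes.com", "ft.com", "bloomberg.com"}

@dataclass
class Snippet:
    text: str
    source: str        # publication domain
    paragraph_id: str  # locator of the cited source paragraph

def draft_commentary(snippets: list[Snippet]) -> list[str]:
    """Draft commentary lines, appending the source-paragraph citation to each.
    Snippets from non-vetted sources are dropped before drafting."""
    vetted = [s for s in snippets if s.source in VETTED]
    if not vetted:
        # Nothing citable: route the review to the RM for manual drafting.
        raise ValueError("no vetted sources retrieved; manual drafting required")
    return [f"{s.text} [{s.source}#{s.paragraph_id}]" for s in vetted]
```

The RM still reviews and edits every line before sending; this step only guarantees that whatever the model drafts arrives with a checkable citation.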

The scale phase rolled the copilot out to 320 RMs over three months. The hand-over phase ran in parallel: two of your data-platform engineers and one front-office quant took ownership of the retrieval index, the source-curation rota, and the weekly model-eval review. By month seven, AIMenta operated in monthly advisory mode only.

Results

Measured (not self-reported) time per portfolio review fell from 4 hours 5 minutes to 1 hour 18 minutes, a 68% reduction. Reviews per RM per week rose from 7.4 to 11.2. Total client touchpoints across the desk increased 51%. Net new assets attributed to the additional touchpoints, as assessed by the head of business intelligence, came to S$420M (US$310M) over the first three quarters. RM satisfaction rose from 5.9 to 8.1 out of 10. Client Net Promoter Score on the HNW segment rose 14 points.

The MAS audit reviewed 200 sampled outputs. Each contained the required source citation, the informational/advisory boundary marker, and the RM sign-off log. The retrieval index now serves 12 internal teams beyond the RM desk.
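The three checks the audit applied to each sampled output (citation present, informational/advisory boundary marked, RM sign-off logged) can be expressed as a simple record validator. Field names below are illustrative, not the bank's actual schema:

```python
def audit_output(record: dict) -> list[str]:
    """Return the compliance markers missing from one sampled copilot output.
    An empty list means the output passes all three checks."""
    missing = []
    if not record.get("citations"):  # at least one source citation required
        missing.append("source citation")
    if record.get("boundary") not in {"informational", "advisory"}:
        missing.append("informational/advisory boundary marker")
    if not record.get("rm_signoff_log"):  # RM reviewed and signed off
        missing.append("RM sign-off log")
    return missing
```

Running this over the sample gives an auditable pass/fail trail per output rather than a single aggregate claim.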

Lessons

The design-partner model with veto rights bought trust that no top-down rollout could have. Measuring keystrokes, not self-report, exposed where the copilot was actually saving time and where it was theatre. Source curation is the moat: garbage in still produces garbage out, regardless of model quality.

What we learned

  • RM design partners with veto rights buy adoption that no top-down rollout can replicate at this client tier.
  • Measure keystrokes, not self-reported time, or the copilot will appear to save time it never actually saved.
  • Source curation is the moat: a vetted 47-publication index outperforms an open-web scrape on every review-quality metric.

My RMs were drafting decks at 9pm. Now they are seeing clients at 2pm. That is the only metric the board asked about.

— Head of Private Banking (anonymized)

This case study is a synthetic composite drawn from multiple AIMenta engagements. Metrics, timelines, and outcomes reflect aggregated reality across similar client profiles. No single client is depicted.


A similar engagement for your team?

Tell us your situation. We'll map it against the closest precedent and tell you what's realistic in 90 days.