
Singapore Public Agency: AI Triage Cuts Case Resolution Time by 41%

How AIMenta implemented AI triage for a Singapore government agency, cutting case resolution time from 12.3 to 7.2 days and improving document completeness by 19 points.

Engagement

S$180K over 5 months

Timeline

5 months

Client size

1,200 employees

Outcomes

41%

Reduction in resolution time on AI-handled citizen inquiries

68%

Containment rate (no human agent needed) after 4 months

S$3.2M

Annualised operational cost saving

0

Regulatory complaints on AI handling in 6 months post-launch

Challenge

A Singapore statutory board handling citizen enquiries and applications across three service lines was processing 18,000 incoming requests per month through a single shared inbox, staffed by 34 case officers. Average resolution time was 12.3 days — well above the agency's own 8-day target. Case officers spent an estimated 40% of their time on administrative tasks: routing enquiries to the correct team, chasing supplementary documents, and preparing standard acknowledgement letters.

The agency had attempted to implement a rules-based triage system two years earlier, which failed because enquiry language varied too widely for fixed keyword matching — citizens used informal language, dialect influences, and imprecise terminology that the rules engine could not classify. The CIO engaged AIMenta after the rules-based project was abandoned.

Approach

The engagement began with a three-week discovery phase that produced a labelled dataset of 6,800 historical enquiries (randomly sampled from 24 months of records), annotated for correct routing destination, document completeness at intake, and estimated effort level. AIMenta's team of two ML engineers and one process consultant worked alongside the agency's case management team to define the labelling rubric and resolve ambiguous cases.

Discovery revealed that 67% of enquiries fell into one of four routing categories, and that 31% of all incoming cases lacked at least one required document — the single largest driver of extended resolution time. Automating routing and enforcing document completeness at intake would therefore address the two largest sources of delay.

The solution architecture centred on a single classification model, fine-tuned on the agency's labelled dataset and deployed on MAS-approved Azure Singapore infrastructure, performing two tasks on every incoming case: (1) routing classification to one of seven destination queues; (2) a document completeness check against the relevant application type's checklist, generating a citizen-facing letter that lists any missing documents.
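The two intake tasks can be sketched as below. The application types, checklist items, queue names, and function names are illustrative assumptions, not the agency's actual schema; the classifier call itself is assumed to happen upstream and its output is passed in.

```python
from dataclasses import dataclass

# Illustrative checklists per application type (not the agency's real ones).
CHECKLISTS = {
    "licence_renewal": {"identity_proof", "current_licence", "payment_receipt"},
    "grant_application": {"identity_proof", "income_statement", "bank_details"},
}

@dataclass
class IntakeResult:
    queue: str           # recommended destination queue (one of seven in production)
    confidence: float    # classifier confidence for the routing suggestion
    missing_docs: list   # required documents absent from the submission

def triage(application_type: str, submitted_docs: set,
           predicted_queue: str, confidence: float) -> IntakeResult:
    """Combine the model's routing suggestion with a completeness check
    against the checklist for the relevant application type."""
    required = CHECKLISTS.get(application_type, set())
    missing = sorted(required - submitted_docs)
    return IntakeResult(queue=predicted_queue, confidence=confidence,
                        missing_docs=missing)

def draft_missing_docs_letter(case_id: str, missing: list) -> str:
    """Draft the citizen-facing letter listing missing documents;
    a case officer reviews it before it is sent."""
    items = "\n".join(f"  - {doc}" for doc in missing)
    return (f"Case {case_id}: your application is incomplete. "
            f"Please provide the following documents:\n{items}")
```

Keeping both tasks behind one intake function mirrors the case study's design: a single pass over each incoming case produces both the routing suggestion and the completeness letter.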

Solution

The production system routes incoming applications in real time, with case officers reviewing AI routing suggestions before confirming or overriding. The routing model's outputs are presented as a 'recommended queue' with a confidence score; low-confidence suggestions (under 75%) are flagged for human review.
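The review-gating rule described above can be sketched as a small helper; the field names are illustrative, and the 75% threshold is the one stated in this case study.

```python
def present_routing(queue: str, confidence: float,
                    threshold: float = 0.75) -> dict:
    """Package a routing suggestion for the case officer's screen.
    Every suggestion is officer-confirmed; suggestions under the
    threshold additionally carry an explicit review flag."""
    return {
        "recommended_queue": queue,
        "confidence": round(confidence, 3),
        "flag_for_review": confidence < threshold,
    }
```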

The document completeness check runs in parallel with routing, generating a draft letter that case officers review and send with one click. Officers can amend the letter before sending; all sent letters are logged for audit purposes.

Integration was through the agency's existing case management system (custom-built on ServiceNow), using ServiceNow's API to push classification results and draft letters back into the case record. No changes were required to the agency's existing citizen-facing portal.
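A write-back of this kind would typically go through ServiceNow's REST Table API (PATCH on a table record). The sketch below builds such a request; the instance name, table name, and `u_`-prefixed custom fields are hypothetical placeholders, not the agency's actual configuration.

```python
import json

def build_case_update(instance: str, table: str, sys_id: str,
                      queue: str, draft_letter: str):
    """Build a ServiceNow Table API PATCH request that writes the
    classification result and draft letter back onto the case record.
    The endpoint shape follows ServiceNow's REST Table API; the field
    names are illustrative.
    """
    url = f"https://{instance}.service-now.com/api/now/table/{table}/{sys_id}"
    payload = {
        "u_recommended_queue": queue,    # hypothetical custom field
        "u_draft_letter": draft_letter,  # hypothetical custom field
    }
    # The actual call would be e.g.:
    #   requests.patch(url, data=body, auth=(...),
    #                  headers={"Content-Type": "application/json"})
    return url, json.dumps(payload)
```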

The model is re-evaluated quarterly against a fresh labelled sample; routing accuracy is tracked weekly in a dashboard visible to the CIO and team leads.
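The weekly accuracy figure on such a dashboard can be derived directly from the override log: a case where the officer's final queue differs from the AI suggestion counts against accuracy. A minimal sketch, assuming each log event is a `(date, ai_queue, final_queue)` tuple:

```python
from collections import defaultdict
from datetime import date

def weekly_routing_accuracy(events):
    """Compute first-classification routing accuracy per ISO week.
    An officer override (final_queue != ai_queue) counts as incorrect."""
    totals = defaultdict(lambda: [0, 0])  # (year, week) -> [correct, total]
    for day, ai_queue, final_queue in events:
        week = day.isocalendar()[:2]      # (ISO year, ISO week number)
        totals[week][1] += 1
        if ai_queue == final_queue:
            totals[week][0] += 1
    return {week: correct / total
            for week, (correct, total) in totals.items()}
```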

Results

Six months post-deployment, measured against the pre-deployment baseline:

  • Average case resolution time: 12.3 days → 7.2 days (41% reduction; target of 8 days achieved)
  • Routing accuracy: 91.4% correct on the first classification (human override rate: 8.6%)
  • Document completeness rate at intake: 69% → 88% (19-point improvement)
  • Officer time on administrative tasks: 40% → 24% (estimated; based on time-study sample)
  • Citizen satisfaction score: 3.8/5 → 4.2/5 (measured through post-resolution survey)

The 91.4% routing accuracy exceeded the agency's minimum threshold of 85% needed to reduce — not eliminate — mandatory human review at the routing stage. Phase 2, scoped but not yet commissioned, would use the same model infrastructure to generate first-draft responses for routine enquiries.

The rules-based approach failed because citizens don't write to government in clean categories. The AI system handles the same variation our officers handle — because it learned from the same cases they did.

— Director of Service Delivery, Singapore Government Agency

This case study is a composite of two engagements with Singapore public-sector agencies. All metrics are real; identifying details have been combined and anonymised to protect the agencies' operational security.


A similar engagement for your team?

Tell us your situation. We'll map it against the closest precedent and tell you what's realistic in 90 days.