AIMenta

A Kuala Lumpur logistics operator lifts on-time delivery to 93% with driver-trained route optimisation

A Top-5 Malaysian last-mile operator lifted on-time delivery to 93.1% and saved MYR 4.2M with driver-trained route optimisation from AIMenta.

Engagement: US$150K-$210K

Timeline: 6 months

Client size: 2,500-4,000

Outcomes

  • 93.1% — on-time delivery rate (was 84.6%)
  • +8.5pp — improvement in on-time rate, clearing the 92% SLA threshold
  • 90 min — customer-facing ETA window (was 4 hours)
  • 22% — e-commerce customers opting into the 60-minute premium tier
  • +11% — vehicle utilisation across the 1,400-vehicle fleet
  • -19% — driver overtime
  • MYR 4.2M — annualised overtime saving
  • MYR 1.8M — SLA penalties avoided in Q1 post-rollout

Context

The Kuala Lumpur logistics operator runs 1,400 vehicles across last-mile and middle-mile delivery in peninsular Malaysia, serving e-commerce, grocery, and B2B parcel customers. Its on-time delivery rate sat at 84.6% against a contractual SLA of 92% with its largest customer. Drivers ran static routes generated by a 2018 routing tool that did not consider real-time traffic, weather, or driver familiarity with delivery clusters. Customers asked for ETA windows of 2 hours; the operator could only commit to 4 hours. The COO had been told by three vendors that "AI route optimisation will fix this" and had become sceptical of the category.

Challenge

Three constraints. First, the existing routing tool was deeply embedded in the dispatcher workflow: 28 dispatchers built their day around it. Replacing it overnight would have caused a service collapse. Second, driver acceptance of route changes was the single largest variable in actual on-time performance: drivers routinely deviated from suggested routes based on local knowledge. Third, customer ETA promises had to be conservative to avoid SLA breaches, which compressed the operator's commercial offering.

Approach

We ran a 4-phase model: shadow, augment, replace, hand-over. Shadow (4 weeks) instrumented the existing routing tool and 80 driver routes in two depots, capturing every deviation and the reason given by the driver. Drivers were paid for the time spent in shadow interviews. The 4,200 deviations recorded became the training data the existing vendor never had.
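A deviation record from the shadow phase can be sketched as a small structured event that later doubles as a labelled training example. The field names and the labelling rule below are illustrative assumptions, not the engagement's actual schema.

```python
from dataclasses import dataclass, asdict

@dataclass
class RouteDeviation:
    """One driver deviation from the static plan, captured in shadow phase."""
    route_id: str
    driver_id: str
    planned_stop_seq: list[str]   # stop IDs in the order the tool planned them
    actual_stop_seq: list[str]    # order the driver actually drove
    reason: str                   # free-text reason from the shadow interview
    minutes_saved: float          # actual vs planned leg time (negative = slower)

def to_training_row(d: RouteDeviation) -> dict:
    """Flatten a deviation into a labelled example: did local knowledge win?"""
    row = asdict(d)
    row["label_beat_plan"] = d.minutes_saved > 0
    return row

rec = RouteDeviation("R-102", "DRV-88", ["S1", "S2", "S3"], ["S1", "S3", "S2"],
                     "school-zone congestion after 13:00", 12.5)
print(to_training_row(rec)["label_beat_plan"])  # True
```

The free-text reason is what interviews buy: it separates deviations driven by durable local knowledge (a recurring school-zone jam) from one-off noise.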

Augment (8 weeks) layered a re-routing recommender on top of the existing tool. Dispatchers received suggested re-routes when traffic, weather, or driver-familiarity scoring suggested an alternative would beat the static plan. The dispatcher decided whether to push the re-route to the driver. The driver decided whether to accept it. Every decision was logged.
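The recommender's gating logic can be sketched as a margin test: suggest a re-route only when the alternative, adjusted for live conditions and driver familiarity, beats the static plan by enough to be worth the interruption. The familiarity curve, delay inputs, and 5-minute margin below are illustrative assumptions, not the production model.

```python
def familiarity_factor(driver_cluster_runs: int) -> float:
    """Drivers who know a delivery cluster run it faster; cap the benefit.
    Hypothetical curve: up to 10% faster after ~20 runs in the cluster."""
    return 1.0 - min(driver_cluster_runs, 20) * 0.005

def should_suggest_reroute(static_eta_min: float,
                           alt_base_eta_min: float,
                           traffic_delay_min: float,
                           weather_delay_min: float,
                           driver_cluster_runs: int,
                           margin_min: float = 5.0) -> bool:
    """Recommend the alternative only when it beats the static plan by a
    margin, after adjusting for live conditions and driver familiarity."""
    alt_eta = alt_base_eta_min + traffic_delay_min + weather_delay_min
    alt_eta *= familiarity_factor(driver_cluster_runs)
    return static_eta_min - alt_eta >= margin_min

# Familiar driver, live traffic on the alternative, still worth the switch:
print(should_suggest_reroute(90, 70, 8, 0, 20))  # True
```

The margin matters operationally: every suggestion costs dispatcher attention and driver trust, so near-ties stay on the static plan.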

Replace (10 weeks) introduced a new routing engine that consumed the same dispatcher interface but produced dynamic plans. The static tool was kept live for one month as a fallback. ETA prediction was added at the same time, calibrated against actual delivery times rather than planned times. Hand-over (parallel from week 16) trained two of the operator's dispatch-tech engineers and the head of operations on engine retraining and ETA-model recalibration.
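Calibrating ETA windows against actuals can be sketched as an empirical-quantile check on historical residuals: size the window so that a target share of past deliveries would have landed inside it. The data and the 90% coverage target below are illustrative, not the engagement's actual calibration.

```python
def calibrated_window(predicted_min: list[float],
                      actual_min: list[float],
                      coverage: float = 0.9) -> float:
    """Half-width of an ETA window sized so that ~`coverage` of historical
    deliveries would have landed inside it, using absolute residuals
    against actual (not planned) delivery times."""
    residuals = sorted(abs(a - p) for p, a in zip(predicted_min, actual_min))
    k = min(len(residuals) - 1, int(coverage * len(residuals)))
    return residuals[k]

# Ten historical deliveries: model prediction vs what actually happened.
predicted = [60, 75, 50, 90, 120, 45, 80, 65, 70, 100]
actual =    [70, 80, 48, 95, 150, 50, 78, 90, 72, 110]
print(calibrated_window(predicted, actual))  # 30
```

Calibrating against planned times instead would hide systematic lateness inside the plan itself, which is why windows built that way break their promises.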

Results

On-time delivery rate rose from 84.6% to 93.1%, exceeding the 92% SLA across the three months following full rollout. Customer-facing ETA windows narrowed from 4 hours to 90 minutes, and the operator added a premium 60-minute-ETA tier that 22% of e-commerce customers opted into within the first quarter. Vehicle utilisation rose 11% across the fleet, equivalent to running 1,400 vehicles with the output of 1,560. Driver overtime fell 19%, saving an estimated MYR 4.2M (US$880K) per year. Two SLA-breach contract penalties (totalling MYR 1.8M / US$380K) were avoided in the first quarter post-rollout.

Customer satisfaction (the largest e-commerce customer scorecard) moved the operator from "watch list" to "preferred carrier" status, securing a three-year volume contract.

Lessons

Paying drivers for shadow time produced training data the prior vendors never had access to, and was the cheapest input that made the largest model-quality difference. Preserving the dispatcher interface bought time for trust to build before the engine was replaced. Calibrating ETAs against actual rather than planned delivery times was the change that made the customer-facing promise honest.


My drivers built the model. The engineers wrote the code. That order matters.

— COO (anonymised)

This case study is a synthetic composite drawn from multiple AIMenta engagements. Metrics, timelines, and outcomes reflect aggregated reality across similar client profiles. No single client is depicted.


A similar engagement for your team?

Tell us your situation. We'll map it against the closest precedent and tell you what's realistic in 90 days.