Context
The operator, a Jakarta-based agribusiness, sources, processes, and exports tropical commodities (cocoa, coffee, palm derivatives) from 14,000 smallholder farmers across Sumatra, Sulawesi, and Java. Quality grading at collection points was manual: 380 grader-buyers relied on visual inspection and moisture meters. Inter-grader agreement was 71%, meaning two graders assessing the same batch reached the same grade only 71% of the time. That variance translated into farmer disputes, downstream blending errors, and an estimated IDR 84B (US$5.4M) per year of margin leakage. Two large European customers were tightening traceability requirements under the EU Deforestation Regulation (EUDR), putting US$48M of contracted revenue at risk.
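The agreement figure quoted here is simple pairwise agreement: of all pairs of graders who assessed the same batch, the fraction that assigned the same grade. A minimal sketch (illustrative data only; the grade labels and batch pairs are invented, not drawn from the engagement):

```python
from itertools import combinations

def inter_grader_agreement(grades_by_batch):
    """Fraction of grader pairs, across all batches, that assigned
    the same grade to the same batch (simple pairwise agreement)."""
    agree = total = 0
    for grades in grades_by_batch:
        for a, b in combinations(grades, 2):
            total += 1
            agree += (a == b)
    return agree / total if total else 0.0

# Illustrative only: two graders per batch, three grade levels
batches = [("A", "A"), ("A", "B"), ("B", "B"), ("C", "B")]
print(inter_grader_agreement(batches))  # 0.5
```

More rigorous measures (e.g. Cohen's kappa, which corrects for chance agreement) exist, but pairwise agreement is the metric the 71% and 92% figures describe.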
Challenge
Three constraints. First, collection-point conditions were physically harsh: dust, humidity, intermittent power, and patchy mobile coverage. Cloud-only inference was not viable. Second, smallholder farmers had to trust the grading system, or the operator would lose supply to competitors. Third, EUDR required plot-level traceability data that the operator had collected on paper for the prior decade.
Approach
We ran a five-phase model: digitise, vision, edge-deploy, traceability, hand-over. Digitise (6 weeks) reconstructed the farmer-plot registry from a decade of paper records, geo-tagging 14,000 plots and reconciling them with a satellite-derived deforestation baseline. Vision (8 weeks) trained a grading model on 24,000 labelled batch images gathered at collection points across all three regions, using grader-buyer labels paired with reference-lab grades.
Edge-deploy (10 weeks) shipped a ruggedised tablet with on-device inference to 42 collection points. The grader-buyer photographed the batch, the model proposed a grade with confidence and reasoning (visible defects highlighted), and the grader-buyer confirmed or overrode. Every override fed back into the next training cycle. The farmer saw the same screen the grader saw, removing the perceived asymmetry.
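The confirm-or-override step doubles as the labelling pipeline: every override is a fresh labelled example. A minimal sketch of that feedback loop, assuming hypothetical field names and IDs (not the production schema):

```python
from dataclasses import dataclass

@dataclass
class GradingEvent:
    batch_id: str
    model_grade: str      # grade proposed by the on-device model
    confidence: float     # model confidence for the proposed grade
    final_grade: str      # grade the grader-buyer confirmed or entered

    @property
    def overridden(self) -> bool:
        # An override is any final grade that differs from the proposal
        return self.final_grade != self.model_grade

def retraining_queue(events):
    """Overridden events become labelled examples for the next training cycle."""
    return [e for e in events if e.overridden]

events = [
    GradingEvent("B-001", "Grade 2", 0.93, "Grade 2"),  # confirmed
    GradingEvent("B-002", "Grade 1", 0.61, "Grade 2"),  # overridden
]
print([e.batch_id for e in retraining_queue(events)])  # ['B-002']
```

The design choice worth noting: the override is captured as part of the normal grading flow, so label collection costs the grader-buyer nothing extra.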
Traceability (8 weeks) connected the plot registry to the batch-grading workflow, producing a per-batch EUDR audit trail with plot polygon, deforestation-risk score, harvest date, and grading record. Hand-over (parallel from week 18) trained two of the operator's agronomy data engineers and the head of supply on edge-model retraining and traceability reporting.
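The per-batch audit trail can be pictured as one immutable record joining the registry to the grading workflow. A sketch of the shape of that record (field names, IDs, and values are hypothetical, chosen to mirror the fields listed above):

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)  # frozen: audit records should not be mutated after creation
class BatchAuditRecord:
    batch_id: str
    plot_id: str
    plot_polygon: list          # (lat, lon) vertices from the geo-tagged registry
    deforestation_risk: float   # 0.0 (clear) to 1.0 (high), satellite-derived
    harvest_date: date
    grade: str
    grading_event_id: str       # links back to the grading workflow record

record = BatchAuditRecord(
    batch_id="B-002",
    plot_id="SUL-07-1142",
    plot_polygon=[(-3.51, 119.42), (-3.51, 119.43), (-3.52, 119.43)],
    deforestation_risk=0.04,
    harvest_date=date(2024, 6, 11),
    grade="Grade 2",
    grading_event_id="GE-88231",
)
```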
Results
Inter-grader agreement rose from 71% to 92%. Farmer-disputed grades fell 64% (from 1,840 disputes per quarter to 660). Downstream blending errors fell 47%, recovering an estimated IDR 56B (US$3.6M) per year in margin. EUDR-compliant traceability covered 100% of EU-bound batches by month nine, securing the US$48M of contracted revenue and opening two new European contracts worth a combined US$22M annualised. Time per grading event fell from 9 minutes to 3 minutes 20 seconds, allowing collection points to process 2.7x the daily volume during peak harvest without additional grader headcount.
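The throughput multiple follows directly from the per-event timings:

```python
before = 9 * 60        # seconds per grading event, manual process
after = 3 * 60 + 20    # seconds per grading event, model-assisted
print(round(before / after, 1))  # 2.7
```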
Smallholder satisfaction (post-grading SMS survey, n=8,400) rose from 5.4 to 7.8 out of 10, driven by the shared-screen transparency and the dispute reduction.
Lessons
The shared-screen design (farmer sees what the grader sees) addressed a trust problem that no amount of algorithmic accuracy could have solved alone. Edge inference was non-negotiable in this physical environment, and it forced cleaner model engineering than a cloud build would have. Reconstructing the plot registry from paper was unglamorous foundation work that turned an EUDR risk into an EUDR moat.
When the farmer reads the same screen as the grader, the dispute disappears before it is filed.
This case study is a synthetic composite drawn from multiple AIMenta engagements. Metrics, timelines, and outcomes reflect aggregated reality across similar client profiles. No single client is depicted.