intermediate · Machine Learning

Few-Shot Learning

Learning a new task from only a handful of examples — sometimes just two or three.

Few-shot learning means adapting to a new task from a small number of examples — typically 2 to 20 demonstrations. In the classical ML literature, the term referred to meta-learning approaches (MAML, prototypical networks) that explicitly trained models to adapt from few examples. In the foundation-model era, it overwhelmingly refers to **in-context learning** — placing a handful of input-output examples into the LLM prompt and letting the model pattern-match against them without any weight updates.
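A minimal sketch of what "placing examples into the prompt" looks like in practice. The `Input:`/`Output:` template, the sentiment labels, and the example texts are illustrative assumptions, not a prescribed format:

```python
def build_few_shot_prompt(instruction, shots, query):
    """Assemble an in-context few-shot prompt: a task instruction,
    then input/output demonstrations, then the new input left open
    for the model to complete. No weights are touched."""
    lines = [instruction, ""]
    for inp, out in shots:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

# Hypothetical sentiment-routing task with three demonstrations.
shots = [
    ("Delivery arrived two days late.", "negative"),
    ("Exactly what I ordered, thanks!", "positive"),
    ("Package was fine, courier was rude.", "mixed"),
]
prompt = build_few_shot_prompt(
    "Classify the customer message as positive, negative, or mixed.",
    shots,
    "App works but crashes on login sometimes.",
)
print(prompt)
```

The trailing open `Output:` is the whole trick: the demonstrations establish the pattern, and the model's continuation of the final line is the answer.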

In-context few-shot works because foundation models have seen enough of every common task during pretraining that demonstrations act as a task identifier: the examples tell the model which of the many behaviours it could exhibit is the one being asked for. Two or three well-chosen examples often close the gap between brittle zero-shot behaviour and reliable, format-consistent output. The right number is empirical — past five or six demonstrations, returns usually diminish and context cost starts to matter.

For APAC mid-market teams, few-shot is the second step in every LLM project: establish a zero-shot baseline first, add two to five examples if quality falls short, and only then consider fine-tuning or more elaborate techniques. The examples you choose matter more than most teams appreciate — representative examples outperform randomly drawn ones, and **balanced examples** across the output space outperform skewed samples. Invest an afternoon in curating the shot examples before spending a week on prompt-wording alternatives.
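Balanced selection is mechanical enough to automate. A minimal sketch, assuming a small labelled pool of candidate demonstrations (the pool contents and labels below are invented for illustration):

```python
import random
from collections import defaultdict

def pick_balanced_shots(labelled_pool, per_label=1, seed=7):
    """Draw an equal number of demonstrations per output label so the
    shot set covers the output space instead of mirroring pool skew."""
    rng = random.Random(seed)  # fixed seed: shot sets should be reproducible
    by_label = defaultdict(list)
    for text, label in labelled_pool:
        by_label[label].append((text, label))
    shots = []
    for label in sorted(by_label):  # deterministic label order
        shots.extend(rng.sample(by_label[label], per_label))
    return shots

# Hypothetical candidate pool, skewed toward "positive".
pool = [
    ("Love it", "positive"), ("Works well", "positive"),
    ("Broken on arrival", "negative"),
    ("Good phone, bad battery", "mixed"), ("Fast but noisy", "mixed"),
]
shots = pick_balanced_shots(pool, per_label=1)
print([label for _, label in shots])
```

Uniform-per-label is the simplest policy; richer ones (similarity to the query, diversity across phrasings) can slot into the same function without changing the interface.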

The operational warning: few-shot prompts are fragile across model upgrades. A set of examples tuned for one model version may produce differently-formatted output on a new version — especially when the vendor shifts instruction-following defaults. Keep few-shot prompts under version control, and always re-baseline when upgrading the backing model. Building a golden eval set of 30-50 labelled cases per task pays for itself within the first vendor upgrade.
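The re-baselining step above amounts to running the golden set through the candidate model version and diffing against expected outputs. A minimal sketch; `call_model` stands in for whatever endpoint is being upgraded, and the golden cases and stub model are invented for illustration:

```python
def rebaseline(golden_set, call_model):
    """Run a golden eval set through a model callable and report
    exact-match accuracy plus the failing cases, so two model
    versions can be compared before an upgrade is rolled out."""
    failures = []
    for case in golden_set:
        got = call_model(case["input"]).strip()
        if got != case["expected"]:
            failures.append({**case, "got": got})
    accuracy = 1 - len(failures) / len(golden_set)
    return accuracy, failures

# Tiny stand-ins; a real golden set would hold 30-50 labelled cases.
golden = [
    {"input": "Refund not processed", "expected": "negative"},
    {"input": "Great service", "expected": "positive"},
]
stub_model = lambda text: "negative" if "not" in text.lower() else "positive"
accuracy, failures = rebaseline(golden, stub_model)
print(accuracy, len(failures))
```

Exact match is deliberately strict: format drift across vendor upgrades (extra punctuation, changed casing, added preamble) is precisely what it is meant to catch.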

Where AIMenta applies this

Service lines where this concept becomes a deliverable for clients.

Beyond this term

Where this concept ships in practice.

Encyclopedia entries name the moving parts. The links below show where AIMenta turns these concepts into engagements — across service pillars, industry verticals, and Asian markets.
