AIMenta

Multi-Agent System (MAS)

A system of multiple interacting AI agents that collaborate, compete, or coordinate — the architectural pattern behind complex LLM-based workflows.

A multi-agent system (MAS) is a system composed of multiple interacting AI agents — each with its own goals, capabilities, and (often) specialised context — that collaborate, compete, or coordinate to accomplish tasks no single agent would handle well. Classical multi-agent research (1980s–2000s) focused on distributed problem-solving, auction-based coordination, and game-theoretic interaction. The post-2023 wave of LLM-based agents has repurposed the term for orchestrations such as researcher + writer + critic loops, debate-based answer refinement, and specialised-role pipelines for complex analysis tasks.

The contemporary patterns fall into a few shapes. **Orchestrator + workers** (a lead agent delegates to specialised sub-agents — researcher, coder, editor — then synthesises their outputs) is the dominant architecture in frameworks like CrewAI, LangGraph, and AutoGen. **Debate and critique** (two or more agents argue or critique each other's outputs; a judge agent picks the winner) improves quality on reasoning tasks at the cost of latency and token budget. **Role-based simulation** (agents with assigned personas playing out a scenario) is used for strategic planning, scenario analysis, and training-data generation. **Competitive / market-based** (agents bid for resources or negotiate) is the pattern most directly inherited from classical MAS research.
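The orchestrator + workers shape can be sketched in a few lines. This is a minimal illustration, not any framework's API: the worker functions are stubs standing in for LLM calls with role-specific prompts, and all names (`researcher`, `orchestrate`, and so on) are hypothetical.

```python
# Minimal orchestrator + workers sketch. Each worker body is a stub;
# in a real system it would wrap an LLM call with a role-specific prompt.

def researcher(task: str) -> str:
    # Stub: would gather sources and context for the task.
    return f"[research] key facts for: {task}"

def coder(task: str) -> str:
    # Stub: would produce a first draft from the research.
    return f"[draft] working notes for: {task}"

def editor(task: str) -> str:
    # Stub: would polish and fact-check the draft.
    return f"[edited] final copy for: {task}"

WORKERS = {"research": researcher, "draft": coder, "edit": editor}

def orchestrate(task: str, plan: list[str]) -> str:
    """Lead agent: delegate each stage to a specialist, then synthesise."""
    outputs = [WORKERS[stage](task) for stage in plan]
    # Synthesis step: a real orchestrator would prompt an LLM with all
    # worker outputs; here we simply join them.
    return "\n".join(outputs)

report = orchestrate("summarise Q3 churn drivers", ["research", "draft", "edit"])
```

Note the structural point the sketch makes: the orchestrator owns the plan and the synthesis, while each worker sees only its own slice of the task — which is exactly where the latency and duplicated-context costs discussed below come from.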

For APAC mid-market teams, multi-agent systems are the pattern to reach for when a single prompt — even with tool use and long context — cannot reliably handle the task. Typical triggers: outputs that genuinely require distinct reasoning stages (plan, execute, verify); tasks that require large context where each stage only uses a slice of it; workflows where different sub-tasks need different model capabilities or model providers. The caveat: multi-agent systems add latency (agents wait for each other), cost (more tokens, often duplicated context), and debugging difficulty (failures are distributed across agents) that single-agent systems do not have. The default should be single-agent-with-tools; escalate to multi-agent only when a single agent demonstrably cannot reach the quality bar.

The non-obvious operational note: **multi-agent quality is bounded by the critique agent's calibration**. When a critic agent is too lenient, errors pass through; when too strict, progress stalls in endless revisions. Most production multi-agent failures trace to miscalibrated critics rather than weak workers. Budget evaluation and iteration specifically on the critic or judge components, not just on the primary task agents.
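The calibration failure mode above can be made concrete with a worker/critic loop. This is an illustrative sketch under stated assumptions: the critic here is a placeholder length-based scorer (a real critic would be an LLM judge), the threshold is the calibration knob, and a revision cap guards against the too-strict case.

```python
# Worker/critic revision loop. The critic's threshold is the calibration
# knob: too lenient and errors pass on the first round, too strict and
# the loop burns its whole revision budget. All names are illustrative.

def worker(task: str, feedback: str = "") -> str:
    # Stub: a real worker would call an LLM, conditioning on critic feedback.
    return feedback + f"answer to: {task}"

def critic(draft: str, threshold: float) -> tuple[bool, float]:
    # Placeholder quality proxy; a real critic would be an LLM judge
    # returning a calibrated score against a rubric.
    score = min(1.0, len(draft) / 40)
    return score >= threshold, score

def refine(task: str, threshold: float = 0.7, max_rounds: int = 3):
    """Run worker, let critic gate the output, revise up to max_rounds."""
    draft = worker(task)
    for round_no in range(max_rounds):
        ok, _score = critic(draft, threshold)
        if ok:
            return draft, round_no   # lenient critic: exits early
        draft = worker(task, feedback=f"[rev {round_no}] ")
    return draft, max_rounds         # cap prevents endless revision
```

The operational takeaway matches the text: evaluation effort belongs on `critic` (its threshold and rubric), because `refine` behaves very differently at threshold 0.7 versus 1.0 even when `worker` is unchanged.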
