MCP is becoming the lingua franca for agent tool integration. Treating MCP server compatibility as a procurement criterion now is reasonable.
Anthropic published the Model Context Protocol (MCP) 1.1 specification, adding multi-server orchestration, persistent session management, and a standardised resource and tool registration mechanism. The update moves MCP from a single-agent tool-calling interface toward a genuine inter-agent communication layer — allowing AI agents to discover, delegate to, and receive results from other specialised agents at runtime without hardcoded routing logic.
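The registration mechanism can be sketched as a JSON-RPC-style discovery exchange. The field names below (`name`, `description`, `inputSchema`) follow the general shape MCP tool listings use, but this is an illustrative payload, not the 1.1 registry schema — check the published specification for the authoritative format.

```python
# Hypothetical sketch of an MCP-style tool registration payload.
# The tool name and schema are invented for illustration.
tool_descriptor = {
    "name": "crm_update_contact",
    "description": "Update a contact record in the CRM.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "contact_id": {"type": "string"},
            "fields": {"type": "object"},
        },
        "required": ["contact_id", "fields"],
    },
}

# A server advertises its tools in response to a discovery request,
# so clients can route calls without hardcoded knowledge of the server.
registration_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"tools": [tool_descriptor]},
}
```

Because discovery happens at runtime, a client can connect to a new server and learn its capabilities without a code change — which is what makes the "no hardcoded routing" claim plausible.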
**Why this matters for enterprise AI architecture.** The dominant pattern for enterprise AI in production today is single-agent with tool calls: one LLM, a list of approved tools, and a human-in-the-loop checkpoint before any write action. MCP 1.1's multi-server orchestration enables a second pattern: agent networks where a planning agent decomposes a task, delegates subtasks to domain-specific agents (a document retrieval agent, a CRM write agent, a compliance checking agent), and aggregates results. This is the architecture required for complex workflows — contract review + CRM update + calendar booking from a single user request — that single-agent systems cannot reliably execute.
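The planner-delegates-aggregates pattern described above can be sketched in a few lines. The agent names and the fixed dispatch table are assumptions for illustration — a production planner would decompose the request with an LLM and route over MCP connections rather than local functions.

```python
from typing import Callable

# Stand-in domain agents; in an MCP deployment each would be a
# separate server reached over the protocol.
def retrieval_agent(task: str) -> str:
    return f"retrieved docs for: {task}"

def crm_agent(task: str) -> str:
    return f"CRM updated: {task}"

def compliance_agent(task: str) -> str:
    return f"compliance check passed: {task}"

AGENTS: dict[str, Callable[[str], str]] = {
    "retrieval": retrieval_agent,
    "compliance": compliance_agent,
    "crm": crm_agent,
}

def plan_and_delegate(request: str) -> list[str]:
    # A real planner would produce this plan dynamically; it is fixed
    # here to show only the routing-and-aggregation shape.
    plan = [("retrieval", request), ("compliance", request), ("crm", request)]
    return [AGENTS[agent](subtask) for agent, subtask in plan]

results = plan_and_delegate("review contract and update the account")
```

The point of the sketch is the separation: the planner owns decomposition and ordering, while each domain agent owns one narrow capability — the property that single-agent tool-calling architectures lack.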
**Practical implications for APAC mid-market.** For enterprises currently deploying or evaluating AI, the 1.1 update signals that the agent-to-agent routing problem is being standardised at the protocol level, not solved ad hoc by each vendor. Teams building on top of Claude, or on tools that have adopted MCP (including Cursor, Windsurf, and several enterprise workflow platforms), should expect the specification to stabilise rather than drift. That stability makes multi-agent architecture planning less risky than it was 12 months ago.
**Governance and control surface.** Multi-agent orchestration significantly increases the audit trail requirement. Each agent-to-agent delegation creates a new action scope. Enterprises implementing MCP-based architectures should build approval logging from the outset — not retrofit it after a runaway delegation chain causes a data access incident.
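One way to make that concrete is to log every delegation before it executes. The log shape below (caller, target, action scope, timestamp) is an assumption, not an MCP 1.1 requirement — it is a minimal sketch of the "approval logging from the outset" principle.

```python
import datetime

AUDIT_LOG: list[dict] = []

def delegate(caller: str, target: str, scope: str, fn, *args):
    """Record an agent-to-agent handoff, then run it."""
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "caller": caller,
        "target": target,
        "scope": scope,
    }
    # Append BEFORE executing, so a failed or runaway call still
    # leaves a trace of who delegated what to whom.
    AUDIT_LOG.append(entry)
    return fn(*args)

# Illustrative downstream action with a hypothetical scope label.
def crm_write(contact: str) -> str:
    return f"wrote {contact}"

result = delegate("planner", "crm_agent", "crm:write", crm_write, "acct-42")
```

Logging pre-execution rather than post-execution is the design choice that matters here: after a runaway delegation chain, the entries written before the failure are exactly the audit trail you need.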
**AIMenta's editorial read.** MCP 1.1 is a credible foundation for production multi-agent systems. The protocol is maturing faster than most enterprise adoption curves, which creates a planning opportunity: organisations that build MCP-compatible agent architecture now will have significantly less integration work as the ecosystem consolidates.
**Related stories**

- **Open source** · Alibaba Qwen3 Matches GPT-4o on APAC Language Benchmarks — Open-Source Frontier Moment for the Region. Alibaba's Qwen team has released Qwen3, its third-generation open-source large language model family, with benchmark results showing state-of-the-art performance on Chinese, Japanese, and Korean language understanding and reasoning tasks — matching or exceeding GPT-4o on several APAC-language benchmarks. The Qwen3 family spans model sizes from 0.6B to 235B parameters, with the flagship Qwen3-235B-A22B achieving performance competitive with Claude 3.7 Sonnet and GPT-4o on multilingual coding, mathematical reasoning, and instruction following benchmarks.
- **Model release** · Anthropic releases Claude with extended reasoning + agent SDK improvements. Anthropic shipped extended-thinking improvements to its Claude model family alongside an updated Claude Agent SDK and new tool-use primitives for production agent deployments.
- **Open source** · Mistral AI Releases Mistral Large 2 as Open Weights — 123B Parameter Frontier Model Available for On-Premises Deployment. Mistral AI has released Mistral Large 2 as an open-weights model under the Mistral Research License, making a 123 billion parameter language model with a 128K context window available for download, fine-tuning, and on-premises deployment. Mistral Large 2 achieves benchmark scores competitive with Claude 3.5 Sonnet and GPT-4o on standard evaluations (MMLU, HumanEval, MATH, and reasoning tasks) — making it the first open-weights model in the frontier performance tier. For APAC enterprises with data sovereignty requirements, on-premises deployment mandates, or API cost constraints at scale, Mistral Large 2 represents a significant opening — frontier AI capability deployable within enterprise-controlled infrastructure without ongoing API charges.
- **Model release** · Meta releases Llama 4 family with native multimodal support. Meta's Llama 4 family adds native vision and audio understanding alongside reasoning improvements, all under the existing community license.