AIMenta
Acronym foundational · AI Agents & Autonomy

Function Calling

OpenAI's term for tool use — when a model emits structured JSON specifying which function to call and with what arguments.

Function calling is the mechanism that turned LLMs from chatbots into programmable components. The pattern: the application presents the model with a schema (name, description, parameter JSON schema) for each available function; the model, given a user request, returns structured JSON indicating which function to call and with what arguments; the application runs the function and feeds the result back into the model for a final response. Anthropic calls it **tool use**, Google calls it **tool calling**, OpenAI originated the term **function calling** — the wire format differs but the pattern is the same.
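The loop above can be sketched in a few lines. This is a minimal illustration, not any provider's SDK: the `get_weather` function, the `TOOLS` registry, and the model output string are all hypothetical stand-ins for the schema-present / model-emits-JSON / application-dispatches cycle described above.

```python
import json

def get_weather(city: str) -> dict:
    # Stand-in for a real service call.
    return {"city": city, "temp_c": 31}

# Hypothetical tool registry: each entry pairs a JSON-schema-style
# parameter spec (what the model sees) with the function that implements it.
TOOLS = {
    "get_weather": {
        "description": "Current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
        "fn": get_weather,
    }
}

def handle_tool_call(model_output: str) -> dict:
    """Parse the model's structured JSON, dispatch to the named function,
    and return the result that would be fed back for a final response."""
    call = json.loads(model_output)
    tool = TOOLS[call["name"]]
    return tool["fn"](**call["arguments"])

# Given the schema above, the model emits something like:
model_output = '{"name": "get_weather", "arguments": {"city": "Hanoi"}}'
print(handle_tool_call(model_output))  # {'city': 'Hanoi', 'temp_c': 31}
```

The wire format of `model_output` varies by provider, but every variant reduces to this name-plus-arguments shape.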

The capability unlocked three architectures that now underpin most production LLM work: **retrieval-augmented generation** (the model calls a `search_docs` tool), **agentic workflows** (the model calls multiple tools in sequence to complete a goal), and **API-fronted LLMs** (natural-language queries translated into structured API calls against internal services). Before function calling, all three patterns required fragile regex parsing of free-text model output; now their reliability is bounded only by the quality of the schemas and the validation around them.
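As a concrete example of the first pattern, a `search_docs` tool might be described to the model with a schema like the following. The exact field names and descriptions are illustrative; each provider accepts a slight variant of this JSON-schema shape.

```python
# Hypothetical schema for the `search_docs` tool mentioned above.
# The model never sees the implementation, only this description.
search_docs_schema = {
    "name": "search_docs",
    "description": (
        "Full-text search over internal documentation. "
        "Returns the top_k best-matching passages."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Search terms"},
            "top_k": {"type": "integer", "description": "Passages to return"},
        },
        "required": ["query"],
    },
}
```

The `description` fields do real work here: they are the only signal the model has for deciding when and how to call the tool, so vague descriptions produce vague calls.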

The failure modes that bite in production: **schema drift** (the model's understanding of what a field does diverges from the actual service), **hallucinated arguments** (the model invents plausible-looking IDs that don't exist), **silent success** (the model returns a successful-looking response without having called the tool), and **over-calling** (the model calls a tool for questions it could answer from context alone). Good production systems validate every tool call against a schema, log the call and response for post-hoc review, and include **tool-choice fences** (force a specific tool, or force text-only, when you know the right answer).
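The validate-and-log step can be sketched as below. This is a deliberately minimal validator under stated assumptions (the `create_ticket` schema and the bad call are both hypothetical); a production system would use a full JSON Schema validator, but even this catches the hallucinated-argument failure mode before it reaches a real service.

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("tool_calls")

def validate_call(call: dict, schema: dict) -> list[str]:
    """Return a list of validation errors (empty list = valid).
    Checks required fields and rejects unknown arguments."""
    errors = []
    props = schema["parameters"]["properties"]
    args = call.get("arguments", {})
    for field in schema["parameters"].get("required", []):
        if field not in args:
            errors.append(f"missing required argument: {field}")
    for field in args:
        if field not in props:
            errors.append(f"unknown argument: {field}")
    return errors

schema = {
    "name": "create_ticket",
    "parameters": {
        "type": "object",
        "properties": {"title": {"type": "string"}},
        "required": ["title"],
    },
}

# A call with an invented ID and a missing required field.
bad_call = {"name": "create_ticket", "arguments": {"ticket_id": "T-99999"}}
errors = validate_call(bad_call, schema)

# Log every call and its validation outcome for post-hoc review.
log.info("call=%s errors=%s", json.dumps(bad_call), errors)
# errors == ['missing required argument: title', 'unknown argument: ticket_id']
```

Rejected calls can be sent back to the model as an error result, which in practice is usually enough for it to retry with corrected arguments.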

For APAC mid-market firms, function calling is the most reliable route to wrapping legacy systems with a natural-language front-end. Start with read-only tools (search, lookup, report), and move to scoped write tools (create-ticket, schedule-meeting) only after the read layer has been running cleanly for a quarter — the blast radius of a hallucinated write is large enough that gradualism pays.

Where AIMenta applies this

Service lines where this concept becomes a deliverable for clients.

Beyond this term

Where this concept ships in practice.

Encyclopedia entries name the moving parts. The links below show where AIMenta turns these concepts into engagements — across service pillars, industry verticals, and Asian markets.
