AIMenta
Acronym · Intermediate · AI Agents & Autonomy

ReAct (Reason + Act)

An agent design pattern that interleaves explicit reasoning ("thought") with actions and observations, making the agent's logic auditable.

ReAct (Yao et al., 2022) is the agent pattern that interleaves **thinking** with **acting**. Instead of asking a model to decide on a tool call in one step, the prompt structure forces an explicit sequence: the model writes a `Thought:` (what it is reasoning about), then an `Action:` (the tool to call and its arguments), then receives an `Observation:` (the tool's result), and loops until it arrives at a final answer. The interleaving itself was the contribution: chain-of-thought alone gave reasoning, tool use alone gave action, and ReAct produced both in a single auditable trace.

The pattern became the mental model for agent design even where the literal `Thought/Action/Observation` tags are not used. Modern agent frameworks (LangChain, LlamaIndex, CrewAI, AutoGen, OpenAI's Assistants API, Anthropic's tool-use loop) all implement some variant: the model emits a tool call, an orchestrator executes it, the result is fed back, the loop continues. The reasoning step may now be implicit (hidden in reasoning-model chains-of-thought) or explicit (a separate field in the response), but the pattern is the same.
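The framework-style variant swaps the text tags for structured messages: the model emits a tool call object, the orchestrator executes it and appends the result. A hedged sketch, with stubbed responses and illustrative names (`run_agent`, `calculator`) rather than any specific framework's interface:

```python
import json

def calculator(expression):
    # Toy tool for the sketch; never eval untrusted input in real systems.
    return eval(expression, {"__builtins__": {}})

TOOL_REGISTRY = {"calculator": calculator}

# Stub model: first emits a structured tool call, then a final answer.
RESPONSES = iter([
    {"type": "tool_call", "name": "calculator", "arguments": {"expression": "6 * 7"}},
    {"type": "final", "content": "The answer is 42."},
])

def model(messages):
    return next(RESPONSES)  # a real orchestrator would call the LLM API here

def run_agent(user_message, max_turns=8):
    messages = [{"role": "user", "content": user_message}]
    for _ in range(max_turns):
        response = model(messages)
        if response["type"] == "final":
            return response["content"]
        # Execute the requested tool and feed the result back as a message.
        result = TOOL_REGISTRY[response["name"]](**response["arguments"])
        messages.append({"role": "tool", "content": json.dumps({"result": result})})
    raise RuntimeError("turn budget exhausted")
```

Whether the loop is driven by text tags or structured tool-call objects, the control flow is identical: emit, execute, observe, repeat.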

For APAC enterprise teams building internal agents, ReAct is the right baseline architecture because it produces **observability as a byproduct**. Every action in the loop is a structured record: which tool, which arguments, which result. That trace is exactly what a production operations team needs for debugging, audit, and cost attribution. Agent patterns that hide the reasoning step (one-shot planning, pure chain-of-thought) deliver comparable quality on simple tasks but become opaque black boxes when they fail in production.
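The "observability as a byproduct" point can be made concrete with a per-step trace record. The field names and the `search_docs` tool below are illustrative, not a standard schema:

```python
import json
import time
from dataclasses import dataclass, asdict

# One structured entry per tool call: who was called, with what, what came
# back, and how long it took. These fields are an assumed example schema.
@dataclass
class ToolStep:
    step: int
    tool: str
    arguments: dict
    result: str
    latency_ms: float

trace: list[ToolStep] = []

def traced_call(step, tool_name, tool_fn, **kwargs):
    start = time.perf_counter()
    result = tool_fn(**kwargs)
    trace.append(ToolStep(step, tool_name, kwargs, str(result),
                          round((time.perf_counter() - start) * 1000, 2)))
    return result

# Hypothetical tool invocation; in a real loop this wraps every Action.
traced_call(1, "search_docs", lambda query: "3 hits", query="vpn policy")

# Each entry serializes to one JSON line for an audit log or cost dashboard.
audit_line = json.dumps(asdict(trace[0]))
```

Because the loop already forces every action through one choke point, wrapping that point is all the instrumentation a team needs for debugging and cost attribution.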

The failure mode to anticipate is **looping** — an agent that revisits the same tool with small variations, never converging. Production systems cap iteration count, budget total tool calls, and include an explicit `final_answer` tool that the model is trained or prompted to prefer once confidence is high.
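The guard rails named above (an iteration cap, a tool-call budget, and an escape hatch for the final answer) can be sketched as a wrapper around the loop. A repeated call with identical arguments is the classic looping signature, so this sketch also detects that; the model stub, stop messages, and thresholds are all illustrative assumptions:

```python
def guarded_loop(model, tools, max_steps=10, tool_budget=6):
    seen_calls = set()
    tool_calls = 0
    for _ in range(max_steps):          # hard cap on iterations
        step = model()
        if step["type"] == "final_answer":
            return step["content"]      # the explicit escape hatch
        call_key = (step["name"], frozenset(step["args"].items()))
        if call_key in seen_calls:      # identical call repeated: looping
            return "STOP: repeated identical tool call (loop detected)"
        if tool_calls >= tool_budget:   # total tool-call budget exhausted
            return "STOP: tool budget exhausted"
        seen_calls.add(call_key)
        tool_calls += 1
        tools[step["name"]](**step["args"])
    return "STOP: iteration cap reached"

# Demo: a stub model that repeats the same call, tripping the loop guard.
calls = iter([
    {"type": "tool", "name": "search", "args": {"q": "revenue"}},
    {"type": "tool", "name": "search", "args": {"q": "revenue"}},
])
result = guarded_loop(lambda: next(calls), {"search": lambda q: "..."})
```

In production the `STOP` branches would surface to an operator or fall back to a safe response rather than return a string, but the three checks are the ones that matter.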

Where AIMenta applies this

Service lines where this concept becomes a deliverable for clients.

Beyond this term

Where this concept ships in practice.

Encyclopedia entries name the moving parts. The links below show where AIMenta turns these concepts into engagements — across service pillars, industry verticals, and Asian markets.
