How a ReAct Agent Really Works

September 7, 2025 · agents · 4 minutes

A practical explanation of the ReAct framework—how LLMs combine reasoning, tool use, and observations to act as agents.

Background

Large Language Models (LLMs) can be seen as text-in, text-out systems: they take textual input and produce textual output.

Recently, the idea of the AI agent has gained huge traction. In simple terms, an agent is more than just a model—it’s a system where an LLM dynamically directs its own reasoning, decides when to use external tools, and manages how tasks get accomplished. (The same intuition also extends to multimodal models, but we’ll focus here on text.)

For a concise introduction to agents versus workflows, I recommend Anthropic’s article: Building effective agents.

One general definition of an agent is:

Agents are systems where LLMs dynamically manage their own reasoning and tool use, maintaining control over how they accomplish tasks.


The ReAct Framework

ReAct is one of the most influential frameworks in the development of modern AI agents, introduced by Yao et al. (2023): ReAct: Synergizing Reasoning and Acting in Language Models (ICLR).

At its core, ReAct combines two processes:

  • Reasoning – internal chains of thought
  • Acting – external tool use

[Figure: The two core components of ReAct: Reasoning and Acting.]

In the original paper, ReAct is compared to other prompting strategies:

[Figure: Comparison of four prompting methods: (a) Standard, (b) Chain-of-Thought (reason only), (c) Act-only, and (d) ReAct (reason + act).]

Notice that in panel (d) the cycle alternates between Thought → Action → Observation. But how does this look in practice, given that an LLM is still just a text generator? Let’s break it down.


How the Components Work

1. Thoughts

Modern LLMs can generate explicit “thinking” traces (sometimes hidden, sometimes shown). With a proper prompt, you can ask the model to produce reasoning steps before its final output. For example:

  • Input: query
  • Output: Thought: ... reasoning ... + Answer: ...

This part is straightforward: it’s just instructing the model to verbalize its reasoning.
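As a minimal sketch, here is what that instruction and the parsing of its result might look like. The prompt wording, the sample output, and the function name are illustrative assumptions, not from the paper:

```python
import re

# Hypothetical system prompt asking the model to verbalize its reasoning.
SYSTEM_PROMPT = (
    "Before answering, write your reasoning on a line starting with 'Thought:'. "
    "Then give the final result on a line starting with 'Answer:'."
)

def parse_thought_and_answer(text: str) -> dict:
    """Split a model completion into its reasoning trace and final answer."""
    thought = re.search(r"Thought:\s*(.*)", text)
    answer = re.search(r"Answer:\s*(.*)", text)
    return {
        "thought": thought.group(1).strip() if thought else None,
        "answer": answer.group(1).strip() if answer else None,
    }

# A made-up completion, standing in for a real LLM response.
sample_output = "Thought: 12 * 12 is 144.\nAnswer: 144"
print(parse_thought_and_answer(sample_output))
# → {'thought': '12 * 12 is 144.', 'answer': '144'}
```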


2. Actions

Here’s where agents become powerful. You can equip your system with external tools (e.g., a web search API, a calculator, or a database lookup).

To enable tool use, you extend the model’s prompt with:

  • When to use the tool (e.g., “use search if you need fresh facts”)
  • What the tool does (e.g., “this tool queries Google”)
  • How to call the tool (e.g., {"query": "textual_query"} wrapped in special tokens like <tool_use></tool_use>)

Example model output:

Thought: I need to look this up.
Action: <tool_use>{"query": "LLM ReAct framework"}</tool_use>

Your code can then parse the JSON between <tool_use> tags and call the actual tool. If the model decides not to use a tool, it simply produces a direct answer.
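The parsing step above can be sketched in a few lines. The `<tool_use>` tag convention follows the example output; everything else (function name, sample string) is an assumption for illustration:

```python
import json
import re

def extract_tool_call(model_output: str):
    """Return the parsed tool-call dict if the model requested a tool,
    otherwise None (meaning the model answered directly)."""
    match = re.search(r"<tool_use>(.*?)</tool_use>", model_output, re.DOTALL)
    if match is None:
        return None
    return json.loads(match.group(1))

output = (
    "Thought: I need to look this up.\n"
    'Action: <tool_use>{"query": "LLM ReAct framework"}</tool_use>'
)
print(extract_tool_call(output))  # → {'query': 'LLM ReAct framework'}
```

The returned dict can then be passed as arguments to the actual tool function; a `None` result signals that the loop should stop and return the model's direct answer.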


3. Observations

After a tool call, the system receives results (the observation). These are stored as text and appended back into the model’s context.


Putting It All Together

A ReAct agent runs inside a loop:

  1. Initial input:
     system prompt (with tool instructions) + user query
  2. Model output, either:
     • Thought + Action (→ tool call), or
     • Thought + Answer (→ stop, task complete)
  3. If Action:
     • Parse the tool call
     • Run the function in Python (or another environment)
     • Capture the output as Observation
  4. Next-iteration input:
     prompt + query + thought1 + action1 + observation1
  5. Repeat until:
     • The model outputs a final Answer, or
     • The loop hits a predefined step limit
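The loop above can be condensed into a short sketch. The LLM and the tool are stubbed out with stand-in functions (a real implementation would call a model API and a search backend); the control flow is the point:

```python
import json
import re

MAX_STEPS = 5  # predefined step limit

def search_tool(query: str) -> str:
    """Stand-in for a real tool, e.g. a web search API."""
    return f"Top result for '{query}': ReAct interleaves reasoning and acting."

def fake_llm(context: str) -> str:
    """Stand-in for a real LLM call: requests a tool once, then answers."""
    if "Observation:" in context:
        return ("Thought: I have what I need.\n"
                "Answer: ReAct alternates thoughts, actions, and observations.")
    return ('Thought: I should search.\n'
            'Action: <tool_use>{"query": "ReAct framework"}</tool_use>')

def react_loop(system_prompt: str, user_query: str) -> str:
    context = f"{system_prompt}\n\nQuestion: {user_query}\n"
    for _ in range(MAX_STEPS):
        output = fake_llm(context)
        context += output + "\n"  # the loop only ever appends text
        match = re.search(r"<tool_use>(.*?)</tool_use>", output, re.DOTALL)
        if match is None:
            # No tool call: the model produced a final Answer.
            return output.split("Answer:", 1)[-1].strip()
        args = json.loads(match.group(1))
        observation = search_tool(**args)          # run the tool
        context += f"Observation: {observation}\n"  # feed the result back
    return "Stopped: step limit reached."

print(react_loop("You may call a search tool.", "What is ReAct?"))
```

Note that the context only ever grows: each iteration appends a thought, an action, and an observation, which is exactly the scaling problem discussed next.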

Beyond the Naive ReAct

The vanilla ReAct loop works, but has limitations:

  • Context growth: Each loop iteration adds more text, which quickly bloats the prompt. A common fix is context compression or summarization of past steps.
  • Advanced designs: Many modern coding agents (e.g., Claude Code, ChatGPT “agent mode”) still follow the ReAct spirit, but with extra modules for memory, planning, and evaluation.
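As one naive illustration of context compression (my own toy example, not a production technique): keep only the most recent steps verbatim and collapse the rest into a placeholder summary line, so the prompt stops growing linearly with the number of iterations.

```python
def compress_history(steps: list[str], keep_last: int = 2) -> list[str]:
    """Replace all but the most recent steps with a one-line summary.

    A real agent would summarize the dropped steps with an LLM call;
    here we just note how many were omitted.
    """
    if len(steps) <= keep_last:
        return steps
    dropped = len(steps) - keep_last
    summary = f"[Summary of {dropped} earlier steps omitted.]"
    return [summary] + steps[-keep_last:]

history = ["thought/action/obs 1", "thought/action/obs 2",
           "thought/action/obs 3", "thought/action/obs 4"]
print(compress_history(history))
# → ['[Summary of 2 earlier steps omitted.]',
#    'thought/action/obs 3', 'thought/action/obs 4']
```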

Key Takeaways

  • ReAct is about alternating reasoning and acting, not just one or the other.
  • With careful prompt design, you can capture thoughts, actions, and observations as structured text.
  • The looped process makes agents more capable than a single-shot LLM, but requires context management to scale.

✦ That’s the essence of how a ReAct agent works in practice.
Future posts will dive into improvements like memory compression, evaluation strategies, and deployment tips.