AI Agent Patterns: Tool Use, ReAct, and Chain of Thought
Prologue: What Happens When You Build an Agent Without Patterns
The first time I built an AI agent, I just chained LLM calls together without any real plan. It kind of worked—until it didn't. When errors occurred, I had no idea where things went wrong. I couldn't trace why the model made certain decisions or why it chose particular tools.
That's when it clicked: agents need to be built on proven patterns, not ad-hoc LLM calls.
The gap between a simple Q&A chatbot and an agent that executes code, reads files, and calls APIs is enormous. Three patterns bridge that gap:
- Tool Use — how agents interact with the external world
- ReAct — a loop that alternates between reasoning and acting
- Chain of Thought — a prompting technique that forces step-by-step reasoning
Let's break each one down.
1. Tool Use: Giving the LLM Hands
The Concept
An LLM is, at its core, a function that takes text in and produces text out. No matter how smart it is, it can't read files, call APIs, or run calculations on its own. Tool Use is the pattern that fixes this.
Think of an LLM as an incredibly intelligent strategist with no arms or legs. Tools are the prosthetics. The strategist says "read file A," and the system actually reads it and passes back the result.
How Function Calling Works
Modern LLMs support Function Calling (also called Tool Calling). Instead of outputting plain text, the model can output structured JSON saying "call this function with these arguments."
// Tool definitions: telling the model what's available.
// This uses Anthropic's schema (matching the loop below); OpenAI's format
// is similar but nests everything under a "function" key.
const tools: Anthropic.Tool[] = [
  {
    name: "read_file",
    description: "Reads a file from the local filesystem and returns its contents",
    input_schema: {
      type: "object",
      properties: {
        path: {
          type: "string",
          description: "Absolute path to the file",
        },
      },
      required: ["path"],
    },
  },
];
When the model needs to use a tool, it responds like this:
{
  "role": "assistant",
  "content": [
    {
      "type": "tool_use",
      "id": "toolu_abc123",
      "name": "read_file",
      "input": { "path": "/tmp/data.json" }
    }
  ]
}
Your system executes the tool and sends the result back to the model.
The Tool Use Loop
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

async function runAgentLoop(userMessage: string): Promise<string> {
  const messages: Anthropic.MessageParam[] = [
    { role: "user", content: userMessage },
  ];

  while (true) {
    const response = await client.messages.create({
      model: "claude-opus-4-5",
      max_tokens: 4096,
      tools: tools as Anthropic.Tool[],
      messages,
    });

    // No more tool calls — we're done
    if (response.stop_reason === "end_turn") {
      const textBlock = response.content.find((b) => b.type === "text");
      return textBlock?.type === "text" ? textBlock.text : "";
    }

    messages.push({ role: "assistant", content: response.content });

    const toolResults: Anthropic.ToolResultBlockParam[] = [];
    for (const block of response.content) {
      if (block.type === "tool_use") {
        const result = await executeTool(block.name, block.input as Record<string, string>);
        toolResults.push({
          type: "tool_result",
          tool_use_id: block.id,
          content: result,
        });
      }
    }
    messages.push({ role: "user", content: toolResults });
  }
}
The while (true) loop is the key. Keep executing as long as the model requests tools. Stop at end_turn.
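The loop calls an `executeTool` helper it never defines. Here is one way to sketch it; the registry shape and the single `read_file` handler are my assumptions, not part of the SDK:

```typescript
import { readFile } from "fs/promises";

// Registry mapping tool names to handlers. Each handler resolves to a string,
// including on failure: errors go back to the model as text, not exceptions,
// so the model can read them and recover.
const toolHandlers: Record<
  string,
  (input: Record<string, string>) => Promise<string>
> = {
  read_file: async (input) => readFile(input.path, "utf-8"),
};

async function executeTool(
  name: string,
  input: Record<string, string>
): Promise<string> {
  const handler = toolHandlers[name];
  if (!handler) return `Error: unknown tool "${name}"`;
  try {
    return await handler(input);
  } catch (err) {
    return `Error: ${err instanceof Error ? err.message : String(err)}`;
  }
}
```

Returning errors as strings instead of throwing keeps the loop alive: a failed tool call becomes one more observation for the model rather than a crash.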
Tool Design Tips
| Principle | Bad | Good |
|---|---|---|
| Name tools with verb+noun | data, process | read_file, search_web |
| Write specific descriptions | "Process data" | "Parse a CSV and return a JSON array" |
| Type parameters explicitly | value: any | count: number (1-100 range) |
| Return errors as strings | throw Error | return "Error: ..." |
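As a concrete example, here is a definition that follows all four tips: a verb+noun name, a description that says exactly what comes back, and explicitly typed, bounded parameters. The `search_web` tool itself is hypothetical, not a real API:

```typescript
// A hypothetical tool definition applying the design tips above:
// specific name, precise description, typed and range-bounded parameters.
const searchWebTool = {
  name: "search_web",
  description:
    "Search the web for a query and return up to max_results results as a JSON array of {title, url} objects",
  input_schema: {
    type: "object" as const,
    properties: {
      query: { type: "string", description: "The search query" },
      max_results: {
        type: "number",
        description: "How many results to return (1-100)",
        minimum: 1,
        maximum: 100,
      },
    },
    required: ["query"],
  },
};
```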
2. ReAct: Reason, Act, Observe, Repeat
The Concept
ReAct stands for Reasoning + Acting. It was introduced in the 2022 paper "ReAct: Synergizing Reasoning and Acting in Language Models" by researchers at Princeton and Google Brain. The idea: make LLMs explicitly alternate between thinking (Thought) and doing (Action).
The difference from plain Tool Use? Tool Use focuses on which tool to call. ReAct focuses on why you're calling it and what you learned from the result.
Thought: The user wants me to debug their Python script. First I need to read it.
Action: read_file(path="./buggy_script.py")
Observation: def calculate(x, y):\n return x / y\n\ncalculate(10, 0)
Thought: This will throw a ZeroDivisionError. Let me verify by running it.
Action: execute_code(code="...")
Observation: ZeroDivisionError: division by zero
Thought: As expected. I need to add a zero check before the division.
Action: write_file(path="./fixed_script.py", content="...")
Observation: File saved successfully
Thought: Fix is complete. Time to explain it to the user.
Final Answer: ...
The biggest advantage is traceability. Thoughts are logged, so when something breaks you know exactly where the reasoning went wrong.
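Because ReAct traces are plain text, your system has to parse them, and malformed output is the main failure mode. A minimal parser for the Thought/Action format shown above might look like this; the exact line format is an assumption, and real frameworks vary:

```typescript
// Parse one ReAct step of the form:
//   Thought: ...
//   Action: tool_name(arg="value", ...)
// Returns null when the output doesn't match, so the caller can
// re-prompt the model instead of crashing on malformed text.
interface ReActStep {
  thought: string;
  action: { tool: string; args: Record<string, string> };
}

function parseReActStep(output: string): ReActStep | null {
  const thought = output.match(/^Thought:\s*(.+)$/m)?.[1];
  const action = output.match(/^Action:\s*(\w+)\((.*)\)\s*$/m);
  if (!thought || !action) return null;

  // Extract key="value" pairs from the argument list
  const args: Record<string, string> = {};
  for (const m of action[2].matchAll(/(\w+)="([^"]*)"/g)) {
    args[m[1]] = m[2];
  }
  return { thought, action: { tool: action[1], args } };
}
```

Modern function-calling APIs sidestep this parsing entirely by returning structured tool calls, but the Thought log is still worth keeping for debuggability.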
3. Chain of Thought: Think Before You Act
The Concept
Chain of Thought (CoT) is more of a prompting technique than a system architecture pattern, but it's essential in agent design.
The core idea is simple: instead of asking "what's the answer?", ask "show your work, then give the answer." Accuracy goes up significantly.
Why does it work? LLMs generate tokens sequentially. If you write the answer first, subsequent generation tends to rationalize that answer. But if you write out the reasoning first, the logical flow guides each next token.
// Zero-Shot CoT: just add "Let's think step by step"
const prompt = `
Solve this problem step by step.
Problem: A server costs $1,200/month. If traffic triples, costs increase 2.5x.
What is the annual cost increase?
Let's think step by step:
`;
// Few-Shot CoT: show worked examples
const fewShotPrompt = `
Example of step-by-step problem solving:
Example: If 5 people make 5 widgets in 5 days, how many widgets do 100 people make in 100 days?
Step 1: 1 person makes 5/5 = 1 widget in 5 days
Step 2: 1 person makes 1/5 widget per day
Step 3: 100 people make 100 * (1/5) = 20 widgets per day
Step 4: 100 people make 20 * 100 = 2,000 widgets in 100 days
Answer: 2,000
Now solve this problem the same way:
[actual problem]
`;
In agent systems, CoT shines during the planning phase. Instead of immediately calling tools, the agent thinks through the plan first.
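One way to wire that in is a planning prompt that demands a numbered plan before any tool call. This is a sketch; the prompt wording and `buildPlanningPrompt` helper are my own, not a standard API:

```typescript
// Build a planning-phase prompt: force the model to lay out a step-by-step
// plan (CoT) before it is allowed to call any tool.
function buildPlanningPrompt(task: string, toolNames: string[]): string {
  return [
    `Task: ${task}`,
    `Available tools: ${toolNames.join(", ")}`,
    "",
    "Before calling any tool, think step by step and write a numbered plan:",
    "1. What information do you need?",
    "2. Which tool provides each piece, and in what order?",
    "3. What could go wrong at each step?",
    "Then execute the plan one step at a time.",
  ].join("\n");
}
```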
4. When to Use Which Pattern
| Pattern | Core Idea | Use When | Watch Out For |
|---|---|---|---|
| Tool Use | Connect to external systems | Always — it's the foundation | Tool description quality is everything |
| ReAct | Interleave reasoning and action | Multi-step problems, when you need debuggability | Must handle parsing failures |
| Chain of Thought | Step-by-step reasoning | Complex calculations, planning phases | Token costs go up |
| Plan-and-Execute | CoT + ReAct combined | Long-running complex tasks | Wrong plan = wrong everything |
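Plan-and-Execute appears in the table but not in the text; its skeleton is just CoT planning followed by a ReAct-style executor. Here is a sketch with the LLM-backed pieces injected as functions (`planFn` and `executeFn` are stand-ins, not library APIs):

```typescript
// Plan-and-Execute skeleton: produce an ordered plan once (CoT), then run
// each step through an executor that can see earlier observations (ReAct).
async function planAndExecute(
  task: string,
  planFn: (task: string) => Promise<string[]>,
  executeFn: (step: string, context: string[]) => Promise<string>
): Promise<string[]> {
  const plan = await planFn(task); // planning phase: ordered steps
  const observations: string[] = [];
  for (const step of plan) {
    // execution phase: each step sees what earlier steps learned
    observations.push(await executeFn(step, observations));
  }
  return observations;
}
```

The table's warning applies directly here: nothing in this loop revisits the plan, so a production version would re-plan when an observation contradicts it.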
5. Safety Considerations
Agents are powerful. That means they're also dangerous if unchecked.
// Path allowlist
const ALLOWED_PATHS = ["/tmp/agent-workspace"];
const isPathAllowed = (path: string) =>
  ALLOWED_PATHS.some((allowed) => path.startsWith(allowed));

// Block dangerous shell patterns
const BLOCKED_PATTERNS = ["rm -rf", "sudo", "curl | bash"];
const isSafeCommand = (cmd: string) =>
  !BLOCKED_PATTERNS.some((p) => cmd.includes(p));

// Token budget guard
class CostGuard {
  private total = 0;
  private readonly limit = 100_000;

  check(tokens: number): boolean {
    this.total += tokens;
    return this.total <= this.limit;
  }
}

// Human-in-the-loop for destructive actions
async function requireApproval(description: string): Promise<boolean> {
  // In a real system, this would surface a UI prompt or Slack notification
  console.log(`[APPROVAL REQUIRED] ${description}`);
  return false; // default deny
}
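These guards only matter if they sit in the tool-execution path. A sketch of a file-read tool that checks its allowlist before touching disk; the function name and inline allowlist are illustrative:

```typescript
import { readFile } from "fs/promises";

// Composing the guards: check the path allowlist first, and return failures
// as strings so the agent can read the error and recover.
const ALLOWED_ROOTS = ["/tmp/agent-workspace"];

async function guardedReadFile(path: string): Promise<string> {
  if (!ALLOWED_ROOTS.some((root) => path.startsWith(root))) {
    return `Error: path "${path}" is outside the allowed workspace`;
  }
  try {
    return await readFile(path, "utf-8");
  } catch (err) {
    return `Error: ${err instanceof Error ? err.message : String(err)}`;
  }
}
```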
The rule of thumb: minimize blast radius. Give agents the minimum permissions they need, add spending limits, and require human approval for anything irreversible.
Epilogue: Patterns First, Code Second
The biggest lesson from building AI agents: understand the patterns before writing any code.
Without Tool Use, the agent can't do anything. Without ReAct, it can't handle complex multi-step problems. Without Chain of Thought, planning falls apart.
These three patterns compose well. Use them together and you have a solid foundation for almost any agent system. Best of all, they're framework-agnostic—they work whether you're using Anthropic, OpenAI, LangChain, or rolling your own.
Start with a simple file-management agent. Then graduate to multi-agent pipelines. The patterns scale.