Build Your First Agent

Enough theory. In the next 20 minutes you'll build a real AI agent in Python — one that reasons, calls tools, and loops until it's done. No framework, just ~70 lines of code.

What You'll Build

A tiny "math + weather" agent. It picks the right tool for your question, calls it, reads the result, and answers.

Example

You: "What's 48 times 17, and is it raining in Tokyo?"

Agent thinks: "I need to call calculator, then get_weather, then answer."

Agent replies: "48 × 17 = 816. Tokyo is currently 14°C with light rain."

Step 0 — Prereqs

Python 3.10+

Any recent version works. Check with python3 --version.

API Key

Grab one from console.anthropic.com. Free credits to start.

20 minutes

That's honestly all this takes.

Step 1 — Install the SDK

# Create a fresh directory
mkdir my-first-agent && cd my-first-agent

# Install the Anthropic Python SDK
pip install anthropic

# Set your API key (for this shell session)
export ANTHROPIC_API_KEY="sk-ant-..."

That's the entire setup. No framework to learn first, no YAML to edit.

Step 2 — Your First LLM Call

Before we add tools, let's make sure we can talk to Claude. Create agent.py:

from anthropic import Anthropic

client = Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Say hi in one word."}
    ],
)

print(response.content[0].text)

Run python agent.py. You should see Claude say hi. If that works, you have an LLM. Now let's upgrade it into an agent.

Step 3 — Define Your Tools

A tool is just a function + a schema describing it. We'll give the agent two tools: a calculator and a (fake) weather lookup.

# --- the actual tool functions ---

def calculator(expression: str) -> str:
    # eval with empty builtins blocks the obvious tricks, but it is not a
    # real sandbox; don't point this at untrusted input in production.
    try:
        return str(eval(expression, {"__builtins__": {}}, {}))
    except Exception as e:
        return f"error: {e}"

def get_weather(city: str) -> str:
    # A real agent would call a weather API here.
    fake = {"tokyo": "14°C, light rain", "paris": "18°C, sunny"}
    return fake.get(city.lower(), "unknown city")

# --- the tool schemas Claude sees ---

tools = [
    {
        "name": "calculator",
        "description": "Evaluate a math expression and return the result.",
        "input_schema": {
            "type": "object",
            "properties": {"expression": {"type": "string"}},
            "required": ["expression"],
        },
    },
    {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
]

TOOLS_BY_NAME = {"calculator": calculator, "get_weather": get_weather}

Key idea

The model can't actually run your code. It only sees the schema and decides when to call a tool. You run the function and send the result back. That back-and-forth is the agent loop.

Step 4 — The Agent Loop

This is the whole magic. Call the model. If it asks for a tool, run it, append the result, call the model again. Stop when it has a final answer.

def run_agent(user_message: str, max_steps: int = 10):
    messages = [{"role": "user", "content": user_message}]

    for step in range(max_steps):
        response = client.messages.create(
            model="claude-sonnet-4-6",
            max_tokens=1024,
            tools=tools,
            messages=messages,
        )

        # No tool request? Then Claude has a final answer (or stopped for
        # another reason, like hitting max_tokens). Return its text.
        if response.stop_reason != "tool_use":
            text = "".join(
                b.text for b in response.content if b.type == "text"
            )
            return text

        # Otherwise, it wants to use tools. Run them.
        messages.append({"role": "assistant", "content": response.content})
        tool_results = []
        for block in response.content:
            if block.type == "tool_use":
                fn = TOOLS_BY_NAME[block.name]
                result = fn(**block.input)
                print(f"  → {block.name}({block.input}) = {result}")
                tool_results.append({
                    "type": "tool_result",
                    "tool_use_id": block.id,
                    "content": result,
                })
        messages.append({"role": "user", "content": tool_results})

    return "(hit max steps)"

That's it. That's an agent.

Read the loop carefully — this same pattern scales from toy demos to production coding agents. It's really just: call model → run tools → call model → repeat.

Step 5 — Run It

if __name__ == "__main__":
    answer = run_agent(
        "What's 48 times 17, and is it raining in Tokyo?"
    )
    print("\n" + answer)

Run it. You'll see the tool calls stream by, then the final answer. Try your own questions.

Expected output

  → calculator({'expression': '48 * 17'}) = 816
  → get_weather({'city': 'Tokyo'}) = 14°C, light rain

48 × 17 = 816. Tokyo is currently 14°C with light rain, so yes — it's raining there.

Step 6 — Make It Yours

You now have a working agent. Here's what to try next:

Add web search

Wrap a real search API (Brave, Tavily, Exa) as a tool. Suddenly your agent can research.
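A sketch of what that wrapper can look like. The endpoint URL, the response shape (`results`, `title`, `snippet`), and the `SEARCH_API_KEY` variable below are all placeholders; swap in your provider's real client or HTTP API:

```python
import json
import os
import urllib.request

# Hypothetical endpoint; replace with your provider's documented URL.
SEARCH_URL = "https://api.example-search.com/v1/search"

def web_search(query: str) -> str:
    """Call a search API and return a compact text summary for the model."""
    payload = {"q": query, "key": os.environ.get("SEARCH_API_KEY", "")}
    req = urllib.request.Request(
        SEARCH_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        hits = json.load(resp).get("results", [])
    # Keep the result short: the model reads this verbatim.
    return "\n".join(f"{h['title']}: {h['snippet']}" for h in hits[:5]) or "no results"

web_search_tool = {
    "name": "web_search",
    "description": "Search the web and return the top results.",
    "input_schema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}
```

Append `web_search_tool` to `tools` and add `"web_search": web_search` to `TOOLS_BY_NAME`, and the loop from Step 4 handles the rest unchanged.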

Add file tools

Give it read_file and write_file. Now it can work on your codebase.
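A minimal sketch of those two tools, following the same function-plus-schema pattern as Step 3. Errors come back as strings rather than exceptions, so the model can read what went wrong and react:

```python
def read_file(path: str) -> str:
    """Return the file's contents, or an error string the model can reason about."""
    try:
        with open(path, "r", encoding="utf-8") as f:
            return f.read()
    except OSError as e:
        return f"error: {e}"

def write_file(path: str, content: str) -> str:
    """Write text to a file, creating or overwriting it."""
    try:
        with open(path, "w", encoding="utf-8") as f:
            f.write(content)
        return f"wrote {len(content)} chars to {path}"
    except OSError as e:
        return f"error: {e}"

file_tools = [
    {
        "name": "read_file",
        "description": "Read a UTF-8 text file and return its contents.",
        "input_schema": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
    {
        "name": "write_file",
        "description": "Write text to a file, creating or overwriting it.",
        "input_schema": {
            "type": "object",
            "properties": {
                "path": {"type": "string"},
                "content": {"type": "string"},
            },
            "required": ["path", "content"],
        },
    },
]
```

In anything beyond a toy, restrict these to a working directory: a model that can write anywhere on disk is a model that can overwrite anything on disk.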

Add a system prompt

Set system="..." on the message call. Give the agent a personality or strict rules.
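One way to wire that in is to assemble the request kwargs in a single helper, so the system prompt travels with every call the loop makes. The helper and the prompt text below are illustrative; the `system` parameter itself is part of the Messages API:

```python
SYSTEM_PROMPT = (
    "You are a concise assistant. Use the calculator tool for any arithmetic "
    "instead of doing it in your head."
)

def build_request(messages: list, tools: list, system_prompt: str) -> dict:
    """Assemble keyword arguments for client.messages.create in one place."""
    return {
        "model": "claude-sonnet-4-6",
        "max_tokens": 1024,
        "system": system_prompt,  # applied on every iteration of the loop
        "tools": tools,
        "messages": messages,
    }

# Inside run_agent's loop:
# response = client.messages.create(**build_request(messages, tools, SYSTEM_PROMPT))
```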

Add memory

Persist messages to a file so your agent remembers past conversations.
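A sketch using JSON on disk. One caveat: the SDK's response content blocks are objects, not plain dicts, so you'd convert them before saving (recent SDK versions expose a `model_dump()` method on them); the sketch below assumes plain-dict messages:

```python
import json
from pathlib import Path

HISTORY_FILE = Path("history.json")  # illustrative location

def load_history() -> list:
    """Load prior messages, or start fresh if none exist."""
    if HISTORY_FILE.exists():
        return json.loads(HISTORY_FILE.read_text())
    return []

def save_history(messages: list) -> None:
    """Persist the conversation so the next run can continue it."""
    HISTORY_FILE.write_text(json.dumps(messages, indent=2))
```

Then seed `run_agent` with `messages = load_history() + [{"role": "user", "content": user_message}]` and call `save_history(messages)` before returning.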

Stream the output

Use client.messages.stream(...) so responses appear word-by-word.

Add a second agent

Have this agent call another agent (a specialist) as a tool. That's multi-agent in 10 lines.
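A minimal sketch: a factory that wraps any question-answering function, such as a second `run_agent` configured with different tools and its own system prompt, as a tool the outer agent can call:

```python
def make_agent_tool(name: str, description: str, agent_fn):
    """Wrap an agent function (str -> str) as a (schema, function) tool pair.

    agent_fn would typically be a run_agent variant from Step 4, built with
    its own tools and system prompt.
    """
    schema = {
        "name": name,
        "description": description,
        "input_schema": {
            "type": "object",
            "properties": {"question": {"type": "string"}},
            "required": ["question"],
        },
    }
    return schema, agent_fn

# Usage sketch: register the specialist like any other tool.
# schema, fn = make_agent_tool(
#     "ask_researcher", "Delegate a research question to a specialist.", run_agent
# )
# tools.append(schema)
# TOOLS_BY_NAME["ask_researcher"] = fn
```

From the outer agent's point of view, the specialist is just another tool call; the nesting happens entirely on your side of the loop.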

Full Code

The entire agent, in one copy-pastable file. Save as agent.py and run.

from anthropic import Anthropic

client = Anthropic()

def calculator(expression: str) -> str:
    # eval with empty builtins is not a real sandbox; avoid untrusted input.
    try:
        return str(eval(expression, {"__builtins__": {}}, {}))
    except Exception as e:
        return f"error: {e}"

def get_weather(city: str) -> str:
    fake = {"tokyo": "14°C, light rain", "paris": "18°C, sunny"}
    return fake.get(city.lower(), "unknown city")

tools = [
    {
        "name": "calculator",
        "description": "Evaluate a math expression.",
        "input_schema": {
            "type": "object",
            "properties": {"expression": {"type": "string"}},
            "required": ["expression"],
        },
    },
    {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
]

TOOLS_BY_NAME = {"calculator": calculator, "get_weather": get_weather}

def run_agent(user_message: str, max_steps: int = 10):
    messages = [{"role": "user", "content": user_message}]
    for _ in range(max_steps):
        response = client.messages.create(
            model="claude-sonnet-4-6",
            max_tokens=1024,
            tools=tools,
            messages=messages,
        )
        if response.stop_reason != "tool_use":
            return "".join(
                b.text for b in response.content if b.type == "text"
            )
        messages.append({"role": "assistant", "content": response.content})
        results = []
        for block in response.content:
            if block.type == "tool_use":
                fn = TOOLS_BY_NAME[block.name]
                out = fn(**block.input)
                print(f"  → {block.name}({block.input}) = {out}")
                results.append({
                    "type": "tool_result",
                    "tool_use_id": block.id,
                    "content": out,
                })
        messages.append({"role": "user", "content": results})
    return "(hit max steps)"

if __name__ == "__main__":
    print(run_agent("What's 48 times 17, and is it raining in Tokyo?"))

Where to Go Next