# AgentLens

DevTools for AI agents — profile every call, find every bottleneck.
When your agent breaks or runs slowly, good luck figuring out which tool call failed, which LLM call consumed most of your tokens, or where latency piled up. AgentLens gives you per-call latency, token accounting, success/failure tracking, call chains, and JSON trace export — with zero required dependencies.
```
$ agentlens view trace.json

Call Trace — research-agent

#  Name               Type   Model    Latency   Tokens  Status
1  research_pipeline  chain  —        1243 ms   —       ✓ ok
2  plan_research      llm    gpt-4o    612 ms   487     ✓ ok
3  web_search         tool   —          87 ms   —       ✓ ok
4  fetch_page         tool   —         134 ms   —       ✓ ok
5  fetch_page         tool   —          12 ms   —       ✗ error
6  summarize_text     tool   —          18 ms   —       ✓ ok
```
## Install

```
$ pip install agentlens

# With SDK integrations:
$ pip install agentlens[openai]
$ pip install agentlens[anthropic]
$ pip install agentlens[all]
```
For the TypeScript SDK:

```
$ npm install agentlens
# or
$ yarn add agentlens
```
## Quickstart (Python)

```python
import openai

from agentlens import Profiler
from agentlens.reporter import Reporter

openai_client = openai.OpenAI()
profiler = Profiler("my-agent")
reporter = Reporter(profiler)

# Decorate tool functions and LLM calls
@profiler.tool("web_search")
def web_search(query: str) -> list:
    ...

@profiler.llm(model="gpt-4o")
def call_gpt(messages: list):
    return openai_client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
    )

# Group calls into named chains
with profiler.chain("research_pipeline"):
    results = web_search("AI agent frameworks")
    response = call_gpt([{"role": "user", "content": str(results)}])

reporter.print_table()
reporter.print_summary()
reporter.export_json("trace.json")
```
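Under the hood, decorator-based profilers like this typically wrap each function in a timer and record the outcome. As a rough, self-contained illustration of the pattern (a toy sketch, not AgentLens's actual implementation), a minimal tool decorator might look like:

```python
import functools
import time

calls = []  # each record: {"name", "latency_ms", "success"}

def tool(name):
    """Toy decorator: time a function call and record success/failure."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            ok = True
            try:
                return fn(*args, **kwargs)
            except Exception:
                ok = False
                raise
            finally:
                latency_ms = (time.perf_counter() - start) * 1000
                calls.append({"name": name, "latency_ms": latency_ms, "success": ok})
        return wrapper
    return decorate

@tool("web_search")
def web_search(query):
    return [f"result for {query}"]

web_search("AI agent frameworks")
print(calls[0]["name"], calls[0]["success"])  # web_search True
```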
## Drop-in OpenAI client

```python
import openai

from agentlens import Profiler
from agentlens.integrations.openai import ProfiledOpenAI

profiler = Profiler("gpt-agent")
client = ProfiledOpenAI(openai.OpenAI(api_key="..."), profiler=profiler)

# Use exactly like a normal OpenAI client — profiling is automatic
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
```
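The wrapped client above follows a standard delegation pattern: intercept the call, time it, record usage, then forward to the real SDK. Here is a stripped-down sketch of that idea using a stub in place of the OpenAI client (the record fields are illustrative, not AgentLens's schema):

```python
import time

def profile_create(create_fn, records):
    """Wrap a chat-completions `create` callable so each call is timed
    and its token usage recorded (toy stand-in for a profiled client)."""
    def wrapped(**kwargs):
        start = time.perf_counter()
        response = create_fn(**kwargs)
        records.append({
            "model": kwargs.get("model"),
            "latency_ms": (time.perf_counter() - start) * 1000,
            "tokens": response["usage"]["total_tokens"],
        })
        return response
    return wrapped

# Stub standing in for openai_client.chat.completions.create
def fake_create(**kwargs):
    return {"usage": {"total_tokens": 42}}

records = []
create = profile_create(fake_create, records)
create(model="gpt-4o", messages=[{"role": "user", "content": "Hello!"}])
print(records[0]["model"], records[0]["tokens"])  # gpt-4o 42
```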
## Quickstart (TypeScript)

```ts
import OpenAI from "openai";
import { Profiler, Reporter } from "agentlens";

const openai = new OpenAI();
const profiler = new Profiler("my-agent");
const reporter = new Reporter(profiler);

const searchWeb = profiler.wrapTool("search_web", async (query: string) => {
  const results = await fetch(`https://api.search.com?q=${query}`);
  return results.json();
});

const callGPT = profiler.wrapLLM("gpt-4o", async (messages) => {
  return openai.chat.completions.create({ model: "gpt-4o", messages });
}, { name: "plan_step" });

await profiler.runChain("research_pipeline", async () => {
  const results = await searchWeb("AI agent frameworks");
  return await callGPT([{ role: "user", content: JSON.stringify(results) }]);
});

reporter.printTable();
reporter.exportJSON("trace.json");
```
## API reference

### Profiler

| Method | Description |
|---|---|
| `Profiler(name, tags?)` | Create a profiler instance |
| `.tool(name?, tags?)` | Decorator for tool functions |
| `.llm(model?, name?, tags?)` | Decorator for LLM calls |
| `.chain(name)` | Context manager to group calls |
| `.start_call(name, call_type, model?)` | Manually start a call |
| `.end_call(call, success, error?, token_usage?)` | Manually finish a call |
| `.calls` | List of all `ProfiledCall` objects |
| `.summary()` | Dict of aggregate stats |
| `.get_calls(call_type?, success_only?, failed_only?)` | Filtered call list |
| `.clear()` | Reset recorded calls |
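For code you can't decorate, the manual `start_call`/`end_call` pair brackets a region by hand. The following self-contained toy shows the kind of bookkeeping that API implies; the field names here are illustrative, not the real `ProfiledCall` attributes:

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class Call:
    name: str
    call_type: str
    start: float
    latency_ms: float = 0.0
    success: bool = True
    error: Optional[str] = None

class MiniProfiler:
    """Toy profiler with manual start/end bracketing."""
    def __init__(self):
        self.calls = []

    def start_call(self, name, call_type):
        call = Call(name, call_type, time.perf_counter())
        self.calls.append(call)
        return call

    def end_call(self, call, success=True, error=None):
        call.latency_ms = (time.perf_counter() - call.start) * 1000
        call.success = success
        call.error = error

    def summary(self):
        return {
            "total_calls": len(self.calls),
            "failed": sum(1 for c in self.calls if not c.success),
        }

p = MiniProfiler()
call = p.start_call("fetch_page", "tool")
p.end_call(call, success=False, error="timeout")
print(p.summary())  # {'total_calls': 1, 'failed': 1}
```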
### Reporter

| Method | Description |
|---|---|
| `Reporter(profiler)` | Create a reporter |
| `.print_table()` | Print a detailed call table |
| `.print_summary()` | Print aggregate statistics |
| `.print_timeline()` | Print ASCII latency timeline |
| `.export_json(path)` | Export full trace as JSON |
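Once exported, a trace is plain JSON, so you can post-process it with standard tooling. The exact export schema isn't documented here, so the field names below (`agent`, `calls`, `status`) are assumptions modeled on the `agentlens view` output shown earlier:

```python
import json

# Illustrative trace (hypothetical schema; the real export may differ)
trace_json = """
{
  "agent": "research-agent",
  "calls": [
    {"name": "fetch_page", "type": "tool", "latency_ms": 134, "status": "ok"},
    {"name": "fetch_page", "type": "tool", "latency_ms": 12, "status": "error"}
  ]
}
"""

trace = json.loads(trace_json)
failed = [c["name"] for c in trace["calls"] if c["status"] == "error"]
print(failed)  # ['fetch_page']
```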
## CLI

Render an exported trace in the terminal:

```
$ agentlens view trace.json
```