Modern AI agents orchestrate dozens of tool calls per request. A naive implementation runs them sequentially. Synapse analyses the calls, builds a dependency DAG, and fires independent ones in parallel — dramatically cutting pipeline latency.
```text
fetch_wikipedia --+
fetch_arxiv     --+
fetch_github    --+--> aggregate --> format_report
fetch_news      --+
fetch_patents   --+
```
```text
Stage 0 (5 calls in parallel) -> 200 ms
Stage 1 (1 call)              -> 100 ms
Stage 2 (1 call)              ->  50 ms

Wall clock: 350 ms vs 1150 ms sequential => 3.3x speedup
```
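The staging above can be sketched as a topological leveling pass: each call lands in the first stage after all of its dependencies, and every call in a stage can run concurrently. A minimal sketch (not Synapse's actual implementation), assuming each call id maps to the set of ids it depends on:

```python
def build_stages(deps: dict[str, set[str]]) -> list[list[str]]:
    """Group call ids into stages: a call runs in the first stage
    after all of its dependencies have completed."""
    level: dict[str, int] = {}

    def depth(node: str) -> int:
        # A node with no dependencies is level 0; otherwise it sits
        # one level below its deepest dependency.
        if node not in level:
            ds = deps.get(node, set())
            level[node] = 0 if not ds else 1 + max(depth(d) for d in ds)
        return level[node]

    for node in deps:
        depth(node)
    stages: list[list[str]] = [[] for _ in range(max(level.values()) + 1)]
    for node, lvl in level.items():
        stages[lvl].append(node)
    return stages

# The report pipeline from the diagram above.
deps = {
    "fetch_wikipedia": set(), "fetch_arxiv": set(), "fetch_github": set(),
    "fetch_news": set(), "fetch_patents": set(),
    "aggregate": {"fetch_wikipedia", "fetch_arxiv", "fetch_github",
                  "fetch_news", "fetch_patents"},
    "format_report": {"aggregate"},
}
stages = build_stages(deps)
```

Running each stage with `asyncio.gather` then yields exactly the three-stage wall clock shown above.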
# Quick Start
## Python

```python
import asyncio

from synapse import Orchestrator, ToolCall

async def main():
    orch = Orchestrator(tools={
        "fetch_user": fetch_user,
        "fetch_catalog": fetch_catalog,
        "build_cart": build_cart,
        "send_receipt": send_receipt,
    })
    report = await orch.run([
        ToolCall(id="user", name="fetch_user", inputs={"user_id": 42}),
        ToolCall(id="catalog", name="fetch_catalog", inputs={"category": "widgets"}),
        # "$results.<id>" references an earlier call's output, creating the
        # dependency edges user -> cart and catalog -> cart. The two fetches
        # have no edges between them, so they run in parallel.
        ToolCall(id="cart", name="build_cart",
                 inputs={"user": "$results.user", "catalog": "$results.catalog"}),
        ToolCall(id="receipt", name="send_receipt",
                 inputs={"cart": "$results.cart"}),
    ])
    print(report)

asyncio.run(main())
```
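The quickstart assumes the four tools are already defined; any async callable works. Hypothetical stubs so the example runs end to end (the bodies and return shapes here are illustrative, not part of Synapse's API):

```python
import asyncio

# Illustrative stand-ins for real I/O-bound tools; the names match the
# quickstart above but everything else is made up for demonstration.
async def fetch_user(user_id: int) -> dict:
    await asyncio.sleep(0.05)  # simulate a database lookup
    return {"id": user_id, "name": "Ada"}

async def fetch_catalog(category: str) -> list[dict]:
    await asyncio.sleep(0.05)  # simulate an API call
    return [{"sku": "W-1", "category": category}]

async def build_cart(user: dict, catalog: list[dict]) -> dict:
    return {"user_id": user["id"], "items": catalog}

async def send_receipt(cart: dict) -> str:
    return f"receipt for user {cart['user_id']}"
```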
## TypeScript

```typescript
import { Orchestrator, ToolCall } from "synapse-orchestrator";

const orch = new Orchestrator({
  tools: { fetchUser, fetchCatalog, buildCart, sendReceipt },
});

const report = await orch.run([
  { id: "user", name: "fetchUser", inputs: { userId: 42 } },
  { id: "catalog", name: "fetchCatalog", inputs: { category: "widgets" } },
  // "$results.<id>" wires an earlier call's output into this one,
  // so "cart" waits on both fetches, which themselves run in parallel.
  { id: "cart", name: "buildCart",
    inputs: { user: "$results.user", catalog: "$results.catalog" } },
  { id: "receipt", name: "sendReceipt",
    inputs: { cart: "$results.cart" } },
]);

console.log(`Speedup: ${report.speedupEstimate.toFixed(2)}x`);
```
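The `$results.<id>` placeholders in both quickstarts do double duty: they declare the dependency edges, and before a call runs, each placeholder is swapped for the referenced call's output. The substitution can be sketched as (a simplification; Synapse's actual resolution rules may differ):

```python
def resolve_inputs(inputs: dict, results: dict) -> dict:
    """Replace "$results.<id>" string values with the named call's result."""
    resolved = {}
    for key, value in inputs.items():
        if isinstance(value, str) and value.startswith("$results."):
            # Strip the prefix and look up the completed call's output.
            resolved[key] = results[value[len("$results."):]]
        else:
            resolved[key] = value  # literal values pass through untouched
    return resolved

# Outputs of already-completed calls, keyed by call id.
results = {"user": {"id": 42}, "catalog": ["w1", "w2"]}
inputs = {"user": "$results.user", "catalog": "$results.catalog", "note": "hi"}
resolved = resolve_inputs(inputs, results)
```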