StateGraph Engine

The graph engine is the core of the orchestration system. It is inspired by LangGraph but is fully provider-agnostic.

from agent_orchestrator.core.graph import END, START, StateGraph
from agent_orchestrator.core.llm_nodes import llm_node
from agent_orchestrator.providers.local import LocalProvider

provider = LocalProvider(model="qwen2.5-coder:7b-instruct")

analyze = llm_node(
    provider=provider,
    system="Analyze the code.",
    prompt_key="code",
    output_key="analysis",
)
fix = llm_node(
    provider=provider,
    system="Fix the code.",
    prompt_template=lambda s: f"Analysis:\n{s['analysis']}\n\nCode:\n{s['code']}",
    output_key="fixed",
)

graph = StateGraph()
graph.add_node("analyze", analyze)
graph.add_node("fix", fix)
graph.add_edge(START, "analyze")
graph.add_edge("analyze", "fix")
graph.add_edge("fix", END)

# invoke() is a coroutine: call it from an async function,
# or wrap the call with asyncio.run(...).
result = await graph.compile().invoke({"code": "def avg(lst): return sum(lst) / len(lst)"})
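To see what `compile().invoke()` amounts to, here is a minimal, self-contained sketch of the same two-step flow: walk the edges from START to END, passing a shared state dict through each node. The node functions stand in for the `llm_node` instances above; all names are illustrative, not the library's internals.

```python
import asyncio

START, END = "__start__", "__end__"

async def analyze(state):          # stand-in for the "analyze" llm_node
    return {**state, "analysis": f"analysis of {state['code']!r}"}

async def fix(state):              # stand-in for the "fix" llm_node
    return {**state, "fixed": f"fixed using {state['analysis']}"}

NODES = {"analyze": analyze, "fix": fix}
EDGES = {START: "analyze", "analyze": "fix", "fix": END}

async def invoke(state):
    node = EDGES[START]
    while node != END:             # follow edges until END is reached
        state = await NODES[node](state)
        node = EDGES[node]
    return state

result = asyncio.run(invoke({"code": "def avg(lst): ..."}))
```

Each node receives the full state and returns an updated copy, which is why downstream nodes like `fix` can read the `analysis` key written upstream.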

Features

  • Parallel execution — independent nodes run via asyncio.gather
  • Conditional routing — route to different nodes based on LLM output
  • Human-in-the-loop — pause graph execution for user input, resume later
  • Checkpointing — save/restore graph state (InMemory, SQLite, Postgres)
  • LLM node factories — llm_node(), multi_provider_node(), chat_node()
  • Reducers — control how state merges (append, replace, merge_dict, etc.)
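Conditional routing is the feature that turns a pipeline into a graph: a router function inspects the state an upstream node produced and returns the name of the next node. The sketch below shows the mechanics with plain async functions; the router and node names are hypothetical, not the library's API.

```python
import asyncio

async def review(state):
    # stand-in for an LLM review node that writes a verdict into state
    verdict = "buggy" if "bug" in state["code"] else "clean"
    return {**state, "verdict": verdict}

async def fix(state):
    return {**state, "code": state["code"].replace("bug", "feature")}

async def approve(state):
    return {**state, "approved": True}

def router(state):
    # conditional edge: pick the next node from the review verdict
    return "fix" if state["verdict"] == "buggy" else "approve"

async def run(state):
    state = await review(state)
    nxt = router(state)
    state = await (fix if nxt == "fix" else approve)(state)
    return state

out = asyncio.run(run({"code": "def f(): bug()"}))
```

The same state-in, state-out contract applies: the router never mutates state, it only chooses the edge to follow.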

Graph Types (Dashboard)

| Type            | Description                                      |
|-----------------|--------------------------------------------------|
| Auto            | Classify input then route to appropriate sub-graph |
| Chat            | Simple conversational graph                      |
| Code Review     | Analyze code quality and suggest improvements    |
| Analyze + Fix   | Two-step: analyze then fix                       |
| Parallel Review | Multiple reviewers running in parallel           |
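The "Parallel Review" type relies on the parallel-execution feature: reviewer nodes that share no edges can run concurrently via asyncio.gather, with their outputs merged into one state. A minimal sketch, assuming merge_dict-style reduction (the reviewer functions are stand-ins for llm_node instances, not the library's API):

```python
import asyncio

async def style_reviewer(state):
    return {"style": f"style notes for {state['code']!r}"}

async def security_reviewer(state):
    return {"security": f"security notes for {state['code']!r}"}

async def fan_out(state):
    # run independent reviewers concurrently
    results = await asyncio.gather(style_reviewer(state),
                                   security_reviewer(state))
    merged = dict(state)
    for r in results:          # merge_dict-style reduction of the outputs
        merged.update(r)
    return merged

review = asyncio.run(fan_out({"code": "def avg(xs): ..."}))
```

Because each reviewer writes a distinct key, a dict merge suffices; overlapping keys are where an explicit reducer (append, replace, etc.) would matter.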