Hero Agent should start using service python generation flow #2
In hero_logic, we have a flow that allows us to generate Python scripts for working against OpenRPC services. We want hero_agent to use this hero_logic flow so it can work against Hero services.
Plan & deliberation
Current state
- `service_agent.json` template is a 5-node DAG: `service_selection` → `code_generation` → `script_execution` (Python3 via hero_proc) → `error_debug` → `result_summary`.
- Entry points: `play_start(workflow_sid, input_data)`, `play_status(play_sid)` (sketched below).
- hero_agent today: an MCP client (`crates/hero_agent/src/mcp_client.rs`) + 56 built-in Rust tools (`hero_agent_tools`). No python-generation flow today — it routes everything through LLM + atomic tool calls.
- So "use the hero_logic mcp" is aspirational. Two integration shapes are possible.
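For orientation, a minimal sketch of those two entry points as JSON-RPC payloads, built with `serde_json`. Only the method and parameter names come from the flow above; the named-params layout, the id values, and the example task are assumptions.

```rust
use serde_json::json;

fn main() {
    // play_start: kick off the service_agent workflow (named-params layout assumed).
    let start = json!({
        "jsonrpc": "2.0", "id": 1, "method": "play_start",
        "params": {
            "workflow_sid": "service_agent",
            "input_data": { "task": "example task text" } // hypothetical input_data shape
        }
    });
    // play_status: poll a running play by its sid (value here is a placeholder).
    let status = json!({
        "jsonrpc": "2.0", "id": 2, "method": "play_status",
        "params": { "play_sid": "play-123" }
    });
    println!("{start}\n{status}");
}
```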
Option A — Native RPC tool in `hero_agent_tools`
Add a `service_agent` tool that calls `hero_logic` directly over its OpenRPC socket: `play_start(workflow_sid=service_agent, input_data={...})`. The tool polls `play_status(play_sid)` and streams progress into the agent's SSE channel (per-node updates: selection → code → exec → summary).
Pros: ship fast; tight SSE integration; no extra hop.
Cons: couples hero_agent to hero_logic's RPC shape.
Option B — MCP wrapper around hero_logic
Expose hero_logic as an MCP server so any MCP-compatible client (hero_agent, hero_shrimp, Claude Desktop, future agents) picks it up through tool discovery.
Pros: generic, reusable, zero hero_agent code once the MCP server exists.
Cons: extra surface to maintain; one more hop; harder to stream node-level progress cleanly over MCP today.
Recommendation
Start with Option A, extract to Option B once the shape stabilizes. Ship a `service_agent` native tool in `hero_agent_tools` now; once it's proven, lift it into a thin MCP wrapper crate (`hero_logic_mcp`) that any agent can consume.

What the agent adds on top of `service_agent`
`service_agent` is single-shot and headless. The agent contributes:
- Routing: deciding when a task should go to `service_agent` vs. a built-in tool (file ops, git, web, shell) vs. a direct LLM answer. `service_agent` alone cannot make that call.
- Turning the user's request into `input_data` for the workflow.
- Surfacing `play_status` polling into the agent's SSE stream so the user sees node-level progress.
- Recovery: if `service_agent` gives up after its `error_debug` node, the agent can re-prompt, adjust inputs, or fall back to atomic tools.

Phased plan
Phase 1 — native integration (this issue):
- Add a `hero_logic` SDK dep to `hero_agent` (or reuse `openrpc_client!` against its spec).
- New `crates/hero_agent_tools/src/service_agent.rs` implementing `Tool`, with input `{ task: string, service_hints?: string[], timeout_ms?: u64 }`.
- `execute()`: `play_start` → loop `play_status` with backoff → stream node results via tool context → return `{ result, python_code, logs }` (see the sketch after this list).
- Register it in `all_tools()`.
- System-prompt guidance on when to prefer `service_agent` over atomic tools.
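A rough sketch of that `execute()` loop, assuming a JSON-in/JSON-out shape. The actual `Tool` trait, the OpenRPC client, and the `play_status` field names (`play_sid`, `status`, `result`, …) are not spelled out in this issue, so everything beyond the method names and the DAG nodes is a placeholder; the later comments also drop this native tool in favor of the hero_router MCP gateway.

```rust
// Sketch only: drives hero_logic's service_agent workflow from a native tool.
// `rpc` stands in for whatever OpenRPC client hero_agent would use over the
// socket; status/field names ("play_sid", "status", "result", ...) are assumed.
use std::time::Duration;
use serde_json::{json, Value};

pub async fn service_agent_execute<F, Fut>(
    rpc: F, // async fn(method, params) -> Result<Value, String>
    task: &str,
    timeout_ms: u64,
) -> Result<Value, String>
where
    F: Fn(&'static str, Value) -> Fut,
    Fut: std::future::Future<Output = Result<Value, String>>,
{
    // 1. play_start(workflow_sid = "service_agent", input_data = { task })
    let started = rpc(
        "play_start",
        json!({ "workflow_sid": "service_agent", "input_data": { "task": task } }),
    )
    .await?;
    let play_sid = started["play_sid"]
        .as_str()
        .ok_or("play_start returned no play_sid")?
        .to_owned();

    // 2. Poll play_status with capped exponential backoff until the DAG finishes.
    let (mut waited, mut delay) = (0u64, 250u64);
    loop {
        let status = rpc("play_status", json!({ "play_sid": play_sid.as_str() })).await?;
        match status["status"].as_str() {
            // Terminal: return the { result, python_code, logs } shape described above.
            Some("done") => {
                return Ok(json!({
                    "result": status["result"],
                    "python_code": status["python_code"],
                    "logs": status["logs"],
                }))
            }
            Some("error") => return Err(status["error"].to_string()),
            // Otherwise still in service_selection / code_generation / script_execution /
            // error_debug; this is where node results would be streamed via tool context.
            _ => {}
        }
        if waited >= timeout_ms {
            return Err(format!("service_agent timed out after {timeout_ms} ms"));
        }
        tokio::time::sleep(Duration::from_millis(delay)).await;
        waited += delay;
        delay = (delay * 2).min(2_000);
    }
}
```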
Phase 2 — generalize (follow-up issue): a `hero_logic_mcp` crate that re-exposes `play_start`/`play_status` + template listing over MCP. The agent then discovers it via the existing MCP client; the native tool becomes optional/removable.

Open questions before I start
- Should built-in tools also route through `service_agent`? My default: keep built-ins as the fast path.
- Does `hero_context` flow end-to-end today, or do we need wiring?
- Expose `service_agent` as a tool and let the LLM choose, or add a lightweight classifier that hard-routes OpenRPC-service intents? I'd start with LLM-chooses (simpler, observable) and only add a classifier if selection is unreliable.
- Is a `hero_logic_mcp` crate already planned elsewhere? If so, skip Option A and wait for it.

Happy to start on Phase 1 once the above are settled.
Got it — revised plan with `hero_router`'s MCP gateway as the integration point.

Revised architecture
`hero_router` already exposes every OpenRPC service as MCP at `POST /mcp/{service_name}` — `tools/list` maps to the service's OpenRPC methods, `tools/call` maps to JSON-RPC invocations. It also injects `X-Hero-Context` / `X-Hero-Claims` / `X-Forwarded-Prefix` on every forwarded request.

So there is no `hero_logic_mcp` crate to write and no native RPC tool in `hero_agent_tools`. hero_agent just points its existing MCP client at the router's `/mcp/{service}` endpoint, and `hero_logic` (plus every other service) shows up as tools (see the sketch below).
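Concretely, this is roughly what hero_agent's MCP client ends up POSTing through the router. `tools/call` with `name`/`arguments` params is the standard MCP request shape; the router address, header values, and the example arguments are placeholders, and the header forwarding itself is wired up in the implementation comment further down.

```rust
// Minimal sketch: an MCP tools/call request POSTed to hero_router's
// /mcp/hero_logic endpoint, which the router maps to a JSON-RPC call on
// hero_logic. Assumes reqwest with the "json" feature plus tokio.
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let body = json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "play_start",
            "arguments": {
                "workflow_sid": "service_agent",
                "input_data": { "task": "example task text" } // hypothetical
            }
        }
    });

    let resp = reqwest::Client::new()
        .post("http://127.0.0.1:9988/mcp/hero_logic")
        // Claims forwarded by hero_agent so they reach hero_logic via the router.
        .header("X-Hero-Context", "ctx-placeholder")
        .header("X-Hero-Claims", "claims-placeholder")
        .json(&body)
        .send()
        .await?;

    println!("{}", resp.text().await?);
    Ok(())
}
```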
Phase 1 — wire hero_agent's MCP client to hero_router
- `crates/hero_agent/src/mcp_client.rs` currently supports UDS-HTTP and stdio. hero_router exposes MCP over TCP-HTTP at `/mcp/{service}` — add a TCP-HTTP transport variant so the MCP client can POST to `http://{router_host}:{router_port}/mcp/{service}`.
- Configure entries for `hero_logic` (for `play_start`/`play_status` → the `service_agent` workflow), and likely a curated set of others (hero_proc, osis, etc.) so the LLM can either call a service directly or delegate to `hero_logic` when it wants python-generation.
- Forward `X-Hero-Context` / `X-Hero-Claims` through the MCP client into the outbound HTTP request to hero_router so claims flow end-to-end (agent → router → service). This is the main wiring gap today. Will handle as part of this issue.
- Delegate to `hero_logic.play_start(service_agent, …)` for multi-service python-generation tasks. LLM chooses.
- `play_status` is poll-based; the simplest shape is the LLM polls `play_status` as repeat tool calls and watches node-level progress (`service_selection` → `code_generation` → `script_execution` → …). A thin wrapper that polls and emits SSE progress is a possible follow-up.
Phase 2 — nice-to-haves (follow-ups)

Decisions locked in from your response
The LLM chooses between `service_agent` (via hero_logic MCP), direct service tool calls, and built-in tools.

Starting on Phase 1.
Phase 1 implemented on `development` (unstaged — ready for review).

Changes
`crates/hero_agent/src/mcp_client.rs`
- New `Transport::Http { url }` variant alongside the existing `Socket` and `Stdio`.
- `url` field in `McpServerConfig` with `${VAR}` / `${VAR:-default}` env expansion (so the same config works across dev / CI / tfgrid).
- `http_post()` helper using the already-present `reqwest` dep.
- A `tokio::task_local!` `FORWARDED_HEADERS` that carries an `Arc<HashMap<String,String>>`. Both `unix_post` and `http_post` read it and attach headers on every outbound MCP call. Stdio transport ignores it (the subprocess has no HTTP surface).

`crates/hero_agent_server/src/routes.rs`
- `rpc_endpoint` and `chat` now take `axum::http::HeaderMap`. They extract `X-Hero-Context` / `X-Hero-Claims` / `X-Forwarded-Prefix` (lowercased, axum-style) and scope them into `FORWARDED_HEADERS` for the duration of the request.
- Because `tokio::spawn` doesn't carry task-locals across the boundary, `rpc_chat` and `chat` snapshot the scope before spawning and re-scope inside the spawned task, so `mcp.call_tool` inside `handle_message` sees the headers.

- `examples/mcp.json` — example config showing the hero_router `/mcp/{service}` pattern for `hero_logic`, `hero_osis`, `hero_proc`.
- `docs/spec.md` — config docs updated with all three transports and a note that `url` (via hero_router) is preferred for Hero services since it's the only path that propagates claims.
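A stripped-down sketch of the task-local pattern those bullets describe: a `FORWARDED_HEADERS` task-local scoped per request, read by the outbound-call path, and re-scoped from a snapshot across `tokio::spawn`. Names mirror the description above; the actual axum extraction and `mcp.call_tool` plumbing are elided, so treat this as illustration rather than the committed code.

```rust
// Sketch of per-request header forwarding with a tokio task-local.
use std::{collections::HashMap, sync::Arc};

tokio::task_local! {
    static FORWARDED_HEADERS: Arc<HashMap<String, String>>;
}

// What unix_post / http_post would do: read the task-local if set and attach
// each entry as an HTTP header on the outbound MCP call (attachment elided).
fn headers_for_outbound_call() -> Arc<HashMap<String, String>> {
    FORWARDED_HEADERS
        .try_with(|h| h.clone())
        .unwrap_or_else(|_| Arc::new(HashMap::new()))
}

#[tokio::main]
async fn main() {
    let mut incoming = HashMap::new();
    incoming.insert("x-hero-context".into(), "ctx-placeholder".into());
    incoming.insert("x-hero-claims".into(), "claims-placeholder".into());
    let incoming = Arc::new(incoming);

    // A request handler scopes the extracted headers for the request's duration...
    FORWARDED_HEADERS
        .scope(incoming.clone(), async move {
            // ...snapshots them before spawning (task-locals don't cross tokio::spawn)...
            let snapshot = headers_for_outbound_call();
            tokio::spawn(FORWARDED_HEADERS.scope(snapshot, async {
                // ...so work inside the spawned task (e.g. mcp.call_tool) still sees them.
                println!("spawned task sees {} headers", headers_for_outbound_call().len());
            }))
            .await
            .unwrap();
        })
        .await;
}
```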
How `hero_logic` becomes usable
1. Copy `examples/mcp.json` into `$HERO_AGENT_DATA_DIR/mcp.json` (default `~/hero/var/agent/mcp.json`).
2. Run `hero_router` (default 127.0.0.1:9988) with `hero_logic` running behind it.
3. Start `hero_agent_server`. On startup it loads the MCP config, `tools/list` hits `http://127.0.0.1:9988/mcp/hero_logic`, and hero_logic's OpenRPC methods (`play_start`, `play_status`, `template_list`, `workflow_from_template`, …) appear as MCP tools.
4. The LLM can call `play_start` with `workflow_sid=service_agent` + `input_data` to kick off the python-generation workflow, and poll `play_status` for progress.
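The file itself isn't reproduced in this issue, but from the `mcpServers` entry and the env-default `url` quoted in the closing comment, the router-fronted entries plausibly look like the sketch below; any other fields, and the exact service list, should be taken from the repo's `examples/mcp.json`.

```json
{
  "mcpServers": {
    "hero_logic": {
      "url": "http://${HERO_ROUTER_HOST:-127.0.0.1}:${HERO_ROUTER_PORT:-9988}/mcp/hero_logic"
    },
    "hero_osis": {
      "url": "http://${HERO_ROUTER_HOST:-127.0.0.1}:${HERO_ROUTER_PORT:-9988}/mcp/hero_osis"
    },
    "hero_proc": {
      "url": "http://${HERO_ROUTER_HOST:-127.0.0.1}:${HERO_ROUTER_PORT:-9988}/mcp/hero_proc"
    }
  }
}
```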
Claims flow end-to-end: hero_router → hero_agent (with `X-Hero-Context` / `X-Hero-Claims`) → hero_agent → hero_router `/mcp/hero_logic` (same headers re-attached) → hero_router → hero_logic.

Build status
`cargo build --workspace` ✅
`cargo test -p hero_agent --lib` ✅ (59 existing + 3 new)

Not committed — let me know if you want me to squash into a branch + PR, or tweak anything first (e.g. system prompt nudges, auto-discovery from hero_router's service list as Phase 2).
Landed in `development` as 573dac2.

Built on the existing HTTP MCP transport (cf9ca97) — this adds hero context header forwarding and env-var expansion for `url`, so pointing an `mcpServers` entry at `http://${HERO_ROUTER_HOST:-127.0.0.1}:${HERO_ROUTER_PORT:-9988}/mcp/hero_logic` now works end-to-end with claims propagating via `X-Hero-Context` / `X-Hero-Claims` / `X-Forwarded-Prefix`.

See `examples/mcp.json` for the router-fronted config pattern.

Closing — Phase 2 (auto-discovery of services from hero_router, system-prompt nudges) can be separate follow-up issues.