hero_runner_rhai #13
move functionality in this repo

- `/Volumes/T7/code0/hero_code/crates/hero_runner_rhai`
- remove in original repo once it's here
- it's more logical here
- make 2 crates

requirements

- make sure we follow best practices
Implementation Spec for Issue #13 — hero_runner_rhai
Objective
Move the existing `hero_runner_rhai` crate from `hero_code/crates/hero_runner_rhai/` into the `hero_lib_rhai` workspace and split it into two functional crates: `hero_runner_rhai_server` (the pre-fork worker pool wrapped in a Hero-compliant OpenRPC service exposing session start/stop/list with real-time log streaming) and `hero_runner_rhai_ui` (an Axum + Bootstrap admin dashboard following the standard `hero_ui_dashboard` pattern). A small third crate `hero_runner_rhai` provides the `--start`/`--stop` CLI per the `hero_proc_service_selfstart` pattern. Engine creation happens once at server startup before the tokio runtime so every session is a true copy-on-write `fork(2)` of an already-built engine. Logs from each forked session are piped over a dedicated log pipe to the server, fanned out via a tokio broadcast channel for SSE consumers, and optionally persisted into hero_proc with the source `hero_runner_rhai_server` and tags `session=<id>`.

Requirements

- Move `hero_code/crates/hero_runner_rhai/` to `hero_lib_rhai/crates/hero_runner_rhai_server/` (source repo deletion is out of scope — see Notes).
- Split into `hero_runner_rhai_server` and `hero_runner_rhai_ui`. A third small CLI crate `hero_runner_rhai` owns lifecycle.
- The UI follows the `hero_ui_dashboard` skill (Bootstrap 5.3.3, navbar/sidebar/tabs, `/rpc` proxy to the server) and connects to `hero_runner_rhai_server` via the SDK pattern.
- Keep `fork(2)`-ed children — the engine is built once in the parent before any tokio runtime starts, so children inherit registered modules and their lazy caches via copy-on-write rather than rebuilding them.
- OpenRPC methods: `session_start`, `session_stop`, `session_list`, `session_get`, `session_logs` (paged), `session_result`, `health`.
- Forward `engine.on_print`/`engine.on_debug` to hero_proc using `hero_proc_sdk` with source `hero_runner_rhai_server` and tags `session=<id>`, `kind=stdout|stderr` — controlled per-session and via a global env flag.
- Log fan-out: `tokio::sync::broadcast::Sender<LogLine>` → SSE on the UI (`/sse/session/:id`) and OpenRPC long-poll for SDK consumers, plus optional hero_proc forward.
- Follow `/hero_sockets`: bind `rpc.sock` and `ui.sock` under `$HERO_SOCKET_DIR/hero_runner_rhai/`; expose `/health`, `/.well-known/heroservice.json`, `/openrpc.json`.
- Follow `/hero_proc_service_selfstart`: the CLI binary owns `--start`/`--stop`; server and UI binaries are plain foreground processes with no lifecycle flags.
- Follow `/hero_proc_sdk`: `ActionBuilder` + `ServiceBuilder` with `is_process()`, `kill_other`, and health checks (`openrpc_socket` on `rpc.sock` and `ui.sock`).
- `cargo check --workspace` passes; existing crates unchanged.

Files to Modify/Create
Workspace:

- `Cargo.toml` (root) — add `crates/hero_runner_rhai`, `crates/hero_runner_rhai_server`, `crates/hero_runner_rhai_ui` to `[workspace]` members; add any missing entries to `[workspace.dependencies]` (`nix`, `libc`, `axum`, `hyper`, `hyper-util`, `tower`, `askama`, `rust-embed`, `tokio-util`, `futures`, `chrono`, `clap`, `hero_proc_sdk`, `hero_rpc_openrpc`, `hero_rpc_derive`).

Server crate (`crates/hero_runner_rhai_server/`):

- `Cargo.toml` — package `hero_runner_rhai_server`; lib + bin `hero_runner_rhai_server`; deps include all `herolib_*_rhai`, `rhai`, `nix`, `libc`, `serde`, `serde_json`, `tokio`, `axum`, `hyper`, `hyper-util`, `tower`, `tokio-util`, `futures`, `anyhow`, `thiserror`, `hero_proc_sdk`, `hero_rpc_openrpc`, `hero_rpc_derive`, `ureq`, `log`, `env_logger`, `chrono`.
- `src/lib.rs` — module declarations + re-exports of `WorkerPool`, `SessionManager`, `SessionId`, `SessionStatus`, `LogLine`, `ScriptRequest`, `ScriptResult`.
- `src/types.rs` — migrated verbatim, plus new `SessionInfo`, `SessionStatus { Pending, Running, Succeeded, Failed, Cancelled }`, `LogLine { session_id, timestamp_ms, kind, line }`, `WorkerFrame { Started { pid }, Log(LogLine), Result(ScriptResult) }`.
- `src/ipc.rs` — migrated verbatim (length-prefixed JSON over UnixStream).
- `src/engine.rs` — migrated; `register_all_modules` and `configure_engine_limits` become the single canonical engine builder also called by the parent at startup.
- `src/resolver.rs` — migrated verbatim.
- `src/worker.rs` — adapted: grandchild opens a dedicated log pipe; engine `on_print`/`on_debug` write framed `WorkerFrame::Log` records to it; final `WorkerFrame::Result` ends the session; worker forwards every frame from the log pipe to its parent stream.
- `src/pool.rs` — adapted: `dispatch(req) -> SessionHandle` returns the worker index, a live frame receiver, and a oneshot for the final `ScriptResult`. Parent reader thread demuxes `WorkerFrame::Started`, `Log`, and `Result`.
- `src/session.rs` — NEW: `SessionManager` (Arc, `RwLock<HashMap<SessionId, SessionEntry>>`); `start(req) -> SessionId`, `stop(id)` (SIGTERM then SIGKILL via `nix`), `list()`, `get(id)`, `subscribe(id) -> (backfill, broadcast::Receiver<LogLine>)`. Per-session bounded ring buffer (default 1024 lines).
- `src/proc_log.rs` — NEW: optional log forwarder using `hero_proc_sdk`; activated by env `HERO_RUNNER_RHAI_FORWARD_LOGS=1` and per-session `forward_logs` flag in `session_start`.
- `src/openrpc.rs` — NEW: OpenRPC dispatcher (`POST /rpc`) handling all methods; thin async functions over `Arc<SessionManager>`; supports batch + 204 for notifications.
- `src/openrpc.json` — NEW: hand-written OpenRPC 1.x document; consumed at compile time by `openrpc_client!` for the SDK and served at runtime as `GET /openrpc.json`.
- `src/sockets.rs` — NEW: `socket_dir()`, `socket_path("hero_runner_rhai", typ)`, `bind_unix_socket(path, axum::Router)` from the `hero_sockets` skill.
- `src/discovery.rs` — NEW: `/health` and `/.well-known/heroservice.json` handlers.
- `src/sse.rs` — NEW: `GET /sse/session/:id` Axum SSE handler emitting one event per `LogLine` with `event: stdout|stderr`; sends ring-buffer backfill first, then live frames.
- `src/main.rs` — NEW: load-bearing order — (1) `env_logger::init()`, (2) build the canonical `Engine` once (warms module registries / lazy caches), (3) `WorkerPool::new(N)` BEFORE any tokio runtime, (4) construct `tokio::runtime::Builder::new_multi_thread().enable_all().build()?` and `block_on(serve(pool))`, (5) inside `serve` build `SessionManager` + `axum::Router` and bind `rpc.sock` (and SSE-augmented routes) via `bind_unix_socket`. Default `N` from env `HERO_RUNNER_RHAI_WORKERS` (default 4).
- `src/sdk.rs` — NEW: `hero_rpc_derive::openrpc_client!("src/openrpc.json")` generates `HeroRunnerRhaiServerClient`. Re-exported as `hero_runner_rhai_server::sdk::Client`.

UI crate (`crates/hero_runner_rhai_ui/`):

- `Cargo.toml` — package `hero_runner_rhai_ui`; bin `hero_runner_rhai_ui`; depends on `hero_runner_rhai_server` (path) for `sdk::Client`, plus `axum`, `hyper`, `hyper-util`, `tower`, `tokio`, `askama`, `rust-embed`, `serde`, `serde_json`, `anyhow`, `futures`, `chrono`. No `hero_proc_sdk`, no `clap`.
- `src/main.rs` — NEW: binds `ui.sock` under `hero_runner_rhai/`; builds router with `/`, `/health`, `/.well-known/heroservice.json`, `/rpc` (proxy → server `rpc.sock`), `/sse/session/:id` (proxy → server SSE), embedded static assets.
- `src/assets.rs` — NEW: `rust-embed` struct serving `static/`.
- `src/routes.rs` — NEW: `/` (server-side aggregation via `sdk::Client::session_list`), `/rpc` proxy, `/sse/session/:id` proxy.
- `src/templates.rs` — NEW: Askama template structs.
- `templates/base.html` — NEW: navbar/footer skeleton per `hero_ui_dashboard`.
- `templates/index.html` — NEW: tabs Sessions / Run Script / Logs / Stats / Admin / Docs.
- `templates/partials/sidebar.html` — NEW: System Stats sidebar (Workers idle/busy, Sessions running/total, Memory, CPU).
- `static/css/{bootstrap.min.css,bootstrap-icons.min.css,dashboard.css}` — NEW.
- `static/js/{bootstrap.bundle.min.js,dashboard.js}` — NEW; `dashboard.js` includes `rpcCall()`, `switchTab()`, `tailSession(id)` SSE wiring, theme toggle.
- `static/{favicon.svg,openrpc.json}` — NEW.
- `scripts/download-assets.sh` — NEW: pulls Bootstrap 5.3.3 / bootstrap-icons into `static/`.

CLI crate (`crates/hero_runner_rhai/`):

- `Cargo.toml` — package `hero_runner_rhai`; bin `hero_runner_rhai`; deps: `hero_proc_sdk`, `clap` (derive), `tokio`, `anyhow`. No server/UI deps.
- `src/main.rs` — NEW: clap-derive CLI with `--start`/`--stop`. Resolves `bin_dir = current_exe().parent()`; builds two `ActionBuilder`s (`hero_runner_rhai_server`, `hero_runner_rhai_ui`) with `is_process()`, `interpreter("exec")`, `stop_signal("SIGTERM")`, appropriate `kill_other.socket` paths, and `health_checks` of type `openrpc_socket`. `ServiceBuilder::new("hero_runner_rhai")` aggregates both with `.requires(&["hero_proc"])`. `--start` → `hp.restart_service("hero_runner_rhai", spec, 30).await?`. `--stop` → `hp.stop_service("hero_runner_rhai", 15).await?`.

Helper scripts and docs:

- `build_runner.sh` (root) — builds the three crates and copies binaries into `~/hero/bin/`.
- `run_runner.sh` (root) — builds then `~/hero/bin/hero_runner_rhai --start`.
- `stop_runner.sh` (root) — `~/hero/bin/hero_runner_rhai --stop`.
- `docs/hero_runner_rhai.md` — overview, architecture diagram, OpenRPC method list, log-streaming design.
- `crates/hero_runner_rhai*/README.md` — three READMEs.
- Root `README.md` updated to mention the new crates in the workspace layout.

Source repo (cannot be modified by this skill):

- `/Volumes/T7/code0/hero_code/crates/hero_runner_rhai/` — flag for manual deletion in a follow-up issue against `hero_code`.

Implementation Plan
Step 1 — Migrate the source crate verbatim into the workspace
Files: `crates/hero_runner_rhai_server/Cargo.toml`, `crates/hero_runner_rhai_server/src/{lib,types,ipc,engine,worker,pool,resolver}.rs`, root `Cargo.toml`.

- Copy `hero_code/crates/hero_runner_rhai/src/` into `crates/hero_runner_rhai_server/src/` unchanged.
- In `Cargo.toml`, rename `name` to `hero_runner_rhai_server`, declare `[lib]` and `[[bin]] name = "hero_runner_rhai_server"` (binary `main.rs` follows in Step 5).
- Add `crates/hero_runner_rhai_server` to root `Cargo.toml` `[workspace]` members.
- Add `nix` and `libc` to `[workspace.dependencies]` (versions matching `hero_code`'s workspace).
- Run `cargo check -p hero_runner_rhai_server` to confirm a clean library compile.
- Do not delete anything from `hero_code` (sibling repo — see Notes).

Dependencies: none.
Step 2 — Adapt worker/grandchild for live log streaming
Files: `crates/hero_runner_rhai_server/src/{types,worker,engine,pool}.rs`.

- In `types.rs`, add `LogLine`, `LogKind { Stdout, Stderr }`, and `WorkerFrame { Started { pid }, Log(LogLine), Result(ScriptResult) }` (serde-tagged enum).
- In `engine.rs`, change `on_print`/`on_debug` to write a `WorkerFrame::Log` record into a writer fd injected at engine build time. Keep the `Vec<String>` capture as a backstop for `ScriptResult.stdout`/`stderr`.
- In `worker.rs::grandchild_main`, open a dedicated log pipe alongside the result pipe; emit `WorkerFrame::Started { pid }` immediately after fork; write line frames during execution; write a final `WorkerFrame::Result` and exit.
- In `pool.rs`, replace `execute_blocking` with a streaming `dispatch(req) -> (UnboundedReceiver<WorkerFrame>, JoinHandle)`. The reader thread reads frames in a loop until `Result(_)`.

Dependencies: Step 1.
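The frames on the log pipe use the same length-prefixed JSON framing the spec describes for `ipc.rs`. A minimal std-only sketch of that framing — `write_frame`/`read_frame` and the hand-written JSON payloads are illustrative, not the crate's actual API:

```rust
use std::io::{self, Read, Write};

// Write one frame: a 4-byte big-endian length prefix followed by the JSON payload.
fn write_frame<W: Write>(w: &mut W, payload: &str) -> io::Result<()> {
    let bytes = payload.as_bytes();
    w.write_all(&(bytes.len() as u32).to_be_bytes())?;
    w.write_all(bytes)
}

// Read one frame back; returns the payload string.
fn read_frame<R: Read>(r: &mut R) -> io::Result<String> {
    let mut len_buf = [0u8; 4];
    r.read_exact(&mut len_buf)?;
    let len = u32::from_be_bytes(len_buf) as usize;
    let mut payload = vec![0u8; len];
    r.read_exact(&mut payload)?;
    String::from_utf8(payload).map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))
}

fn main() -> io::Result<()> {
    // Simulate the pipe with an in-memory buffer: a Log frame, then a Result frame.
    let mut buf: Vec<u8> = Vec::new();
    write_frame(&mut buf, r#"{"Log":{"kind":"stdout","line":"hello"}}"#)?;
    write_frame(&mut buf, r#"{"Result":{"success":true}}"#)?;
    let mut cursor = io::Cursor::new(buf);
    println!("{}", read_frame(&mut cursor)?);
    println!("{}", read_frame(&mut cursor)?);
    Ok(())
}
```

The prefix makes frame boundaries unambiguous on a byte stream, which is why the reader thread can loop until it sees a `Result` frame.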
Step 3 — Build the SessionManager and broadcast log fan-out
Files: `crates/hero_runner_rhai_server/src/{session,proc_log}.rs`, `lib.rs`.

- `SessionManager { pool: Arc<WorkerPool>, sessions: RwLock<HashMap<String, SessionEntry>>, forward_to_proc: bool }`.
- `SessionEntry { id, started_at, status, child_pid: Option<i32>, log_tx: broadcast::Sender<LogLine>, ring: Arc<Mutex<VecDeque<LogLine>>>, final_result: OnceCell<ScriptResult> }`.
- `start(req)`: UUID, dispatch via pool, spawn a task that consumes `WorkerFrame`s.
- `stop(id)`: `nix::sys::signal::kill(SIGTERM)` then SIGKILL after 5 s.
- `list()` returns lightweight `SessionInfo`.
- `subscribe(id)` returns `(Vec<LogLine>, broadcast::Receiver<LogLine>)`.
- `proc_log.rs`: when forwarding is enabled, spawn one task per session that subscribes and pushes via the `hero_proc_sdk` log API with source `hero_runner_rhai_server` and tags `session=<id>`, `kind=stdout|stderr`.

Dependencies: Step 2.
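The per-session ring buffer behind `subscribe(id)` can be a plain bounded `VecDeque`. A std-only sketch of the eviction and backfill behaviour — `Ring` and this cut-down `LogLine` are illustrative stand-ins, not the crate's types:

```rust
use std::collections::VecDeque;

const RING_CAP: usize = 1024; // matches the default ring size in the spec

// Cut-down stand-in for LogLine; the real type also carries session_id and kind.
#[derive(Clone, Debug)]
struct LogLine { timestamp_ms: u64, line: String }

struct Ring { buf: VecDeque<LogLine> }

impl Ring {
    fn new() -> Self { Ring { buf: VecDeque::with_capacity(RING_CAP) } }

    // Push a line, evicting the oldest once the buffer is full.
    fn push(&mut self, l: LogLine) {
        if self.buf.len() == RING_CAP { self.buf.pop_front(); }
        self.buf.push_back(l);
    }

    // Backfill for a late subscriber: everything currently retained, in order.
    fn backfill(&self) -> Vec<LogLine> { self.buf.iter().cloned().collect() }
}

fn main() {
    let mut ring = Ring::new();
    for i in 0..1500u64 {
        ring.push(LogLine { timestamp_ms: i, line: format!("line {i}") });
    }
    let back = ring.backfill();
    // Only the newest RING_CAP lines survive: timestamps 476..=1499.
    println!("{} {}", back.len(), back[0].timestamp_ms);
}
```

A subscriber receives this backfill first, then switches to the live `broadcast::Receiver`, so a late SSE client sees recent history plus the live tail.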
Step 4 — Author the OpenRPC document and dispatcher
Files: `crates/hero_runner_rhai_server/src/openrpc.json`, `openrpc.rs`, `sdk.rs`.

- Author `openrpc.json` (OpenRPC 1.2.6) with all methods.
- Implement `JsonRpcDispatcher` over `Arc<SessionManager>`; expose axum `POST /rpc`; handle the JSON-RPC 2.0 envelope (single + batch + notifications).
- `sdk.rs`: invoke `hero_rpc_derive::openrpc_client!("src/openrpc.json")` → `HeroRunnerRhaiServerClient`. Re-export `pub mod sdk;` from `lib.rs`.

Dependencies: Step 3.
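The envelope rules the dispatcher must honour are small: a response echoes the request `id`, a request without an `id` is a notification that gets no response body (hence the 204), and a batch reply contains only the non-notification responses. A std-only model of that logic — `respond` and `respond_batch` are hypothetical helpers; the real dispatcher parses with `serde_json`:

```rust
// Model of the dispatcher's envelope decision: Some(response body) or None (HTTP 204).
fn respond(id: Option<u64>, result_json: &str) -> Option<String> {
    // A missing id means the request was a notification: no response at all.
    let id = id?;
    Some(format!(r#"{{"jsonrpc":"2.0","id":{id},"result":{result_json}}}"#))
}

// Batch: keep only responses to calls that carried an id; an all-notification
// batch produces no body at all.
fn respond_batch(reqs: &[(Option<u64>, &str)]) -> Option<String> {
    let parts: Vec<String> = reqs.iter().filter_map(|(id, r)| respond(*id, r)).collect();
    if parts.is_empty() { None } else { Some(format!("[{}]", parts.join(","))) }
}

fn main() {
    // Normal call: the envelope echoes the id.
    println!("{:?}", respond(Some(7), r#"{"session_id":"abc"}"#));
    // A batch of pure notifications yields no body either.
    assert!(respond_batch(&[(None, "1"), (None, "2")]).is_none());
}
```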
Step 5 — Server binary, sockets, SSE, discovery, health
Files: `crates/hero_runner_rhai_server/src/{main,sockets,discovery,sse}.rs`.

- `sockets.rs`: copy from the `hero_sockets` skill.
- `discovery.rs`: `/health` and `/.well-known/heroservice.json`.
- `sse.rs`: `/sse/session/:id` reads from `SessionManager::subscribe`; sends backfill, then live frames.
- `main.rs`: critical ordering — engine warm-up; `WorkerPool::new(N)` BEFORE any tokio runtime; manually construct the multi-thread runtime via `tokio::runtime::Builder` and `block_on(serve(pool))`. Do NOT use `#[tokio::main]`. Add a SIGTERM handler for graceful shutdown.

Dependencies: Step 4.
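On the wire, each `LogLine` becomes one SSE event: an `event:` field carrying the kind (so an `EventSource` can listen for `stdout` and `stderr` separately) and one `data:` field per payload line, terminated by a blank line. Axum's SSE types do this encoding in the real handler; the std-only sketch below (`sse_frame` is an illustrative name) just shows the wire format:

```rust
// Encode one log line as a Server-Sent Events frame.
fn sse_frame(kind: &str, line: &str) -> String {
    // Multi-line payloads need one data: field per line per the SSE format.
    let data: String = line
        .split('\n')
        .map(|l| format!("data: {l}\n"))
        .collect();
    format!("event: {kind}\n{data}\n")
}

fn main() {
    print!("{}", sse_frame("stdout", "hello from a session"));
}
```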
Step 6 — Build the UI crate and SSE-aware dashboard
Files: `crates/hero_runner_rhai_ui/Cargo.toml`, `src/{main,assets,routes,templates}.rs`, `templates/{base,index}.html`, `templates/partials/sidebar.html`, `static/{css,js,favicon.svg,openrpc.json}/...`, `scripts/download-assets.sh`.

- Use `hero_runner_rhai_server::sdk::Client` for server-side aggregation.
- `/rpc` proxies the JSON body to the server's `rpc.sock`.
- `/sse/session/:id` proxies the SSE stream verbatim.
- Tabs: Sessions, Run Script (`session_start`), Logs (session selector + EventSource live tail), Stats, Admin (stop-all), Docs.
- `dashboard.js`: `rpcCall(method, params)`, `switchTab()`, `tailSession(id)`, theme toggle.
- Bind `ui.sock` under `$HERO_SOCKET_DIR/hero_runner_rhai/` (same dir as the server).

Dependencies: Step 5.
Step 7 — CLI binary with --start/--stop and hero_proc registration
Files: `crates/hero_runner_rhai/Cargo.toml`, `src/main.rs`.

- Follow the `/hero_proc_service_selfstart` template with two `ActionBuilder`s aggregated by `ServiceBuilder::new("hero_runner_rhai").requires(&["hero_proc"])`.
- `--start` → `hp.restart_service("hero_runner_rhai", spec, 30)`.
- `--stop` → `hp.stop_service("hero_runner_rhai", 15)`.

Dependencies: Step 6.
Step 8 — Bash helpers, docs, and final wiring
Files: `build_runner.sh`, `run_runner.sh`, `stop_runner.sh`, `docs/hero_runner_rhai.md`, three crate READMEs, root `README.md`.

- Make the `.sh` files executable, with a shebang and a `cd` into the script dir.
- Confirm `cargo check --workspace` and `cargo build --workspace` are clean.

Dependencies: Step 7.
Acceptance Criteria
- `cargo build -p hero_runner_rhai_server` succeeds.
- `cargo build -p hero_runner_rhai_ui` succeeds.
- `cargo build -p hero_runner_rhai` succeeds.
- `cargo check --workspace` passes.
- `~/hero/bin/hero_runner_rhai --start` registers and runs both components via hero_proc; `hero_proc service.list` shows `hero_runner_rhai` healthy.
- `~/hero/var/sockets/hero_runner_rhai/{rpc.sock, ui.sock}` exist after start.
- `curl --unix-socket .../rpc.sock http://localhost/openrpc.json` returns the document.
- `session_start` returns a `session_id`; `session_list` includes it; `session_logs` returns the printed line; `session_result` returns `success: true`.
- Sessions report a `child_pid` distinct from the server PID; `ps -p <child_pid>` confirms it.
- `curl --unix-socket .../rpc.sock http://localhost/sse/session/<id>` streams stdout lines as produced (10 lines, 1/s).
- With `HERO_RUNNER_RHAI_FORWARD_LOGS=1`, `hero_proc log.list` returns the same lines tagged `session=<id>`.

Notes
- The source crate lives in `hero_code` — a different repo. This skill only modifies `hero_lib_rhai`. Removal from `hero_code` and switching its consumers to import the new path must be a follow-up commit/PR there.
- Calling `nix::unistd::fork()` after a multithreaded tokio runtime exists is undefined behaviour. Engine and `WorkerPool` are built BEFORE any tokio runtime; `main.rs` manually constructs the runtime via `tokio::runtime::Builder` and uses `block_on`. `#[tokio::main]` is NOT used on the server binary.
- Log streaming rides on `WorkerFrame::Log` records produced by the engine's `on_print`/`on_debug` callbacks; the parent demuxes and fans out via `tokio::sync::broadcast`. Three consumer paths: SSE on the UI (humans), OpenRPC long-poll `session_logs` with an `after_ts` cursor (SDKs), and optional hero_proc forward (durable storage). A bounded ring buffer (1024 lines) per session backs late subscribers. Preferred over redirecting fd 1/2 because Rhai callbacks are line-aware; preferred over hero_proc-only because ephemeral sessions stay in-process.
- `session_start` accepts an optional `forward_logs` boolean; when set it overrides the global env flag for that session.
- A third crate (`hero_runner_rhai`) exists only for `--start`/`--stop` per `/hero_proc_service_selfstart`. The functionality lives in two crates as requested; the CLI is plumbing.
- `pub mod sdk` inside the server crate avoids a fourth crate while still providing a typed client; it can be extracted later without API break.
- Building an `Engine` and registering all modules in the parent before fork populates the binary's resident set, lazy_static caches, and global module tables. Subsequent grandchild engine constructions are then largely COW-shared.
- The worker emits `WorkerFrame::Started { pid }` immediately after each grandchild fork so the parent can `kill(SIGTERM)` per-session.
- Health checks use `openrpc_socket` on `ui.sock` (HTTP over UDS), not `tcp_port`, per `/hero_sockets`.
- Dependency versions match what `hero_code` and `hero_proc` already use to avoid a duplicate-version explosion in `Cargo.lock`.

Test Results
Workspace cargo test
New crates
The three new crates (`hero_runner_rhai`, `hero_runner_rhai_server`, `hero_runner_rhai_ui`) currently have no unit tests. Validation in this phase:

- `cargo build --workspace` — Finished (no errors).
- `cargo check --workspace --all-targets` — Finished (1 pre-existing warning in `crates/hero_runner_rhai_server/src/resolver.rs::fetch_content`, migrated verbatim from the source crate).

Failures (if any)
None
Notes
The new crates are not yet exercised by `cargo test`. Recommended manual smoke test: `./build_runner.sh && ./run_runner.sh`, then exercise OpenRPC via `curl --unix-socket ~/hero/var/sockets/hero_runner_rhai/rpc.sock`.

Implementation Complete
Crates added
- `crates/hero_runner_rhai` — CLI lifecycle binary (`--start`/`--stop`) that registers the runner with hero_proc.
- `crates/hero_runner_rhai_server` — fork-based Rhai runner with OpenRPC over UDS, SSE live log streaming, optional hero_proc log forwarding, and an embedded SDK module for in-process callers.
- `crates/hero_runner_rhai_ui` — Bootstrap 5 admin dashboard with six tabs (Sessions, Run Script, Logs, Stats, Admin, Docs), proxying `/rpc` and `/sse` to the server over UDS.

Files added (high level)
- Root `Cargo.toml` — new workspace members; new `[workspace.dependencies]` entries for `nix`, `libc`, `uuid`, `axum`, `tower`, `tokio-stream`, `futures`, `askama`, `rust-embed`, `mime_guess`, `hyper`, `hyper-util`, `http-body-util`, `chrono`, `clap`, `hero_proc_sdk`, `hero_rpc_openrpc`.
- `crates/hero_runner_rhai_server/src/{types.rs, ipc.rs, engine.rs, resolver.rs, worker.rs, pool.rs}` — migrated verbatim from `hero_code/crates/hero_runner_rhai/`, then refactored.
- `crates/hero_runner_rhai_server/src/{session.rs, proc_log.rs, openrpc.rs, openrpc.json, sdk.rs, sockets.rs, discovery.rs, sse.rs, main.rs}` — new modules.
- `crates/hero_runner_rhai_ui/src/{main.rs, routes.rs, proxy.rs, templates.rs, assets.rs}`, `templates/{base.html, index.html, partials/sidebar.html}`, `static/{css,js,favicon.svg,openrpc.json}`, `scripts/download-assets.sh`.
- `crates/hero_runner_rhai/src/main.rs`.
- `build_runner.sh`, `run_runner.sh`, `stop_runner.sh`.
- `docs/hero_runner_rhai.md`; three crate READMEs; updated root `README.md`.

Architecture decisions
- `WorkerPool::new(N)` is called BEFORE any tokio runtime exists in `hero_runner_rhai_server`'s `main.rs`. The runtime is constructed manually via `tokio::runtime::Builder::new_multi_thread().enable_all().build()?` and `block_on(serve(pool))`. `#[tokio::main]` is NOT used on the server binary.
- Before `WorkerPool::new`, the parent builds and drops a canonical `Engine` so that all `register_all_modules` paths and any `lazy_static`/global tables in `herolib_*_rhai` are resident in the binary's process image. Subsequent grandchild engine constructions are cheap because those globals are inherited via copy-on-write.
- Each grandchild emits `WorkerFrame::Started { pid }`, then a stream of `WorkerFrame::Log { line }` while the script runs, then a final `WorkerFrame::Result { result }`. The worker forwards every frame to the parent over its existing UnixStream. The parent's `SessionManager` fans out via `tokio::sync::broadcast::Sender<LogLine>` and a per-session ring buffer (1024 lines) so late subscribers can backfill. Three consumer paths: SSE on `/sse/session/{id}` for the UI, OpenRPC `session_logs` long-poll for SDK consumers, and an optional hero_proc log forwarder enabled by `HERO_RUNNER_RHAI_FORWARD_LOGS=1` or per-session `forward_logs: true`.
- A third crate (`hero_runner_rhai`) exists only for lifecycle per `hero_proc_service_selfstart`. Functionality lives in the two requested crates; the CLI is plumbing.
- `hero_runner_rhai_server::sdk::Client` is hand-rolled over `hero_rpc_openrpc::OpenRpcTransport::unix_socket` rather than auto-generated via `openrpc_client!`, to avoid name-colliding parallel types and to keep `Option<ScriptResult>` returns ergonomic. It can be extracted into a separate crate later without API break.

OpenRPC method surface
- `health` — service status
- `session_start({ script, working_dir?, env_vars?, timeout_ms?, forward_logs? }) -> { session_id }`
- `session_stop({ session_id }) -> { ok }` — sends SIGTERM, then SIGKILL after 5 s
- `session_list() -> { sessions: SessionInfo[] }`
- `session_get({ session_id }) -> SessionInfo`
- `session_logs({ session_id, after_ts_ms?, limit? }) -> { lines, next_ts_ms }`
- `session_result({ session_id }) -> ScriptResult | null`

Tests
- `cargo test --workspace --no-fail-fast`: 110 passed, 0 failed, 6 ignored. No regressions; full results posted in the previous comment.
- Manual smoke test: `./build_runner.sh && ./run_runner.sh`.

Pre-existing workspace breakage repaired in passing
The `development` branch was already broken before this PR — `cargo check --workspace --lib` failed with `forge_token` and `inputs` field errors against stale cached revs of `hero_proc_sdk` and `herolib_tools`. A `cargo update -p hero_proc_sdk` to rev `e1898890` resolved both errors transitively. No source edits to `proc_rhai/src/secrets.rs` were needed.
The original crate at `/Volumes/T7/code0/hero_code/crates/hero_runner_rhai/` is untouched. A separate commit/PR in `hero_code` should:

- remove `crates/hero_runner_rhai/` from `hero_code`;
- switch `hero_code` consumers (if any) to import the new path: `hero_runner_rhai_server` from `hero_lib_rhai`.

Build & run
OpenRPC smoke test (hero_proc must be running):
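A sketch of that smoke test, assuming the default socket layout and the `session_start` params listed under the OpenRPC method surface (the Rhai script body here is illustrative):

```shell
#!/bin/sh
SOCK="$HOME/hero/var/sockets/hero_runner_rhai/rpc.sock"
# JSON-RPC 2.0 call to session_start; the id is present, so a response is expected.
REQ='{"jsonrpc":"2.0","id":1,"method":"session_start","params":{"script":"print(40+2);"}}'
echo "$REQ"
# With the service running, send it over the UDS (uncomment to run):
# curl -s --unix-socket "$SOCK" -X POST http://localhost/rpc \
#   -H 'Content-Type: application/json' -d "$REQ"
# Then fetch the served OpenRPC document:
# curl -s --unix-socket "$SOCK" http://localhost/openrpc.json
```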
The UI is reachable at `~/hero/var/sockets/hero_runner_rhai/ui.sock` (typically exposed via hero_router).
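Both socket paths follow the `hero_sockets` convention: `$HERO_SOCKET_DIR` overrides the base directory, falling back to `~/hero/var/sockets`. A std-only sketch of that resolution — a simplified stand-in for the skill's `socket_path`, whose real signature may differ:

```rust
use std::env;
use std::path::PathBuf;

// Resolve <base>/<service>/<name>.sock, honouring $HERO_SOCKET_DIR,
// with a fallback of $HOME/hero/var/sockets.
fn socket_path(service: &str, name: &str) -> PathBuf {
    let base = env::var("HERO_SOCKET_DIR").unwrap_or_else(|_| {
        let home = env::var("HOME").unwrap_or_else(|_| ".".into());
        format!("{home}/hero/var/sockets")
    });
    PathBuf::from(base).join(service).join(format!("{name}.sock"))
}

fn main() {
    println!("{}", socket_path("hero_runner_rhai", "rpc").display());
}
```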