initial hero runner #1
Reference: lhumina_code/hero_code#1
We want to create a runner: essentially a runtime supporting the Rhai and Python languages. It exposes an OpenRPC interface following our best practices and embeds the full Python environment as well as the Rhai environment, with hooks to add Rhai or Python features as we prefer. Over RPC we can start a job, which runs a Python script in a given directory; same for Rhai. It is always a script that we execute, with a working directory, environment variables, and timeouts. The idea is to fork the runtime process: we load all the functionality into memory once, and each execution gets a fork, so we don't blow up memory and don't have to reload the full Python or Rhai kernel each time. I also want a performance test so we can see how fast it goes, for both Python and Rhai. For logging, use exactly the same approach (we can copy the interface) as in our hero_proc; I will give you the link.
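As a minimal sketch of the per-job execution path described above, using only `std` (the real runner would fork a pre-warmed interpreter process rather than spawn cold; `run_job` and its signature are hypothetical, not the actual API):

```rust
use std::collections::HashMap;
use std::path::Path;
use std::process::{Command, Stdio};
use std::time::{Duration, Instant};

/// Run one job: a program in a working directory, with env vars and a
/// timeout. Returns (exit code, timed_out).
fn run_job(
    program: &str,
    args: &[&str],
    dir: &Path,
    env: &HashMap<String, String>,
    timeout: Duration,
) -> std::io::Result<(Option<i32>, bool)> {
    let mut child = Command::new(program)
        .args(args)
        .current_dir(dir)
        .envs(env)
        .stdout(Stdio::null())
        .stderr(Stdio::null())
        .spawn()?;

    // std has no wait_timeout, so poll with try_wait until the deadline.
    let deadline = Instant::now() + timeout;
    loop {
        if let Some(status) = child.try_wait()? {
            return Ok((status.code(), false)); // finished in time
        }
        if Instant::now() >= deadline {
            child.kill()?; // timed out: kill and report
            child.wait()?;
            return Ok((None, true));
        }
        std::thread::sleep(Duration::from_millis(10));
    }
}

fn main() -> std::io::Result<()> {
    // "true" stands in for an interpreter invocation; it exits 0 immediately.
    let env = HashMap::new();
    let (code, timed_out) =
        run_job("true", &[], Path::new("/"), &env, Duration::from_secs(2))?;
    println!("code={:?} timed_out={}", code, timed_out);
    Ok(())
}
```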
See the code in:
/Volumes/T7/code0/hero_proc/crates/hero_proc_lib
Also see hero_proc_ui and make a similar one here; repeat how we did logging & jobs.
In this case, jobs are the scripts running in the Python or Rhai runtime.
(We only need the jobs; the idea is that hero_proc will be able to call hero_runner when it's a Python or Rhai script.)
Using the following skills:
- test the _ui and _server
- make integration tests
Implementation Spec for Issue #1: Initial Hero Runner
Objective
Create a Rust workspace called `hero_runner_v2` that provides a runner/runtime for executing Rhai and Python (via RustPython) scripts. The project exposes an OpenRPC (JSON-RPC 2.0) interface over a Unix domain socket, following the exact crate architecture established by `hero_proc`. `hero_proc` will call `hero_runner` when a Python or Rhai script needs to run. The runtime forks (or spawns) a pre-warmed child process for each script execution to avoid reloading the full interpreter kernel each time.

Requirements
- Crates: `hero_runner_lib`, `hero_runner_server`, `hero_runner_sdk`, `hero_runner_ui`, plus integration test crates
- `openrpc.json` spec defining all RPC methods; SDK generated via the `openrpc_client!` macro
- Rhai engine (via the `rhai` crate)

Crate Structure
- `hero_runner_lib`
- `hero_runner_server`
- `hero_runner_sdk`
- `hero_runner_ui`
- `hero_runner_integration_test`

Implementation Plan (6 Steps)
Step 1: Scaffold workspace + hero_runner_lib
Create Cargo workspace, .gitignore, buildenv.sh. Build shared library with SQLite-backed job and log models, CRUD operations.
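As a sketch of what the SQLite-backed models in Step 1 might look like (table and column names are guesses informed by the job fields mentioned later in this issue; the real schema lives in `hero_runner_lib`, and with `rusqlite` one would run `conn.execute_batch(SCHEMA)`):

```rust
/// Hypothetical DDL for the job and log tables. This sketch avoids a DB
/// dependency and only carries the schema as a constant.
const SCHEMA: &str = "
CREATE TABLE IF NOT EXISTS jobs (
    id         INTEGER PRIMARY KEY AUTOINCREMENT,
    engine     TEXT NOT NULL,             -- 'rhai' or 'python'
    script     TEXT NOT NULL,
    status     TEXT NOT NULL DEFAULT 'queued',
    stdout     TEXT NOT NULL DEFAULT '',
    stderr     TEXT NOT NULL DEFAULT '',
    created_at TEXT NOT NULL DEFAULT (datetime('now'))
);
CREATE TABLE IF NOT EXISTS logs (
    id     INTEGER PRIMARY KEY AUTOINCREMENT,
    job_id INTEGER NOT NULL REFERENCES jobs(id),
    line   TEXT NOT NULL
);
";

fn main() {
    println!("{SCHEMA}");
}
```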
Step 2: Create hero_runner_server
OpenRPC spec, JSON-RPC dispatch over Unix socket, Rhai + Python engine integration, worker pool with process forking.
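A minimal sketch of JSON-RPC dispatch over a Unix domain socket using only `std` (the newline-delimited framing and naive method extraction are assumptions; the real server would parse with `serde_json` and dispatch per the generated OpenRPC spec):

```rust
use std::io::{BufRead, BufReader, Write};
use std::os::unix::net::{UnixListener, UnixStream};

// Naive method extraction; a real server would use a JSON parser.
fn method_of(request: &str) -> Option<&str> {
    let key = "\"method\":\"";
    let start = request.find(key)? + key.len();
    let end = request[start..].find('"')? + start;
    Some(&request[start..end])
}

// Dispatch one newline-delimited JSON-RPC request to a handler.
fn dispatch(line: &str) -> String {
    match method_of(line) {
        Some("system.ping") => r#"{"jsonrpc":"2.0","result":"pong","id":1}"#.into(),
        _ => r#"{"jsonrpc":"2.0","error":{"code":-32601,"message":"method not found"},"id":1}"#.into(),
    }
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("hero_runner_demo.sock");
    let _ = std::fs::remove_file(&path); // stale socket from a previous run
    let listener = UnixListener::bind(&path)?;

    // Serve exactly one connection in a background thread for the demo.
    let server = std::thread::spawn(move || -> std::io::Result<()> {
        let (stream, _) = listener.accept()?;
        let mut reader = BufReader::new(stream.try_clone()?);
        let mut line = String::new();
        reader.read_line(&mut line)?;
        let mut stream = stream;
        writeln!(stream, "{}", dispatch(line.trim()))?;
        Ok(())
    });

    // Client side: send system.ping and print the reply.
    let mut client = UnixStream::connect(&path)?;
    writeln!(client, r#"{{"jsonrpc":"2.0","method":"system.ping","id":1}}"#)?;
    let mut reply = String::new();
    BufReader::new(&client).read_line(&mut reply)?;
    print!("{reply}");

    server.join().unwrap()?;
    std::fs::remove_file(&path)?;
    Ok(())
}
```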
Step 3: Create hero_runner_sdk
Generated OpenRPC client, JobBuilder, HeroLogger, socket helpers, lifecycle (self-start/stop).
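A sketch of what the SDK's JobBuilder could look like (field names and the 60-second default timeout are assumptions, not the actual SDK API):

```rust
/// Job parameters as described in the issue: engine, script, working
/// directory, env vars, timeout. Field names are hypothetical.
#[derive(Debug, Clone, PartialEq)]
pub struct Job {
    pub engine: String, // "rhai" or "python"
    pub script: String,
    pub dir: Option<String>,
    pub env: Vec<(String, String)>,
    pub timeout_secs: u64,
}

pub struct JobBuilder {
    job: Job,
}

impl JobBuilder {
    pub fn new(engine: &str, script: &str) -> Self {
        Self {
            job: Job {
                engine: engine.into(),
                script: script.into(),
                dir: None,
                env: Vec::new(),
                timeout_secs: 60, // assumed default
            },
        }
    }
    pub fn dir(mut self, d: &str) -> Self {
        self.job.dir = Some(d.into());
        self
    }
    pub fn env(mut self, k: &str, v: &str) -> Self {
        self.job.env.push((k.into(), v.into()));
        self
    }
    pub fn timeout_secs(mut self, t: u64) -> Self {
        self.job.timeout_secs = t;
        self
    }
    pub fn build(self) -> Job {
        self.job
    }
}

fn main() {
    let job = JobBuilder::new("rhai", "print(40 + 2);")
        .dir("/tmp")
        .env("MODE", "test")
        .timeout_secs(5)
        .build();
    println!("{} job in {:?}, timeout {}s", job.engine, job.dir, job.timeout_secs);
}
```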
Step 4: Create hero_runner_ui
Web dashboard with job management, log viewer, engine status. Self-start/stop registration.
Step 5: Create test scripts + integration tests
Rhai/Python test scripts, job lifecycle tests, engine-specific tests, UI tests, performance benchmarks.
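A minimal harness in the spirit of the requested performance benchmark: time N cold process spawns to get the baseline that fork-based execution should beat (`true` stands in for an interpreter invocation; `bench_spawns` is hypothetical, not part of the test suite):

```rust
use std::process::Command;
use std::time::{Duration, Instant};

/// Spawn a throwaway process n times and return the total wall time.
/// The real benchmark would submit Rhai/Python scripts via job.submit.
fn bench_spawns(n: u32) -> Duration {
    let start = Instant::now();
    for _ in 0..n {
        Command::new("true").status().expect("spawn failed");
    }
    start.elapsed()
}

fn main() {
    let n = 20;
    let total = bench_spawns(n);
    println!("{} spawns in {:?} ({:?}/job)", n, total, total / n);
}
```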
Step 6: Add Makefile + CI
Build/test/run targets, Forgejo Actions workflows.
Acceptance Criteria
- `cargo build --workspace` compiles
- `cargo test -p hero_runner_lib` passes
- `system.ping` works over the Unix socket
- `job.submit` works
- `rpc.discover` returns the OpenRPC spec

Test Results
All tests passing after fixing 5 test failures.
Summary
Fixes Applied
Fixed 5 failing tests caused by two bugs in `crates/hero_runner_lib/src/db/jobs/model.rs`:

- `JobFilter::default()` set `limit` to 0 instead of 100. The `#[derive(Default)]` produced `limit: 0`, but tests expected the serde default of 100. Fixed by implementing `Default` manually for `JobFilter`.
- The `Job` struct required the `id` field during deserialization. The `job.submit` RPC handler deserializes params into a `Job`, but submit requests don't include `id`. Added `#[serde(default)]` to the `id`, `status`, `stdout`, `stderr`, and `created_at` fields.

Test Breakdown by Crate
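The `JobFilter` fix described under Fixes Applied can be sketched in isolation (the field set is simplified; only `limit` matters here):

```rust
/// With #[derive(Default)], limit would be 0; the tests expect the
/// serde-level default of 100, so Default is implemented by hand.
#[derive(Debug, PartialEq)]
pub struct JobFilter {
    pub status: Option<String>,
    pub limit: u32,
}

impl Default for JobFilter {
    fn default() -> Self {
        Self {
            status: None,
            limit: 100, // match the serde default, not the derived 0
        }
    }
}

fn main() {
    let f = JobFilter::default();
    println!("default limit = {}", f.limit);
}
```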
Implementation Summary
Crates Created
- `hero_runner_lib`
- `hero_runner_server`
- `hero_runner_sdk`
- `hero_runner_ui`
- `hero_runner_integration_test`
- `hero_runner_tests`

Key Features
Files Changed
Architecture
Follows hero_proc patterns exactly:
- Unix socket at `$HOME/hero/var/sockets/hero_runner_server.sock`

Implementation committed: `76cf1e7`
Browse: lhumina_code/hero_runner_v2@76cf1e7