service_compute.nu — hero_compute lifecycle module #154

Open
opened 2026-04-28 09:35:11 +00:00 by mahmoud · 1 comment
Owner

Add service_compute.nu per the tracker in #75. Module exposes install | start [--reset] | stop | status for the hero_compute stack.

Scope

  • Module file: tools/modules/services/service_compute.nu
  • Follow the standard pattern documented in the nu_service and nu_service_use skills.
  • Template to copy from: service_codescalers.nu — full-featured, supports --root. Same shape as the hero_compute manager.
  • Source repo: hero_compute (canonical layout already applied; reference repo for /hero_rust_repo_create).

Service-specific notes

  • Pure Rust workspace at 0.1.7+; build via cargo build --release.
  • Four binaries to install to ~/hero/bin/ (per scripts/buildenv.sh::BINARIES): hero_compute, hero_compute_server, hero_compute_ui, hero_compute_explorer.
  • hero_compute is the lifecycle manager — selfstart pattern, registers and starts the other three with hero_proc. See hero_proc_service_selfstart.
  • Requires --root — KVM device access, raw networking, Mycelium overlay membership. Match service_mycelium.nu / service_codescalers.nu for the privileged-service shape.
  • Multi-node modes: local (default), master (explorer hub), worker (joins a master). The repo's Makefile drives this via make start MODE=master MASTER_IP=<ip>. Surface this in the nu module's start flags or document it in the start-output test plan.
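The mode-to-Makefile mapping above can be sketched as a tiny dry-run helper. This is a hypothetical illustration of how the nu module's `start` flags could translate into the repo's `make` invocation — only `MODE` and `MASTER_IP` come from the Makefile as quoted in this issue; the function name and the exact `local` behavior are assumptions:

```shell
#!/bin/sh
# Dry-run sketch: map a --mode / --master-ip pair onto the repo's Makefile
# invocation. Prints the command instead of running it.
start_cmd() {
    mode="${1:-local}"
    master_ip="${2:-}"
    case "$mode" in
        local)  echo "make start" ;;                 # assumed default target
        master) echo "make start MODE=master" ;;
        worker)
            # worker mode needs a master to join
            [ -n "$master_ip" ] || { echo "worker mode requires --master-ip" >&2; return 1; }
            echo "make start MODE=worker MASTER_IP=$master_ip" ;;
        *)      echo "unknown mode: $mode" >&2; return 1 ;;
    esac
}

start_cmd                    # → make start
start_cmd master             # → make start MODE=master
start_cmd worker 10.0.0.1    # → make start MODE=worker MASTER_IP=10.0.0.1
```

Whether the nu module shells out to `make` or reimplements the targets is an open design choice; the mapping is the part worth pinning down.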
  • Runtime dependencies (versions pinned in the repo's scripts/buildenv.sh):
    • cloud-hypervisor (VMM backend)
    • my_hypervisor (VM frontend)
    • mycelium (IPv6 overlay)
    • hero_proc (process supervisor — already required by every service)
  • Uses hero_compute_registry for VM images at runtime — install step does not need to clone the registry; the running stack pulls images on demand.
  • The repo's scripts/install.sh shows the full system-deps + binaries flow you'll port into the nu module — use it as the donor for the install subcommand.
  • System packages needed (Debian/Ubuntu): libssl-dev, pkg-config, iproute2, busybox-static, virtiofsd, e2fsprogs. Reuse installers.nu::install_base where possible.
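The install flow to be ported from `scripts/install.sh` can be sketched as a dry-run script. The package list, binary names, and `~/hero/bin/` target are taken from this issue; the `run` wrapper and step ordering are illustrative assumptions, not the actual contents of `install.sh`:

```shell
#!/bin/sh
# Dry-run sketch of the install subcommand's shell steps:
# system deps -> release build -> copy binaries. Echoes each
# command instead of executing it.
set -eu

PKGS="libssl-dev pkg-config iproute2 busybox-static virtiofsd e2fsprogs"
BINARIES="hero_compute hero_compute_server hero_compute_ui hero_compute_explorer"
BIN_DIR="$HOME/hero/bin"

run() { echo "+ $*"; }   # swap the echo for "$@" to actually execute

run apt-get install -y $PKGS
run cargo build --release
run mkdir -p "$BIN_DIR"
for bin in $BINARIES; do
    run install -m 0755 "target/release/$bin" "$BIN_DIR/"
done
```

The cloud-hypervisor / my_hypervisor / mycelium installs are omitted here because their pinned versions live in `scripts/buildenv.sh` and should be read from there rather than guessed.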

Acceptance criteria

  1. use services/mod.nu * makes service_compute available.
  2. On a target host (root for --root mode):
    • service_compute install --root clones the repo, installs system deps + cloud-hypervisor + my_hypervisor + mycelium, runs cargo build --release, copies the four binaries to ~/hero/bin/.
    • service_compute start --root registers with hero_proc and becomes healthy. The hero_compute manager handles the selfstart of _server / _ui / _explorer.
    • service_compute status --root reports state.
    • service_compute stop --root cleanly unregisters.
  3. The start output prints sockets / UI URL / a short test plan, per the nu_service_use skill.
  4. (Optional) Multi-node start: service_compute start --root --mode master and --mode worker --master-ip <ip> mirror the Makefile.
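The acceptance sequence above could be driven by a small smoke script. A hedged dry-run sketch — the `service_compute` subcommands and the `use services/mod.nu *` import come from this issue, while the `nu -c` invocation shape is an assumption:

```shell
#!/bin/sh
# Dry-run smoke sequence for the acceptance criteria: prints each
# nu invocation in order instead of executing it.
set -eu
run() { echo "+ nu -c \"use services/mod.nu *; $*\""; }  # drop the echo to execute

run "service_compute install --root"
run "service_compute start --root"
run "service_compute status --root"
run "service_compute stop --root"
```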

References

  • Parent tracker: #75
  • Pattern skills: nu_service, nu_service_use
  • Self-start pattern: hero_proc_service_selfstart
  • Reference repo for canonical layout: hero_rust_repo_create (uses hero_compute as donor)
mahmoud self-assigned this 2026-04-28 09:35:11 +00:00
mahmoud removed their assignment 2026-04-28 10:03:57 +00:00
Author
Owner

Updated scope after research on the actual repo. `hero_compute` is at 0.1.7+ in the canonical layout; the four binaries (`hero_compute` manager + `_server`, `_ui`, `_explorer`) are in `scripts/buildenv.sh::BINARIES`. `--root` confirmed needed for KVM and Mycelium. Multi-node modes and runtime deps (cloud-hypervisor, my_hypervisor, mycelium) added. The repo's `scripts/install.sh` is the donor for the install flow.
Reference
lhumina_code/hero_skills#154