service_compute.nu — hero_compute lifecycle module #154
Reference: `lhumina_code/hero_skills#154`
Add `service_compute.nu` per the tracker in #75. The module exposes `install | start [--reset] | stop | status` for the `hero_compute` stack.

## Scope

- `tools/modules/services/service_compute.nu`, following the `nu_service` and `nu_service_use` skills.
- Donor: `service_codescalers.nu` — full-featured, supports `--root`. Same shape as the hero_compute manager.
- Repo: `hero_compute` (canonical layout already applied; reference repo for `/hero_rust_repo_create`).
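Under the `nu_service` skill the module surface could be sketched as below. This is an illustration only: the subcommand and flag names come from this issue, but every body comment and the flag defaults are assumptions, not the final implementation.

```nu
# service_compute.nu — surface sketch (bodies elided; details assumed)
export def "service_compute install" [--root] {
    # clone hero_compute, install system deps, run cargo build --release,
    # copy the four binaries to ~/hero/bin/
}

export def "service_compute start" [
    --root
    --reset                    # wipe state before starting (per the tracker surface)
    --mode: string = "local"   # local | master | worker — mirrors the Makefile
    --master-ip: string        # used when --mode worker
] {
    # register the hero_compute manager with hero_proc; the manager
    # selfstarts hero_compute_server / _ui / _explorer
}

export def "service_compute stop" [--root] {
    # cleanly unregister from hero_proc
}

export def "service_compute status" [--root] {
    # report hero_proc state for the stack
}
```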
## Service-specific notes

- hero_compute is at 0.1.7+; build via `cargo build --release`.
- Binaries go to `~/hero/bin/` (per `scripts/buildenv.sh::BINARIES`): `hero_compute`, `hero_compute_server`, `hero_compute_ui`, `hero_compute_explorer`.
- `hero_compute` is the lifecycle manager — selfstart pattern, registers and starts the other three with hero_proc. See `hero_proc_service_selfstart`.
- Needs `--root` — KVM device access, raw networking, Mycelium overlay membership. Match `service_mycelium.nu` / `service_codescalers.nu` for the privileged-service shape.
- Modes: `local` (default), `master` (explorer hub), `worker` (joins a master). The repo's Makefile drives this via `make start MODE=master MASTER_IP=<ip>`. Surface this in the nu module's `start` flags or document it in the start-output test plan.
- Runtime deps (per `scripts/buildenv.sh`): `cloud-hypervisor` (VMM backend), `my_hypervisor` (VM frontend), `mycelium` (IPv6 overlay), `hero_proc` (process supervisor — already required by every service).
- The stack uses `hero_compute_registry` for VM images at runtime — the install step does not need to clone the registry; the running stack pulls images on demand.
- `scripts/install.sh` shows the full system-deps + binaries flow you'll port into the nu module — use it as the donor for the `install` subcommand.
- System packages: `libssl-dev`, `pkg-config`, `iproute2`, `busybox-static`, `virtiofsd`, `e2fsprogs`. Reuse `installers.nu::install_base` where possible.
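The install flow ported from `scripts/install.sh` might decompose like this. Only `installers.nu::install_base`, the apt package list, and the binary names come from this issue; the helper names, paths, and call shapes are hypothetical.

```nu
# Hypothetical install helpers — a sketch, not the donor script verbatim.
use installers.nu install_base

def install-system-deps [] {
    install_base   # shared base installer from installers.nu (assumed callable as-is)
    # system packages named in this issue
    ^sudo apt-get install -y libssl-dev pkg-config iproute2 busybox-static virtiofsd e2fsprogs
}

def build-and-stage [repo_dir: string] {
    cd $repo_dir
    ^cargo build --release
    let bin_dir = ($env.HOME | path join "hero" "bin")
    mkdir $bin_dir
    # list mirrors scripts/buildenv.sh::BINARIES
    for bin in [hero_compute hero_compute_server hero_compute_ui hero_compute_explorer] {
        cp $"target/release/($bin)" $bin_dir
    }
}
```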
## Acceptance criteria

- `use services/mod.nu *` makes `service_compute` available.
- In `--root` mode:
  - `service_compute install --root` clones the repo, installs system deps + cloud-hypervisor + my_hypervisor + mycelium, runs `cargo build --release`, and copies the four binaries to `~/hero/bin/`.
  - `service_compute start --root` registers with hero_proc and becomes healthy. The `hero_compute` manager handles the selfstart of `_server`/`_ui`/`_explorer`.
  - `service_compute status --root` reports state.
  - `service_compute stop --root` cleanly unregisters.
- `start` output prints sockets / UI URL / a short test plan, per the `nu_service_use` skill.
- `service_compute start --root --mode master` and `--mode worker --master-ip <ip>` mirror the Makefile.
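A full acceptance pass, as a session sketch built from the criteria above (exact `start` output is defined by the `nu_service_use` skill, not shown here):

```nu
use services/mod.nu *

service_compute install --root               # deps + cargo build + stage binaries
service_compute start --root --mode master   # manager selfstarts _server/_ui/_explorer
service_compute status --root                # expect a healthy state from hero_proc
service_compute stop --root                  # clean unregister
```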
## References

- `nu_service`, `nu_service_use`
- `hero_proc_service_selfstart`
- `hero_rust_repo_create` (uses hero_compute as donor)

---

Updated scope after research on the actual repo. Hero_compute is at 0.1.7+ in the canonical layout; the four binaries (`hero_compute` manager + `_server`, `_ui`, `_explorer`) are in `scripts/buildenv.sh::BINARIES`. `--root` is confirmed needed for KVM and Mycelium. Multi-node modes and runtime deps (cloud-hypervisor, my_hypervisor, mycelium) added. The repo's `scripts/install.sh` is the donor for the install flow.