Hero Compute

Slice-based virtual machine manager for the Hero Ecosystem. Divides a physical host into 4 GB RAM slices, each backing exactly one VM. Built on the hero_rpc OSIS framework with JSON-RPC 2.0 over Unix sockets.

New here? Read the Hero Compute Explainer for a visual guide to how slices, VMs, secrets, and the explorer work together.

How It Works

On bootstrap the server reads /proc/meminfo and df, reserves 1 GB for the OS, and carves the rest into slices:

Example: 64 GB RAM, 2 TB SSD
  usable     = 64 - 1 = 63 GB
  slices     = floor(63 / 4) = 15
  disk/slice = floor(2000 / 15) = 133 GB
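The arithmetic above can be sketched directly in shell. The constants are the example's illustrative values; on a real host they would come from /proc/meminfo and df:

```shell
# Illustrative slice math for a 64 GB RAM / 2 TB SSD host.
total_ram_gb=64     # would come from /proc/meminfo
disk_gb=2000        # would come from df
os_reserve_gb=1     # reserved for the OS
slice_ram_gb=4      # RAM per slice

usable=$(( total_ram_gb - os_reserve_gb ))   # 64 - 1 = 63
slices=$(( usable / slice_ram_gb ))          # floor(63 / 4) = 15
disk_per_slice=$(( disk_gb / slices ))       # floor(2000 / 15) = 133

echo "slices=$slices disk_per_slice=${disk_per_slice}GB"
```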

Deploy a VM into any free slice; start/stop/restart return immediately while the hypervisor works in the background.
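The wire protocol is JSON-RPC 2.0 over a Unix socket, so a request is just a small JSON object written to the socket. The sketch below is illustrative only: the method name `list_vms` and the socket path are assumptions, not taken from the project's API reference.

```shell
# Build a JSON-RPC 2.0 request body. "list_vms" is a hypothetical
# method name used purely for illustration.
request='{"jsonrpc":"2.0","id":1,"method":"list_vms","params":{}}'
echo "$request"

# Sending it might look like this (socket path is also an assumption):
#   printf '%s' "$request" | nc -U /run/hero_compute/server.sock
```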

Requirements

  • Linux (x86_64) bare-metal server with hardware virtualization (KVM)
  • Rust toolchain (1.92+)
  • System packages: libssl-dev, pkg-config, iproute2, busybox-static
  • hero_proc process supervisor (must be running)
  • my_hypervisor (VM hypervisor)
  • cloud-hypervisor (VMM backend)

Quick Start

Single command install (downloads all binaries, configures, and starts services):

curl -sfL https://forge.ourworld.tf/lhumina_code/hero_compute/raw/branch/development/scripts/install.sh | bash

The installer checks each component and only downloads what's missing or outdated. Re-run safely at any time to update. See the Setup Guide for full installation details and multi-node deployment.

Or build from source:

make configure   # install deps + build
make start       # start in local mode
# Open http://<server-ip>:9001

Register the node, then deploy a VM. The UI guides you through image selection (images come from the hero_compute_registry, all with SSH key auth pre-configured). Add your SSH key in Settings, then SSH in via Mycelium IPv6: ssh root@<ip>.

Multi-Node Setup

# Master node (explorer hub -- other nodes connect here):
make start MODE=master

# Worker node (connects to a master):
make start MODE=worker MASTER_IP=<master-ip>

See Setup Guide for full installation and multi-node instructions.

Service Architecture

Hero Compute uses the hero_proc_service_selfstart pattern:

  • hero_compute -- CLI binary that registers all components with hero_proc (--start / --stop)
  • hero_compute_server -- JSON-RPC daemon (foreground, managed by hero_proc)
  • hero_compute_ui -- Admin dashboard (foreground, binds TCP port 9001 directly)
  • hero_compute_explorer -- Multi-node registry (foreground, managed by hero_proc)

hero_compute --start                                    # Local mode (default)
hero_compute --start --mode master                      # Explorer hub
hero_compute --start --mode worker --master-ip X.X.X.X  # Worker node
hero_compute --stop                                     # Stop everything

Make Targets

Target                                      Description
make configure                              Install all dependencies and build
make start                                  Build + start in local mode (single node)
make start MODE=master                      Start as master (explorer hub for workers)
make start MODE=worker MASTER_IP=x.x.x.x    Start as worker connected to a master
make stop                                   Stop all services
make status                                 Show service status via hero_proc
make build                                  Build all binaries (musl static, x86_64-unknown-linux-musl)
make clean                                  Remove build artifacts
make test                                   Run unit tests
make lint                                   Run clippy linter
make fmt                                    Format code

Security -- VM Secrets

VMs are protected by a secret -- a capability token you set at deploy time. All VM operations (start, stop, delete, list) require the matching secret.

Important: The secret is your identity, not a password. Anyone who knows your secret can see and manage your VMs. This is by design for simplicity.

  • Always use generated secrets. The UI auto-generates a 16-character random secret on first visit. Use it.
  • Never use common words or short strings. If two users pick the same secret, they share VM access.
  • Treat it like a private key. Store it securely. Don't share it.
  • Empty secret = no protection. All operations work without a secret (backward compatible, for single-tenant setups).
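If you prefer to generate a secret locally rather than take the one the UI offers, anything drawn from a cryptographic random source works. The sketch below is one such approach, in the spirit of the UI's 16-character generator (the UI's actual generator is not specified here):

```shell
# Draw 8 random bytes and hex-encode them, yielding a 16-character
# secret that is effectively unguessable.
secret=$(head -c 8 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "length=${#secret}"
```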

See API Reference -- Security Model for full details.

Documentation

Crates

Crate                    Description
hero_compute             CLI -- registers and manages all service components via hero_proc
hero_compute_server      JSON-RPC daemon -- VM lifecycle, slice management
hero_compute_explorer    Multi-node registry -- aggregates nodes via heartbeats
hero_compute_sdk         Generated OpenRPC client library
hero_compute_ui          Admin dashboard (Bootstrap + Askama + Axum)
hero_compute_examples    SDK usage examples

License

Apache-2.0