Rethinking hero RPC, OSIS and backend architecture — hero_sdk + hero_core #13
Reference: lhumina_code/hero_rpc#13
Rethinking hero RPC, OSIS and backend architecture
Summary
Rename and restructure hero_rpc → hero_sdk and hero_osis → hero_core to simplify the service architecture, reduce boilerplate, and provide a true SDK for building hero services.
Agreed Architecture
hero_sdk (formerly hero_rpc)
A single workspace containing everything needed to build hero services:
Key decisions:
- Per-domain feature gating, plus an `all-domains` composite; default = no domains
- WASM support via target gating (`#[cfg(target_arch = "wasm32")]`), not feature gating
- `hero_sdk_osis` keeps the OSIS name (no confusion since hero_osis becomes hero_core)
- `HeroServer` — unified builder API
Services are single binaries. The server auto-creates 3 socket types:
Per-domain sockets enable:
The service socket (`{service}.sock`) handles context management — adding/removing contexts to the service (not automatic, user-controlled). The root context in hero_core manages the global context lifecycle.
hero_client! macro
Generates a unified typed client composing domain clients:
hero_core (formerly hero_osis)
Thin service embedding all hero_sdk models:
The root context manages context lifecycle (create/delete contexts). Other services add contexts manually via their service socket.
Service structure (single binary)
Services extend hero_sdk's base logic without overriding. E.g., `pay_for_order` calls hero_sdk's `make_transaction` and adds app logic on top.
Cross-domain communication: via domain sockets (RPC clients), not in-process wiring. Unix socket IPC is ~10-50μs — negligible. Keeps domains independent and splittable.
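A minimal, self-contained sketch of this extend-don't-override pattern (free functions stand in for the real service methods; all names besides `make_transaction` and `pay_for_order` are illustrative):

```rust
/// Generic base logic, as hero_sdk would ship it.
fn make_transaction(from: &str, to: &str, amount: u64) -> Result<String, String> {
    if amount == 0 {
        return Err("amount must be non-zero".to_string());
    }
    // In the real SDK this would persist via the storage layer.
    Ok(format!("tx:{from}->{to}:{amount}"))
}

/// App-specific logic layered on top of the generic base call.
fn pay_for_order(customer: &str, order_id: &str, amount: u64) -> Result<String, String> {
    // Reuse the common base service instead of overriding it...
    let tx = make_transaction(customer, "hero_food", amount)?;
    // ...then add application logic on top (e.g. linking the order).
    Ok(format!("{tx};order={order_id}"))
}

fn main() {
    let receipt = pay_for_order("alice", "order-42", 1200).unwrap();
    println!("{receipt}");
}
```

The base call stays generic; the app only composes it.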
External services (hero_embedder, hero_indexer)
hero_core uses OpenRPC clients to communicate with hero_embedder and hero_indexer — they remain separate services. Heavy deps (ONNX Runtime) stay out of hero_sdk.
Implementation Plan
All on the `development_rethinking` branch. No repo renaming yet (backward compat).
Deep Analysis & Architectural Proposal
I've done a thorough exploration of the entire Hero ecosystem — hero_rpc (7 crates), hero_osis (6 crates, 16 domains), and 23 hero_* service repos — to understand the current state, pain points, and what the ideal architecture looks like.
The Core Problem
The current split creates three friction layers:
hero_rpc is misnamed and overloaded — it's not just an RPC library. It contains: schema parser (oschema), code generators, storage layer (DBTyped/OTOML), RPC server runtime (3,400-line server.rs), multi-socket orchestrator (OServer), service lifecycle (HeroRpcServer/HeroUiServer), proc macros, and OpenRPC tooling. It's really a full SDK.
hero_osis is trapped — it has 16 carefully designed domain models (identity, communication, calendar, AI, finance, network, etc.) with ~50 schemas, but they're locked inside a single service binary. Any other service that wants User, Contact, or Chat types must either depend on hero_osis (heavy) or redefine them (fragile).
Every new service reinvents the wheel — across 23 services, I found ~3,000+ lines of duplicated Makefile/build boilerplate, repeated server startup patterns, copy-paste lifecycle wrappers, and two competing implementation approaches (schema-driven vs manual Axum) with no middle ground.
What Works Well Today
Before proposing changes, it's worth acknowledging what's solid:
Proposed Architecture: hero_sdk + hero_core
1. hero_rpc becomes hero_sdk
The rename reflects reality: this IS a software development kit. But beyond renaming, the key structural change is absorbing the shared domain models and simplifying the server API.
Why models belong in hero_sdk (not in a separate repo):
2. hero_osis becomes hero_core
hero_core becomes a thin service that embeds ALL hero_sdk models as the canonical hero backend:
3. The Dream DX: Creating a New Service
This is the real payoff. Today, creating a hero service means setting up 5-7 crates, 150+ lines of Makefile, copy-pasting buildenv.sh, lifecycle wrapper, server startup, socket handling, and manually registering routes, health checks, and OpenRPC.
With hero_sdk, a new service looks like:
This single call automatically:
- Creates `~/hero/var/sockets/my_service.sock`
- Serves `/health`, `/openrpc.json`, `/.well-known/heroservice.json`
- Handles lifecycle flags (`--start`, `--stop`, `--status`)
Custom domains follow the same pattern, simplified:
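The lifecycle flags mentioned above could be parsed roughly like this (a sketch; the real CLI handling lives in the SDK and these names are assumptions):

```rust
#[derive(Debug, PartialEq)]
enum Lifecycle {
    Start,      // daemonize / self-start
    Stop,       // stop the running service
    Status,     // report health
    Foreground, // bare invocation: run in the foreground
}

fn parse_lifecycle(args: &[String]) -> Lifecycle {
    if args.iter().any(|a| a == "--start") {
        Lifecycle::Start
    } else if args.iter().any(|a| a == "--stop") {
        Lifecycle::Stop
    } else if args.iter().any(|a| a == "--status") {
        Lifecycle::Status
    } else {
        Lifecycle::Foreground
    }
}

fn main() {
    let args: Vec<String> = std::env::args().skip(1).collect();
    println!("{:?}", parse_lifecycle(&args));
}
```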
Key Design Decisions to Discuss
A. Socket strategy: one socket per service vs one per domain?
Currently OServer creates `hero_db_{context}_{domain}.sock` — one socket per domain. This is fine for hero_core, which runs all domains, but for individual services it adds complexity.
Proposal: Default to one socket per service (simpler), with an opt-in to split into per-domain sockets for services that need it. The server internally routes `domain.Type.method` calls to the right handler regardless. Most services only have 1-3 domains anyway.
B. Where does osis (storage/indexing) live?
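The internal routing of `domain.Type.method` names can be sketched independently of the server (illustrative, not the actual dispatcher):

```rust
/// Split a namespaced RPC method into (domain, remainder).
fn route(method: &str) -> Option<(&str, &str)> {
    method.split_once('.')
}

fn dispatch(method: &str) -> String {
    match route(method) {
        // Reserved namespace for server/context management calls.
        Some(("context", rest)) => format!("core management handles {rest}"),
        // Everything else goes to the named domain's handler.
        Some((domain, rest)) => format!("{domain} domain handles {rest}"),
        None => "unroutable method".to_string(),
    }
}

fn main() {
    println!("{}", dispatch("identity.User.get"));
    println!("{}", dispatch("context.list"));
}
```

With this kind of prefix routing, one socket or several sockets is purely a deployment choice.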
The `osis` crate name is confusing because "OSIS" means different things at different levels — as a concept (Object Storage with Indexing & SmartID), as a crate (the storage engine), and as a service (hero_osis, the backend).
Proposal: Rename the crate to `hero_sdk_storage` (or just `storage` within the workspace). It contains: DBTyped, SmartID, OTOML persistence, Tantivy indexing, and the CRUD dispatch layer. The name "OSIS" survives as the overarching concept but not as a crate name.
C. Should hero_sdk models include business logic or just types?
Two options:
1. Types + CRUD only — plain data models with generated storage operations.
2. Standard service methods with business logic (e.g. `ChatService.send_message()`, `UserService.authenticate()`).
Proposal: Option 1 for now. Types + CRUD is the 80/20 — it covers most use cases and keeps hero_sdk lean. Services that need custom logic implement it in their own crate. We can revisit adding standard service methods later.
D. What about the monolith risk?
Moving 16 domains + schemas + generators + server into one repo makes hero_sdk large. Mitigations:
E. Cross-domain wiring (AI + Flow, etc.)
Currently hero_osis_server has explicit wiring: `ai.wire_flow_domain(flow)`. With the new architecture, this becomes an explicit registration step. Or better: domains declare their optional dependencies, and the server auto-wires when both are present.
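One way the declared-dependencies idea could look, as a self-contained sketch (trait and function names are assumptions, not the real hero_sdk API):

```rust
use std::collections::HashSet;

trait Domain {
    fn name(&self) -> &'static str;
    /// Domains this one can optionally use when present.
    fn optional_deps(&self) -> &'static [&'static str] { &[] }
}

struct Ai;
impl Domain for Ai {
    fn name(&self) -> &'static str { "ai" }
    fn optional_deps(&self) -> &'static [&'static str] { &["flow"] }
}

struct Flow;
impl Domain for Flow {
    fn name(&self) -> &'static str { "flow" }
}

/// Auto-wire: connect each registered domain to the optional deps that are
/// actually present, instead of manual wire_* calls in main.rs.
fn auto_wire(domains: &[&dyn Domain]) -> Vec<(String, String)> {
    let present: HashSet<&str> = domains.iter().map(|d| d.name()).collect();
    let mut wires = Vec::new();
    for d in domains {
        for dep in d.optional_deps() {
            if present.contains(dep) {
                wires.push((d.name().to_string(), dep.to_string()));
            }
        }
    }
    wires
}

fn main() {
    println!("{:?}", auto_wire(&[&Ai, &Flow]));
}
```

If `flow` is not registered, the `ai` domain simply gets no wire and falls back to whatever its unwired behavior is.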
Migration Path
This doesn't need to happen all at once. Suggested phases:
Phase 1: Rename + restructure hero_rpc to hero_sdk
- Rename the `osis` crate to `storage`
- Merge `server` + `service` into a unified `server` crate
Phase 2: Move models from hero_osis to hero_sdk/models
Phase 3: Simplify the HeroServer API (`.with_domain()`, `.with_ui()`, `.run()`)
Phase 4: Rename hero_osis to hero_core
Phase 5: Migrate existing services
Questions for Discussion
1. Do we want `hero_sdk` to be the repo name, or keep `hero_rpc` as the repo and use `hero_sdk` as the crate name? (I lean toward a full rename for clarity.)
2. Should the unified HeroServer default to one socket per service, or keep the per-domain socket pattern?
3. For hero_core: should it expose ALL models by default, or should users compose their own hero_core with selected features?
4. How do we handle the hero_embedder dependency? It requires ONNX Runtime, which is heavy. Keep it as a separate opt-in, or integrate into hero_sdk behind a feature gate?
5. Timeline preference: big-bang migration or incremental phases?
Few comments:
1: likewise, there should be a $service_client crate that also embeds the client for the hero server with the domains imported, and can also have custom domains added, just like the server. this could even be hero_client. it would be great if generation of custom schemas and integration into client and server is parallel. maybe, for the client, we can even use some macro or derive to pass it the domains we want from hero_sdk plus those generated from our custom schemas, and just say: i want a client that supports all this because my server has all this. opinions?
2: about sockets, we want three sockets (sort of, see below about server) by default to begin with:
~/hero/var/sockets/my_service.sock
~/hero/var/sockets/my_service_ui.sock
~/hero/var/sockets/my_service_server/*/*.sock
the `*`s in the _server socket are: we want `...service_server/$context/$domain`, where the domain is the model's domain and the context is the context. there are two purposes here: 1. context namespacing, which has many benefits. 2. a modular per-domain openrpc server, which helps a lot, especially with context-limited agents using them.
the _ui is simply the server's ui endpoint, and service.sock can be like a generic service screen showing sockets, health, domains, service description, etc.
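The path conventions above can be sketched as plain path-building helpers (function names are made up; only the layout comes from the comment):

```rust
use std::path::PathBuf;

fn service_socket(base: &str, service: &str) -> PathBuf {
    PathBuf::from(base).join(format!("{service}.sock"))
}

fn ui_socket(base: &str, service: &str) -> PathBuf {
    PathBuf::from(base).join(format!("{service}_ui.sock"))
}

/// Per-context, per-domain server socket:
/// {base}/{service}_server/{context}/{domain}.sock
fn domain_socket(base: &str, service: &str, context: &str, domain: &str) -> PathBuf {
    PathBuf::from(base)
        .join(format!("{service}_server"))
        .join(context)
        .join(format!("{domain}.sock"))
}

fn main() {
    let base = "/home/user/hero/var/sockets";
    println!("{}", service_socket(base, "my_service").display());
    println!("{}", ui_socket(base, "my_service").display());
    println!("{}", domain_socket(base, "my_service", "0", "delivery").display());
}
```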
as for B. Where does osis (storage/indexing) live?
let's just have it in hero_sdk as hero_sdk_osis (instead of storage) as you proposed. it's ok to keep the osis name, because now this will be the only osis, since hero_osis becomes hero_core
c: option 2
d: agreed. just make sure not to make a mess out of feature gating. we just want per-domain feature gating, and perhaps one for all domains, and the default is no domains.
e: i don't exactly get E. Cross-domain wiring (AI + Flow, etc.), elaborate.
as for migration path, i suggest: 1, 2, 3, 4 all at once (all except repo renaming, to keep backward compatibility), working on a branch called development_rethinking and importing that branch as a dep. then i suggest we start with an example service, like hero_ride or hero_food or something. then we can use that example to make sure things are as we desire, and then one by one migrate the remaining services
as for qs:
1, 2, 3 should already be answered. 4: hero_embedder is another hero service, and hero_osis should be using an openrpc client for hero_embedder to work against that service, for the reasons you mention, as it should also be doing against hero_indexer. 5 should also be answered
Great feedback, this crystallizes things. Let me address each point, elaborate on cross-domain wiring, and then lay out a concrete implementation plan with the food delivery example.
1. Client Composition: `hero_client!` macro
Agreed — the client should mirror the server. Since each domain already generates its own typed client (from oschema codegen), the service client is just a compositor that holds all domain clients and knows the socket paths by convention.
Proposed macro:
This generates:
Usage:
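Since the proposed macro and its generated code are not shown here, a toy `macro_rules!` compositor can make the idea concrete (all names are hypothetical; the actual `hero_client!` would be generated from oschema, not written like this):

```rust
struct DomainClient {
    name: &'static str,
    token: Option<String>,
}

impl DomainClient {
    fn new(name: &'static str) -> Self {
        Self { name, token: None }
    }
}

// Generates a struct holding one typed client per listed domain,
// plus shared token management across all of them.
macro_rules! hero_client {
    ($client:ident { $($field:ident),+ $(,)? }) => {
        struct $client {
            $(pub $field: DomainClient,)+
        }

        impl $client {
            fn new() -> Self {
                Self { $($field: DomainClient::new(stringify!($field)),)+ }
            }

            /// Shared token management: propagate to every domain client.
            fn set_token(&mut self, token: &str) {
                $(self.$field.token = Some(token.to_string());)+
            }
        }
    };
}

hero_client!(HeroFoodClient { delivery, restaurant });

fn main() {
    let mut client = HeroFoodClient::new();
    client.set_token("secret");
    println!("{} {}", client.delivery.name, client.restaurant.token.is_some());
}
```

The real generated compositor would additionally know each domain's socket path by convention.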
Parallelism: Since client generation comes from oschema (the same pipeline that generates server handlers), adding a new domain to both server and client is just one line each — add the domain to `HeroServer::new().with_domain()` and to `hero_client!{ custom_domains: [...] }`. The oschema build.rs generates both server handlers AND client code from the same schema.
2. Socket Strategy — confirmed
Benefits:
- Agents can connect to just `delivery.sock`, not the whole service. Context-limited scoping.
- Each domain socket serves its own `/openrpc.json` — clean, focused specs
The `hero_food.sock` top-level socket serves as a service registry — listing all contexts, domains, health status, and socket paths. Lightweight introspection endpoint.
Agreed on single binary. Here's the concrete structure:
main.rs (~15 lines):
build.rs:
Example schema — `schemas/delivery.oschema`:
Custom business logic — `src/models/delivery/mod.rs`:
`client/mod.rs`:
That's it. The entire service is:
Compare to today: 5-7 crates, 150+ lines Makefile, manual lifecycle wrapper, manual socket handling, manual route registration.
Elaboration on E: Cross-Domain Wiring
Today in hero_osis_server/main.rs there's this:
What this does: the AI domain handler receives an `Arc<FlowDomain>` reference so it can call `self.flow_domain.execute_workflow(workflow_id, params)` directly in-process — no network hop, no serialization. When an AI agent decides to run a workflow, it bypasses the Flow domain's socket and calls the handler directly via the Arc pointer.
The question was: in the new architecture with per-domain sockets, how should domains that need each other communicate?
Answer: Since we're standardizing on per-domain sockets and you've confirmed that hero_embedder/hero_indexer should use OpenRPC clients (service-to-service over sockets), the consistent approach is: domains within the same service also communicate via their domain sockets.
Example: AI domain wants to execute a Flow workflow:
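As a sketch, the cross-domain call is just a JSON-RPC 2.0 request written to the Flow domain's socket; here only the request construction, with an assumed method name and no transport:

```rust
/// Build a JSON-RPC 2.0 request body by hand (a real client would use a
/// serializer and write this to the domain's Unix socket).
fn jsonrpc_request(id: u64, method: &str, params: &str) -> String {
    format!(
        r#"{{"jsonrpc":"2.0","id":{id},"method":"{method}","params":{params}}}"#
    )
}

fn main() {
    // AI domain asking the Flow domain to run a workflow:
    let req = jsonrpc_request(1, "flow.Workflow.execute", r#"{"workflow_id":"wf-1"}"#);
    println!("{req}");
}
```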
Why this is fine:
When in-process might still make sense: hot paths where even microseconds matter (e.g., a loop processing thousands of items). For those rare cases, we can keep an optional `.with_wiring()` escape hatch. But the default should be socket-based.
Implementation Plan
All on the `development_rethinking` branch, imported as a git dep. No repo renaming yet (backward compat).
Step 1: Restructure hero_rpc workspace into hero_sdk layout
What changes:
- `crates/osis/` → `crates/hero_sdk_osis/` (the storage/db/index layer)
- `crates/server/` + `crates/service/` → `crates/server/` (unified HeroServer)
- New `crates/models/` (empty, scaffold for domain models)
- New `hero_sdk` top-level crate that re-exports everything via `prelude`
Files touched: Cargo.toml (workspace), each crate's Cargo.toml, internal `use` paths. No behavioral change — just reorganization.
Step 2: Build the unified HeroServer API
What changes:
- `HeroServer` builder: `.new(service_name)` initializes config; `.with_sdk_domain::<D>()` registers a domain from hero_sdk models; `.with_domain::<D>()` registers a custom domain; `.with_ui(router)` adds a UI router; `.run()` creates all sockets and starts listening
- Sockets: `{service}.sock` (service info), `{service}_ui.sock` (UI), `{service}_server/{context}/{domain}.sock` (per-domain)
- `/health`, `/openrpc.json`, `/.well-known/heroservice.json` per domain socket
- Lifecycle flags (`--start`, `--stop`, `--status`)
- `HeroDomain` trait — what a domain must implement to be registered
Step 3: Move models from hero_osis → hero_sdk/models
What changes:
- `hero_osis/schemas/` → `hero_sdk/schemas/`
- Generated models in `hero_sdk/crates/models/`: `identity`, `communication`, `calendar`, etc.
- An `all-domains` feature enables all.
- hero_osis uses `hero_sdk/models` instead of having its own
Step 4: Build the `hero_client!` macro
What changes:
- New macro in `hero_sdk/crates/derive/`
- Accepts `sdk_domains` and `custom_domains`
Step 5: Create hero_food example service
What changes:
- New `hero_food` service (or a directory within hero_sdk/examples/)
- Uses the `hero_client!` macro
Step 6: Slim hero_osis → hero_core
What changes:
- hero_osis becomes a thin service depending on `hero_sdk` with `all-domains`
Order of execution:
Steps 1-4 happen in the hero_rpc repo on the `development_rethinking` branch.
Step 6 happens after step 5 validates the DX.
Questions (if any remain)
1. For `hero_client!` — should it also generate a WASM-compatible client automatically (for use in hero_archipelagos/Dioxus)? The current `OsisClient` already has `#[cfg(target_arch = "wasm32")]` paths. We could make the generated client work in both native and WASM out of the box.
2. For the service info socket (`hero_food.sock`) — what should it serve beyond health/discovery? Should it support administrative commands like listing active contexts, showing domain stats, or triggering context imports/exports? (Currently OServer's core socket does some of this.)
3. For the standard service methods (option 2 — business logic in hero_sdk models): how much logic should live in hero_sdk vs in individual services? For example, `ChatService.send_message()` requires knowing about message routing, notifications, etc. Should hero_sdk provide a "default" implementation that services can override, or just the interface/trait that services must implement?
yes, not as a separate client but as one unified client. there used to be code for this, like a unified client generator, which can be used for inspiration. target or feature gating (whichever is more appropriate; for wasm i'd say target) should prevent bloating from this additional infra.
for now, can remain. but, something i forgot to mention which reminds me: contexts will be managed in the hero_core service's root context. the root context is where contexts are added and removed, etc. this means, when a context is created, there should be a way to make a service aware of the new context, so it can dynamically listen to it. but perhaps instead of this being an automatic trigger, users can manually add a context to the service, since not all services need to be aware of all contexts. as such, perhaps that service sock can offer that functionality of adding/removing contexts.
hero_sdk is meant to offer generic logic and models for your entire digital life, including basic stuff like send_message. any application-specific logic should be defined in the app. instead of "hero_sdk provides a default implementation that services can override, or just the interface/trait that services must implement", i would put it as: hero_sdk provides a basic implementation that services can extend. we don't want them to override, because we want the base functionality to be generic enough that it can cater to all apps. like pay_for_order should use a common make_transaction (or whatever it is) service, and add more to it, not override it
Changed title from "Rethinking hero RPC, OSIS and backend architecture" to "Rethinking hero RPC, OSIS and backend architecture — hero_sdk + hero_core".
Implementation Complete — `development_rethinking` branch
The initial restructuring is pushed to `development_rethinking`. The full workspace compiles clean (zero warnings).
What was done
1. Crate renaming (`hero_rpc_*` → `hero_sdk_*`)
- `hero_sdk_oschema`, `hero_sdk_derive`, `hero_sdk_generator`, `hero_sdk_osis`, `hero_sdk_openrpc`, `hero_sdk_client`, `hero_sdk_server`, `hero_sdk_models`
2. Structural changes
- `crates/openrpc_http_client_lib/` → `crates/client/` (hero_sdk_client)
- `crates/service/` merged into `crates/server/` (hero_sdk_server now contains both the OServer and HeroServer APIs)
- New `crates/models/` (hero_sdk_models) — scaffold ready for domain model migration
- New `crates/hero_sdk/` — top-level re-export crate with prelude
3. HeroServer builder API (`crates/server/src/builder.rs`)
Socket convention: `{service}.sock` (info), `{service}_ui.sock` (UI), domain sockets via DomainServer.
4. hero_food example (`example/hero_food/`)
- Domains: `delivery` (Order, Driver, DeliveryZone) and `restaurant` (Restaurant, MenuItem)
5. Backward compatibility
What's next (follow-up tasks)
- Switch to `{service}_server/{context}/{domain}.sock` instead of the legacy `hero_db_{ctx}_{domain}.sock`
- Auto-generate `HeroDomain` impls so manual bridge code is unnecessary
All core infrastructure is in place. The remaining tasks are incremental improvements on this foundation.
Follow-up: Additional improvements pushed
Second commit on `development_rethinking` addresses all remaining items:
1. `hero_client!` macro (`crates/client/src/lib.rs`)
Composes domain clients into a typed service client — works on both WASM and native.
Shared token management: `client.set_token(token)` propagates to all domain clients.
2. Multi-context domain registration fixed
Changed `DomainFactory` from `FnOnce` to `Fn` so each domain is registered across all requested contexts (it was previously limited to the first context only).
3. New socket path convention
The HeroServer builder now uses the new per-context, per-domain socket paths. The legacy `OServer` API still uses `hero_db_{ctx}_{domain}.sock` for backward compatibility. Added `DomainServer::spawn_at(socket_path, ...)` for custom socket paths.
4. Auto-generated `HeroDomain` impls
The code generator now emits `HeroDomain` impls, so no manual bridge code is needed — the hero_food example's main.rs is now clean.
Summary — all agreed items implemented
Remaining for future PRs:
- Fix the `use super::core::*;` path when files are in a `server/` subdirectory
Currently there is a lot of mess in generated code, and also in the repo.
we need to get rid of the deprecated old crates in this repo and update the readme so it actually becomes the new hero_sdk repo. So far we've been appending without modifying existing stuff. Let's rename example to examples, get rid of old examples, only keep hero_food, and clean up crates as necessary. Like, why do we have all these crates still? Are they all necessary for the rethought hero_rpc?
for the hero_food example in examples: why do we have the schema in both docs/schemas and schemas? why do we have generated code in both the /core and /server folders, but the same files also in root? we should have a clean, unified way of keeping generated code; i'd suggest flat per domain to keep it simple. also, types_generated_wasm and types_generated can be merged into one and target gated.
Code in generated files, such as the shared `CrudError` enum, should be imported from hero_sdk; no need for each domain to define the same.
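A self-contained sketch of what such a shared error type could look like (variant names are illustrative, not the actual hero_sdk definition):

```rust
use std::fmt;

/// Defined once in the SDK; generated domain code would just re-export it,
/// e.g. `pub use hero_sdk_osis::rpc::CrudError;`.
#[derive(Debug, PartialEq)]
pub enum CrudError {
    NotFound(String),
    Validation(String),
    Storage(String),
}

impl fmt::Display for CrudError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            CrudError::NotFound(id) => write!(f, "not found: {id}"),
            CrudError::Validation(msg) => write!(f, "validation failed: {msg}"),
            CrudError::Storage(msg) => write!(f, "storage error: {msg}"),
        }
    }
}

impl std::error::Error for CrudError {}

fn main() {
    println!("{}", CrudError::NotFound("user:42".into()));
}
```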
Third commit on `development_rethinking`: generator fix + hero_sdk_models scaffold
1. Fixed generator import paths for server/ subdirectory layout
Problem: Generated `osis_server_generated.rs` used `use super::core::*;`, which breaks when the file is inside a `server/` subdirectory (since `super` points to `server`, not the domain module).
Fix: The generator now uses absolute `use crate::{domain}::core::*;` imports — matching the pattern already used by `rpc_generated.rs`. Works in both layouts:
- `src/{domain}/server/osis_server_generated.rs`
- `server_crate/src/{domain}/osis_server_generated.rs`
2. Generator now writes proper subdirectory layout
The generator previously wrote all files flat to `src/{domain}/`, but the build system's `mod.rs` expected `core/` and `server/` subdirectories. Now:
- `src/{domain}/core/` (types, openrpc, mod.rs)
- `src/{domain}/server/` (osis_server_generated, rpc_generated, rpc, mod.rs)
3. Target-aware domain module generation
`generate_domain_mod()` now checks the generation target: models-only builds get `pub mod core;` (no server module); full builds get both `pub mod core;` and `pub mod server;`.
4. hero_sdk_models scaffolded with all 18 domains
Copied all schemas from hero_osis (57 .oschema files across 18 domains): `ai`, `base`, `business`, `calendar`, `code`, `communication`, `embedder`, `files`, `finance`, `flow`, `identity`, `job`, `ledger`, `media`, `money`, `network`, `projects`, `settings`.
- `build.rs` generates models-only (types + WASM types)
- One feature per domain, plus an `all-domains` meta-feature
Updated status
Remaining for separate PRs:
Addressed the cleanup items from comment #16495 in commit `c5fff2b`:
1. Repo cleanup
- Removed `crates/service/` and the old examples (`petstore_client`, `petstore_server`, `recipe_server`)
- Removed legacy scripts (`build.sh`, `run.sh`, `install.sh`, `buildenv.sh`)
- Renamed `example/` → `examples/`, updated workspace members
- Removed stale directories (`docs/`, `sdk/`)
2. Flat generated code layout
- No more `core/` and `server/` subdirectories — all generated files now live directly in `src/{domain}/`
- Updated the generator (`generate.rs`, `build.rs`, `rust_osis.rs`, `rust_rpc.rs`) to produce the flat layout by default
- `mod.rs` uses target-gated type imports (native `types.rs` vs WASM `types_wasm_generated.rs`)
3. Shared CrudError
- Moved the `CrudError` enum to `hero_sdk_osis::rpc`
- Generated code now does `pub use hero_sdk_osis::rpc::CrudError;` instead of an inline definition
- Re-exported from the `hero_sdk` prelude
4. README
- Rewritten for the `hero_sdk` architecture, with current crate names and the flat layout
All changes verified: `cargo check --workspace` passes cleanly; `cargo test -p hero_sdk_generator --lib` (103/105 pass, 1 pre-existing scaffold failure, 1 ignored); `cargo test -p hero_sdk_osis --lib` (64/64 pass).
Few other improvements:
Addressed comment #16502 in commit `2dda443`:
- Removed `docs/schemas/` generation — the generator no longer produces `docs/schemas/` directories with README/schema.md/html files. The `openrpc.json` in `src/{domain}/` is the only generated spec artifact.
- Removed the `src/services/` placeholder — the delivery and restaurant domains ARE the custom services; no separate services module is needed. Removed from the generator (`generate_services_placeholder`, `pub mod services;` in `lib.rs`), hero_food, and models.
- Added `.gitignore` entries for `docs/schemas/` and `sdk/` as build artifacts.
- `schemas/` directories (source `.oschema` files) are kept — only the generated `docs/schemas/` output was removed.
Part 1 complete: hero_food imports hero_sdk_models (`f89001d`)
- Added the `hero_sdk_models` dep with the `identity` feature
- Uses `hero_sdk_models::identity::Address` for structured address parsing
- No remaining `todo!()` stubs
- `cargo check --workspace` clean, all RPC methods tested end-to-end
Moving to Part 2: the hero_osis `development_rethinking` branch migration.
Related: zinit for lifecycle and development #7
Updated Socket Strategy — Context via Headers, Not Socket Directories
Based on recent architectural changes in hero_skills (hero_sockets, hero_context, hero_proc_sdk skills), the socket-per-context model proposed in this issue needs to be revised.
What Changed
The updated `hero_sockets` skill (in the hero_skills repo) establishes:
- Socket layout: `$HERO_SOCKET_DIR/<service_name>/rpc.sock`, plus `ui.sock`, `rest.sock`, `resp.sock`, `web_<name>.sock`
- Context passed per request via `X-Hero-Context: <integer>` — not via separate sockets
- `X-Hero-Claims: admin,users.read` — authorization capabilities
The `hero_context` skill defines a 3-dimension security model. Trust model: missing claims header = FULL TRUST (internal call). Claims present = restricted.
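The header semantics can be sketched in isolation (a plain map stands in for real HTTP headers; function names are made up):

```rust
use std::collections::HashMap;

/// X-Hero-Context: integer context id; missing or unparsable = context 0.
fn hero_context(headers: &HashMap<String, String>) -> u32 {
    headers
        .get("X-Hero-Context")
        .and_then(|v| v.parse().ok())
        .unwrap_or(0)
}

/// X-Hero-Claims: None = FULL TRUST (internal call); Some(claims) = restricted.
fn hero_claims(headers: &HashMap<String, String>) -> Option<Vec<String>> {
    headers
        .get("X-Hero-Claims")
        .map(|v| v.split(',').map(|c| c.trim().to_string()).collect())
}

fn main() {
    let mut h = HashMap::new();
    h.insert("X-Hero-Context".to_string(), "7".to_string());
    h.insert("X-Hero-Claims".to_string(), "admin,users.read".to_string());
    println!("{} {:?}", hero_context(&h), hero_claims(&h));
}
```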
Impact on hero_sdk Rethinking
Before (proposed in this issue):
After (aligned with updated skills):
Why This Is Better
- Flat discovery: `$HERO_SOCKET_DIR/*/rpc.sock`, no nested directories
- Namespaced methods (`identity.get`, `delivery.create`) on a single `rpc.sock`, or optionally separate domain sockets in the same service directory
Revised HeroServer API
Context is extracted from the `X-Hero-Context` header in each request, not from which socket was connected to. The server routes to the correct context-scoped storage internally.
What Stays the Same
Action Items
- Bind `rpc.sock` + `ui.sock` in the service directory
- Parse the `X-Hero-Context` header in RPC dispatch
Implementation Plan — Socket Strategy Alignment (`development_13`)
hero_socketsandhero_contextskills in hero_skills, here's the concrete plan to align hero_rpc with the new architecture. This builds on the existingdevelopmentbranch (which already hasHERO_SOCKET_DIRsupport incrates/service/) and addresses the action items from comment #17993.Current State Analysis
Already aligned (in
crates/service/):HeroRpcServer→ binds$HERO_SOCKET_DIR/<service_name>/rpc.sock✅HeroUiServer→ binds$HERO_SOCKET_DIR/<service_name>/ui.sock✅HERO_SOCKET_DIRenv var ✅/health,/openrpc.json,/.well-known/heroservice.json✅Needs alignment (in
crates/server/— OServer):hero_db_core.sock(old flat naming, should be in service dir)hero_db_{context}_{domain}.sock(should be singlerpc.sock)X-Hero-Contextheader extraction (context is in socket path, not header)X-Hero-Claimsheader extraction (no claims-based auth)Needs alignment (in
crates/osis/— RequestContext):hero_context: u32fieldhero_claims: Option<Vec<String>>fieldX-Hero-Context/X-Hero-ClaimsparsingChanges
1. Extend
RequestContextwith Hero Context & ClaimsFile:
crates/osis/src/rpc/request_context.rsAdd fields per the
hero_contextskill spec:Update
from_headers()to parse:X-Hero-Context→ integer (default 0)X-Hero-Claims→Some(vec![...])if present,Noneif missing (= full trust)X-Forwarded-Prefix→ optional stringAdd authorization helper:
2. Unify OServer to Single
rpc.sockPer ServiceFiles:
crates/server/src/server/config.rs,server.rs,core_server.rs,domain_server.rsChange OServer from spawning N+1 sockets (1 core + N domain) to a single
rpc.sock:core_socket()anddomain_socket()withservice_rpc_socket(name)that returns$HERO_SOCKET_DIR/<name>/rpc.sockrpc.sockidentity.User.getgoes to identity domain,context.listgoes to core managementX-Hero-Contextheader → determines which context's storage to useBefore:
After:
3. Multi-Context Storage via Headers
Instead of separate sockets per context, the server determines context from
X-Hero-Context:~/hero/var/osisdb/{context_id}/{domain}/4. Update recipe_server Example
Update the recipe_server example to demonstrate:
rpc.sockbinding5. Update HeroLifecycle for hero_proc Self-Start
Align
start()with thehero_proc_service_singlebinskill:restart_service()(idempotent) instead of separateservice_set+service_startkill_otherfor socket cleanuphealth_checkswithopenrpc_socketis_process()on actionsWhat's NOT Changing
crates/service/(HeroRpcServer/HeroUiServer) — already alignedOrder of Implementation
Implementing on
development_13branch. Will push after each significant change.First commit pushed to
development_13— Socket Strategy AlignmentBranch:
development_13— commita0f4a08What was implemented
1. RequestContext extended (
crates/osis/src/rpc/request_context.rs)hero_context: u32— fromX-Hero-Contextheader (default 0 = admin)hero_claims: Option<Vec<String>>— fromX-Hero-Claimsheader (None = FULL TRUST)forwarded_prefix: Option<String>— fromX-Forwarded-Prefixheaderis_trusted(),has_claim(claim),context_name()2. Unified single-socket OServer (
crates/server/src/server/unified_server.rs)UnifiedServerBuilder— accumulates domain registrations, serves through singlerpc.sockcontext.list,domain.list, etc.) on ONE socketrecipe.list→ recipes domain,context.list→ managementX-Hero-Contextheader, NOT from socket path3. OServerConfig updated (
crates/server/src/server/config.rs)rpc_socket(name)→$HERO_SOCKET_DIR/<name>/rpc.sockui_socket(name)→$HERO_SOCKET_DIR/<name>/ui.sockservice_socket_dir(name)→$HERO_SOCKET_DIR/<name>/core_socket()anddomain_socket()deprecated but kept4. HeroLifecycle aligned with hero_proc self-start (
crates/service/src/lifecycle.rs)start_with_overrides()now usesrestart_service()(idempotent)kill_otherwith socket cleanup (rpc.sock)health_checkswithopenrpc_sockethealth checkis_process()on actionsstop()usesstop_service()with timeout5. Examples updated
Socket layout change
Before:
After:
Backward compatibility
core_server.rsanddomain_server.rskept but deprecatedOServer::register()API unchanged — works the same but routes to unified socket#[deprecated]Test results
hero_rpc_osis: 68/68 pass ✅hero_rpc_generator: 103/103 pass ✅cargo check --workspace: clean (only 2 expected deprecation warnings) ✅hero_skills Compliance Fix (commit
17478db)Cross-checked implementation against all 5 hero_skills SKILL.md files. Fixed:
Issues Found & Fixed
Discovery manifest field name (
unified_server.rs)socket_type→socketper hero_sockets spec/.well-known/heroservice.jsonendpoint now returns{"socket": "rpc"}(notsocket_type)Missing
.interpreter("exec")on action builder (lifecycle.rs).interpreter("exec")on all daemon actionsRecipe server example updated for hero_proc (
example/recipe_server/src/main.rs)OServer::run_cli()withHeroLifecycleinstead of bareOServer::new().run()start,stop,status,logs,servesubcommandsCompliance Checklist ✅
HERO_SOCKET_DIRrespected, defaults to~/hero/var/sockets$HERO_SOCKET_DIR/<service>/rpc.sock0o660POST /rpc,GET /health,GET /openrpc.json,GET /.well-known/heroservice.jsonX-Hero-Context(integer, default 0),X-Hero-Claims(None=trust),X-Forwarded-Prefixrestart_service()idempotent,kill_otherwith socket cleanup,.is_process(),.interpreter("exec")openrpc_socketDocs & Example Alignment (commit
b896338)Full alignment of docs and recipe_server example with hero_skills conventions:
Recipe Server Example
start/stop/serve) not flagsmake run→recipe_server start,make stop→recipe_server stop), removed manual PID/kill-server logic, uses correct socket path$HERO_SOCKET_DIR/recipe-server/rpc.sockX-Hero-Contextheader$HERO_SOCKET_DIRwith proper default fallbackRoot Docs
HeroLifecycle/OServer::run_cli()pattern, correctmain.rsexample withclapdep, hero_proc integration section, context headers, management methods, Rust version corrected to 1.93+~/hero/var/sockets/{context}/hero_recipes.sock, now$HERO_SOCKET_DIR/hero-recipes/rpc.sock), updated architecture & runtime flow for unified socket, added hero_skills and hero_proc referencesRemaining Note
hero_rpc uses CLI subcommands (
start/stop/serve) rather than the flags (--start/--stop) pattern from hero_skills. This is intentional — the subcommand pattern is richer (supportsstatus,logs,run,install, seed args, env overrides). Both patterns userestart_service()under the hood.Progress: CLI standardized to hero_skills singlebin pattern
Replaced the subcommand-based CLI (
start/stop/serve) with the hero_skills singlebin pattern (--start/--stop/bare) across the entire codebase:Code changes
crates/server/src/server/cli.rs—ServerClinow uses--start/--stopbool flags instead of subcommandscrates/server/src/server/server.rs—OServer::run_cli()dispatches on flags, not subcommandscrates/service/src/hero_server.rs—HeroServer,HeroRpcServer,HeroUiServerall use--start/--stopflags. InternalHeroCli<A>struct replaced from subcommand-based to flag-based.crates/service/src/lifecycle.rs—exec_command()no longer appendsservesubcommand (bare binary = foreground mode)crates/server/src/lib.rs— RemovedServerCommand/LifecycleCommandre-exportsExample & docs
example/recipe_server/— main.rs, Makefile, README all updated for--start/--stopGETTING_STARTED.md— All CLI examples use--start/--stop/bare patternREADME.md— Updated for foreground-onlycargo runTests
recipe_servercompiles cleanCommit:
eb53a1dScaffolder aligned with hero_skills conventions (
17cc961)The
hero_rpc_generatorworkspace scaffolder now generates projects that follow hero_skills patterns:_openrpc→_server,_http→_ui(matches naming convention skill)--start/--stopsinglebin pattern (matches selfstart skill)HeroUiServerfrom hero_service — Unix socket only, no raw TCP (matches hero_sockets skill)build_lib.shpattern with standard targets (matches build_lib skill)All 186 tests pass (105 generator + 13 server + 68 OSIS).
All work merged to development (branch `development_13` merged)
Summary of changes:
- Unified socket layout (single `rpc.sock` per service)
- `X-Hero-Context` header for context isolation
- `--start`/`--stop` singlebin CLI pattern (HeroServer, HeroRpcServer, HeroUiServer, OServer)
- Scaffolder: `_server`/`_ui` crate naming, Makefile + buildenv.sh generation, HeroUiServer template
- Namespaced methods (`recipeservice.get_by_category`) now properly dispatched
Remaining items for separate issues:
- Remove deprecated `core_socket()`/`domain_socket()` (code exists but unused)