Jan De Landtsheer 129f6d37bf
docs: ADR 003 (socket paths) + update ADR 002 status to Accepted
ADR 003: Configurable socket paths for my_os_server. Documents that
current ~/hero/var/sockets/hero_db_root_os.sock default is wrong for
MOS (system daemon, no home dir) and standard Linux (should use /run/
or $XDG_RUNTIME_DIR). Blocked on hero_rpc ADR 001 for the real fix
(OServerConfig path overrides). Phase 1: no code changes, document
the issue. Phase 2: adopt hero_rpc configurable paths once available.

ADR 002: Updated status from Proposed to Accepted. All items already
implemented during Phase 2 of initial build:
- 5 RAID methods (list_md_arrays, md_detail, create_md_array,
  stop_md_array, create_btrfs_raid) at vol_handler.rs:358-430
- mountinfo parser replacing /proc/mounts at vol_handler.rs:124-150
- find_mosdata_mount() using mountinfo::find_by_fstype("btrfs")
  at vol_handler.rs:1025-1051
2026-04-06 01:05:56 +02:00

my_os_server

Unified OS-level System Abstraction Layer (SAL) daemon for the Geomind MyceliumOS platform.

Exposes networking (net.*) and storage (vol.*) management over a single hero_rpc JSON-RPC 2.0 interface. One socket, one protocol, one ACL — regardless of whether the underlying node is a full MOS deployment or a standard Linux box.

Why

Before my_os_server, managing a node meant:

  • Two sockets — mosnetd for networking, mos_volmgrd for storage — each with its own line-delimited JSON-RPC protocol and no shared auth.
  • MOS-only — both daemons assume MOS boot conditions (ramdisk root, my_init, kernel params). No way to run on standard Linux for dev, CI, or non-MOS deployments.
  • No access control — any local process can call any method on either daemon.

my_os_server solves all three: single HTTP-over-UDS endpoint, dual-mode binaries (MOS proxy or native library calls), and hero_rpc ACL from day one.

Binaries

Two binaries, built from one workspace, exposing the same RPC interface:

Binary               Environment     How it works
my-os-server-native  Standard Linux  Calls mosnet-lib and mos_volmgr_common directly as libraries
my-os-server-mos     MOS nodes       Proxies to running mosnetd and mos_volmgrd daemons via hero_rpc Unix sockets

A consumer (hero_compute, hero_proc, CLI tools, future web UI) connects to the same socket and calls the same methods — it never knows which binary is behind it.
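
Consumers normally use hero_rpc's own OsisClient; purely to illustrate the wire format (HTTP/1.1 over a Unix domain socket carrying JSON-RPC 2.0), here is a minimal Python sketch. The socket path, the request path "/", and all names below are illustrative assumptions, not the hero_rpc API.

```python
import http.client
import json
import socket


class UDSConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a Unix domain socket instead of TCP."""

    def __init__(self, socket_path: str):
        super().__init__("localhost")  # host is unused; UDS carries the traffic
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)


def build_request(method: str, params: dict, req_id: int = 1) -> bytes:
    """Serialize a JSON-RPC 2.0 request body."""
    return json.dumps(
        {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    ).encode()


def call(socket_path: str, method: str, params: dict) -> dict:
    """POST one JSON-RPC request and return the decoded response."""
    conn = UDSConnection(socket_path)
    conn.request("POST", "/", body=build_request(method, params),
                 headers={"Content-Type": "application/json"})
    resp = json.loads(conn.getresponse().read())
    conn.close()
    return resp
```

With a daemon running, `call("~/hero/var/sockets/hero_db_root_os.sock", "net.interfaces", {})` (path expanded) would return the same result against either binary.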

Architecture

┌─────────────────────────────────────────────────────────────┐
│                       Consumers                             │
│  hero_compute  │  hero_proc  │  CLI tools  │  web UI       │
└───────┬────────┴──────┬──────┴──────┬──────┴──────┬────────┘
        └───────────────┴─────────────┴─────────────┘
                            │
                  hero_rpc OsisClient
                  (HTTP-over-UDS JSON-RPC 2.0)
                            │
                ┌───────────▼───────────┐
                │     my_os_server      │
                │                       │
                │  domain: root/os      │
                │  net.* + vol.*        │
                └───────────┬───────────┘
                            │
          ┌─────────────────┼─────────────────┐
          │                                   │
 ┌────────▼──────────┐             ┌───────────▼──────────┐
 │  MOS binary       │             │  Native binary       │
 │  (proxy mode)     │             │  (direct calls)      │
 │                   │             │                      │
 │  net.* → mosnet.* │             │  mosnet-lib          │
 │  vol.* → storage.*│             │  mos_volmgr_common   │
 └────────┬──────────┘             └──────────────────────┘
          │
 ┌────────┼──────────┐
 │                    │
mosnetd           mos_volmgrd
(upstream)        (upstream)

Workspace Layout

my_os_server/
├── Cargo.toml                          # workspace root
├── docs/
│   ├── PRD.md                          # product requirements document
│   └── adr/
│       ├── 001-implementation-plan.md  # 7-step execution plan
│       └── 002-volmgr-adr-008-009-sync.md
└── crates/
    ├── my_os_server_common/            # shared types, method registry, ACL config
    │   └── src/
    │       ├── lib.rs                  # method constants, Right enum, method→right mapping
    │       └── acl_config.rs           # TOML ACL loader, group/ace builder
    ├── my_os_server_native/            # native implementation binary
    │   └── src/
    │       ├── main.rs                 # OServer setup, OsDomain handler
    │       ├── net_handler.rs          # ~30 net.* methods via mosnet-lib
    │       ├── vol_handler.rs          # ~26 vol.* methods via mos_volmgr_common
    │       └── state.rs                # JSON state persistence (mounts, taps, fw, NAT)
    └── my_os_server_mos/               # MOS proxy binary
        └── src/
            ├── main.rs                 # OServer setup, OsProxyDomain handler
            └── proxy.rs               # namespace remapping + upstream forwarding

RPC Interface

Domain:   root/os
Socket:   ~/hero/var/sockets/hero_db_root_os.sock
Protocol: HTTP/1.1 over Unix domain socket, JSON-RPC 2.0

Networking Methods (net.*)

Read-only

Method                    Description
net.status                Daemon overview (pid, uptime, mode, IPs)
net.config                Resolved kernel/boot configuration
net.lease                 DHCPv4 + DHCPv6 + SLAAC + prefix delegation
net.prefix                Delegated IPv6 prefix state
net.vlan                  VLAN filtering configuration
net.health                Structured health check
net.verify                Drift detection (expected vs live state)
net.interfaces            Interfaces with addresses, link state, bridge membership
net.routes                Kernel routing table
net.neighbors             ARP/NDP neighbor table
net.bridge                Bridge state and ports
net.firewall              nftables tables, chains, rules
net.dns                   Nameservers, search domains
net.sysctl                Forwarding, proxy_arp, proxy_ndp per interface
net.conntrack             nf_conntrack entries (optional zone filter)
net.conntrack.stats       Conntrack statistics
net.nat.status            NAT state (boot config + dynamic rules)
net.vm.list               Managed VM tap interfaces
net.ovs                   OVS bridge/ports/interfaces (MOS only)
net.flows                 OpenFlow flow table (MOS only)
net.datapath              Datapath info and flows (MOS only)
net.ovs.conntrack         OVS datapath conntrack (MOS only)
net.ovs.conntrack.stats   OVS conntrack statistics (MOS only)

Write (requires Write right)

Method                    Description
net.lease.restart         Re-run DHCP DORA, reconfigure IP/routes
net.routes.add            Add a route
net.routes.delete         Delete a route
net.bridge.add            Create a bridge
net.bridge.delete         Delete a bridge
net.bridge.connect        Connect two bridges (patch ports / veth)
net.bridge.add_port       Add interface as bridge port
net.bridge.remove_port    Remove port from bridge
net.vm.create_tap         Create persistent tap, attach to bridge
net.vm.delete_tap         Delete managed tap interface
net.firewall.add_rule     Add input allow rule
net.firewall.remove_rule  Remove input allow rule
net.nat.add_dnat          Add DNAT port forward
net.nat.remove_dnat       Remove DNAT port forward
net.nat.add_snat          Add SNAT / masquerade
net.nat.remove_snat       Remove SNAT / masquerade

Admin (requires Admin right)

Method                    Description
net.conntrack.flush       Flush kernel conntrack entries
net.ovs.conntrack.flush   Flush OVS datapath conntrack (MOS only)

Storage Methods (vol.*)

Read-only

Method                    Description
vol.list_disks            All block devices
vol.list_partitions       Partitions with filesystem info
vol.list_filesystems      MOS-labeled filesystems
vol.list_mounts           Mount points (enriched mountinfo)
vol.list_subvolumes       Btrfs subvolumes on MOSDATA
vol.list_free_space       Unpartitioned space on a disk
vol.inventory             Full storage inventory snapshot
vol.feasible_topologies   Feasible topology templates
vol.usage                 Filesystem usage statistics
vol.device_stats          Btrfs device error counters
vol.scrub_status          Btrfs scrub status
vol.fs_usage              Detailed btrfs filesystem usage
vol.fs_df                 Btrfs filesystem df
vol.balance_status        Btrfs balance status
vol.list_md_arrays        Linux MD (software RAID) arrays
vol.md_detail             Detailed MD array information

Write (requires Write right)

Method                    Description
vol.create_partition      Create a GPT partition
vol.create_filesystem     Format (btrfs, ext4, vfat, swap, bcachefs)
vol.mount                 Mount a filesystem
vol.unmount               Unmount a path
vol.create_subvolume      Create a btrfs subvolume
vol.delete_subvolume      Delete a btrfs subvolume
vol.snapshot_subvolume    Snapshot a subvolume
vol.set_quota             Set qgroup quota
vol.scrub_start           Start a btrfs scrub
vol.balance_start         Start a btrfs balance
vol.defrag                Defragment a path
vol.resize_max            Resize filesystem to maximum

Admin (requires Admin right)

Method                    Description
vol.create_md_array       Create MD RAID-1 array
vol.stop_md_array         Stop an MD array
vol.create_btrfs_raid     Create btrfs RAID filesystem

Built-in Methods (hero_rpc standard)

Method         Description
rpc.discover   Aggregated OpenRPC 1.3.2 spec
rpc.health     {"status": "ok"}

Authentication and ACL

Uses hero_rpc's RequestContext and ACL system:

  • Identity: Public key (X-Public-Key header) or bearer token (Authorization: Bearer)
  • Rights: Admin > Write > Read (hierarchical)
  • Default: Anonymous local callers get Read; authenticated callers get per-group rights
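
The hierarchy above means a single comparison suffices for authorization: a holder of a higher right implicitly holds every lower one. A minimal sketch of that check (the real implementation is hero_rpc's ACL system; these names are illustrative):

```python
from enum import IntEnum


class Right(IntEnum):
    """Rights in ascending order, so ordinary integer comparison
    encodes the Admin > Write > Read hierarchy."""
    READ = 1
    WRITE = 2
    ADMIN = 3


def permits(granted: Right, required: Right) -> bool:
    """A granted right satisfies any requirement at or below its level."""
    return granted >= required
```

Under this scheme a caller in the "admins" group passes checks for Read, Write, and Admin methods alike, while an anonymous local caller (Read by default) is rejected from net.routes.add and everything above it.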

ACL configuration via TOML:

[[group]]
name = "admins"
users = ["<pubkey_hex>"]

[[group]]
name = "operators"
users = ["<pubkey_hex>"]
member_groups = ["admins"]

[[ace]]
right = "admin"
groups = ["admins"]

[[ace]]
right = "write"
groups = ["operators"]

[policy]
anonymous_local = "read"
anonymous_remote = "none"

Building

# Build both binaries
cargo build --release

# Binaries at:
#   target/release/my-os-server-native
#   target/release/my-os-server-mos

Workspace dependencies (path deps, not published):

Dependency         Path                                          Purpose
hero_rpc_server    ../../lhumina_code/hero_rpc/crates/server     RPC server framework
hero_rpc_osis      ../../lhumina_code/hero_rpc/crates/osis       JSON-RPC dispatch, RequestContext
hero_rpc_openrpc   ../../lhumina_code/hero_rpc/crates/openrpc    Transport layer (proxy forwarding)
mosnet-lib         ../mosnet/crates/mosnet-lib                   Networking operations (native binary)
mos_volmgr_common  ../mos_volmgr/crates/mos_volmgr_common        Storage operations (native binary)

System tools (native binary, non-fatal if missing):

btrfs, mount, umount, blkid, sgdisk, df, mkfs.btrfs, mkfs.ext4, mkfs.vfat, mkswap, mdadm, udevadm
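
"Non-fatal if missing" implies a startup probe that records which tools are on PATH and degrades the affected methods rather than aborting. A sketch of such a probe, using Python's shutil.which for illustration (the daemon itself is Rust; this function name and shape are assumptions):

```python
import shutil

# Tool list copied from the README.
SYSTEM_TOOLS = [
    "btrfs", "mount", "umount", "blkid", "sgdisk", "df",
    "mkfs.btrfs", "mkfs.ext4", "mkfs.vfat", "mkswap", "mdadm", "udevadm",
]


def probe_tools(tools):
    """Map each tool name to its resolved path, or None if not on PATH."""
    return {tool: shutil.which(tool) for tool in tools}


def missing_tools(tools):
    """Names of tools that could not be found; callers log these and
    continue, disabling only the methods that need them."""
    return [t for t, path in probe_tools(tools).items() if path is None]
```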

Running

# Native mode (standard Linux, no upstream daemons)
my-os-server-native --state-dir /var/lib/my_os_server -v

# Native mode with ACL
my-os-server-native --acl /etc/my_os_server/acl.toml -v

# MOS mode (proxy to mosnetd + mos_volmgrd)
my-os-server-mos -v

# Query via hero_rpc
echo '{"jsonrpc":"2.0","id":1,"method":"net.interfaces","params":{}}' | \
  hero-rpc-call ~/hero/var/sockets/hero_db_root_os.sock

State Management (Native Binary)

The native binary tracks daemon-managed resources in JSON state files under --state-dir:

Registry         Purpose
Mounts           Tracks filesystems mounted by this daemon (restored on restart)
TAP interfaces   VM taps created by net.vm.create_tap
Firewall rules   Dynamically added input rules
DNAT entries     Port forwarding rules
SNAT entries     Masquerade / source NAT rules

On startup, mount state is verified against live /proc/self/mountinfo — stale entries are pruned automatically.
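
That reconciliation step can be sketched as follows. In /proc/self/mountinfo the mount point is the fifth whitespace-separated field (per proc(5)); entries whose mount point no longer appears there are stale. The registry shape below is an assumption for illustration, not the daemon's actual JSON schema.

```python
# Two sample mountinfo lines in the kernel's format:
# ID parent major:minor root mount-point options [optional...] - fstype source super-options
SAMPLE_MOUNTINFO = """\
26 1 0:23 / / rw,relatime - btrfs /dev/sda2 rw
36 26 0:30 / /mnt/mosdata rw,noatime - btrfs /dev/sdb1 rw
"""


def live_mount_points(mountinfo_text: str) -> set:
    """Collect the mount-point column (5th field) from mountinfo text.
    Note: real mountinfo octal-escapes spaces in paths (e.g. \\040),
    which this sketch does not decode."""
    points = set()
    for line in mountinfo_text.splitlines():
        fields = line.split()
        if len(fields) >= 5:
            points.add(fields[4])
    return points


def prune_stale(registry: list, mountinfo_text: str) -> list:
    """Keep only registry entries whose mount point is still live."""
    live = live_mount_points(mountinfo_text)
    return [entry for entry in registry if entry["mount_point"] in live]
```

On a real system the daemon would read the text from /proc/self/mountinfo instead of a sample string.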

MOS Proxy — Namespace Remapping

The MOS binary remaps method namespaces when forwarding to upstream daemons:

Incoming   Upstream    Target daemon
net.*      mosnet.*    mosnetd (hero_db_root_mosnet.sock)
vol.*      storage.*   mos_volmgrd (hero_db_root_storage.sock)

If an upstream daemon is down, its methods return a structured error while the other domain continues working.
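
The remapping table reduces to a prefix rewrite. A sketch in Python (the actual implementation is proxy.rs; the function name here is illustrative):

```python
# Prefix pairs taken directly from the remapping table.
REMAP = {"net.": "mosnet.", "vol.": "storage."}


def remap_method(method: str) -> str:
    """Rewrite an incoming method name to its upstream namespace,
    rejecting methods outside the two proxied namespaces."""
    for incoming, upstream in REMAP.items():
        if method.startswith(incoming):
            return upstream + method[len(incoming):]
    raise ValueError(f"unknown namespace: {method}")
```

So an incoming net.interfaces call is forwarded to mosnetd as mosnet.interfaces, and vol.list_disks reaches mos_volmgrd as storage.list_disks.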

Upstream Prerequisites

This project required three prerequisite ADRs in upstream projects, all completed:

Step   ADR       Project     What
1      ADR 028   mosnet      Extract mosnet-lib from mosnet workspace
2      ADR 029   mosnet      Migrate mosnetd RPC to hero_rpc
3      ADR-007   mos_volmgr  Migrate mos_volmgrd RPC to hero_rpc

Additionally, ADR 002 tracks sync with mos_volmgr's ADR-008 (MD/btrfs RAID methods) and ADR-009 (mountinfo parser).

Implementation Status

Phase     Status    Description
Phase 1   Done      Workspace scaffold + native read-only binary
Phase 2   Done      Write operations + state management
Phase 3   Done      MOS proxy binary with namespace remapping
Phase 4   Done      ACL integration (TOML config, anonymous policy, per-method rights)
Phase 5   Planned   TCP/TLS remote access
Phase 6   Planned   CLI companion (my-os-ctl)

License

Part of the Geomind MyceliumOS project.