Compare commits

...

14 Commits

Author SHA1 Message Date
4cd8c54c44 Expand --report-current to show partitions and all mounts (except pseudo-filesystems) 2025-10-20 14:55:38 +02:00
224adf06d8 Standardize 2-copy bcachefs topology naming to 'bcachefs-2copy' across code and docs; align parser and Display; update docs and ADR 2025-10-20 14:21:47 +02:00
69370a2f53 config: reuse reserved label and GPT name constants 2025-10-13 10:35:31 +02:00
3d14f77516 mount: prefer boot disk ESP and run cargo fmt
* choose ESP matching the primary data disk when multiple ESPs exist,
  falling back gracefully for single-disk layouts
* keep new helper to normalize device names and reuse the idempotent
  mount logic
* apply cargo fmt across the tree
2025-10-10 14:49:39 +02:00
5746e285b2 Mount repeat fixes, update docs 2025-10-10 14:11:52 +02:00
cc126d77b4 mount: mount ESP at /boot alongside data runtimes
Add /boot root mount planning for VFAT ESP outputs, reuse the idempotent
mount logic to skip duplicates, and ensure /etc/fstab includes both
/var/mounts/{UUID} and the ESP when enabled.
2025-10-10 10:11:58 +02:00
285adeead4 mount: ensure idempotent results and persist runtime roots
* read /proc/self/mountinfo to detect already mounted targets and reuse their metadata
* reject conflicting mounts by source, allowing reruns without duplicating entries
* return mount results sorted by target for deterministic downstream behavior
* write subvolume and /var/mounts/{UUID} entries to fstab when requested
2025-10-10 10:07:48 +02:00
c8b76a2a3d Refine default orchestration flow and documentation
- Document defaults-only configuration, kernel topology override, and deprecated CLI flags in README
- Mark schema doc as deprecated per ADR-0002
- Warn that --topology/--config are ignored; adjust loader/main/context flow
- Refactor orchestrator run() to auto-select mount/apply, reuse state when already provisioned, and serialize topology via Display
- Add Callgraph/FUNCTION_LIST/ADR docs tracking the new behavior
- Derive Eq for Topology to satisfy updated CLI handling
2025-10-09 16:51:12 +02:00
d374176c0b feat(orchestrator,cli,config,fs): implement 3 modes, CLI-first precedence, kernel topo, defaults
- Orchestrator:
  - Add mutually exclusive modes: --mount-existing, --report-current, --apply
  - Wire mount-existing/report-current flows and JSON summaries
  - Reuse mount planning/application; never mount ESP
  - Context builders for new flags
  (see: src/orchestrator/run.rs:1)

- CLI:
  - Add --mount-existing and --report-current flags
  - Keep -t/--topology (ValueEnum) as before
  (see: src/cli/args.rs:1)

- FS:
  - Implement probe_existing_filesystems() using blkid to detect ZOSDATA/ZOSBOOT and dedupe by UUID
  (see: src/fs/plan.rs:1)

- Config loader:
  - Precedence now: CLI flags > kernel cmdline (zosstorage.topo) > built-in defaults
  - Read kernel cmdline topology only if CLI didn’t set -t/--topology
  - Default topology set to DualIndependent
  - Do not read /etc config by default in initramfs
  (see: src/config/loader.rs:1)

- Main:
  - Wire new Context builder flags
  (see: src/main.rs:1)

Rationale:
- Enables running from in-kernel initramfs with no config file
- Topology can be selected via kernel cmdline (zosstorage.topo) or CLI; CLI has priority
2025-09-30 16:16:51 +02:00
b0d8c0bc75 docs: sync with code (topologies, mount scheme, CLI flags, UEFI/BIOS, fstab) and fix relative src links in docs/ to ../src/ 2025-09-29 23:24:25 +02:00
7cef73368b topology: unify CLI and types::Topology (ValueEnum + aliases); bcachefs-2copy uses --replicas=2; update orchestrator call to make_filesystems(cfg); minor overlay fix; docs previously synced 2025-09-29 22:56:23 +02:00
2d43005b07 topology: add bcachefs-2copy; add bcachefs-single; rename single->btrfs-single; update planner, fs mapping, CLI, defaults, preview topo strings, README 2025-09-29 18:02:53 +02:00
cd63506d3c mount: use bcachefs -o X-mount.subdir={name} for subvolume mounts; update SAFETY notes; sync README and PROMPT with root/subvol scheme and bcachefs option 2025-09-29 17:41:56 +02:00
04216b7f8f apply mode: wire partition apply + mkfs; btrfs RAID1 flags and -f; UEFI detection and skip bios_boot when UEFI; sgdisk-based partition apply; update TODO and REGION markers 2025-09-29 15:10:57 +02:00
38 changed files with 7422 additions and 1050 deletions


@@ -24,27 +24,38 @@ Device Discovery
Partitioning Requirements
- Use GPT exclusively. Honor 1MiB alignment boundaries.
- For BIOS compatibility, create a small `bios_boot` partition (exact size TBD—assume 1MiB for now, placed first).
- On BIOS systems, create a small `bios_boot` partition (size 1MiB, placed first). When running under UEFI (`/sys/firmware/efi` present), the BIOS boot partition is suppressed.
- Create a 512MiB FAT32 ESP on each disk, label `ZOSBOOT`. Each ESP is independent; synchronization will be handled by another tool (out of scope). Ensure unique partition UUIDs while keeping identical labels.
- Remaining disk capacity is provisioned per configuration (see below).
- Before making changes, verify the device has no existing partitions or filesystem signatures; abort otherwise.
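The layout rules above can be sketched as the `sgdisk` invocation they imply. This is an illustrative stand-in, not the project's actual `apply_partitions` code: partition sizes, type codes, and the reserved GPT names follow the spec, and the `uefi` flag decides whether the 1MiB `bios_boot` partition is suppressed.

```rust
// Illustrative sketch: build sgdisk arguments for the GPT layout described
// above. Start position 0 lets sgdisk pick the next aligned sector.
fn sgdisk_args(disk: &str, uefi: bool) -> Vec<String> {
    let mut args = vec!["--zap-all".to_string()];
    let mut idx = 1;
    if !uefi {
        // BIOS boot partition: 1 MiB, type EF02, GPT name "zosboot"
        args.push(format!("--new={}:0:+1M", idx));
        args.push(format!("--typecode={}:EF02", idx));
        args.push(format!("--change-name={}:zosboot", idx));
        idx += 1;
    }
    // ESP: 512 MiB, type EF00, GPT name "zosboot"
    args.push(format!("--new={}:0:+512M", idx));
    args.push(format!("--typecode={}:EF00", idx));
    args.push(format!("--change-name={}:zosboot", idx));
    idx += 1;
    // Data: remainder of the disk, GPT name "zosdata"
    args.push(format!("--new={}:0:0", idx));
    args.push(format!("--change-name={}:zosdata", idx));
    args.push(disk.to_string());
    args
}

fn main() {
    let args = sgdisk_args("/dev/sda", true);
    // Under UEFI, the first created partition is the ESP.
    assert!(args.iter().any(|a| a == "--typecode=1:EF00"));
    println!("{}", args.join(" "));
}
```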
Filesystem Provisioning
- All data mounts are placed somewhere under `/var/cache`. Precise mountpoints and subvolume strategies are configurable.
- Mount scheme and subvolumes:
* Root mounts for each data filesystem at `/var/mounts/{UUID}` (runtime only). For btrfs root, use `-o subvolid=5`; for bcachefs root, no subdir option.
* Create or ensure subvolumes on the primary data filesystem with names: `system`, `etc`, `modules`, `vm-meta`.
* Mount subvolumes to final targets:
- `/var/cache/system`
- `/var/cache/etc`
- `/var/cache/modules`
- `/var/cache/vm-meta`
* Use UUID= sources for all mounts (never device paths).
* Subvolume options:
- btrfs: `-o subvol={name},noatime`
- bcachefs: `-o X-mount.subdir={name},noatime`
- Supported backends:
* Single disk: default to `btrfs`, label `ZOSDATA`.
* Two disks/NVMe: default to individual `btrfs` filesystems per disk, each labeled `ZOSDATA`, mounted under `/var/cache/<UUID>` (exact path pattern TBD). Optional support for `btrfs` RAID1 or `bcachefs` RAID1 if requested.
* Mixed SSD/NVMe + HDD: default to `bcachefs` with SSD as cache/promote and HDD as backing store, label resulting filesystem `ZOSDATA`. Alternative mode: separate `btrfs` per device (label `ZOSDATA`).
* Two disks/NVMe (dual_independent): default to independent `btrfs` per disk, each labeled `ZOSDATA`; root-mount all under `/var/mounts/{UUID}`, pick the first data FS as primary for final subvol mounts.
* Mixed SSD/NVMe + HDD: default to `bcachefs` with SSD as cache/promote and HDD as backing store, resulting FS labeled `ZOSDATA`. Alternative mode: separate `btrfs` per device (label `ZOSDATA`).
- Reserved filesystem labels: `ZOSBOOT` (ESP), `ZOSDATA` (all data filesystems). GPT partition names: `zosboot` (bios_boot and ESP), `zosdata` (data), `zoscache` (cache).
- Filesystem tuning options (compression, RAID profile, etc.) must be configurable; define sensible defaults and provide extension points.
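The per-filesystem option mapping above (btrfs uses `subvol=`/`subvolid=5`, bcachefs uses `X-mount.subdir=` and no subdir option at the root) reduces to a small lookup; a minimal sketch, with illustrative names:

```rust
// Sketch of the mount-option mapping described above.
enum FsKind { Btrfs, Bcachefs }

/// Options for mounting a named subvolume/subdir to its final target.
fn subvol_options(kind: &FsKind, name: &str) -> String {
    match kind {
        FsKind::Btrfs => format!("rw,noatime,subvol={}", name),
        FsKind::Bcachefs => format!("rw,noatime,X-mount.subdir={}", name),
    }
}

/// Options for the runtime-only root mount at /var/mounts/{UUID}.
fn root_options(kind: &FsKind) -> &'static str {
    match kind {
        FsKind::Btrfs => "rw,noatime,subvolid=5",
        FsKind::Bcachefs => "rw,noatime",
    }
}

fn main() {
    assert_eq!(subvol_options(&FsKind::Btrfs, "system"), "rw,noatime,subvol=system");
    assert_eq!(root_options(&FsKind::Bcachefs), "rw,noatime");
}
```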
Configuration Input
- Accept configuration via:
* Kernel command line parameter (name TBD, e.g., `zosstorage.config=`) pointing to a YAML configuration descriptor.
* Optional CLI flags when run in user space (must mirror kernel cmdline semantics).
* On-disk YAML config file (default path TBD, e.g., `/etc/zosstorage/config.yaml`).
- Establish clear precedence: kernel cmdline overrides CLI arguments, which override config file defaults. No interactive prompts inside initramfs.
- YAML schema must at least describe disk selection rules, desired filesystem layout, boot partition preferences, filesystem options, mount targets, and logging verbosity. Document the schema and provide validation.
* Kernel command line parameter `zosstorage.config=` pointing to a YAML configuration descriptor.
* Optional CLI flags when run in user space (mirror kernel cmdline semantics).
* On-disk YAML config file (default path `/etc/zosstorage/config.yaml`).
- Precedence: kernel cmdline overrides CLI arguments, which override config file, which override built-in defaults. No interactive prompts inside initramfs.
- YAML schema must describe disk selection rules, desired filesystem layout, boot partition preferences, filesystem options, mount targets, and logging verbosity. See [docs/SCHEMA.md](docs/SCHEMA.md) and [src/types.rs](src/types.rs:1).
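The precedence chain above (kernel cmdline over CLI over config file over built-in defaults) is, for any single setting, a first-match resolution; a minimal sketch with illustrative names:

```rust
// Sketch: resolve one setting through the precedence chain, highest first.
fn resolve<'a>(
    cmdline: Option<&'a str>, // kernel cmdline value, if set
    cli: Option<&'a str>,     // CLI flag value, if set
    file: Option<&'a str>,    // config-file value, if set
    default: &'a str,         // built-in default
) -> &'a str {
    cmdline.or(cli).or(file).unwrap_or(default)
}

fn main() {
    // CLI wins over the config file when no kernel cmdline value is present.
    assert_eq!(resolve(None, Some("btrfs-raid1"), Some("single"), "single"), "btrfs-raid1");
    // Kernel cmdline beats everything.
    assert_eq!(resolve(Some("ssd-hdd-bcachefs"), Some("btrfs-raid1"), None, "single"),
               "ssd-hdd-bcachefs");
}
```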
State Reporting
- After successful provisioning, emit a JSON state report (path TBD, e.g., `/run/zosstorage/state.json`) capturing:
@@ -59,7 +70,7 @@ Logging
- By default, logs go to stderr; design for optional redirection to a file (path TBD). Avoid using `println!`.
System Integration
- Decide whether to generate `/etc/fstab` entries; if enabled, produce deterministic ordering and documentation. Otherwise, document alternative mount management.
- `/etc/fstab` generation: optional via CLI/config. When enabled, write only the four final subvolume/subdir mount entries (system, etc, modules, vm-meta) with `UUID=` sources in deterministic order. Root mounts under `/var/mounts/{UUID}` are runtime-only and excluded from fstab.
- After provisioning, ensure the initramfs can mount the new filesystems (e.g., call `udevadm settle` if necessary). No external services are invoked.
- No responsibility for updating `vmlinuz.efi`; another subsystem handles kernel updates.
@@ -84,11 +95,17 @@ Documentation & Deliverables
- Include architectural notes describing module boundaries (device discovery, partitioning, filesystem provisioning, config parsing, logging, reporting).
Open Items (call out explicitly)
- Exact sizes and ordering for `bios_boot` partition awaiting confirmation; note assumptions in code and documentation.
- Mount point naming scheme under `/var/cache` (per-UUID vs. config-defined) still to be finalized.
- Filesystem-specific tuning parameters (compression, RAID values, `bcachefs` options) require explicit defaults from stakeholders.
- Path/location for YAML config, kernel cmdline key, JSON report path, and optional log file path need final confirmation.
- Decision whether `/etc/fstab` is generated remains pending.
- BIOS vs UEFI: `bios_boot` partition size fixed at 1MiB and created only on BIOS systems; suppressed under UEFI (`/sys/firmware/efi` present).
- Mount scheme finalized:
- Root mounts for each data filesystem at `/var/mounts/{UUID}` (runtime only).
- Final subvolume/subdir mounts from the primary data filesystem to `/var/cache/{system,etc,modules,vm-meta}`.
- Filesystem-specific tuning parameters (compression, RAID values, `bcachefs` options) remain open for refinement; sensible defaults applied.
- Config paths and keys stabilized:
- Kernel cmdline key: `zosstorage.config=`
- Default config file: `/etc/zosstorage/config.yaml`
- Default report path: `/run/zosstorage/state.json`
- Optional log file: `/run/zosstorage/zosstorage.log`
- `/etc/fstab` generation policy decided: optional flag; writes only the four final subvolume/subdir entries.
Implementation Constraints
- Stick to clear module boundaries. Provide unit tests where possible (e.g., config parsing, device filtering).


@@ -2,7 +2,7 @@
One-shot disk provisioning utility intended for initramfs. It discovers eligible disks, plans a GPT layout based on a chosen topology, creates filesystems, mounts them under a predictable scheme, and emits a machine-readable report. Safe-by-default with a non-destructive preview mode.
Status: first-draft preview capable. Partition apply, mkfs, and mounts are gated until the planning is validated in your environment.
Status: apply mode implemented. Partition application (sgdisk), filesystem creation (vfat/btrfs/bcachefs), mount scheme with subvolumes, and optional fstab writing are available. Preview mode remains supported.
Key modules
- CLI and entrypoint:
@@ -20,21 +20,22 @@ Key modules
- [src/partition/plan.rs](src/partition/plan.rs)
- Filesystem planning/creation and mkfs integration:
- [src/fs/plan.rs](src/fs/plan.rs)
- Mount planning and application (skeleton):
- Mount planning and application:
- [src/mount/ops.rs](src/mount/ops.rs)
Features at a glance
- Topology-driven planning with built-in defaults: Single, DualIndependent, BtrfsRaid1, SsdHddBcachefs
- Non-destructive preview: --show/--report outputs JSON summary (disks, partition plan, filesystems, planned mountpoints)
- Topology auto-selection with built-in defaults; optional kernel cmdline override via `zosstorage.topology=` (see ADR-0002)
- Non-destructive preview: `--show`/`--report` outputs JSON summary (disks, partition plan, filesystems, planned mountpoints)
- Safe discovery: excludes removable media by default (USB sticks) unless explicitly allowed
- Config-optional: the tool runs without any YAML; sensible defaults are always present and may be overridden/merged by config
- No external YAML configuration; defaults-only per ADR-0002 (sane built-ins, topology may be overridden by kernel cmdline)
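The safe-discovery rules above (device-path include/exclude rules, a minimum-size floor, removable media excluded unless explicitly allowed) can be sketched as a predicate. The real selection uses regular expressions; this simplified stand-in matches on path prefixes only, and the `Candidate` struct is illustrative:

```rust
// Simplified sketch of device eligibility; prefix matching stands in for the
// real regex-based include/exclude patterns.
struct Candidate { path: String, size_gib: u64, removable: bool }

fn eligible(c: &Candidate, allow_removable: bool, min_size_gib: u64) -> bool {
    let included = ["/dev/sd", "/dev/nvme", "/dev/vd"]
        .iter().any(|p| c.path.starts_with(p));
    let excluded = ["/dev/ram", "/dev/zram", "/dev/loop", "/dev/fd"]
        .iter().any(|p| c.path.starts_with(p));
    included && !excluded
        && c.size_gib >= min_size_gib
        && (allow_removable || !c.removable)
}

fn main() {
    let disk = Candidate { path: "/dev/sda".into(), size_gib: 100, removable: false };
    let zram = Candidate { path: "/dev/zram0".into(), size_gib: 100, removable: false };
    assert!(eligible(&disk, false, 10));
    assert!(!eligible(&zram, false, 10)); // pseudo-device filtered out
}
```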
Requirements
- Linux with /proc and /sys mounted (initramfs friendly)
- External tools discovered at runtime:
- blkid (for probing UUIDs and signatures)
- sgdisk (for GPT application) — planned
- mkfs.vfat, mkfs.btrfs, bcachefs (for formatting) — invoked by fs/plan when enabled in execution phase
- sgdisk (for GPT application)
- mkfs.vfat, mkfs.btrfs, bcachefs (for formatting)
- udevadm (optional; for settle after partitioning)
- Tracing/logging to stderr by default; optional file at /run/zosstorage/zosstorage.log
Install and build
@@ -44,8 +45,6 @@ Install and build
Binary is target/release/zosstorage.
CLI usage
- Topology selection (config optional):
-t, --topology single|dual-independent|btrfs-raid1|ssd-hdd-bcachefs
- Preview (non-destructive):
--show Print JSON summary to stdout
--report PATH Write JSON summary to a file
@@ -55,19 +54,30 @@ CLI usage
-l, --log-level LEVEL error|warn|info|debug (default: info)
-L, --log-to-file Also write logs to /run/zosstorage/zosstorage.log
- Other:
-c, --config PATH Merge a YAML config file (overrides defaults)
-s, --fstab Enable writing /etc/fstab entries (when mounts are applied)
-a, --apply Perform partitioning, filesystem creation, and mounts (destructive)
-f, --force Present but not implemented (returns an error)
Deprecated (ignored with warning; see ADR-0002)
-t, --topology VALUE Ignored; use kernel cmdline `zosstorage.topology=` instead
-c, --config PATH Ignored; external YAML configuration is not used at runtime
Examples
- Single disk plan with debug logs:
sudo ./zosstorage --show -t single -l debug
- RAID1 btrfs across two disks; print and write summary:
sudo ./zosstorage --show --report /run/zosstorage/plan.json -t btrfs-raid1 -l debug -L
- SSD+HDD bcachefs plan, include removable devices (for lab cases):
sudo ./zosstorage --show -t ssd-hdd-bcachefs --allow-removable -l debug
- Single disk plan with debug logs (defaults to btrfs_single automatically):
sudo ./zosstorage --show -l debug
- Two-disk plan (defaults to dual_independent automatically), write summary:
sudo ./zosstorage --show --report /run/zosstorage/plan.json -l debug -L
- Include removable devices for lab scenarios:
sudo ./zosstorage --show --allow-removable -l debug
- Quiet plan to file:
sudo ./zosstorage --report /run/zosstorage/plan.json -t dual-independent
sudo ./zosstorage --report /run/zosstorage/plan.json
- Apply single-disk plan (DESTRUCTIVE; wipes target disk; defaults select topology automatically):
sudo ./zosstorage --apply
Kernel cmdline override (at boot)
- To force a topology, pass one of:
zosstorage.topology=btrfs-single | bcachefs-single | dual-independent | btrfs-raid1 | ssd-hdd-bcachefs | bcachefs-2copy
- The override affects only topology; all other settings use sane built-in defaults.
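Reading the override amounts to scanning the kernel cmdline for the `zosstorage.topology=` key; a minimal sketch (the real loader reads `/proc/cmdline`, here the line is passed in for testability):

```rust
// Sketch: extract the topology override from a kernel cmdline string.
fn topology_from_cmdline(cmdline: &str) -> Option<String> {
    cmdline
        .split_whitespace()
        .find_map(|tok| tok.strip_prefix("zosstorage.topology=").map(str::to_string))
}

fn main() {
    let line = "console=ttyS0 zosstorage.topology=btrfs-raid1 quiet";
    assert_eq!(topology_from_cmdline(line).as_deref(), Some("btrfs-raid1"));
    assert_eq!(topology_from_cmdline("quiet"), None);
}
```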
Preview JSON shape (examples)
1) Already provisioned (idempotency success):
@@ -107,14 +117,15 @@ Preview JSON shape (examples)
}
],
"filesystems_planned": [
{ "kind": "vfat", "from_roles": ["esp"], "label": "ZOSBOOT", "planned_mountpoint": null },
{ "kind": "btrfs", "from_roles": ["data"], "devices_planned": 2, "label": "ZOSDATA", "planned_mountpoint_template": "/var/cache/{UUID}" }
{ "kind": "vfat", "from_roles": ["esp"], "label": "ZOSBOOT" },
{ "kind": "btrfs", "from_roles": ["data"], "devices_planned": 2, "label": "ZOSDATA" }
],
"mount": {
"scheme": "per_uuid",
"base_dir": "/var/cache",
"fstab_enabled": false,
"target_template": "/var/cache/{UUID}"
"root_mount_template": "/var/mounts/{UUID}",
"final_targets": ["/var/cache/system", "/var/cache/etc", "/var/cache/modules", "/var/cache/vm-meta"]
}
}
@@ -135,8 +146,18 @@ Defaults and policies
btrfs (data) label: ZOSDATA
bcachefs (data/cache) label: ZOSDATA
- Mount scheme:
per-UUID under /var/cache/{UUID}
/etc/fstab generation is disabled by default
- Root mounts (runtime only): each data filesystem is mounted at /var/mounts/{UUID}
- btrfs root options: rw,noatime,subvolid=5
- bcachefs root options: rw,noatime
- Subvolume mounts (from the primary data filesystem only) to final targets:
- /var/cache/system
- /var/cache/etc
- /var/cache/modules
- /var/cache/vm-meta
- Subvolume mount options:
- btrfs: -o rw,noatime,subvol={name}
- bcachefs: -o rw,noatime,X-mount.subdir={name}
- /etc/fstab generation is disabled by default; when enabled, only the four subvolume mounts are written (UUID= sources, deterministic order)
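When enabled, the fstab policy above reduces to emitting exactly four `UUID=`-sourced lines in a deterministic order; a hedged sketch (not the project's writer — options assume btrfs, and sorting by subvolume name stands in for whatever ordering the real code uses):

```rust
// Sketch: deterministic fstab entries for the four final subvolume mounts.
fn fstab_lines(uuid: &str) -> Vec<String> {
    let mut names = ["system", "etc", "modules", "vm-meta"];
    names.sort(); // deterministic ordering
    names.iter()
        .map(|n| format!("UUID={} /var/cache/{} btrfs rw,noatime,subvol={} 0 0", uuid, n, n))
        .collect()
}

fn main() {
    let lines = fstab_lines("1234-abcd");
    assert_eq!(lines.len(), 4);
    // Sorted order puts "etc" first.
    assert!(lines[0].starts_with("UUID=1234-abcd /var/cache/etc "));
}
```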
Tracing and logs
- stderr logging level controlled by -l/--log-level (info by default)

TODO.md

@@ -9,18 +9,30 @@ Conventions:
Core execution
- [ ] Add “apply mode” switch to orchestrator to perform destructive actions after preview validation
- Wire phase execution in [orchestrator.run(&Context)](src/orchestrator/run.rs:101): apply partitions → udev settle → mkfs → mount → maybe write fstab → build/write report
- Introduce a CLI flag (e.g. `--apply`) guarded by clear logs and safety checks (not preview)
- [ ] Partition application (destructive) in [fn apply_partitions(...)](src/partition/plan.rs:287)
- Translate [PartitionPlan](src/partition/plan.rs:80) to sgdisk commands (create GPT, partitions in order with alignment and names)
- Enforce idempotency: skip if table already matches plan (or abort with explicit validation error)
- Ensure unique partition GUIDs; capture partition device paths and GUIDs for results
- Call [util::udev_settle()](src/util/mod.rs:128) after changes; robust error mapping to Error::Tool / Error::Partition
- [-] Add “apply mode” switch to orchestrator to perform destructive actions after preview validation
- [x] Introduce CLI flag --apply guarded by clear logs and safety checks (not preview) [src/cli/args.rs](src/cli/args.rs)
- [x] Wire partition application and udev settle [orchestrator::run()](src/orchestrator/run.rs:1) → [partition::apply_partitions()](src/partition/plan.rs:1)
- [-] Wire mkfs → mount → maybe write fstab → build/write report [src/orchestrator/run.rs](src/orchestrator/run.rs)
- [x] Wire mkfs: plan_filesystems + make_filesystems [src/orchestrator/run.rs](src/orchestrator/run.rs) + [src/fs/plan.rs](src/fs/plan.rs)
- [ ] Wire mounts (plan/apply) [src/mount/ops.rs](src/mount/ops.rs)
- [ ] maybe write fstab [src/mount/ops.rs](src/mount/ops.rs)
- [ ] build/write report [src/report/state.rs](src/report/state.rs)
- [x] Partition application (destructive) in [partition::apply_partitions()](src/partition/plan.rs:1)
- [x] Boot mode detection and BIOS boot policy
- [x] Implement UEFI detection via /sys/firmware/efi: [is_efi_boot()](src/util/mod.rs:151)
- [x] Planner skips BIOS boot partition when UEFI-booted: [partition::plan_partitions()](src/partition/plan.rs:133)
- [ ] Future: revisit bootblock/bootloader specifics for BIOS vs EFI (confirm if any BIOS-targets require bios_boot) [docs/ARCHITECTURE.md](docs/ARCHITECTURE.md)
- [x] Translate [PartitionPlan](src/partition/plan.rs:1) to sgdisk commands (create GPT, partitions in order with alignment and names)
- [x] Enforce idempotency when required via [idempotency::is_empty_disk()](src/idempotency/mod.rs:1); abort on non-empty
- [x] Capture partition GUIDs, names, device paths via sgdisk -i parsing; map to PartitionResult
- [x] Call [util::udev_settle()](src/util/mod.rs:1) after changes; consistent Error::Tool/Error::Partition mapping
- [-] Filesystem creation in [fn make_filesystems(...)](src/fs/plan.rs:182)
- [x] Base mkfs implemented for vfat/btrfs/bcachefs (UUID capture via blkid)
- [ ] Apply btrfs raid profile from config (e.g., `-m raid1 -d raid1`) for [Topology::BtrfsRaid1](src/types.rs:29) and the desired profile in [struct BtrfsOptions](src/types.rs:89)
- [ ] Optionally map compression options for btrfs and bcachefs from config (e.g., `-O compress=zstd:3` or format-equivalent)
- [x] Apply btrfs RAID profile when topology requires it (multi-device): pass -m raid1 -d raid1 in mkfs.btrfs [src/fs/plan.rs](src/fs/plan.rs)
- [x] Force mkfs.btrfs in apply path with -f to handle leftover signatures from partial runs [src/fs/plan.rs](src/fs/plan.rs)
- [ ] Compression/tuning mapping from config
- [ ] btrfs: apply compression as mount options during mounting phase [src/mount/ops.rs](src/mount/ops.rs)
- [ ] bcachefs: map compression/checksum/cache_mode to format flags (deferred) [src/fs/plan.rs](src/fs/plan.rs)
- [ ] Consider verifying UUID consistency across multi-device filesystems and improve error messages
- [ ] Mount planning and application in [mount::ops](src/mount/ops.rs:1)
- [ ] Implement [fn plan_mounts(...)](src/mount/ops.rs:68): map FsResult UUIDs into `/var/cache/{UUID}` using [cfg.mount.base_dir](src/types.rs:136), and synthesize options per FS kind
@@ -44,7 +56,7 @@ CLI, config, defaults
- [x] Built-in sensible defaults (no YAML required) [src/config/loader.rs](src/config/loader.rs:320)
- [x] Overlays from CLI: log level, file logging, fstab, removable policy, topology [src/config/loader.rs](src/config/loader.rs:247)
- [x] Preview flags (`--show`, `--report`) and topology selection (`-t/--topology`) [src/cli/args.rs](src/cli/args.rs:55)
- [ ] Add `--apply` flag to toggle execute mode and keep preview non-destructive by default [src/cli/args.rs](src/cli/args.rs:55)
- [x] Add `--apply` flag to toggle execute mode and keep preview non-destructive by default [src/cli/args.rs](src/cli/args.rs)
- [ ] Consider environment variable overlays [src/config/loader.rs](src/config/loader.rs:39)
- [ ] Consider hidden/dev flags behind features (e.g., `--dry-run-verbose`, `--trace-io`) [src/cli/args.rs](src/cli/args.rs:26)


@@ -1,185 +0,0 @@
# zosstorage example configuration (full surface)
# Copy to /etc/zosstorage/config.yaml on the target system, or pass with:
# - CLI: --config /path/to/your.yaml
# - Kernel cmdline: zosstorage.config=/path/to/your.yaml
# Precedence (highest to lowest):
# kernel cmdline > CLI flags > CLI --config file > /etc/zosstorage/config.yaml > built-in defaults
version: 1
# -----------------------------------------------------------------------------
# Logging
# -----------------------------------------------------------------------------
logging:
# one of: error, warn, info, debug
level: info
# when true, also logs to /run/zosstorage/zosstorage.log in initramfs
to_file: false
# -----------------------------------------------------------------------------
# Device selection rules
# - include_patterns: device paths that are considered
# - exclude_patterns: device paths to filter out
# - allow_removable: future toggle for removable media (kept false by default)
# - min_size_gib: ignore devices smaller than this size
# -----------------------------------------------------------------------------
device_selection:
include_patterns:
- "^/dev/sd\\w+$"
- "^/dev/nvme\\w+n\\d+$"
- "^/dev/vd\\w+$"
exclude_patterns:
- "^/dev/ram\\d+$"
- "^/dev/zram\\d+$"
- "^/dev/loop\\d+$"
- "^/dev/fd\\d+$"
allow_removable: false
min_size_gib: 10
# -----------------------------------------------------------------------------
# Desired topology (choose ONE)
# single : Single eligible disk; btrfs on data
# dual_independent : Two disks; independent btrfs on each
# ssd_hdd_bcachefs : SSD + HDD; bcachefs with SSD as cache/promote and HDD backing
# btrfs_raid1 : Optional mirrored btrfs across two disks (only when explicitly requested)
# -----------------------------------------------------------------------------
topology:
mode: single
# mode: dual_independent
# mode: ssd_hdd_bcachefs
# mode: btrfs_raid1
# -----------------------------------------------------------------------------
# Partitioning (GPT only)
# Reserved GPT names:
# - bios boot : "zosboot" (tiny BIOS boot partition, non-FS)
# - ESP : "zosboot" (FAT32)
# - Data : "zosdata"
# - Cache : "zoscache" (only for ssd_hdd_bcachefs)
# Reserved filesystem labels:
# - ESP : ZOSBOOT
# - Data (all filesystems including bcachefs): ZOSDATA
# -----------------------------------------------------------------------------
partitioning:
# 1 MiB alignment
alignment_mib: 1
# Abort if any target disk is not empty (required for safety)
require_empty_disks: true
bios_boot:
enabled: true
size_mib: 1
gpt_name: zosboot
esp:
size_mib: 512
label: ZOSBOOT
gpt_name: zosboot
data:
gpt_name: zosdata
# Only used in ssd_hdd_bcachefs
cache:
gpt_name: zoscache
# -----------------------------------------------------------------------------
# Filesystem options and tuning
# All data filesystems (btrfs or bcachefs) use label ZOSDATA
# ESP uses label ZOSBOOT
# -----------------------------------------------------------------------------
filesystem:
btrfs:
# Reserved; must be "ZOSDATA"
label: ZOSDATA
# e.g., "zstd:3", "zstd:5"
compression: zstd:3
# "none" | "raid1" (raid1 typically when topology.mode == btrfs_raid1)
raid_profile: none
bcachefs:
# Reserved; must be "ZOSDATA"
label: ZOSDATA
# "promote" (default) or "writeback" if supported by environment
cache_mode: promote
# Compression algorithm, e.g., "zstd"
compression: zstd
# Checksum algorithm, e.g., "crc32c"
checksum: crc32c
vfat:
# Reserved; must be "ZOSBOOT"
label: ZOSBOOT
# -----------------------------------------------------------------------------
# Mount scheme and optional fstab
# Default behavior mounts data filesystems under /var/cache/<UUID>
# -----------------------------------------------------------------------------
mount:
# Base directory for mounts
base_dir: /var/cache
# Scheme: per_uuid | custom (custom reserved for future)
scheme: per_uuid
# When true, zosstorage will generate /etc/fstab entries in deterministic order
fstab_enabled: false
# -----------------------------------------------------------------------------
# Report output
# JSON report is written after successful provisioning
# -----------------------------------------------------------------------------
report:
path: /run/zosstorage/state.json
# -----------------------------------------------------------------------------
# Examples for different topologies (uncomment and set topology.mode accordingly)
# -----------------------------------------------------------------------------
# Example: single disk (uses btrfs on data)
# topology:
# mode: single
# filesystem:
# btrfs:
# label: ZOSDATA
# compression: zstd:3
# raid_profile: none
# Example: dual independent btrfs (two disks)
# topology:
# mode: dual_independent
# filesystem:
# btrfs:
# label: ZOSDATA
# compression: zstd:5
# raid_profile: none
# Example: SSD + HDD with bcachefs
# topology:
# mode: ssd_hdd_bcachefs
# partitioning:
# cache:
# gpt_name: zoscache
# filesystem:
# bcachefs:
# label: ZOSDATA
# cache_mode: promote
# compression: zstd
# checksum: crc32c
# Example: btrfs RAID1 (two disks)
# topology:
# mode: btrfs_raid1
# filesystem:
# btrfs:
# label: ZOSDATA
# compression: zstd:3
# raid_profile: raid1
# -----------------------------------------------------------------------------
# Notes:
# - Never modify devices outside include_patterns or inside exclude_patterns.
# - Idempotency: if expected GPT names and filesystem labels are already present,
# zosstorage exits success without making changes.
# - --force flag is reserved and not implemented; will return an "unimplemented" error.
# - Kernel cmdline data: URLs for zosstorage.config= are currently unimplemented.
# -----------------------------------------------------------------------------


@@ -6,34 +6,34 @@ Purpose
- After approval, these will be created in the src tree in Code mode.
Index
- [src/lib.rs](src/lib.rs)
- [src/errors.rs](src/errors.rs)
- [src/main.rs](src/main.rs)
- [src/cli/args.rs](src/cli/args.rs)
- [src/logging/mod.rs](src/logging/mod.rs)
- [src/types.rs](src/types.rs)
- [src/config/loader.rs](src/config/loader.rs)
- [src/device/discovery.rs](src/device/discovery.rs)
- [src/partition/plan.rs](src/partition/plan.rs)
- [src/fs/plan.rs](src/fs/plan.rs)
- [src/mount/ops.rs](src/mount/ops.rs)
- [src/report/state.rs](src/report/state.rs)
- [src/orchestrator/run.rs](src/orchestrator/run.rs)
- [src/idempotency/mod.rs](src/idempotency/mod.rs)
- [src/util/mod.rs](src/util/mod.rs)
- [src/lib.rs](../src/lib.rs)
- [src/errors.rs](../src/errors.rs)
- [src/main.rs](../src/main.rs)
- [src/cli/args.rs](../src/cli/args.rs)
- [src/logging/mod.rs](../src/logging/mod.rs)
- [src/types.rs](../src/types.rs)
- [src/config/loader.rs](../src/config/loader.rs)
- [src/device/discovery.rs](../src/device/discovery.rs)
- [src/partition/plan.rs](../src/partition/plan.rs)
- [src/fs/plan.rs](../src/fs/plan.rs)
- [src/mount/ops.rs](../src/mount/ops.rs)
- [src/report/state.rs](../src/report/state.rs)
- [src/orchestrator/run.rs](../src/orchestrator/run.rs)
- [src/idempotency/mod.rs](../src/idempotency/mod.rs)
- [src/util/mod.rs](../src/util/mod.rs)
Conventions
- Shared [type Result<T>](src/errors.rs:1) and [enum Error](src/errors.rs:1).
- Shared [type Result<T>](../src/errors.rs:1) and [enum Error](../src/errors.rs:1).
- No stdout prints; use tracing only.
- External tools invoked via [util](src/util/mod.rs) wrappers.
- External tools invoked via [util](../src/util/mod.rs) wrappers.
---
## Crate root
References
- [src/lib.rs](src/lib.rs)
- [type Result<T> = std::result::Result<T, Error>](src/errors.rs:1)
- [src/lib.rs](../src/lib.rs)
- [type Result<T> = std::result::Result<T, Error>](../src/errors.rs:1)
Skeleton (for later implementation in code mode)
```rust
@@ -63,8 +63,8 @@ pub const VERSION: &str = env!("CARGO_PKG_VERSION");
## Errors
References
- [enum Error](src/errors.rs:1)
- [type Result<T>](src/errors.rs:1)
- [enum Error](../src/errors.rs:1)
- [type Result<T>](../src/errors.rs:1)
Skeleton
```rust
@@ -107,8 +107,8 @@ pub type Result<T> = std::result::Result<T, Error>;
## Entrypoint
References
- [fn main()](src/main.rs:1)
- [fn run(ctx: &Context) -> Result<()>](src/orchestrator/run.rs:1)
- [fn main()](../src/main.rs:1)
- [fn run(ctx: &Context) -> Result<()>](../src/orchestrator/run.rs:1)
Skeleton
```rust
@@ -143,8 +143,8 @@ fn real_main() -> Result<()> {
## CLI
References
- [struct Cli](src/cli/args.rs:1)
- [fn from_args() -> Cli](src/cli/args.rs:1)
- [struct Cli](../src/cli/args.rs:1)
- [fn from_args() -> Cli](../src/cli/args.rs:1)
Skeleton
```rust
@@ -170,6 +170,18 @@ pub struct Cli {
#[arg(long = "fstab", default_value_t = false)]
pub fstab: bool,
/// Print preview JSON to stdout (non-destructive)
#[arg(long = "show", default_value_t = false)]
pub show: bool,
/// Write preview JSON to a file (non-destructive)
#[arg(long = "report")]
pub report: Option<String>,
/// Perform partitioning, filesystem creation, and mounts (DESTRUCTIVE)
#[arg(long = "apply", default_value_t = false)]
pub apply: bool,
/// Present but non-functional; returns unimplemented error
#[arg(long = "force")]
pub force: bool,
@@ -186,8 +198,8 @@ pub fn from_args() -> Cli {
## Logging
References
- [struct LogOptions](src/logging/mod.rs:1)
- [fn init_logging(opts: &LogOptions) -> Result<()>](src/logging/mod.rs:1)
- [struct LogOptions](../src/logging/mod.rs:1)
- [fn init_logging(opts: &LogOptions) -> Result<()>](../src/logging/mod.rs:1)
Skeleton
```rust
@@ -218,13 +230,13 @@ pub fn init_logging(opts: &LogOptions) -> Result<()> {
## Configuration types
References
- [struct Config](src/types.rs:1)
- [enum Topology](src/types.rs:1)
- [struct DeviceSelection](src/types.rs:1)
- [struct Partitioning](src/types.rs:1)
- [struct FsOptions](src/types.rs:1)
- [struct MountScheme](src/types.rs:1)
- [struct ReportOptions](src/types.rs:1)
- [struct Config](../src/types.rs:1)
- [enum Topology](../src/types.rs:1)
- [struct DeviceSelection](../src/types.rs:1)
- [struct Partitioning](../src/types.rs:1)
- [struct FsOptions](../src/types.rs:1)
- [struct MountScheme](../src/types.rs:1)
- [struct ReportOptions](../src/types.rs:1)
Skeleton
```rust
@@ -247,8 +259,10 @@ pub struct DeviceSelection {
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum Topology {
Single,
BtrfsSingle,
BcachefsSingle,
DualIndependent,
Bcachefs2Copy,
SsdHddBcachefs,
BtrfsRaid1,
}
@@ -351,8 +365,8 @@ pub struct Config {
## Configuration I/O
References
- [fn load_and_merge(cli: &Cli) -> Result<Config>](src/config/loader.rs:1)
- [fn validate(cfg: &Config) -> Result<()>](src/config/loader.rs:1)
- [fn load_and_merge(cli: &Cli) -> Result<Config>](../src/config/loader.rs:1)
- [fn validate(cfg: &Config) -> Result<()>](../src/config/loader.rs:1)
Skeleton
```rust
@@ -374,10 +388,10 @@ pub fn validate(cfg: &crate::config::types::Config) -> Result<()> {
## Device discovery
References
- [struct Disk](src/device/discovery.rs:1)
- [struct DeviceFilter](src/device/discovery.rs:1)
- [trait DeviceProvider](src/device/discovery.rs:1)
- [fn discover(filter: &DeviceFilter) -> Result<Vec<Disk>>](src/device/discovery.rs:1)
- [struct Disk](../src/device/discovery.rs:1)
- [struct DeviceFilter](../src/device/discovery.rs:1)
- [trait DeviceProvider](../src/device/discovery.rs:1)
- [fn discover(filter: &DeviceFilter) -> Result<Vec<Disk>>](../src/device/discovery.rs:1)
Skeleton
```rust
@@ -418,12 +432,12 @@ pub fn discover(filter: &DeviceFilter) -> Result<Vec<Disk>> {
## Partitioning
References
- [enum PartRole](src/partition/plan.rs:1)
- [struct PartitionSpec](src/partition/plan.rs:1)
- [struct PartitionPlan](src/partition/plan.rs:1)
- [struct PartitionResult](src/partition/plan.rs:1)
- [fn plan_partitions(disks: &[Disk], cfg: &Config) -> Result<PartitionPlan>](src/partition/plan.rs:1)
- [fn apply_partitions(plan: &PartitionPlan) -> Result<Vec<PartitionResult>>](src/partition/plan.rs:1)
- [enum PartRole](../src/partition/plan.rs:1)
- [struct PartitionSpec](../src/partition/plan.rs:1)
- [struct PartitionPlan](../src/partition/plan.rs:1)
- [struct PartitionResult](../src/partition/plan.rs:1)
- [fn plan_partitions(disks: &[Disk], cfg: &Config) -> Result<PartitionPlan>](../src/partition/plan.rs:1)
- [fn apply_partitions(plan: &PartitionPlan) -> Result<Vec<PartitionResult>>](../src/partition/plan.rs:1)
Skeleton
```rust
@@ -485,12 +499,12 @@ pub fn apply_partitions(plan: &PartitionPlan) -> Result<Vec<PartitionResult>> {
## Filesystems
References
- [enum FsKind](src/fs/plan.rs:1)
- [struct FsSpec](src/fs/plan.rs:1)
- [struct FsPlan](src/fs/plan.rs:1)
- [struct FsResult](src/fs/plan.rs:1)
- [fn plan_filesystems(...)](src/fs/plan.rs:1)
- [fn make_filesystems(...)](src/fs/plan.rs:1)
- [enum FsKind](../src/fs/plan.rs:1)
- [struct FsSpec](../src/fs/plan.rs:1)
- [struct FsPlan](../src/fs/plan.rs:1)
- [struct FsResult](../src/fs/plan.rs:1)
- [fn plan_filesystems(...)](../src/fs/plan.rs:1)
- [fn make_filesystems(...)](../src/fs/plan.rs:1)
Skeleton
```rust
@@ -531,8 +545,8 @@ pub fn plan_filesystems(
todo!("map ESP to vfat, data to btrfs or bcachefs according to topology")
}
/// Create the filesystems and return identity info (UUIDs, labels).
pub fn make_filesystems(plan: &FsPlan) -> Result<Vec<FsResult>> {
/// Create the filesystems and return identity info (UUIDs, labels).
pub fn make_filesystems(plan: &FsPlan, cfg: &Config) -> Result<Vec<FsResult>> {
todo!("invoke mkfs tools with configured options via util::run_cmd")
}
```
@@ -542,11 +556,11 @@ pub fn make_filesystems(plan: &FsPlan) -> Result<Vec<FsResult>> {
## Mounting
References
- [struct MountPlan](src/mount/ops.rs:1)
- [struct MountResult](src/mount/ops.rs:1)
- [fn plan_mounts(...)](src/mount/ops.rs:1)
- [fn apply_mounts(...)](src/mount/ops.rs:1)
- [fn maybe_write_fstab(...)](src/mount/ops.rs:1)
- [struct MountPlan](../src/mount/ops.rs:1)
- [struct MountResult](../src/mount/ops.rs:1)
- [fn plan_mounts(...)](../src/mount/ops.rs:1)
- [fn apply_mounts(...)](../src/mount/ops.rs:1)
- [fn maybe_write_fstab(...)](../src/mount/ops.rs:1)
Skeleton
```rust
@@ -565,9 +579,13 @@ pub struct MountResult {
pub options: String,
}
/// Build mount plan under /var/cache/<UUID> by default.
/// Build mount plan:
/// - Root-mount all data filesystems under `/var/mounts/{UUID}` (runtime only)
/// - Ensure/create subvolumes on the primary data filesystem: system, etc, modules, vm-meta
/// - Plan final mounts to `/var/cache/{system,etc,modules,vm-meta}` using
///   `subvol=` for btrfs and `X-mount.subdir=` for bcachefs.
pub fn plan_mounts(fs_results: &[FsResult], cfg: &Config) -> Result<MountPlan> {
todo!("create per-UUID directories and mount mapping")
todo!("root mounts under /var/mounts/{UUID}; final subvol/subdir mounts to /var/cache/{system,etc,modules,vm-meta}")
}
/// Apply mounts using syscalls (nix), ensuring directories exist.
@@ -575,9 +593,12 @@ pub fn apply_mounts(plan: &MountPlan) -> Result<Vec<MountResult>> {
todo!("perform mount syscalls and return results")
}
/// Optionally generate /etc/fstab entries in deterministic order.
/// Optionally generate /etc/fstab entries for final subvolume/subdir mounts only.
/// - Write exactly four entries: system, etc, modules, vm-meta
/// - Use UUID= sources; deterministic order by target path
/// - Exclude runtime root mounts under `/var/mounts/{UUID}`
pub fn maybe_write_fstab(mounts: &[MountResult], cfg: &Config) -> Result<()> {
todo!("when enabled, write fstab entries")
todo!("when enabled, write only the four final subvolume/subdir entries with UUID= sources")
}
```
@@ -586,10 +607,10 @@ pub fn maybe_write_fstab(mounts: &[MountResult], cfg: &Config) -> Result<()> {
## Reporting
References
- [const REPORT_VERSION: &str](src/report/state.rs:1)
- [struct StateReport](src/report/state.rs:1)
- [fn build_report(...)](src/report/state.rs:1)
- [fn write_report(...)](src/report/state.rs:1)
- [const REPORT_VERSION: &str](../src/report/state.rs:1)
- [struct StateReport](../src/report/state.rs:1)
- [fn build_report(...)](../src/report/state.rs:1)
- [fn write_report(...)](../src/report/state.rs:1)
Skeleton
```rust
@@ -632,8 +653,8 @@ pub fn write_report(report: &StateReport, path: &str) -> Result<()> {
## Orchestrator
References
- [struct Context](src/orchestrator/run.rs:1)
- [fn run(ctx: &Context) -> Result<()>](src/orchestrator/run.rs:1)
- [struct Context](../src/orchestrator/run.rs:1)
- [fn run(ctx: &Context) -> Result<()>](../src/orchestrator/run.rs:1)
Skeleton
```rust
@@ -662,8 +683,8 @@ pub fn run(ctx: &Context) -> Result<()> {
## Idempotency
References
- [fn detect_existing_state() -> Result<Option<StateReport>>](src/idempotency/mod.rs:1)
- [fn is_empty_disk(disk: &Disk) -> Result<bool>](src/idempotency/mod.rs:1)
- [fn detect_existing_state() -> Result<Option<StateReport>>](../src/idempotency/mod.rs:1)
- [fn is_empty_disk(disk: &Disk) -> Result<bool>](../src/idempotency/mod.rs:1)
Skeleton
```rust
@@ -685,11 +706,11 @@ pub fn is_empty_disk(disk: &Disk) -> Result<bool> {
## Utilities
References
- [struct CmdOutput](src/util/mod.rs:1)
- [fn which_tool(name: &str) -> Result<Option<String>>](src/util/mod.rs:1)
- [fn run_cmd(args: &[&str]) -> Result<()>](src/util/mod.rs:1)
- [fn run_cmd_capture(args: &[&str]) -> Result<CmdOutput>](src/util/mod.rs:1)
- [fn udev_settle(timeout_ms: u64) -> Result<()>](src/util/mod.rs:1)
- [struct CmdOutput](../src/util/mod.rs:1)
- [fn which_tool(name: &str) -> Result<Option<String>>](../src/util/mod.rs:1)
- [fn run_cmd(args: &[&str]) -> Result<()>](../src/util/mod.rs:1)
- [fn run_cmd_capture(args: &[&str]) -> Result<CmdOutput>](../src/util/mod.rs:1)
- [fn udev_settle(timeout_ms: u64) -> Result<()>](../src/util/mod.rs:1)
Skeleton
```rust
@@ -722,6 +743,11 @@ pub fn run_cmd_capture(args: &[&str]) -> Result<CmdOutput> {
pub fn udev_settle(timeout_ms: u64) -> Result<()> {
todo!("invoke udevadm settle when present")
}
/// Detect UEFI environment by checking /sys/firmware/efi; used to suppress BIOS boot partition on UEFI.
pub fn is_efi_boot() -> bool {
todo!("return Path::new(\"/sys/firmware/efi\").exists()")
}
```
---
@@ -735,24 +761,24 @@ Approval gate
- Add initial tests scaffolding and example configs.
References summary
- [fn main()](src/main.rs:1)
- [fn from_args()](src/cli/args.rs:1)
- [fn init_logging(opts: &LogOptions)](src/logging/mod.rs:1)
- [fn load_and_merge(cli: &Cli)](src/config/loader.rs:1)
- [fn validate(cfg: &Config)](src/config/loader.rs:1)
- [fn discover(filter: &DeviceFilter)](src/device/discovery.rs:1)
- [fn plan_partitions(disks: &[Disk], cfg: &Config)](src/partition/plan.rs:1)
- [fn apply_partitions(plan: &PartitionPlan)](src/partition/plan.rs:1)
- [fn plan_filesystems(parts: &[PartitionResult], cfg: &Config)](src/fs/plan.rs:1)
- [fn make_filesystems(plan: &FsPlan)](src/fs/plan.rs:1)
- [fn plan_mounts(fs_results: &[FsResult], cfg: &Config)](src/mount/ops.rs:1)
- [fn apply_mounts(plan: &MountPlan)](src/mount/ops.rs:1)
- [fn maybe_write_fstab(mounts: &[MountResult], cfg: &Config)](src/mount/ops.rs:1)
- [fn build_report(...)](src/report/state.rs:1)
- [fn write_report(report: &StateReport)](src/report/state.rs:1)
- [fn detect_existing_state()](src/idempotency/mod.rs:1)
- [fn is_empty_disk(disk: &Disk)](src/idempotency/mod.rs:1)
- [fn which_tool(name: &str)](src/util/mod.rs:1)
- [fn run_cmd(args: &[&str])](src/util/mod.rs:1)
- [fn run_cmd_capture(args: &[&str])](src/util/mod.rs:1)
- [fn udev_settle(timeout_ms: u64)](src/util/mod.rs:1)
- [fn main()](../src/main.rs:1)
- [fn from_args()](../src/cli/args.rs:1)
- [fn init_logging(opts: &LogOptions)](../src/logging/mod.rs:1)
- [fn load_and_merge(cli: &Cli)](../src/config/loader.rs:1)
- [fn validate(cfg: &Config)](../src/config/loader.rs:1)
- [fn discover(filter: &DeviceFilter)](../src/device/discovery.rs:1)
- [fn plan_partitions(disks: &[Disk], cfg: &Config)](../src/partition/plan.rs:1)
- [fn apply_partitions(plan: &PartitionPlan)](../src/partition/plan.rs:1)
- [fn plan_filesystems(parts: &[PartitionResult], cfg: &Config)](../src/fs/plan.rs:1)
- [fn make_filesystems(plan: &FsPlan)](../src/fs/plan.rs:1)
- [fn plan_mounts(fs_results: &[FsResult], cfg: &Config)](../src/mount/ops.rs:1)
- [fn apply_mounts(plan: &MountPlan)](../src/mount/ops.rs:1)
- [fn maybe_write_fstab(mounts: &[MountResult], cfg: &Config)](../src/mount/ops.rs:1)
- [fn build_report(...)](../src/report/state.rs:1)
- [fn write_report(report: &StateReport)](../src/report/state.rs:1)
- [fn detect_existing_state()](../src/idempotency/mod.rs:1)
- [fn is_empty_disk(disk: &Disk)](../src/idempotency/mod.rs:1)
- [fn which_tool(name: &str)](../src/util/mod.rs:1)
- [fn run_cmd(args: &[&str])](../src/util/mod.rs:1)
- [fn run_cmd_capture(args: &[&str])](../src/util/mod.rs:1)
- [fn udev_settle(timeout_ms: u64)](../src/util/mod.rs:1)

@@ -8,40 +8,40 @@ Conventions
- External tooling calls are mediated via utility wrappers.
Module index
- [src/main.rs](src/main.rs)
- [src/lib.rs](src/lib.rs)
- [src/errors.rs](src/errors.rs)
- [src/cli/args.rs](src/cli/args.rs)
- [src/logging/mod.rs](src/logging/mod.rs)
- [src/types.rs](src/types.rs)
- [src/config/loader.rs](src/config/loader.rs)
- [src/device/discovery.rs](src/device/discovery.rs)
- [src/partition/plan.rs](src/partition/plan.rs)
- [src/fs/plan.rs](src/fs/plan.rs)
- [src/mount/ops.rs](src/mount/ops.rs)
- [src/report/state.rs](src/report/state.rs)
- [src/orchestrator/run.rs](src/orchestrator/run.rs)
- [src/idempotency/mod.rs](src/idempotency/mod.rs)
- [src/util/mod.rs](src/util/mod.rs)
- [src/main.rs](../src/main.rs)
- [src/lib.rs](../src/lib.rs)
- [src/errors.rs](../src/errors.rs)
- [src/cli/args.rs](../src/cli/args.rs)
- [src/logging/mod.rs](../src/logging/mod.rs)
- [src/types.rs](../src/types.rs)
- [src/config/loader.rs](../src/config/loader.rs)
- [src/device/discovery.rs](../src/device/discovery.rs)
- [src/partition/plan.rs](../src/partition/plan.rs)
- [src/fs/plan.rs](../src/fs/plan.rs)
- [src/mount/ops.rs](../src/mount/ops.rs)
- [src/report/state.rs](../src/report/state.rs)
- [src/orchestrator/run.rs](../src/orchestrator/run.rs)
- [src/idempotency/mod.rs](../src/idempotency/mod.rs)
- [src/util/mod.rs](../src/util/mod.rs)
Common errors and result
- [enum Error](src/errors.rs:1)
- [enum Error](../src/errors.rs:1)
- Top-level error type covering parse/validation errors, device discovery errors, partitioning failures, filesystem mkfs errors, mount errors, report write errors, and external tool invocation failures with stderr capture.
- [type Result<T> = std::result::Result<T, Error>](src/errors.rs:1)
- [type Result<T> = std::result::Result<T, Error>](../src/errors.rs:1)
- Shared result alias across modules.
Crate root
- [src/lib.rs](src/lib.rs)
- [src/lib.rs](../src/lib.rs)
- Exposes crate version constants, the prelude, and re-exports common types for consumers of the library (tests/integration). No heavy logic.
Entrypoint
- [fn main()](src/main.rs:1)
- [fn main()](../src/main.rs:1)
- Initializes logging based on CLI defaults, parses CLI flags and kernel cmdline, loads and validates configuration, and invokes the orchestrator run sequence. Avoids stdout; logs via tracing only.
Orchestrator
- [struct Context](src/orchestrator/run.rs:1)
- [struct Context](../src/orchestrator/run.rs:1)
- Aggregates resolved configuration, logging options, and environment flags suited for initramfs execution.
- [fn run(ctx: &Context) -> Result<()>](src/orchestrator/run.rs:1)
- [fn run(ctx: &Context) -> Result<()>](../src/orchestrator/run.rs:1)
- High-level one-shot flow:
- Idempotency detection
- Device discovery
@@ -52,131 +52,141 @@ Orchestrator
- Aborts the entire run on any validation or execution failure. Returns Ok on successful no-op if already provisioned.
CLI
- [struct Cli](src/cli/args.rs:1)
- [struct Cli](../src/cli/args.rs:1)
- Mirrors kernel cmdline semantics with flags:
- --config PATH
- --log-level LEVEL
- --log-to-file
- --fstab
- --show
- --report PATH
- --apply
- --force (present, returns unimplemented error)
- [fn from_args() -> Cli](src/cli/args.rs:1)
- [fn from_args() -> Cli](../src/cli/args.rs:1)
- Parses argv without side effects; suitable for initramfs.
Logging
- [struct LogOptions](src/logging/mod.rs:1)
- [struct LogOptions](../src/logging/mod.rs:1)
- Holds level and optional file target (/run/zosstorage/zosstorage.log).
- [fn init_logging(opts: &LogOptions) -> Result<()>](src/logging/mod.rs:1)
- [fn init_logging(opts: &LogOptions) -> Result<()>](../src/logging/mod.rs:1)
- Configures tracing-subscriber for stderr by default and optional file layer when enabled. Must be idempotent.
Configuration types
- [struct Config](src/types.rs:1)
- [struct Config](../src/types.rs:1)
- The validated configuration used by the orchestrator, containing logging, device selection rules, topology, partitioning, filesystem options, mount scheme, and report path.
- [enum Topology](src/types.rs:1)
- Values: single, dual_independent, ssd_hdd_bcachefs, btrfs_raid1 (opt-in).
- [struct DeviceSelection](src/types.rs:1)
- [enum Topology](../src/types.rs:1)
- Values: btrfs_single, bcachefs_single, dual_independent, bcachefs-2copy, ssd_hdd_bcachefs, btrfs_raid1 (opt-in).
- [struct DeviceSelection](../src/types.rs:1)
- Include and exclude regex patterns, minimum size, removable policy.
- [struct Partitioning](src/types.rs:1)
- [struct Partitioning](../src/types.rs:1)
- Alignment, emptiness requirement, bios_boot, esp, data, cache GPT names and sizes where applicable.
- [struct BtrfsOptions](src/types.rs:1)
- [struct BtrfsOptions](../src/types.rs:1)
- Compression string and raid profile (none or raid1).
- [struct BcachefsOptions](src/types.rs:1)
- [struct BcachefsOptions](../src/types.rs:1)
- Cache mode (promote or writeback), compression, checksum.
- [struct VfatOptions](src/types.rs:1)
- [struct VfatOptions](../src/types.rs:1)
- Reserved for ESP mkfs options; includes label ZOSBOOT.
- [struct FsOptions](src/types.rs:1)
- [struct FsOptions](../src/types.rs:1)
- Aggregates BtrfsOptions, BcachefsOptions, VfatOptions and shared defaults such as ZOSDATA label.
- [enum MountSchemeKind](src/types.rs:1)
- [enum MountSchemeKind](../src/types.rs:1)
- Values: per_uuid, custom (future).
- [struct MountScheme](src/types.rs:1)
- [struct MountScheme](../src/types.rs:1)
- Base directory (/var/cache), scheme kind, fstab enabled flag.
- [struct ReportOptions](src/types.rs:1)
- [struct ReportOptions](../src/types.rs:1)
- Output path (/run/zosstorage/state.json).
Configuration IO
- [fn load_and_merge(cli: &Cli) -> Result<Config>](src/config/loader.rs:1)
- [fn load_and_merge(cli: &Cli) -> Result<Config>](../src/config/loader.rs:1)
- Loads built-in defaults, optionally merges on-disk config, overlays CLI flags, and finally overlays kernel cmdline via zosstorage.config=. Validates the YAML against types and constraints.
- [fn validate(cfg: &Config) -> Result<()>](src/config/loader.rs:1)
- [fn validate(cfg: &Config) -> Result<()>](../src/config/loader.rs:1)
- Ensures structural and semantic validity (e.g., disk selection rules not empty, sizes non-zero, supported topology combinations).
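The kernel-cmdline overlay described above can be sketched as a small token scan. A minimal sketch assuming only std; `config_from_cmdline` is a hypothetical helper name, not the loader's actual API:

```rust
/// Extract the value of `zosstorage.config=` from a kernel cmdline string.
/// Illustrates the final overlay step of load_and_merge; the real loader
/// also merges defaults, the on-disk file, and CLI flags before this.
fn config_from_cmdline(cmdline: &str) -> Option<String> {
    cmdline
        .split_whitespace()
        .find_map(|tok| tok.strip_prefix("zosstorage.config="))
        .map(|v| v.to_string())
}

fn main() {
    let cmdline = "BOOT_IMAGE=/vmlinuz root=/dev/sda2 \
                   zosstorage.config=/etc/zosstorage/config.yaml quiet";
    println!("{:?}", config_from_cmdline(cmdline));
}
```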
Device discovery
- [struct Disk](src/device/discovery.rs:1)
- [struct Disk](../src/device/discovery.rs:1)
- Represents an eligible block device with path, size, rotational flag, and identifiers (serial, model if available).
- [struct DeviceFilter](src/device/discovery.rs:1)
- [struct DeviceFilter](../src/device/discovery.rs:1)
- Derived from DeviceSelection; compiled regexes and size thresholds for efficient filtering.
- [trait DeviceProvider](src/device/discovery.rs:1)
- [trait DeviceProvider](../src/device/discovery.rs:1)
- Abstraction for listing /dev and probing properties, enabling test doubles.
- [fn discover(filter: &DeviceFilter) -> Result<Vec<Disk>>](src/device/discovery.rs:1)
- [fn discover(filter: &DeviceFilter) -> Result<Vec<Disk>>](../src/device/discovery.rs:1)
- Returns eligible disks or a well-defined error if none are found.
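The filtering contract can be sketched with substring patterns standing in for the compiled regexes (an assumption for brevity; the real `DeviceFilter` uses regex include/exclude plus size and removable policy):

```rust
/// Simplified stand-in for DeviceFilter: substring include/exclude,
/// minimum size, and removable policy. Illustration only.
struct Disk { path: String, size_bytes: u64, removable: bool }

struct Filter {
    include: Vec<String>,   // keep if path matches any; empty means keep all
    exclude: Vec<String>,   // drop if path matches any
    min_size_bytes: u64,
    allow_removable: bool,
}

fn eligible(d: &Disk, f: &Filter) -> bool {
    let included = f.include.is_empty() || f.include.iter().any(|p| d.path.contains(p.as_str()));
    let excluded = f.exclude.iter().any(|p| d.path.contains(p.as_str()));
    included && !excluded
        && d.size_bytes >= f.min_size_bytes
        && (f.allow_removable || !d.removable)
}

fn main() {
    let f = Filter {
        include: vec![],
        exclude: vec!["loop".into()],
        min_size_bytes: 10 * 1024 * 1024 * 1024,
        allow_removable: false,
    };
    let d = Disk { path: "/dev/nvme0n1".into(), size_bytes: 512 << 30, removable: false };
    println!("eligible: {}", eligible(&d, &f));
}
```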
Partitioning
- [enum PartRole](src/partition/plan.rs:1)
- [enum PartRole](../src/partition/plan.rs:1)
- Roles: BiosBoot, Esp, Data, Cache.
- [struct PartitionSpec](src/partition/plan.rs:1)
- [struct PartitionSpec](../src/partition/plan.rs:1)
- Declarative spec for a single partition: role, optional size_mib, gpt_name (zosboot, zosdata, zoscache), and reserved filesystem label when role is Esp (ZOSBOOT).
- [struct DiskPlan](src/partition/plan.rs:1)
- [struct DiskPlan](../src/partition/plan.rs:1)
- The planned set of PartitionSpec instances for a single Disk in the chosen topology.
- [struct PartitionPlan](src/partition/plan.rs:1)
- [struct PartitionPlan](../src/partition/plan.rs:1)
- Combined plan across all target disks, including alignment rules and safety checks.
- [struct PartitionResult](src/partition/plan.rs:1)
- [struct PartitionResult](../src/partition/plan.rs:1)
- Result of applying a DiskPlan: device path of each created partition, role, partition GUID, and gpt_name.
- [fn plan_partitions(disks: &[Disk], cfg: &Config) -> Result<PartitionPlan>](src/partition/plan.rs:1)
- [fn plan_partitions(disks: &[Disk], cfg: &Config) -> Result<PartitionPlan>](../src/partition/plan.rs:1)
- Produces a GPT-only plan with 1 MiB alignment, bios boot first (1 MiB), ESP 512 MiB, data remainder, and zoscache on SSD for ssd_hdd_bcachefs.
- [fn apply_partitions(plan: &PartitionPlan) -> Result<Vec<PartitionResult>>](src/partition/plan.rs:1)
- [fn apply_partitions(plan: &PartitionPlan) -> Result<Vec<PartitionResult>>](../src/partition/plan.rs:1)
- Executes the plan via sgdisk and related utilities. Aborts if target disks are not empty or if signatures are detected.
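The layout arithmetic behind plan_partitions can be sketched as follows, under the documented sizes (1 MiB alignment, 1 MiB BIOS boot suppressed on UEFI, 512 MiB ESP, data remainder). `plan_disk_mib` is an illustrative helper, not the real planner:

```rust
/// Plan (gpt_name, start_mib, size_mib) triples for one disk, assuming
/// 1 MiB alignment and reserving the last MiB for the backup GPT.
fn plan_disk_mib(disk_size_mib: u64, uefi: bool) -> Vec<(&'static str, u64, u64)> {
    let mut out = Vec::new();
    let mut cursor = 1; // first aligned MiB after the GPT header
    if !uefi {
        out.push(("bios_boot", cursor, 1));
        cursor += 1;
    }
    out.push(("esp", cursor, 512));
    cursor += 512;
    out.push(("data", cursor, disk_size_mib.saturating_sub(cursor + 1)));
    out
}

fn main() {
    for p in plan_disk_mib(10240, false) {
        println!("{:?}", p);
    }
}
```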
Filesystems
- [enum FsKind](src/fs/plan.rs:1)
- [enum FsKind](../src/fs/plan.rs:1)
- Values: Vfat, Btrfs, Bcachefs.
- [struct FsSpec](src/fs/plan.rs:1)
- [struct FsSpec](../src/fs/plan.rs:1)
- Maps PartitionResult to desired filesystem kind and label (ZOSBOOT for ESP; ZOSDATA for all data filesystems including bcachefs).
- [struct FsPlan](src/fs/plan.rs:1)
- [struct FsPlan](../src/fs/plan.rs:1)
- Plan of mkfs operations across all partitions and devices given the topology.
- [struct FsResult](src/fs/plan.rs:1)
- [struct FsResult](../src/fs/plan.rs:1)
- Output of mkfs: device path(s), fs uuid, label, and kind.
- [fn plan_filesystems(disks: &[Disk], parts: &[PartitionResult], cfg: &Config) -> Result<FsPlan>](src/fs/plan.rs:1)
- [fn plan_filesystems(disks: &[Disk], parts: &[PartitionResult], cfg: &Config) -> Result<FsPlan>](../src/fs/plan.rs:1)
- Determines which partitions receive vfat, btrfs, or bcachefs, and aggregates tuning options.
- [fn make_filesystems(plan: &FsPlan) -> Result<Vec<FsResult>>](src/fs/plan.rs:1)
- [fn make_filesystems(plan: &FsPlan, cfg: &Config) -> Result<Vec<FsResult>>](../src/fs/plan.rs:1)
- Invokes mkfs.vfat, mkfs.btrfs, mkfs.bcachefs accordingly via utility wrappers and returns filesystem identities.
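The role-to-filesystem mapping can be sketched over plain strings (the real code uses the PartRole and Topology enums; treating the cache partition as a bcachefs pool member is an assumption from the zoscache description):

```rust
#[derive(Debug, PartialEq)]
enum FsKind { Vfat, Btrfs, Bcachefs }

/// Map a partition role to (filesystem kind, label) per the documented rules:
/// ESP is vfat/ZOSBOOT; every data filesystem is labeled ZOSDATA regardless
/// of backend kind.
fn fs_for(role: &str, topology: &str) -> Option<(FsKind, &'static str)> {
    match role {
        "esp" => Some((FsKind::Vfat, "ZOSBOOT")),
        "data" => match topology {
            "bcachefs_single" | "bcachefs-2copy" | "ssd_hdd_bcachefs" => {
                Some((FsKind::Bcachefs, "ZOSDATA"))
            }
            _ => Some((FsKind::Btrfs, "ZOSDATA")),
        },
        // zoscache partition joins the bcachefs filesystem (assumption)
        "cache" => Some((FsKind::Bcachefs, "ZOSDATA")),
        _ => None,
    }
}

fn main() {
    println!("{:?}", fs_for("data", "ssd_hdd_bcachefs"));
}
```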
Mounting
- [struct MountPlan](src/mount/ops.rs:1)
- Derived from FsResult entries: creates target directories under /var/cache/<UUID> and the mounts required for the current boot.
- [struct MountResult](src/mount/ops.rs:1)
- [struct MountPlan](../src/mount/ops.rs:1)
- Derived from FsResult entries. Plans:
- Root mounts for all data filesystems under `/var/mounts/{UUID}` (runtime only).
- btrfs root options: `rw,noatime,subvolid=5`
- bcachefs root options: `rw,noatime`
- Final subvolume/subdir mounts (from the primary data filesystem) to:
- `/var/cache/system`, `/var/cache/etc`, `/var/cache/modules`, `/var/cache/vm-meta`
- [struct MountResult](../src/mount/ops.rs:1)
- Actual mount operations performed (source, target, fstype, options).
- [fn plan_mounts(fs_results: &[FsResult], cfg: &Config) -> Result<MountPlan>](src/mount/ops.rs:1)
- Translates filesystem identities to mount targets and options.
- [fn apply_mounts(plan: &MountPlan) -> Result<Vec<MountResult>>](src/mount/ops.rs:1)
- Performs mounts using syscalls (nix crate) with minimal dependencies. Ensures directories exist.
- [fn maybe_write_fstab(mounts: &[MountResult], cfg: &Config) -> Result<()>](src/mount/ops.rs:1)
- When enabled, generates /etc/fstab entries in deterministic order. Disabled by default.
- [fn plan_mounts(fs_results: &[FsResult], cfg: &Config) -> Result<MountPlan>](../src/mount/ops.rs:1)
- Translates filesystem identities to mount targets and options, including `subvol=` for btrfs and `X-mount.subdir=` for bcachefs.
- [fn apply_mounts(plan: &MountPlan) -> Result<Vec<MountResult>>](../src/mount/ops.rs:1)
- Performs mounts using syscalls (nix crate). Ensures directories exist and creates subvolumes/subdirs if missing.
- [fn maybe_write_fstab(mounts: &[MountResult], cfg: &Config) -> Result<()>](../src/mount/ops.rs:1)
- When enabled, writes only the four subvolume/subdir mount entries to `/etc/fstab` in deterministic order using `UUID=` sources. Root mounts under `/var/mounts` are excluded.
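The option strings and the four-entry fstab contract above can be sketched like this (a sketch only; exact option ordering and any extra mount flags in the real planner may differ):

```rust
/// Root-mount options per the documented scheme.
fn root_opts(kind: &str) -> &'static str {
    match kind {
        "btrfs" => "rw,noatime,subvolid=5",
        _ => "rw,noatime", // bcachefs root
    }
}

/// Final mount options: subvol= for btrfs, X-mount.subdir= for bcachefs.
fn final_opts(kind: &str, name: &str) -> String {
    match kind {
        "btrfs" => format!("rw,noatime,subvol={name}"),
        _ => format!("rw,noatime,X-mount.subdir={name}"),
    }
}

/// Exactly four fstab lines with UUID= sources, deterministic order by
/// target path; runtime roots under /var/mounts are excluded by construction.
fn fstab_lines(uuid: &str, kind: &str) -> Vec<String> {
    let names = ["system", "etc", "modules", "vm-meta"];
    let mut lines: Vec<String> = names
        .iter()
        .map(|n| format!("UUID={uuid} /var/cache/{n} {kind} {} 0 0", final_opts(kind, n)))
        .collect();
    lines.sort(); // same UUID prefix, so this orders by target path
    lines
}

fn main() {
    for l in fstab_lines("1234-ABCD", "btrfs") {
        println!("{l}");
    }
}
```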
Reporting
- [const REPORT_VERSION: &str](src/report/state.rs:1)
- [const REPORT_VERSION: &str](../src/report/state.rs:1)
- Version string for the JSON payload schema.
- [struct StateReport](src/report/state.rs:1)
- [struct StateReport](../src/report/state.rs:1)
- Machine-readable state describing discovered disks, created partitions, filesystems, labels, mountpoints, status, and timestamp.
- [fn build_report(disks: &[Disk], parts: &[PartitionResult], fs: &[FsResult], mounts: &[MountResult], status: &str) -> StateReport](src/report/state.rs:1)
- [fn build_report(disks: &[Disk], parts: &[PartitionResult], fs: &[FsResult], mounts: &[MountResult], status: &str) -> StateReport](../src/report/state.rs:1)
- Constructs a StateReport matching REPORT_VERSION.
- [fn write_report(report: &StateReport) -> Result<()>](src/report/state.rs:1)
- [fn write_report(report: &StateReport) -> Result<()>](../src/report/state.rs:1)
- Writes JSON to /run/zosstorage/state.json (configurable).
Idempotency
- [fn detect_existing_state() -> Result<Option<StateReport>>](src/idempotency/mod.rs:1)
- [fn detect_existing_state() -> Result<Option<StateReport>>](../src/idempotency/mod.rs:1)
- Probes for expected GPT names (zosboot, zosdata, zoscache where applicable) and filesystem labels (ZOSBOOT, ZOSDATA). If present and consistent, returns a StateReport; orchestrator exits success without changes.
- [fn is_empty_disk(disk: &Disk) -> Result<bool>](src/idempotency/mod.rs:1)
- [fn is_empty_disk(disk: &Disk) -> Result<bool>](../src/idempotency/mod.rs:1)
- Determines disk emptiness: absence of partitions and known filesystem signatures.
Utilities
- [struct CmdOutput](src/util/mod.rs:1)
- [struct CmdOutput](../src/util/mod.rs:1)
- Captures status, stdout, stderr from external tool invocations.
- [fn which_tool(name: &str) -> Result<Option<String>>](src/util/mod.rs:1)
- [fn which_tool(name: &str) -> Result<Option<String>>](../src/util/mod.rs:1)
- Locates a required system utility in PATH, returning its absolute path if available.
- [fn run_cmd(args: &[&str]) -> Result<()>](src/util/mod.rs:1)
- [fn run_cmd(args: &[&str]) -> Result<()>](../src/util/mod.rs:1)
- Executes a command (args[0] is binary) and returns Ok when exit status is zero; logs stderr on failure.
- [fn run_cmd_capture(args: &[&str]) -> Result<CmdOutput>](src/util/mod.rs:1)
- [fn run_cmd_capture(args: &[&str]) -> Result<CmdOutput>](../src/util/mod.rs:1)
- Executes a command and returns captured output for parsing (e.g., blkid).
- [fn udev_settle(timeout_ms: u64) -> Result<()>](src/util/mod.rs:1)
- [fn udev_settle(timeout_ms: u64) -> Result<()>](../src/util/mod.rs:1)
- Calls udevadm settle with a timeout when available; otherwise no-ops with a warning.
- [fn is_efi_boot() -> bool](../src/util/mod.rs:1)
- Detects UEFI environment by checking `/sys/firmware/efi`; used to suppress BIOS boot partition creation on UEFI systems.
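The shell-out and lookup helpers can be sketched with std only. `which_in` takes an explicit PATH-style string for illustration (the real `which_tool` presumably consults the PATH environment variable):

```rust
use std::process::Command;

struct CmdOutput { status: i32, stdout: String, stderr: String }

/// Sketch of run_cmd_capture: args[0] is the binary, the rest are arguments;
/// captures exit status, stdout, and stderr for later parsing.
fn run_cmd_capture(args: &[&str]) -> std::io::Result<CmdOutput> {
    let out = Command::new(args[0]).args(&args[1..]).output()?;
    Ok(CmdOutput {
        status: out.status.code().unwrap_or(-1),
        stdout: String::from_utf8_lossy(&out.stdout).into_owned(),
        stderr: String::from_utf8_lossy(&out.stderr).into_owned(),
    })
}

/// Sketch of which_tool over a colon-separated directory list.
fn which_in(name: &str, path: &str) -> Option<String> {
    path.split(':')
        .map(|dir| format!("{dir}/{name}"))
        .find(|cand| std::path::Path::new(cand).is_file())
}

fn main() {
    println!("{:?}", which_in("sh", "/bin:/usr/bin"));
}
```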
Behavioral notes and contracts
- Safety and idempotency:
@@ -188,10 +198,13 @@ Behavioral notes and contracts
- Data filesystems use label ZOSDATA regardless of backend kind.
- Cache partitions in bcachefs topology use GPT name zoscache.
- Topology-specific behavior:
- single: one data filesystem (btrfs) on the sole disk.
- dual_independent: two separate btrfs filesystems, one per disk.
- btrfs_single: one data filesystem (btrfs) on the sole disk.
- bcachefs_single: one data filesystem (bcachefs) on the sole disk.
- dual_independent: independent btrfs filesystems on each eligible disk (one or more).
- bcachefs-2copy: multi-device bcachefs across two or more data partitions with `--replicas=2` (data and metadata).
- ssd_hdd_bcachefs: bcachefs spanning SSD (cache/promote) and HDD (backing), labeled ZOSDATA.
- btrfs_raid1: only when explicitly requested; otherwise default to independent btrfs.
- UEFI vs BIOS: when running under UEFI (`/sys/firmware/efi` present), the BIOS boot partition is suppressed.
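The standardized topology naming (note `bcachefs-2copy` is the one hyphenated form among otherwise snake_case values) implies Display and the parser must stay in sync; a minimal sketch of that round trip:

```rust
use std::fmt;
use std::str::FromStr;

#[derive(Debug, Clone, Copy, PartialEq)]
enum Topology { BtrfsSingle, BcachefsSingle, DualIndependent, Bcachefs2Copy, SsdHddBcachefs, BtrfsRaid1 }

impl fmt::Display for Topology {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str(match self {
            Topology::BtrfsSingle => "btrfs_single",
            Topology::BcachefsSingle => "bcachefs_single",
            Topology::DualIndependent => "dual_independent",
            Topology::Bcachefs2Copy => "bcachefs-2copy",
            Topology::SsdHddBcachefs => "ssd_hdd_bcachefs",
            Topology::BtrfsRaid1 => "btrfs_raid1",
        })
    }
}

impl FromStr for Topology {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        Ok(match s {
            "btrfs_single" => Topology::BtrfsSingle,
            "bcachefs_single" => Topology::BcachefsSingle,
            "dual_independent" => Topology::DualIndependent,
            "bcachefs-2copy" => Topology::Bcachefs2Copy,
            "ssd_hdd_bcachefs" => Topology::SsdHddBcachefs,
            "btrfs_raid1" => Topology::BtrfsRaid1,
            other => return Err(format!("unknown topology: {other}")),
        })
    }
}

fn main() {
    println!("{}", Topology::Bcachefs2Copy);
}
```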
Module dependency overview

@@ -30,116 +30,122 @@ Top level
- [tests/integration_ssd_hdd.rs](tests/integration_ssd_hdd.rs)
Crate sources
- [src/main.rs](src/main.rs)
- [src/lib.rs](src/lib.rs)
- [src/errors.rs](src/errors.rs)
- [src/cli/args.rs](src/cli/args.rs)
- [src/logging/mod.rs](src/logging/mod.rs)
- [src/config/loader.rs](src/config/loader.rs)
- [src/types.rs](src/types.rs)
- [src/device/discovery.rs](src/device/discovery.rs)
- [src/partition/plan.rs](src/partition/plan.rs)
- [src/fs/plan.rs](src/fs/plan.rs)
- [src/mount/ops.rs](src/mount/ops.rs)
- [src/report/state.rs](src/report/state.rs)
- [src/orchestrator/run.rs](src/orchestrator/run.rs)
- [src/idempotency/mod.rs](src/idempotency/mod.rs)
- [src/util/mod.rs](src/util/mod.rs)
- [src/main.rs](../src/main.rs)
- [src/lib.rs](../src/lib.rs)
- [src/errors.rs](../src/errors.rs)
- [src/cli/args.rs](../src/cli/args.rs)
- [src/logging/mod.rs](../src/logging/mod.rs)
- [src/config/loader.rs](../src/config/loader.rs)
- [src/types.rs](../src/types.rs)
- [src/device/discovery.rs](../src/device/discovery.rs)
- [src/partition/plan.rs](../src/partition/plan.rs)
- [src/fs/plan.rs](../src/fs/plan.rs)
- [src/mount/ops.rs](../src/mount/ops.rs)
- [src/report/state.rs](../src/report/state.rs)
- [src/orchestrator/run.rs](../src/orchestrator/run.rs)
- [src/idempotency/mod.rs](../src/idempotency/mod.rs)
- [src/util/mod.rs](../src/util/mod.rs)
Module responsibilities
- [src/main.rs](src/main.rs)
- [src/main.rs](../src/main.rs)
- Entrypoint. Parse CLI, initialize logging, load and merge configuration per precedence, call orchestrator. No stdout spam.
- [src/lib.rs](src/lib.rs)
- [src/lib.rs](../src/lib.rs)
- Crate exports, prelude, version constants, Result alias.
- [src/errors.rs](src/errors.rs)
- [src/errors.rs](../src/errors.rs)
- Common error enum and Result alias via thiserror.
- [src/cli/args.rs](src/cli/args.rs)
- [src/cli/args.rs](../src/cli/args.rs)
- CLI definition mirroring kernel cmdline semantics; provide non-interactive interface. Stub --force returns unimplemented.
- [src/logging/mod.rs](src/logging/mod.rs)
- [src/logging/mod.rs](../src/logging/mod.rs)
- Initialize tracing; levels error, warn, info, debug; default to stderr; optional file target.
- [src/config/loader.rs](src/config/loader.rs) and [src/types.rs](src/types.rs)
- [src/config/loader.rs](../src/config/loader.rs) and [src/types.rs](../src/types.rs)
- YAML schema types, validation, loading, and merging with CLI and kernel cmdline.
- [src/device/discovery.rs](src/device/discovery.rs)
- [src/device/discovery.rs](../src/device/discovery.rs)
- Device discovery under /dev with filters and allowlist; probe emptiness safely.
- [src/partition/plan.rs](src/partition/plan.rs)
- [src/partition/plan.rs](../src/partition/plan.rs)
- GPT-only planning and application; 1 MiB alignment; create bios boot, ESP, data and cache partitions with strict safety checks.
- [src/fs/plan.rs](src/fs/plan.rs)
- [src/fs/plan.rs](../src/fs/plan.rs)
- Filesystem provisioning: vfat for ESP, btrfs for ZOSDATA, bcachefs for SSD+HDD mode; all data filesystems labeled ZOSDATA.
- [src/mount/ops.rs](src/mount/ops.rs)
- Mount per-UUID under /var/cache/<UUID>. Optional fstab writing, disabled by default.
- [src/report/state.rs](src/report/state.rs)
- [src/mount/ops.rs](../src/mount/ops.rs)
- Mount scheme:
- Root-mount all data filesystems under `/var/mounts/{UUID}` (runtime only)
- btrfs root: `rw,noatime,subvolid=5`
- bcachefs root: `rw,noatime`
- Create or ensure subvolumes on the primary data filesystem: `system`, `etc`, `modules`, `vm-meta`
- Mount final subvolume/subdir targets at `/var/cache/{system,etc,modules,vm-meta}`
- Optional fstab writing: only the four final targets, deterministic order, `UUID=` sources; disabled by default
- [src/report/state.rs](../src/report/state.rs)
- Build and write JSON state report with version field.
- [src/orchestrator/run.rs](src/orchestrator/run.rs)
- [src/orchestrator/run.rs](../src/orchestrator/run.rs)
- One-shot flow orchestration with abort-on-any-validation-error policy.
- [src/idempotency/mod.rs](src/idempotency/mod.rs)
- [src/idempotency/mod.rs](../src/idempotency/mod.rs)
- Detect prior provisioning via GPT names and labels; return success-without-changes.
- [src/util/mod.rs](src/util/mod.rs)
- [src/util/mod.rs](../src/util/mod.rs)
- Shell-out, udev settle, and helpers.
Public API surface (signatures; implementation to follow after approval)
Entrypoint and orchestrator
- [fn main()](src/main.rs:1)
- [struct Context](src/orchestrator/run.rs:1)
- [fn run(ctx: &Context) -> Result<()>](src/orchestrator/run.rs:1)
- [fn main()](../src/main.rs:1)
- [struct Context](../src/orchestrator/run.rs:1)
- [fn run(ctx: &Context) -> Result<()>](../src/orchestrator/run.rs:1)
CLI
- [struct Cli](src/cli/args.rs:1)
- [fn from_args() -> Cli](src/cli/args.rs:1)
- [struct Cli](../src/cli/args.rs:1)
- [fn from_args() -> Cli](../src/cli/args.rs:1)
Logging
- [struct LogOptions](src/logging/mod.rs:1)
- [fn init_logging(opts: &LogOptions) -> Result<()>](src/logging/mod.rs:1)
- [struct LogOptions](../src/logging/mod.rs:1)
- [fn init_logging(opts: &LogOptions) -> Result<()>](../src/logging/mod.rs:1)
Config
- [struct Config](src/types.rs:1)
- [enum Topology](src/types.rs:1)
- [struct DeviceSelection](src/types.rs:1)
- [struct FsOptions](src/types.rs:1)
- [struct MountScheme](src/types.rs:1)
- [fn load_and_merge(cli: &Cli) -> Result<Config>](src/config/loader.rs:1)
- [fn validate(cfg: &Config) -> Result<()>](src/config/loader.rs:1)
- [struct Config](../src/types.rs:1)
- [enum Topology](../src/types.rs:1)
- [struct DeviceSelection](../src/types.rs:1)
- [struct FsOptions](../src/types.rs:1)
- [struct MountScheme](../src/types.rs:1)
- [fn load_and_merge(cli: &Cli) -> Result<Config>](../src/config/loader.rs:1)
- [fn validate(cfg: &Config) -> Result<()>](../src/config/loader.rs:1)
Device discovery
- [struct Disk](src/device/discovery.rs:1)
- [struct DeviceFilter](src/device/discovery.rs:1)
- [trait DeviceProvider](src/device/discovery.rs:1)
- [fn discover(filter: &DeviceFilter) -> Result<Vec<Disk>>](src/device/discovery.rs:1)
- [struct Disk](../src/device/discovery.rs:1)
- [struct DeviceFilter](../src/device/discovery.rs:1)
- [trait DeviceProvider](../src/device/discovery.rs:1)
- [fn discover(filter: &DeviceFilter) -> Result<Vec<Disk>>](../src/device/discovery.rs:1)
Partitioning
- [struct PartitionSpec](src/partition/plan.rs:1)
- [struct PartitionPlan](src/partition/plan.rs:1)
- [struct PartitionResult](src/partition/plan.rs:1)
- [fn plan_partitions(disks: &[Disk], cfg: &Config) -> Result<PartitionPlan>](src/partition/plan.rs:1)
- [fn apply_partitions(plan: &PartitionPlan) -> Result<Vec<PartitionResult>>](src/partition/plan.rs:1)
- [struct PartitionSpec](../src/partition/plan.rs:1)
- [struct PartitionPlan](../src/partition/plan.rs:1)
- [struct PartitionResult](../src/partition/plan.rs:1)
- [fn plan_partitions(disks: &[Disk], cfg: &Config) -> Result<PartitionPlan>](../src/partition/plan.rs:1)
- [fn apply_partitions(plan: &PartitionPlan) -> Result<Vec<PartitionResult>>](../src/partition/plan.rs:1)
Filesystems
- [enum FsKind](src/fs/plan.rs:1)
- [struct FsSpec](src/fs/plan.rs:1)
- [struct FsPlan](src/fs/plan.rs:1)
- [struct FsResult](src/fs/plan.rs:1)
- [fn plan_filesystems(disks: &[Disk], parts: &[PartitionResult], cfg: &Config) -> Result<FsPlan>](src/fs/plan.rs:1)
- [fn make_filesystems(plan: &FsPlan) -> Result<Vec<FsResult>>](src/fs/plan.rs:1)
- [enum FsKind](../src/fs/plan.rs:1)
- [struct FsSpec](../src/fs/plan.rs:1)
- [struct FsPlan](../src/fs/plan.rs:1)
- [struct FsResult](../src/fs/plan.rs:1)
- [fn plan_filesystems(disks: &[Disk], parts: &[PartitionResult], cfg: &Config) -> Result<FsPlan>](../src/fs/plan.rs:1)
- [fn make_filesystems(plan: &FsPlan) -> Result<Vec<FsResult>>](../src/fs/plan.rs:1)
Mounting
- [struct MountPlan](src/mount/ops.rs:1)
- [struct MountResult](src/mount/ops.rs:1)
- [fn plan_mounts(fs_results: &[FsResult], cfg: &Config) -> Result<MountPlan>](src/mount/ops.rs:1)
- [fn apply_mounts(plan: &MountPlan) -> Result<Vec<MountResult>>](src/mount/ops.rs:1)
- [fn maybe_write_fstab(mounts: &[MountResult], cfg: &Config) -> Result<()>](src/mount/ops.rs:1)
- [struct MountPlan](../src/mount/ops.rs:1)
- [struct MountResult](../src/mount/ops.rs:1)
- [fn plan_mounts(fs_results: &[FsResult], cfg: &Config) -> Result<MountPlan>](../src/mount/ops.rs:1)
- [fn apply_mounts(plan: &MountPlan) -> Result<Vec<MountResult>>](../src/mount/ops.rs:1)
- [fn maybe_write_fstab(mounts: &[MountResult], cfg: &Config) -> Result<()>](../src/mount/ops.rs:1)
Reporting
- [const REPORT_VERSION: &str](src/report/state.rs:1)
- [struct StateReport](src/report/state.rs:1)
- [fn build_report(...) -> StateReport](src/report/state.rs:1)
- [fn write_report(report: &StateReport) -> Result<()>](src/report/state.rs:1)
- [const REPORT_VERSION: &str](../src/report/state.rs:1)
- [struct StateReport](../src/report/state.rs:1)
- [fn build_report(...) -> StateReport](../src/report/state.rs:1)
- [fn write_report(report: &StateReport) -> Result<()>](../src/report/state.rs:1)
Idempotency
- [fn detect_existing_state() -> Result<Option<StateReport>>](src/idempotency/mod.rs:1)
- [fn is_empty_disk(disk: &Disk) -> Result<bool>](src/idempotency/mod.rs:1)
- [fn detect_existing_state() -> Result<Option<StateReport>>](../src/idempotency/mod.rs:1)
- [fn is_empty_disk(disk: &Disk) -> Result<bool>](../src/idempotency/mod.rs:1)
Errors and Result
- [enum Error](src/errors.rs:1)
- [type Result<T> = std::result::Result<T, Error>](src/errors.rs:1)
- [enum Error](../src/errors.rs:1)
- [type Result<T> = std::result::Result<T, Error>](../src/errors.rs:1)
Execution flow
@@ -190,8 +196,15 @@ Filesystem provisioning defaults
- Filesystem tuning options configurable with sensible defaults and extension points
Mount scheme and fstab policy
- Mount under /var/cache/<UUID> using filesystem UUID to create stable subdirectories
- Optional /etc/fstab generation disabled by default; when enabled, produce deterministic order with documentation
- Runtime root mounts:
- Each data filesystem is root-mounted at `/var/mounts/{UUID}` (runtime only)
- btrfs: `rw,noatime,subvolid=5`; bcachefs: `rw,noatime`
- Final targets (from primary data filesystem only):
- `/var/cache/system`, `/var/cache/etc`, `/var/cache/modules`, `/var/cache/vm-meta`
- btrfs subvolume option: `-o subvol={name},noatime`
- bcachefs subdir option: `-o X-mount.subdir={name},noatime`
- /etc/fstab generation:
- Disabled by default. When enabled, write only the four final targets with `UUID=` sources in deterministic order. Root mounts under `/var/mounts` are excluded.
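The fstab policy above can be sketched as a small pure helper. This is illustrative only (the real logic lives in `maybe_write_fstab()` in src/mount/ops.rs); the option strings mirror the documented btrfs/bcachefs forms.

```rust
/// Sketch: emit only the four final targets, sorted by target path,
/// with UUID= sources. Root mounts under /var/mounts are never emitted.
fn fstab_lines(uuid: &str, is_btrfs: bool) -> Vec<String> {
    let mut names = ["system", "etc", "modules", "vm-meta"];
    // Deterministic order: sort by final target path.
    names.sort();
    names
        .iter()
        .map(|name| {
            let (fstype, opts) = if is_btrfs {
                ("btrfs", format!("rw,noatime,subvol={name}"))
            } else {
                ("bcachefs", format!("rw,noatime,X-mount.subdir={name}"))
            };
            format!("UUID={uuid} /var/cache/{name} {fstype} {opts} 0 0")
        })
        .collect()
}
```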
Idempotency detection
- Consider the system provisioned when expected GPT names and filesystem labels are present and consistent

docs/Callgraph.svg (new file; 1716 lines, 122 KiB — diff suppressed)

@@ -8,18 +8,18 @@ Goal
Core Principles
1) Contract-first per module
- API signatures and responsibilities are documented in [docs/API-SKELETONS.md](docs/API-SKELETONS.md) and mirrored by crate modules:
- [src/types.rs](src/types.rs)
- [fn load_and_merge()](src/config/loader.rs:1), [fn validate()](src/config/loader.rs:1)
- [fn from_args()](src/cli/args.rs:1)
- [struct LogOptions](src/logging/mod.rs:1), [fn init_logging()](src/logging/mod.rs:1)
- [fn discover()](src/device/discovery.rs:1)
- [fn plan_partitions()](src/partition/plan.rs:1), [fn apply_partitions()](src/partition/plan.rs:1)
- [fn plan_filesystems()](src/fs/plan.rs:1), [fn make_filesystems()](src/fs/plan.rs:1)
- [fn plan_mounts()](src/mount/ops.rs:1), [fn apply_mounts()](src/mount/ops.rs:1), [fn maybe_write_fstab()](src/mount/ops.rs:1)
- [const REPORT_VERSION](src/report/state.rs:1), [fn build_report()](src/report/state.rs:1), [fn write_report()](src/report/state.rs:1)
- [struct Context](src/orchestrator/run.rs:1), [fn run()](src/orchestrator/run.rs:1)
- [fn detect_existing_state()](src/idempotency/mod.rs:1), [fn is_empty_disk()](src/idempotency/mod.rs:1)
- [struct CmdOutput](src/util/mod.rs:1), [fn which_tool()](src/util/mod.rs:1), [fn run_cmd()](src/util/mod.rs:1), [fn run_cmd_capture()](src/util/mod.rs:1), [fn udev_settle()](src/util/mod.rs:1)
- [src/types.rs](../src/types.rs)
- [fn load_and_merge()](../src/config/loader.rs:1), [fn validate()](../src/config/loader.rs:1)
- [fn from_args()](../src/cli/args.rs:1)
- [struct LogOptions](../src/logging/mod.rs:1), [fn init_logging()](../src/logging/mod.rs:1)
- [fn discover()](../src/device/discovery.rs:1)
- [fn plan_partitions()](../src/partition/plan.rs:1), [fn apply_partitions()](../src/partition/plan.rs:1)
- [fn plan_filesystems()](../src/fs/plan.rs:1), [fn make_filesystems()](../src/fs/plan.rs:1)
- [fn plan_mounts()](../src/mount/ops.rs:1), [fn apply_mounts()](../src/mount/ops.rs:1), [fn maybe_write_fstab()](../src/mount/ops.rs:1)
- [const REPORT_VERSION](../src/report/state.rs:1), [fn build_report()](../src/report/state.rs:1), [fn write_report()](../src/report/state.rs:1)
- [struct Context](../src/orchestrator/run.rs:1), [fn run()](../src/orchestrator/run.rs:1)
- [fn detect_existing_state()](../src/idempotency/mod.rs:1), [fn is_empty_disk()](../src/idempotency/mod.rs:1)
- [struct CmdOutput](../src/util/mod.rs:1), [fn which_tool()](../src/util/mod.rs:1), [fn run_cmd()](../src/util/mod.rs:1), [fn run_cmd_capture()](../src/util/mod.rs:1), [fn udev_settle()](../src/util/mod.rs:1)
2) Grep-able region markers in code
- Every module contains the following optional annotated regions:
@@ -55,22 +55,22 @@ Core Principles
6) Module ownership and boundaries
- Add a “Module Responsibilities” section in each module's header doc comment summarizing scope and non-goals.
- Example references:
- [src/device/discovery.rs](src/device/discovery.rs)
- [src/partition/plan.rs](src/partition/plan.rs)
- [src/fs/plan.rs](src/fs/plan.rs)
- [src/mount/ops.rs](src/mount/ops.rs)
- [src/report/state.rs](src/report/state.rs)
- [src/device/discovery.rs](../src/device/discovery.rs)
- [src/partition/plan.rs](../src/partition/plan.rs)
- [src/fs/plan.rs](../src/fs/plan.rs)
- [src/mount/ops.rs](../src/mount/ops.rs)
- [src/report/state.rs](../src/report/state.rs)
7) Invariants and safety notes
- For code that must uphold safety or idempotency invariants, annotate with:
// SAFETY: explanation
// IDEMPOTENCY: explanation
- Example locations:
- [fn apply_partitions()](src/partition/plan.rs:1) must enforce empty-disks rule when configured.
- [fn make_filesystems()](src/fs/plan.rs:1) must not run if partitioning failed.
- [fn apply_partitions()](../src/partition/plan.rs:1) must enforce empty-disks rule when configured.
- [fn make_filesystems()](../src/fs/plan.rs:1) must not run if partitioning failed.
8) Error mapping consistency
- Centralize conversions to [enum Error](src/errors.rs:1). When calling external tools, wrap failures into Error::Tool with stderr captured.
- Centralize conversions to [enum Error](../src/errors.rs:1). When calling external tools, wrap failures into Error::Tool with stderr captured.
- Annotate mapping areas with:
// ERROR: mapping external failure to Error::Tool
@@ -111,7 +111,7 @@ Checklist for adding a new feature
- Add examples if config or output formats change
- Update [config/zosstorage.example.yaml](config/zosstorage.example.yaml) or add a new example file
- Keep error mapping and logging consistent:
- Ensure any external tool calls map errors to [enum Error](src/errors.rs:1)
- Ensure any external tool calls map errors to [enum Error](../src/errors.rs:1)
- Run cargo build and update any broken references
Optional automation (future)

docs/FUNCTION_LIST.md (new file; 294 lines)
@@ -0,0 +1,294 @@
# Function Reference - Call Graph Analysis
> This documentation is automatically derived from [`Callgraph.svg`](Callgraph.svg) and provides a comprehensive overview of all functions in the zosstorage project, organized by module.
## Table of Contents
- [Main Entry Points](#main-entry-points)
- [CLI & Configuration](#cli--configuration)
- [Orchestration](#orchestration)
- [Device Discovery](#device-discovery)
- [Partition Management](#partition-management)
- [Filesystem Operations](#filesystem-operations)
- [Mount Operations](#mount-operations)
- [Idempotency & State](#idempotency--state)
- [Reporting](#reporting)
- [Utilities](#utilities)
- [Logging](#logging)
- [Type Definitions](#type-definitions)
---
## Main Entry Points
### [`src/main.rs`](../src/main.rs)
| Function | Purpose |
|----------|---------|
| `main()` | Application entry point; initializes the program and handles top-level errors |
| `real_main()` | Core application logic; orchestrates the main workflow after initialization |
---
## CLI & Configuration
### [`src/cli/args.rs`](../src/cli/args.rs)
**Structs:** `Cli`, `LogLevelArg` (enum)
| Function | Purpose |
|----------|---------|
| `from_args()` | Parses command-line arguments and returns a `Cli` configuration object |
### [`src/config/loader.rs`](../src/config/loader.rs)
| Function | Purpose |
|----------|---------|
| `load_and_merge()` | Loads configuration from multiple sources and merges them into a unified config |
| `validate()` | Validates the merged configuration for correctness and completeness |
| `to_value()` | Converts configuration structures to internal value representation |
| `merge_value()` | Recursively merges configuration values, handling conflicts appropriately |
| `cli_overlay_value()` | Overlays CLI-provided values onto existing configuration |
| `kernel_cmdline_topology()` | Extracts topology information from kernel command line parameters |
| `parse_topology_token()` | Parses individual topology tokens from kernel cmdline |
| `default_config()` | Generates default configuration values when no config file is present |
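The merge helpers above can be illustrated with a toy value tree. This is a simplified sketch, not the actual `merge_value()` (which operates on serde values); it only shows the recursive "overlay wins, maps merge key-by-key" behavior.

```rust
use std::collections::BTreeMap;

/// Toy stand-in for the YAML/JSON value trees merged during config loading.
#[derive(Clone, Debug, PartialEq)]
enum Value {
    Str(String),
    Map(BTreeMap<String, Value>),
}

/// Recursive merge: maps merge key-by-key, everything else is replaced
/// by the overlay value.
fn merge_value(base: &mut Value, overlay: &Value) {
    match (base, overlay) {
        (Value::Map(b), Value::Map(o)) => {
            for (k, v) in o {
                match b.get_mut(k) {
                    Some(slot) => merge_value(slot, v),
                    None => {
                        b.insert(k.clone(), v.clone());
                    }
                }
            }
        }
        (slot, other) => *slot = other.clone(),
    }
}
```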
---
## Orchestration
### [`src/orchestrator/run.rs`](../src/orchestrator/run.rs)
**Structs:** `Context`
| Function | Purpose |
|----------|---------|
| `Context::new()` | Creates a new orchestration context with default settings |
| `Context::with_show()` | Builder method to enable show/dry-run mode |
| `Context::with_apply()` | Builder method to enable apply mode (actual execution) |
| `Context::with_report_path()` | Builder method to set the report output path |
| `Context::with_mount_existing()` | Builder method to configure mounting of existing filesystems |
| `Context::with_report_current()` | Builder method to enable reporting of current system state |
| `Context::with_topology_from_cli()` | Builder method to set topology from CLI arguments |
| `Context::with_topology_from_cmdline()` | Builder method to set topology from kernel cmdline |
| `run()` | Main orchestration function; coordinates all storage operations |
| `build_device_filter()` | Constructs device filter based on configuration and user input |
| `enforce_empty_disks()` | Validates that target disks are empty before proceeding |
| `role_str()` | Converts partition role enum to human-readable string |
| `build_summary_json()` | Builds a JSON summary of operations performed |
---
## Device Discovery
### [`src/device/discovery.rs`](../src/device/discovery.rs)
**Structs:** `Disk`, `DeviceFilter`, `SysProvider`
**Traits:** `DeviceProvider`
| Function | Purpose |
|----------|---------|
| `DeviceFilter::matches()` | Checks if a device matches the configured filter criteria |
| `SysProvider::new()` | Creates a new sysfs-based device provider |
| `SysProvider::list_block_devices()` | Lists all block devices found via sysfs |
| `SysProvider::probe_properties()` | Probes detailed properties of a specific device |
| `discover()` | Entry point for device discovery using default provider |
| `discover_with_provider()` | Device discovery with custom provider (for testing/flexibility) |
| `is_ignored_name()` | Checks if device name should be ignored (loop, ram, etc.) |
| `sys_block_path()` | Constructs sysfs path for a given block device |
| `base_name()` | Extracts base device name from path |
| `is_removable_sysfs()` | Checks if device is removable via sysfs |
| `is_partition_sysfs()` | Checks if device is a partition via sysfs |
| `read_disk_size_bytes()` | Reads disk size in bytes from sysfs |
| `read_rotational()` | Determines if disk is rotational (HDD) or not (SSD) |
| `read_model_serial()` | Reads device model and serial number |
| `read_optional_string()` | Utility to safely read optional string values from sysfs |
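The rotational probe in the table above reads `/sys/block/<dev>/queue/rotational`, where the kernel exposes `"1\n"` for rotational (HDD) and `"0\n"` for non-rotational (SSD/NVMe) devices. A minimal parsing sketch (the real `read_rotational()` also performs the sysfs read):

```rust
/// Parse the contents of /sys/block/<dev>/queue/rotational.
/// Returns None for unexpected contents rather than guessing.
fn parse_rotational(raw: &str) -> Option<bool> {
    match raw.trim() {
        "1" => Some(true),  // rotational: HDD
        "0" => Some(false), // non-rotational: SSD/NVMe
        _ => None,
    }
}
```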
---
## Partition Management
### [`src/partition/plan.rs`](../src/partition/plan.rs)
**Structs:** `PartitionSpec`, `DiskPlan`, `PartitionPlan`, `PartitionResult`
**Enums:** `PartRole`
| Function | Purpose |
|----------|---------|
| `plan_partitions()` | Creates partition plans for all target disks based on topology |
| `apply_partitions()` | Executes partition plans using sgdisk tool |
| `type_code()` | Returns GPT partition type code for a given partition role |
| `part_dev_path()` | Constructs device path for a partition (e.g., /dev/sda1) |
| `sector_size_bytes()` | Reads logical sector size of disk |
| `parse_sgdisk_info()` | Parses output from sgdisk to extract partition information |
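The partition path construction behind `part_dev_path()` follows the usual kernel naming convention; a hedged sketch (the real function may handle more device classes):

```rust
/// Append the partition index directly for sdX-style disks, inserting a
/// 'p' separator when the disk name ends in a digit
/// (nvme0n1 -> nvme0n1p1, mmcblk0 -> mmcblk0p1).
fn part_dev_path(disk: &str, index: u32) -> String {
    if disk.chars().last().map_or(false, |c| c.is_ascii_digit()) {
        format!("{disk}p{index}")
    } else {
        format!("{disk}{index}")
    }
}
```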
---
## Filesystem Operations
### [`src/fs/plan.rs`](../src/fs/plan.rs)
**Structs:** `FsSpec`, `FsPlan`, `FsResult`
**Enums:** `FsKind`
| Function | Purpose |
|----------|---------|
| `plan_filesystems()` | Plans filesystem creation for all partitions |
| `make_filesystems()` | Creates filesystems according to plan (mkfs.* tools) |
| `capture_uuid()` | Captures UUID of newly created filesystem |
| `parse_blkid_export()` | Parses blkid export format to extract filesystem metadata |
| `probe_existing_filesystems()` | Detects existing filesystems on partitions |
---
## Mount Operations
### [`src/mount/ops.rs`](../src/mount/ops.rs)
**Structs:** `PlannedMount`, `PlannedSubvolMount`, `MountPlan`, `MountResult`
| Function | Purpose |
|----------|---------|
| `fstype_str()` | Converts FsKind enum to mount filesystem type string |
| `plan_mounts()` | Creates mount plans for all filesystems |
| `apply_mounts()` | Executes mount operations and creates mount points |
| `maybe_write_fstab()` | Conditionally writes /etc/fstab entries for persistent mounts |
---
## Idempotency & State
### [`src/idempotency/mod.rs`](../src/idempotency/mod.rs)
| Function | Purpose |
|----------|---------|
| `detect_existing_state()` | Detects existing partitions and filesystems to avoid destructive operations |
| `is_empty_disk()` | Checks if a disk has no partition table or filesystems |
| `parse_blkid_export()` | Parses blkid output to identify existing filesystems |
| `read_proc_partitions_names()` | Reads partition names from /proc/partitions |
| `base_name()` | Extracts base name from device path |
| `is_partition_of()` | Checks if one device is a partition of another |
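The naming helpers in the table above can be sketched as follows. This is an illustration of the convention, not the actual src/idempotency/mod.rs code:

```rust
/// Strip the /dev/ prefix: "/dev/sda1" -> "sda1".
fn base_name(path: &str) -> &str {
    path.rsplit('/').next().unwrap_or(path)
}

/// Kernel naming convention: "sda1" is a partition of "sda",
/// "nvme0n1p1" is a partition of "nvme0n1".
fn is_partition_of(disk: &str, candidate: &str) -> bool {
    match candidate.strip_prefix(disk) {
        Some(rest) if !rest.is_empty() => {
            // NVMe/mmc-style names insert a 'p' before the partition number.
            let rest = rest.strip_prefix('p').unwrap_or(rest);
            !rest.is_empty() && rest.chars().all(|c| c.is_ascii_digit())
        }
        _ => false,
    }
}
```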
---
## Reporting
### [`src/report/state.rs`](../src/report/state.rs)
**Structs:** `StateReport`
| Function | Purpose |
|----------|---------|
| `build_report()` | Builds comprehensive state report of operations performed |
| `write_report()` | Writes report to specified output path (JSON format) |
---
## Utilities
### [`src/util/mod.rs`](../src/util/mod.rs)
**Structs:** `CmdOutput`
| Function | Purpose |
|----------|---------|
| `which_tool()` | Locates external tool in PATH (sgdisk, mkfs.*, etc.) |
| `run_cmd()` | Executes shell command and returns exit status |
| `run_cmd_capture()` | Executes command and captures stdout/stderr |
| `udev_settle()` | Waits for udev to process device events |
| `is_efi_boot()` | Detects if system booted in EFI mode |
---
## Logging
### [`src/logging/mod.rs`](../src/logging/mod.rs)
**Structs:** `LogOptions`
| Function | Purpose |
|----------|---------|
| `LogOptions::from_cli()` | Creates logging configuration from CLI arguments |
| `level_from_str()` | Converts string to log level enum |
| `init_logging()` | Initializes logging subsystem with configured options |
---
## Type Definitions
### [`src/types.rs`](../src/types.rs)
**Core Configuration Structures:**
- `Config` - Top-level configuration structure
- `LoggingConfig` - Logging configuration
- `DeviceSelection` - Device selection criteria
- `Topology` - Storage topology definition (enum)
- `Partitioning` - Partition layout specification
- `BiosBootSpec`, `EspSpec`, `DataSpec`, `CacheSpec` - Partition type specifications
- `FsOptions`, `BtrfsOptions`, `BcachefsOptions`, `VfatOptions` - Filesystem options
- `MountScheme`, `MountSchemeKind` - Mount configuration
- `ReportOptions` - Report generation configuration
### [`src/errors.rs`](../src/errors.rs)
**Error Types:**
- `Error` - Main error enum for all error conditions
- `Result<T>` - Type alias for `std::result::Result<T, Error>`
---
## Call Graph Relationships
### Main Execution Flow
```
main() → real_main() → orchestrator::run()
├─→ cli::from_args()
├─→ config::load_and_merge()
├─→ logging::init_logging()
├─→ device::discover()
├─→ partition::plan_partitions()
├─→ partition::apply_partitions()
├─→ fs::plan_filesystems()
├─→ fs::make_filesystems()
├─→ mount::plan_mounts()
├─→ mount::apply_mounts()
└─→ report::build_report() / write_report()
```
### Key Dependencies
- **Orchestrator** (`run()`) calls: All major subsystems
- **Device Discovery** uses: Utilities for system probing
- **Partition/FS/Mount** operations use: Utilities for command execution
- **All operations** call: `util::run_cmd()` or `util::run_cmd_capture()`
- **Idempotency checks** called by: Orchestrator before destructive operations
---
## Function Count Summary
- **Main Entry**: 2 functions
- **CLI & Config**: 9 functions
- **Orchestration**: 13 functions
- **Device Discovery**: 14 functions
- **Partition Management**: 6 functions
- **Filesystem Operations**: 5 functions
- **Mount Operations**: 4 functions
- **Idempotency**: 6 functions
- **Reporting**: 2 functions
- **Utilities**: 6 functions
- **Logging**: 3 functions
**Total: 70 documented functions** across 15 source files
---
## References
- Original call graph visualization: [`docs/Callgraph.svg`](Callgraph.svg)
- Architecture documentation: [`docs/ARCHITECTURE.md`](ARCHITECTURE.md)
- API documentation: [`docs/API.md`](API.md)


@@ -1,27 +1,16 @@
# zosstorage Configuration Schema
# zosstorage Configuration (Deprecated schema)
This document defines the YAML configuration for the initramfs-only disk provisioning utility and the exact precedence rules between configuration sources. It complements [docs/ARCHITECTURE.md](docs/ARCHITECTURE.md).
This schema document is deprecated per [docs/adr/0002-defaults-only-no-external-config.md](docs/adr/0002-defaults-only-no-external-config.md). Runtime now uses defaults-only with a single optional kernel cmdline override. The YAML configuration file is not read at boot.
Canonical paths and keys
- Kernel cmdline key: zosstorage.config=
- Default config file path: /etc/zosstorage/config.yaml
- JSON state report path: /run/zosstorage/state.json
- Optional log file path: /run/zosstorage/zosstorage.log
- fstab generation: disabled by default
- Reserved filesystem labels: ZOSBOOT (ESP), ZOSDATA (all data filesystems)
- GPT partition names: zosboot, zosdata, zoscache
Active behavior (ADR-0002)
- Defaults-only: all settings are defined in code. No /etc/zosstorage/config.yaml is read.
- Optional kernel cmdline override: `zosstorage.topology=VALUE` can override only the topology. Legacy alias `zosstorage.topo=` is accepted.
- CLI: `--config` and `--topology` are deprecated and ignored (warnings emitted). Operational flags remain (`--apply`, `--show`, `--report`, `--fstab`, logging).
- Report path: `/run/zosstorage/state.json`. Optional log file: `/run/zosstorage/zosstorage.log`.
- Reserved labels: `ZOSBOOT` (ESP), `ZOSDATA` (data). GPT names: `zosboot`, `zosdata`, `zoscache`.
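The cmdline override described above can be sketched like this. The accepted token names follow this document and the `bcachefs-2copy` naming standardization; the real parser is `parse_topology_token()` / `kernel_cmdline_topology()` in src/config/loader.rs and may differ in detail.

```rust
/// Topology values accepted on the kernel cmdline (per this document).
#[derive(Debug, PartialEq)]
enum Topology {
    BtrfsSingle,
    BcachefsSingle,
    DualIndependent,
    Bcachefs2Copy,
    SsdHddBcachefs,
    BtrfsRaid1,
}

fn parse_topology_token(token: &str) -> Option<Topology> {
    match token.trim().to_ascii_lowercase().as_str() {
        "btrfs_single" => Some(Topology::BtrfsSingle),
        "bcachefs_single" => Some(Topology::BcachefsSingle),
        "dual_independent" => Some(Topology::DualIndependent),
        "bcachefs-2copy" => Some(Topology::Bcachefs2Copy),
        "ssd_hdd_bcachefs" => Some(Topology::SsdHddBcachefs),
        "btrfs_raid1" => Some(Topology::BtrfsRaid1),
        _ => None,
    }
}

/// Scan /proc/cmdline contents for `zosstorage.topology=` or the
/// legacy alias `zosstorage.topo=`.
fn kernel_cmdline_topology(cmdline: &str) -> Option<Topology> {
    cmdline.split_whitespace().find_map(|kv| {
        kv.strip_prefix("zosstorage.topology=")
            .or_else(|| kv.strip_prefix("zosstorage.topo="))
            .and_then(parse_topology_token)
    })
}
```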
Precedence and merge strategy
1. Start from built-in defaults documented here.
2. Merge in the on-disk config file if present at /etc/zosstorage/config.yaml.
3. Merge CLI flags next; these override file values.
4. Merge kernel cmdline last; zosstorage.config= overrides CLI and file.
5. No interactive prompts are permitted.
The kernel cmdline key zosstorage.config= accepts:
- A path to a YAML file inside the initramfs root (preferred).
- A file: absolute path (e.g., file:/run/config/zos.yaml).
- A data: URL containing base64 YAML (optional extension).
Historical reference (original YAML-based schema, no longer used at runtime)
The remainder of this document preserves the previous YAML schema for archival purposes only.
Top-level YAML structure
@@ -43,7 +32,7 @@ device_selection:
allow_removable: false # future option; default false
min_size_gib: 10 # ignore devices smaller than this (default 10)
topology: # desired overall layout; see values below
mode: single # single | dual_independent | ssd_hdd_bcachefs | btrfs_raid1 (optional)
mode: btrfs_single # btrfs_single | bcachefs_single | dual_independent | bcachefs-2copy | ssd_hdd_bcachefs | btrfs_raid1
partitioning:
alignment_mib: 1 # GPT alignment in MiB
require_empty_disks: true # abort if any partition or FS signatures exist
@@ -81,9 +70,11 @@ report:
```
Topology modes
- single: One eligible disk. Create BIOS boot (if enabled), ESP 512 MiB, remainder as data. Make a btrfs filesystem labeled ZOSDATA on the data partition.
- dual_independent: Two eligible disks. On each disk, create BIOS boot (if enabled) + ESP + data. Create a separate btrfs filesystem labeled ZOSDATA on each data partition. No RAID by default.
- ssd_hdd_bcachefs: One SSD/NVMe and one HDD. Create BIOS boot (if enabled) + ESP on both as required. Create cache (on SSD) and data/backing (on HDD) partitions named zoscache and zosdata respectively. Make a bcachefs filesystem across both with label ZOSDATA, using SSD as cache/promote and HDD as backing.
- btrfs_single: One eligible disk. Create BIOS boot (if enabled), ESP 512 MiB, remainder as data. Create a btrfs filesystem labeled ZOSDATA on the data partition.
- bcachefs_single: One eligible disk. Create BIOS boot (if enabled), ESP 512 MiB, remainder as data. Create a bcachefs filesystem labeled ZOSDATA on the data partition.
- dual_independent: One or more eligible disks. On each disk, create BIOS boot (if enabled) + ESP + data. Create an independent btrfs filesystem labeled ZOSDATA on each data partition. No RAID by default.
- bcachefs-2copy: Two or more eligible disks (minimum 2). Create data partitions and then a single multi-device bcachefs labeled ZOSDATA spanning those data partitions. The mkfs step uses `--replicas=2` (data and metadata).
- ssd_hdd_bcachefs: One SSD/NVMe and one HDD. Create BIOS boot (if enabled) + ESP on both as required. Create cache (on SSD) and data/backing (on HDD) partitions named zoscache and zosdata respectively. Create a bcachefs labeled ZOSDATA across the SSD and HDD per policy (SSD cache/promote; HDD backing).
- btrfs_raid1: Optional mode if explicitly requested. Create mirrored btrfs across two disks for the data role with raid1 profile. Not enabled by default.
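As a rough illustration of how the multi-device modes above translate into mkfs invocations, consider this sketch. The `--replicas=2` flag comes from this document; the label flag and overall argument order are assumptions, and the real command assembly lives in src/fs/plan.rs.

```rust
/// Illustrative sketch only: assemble an mkfs argument vector for two of
/// the topology modes. Other modes are omitted; flags beyond --replicas=2
/// are assumptions, not confirmed CLI syntax.
fn mkfs_args(topology: &str, devices: &[&str]) -> Option<Vec<String>> {
    match topology {
        // Multi-device bcachefs with 2 copies of data and metadata.
        "bcachefs-2copy" if devices.len() >= 2 => {
            let mut args: Vec<String> = vec![
                "bcachefs".into(),
                "format".into(),
                "--replicas=2".into(),
            ];
            args.extend(devices.iter().map(|d| d.to_string()));
            Some(args)
        }
        // Single-disk btrfs labeled ZOSDATA.
        "btrfs_single" if devices.len() == 1 => Some(vec![
            "mkfs.btrfs".into(),
            "-L".into(),
            "ZOSDATA".into(),
            devices[0].into(),
        ]),
        _ => None, // other modes omitted in this sketch
    }
}
```

Note the minimum-disk checks: `bcachefs-2copy` refuses fewer than two devices, matching the validation rule above.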
Validation rules
@@ -117,11 +108,15 @@ Filesystem section
- vfat: label ZOSBOOT used for ESP.
Mount section
- base_dir: default /var/cache.
- scheme:
- per_uuid: mount data filesystems at /var/cache/<FS-UUID>
- custom: reserved for future mapping-by-config, not yet implemented.
- fstab.enabled: default false. When true, zosstorage will generate fstab entries in deterministic order.
- Runtime root mounts (all data filesystems):
- Each data filesystem is root-mounted at `/var/mounts/{UUID}`
- btrfs root mount options: `rw,noatime,subvolid=5`
- bcachefs root mount options: `rw,noatime`
- Subvolume mounts (from the primary data filesystem only) to final targets:
- Targets: `/var/cache/system`, `/var/cache/etc`, `/var/cache/modules`, `/var/cache/vm-meta`
- btrfs subvol options: `-o rw,noatime,subvol={name}`
- bcachefs subdir options: `-o rw,noatime,X-mount.subdir={name}`
- fstab.enabled: default false. When true, zosstorage writes only the four subvolume mount entries, in deterministic target order, using `UUID=` sources for the filesystem; root mounts under `/var/mounts` are excluded.
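The option strings in the mount scheme above can be summarized in two small helpers. A sketch only; the real logic is in src/mount/ops.rs.

```rust
/// Root-mount options for /var/mounts/{UUID} (runtime only).
fn root_mount_opts(fstype: &str) -> &'static str {
    match fstype {
        "btrfs" => "rw,noatime,subvolid=5",
        _ => "rw,noatime", // bcachefs and others
    }
}

/// Subvolume/subdir options for the final /var/cache/{name} targets.
fn subvol_mount_opts(fstype: &str, name: &str) -> String {
    match fstype {
        "btrfs" => format!("rw,noatime,subvol={name}"),
        _ => format!("rw,noatime,X-mount.subdir={name}"),
    }
}
```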
Report section
- path: default /run/zosstorage/state.json.
@@ -133,7 +128,7 @@ Minimal single-disk btrfs
```yaml
version: 1
topology:
mode: single
mode: btrfs_single
```
Dual independent btrfs (two disks)
@@ -184,15 +179,15 @@ Future extensions
- Multiple topology groups on multi-disk systems
Reference modules
- [src/types.rs](src/types.rs)
- [src/config/loader.rs](src/config/loader.rs)
- [src/cli/args.rs](src/cli/args.rs)
- [src/orchestrator/run.rs](src/orchestrator/run.rs)
- [src/partition/plan.rs](src/partition/plan.rs)
- [src/fs/plan.rs](src/fs/plan.rs)
- [src/mount/ops.rs](src/mount/ops.rs)
- [src/report/state.rs](src/report/state.rs)
- [src/idempotency/mod.rs](src/idempotency/mod.rs)
- [src/types.rs](../src/types.rs)
- [src/config/loader.rs](../src/config/loader.rs)
- [src/cli/args.rs](../src/cli/args.rs)
- [src/orchestrator/run.rs](../src/orchestrator/run.rs)
- [src/partition/plan.rs](../src/partition/plan.rs)
- [src/fs/plan.rs](../src/fs/plan.rs)
- [src/mount/ops.rs](../src/mount/ops.rs)
- [src/report/state.rs](../src/report/state.rs)
- [src/idempotency/mod.rs](../src/idempotency/mod.rs)
Change log
- v1: Initial draft of schema and precedence rules


@@ -3,31 +3,31 @@
This document finalizes core specifications required before code skeleton implementation. It complements [docs/ARCHITECTURE.md](docs/ARCHITECTURE.md) and [docs/SCHEMA.md](docs/SCHEMA.md), and references the API declarations listed in [docs/API.md](docs/API.md).
Linked modules and functions
- Logging module: [src/logging/mod.rs](src/logging/mod.rs)
- [fn init_logging(opts: &LogOptions) -> Result<()>](src/logging/mod.rs:1)
- Report module: [src/report/state.rs](src/report/state.rs)
- [const REPORT_VERSION: &str](src/report/state.rs:1)
- [fn build_report(...) -> StateReport](src/report/state.rs:1)
- [fn write_report(report: &StateReport) -> Result<()>](src/report/state.rs:1)
- Device module: [src/device/discovery.rs](src/device/discovery.rs)
- [fn discover(filter: &DeviceFilter) -> Result<Vec<Disk>>](src/device/discovery.rs:1)
- Partitioning module: [src/partition/plan.rs](src/partition/plan.rs)
- [fn plan_partitions(disks: &[Disk], cfg: &Config) -> Result<PartitionPlan>](src/partition/plan.rs:1)
- [fn apply_partitions(plan: &PartitionPlan) -> Result<Vec<PartitionResult>>](src/partition/plan.rs:1)
- Filesystems module: [src/fs/plan.rs](src/fs/plan.rs)
- [fn plan_filesystems(disks: &[Disk], parts: &[PartitionResult], cfg: &Config) -> Result<FsPlan>](src/fs/plan.rs:1)
- [fn make_filesystems(plan: &FsPlan) -> Result<Vec<FsResult>>](src/fs/plan.rs:1)
- Mount module: [src/mount/ops.rs](src/mount/ops.rs)
- [fn plan_mounts(fs_results: &[FsResult], cfg: &Config) -> Result<MountPlan>](src/mount/ops.rs:1)
- [fn apply_mounts(plan: &MountPlan) -> Result<Vec<MountResult>>](src/mount/ops.rs:1)
- [fn maybe_write_fstab(mounts: &[MountResult], cfg: &Config) -> Result<()>](src/mount/ops.rs:1)
- Idempotency module: [src/idempotency/mod.rs](src/idempotency/mod.rs)
- [fn detect_existing_state() -> Result<Option<StateReport>>](src/idempotency/mod.rs:1)
- [fn is_empty_disk(disk: &Disk) -> Result<bool>](src/idempotency/mod.rs:1)
- CLI module: [src/cli/args.rs](src/cli/args.rs)
- [fn from_args() -> Cli](src/cli/args.rs:1)
- Orchestrator: [src/orchestrator/run.rs](src/orchestrator/run.rs)
- [fn run(ctx: &Context) -> Result<()>](src/orchestrator/run.rs:1)
- Logging module: [src/logging/mod.rs](../src/logging/mod.rs)
- [fn init_logging(opts: &LogOptions) -> Result<()>](../src/logging/mod.rs:1)
- Report module: [src/report/state.rs](../src/report/state.rs)
- [const REPORT_VERSION: &str](../src/report/state.rs:1)
- [fn build_report(...) -> StateReport](../src/report/state.rs:1)
- [fn write_report(report: &StateReport) -> Result<()>](../src/report/state.rs:1)
- Device module: [src/device/discovery.rs](../src/device/discovery.rs)
- [fn discover(filter: &DeviceFilter) -> Result<Vec<Disk>>](../src/device/discovery.rs:1)
- Partitioning module: [src/partition/plan.rs](../src/partition/plan.rs)
- [fn plan_partitions(disks: &[Disk], cfg: &Config) -> Result<PartitionPlan>](../src/partition/plan.rs:1)
- [fn apply_partitions(plan: &PartitionPlan) -> Result<Vec<PartitionResult>>](../src/partition/plan.rs:1)
- Filesystems module: [src/fs/plan.rs](../src/fs/plan.rs)
- [fn plan_filesystems(disks: &[Disk], parts: &[PartitionResult], cfg: &Config) -> Result<FsPlan>](../src/fs/plan.rs:1)
- [fn make_filesystems(plan: &FsPlan) -> Result<Vec<FsResult>>](../src/fs/plan.rs:1)
- Mount module: [src/mount/ops.rs](../src/mount/ops.rs)
- [fn plan_mounts(fs_results: &[FsResult], cfg: &Config) -> Result<MountPlan>](../src/mount/ops.rs:1)
- [fn apply_mounts(plan: &MountPlan) -> Result<Vec<MountResult>>](../src/mount/ops.rs:1)
- [fn maybe_write_fstab(mounts: &[MountResult], cfg: &Config) -> Result<()>](../src/mount/ops.rs:1)
- Idempotency module: [src/idempotency/mod.rs](../src/idempotency/mod.rs)
- [fn detect_existing_state() -> Result<Option<StateReport>>](../src/idempotency/mod.rs:1)
- [fn is_empty_disk(disk: &Disk) -> Result<bool>](../src/idempotency/mod.rs:1)
- CLI module: [src/cli/args.rs](../src/cli/args.rs)
- [fn from_args() -> Cli](../src/cli/args.rs:1)
- Orchestrator: [src/orchestrator/run.rs](../src/orchestrator/run.rs)
- [fn run(ctx: &Context) -> Result<()>](../src/orchestrator/run.rs:1)
---
@@ -39,7 +39,7 @@ Goals
Configuration
- Levels: error, warn, info, debug (default info).
- Propagation: single global initialization via [fn init_logging](src/logging/mod.rs:1). Subsequent calls must be no-ops.
- Propagation: single global initialization via [fn init_logging](../src/logging/mod.rs:1). Subsequent calls must be no-ops.
Implementation notes
- Use tracing and tracing-subscriber.
@@ -57,7 +57,7 @@ Location
- Default output: /run/zosstorage/state.json
Versioning
- Include a top-level string field version equal to [REPORT_VERSION](src/report/state.rs:1). Start with v1.
- Include a top-level string field version equal to [REPORT_VERSION](../src/report/state.rs:1). Start with v1.
Schema example
@@ -154,17 +154,17 @@ Default exclude patterns
- ^/dev/fd\\d+$
Selection policy
- Compile include and exclude regex into [DeviceFilter](src/device/discovery.rs).
- Compile include and exclude regex into [DeviceFilter](../src/device/discovery.rs).
- Enumerate device candidates and apply:
- Must match at least one include.
- Must not match any exclude.
- Must be larger than min_size_gib (default 10).
- Probing
- Gather size, rotational flag, model, serial when available.
- Expose via [struct Disk](src/device/discovery.rs:1).
- Expose via [struct Disk](../src/device/discovery.rs:1).
No eligible disks
- Return a specific error variant in [enum Error](src/errors.rs:1).
- Return a specific error variant in [enum Error](../src/errors.rs:1).
---
@@ -181,17 +181,20 @@ Layout defaults
- Cache partitions (only in ssd_hdd_bcachefs): GPT name zoscache on SSD.
Per-topology specifics
- single: All roles on the single disk.
- dual_independent: Each disk gets BIOS boot + ESP + data.
- ssd_hdd_bcachefs: SSD gets BIOS boot + ESP + zoscache, HDD gets BIOS boot + ESP + zosdata.
- btrfs_single: All roles on the single disk; data formatted as btrfs.
- bcachefs_single: All roles on the single disk; data formatted as bcachefs.
- dual_independent: On each eligible disk (one or more), create BIOS boot (if applicable), ESP, and data.
- bcachefs-2copy: Create data partitions on two or more disks; later formatted as one multi-device bcachefs spanning all data partitions.
- ssd_hdd_bcachefs: SSD gets BIOS boot + ESP + zoscache; HDD gets BIOS boot + ESP + zosdata; combined later into one bcachefs.
- btrfs_raid1: Two disks minimum; data partitions mirrored via btrfs RAID1.
Safety checks
- Ensure unique partition UUIDs.
- Verify no pre-existing partitions or signatures. Use blkid or similar via [run_cmd_capture](src/util/mod.rs:1).
- After partition creation, run udev settle via [udev_settle](src/util/mod.rs:1).
- Verify no pre-existing partitions or signatures. Use blkid or similar via [run_cmd_capture](../src/util/mod.rs:1).
- After partition creation, run udev settle via [udev_settle](../src/util/mod.rs:1).
Application
- Utilize sgdisk helpers in [apply_partitions](src/partition/plan.rs:1).
- Utilize sgdisk helpers in [apply_partitions](../src/partition/plan.rs:1).
---
@@ -199,34 +202,38 @@ Application
Kinds
- Vfat for ESP, label ZOSBOOT.
- Btrfs for data on single and dual_independent.
- Bcachefs for ssd_hdd_bcachefs (SSD cache, HDD backing).
- Btrfs for data in btrfs_single, dual_independent, and btrfs_raid1 (with RAID1 profile).
- Bcachefs for data in bcachefs_single, ssd_hdd_bcachefs (SSD cache + HDD backing), and bcachefs-2copy (multi-device).
- All data filesystems use label ZOSDATA.
Defaults
- btrfs: compression zstd:3, raid_profile none unless explicitly set to raid1 in btrfs_raid1 mode.
- bcachefs: cache_mode promote, compression zstd, checksum crc32c.
- btrfs: compression zstd:3, raid_profile none unless explicitly set; for btrfs_raid1 use -m raid1 -d raid1.
- bcachefs: cache_mode promote, compression zstd, checksum crc32c; for bcachefs-2copy use `--replicas=2` (data and metadata).
- vfat: ESP label ZOSBOOT.
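The defaults above can be sketched as a pure helper that maps a topology to mkfs argument vectors. This is illustrative only: the enum and function names are hypothetical, and the real mapping lives in `plan_filesystems`/`make_filesystems`.

```rust
// Illustrative sketch of per-topology mkfs flag assembly. `DataFs` and
// `mkfs_args` are hypothetical names, not the repository's API.
#[derive(Clone, Copy, PartialEq)]
pub enum DataFs {
    BtrfsSingle,
    BtrfsRaid1,
    BcachefsSingle,
    Bcachefs2Copy,
}

pub fn mkfs_args(fs: DataFs, devices: &[&str]) -> Vec<String> {
    let mut args: Vec<String> = Vec::new();
    match fs {
        DataFs::BtrfsSingle | DataFs::BtrfsRaid1 => {
            // All data filesystems carry the reserved ZOSDATA label.
            args.extend(["mkfs.btrfs", "-L", "ZOSDATA"].map(String::from));
            if fs == DataFs::BtrfsRaid1 {
                // Mirror both metadata and data across the member devices.
                args.extend(["-m", "raid1", "-d", "raid1"].map(String::from));
            }
        }
        DataFs::BcachefsSingle | DataFs::Bcachefs2Copy => {
            args.extend(["bcachefs", "format"].map(String::from));
            if fs == DataFs::Bcachefs2Copy {
                // Two copies of data and metadata across member devices.
                args.push("--replicas=2".to_string());
            }
            // Label/compression/checksum defaults omitted for brevity.
        }
    }
    args.extend(devices.iter().map(|d| d.to_string()));
    args
}
```

Note that btrfs compression (`zstd:3`) is a mount-time option, not a mkfs flag, so it does not appear here.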
Planning and execution
- Decide mapping of [PartitionResult](src/partition/plan.rs:1) to [FsSpec](src/fs/plan.rs:1) in [plan_filesystems](src/fs/plan.rs:1).
- Create filesystems in [make_filesystems](src/fs/plan.rs:1) through wrapped mkfs tools.
- Capture resulting identifiers (fs uuid, label) in [FsResult](src/fs/plan.rs:1).
- Decide mapping of [PartitionResult](../src/partition/plan.rs:1) to [FsSpec](../src/fs/plan.rs:1) in [plan_filesystems](../src/fs/plan.rs:1).
- Create filesystems in [make_filesystems](../src/fs/plan.rs:1) through wrapped mkfs tools.
- Capture resulting identifiers (fs uuid, label) in [FsResult](../src/fs/plan.rs:1).
---
## 6. Mount scheme and fstab policy
Scheme
- per_uuid under /var/cache: directories named as filesystem UUIDs.
Runtime root mounts (all data filesystems)
- Each data filesystem is root-mounted at `/var/mounts/{UUID}` (runtime only).
- btrfs root mount options: `rw,noatime,subvolid=5`
- bcachefs root mount options: `rw,noatime`
Mount options
- btrfs: ssd when non-rotational underlying device, compress from config, defaults otherwise.
- vfat: defaults, utf8.
Final subvolume/subdir mounts (from the primary data filesystem)
- Create or ensure subvolumes named: `system`, `etc`, `modules`, `vm-meta`
- Mount targets: `/var/cache/system`, `/var/cache/etc`, `/var/cache/modules`, `/var/cache/vm-meta`
- btrfs options: `-o rw,noatime,subvol={name}`
- bcachefs options: `-o rw,noatime,X-mount.subdir={name}`
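The option strings above can be expressed as a small pure helper (hypothetical names; the real planner is `plan_mounts` in src/mount/ops.rs). bcachefs has no `subvol=` option, so the generic util-linux `X-mount.subdir=` option is used to expose a subdirectory instead.

```rust
// Hypothetical sketch of the final subvolume/subdir mount options.
pub enum DataFsKind {
    Btrfs,
    Bcachefs,
}

pub fn subvol_mount_opts(kind: &DataFsKind, name: &str) -> String {
    match kind {
        // btrfs mounts a named subvolume directly.
        DataFsKind::Btrfs => format!("rw,noatime,subvol={name}"),
        // bcachefs: mount the root, then bind the subdirectory (util-linux).
        DataFsKind::Bcachefs => format!("rw,noatime,X-mount.subdir={name}"),
    }
}

/// Target directory for a final mount, e.g. /var/cache/system.
pub fn subvol_target(name: &str) -> String {
    format!("/var/cache/{name}")
}
```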
fstab
fstab policy
- Disabled by default.
- When enabled, [maybe_write_fstab](src/mount/ops.rs:1) writes deterministic entries sorted by target path.
- When enabled, [maybe_write_fstab](../src/mount/ops.rs:1) writes only the four final subvolume/subdir entries using `UUID=` sources, in deterministic target order. Root mounts under `/var/mounts` are excluded.
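The deterministic-output rule can be sketched as follows (illustrative only; the real writer is `maybe_write_fstab`). Entries use `UUID=` sources and are sorted by target path before emission:

```rust
// Hypothetical sketch of deterministic fstab line generation for the
// four final mounts. `FinalMount` and `fstab_lines` are illustrative names.
pub struct FinalMount {
    pub fs_uuid: String,
    pub target: String,  // e.g. /var/cache/system
    pub fstype: String,  // "btrfs" or "bcachefs"
    pub options: String, // e.g. rw,noatime,subvol=system
}

pub fn fstab_lines(mut mounts: Vec<FinalMount>) -> Vec<String> {
    // Deterministic output: sort by target path before formatting.
    mounts.sort_by(|a, b| a.target.cmp(&b.target));
    mounts
        .iter()
        .map(|m| format!("UUID={} {} {} {} 0 0", m.fs_uuid, m.target, m.fstype, m.options))
        .collect()
}
```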
---
@@ -235,20 +242,23 @@ fstab
Signals for already-provisioned system
- Expected GPT names found: zosboot, zosdata, and zoscache when applicable.
- Filesystems with labels ZOSBOOT for ESP and ZOSDATA for all data filesystems.
- When consistent with selected topology, [detect_existing_state](src/idempotency/mod.rs:1) returns a StateReport and orchestrator exits success without changes.
- When consistent with selected topology, [detect_existing_state](../src/idempotency/mod.rs:1) returns a StateReport and orchestrator exits success without changes.
Disk emptiness
- [is_empty_disk](src/idempotency/mod.rs:1) checks for absence of partitions and FS signatures before any modification.
- [is_empty_disk](../src/idempotency/mod.rs:1) checks for absence of partitions and FS signatures before any modification.
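A minimal sketch of the label-based signal, assuming a flat list of discovered filesystem labels (the real `detect_existing_state` additionally verifies GPT names and consistency with the selected topology):

```rust
// Illustrative only: the "already provisioned" label signal.
// The real detection also checks GPT names (zosboot/zosdata/zoscache).
pub fn labels_look_provisioned(fs_labels: &[&str]) -> bool {
    let has_boot = fs_labels.iter().any(|l| *l == "ZOSBOOT");
    let has_data = fs_labels.iter().any(|l| *l == "ZOSDATA");
    // ESP carries ZOSBOOT; every data filesystem carries ZOSDATA.
    has_boot && has_data
}
```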
---
## 8. CLI flags and help text outline
Flags mirrored by [struct Cli](src/cli/args.rs:1) parsed via [from_args](src/cli/args.rs:1)
Flags mirrored by [struct Cli](../src/cli/args.rs:1) parsed via [from_args](../src/cli/args.rs:1)
- --config PATH
- --log-level LEVEL error | warn | info | debug
- --log-to-file
- --fstab enable fstab generation
- --show print preview JSON to stdout (non-destructive)
- --report PATH write preview JSON to file (non-destructive)
- --apply perform partitioning, filesystem creation, and mounts (DESTRUCTIVE)
- --force present but returns unimplemented error
Kernel cmdline
@@ -257,7 +267,7 @@ Kernel cmdline
Help text sections
- NAME, SYNOPSIS, DESCRIPTION
- CONFIG PRECEDENCE
- TOPOLOGIES: single, dual_independent, ssd_hdd_bcachefs, btrfs_raid1
- TOPOLOGIES: btrfs_single, bcachefs_single, dual_independent, bcachefs-2copy, ssd_hdd_bcachefs, btrfs_raid1
- SAFETY AND IDEMPOTENCY
- REPORTS
- EXIT CODES: 0 success or already_provisioned, non-zero on error
@@ -267,9 +277,10 @@ Help text sections
## 9. Integration testing plan (QEMU KVM)
Scenarios to scaffold in [tests/](tests/)
- Single disk 40 GiB virtio: validates single topology end-to-end smoke.
- Dual NVMe 40 GiB each: validates dual_independent topology.
- SSD NVMe + HDD virtio: validates ssd_hdd_bcachefs topology.
- Single disk 40 GiB virtio: validates btrfs_single topology end-to-end smoke.
- Dual NVMe 40 GiB each: validates dual_independent topology (independent btrfs per disk).
- SSD NVMe + HDD virtio: validates ssd_hdd_bcachefs topology (bcachefs with SSD cache/promote, HDD backing).
- Three disks: validates bcachefs-2copy across data partitions using `--replicas=2`.
- Negative: no eligible disks, or non-empty disk should abort.
Test strategy
@@ -281,7 +292,7 @@ Test strategy
Artifacts to validate
- Presence of expected partition GPT names.
- Filesystems created with correct labels.
- Mountpoints under /var/cache/<UUID> when running in a VM.
- Runtime root mounts under `/var/mounts/{UUID}` and final subvolume targets at `/var/cache/{system,etc,modules,vm-meta}`.
- JSON report validates against v1 schema.
---


@@ -47,14 +47,14 @@ Consequences
Implementation Notes
- Region markers have been added to key modules:
- [src/config/loader.rs](src/config/loader.rs)
- [src/orchestrator/run.rs](src/orchestrator/run.rs)
- [src/cli/args.rs](src/cli/args.rs)
- [src/device/discovery.rs](src/device/discovery.rs)
- [src/partition/plan.rs](src/partition/plan.rs)
- [src/fs/plan.rs](src/fs/plan.rs)
- [src/mount/ops.rs](src/mount/ops.rs)
- [src/report/state.rs](src/report/state.rs)
- [src/config/loader.rs](../src/config/loader.rs)
- [src/orchestrator/run.rs](../src/orchestrator/run.rs)
- [src/cli/args.rs](../src/cli/args.rs)
- [src/device/discovery.rs](../src/device/discovery.rs)
- [src/partition/plan.rs](../src/partition/plan.rs)
- [src/fs/plan.rs](../src/fs/plan.rs)
- [src/mount/ops.rs](../src/mount/ops.rs)
- [src/report/state.rs](../src/report/state.rs)
- Remaining modules will follow the same pattern as needed (e.g., util, idempotency, main/lib if helpful).
Related Documents


@@ -0,0 +1,109 @@
# ADR 0002: Defaults-Only Configuration; Remove External YAML Config
Status
- Accepted
- Date: 2025-10-06
Context
- Running from initramfs at first boot provides no reliable access to an on-disk configuration file (e.g., /etc/zosstorage/config.yaml). An external file cannot be assumed to exist or be mounted.
- The previous design added precedence and merge complexity across file, CLI, and kernel cmdline as documented in [docs/SCHEMA.md](../SCHEMA.md) and implemented via [fn load_and_merge()](../../src/config/loader.rs:1), increasing maintenance burden and risks of drift.
- YAML introduces misconfiguration risk in early boot, adds I/O, and complicates idempotency guarantees without meaningful benefits for the intended minimal-first initializer.
- The desired model is to ship with sane built-in defaults, selected automatically from the detected hardware topology; an optional kernel cmdline key may override only the topology choice for VM/lab scenarios.
Decision
- Remove all dependency on an on-disk configuration file:
- Do not read /etc/zosstorage/config.yaml or any file-based config.
- Deprecate and ignore repository-local config files for runtime (e.g., config/zosstorage.yaml). The example file [config/zosstorage.example.yaml](../../config/zosstorage.example.yaml) remains as historical reference only and may be removed later.
- Deprecate the --config CLI flag in [struct Cli](../../src/cli/args.rs:1). If present, emit a deprecation warning and ignore it.
- Retain operational CLI flags and logging controls for usability:
- --apply, --show, --report PATH, --fstab, --log-level LEVEL, --log-to-file
- Replace the prior file/CLI/kernel precedence with a defaults-only policy plus a single optional kernel cmdline override:
- Recognized key: zosstorage.topology=VALUE
- The key may override only the topology selection; all other settings use built-in defaults.
- Topology defaults and override policy:
- 1 eligible disk:
- Default: btrfs_single
- Allowed cmdline overrides: btrfs_single, bcachefs_single
- 2 eligible disks:
- Default: dual_independent
- Allowed cmdline overrides: dual_independent, ssd_hdd_bcachefs, btrfs_raid1, bcachefs-2copy
- >2 eligible disks:
- Default: btrfs_raid1
- Allowed cmdline overrides: btrfs_raid1, bcachefs-2copy
- Accept both snake_case and hyphenated forms for VALUE; canonical for two-copy bcachefs is bcachefs-2copy; normalize to [enum Topology](../../src/types.rs:1):
- btrfs_single | btrfs-single
- bcachefs_single | bcachefs-single
- dual_independent | dual-independent
- ssd_hdd_bcachefs | ssd-hdd-bcachefs
- btrfs_raid1 | btrfs-raid1
- bcachefs-2copy
- Kernel cmdline parsing beyond topology is deferred; future extensions for VM workflows may be proposed separately.
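The disk-count policy above can be sketched as two pure functions (hypothetical `Topo`, `default_topology`, and `override_allowed` names; the real loader normalizes values via `parse_topology_token` into `enum Topology`):

```rust
// Illustrative sketch of the defaults-by-disk-count policy in ADR-0002.
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum Topo {
    BtrfsSingle,
    BcachefsSingle,
    DualIndependent,
    SsdHddBcachefs,
    BtrfsRaid1,
    Bcachefs2Copy,
}

/// Default topology chosen from the number of eligible disks.
pub fn default_topology(eligible_disks: usize) -> Option<Topo> {
    match eligible_disks {
        0 => None, // "no eligible disks" is a distinct error elsewhere
        1 => Some(Topo::BtrfsSingle),
        2 => Some(Topo::DualIndependent),
        _ => Some(Topo::BtrfsRaid1),
    }
}

/// Whether a cmdline override is permitted for the detected disk count.
pub fn override_allowed(eligible_disks: usize, t: Topo) -> bool {
    match eligible_disks {
        1 => matches!(t, Topo::BtrfsSingle | Topo::BcachefsSingle),
        2 => matches!(
            t,
            Topo::DualIndependent | Topo::SsdHddBcachefs | Topo::BtrfsRaid1 | Topo::Bcachefs2Copy
        ),
        n if n > 2 => matches!(t, Topo::BtrfsRaid1 | Topo::Bcachefs2Copy),
        _ => false,
    }
}
```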
Rationale
- Eliminates unreachable configuration paths at first boot and simplifies the mental model.
- Reduces maintenance overhead by removing schema and precedence logic.
- Minimizes early-boot I/O and failure modes while preserving a targeted override for lab/VMs.
- Keeps the tool safe-by-default and fully idempotent without depending on external files.
Consequences
- Documentation:
- Mark [docs/SCHEMA.md](../SCHEMA.md) as deprecated for runtime behavior; retain only as historical reference.
- Update [docs/ARCHITECTURE.md](../ARCHITECTURE.md) and [docs/SPECS.md](../SPECS.md) to reflect defaults-only configuration.
- Update [docs/API.md](../API.md) and [docs/API-SKELETONS.md](../API-SKELETONS.md) where they reference file-based config.
- CLI:
- [struct Cli](../../src/cli/args.rs:1) keeps operational flags; --config becomes a no-op with a deprecation warning.
- Code:
- Replace [fn load_and_merge()](../../src/config/loader.rs:1) with a minimal loader that:
- Builds a [struct Config](../../src/types.rs:1) entirely from baked-in defaults.
- Reads /proc/cmdline to optionally parse zosstorage.topology and normalize to [enum Topology](../../src/types.rs:1).
- Removes YAML parsing, file reads, and merge logic.
- Tests:
- Remove tests that depend on external YAML; add tests for cmdline override normalization and disk-count defaults.
Defaults (authoritative)
- Partitioning:
- GPT only, 1 MiB alignment, BIOS boot 1 MiB first unless UEFI detected via [fn is_efi_boot()](../../src/util/mod.rs:1).
- ESP 512 MiB labeled ZOSBOOT (GPT name: zosboot), data uses GPT name zosdata.
- Filesystems:
- ESP: vfat labeled ZOSBOOT
- Data: label ZOSDATA
- Backend per topology (btrfs for btrfs_*; bcachefs for ssd_hdd_bcachefs and bcachefs-2copy)
- Mount scheme:
- Root-mount all data filesystems under /var/mounts/{UUID}; final subvolume/subdir mounts from the primary data FS to /var/cache/{system,etc,modules,vm-meta}; fstab remains optional.
- Idempotency:
- Unchanged: already-provisioned signals exit success-without-changes via [fn detect_existing_state()](../../src/idempotency/mod.rs:1).
Implementation Plan
1) Introduce a minimal defaults loader in [src/config/loader.rs](../../src/config/loader.rs:1):
- new internal fn parse_topology_from_cmdline() -> Option<Topology>
- new internal fn normalize_topology(s: &str) -> Option<Topology>
- refactor load to construct Config from constants + optional topology override
2) CLI:
- Emit deprecation warning when --config is provided; ignore its value.
3) Docs:
- Add deprecation banner to [docs/SCHEMA.md](../SCHEMA.md).
- Adjust [README.md](../../README.md) to describe defaults and the zosstorage.topology override.
4) Tests:
- Add unit tests for normalization and disk-count policy; remove YAML-based tests.
Backward Compatibility
- External YAML configuration is no longer supported at runtime.
- Kernel cmdline key zosstorage.config= is removed. Only zosstorage.topology remains recognized.
- The JSON report, labels, GPT names, and mount behavior remain unchanged.
Security and Safety
- By eliminating external configuration input, we reduce attack surface and misconfiguration risk in early boot.
- The emptiness and idempotency checks continue to gate destructive operations.
Open Items
- Decide whether to accept additional synonyms (e.g., “bcachefs-raid1”) and map them to existing [enum Topology](../../src/types.rs:1) variants; default is to reject unknown values with a clear error.
- Potential future kernel cmdline keys (e.g., logging level) may be explored via a separate ADR.
Links
- Architecture: [docs/ARCHITECTURE.md](../ARCHITECTURE.md)
- API Index: [docs/API-SKELETONS.md](../API-SKELETONS.md)
- Specs: [docs/SPECS.md](../SPECS.md)
- CLI: [src/cli/args.rs](../../src/cli/args.rs)
- Config loader: [src/config/loader.rs](../../src/config/loader.rs)
- Types: [src/types.rs](../../src/types.rs)
- Util: [src/util/mod.rs](../../src/util/mod.rs)

docs/callgraph.html (new file, 2932 lines; diff suppressed because it is too large)


@@ -51,34 +51,15 @@ impl std::fmt::Display for LogLevelArg {
}
}
/// Topology argument (maps to config Topology with snake_case semantics).
#[derive(Debug, Clone, Copy, ValueEnum)]
#[value(rename_all = "kebab_case")]
pub enum TopologyArg {
Single,
DualIndependent,
SsdHddBcachefs,
BtrfsRaid1,
}
impl std::fmt::Display for TopologyArg {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let s = match self {
TopologyArg::Single => "single",
TopologyArg::DualIndependent => "dual_independent",
TopologyArg::SsdHddBcachefs => "ssd_hdd_bcachefs",
TopologyArg::BtrfsRaid1 => "btrfs_raid1",
};
f.write_str(s)
}
}
//// Using crate::types::Topology (ValueEnum) directly for CLI parsing to avoid duplication.
// TopologyArg enum removed; CLI field uses crate::types::Topology
/// zosstorage - one-shot disk initializer for initramfs.
#[derive(Debug, Parser)]
#[command(name = "zosstorage", disable_help_subcommand = true)]
pub struct Cli {
/// Path to YAML configuration (mirrors kernel cmdline key 'zosstorage.config=')
#[arg(short = 'c', long = "config")]
/// DEPRECATED: external YAML configuration is not used at runtime (ADR-0002). Ignored with a warning.
#[arg(short = 'c', long = "config", hide = true)]
pub config: Option<String>,
/// Log level: error, warn, info, debug
@@ -93,29 +74,41 @@ pub struct Cli {
#[arg(short = 's', long = "fstab", default_value_t = false)]
pub fstab: bool,
/// Select topology (overrides config topology)
/// Select topology (CLI has precedence over kernel cmdline)
#[arg(short = 't', long = "topology", value_enum)]
pub topology: Option<TopologyArg>,
pub topology: Option<crate::types::Topology>,
/// Present but non-functional; returns unimplemented error
#[arg(short = 'f', long = "force")]
pub force: bool,
/// Allow removable devices (e.g., USB sticks) to be considered during discovery
/// Overrides config.device_selection.allow_removable when provided
/// Include removable devices (e.g., USB sticks) during discovery (default: false)
#[arg(long = "allow-removable", default_value_t = false)]
pub allow_removable: bool,
/// Attempt to mount existing filesystems based on on-disk headers; no partitioning or mkfs.
/// Non-destructive mounting flow; uses UUID= sources and policy from config.
#[arg(long = "mount-existing", default_value_t = false)]
pub mount_existing: bool,
/// Report current initialized filesystems and mounts without performing changes.
#[arg(long = "report-current", default_value_t = false)]
pub report_current: bool,
/// Print detection and planning summary as JSON to stdout (non-default)
#[arg(long = "show", default_value_t = false)]
pub show: bool,
/// Write detection/planning JSON report to the given path (overrides config.report.path)
/// Write detection/planning JSON report to the given path
#[arg(long = "report")]
pub report: Option<String>,
/// Execute destructive actions (apply mode). When false, runs preview-only.
#[arg(long = "apply", default_value_t = false)]
pub apply: bool,
}
/// Parse CLI arguments (non-interactive; suitable for initramfs).
pub fn from_args() -> Cli {
Cli::parse()
}
}


@@ -10,4 +10,4 @@
pub mod args;
pub use args::*;
pub use args::*;


@@ -1,12 +1,12 @@
//! Configuration loading, merging, and validation (loader).
//!
//! Precedence (highest to lowest):
//! - Kernel cmdline key `zosstorage.config=`
//! - CLI flags
//! - On-disk config file at /etc/zosstorage/config.yaml (if present)
//! - Built-in defaults
//!
//! See [docs/SCHEMA.md](../../docs/SCHEMA.md) for the schema details.
//// Precedence and policy (ADR-0002):
//// - Built-in sane defaults for all settings.
//// - Kernel cmdline key `zosstorage.topology=` (legacy alias `zosstorage.topo=`) may override topology only.
//// - CLI flags control operational toggles only (logging, fstab, allow-removable).
//// - `--config` and `--topology` are deprecated and ignored (warnings emitted).
////
//// Note: [docs/SCHEMA.md](../../docs/SCHEMA.md) is deprecated for runtime configuration; defaults are code-defined.
//
// REGION: API
// api: config::load_and_merge(cli: &crate::cli::Cli) -> crate::Result<crate::config::types::Config>
@@ -26,7 +26,7 @@
// REGION: EXTENSION_POINTS-END
//
// REGION: SAFETY
// safety: precedence enforced (kernel > CLI flags > CLI --config > /etc file > defaults).
// safety: precedence enforced (CLI flags > kernel cmdline > built-in defaults).
// safety: reserved GPT names and labels validated to avoid destructive operations later.
// REGION: SAFETY-END
//
@@ -41,22 +41,21 @@
// REGION: TODO-END
use std::fs;
use std::path::Path;
use crate::{cli::Cli, Error, Result};
use crate::types::*;
use serde_json::{Map, Value};
use base64::Engine as _;
use crate::{Error, Result, cli::Cli};
use serde_json::{Map, Value, json};
use tracing::warn;
/// Load defaults, merge on-disk config, overlay CLI, and finally kernel cmdline key.
//// Build configuration from built-in defaults and minimal operational CLI overlays.
/// Returns a validated Config on success.
///
/// Behavior:
/// - Starts from built-in defaults (documented in docs/SCHEMA.md)
/// - If /etc/zosstorage/config.yaml exists, merge it
/// - If CLI --config is provided, merge that (overrides file defaults)
/// - If kernel cmdline provides `zosstorage.config=...`, merge that last (highest precedence)
/// - Returns Error::Unimplemented when --force is used
/// Behavior (ADR-0002):
/// - Start from built-in defaults (code-defined).
/// - Ignore on-disk YAML and `--config` (deprecated); emit a warning if provided.
/// - CLI `--topology` is supported and has precedence when provided.
/// - If CLI does not provide topology, apply kernel cmdline `zosstorage.topology=` (or legacy `zosstorage.topo=`).
/// - Returns Error::Unimplemented when --force is used.
pub fn load_and_merge(cli: &Cli) -> Result<Config> {
if cli.force {
return Err(Error::Unimplemented("--force flag is not implemented"));
@@ -65,35 +64,23 @@ pub fn load_and_merge(cli: &Cli) -> Result<Config> {
// 1) Start with defaults
let mut merged = to_value(default_config())?;
// 2) Merge default on-disk config if present
let default_cfg_path = "/etc/zosstorage/config.yaml";
if Path::new(default_cfg_path).exists() {
let v = load_yaml_value(default_cfg_path)?;
merge_value(&mut merged, v);
}
// 2) (initramfs) Skipped reading default on-disk config to avoid dependency on /etc.
// If a config is needed, pass it via --config PATH or kernel cmdline `zosstorage.config=...`.
// 3) Merge CLI referenced config (if any)
if let Some(cfg_path) = &cli.config {
let v = load_yaml_value(cfg_path)?;
merge_value(&mut merged, v);
// 3) Deprecated config file flag: warn and ignore
if cli.config.is_some() {
warn!("--config is deprecated and ignored (ADR-0002: defaults-only)");
}
// (no file merge)
// 4) Overlay CLI flags (non-path flags)
let cli_overlay = cli_overlay_value(cli);
merge_value(&mut merged, cli_overlay);
// 5) Merge kernel cmdline referenced config (if any)
if let Some(src) = kernel_cmdline_config_source()? {
match src {
KernelConfigSource::Path(kpath) => {
let v = load_yaml_value(&kpath)?;
merge_value(&mut merged, v);
}
KernelConfigSource::Data(yaml) => {
let v: serde_json::Value = serde_yaml::from_str(&yaml)
.map_err(|e| Error::Config(format!("failed to parse YAML from data: URL: {}", e)))?;
merge_value(&mut merged, v);
}
// 5) Kernel cmdline topology override only when CLI did not provide topology
if cli.topology.is_none() {
if let Some(topo) = kernel_cmdline_topology() {
merge_value(&mut merged, json!({"topology": topo.to_string()}));
}
}
@@ -141,43 +128,50 @@ pub fn validate(cfg: &Config) -> Result<()> {
}
// Reserved GPT names
if cfg.partitioning.esp.gpt_name != "zosboot" {
return Err(Error::Validation(
"partitioning.esp.gpt_name must be 'zosboot'".into(),
));
if cfg.partitioning.esp.gpt_name != GPT_NAME_ZOSBOOT {
return Err(Error::Validation(format!(
"partitioning.esp.gpt_name must be '{}'",
GPT_NAME_ZOSBOOT
)));
}
if cfg.partitioning.data.gpt_name != "zosdata" {
return Err(Error::Validation(
"partitioning.data.gpt_name must be 'zosdata'".into(),
));
if cfg.partitioning.data.gpt_name != GPT_NAME_ZOSDATA {
return Err(Error::Validation(format!(
"partitioning.data.gpt_name must be '{}'",
GPT_NAME_ZOSDATA
)));
}
if cfg.partitioning.cache.gpt_name != "zoscache" {
return Err(Error::Validation(
"partitioning.cache.gpt_name must be 'zoscache'".into(),
));
if cfg.partitioning.cache.gpt_name != GPT_NAME_ZOSCACHE {
return Err(Error::Validation(format!(
"partitioning.cache.gpt_name must be '{}'",
GPT_NAME_ZOSCACHE
)));
}
// BIOS boot name is also 'zosboot' per current assumption
if cfg.partitioning.bios_boot.gpt_name != "zosboot" {
return Err(Error::Validation(
"partitioning.bios_boot.gpt_name must be 'zosboot'".into(),
));
if cfg.partitioning.bios_boot.gpt_name != GPT_NAME_ZOSBOOT {
return Err(Error::Validation(format!(
"partitioning.bios_boot.gpt_name must be '{}'",
GPT_NAME_ZOSBOOT
)));
}
// Reserved filesystem labels
if cfg.filesystem.vfat.label != "ZOSBOOT" {
return Err(Error::Validation(
"filesystem.vfat.label must be 'ZOSBOOT'".into(),
));
if cfg.filesystem.vfat.label != LABEL_ZOSBOOT {
return Err(Error::Validation(format!(
"filesystem.vfat.label must be '{}'",
LABEL_ZOSBOOT
)));
}
if cfg.filesystem.btrfs.label != "ZOSDATA" {
return Err(Error::Validation(
"filesystem.btrfs.label must be 'ZOSDATA'".into(),
));
if cfg.filesystem.btrfs.label != LABEL_ZOSDATA {
return Err(Error::Validation(format!(
"filesystem.btrfs.label must be '{}'",
LABEL_ZOSDATA
)));
}
if cfg.filesystem.bcachefs.label != "ZOSDATA" {
return Err(Error::Validation(
"filesystem.bcachefs.label must be 'ZOSDATA'".into(),
));
if cfg.filesystem.bcachefs.label != LABEL_ZOSDATA {
return Err(Error::Validation(format!(
"filesystem.bcachefs.label must be '{}'",
LABEL_ZOSDATA
)));
}
// Mount scheme
@@ -187,12 +181,16 @@ pub fn validate(cfg: &Config) -> Result<()> {
// Topology-specific quick checks (basic for now)
match cfg.topology {
Topology::Single => {} // nothing special
Topology::BtrfsSingle => {} // nothing special
Topology::BcachefsSingle => {}
Topology::DualIndependent => {}
Topology::SsdHddBcachefs => {}
Topology::Bcachefs2Copy => {}
Topology::BtrfsRaid1 => {
// No enforced requirement here beyond presence of two disks at runtime.
if cfg.filesystem.btrfs.raid_profile != "raid1" && cfg.filesystem.btrfs.raid_profile != "none" {
if cfg.filesystem.btrfs.raid_profile != "raid1"
&& cfg.filesystem.btrfs.raid_profile != "none"
{
return Err(Error::Validation(
"filesystem.btrfs.raid_profile must be 'none' or 'raid1'".into(),
));
@@ -214,15 +212,6 @@ fn to_value<T: serde::Serialize>(t: T) -> Result<Value> {
serde_json::to_value(t).map_err(|e| Error::Other(e.into()))
}
fn load_yaml_value(path: &str) -> Result<Value> {
let s = fs::read_to_string(path)
.map_err(|e| Error::Config(format!("failed to read config file {}: {}", path, e)))?;
// Load as generic serde_json::Value for merging flexibility
let v: serde_json::Value = serde_yaml::from_str(&s)
.map_err(|e| Error::Config(format!("failed to parse YAML {}: {}", path, e)))?;
Ok(v)
}
/// Merge b into a in-place:
/// - Objects are merged key-by-key (recursively)
/// - Arrays and scalars replace
@@ -269,64 +258,54 @@ fn cli_overlay_value(cli: &Cli) -> Value {
root.insert("device_selection".into(), Value::Object(device_selection));
}
// topology override via --topology
if let Some(t) = cli.topology {
// topology override via --topology (takes precedence over kernel cmdline)
if let Some(t) = cli.topology.as_ref() {
root.insert("topology".into(), Value::String(t.to_string()));
}
Value::Object(root)
}
enum KernelConfigSource {
Path(String),
/// Raw YAML from a data: URL payload after decoding (if base64-encoded).
Data(String),
}
/// Resolve a config from kernel cmdline key `zosstorage.config=`.
/// Supports:
/// - absolute paths (e.g., /run/zos.yaml)
/// - file:/absolute/path
/// - data:application/x-yaml;base64,BASE64CONTENT
/// Returns Ok(None) when key absent.
fn kernel_cmdline_config_source() -> Result<Option<KernelConfigSource>> {
//// Parse kernel cmdline for topology override.
//// Accepts `zosstorage.topology=` and legacy alias `zosstorage.topo=`.
pub fn kernel_cmdline_topology() -> Option<Topology> {
let cmdline = fs::read_to_string("/proc/cmdline").unwrap_or_default();
for token in cmdline.split_whitespace() {
if let Some(rest) = token.strip_prefix("zosstorage.config=") {
let mut val = rest.to_string();
// Trim surrounding quotes if any
if (val.starts_with('"') && val.ends_with('"')) || (val.starts_with('\'') && val.ends_with('\'')) {
val = val[1..val.len() - 1].to_string();
let mut val_opt = None;
if let Some(v) = token.strip_prefix("zosstorage.topology=") {
val_opt = Some(v);
} else if let Some(v) = token.strip_prefix("zosstorage.topo=") {
val_opt = Some(v);
}
if let Some(mut val) = val_opt {
if (val.starts_with('"') && val.ends_with('"'))
|| (val.starts_with('\'') && val.ends_with('\''))
{
val = &val[1..val.len() - 1];
}
if let Some(path) = val.strip_prefix("file:") {
return Ok(Some(KernelConfigSource::Path(path.to_string())));
let val_norm = val.trim();
if let Some(t) = parse_topology_token(val_norm) {
return Some(t);
}
if let Some(data_url) = val.strip_prefix("data:") {
// data:[<mediatype>][;base64],<data>
// Find comma separating the header and payload
if let Some(idx) = data_url.find(',') {
let (header, payload) = data_url.split_at(idx);
let payload = &payload[1..]; // skip the comma
let is_base64 = header.split(';').any(|seg| seg.eq_ignore_ascii_case("base64"));
let yaml = if is_base64 {
let decoded = base64::engine::general_purpose::STANDARD
.decode(payload.as_bytes())
.map_err(|e| Error::Config(format!("invalid base64 in data: URL: {}", e)))?;
String::from_utf8(decoded)
.map_err(|e| Error::Config(format!("data: URL payload not UTF-8: {}", e)))?
} else {
payload.to_string()
};
return Ok(Some(KernelConfigSource::Data(yaml)));
} else {
return Err(Error::Config("malformed data: URL (missing comma)".into()));
}
}
// Treat as direct path
return Ok(Some(KernelConfigSource::Path(val)));
}
}
Ok(None)
None
}
//// Helper to parse known topology tokens (canonical names only).
//// Note: underscores are normalized to hyphens prior to matching.
fn parse_topology_token(s: &str) -> Option<Topology> {
let k = s.trim().to_ascii_lowercase().replace('_', "-");
match k.as_str() {
"btrfs-single" => Some(Topology::BtrfsSingle),
"bcachefs-single" => Some(Topology::BcachefsSingle),
"dual-independent" => Some(Topology::DualIndependent),
"ssd-hdd-bcachefs" => Some(Topology::SsdHddBcachefs),
// Canonical single notation for two-copy bcachefs topology
"bcachefs-2copy" => Some(Topology::Bcachefs2Copy),
"btrfs-raid1" => Some(Topology::BtrfsRaid1),
_ => None,
}
}
/// Built-in defaults for the entire configuration (schema version 1).
@@ -352,7 +331,7 @@ fn default_config() -> Config {
allow_removable: false,
min_size_gib: 10,
},
topology: Topology::Single,
topology: Topology::DualIndependent,
partitioning: Partitioning {
alignment_mib: 1,
require_empty_disks: true,
@@ -398,4 +377,4 @@ fn default_config() -> Config {
path: "/run/zosstorage/state.json".into(),
},
}
}
}


@@ -11,5 +11,5 @@
pub mod loader;
pub use loader::{load_and_merge, validate};
pub use crate::types::*;
pub use loader::{load_and_merge, validate};

View File

@@ -186,7 +186,10 @@ pub fn discover(filter: &DeviceFilter) -> Result<Vec<Disk>> {
discover_with_provider(&provider, filter)
}
fn discover_with_provider<P: DeviceProvider>(
provider: &P,
filter: &DeviceFilter,
) -> Result<Vec<Disk>> {
let mut candidates = provider.list_block_devices()?;
// Probe properties if provider needs to enrich
for d in &mut candidates {
@@ -210,10 +213,15 @@ fn discover_with_provider<P: DeviceProvider>(provider: &P, filter: &DeviceFilter
.collect();
if filtered.is_empty() {
return Err(Error::Device(
"no eligible disks found after applying filters".to_string(),
));
}
debug!(
"eligible disks: {:?}",
filtered.iter().map(|d| &d.path).collect::<Vec<_>>()
);
Ok(filtered)
}
@@ -259,9 +267,10 @@ fn read_disk_size_bytes(name: &str) -> Result<u64> {
let p = sys_block_path(name).join("size");
let sectors = fs::read_to_string(&p)
.map_err(|e| Error::Device(format!("read {} failed: {}", p.display(), e)))?;
let sectors: u64 = sectors
.trim()
.parse()
.map_err(|e| Error::Device(format!("parse sectors for {} failed: {}", name, e)))?;
Ok(sectors.saturating_mul(512))
}
@@ -287,11 +296,7 @@ fn read_optional_string(p: PathBuf) -> Option<String> {
while s.ends_with('\n') || s.ends_with('\r') {
s.pop();
}
if s.is_empty() { None } else { Some(s) }
}
Err(_) => None,
}
@@ -324,9 +329,27 @@ mod tests {
fn filter_by_size_and_include_exclude() {
let provider = MockProvider {
disks: vec![
Disk {
path: "/dev/sda".into(),
size_bytes: 500 * 1024 * 1024 * 1024,
rotational: true,
model: None,
serial: None,
}, // 500 GiB
Disk {
path: "/dev/nvme0n1".into(),
size_bytes: 128 * 1024 * 1024 * 1024,
rotational: false,
model: None,
serial: None,
}, // 128 GiB
Disk {
path: "/dev/loop0".into(),
size_bytes: 8 * 1024 * 1024 * 1024,
rotational: false,
model: None,
serial: None,
}, // 8 GiB pseudo (but mock provider supplies it)
],
};
@@ -346,7 +369,13 @@ mod tests {
fn no_match_returns_error() {
let provider = MockProvider {
disks: vec![
Disk {
path: "/dev/sdb".into(),
size_bytes: 50 * 1024 * 1024 * 1024,
rotational: true,
model: None,
serial: None,
}, // 50 GiB
],
};
@@ -363,4 +392,4 @@ mod tests {
other => panic!("unexpected error: {:?}", other),
}
}
}

View File

@@ -9,4 +9,4 @@
pub mod discovery;
pub use discovery::*;

View File

@@ -53,4 +53,4 @@ pub enum Error {
}
/// Crate-wide result alias.
pub type Result<T> = std::result::Result<T, Error>;

View File

@@ -9,4 +9,4 @@
pub mod plan;
pub use plan::*;

View File

@@ -4,7 +4,7 @@
// api: fs::FsPlan { specs: Vec<FsSpec> }
// api: fs::FsResult { kind: FsKind, devices: Vec<String>, uuid: String, label: String }
// api: fs::plan_filesystems(parts: &[crate::partition::PartitionResult], cfg: &crate::config::types::Config) -> crate::Result<FsPlan>
// api: fs::make_filesystems(plan: &FsPlan, cfg: &crate::types::Config) -> crate::Result<Vec<FsResult>>
// REGION: API-END
//
// REGION: RESPONSIBILITIES
@@ -21,6 +21,7 @@
// REGION: SAFETY
// safety: must not run mkfs on non-empty or unexpected partitions; assume prior validation enforced.
// safety: ensure labels follow reserved semantics (ZOSBOOT for ESP, ZOSDATA for all data FS).
// safety: mkfs.btrfs uses -f in apply path immediately after partitioning to handle leftover signatures.
// REGION: SAFETY-END
//
// REGION: ERROR_MAPPING
@@ -29,8 +30,8 @@
// REGION: ERROR_MAPPING-END
//
// REGION: TODO
// todo: bcachefs tuning flags mapping from config (compression/checksum/cache_mode) deferred
// todo: add UUID consistency checks across multi-device filesystems
// REGION: TODO-END
//! Filesystem planning and creation for zosstorage.
//!
@@ -41,12 +42,12 @@
//! [fn make_filesystems](plan.rs:1).
use crate::{
Error, Result,
partition::{PartRole, PartitionResult},
types::{Config, Topology},
util::{run_cmd, run_cmd_capture, which_tool},
};
use std::fs;
use tracing::{debug, warn};
/// Filesystem kinds supported by zosstorage.
@@ -95,17 +96,14 @@ pub struct FsResult {
pub label: String,
}
/// Determine which partitions get which filesystem based on topology.
///
/// Rules:
/// - ESP partitions => Vfat with label from cfg.filesystem.vfat.label (reserved "ZOSBOOT")
/// - Data partitions => Btrfs with label cfg.filesystem.btrfs.label ("ZOSDATA"), unless topology SsdHddBcachefs
/// - SsdHddBcachefs => pair one Cache partition (SSD) with one Data partition (HDD) into one Bcachefs FsSpec with devices [cache, data] and label cfg.filesystem.bcachefs.label ("ZOSDATA")
/// - DualIndependent/BtrfsRaid1 => map each Data partition to its own Btrfs FsSpec (raid profile concerns are handled later during mkfs)
pub fn plan_filesystems(parts: &[PartitionResult], cfg: &Config) -> Result<FsPlan> {
let mut specs: Vec<FsSpec> = Vec::new();
// Always map ESP partitions
@@ -120,10 +118,22 @@ pub fn plan_filesystems(
match cfg.topology {
Topology::SsdHddBcachefs => {
// Expect exactly one cache (SSD) and at least one data (HDD). Use the first data for pairing.
let cache = parts
.iter()
.find(|p| matches!(p.role, PartRole::Cache))
.ok_or_else(|| {
Error::Filesystem(
"expected a Cache partition for SsdHddBcachefs topology".to_string(),
)
})?;
let data = parts
.iter()
.find(|p| matches!(p.role, PartRole::Data))
.ok_or_else(|| {
Error::Filesystem(
"expected a Data partition for SsdHddBcachefs topology".to_string(),
)
})?;
specs.push(FsSpec {
kind: FsKind::Bcachefs,
@@ -151,8 +161,42 @@ pub fn plan_filesystems(
label: cfg.filesystem.btrfs.label.clone(),
});
}
Topology::Bcachefs2Copy => {
// Group all Data partitions into a single Bcachefs filesystem across multiple devices (2-copy semantics).
let data_devs: Vec<String> = parts
.iter()
.filter(|p| matches!(p.role, PartRole::Data))
.map(|p| p.device_path.clone())
.collect();
if data_devs.len() < 2 {
return Err(Error::Filesystem(
"Bcachefs2Copy topology requires at least 2 data partitions".to_string(),
));
}
specs.push(FsSpec {
kind: FsKind::Bcachefs,
devices: data_devs,
label: cfg.filesystem.bcachefs.label.clone(),
});
}
Topology::BcachefsSingle => {
// Single-device bcachefs on the sole Data partition.
let data = parts
.iter()
.find(|p| matches!(p.role, PartRole::Data))
.ok_or_else(|| {
Error::Filesystem(
"expected a Data partition for BcachefsSingle topology".to_string(),
)
})?;
specs.push(FsSpec {
kind: FsKind::Bcachefs,
devices: vec![data.device_path.clone()],
label: cfg.filesystem.bcachefs.label.clone(),
});
}
Topology::BtrfsSingle | Topology::DualIndependent => {
// Map Data partition(s) to Btrfs (single device per partition for DualIndependent).
for p in parts.iter().filter(|p| matches!(p.role, PartRole::Data)) {
specs.push(FsSpec {
kind: FsKind::Btrfs,
@@ -164,7 +208,9 @@ pub fn plan_filesystems(
}
if specs.is_empty() {
return Err(Error::Filesystem(
"no filesystems to create from provided partitions".to_string(),
));
}
Ok(FsPlan { specs })
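The Bcachefs2Copy arm above groups every Data partition into one multi-device filesystem and rejects fewer than two. A toy sketch of that rule, with simplified stand-ins for the real `PartRole`/`PartitionResult` types:

```rust
// Toy sketch of the Bcachefs2Copy grouping rule: every Data partition joins
// one multi-device filesystem; fewer than two is an error. Role and the tuple
// layout are simplified stand-ins for the real types.
#[derive(Debug, PartialEq)]
enum Role { Esp, Data, Cache }

fn group_2copy(parts: &[(Role, &str)]) -> Result<Vec<String>, String> {
    let devs: Vec<String> = parts
        .iter()
        .filter(|(role, _)| *role == Role::Data)
        .map(|(_, dev)| dev.to_string())
        .collect();
    if devs.len() < 2 {
        return Err("Bcachefs2Copy topology requires at least 2 data partitions".into());
    }
    Ok(devs)
}

fn main() {
    let parts = [(Role::Esp, "/dev/sda1"), (Role::Data, "/dev/sda2"), (Role::Data, "/dev/sdb2")];
    assert_eq!(group_2copy(&parts), Ok(vec!["/dev/sda2".to_string(), "/dev/sdb2".to_string()]));
    assert!(group_2copy(&[(Role::Data, "/dev/sda2")]).is_err());
    println!("ok");
}
```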
@@ -177,7 +223,7 @@ pub fn plan_filesystems(
/// - This initial implementation applies labels and creates filesystems with minimal flags.
/// - Btrfs RAID profile (e.g., raid1) will be applied in a follow-up by mapping config to mkfs flags.
/// - UUID is captured via blkid -o export on the first device of each spec.
pub fn make_filesystems(plan: &FsPlan, cfg: &Config) -> Result<Vec<FsResult>> {
// Discover required tools up-front
let vfat_tool = which_tool("mkfs.vfat")?;
let btrfs_tool = which_tool("mkfs.btrfs")?;
@@ -185,7 +231,9 @@ pub fn make_filesystems(plan: &FsPlan) -> Result<Vec<FsResult>> {
let blkid_tool = which_tool("blkid")?;
if blkid_tool.is_none() {
return Err(Error::Filesystem(
"blkid not found in PATH; cannot capture filesystem UUIDs".into(),
));
}
let blkid = blkid_tool.unwrap();
@@ -218,10 +266,29 @@ pub fn make_filesystems(plan: &FsPlan) -> Result<Vec<FsResult>> {
return Err(Error::Filesystem("mkfs.btrfs not found in PATH".into()));
};
if spec.devices.is_empty() {
return Err(Error::Filesystem(
"btrfs requires at least one device".into(),
));
}
// mkfs.btrfs -L LABEL [ -m raid1 -d raid1 (when multi-device/raid1) ] dev1 [dev2 ...]
let mut args: Vec<String> = vec![mkfs.clone(), "-L".into(), spec.label.clone()];
// If this Btrfs is multi-device (as planned in BtrfsRaid1 topology),
// set metadata/data profiles to raid1. This keeps plan/apply consistent.
if spec.devices.len() >= 2 {
args.push("-m".into());
args.push("raid1".into());
args.push("-d".into());
args.push("raid1".into());
}
// Note: compression is a mount-time option for btrfs; we will apply it in mount phase.
// Leaving mkfs-time compression unset by design.
// Force formatting in apply path to avoid leftover signatures on freshly created partitions.
// Safe because we just created these partitions in this run.
args.push("-f".into());
args.extend(spec.devices.iter().cloned());
let args_ref: Vec<&str> = args.iter().map(|s| s.as_str()).collect();
run_cmd(&args_ref)?;
@@ -240,11 +307,22 @@ pub fn make_filesystems(plan: &FsPlan) -> Result<Vec<FsResult>> {
let Some(ref mkfs) = bcachefs_tool else {
return Err(Error::Filesystem("bcachefs not found in PATH".into()));
};
if spec.devices.is_empty() {
return Err(Error::Filesystem(
"bcachefs requires at least one device".into(),
));
}
// bcachefs format --label LABEL [--replicas=2] dev1 [dev2 ...]
// Apply replicas policy for Bcachefs2Copy topology (data+metadata replicas = 2)
let mut args: Vec<String> = vec![
mkfs.clone(),
"format".into(),
"--label".into(),
spec.label.clone(),
];
if matches!(cfg.topology, Topology::Bcachefs2Copy) {
args.push("--replicas=2".into());
}
args.extend(spec.devices.iter().cloned());
let args_ref: Vec<&str> = args.iter().map(|s| s.as_str()).collect();
run_cmd(&args_ref)?;
@@ -267,40 +345,137 @@ pub fn make_filesystems(plan: &FsPlan) -> Result<Vec<FsResult>> {
}
fn capture_uuid(blkid: &str, dev: &str) -> Result<String> {
// blkid -o export /dev/...
let out = run_cmd_capture(&[blkid, "-o", "export", dev])?;
let map = parse_blkid_export(&out.stdout);
// Prefer ID_FS_UUID if present, fall back to UUID
if let Some(u) = map.get("ID_FS_UUID") {
return Ok(u.clone());
}
if let Some(u) = map.get("UUID") {
return Ok(u.clone());
}
warn!("blkid did not report UUID for {}", dev);
Err(Error::Filesystem(format!(
"missing UUID in blkid output for {}",
dev
)))
}
/// Minimal parser for blkid -o export KEY=VAL lines.
fn parse_blkid_export(s: &str) -> std::collections::HashMap<String, String> {
let mut map = std::collections::HashMap::new();
for line in s.lines() {
if let Some((k, v)) = line.split_once('=') {
map.insert(k.trim().to_string(), v.trim().to_string());
}
}
map
}
/// Probe existing filesystems on the system and return their identities (kind, uuid, label).
///
/// This inspects /proc/partitions and uses `blkid -o export` on each device to detect:
/// - Data filesystems: Btrfs or Bcachefs with label "ZOSDATA"
/// - ESP filesystems: Vfat with label "ZOSBOOT"
/// Multi-device filesystems (e.g., btrfs) are de-duplicated by UUID.
///
/// Returns:
/// - Vec<FsResult> with at most one entry per filesystem UUID.
pub fn probe_existing_filesystems() -> Result<Vec<FsResult>> {
let Some(blkid) = which_tool("blkid")? else {
return Err(Error::Filesystem(
"blkid not found in PATH; cannot probe existing filesystems".into(),
));
};
let content = fs::read_to_string("/proc/partitions")
.map_err(|e| Error::Filesystem(format!("/proc/partitions read error: {}", e)))?;
let mut results_by_uuid: std::collections::HashMap<String, FsResult> =
std::collections::HashMap::new();
for line in content.lines() {
let line = line.trim();
if line.is_empty() || line.starts_with("major") {
continue;
}
// Format: major minor #blocks name
let parts: Vec<&str> = line.split_whitespace().collect();
if parts.len() < 4 {
continue;
}
let name = parts[3];
// Skip pseudo devices commonly not relevant (loop, ram, zram, fd)
if name.starts_with("loop")
|| name.starts_with("ram")
|| name.starts_with("zram")
|| name.starts_with("fd")
{
continue;
}
let dev_path = format!("/dev/{}", name);
// Probe with blkid -o export; ignore non-zero statuses meaning "nothing found"
let out = match run_cmd_capture(&[blkid.as_str(), "-o", "export", dev_path.as_str()]) {
Ok(o) => o,
Err(Error::Tool { status, .. }) if status != 0 => {
// No recognizable signature; skip
continue;
}
Err(_) => {
// Unexpected failure; skip this device
continue;
}
};
let map = parse_blkid_export(&out.stdout);
let ty = map.get("TYPE").cloned().unwrap_or_default();
let label = map
.get("ID_FS_LABEL")
.cloned()
.or_else(|| map.get("LABEL").cloned())
.unwrap_or_default();
let uuid = map
.get("ID_FS_UUID")
.cloned()
.or_else(|| map.get("UUID").cloned());
let (kind_opt, expected_label) = match ty.as_str() {
"btrfs" => (Some(FsKind::Btrfs), "ZOSDATA"),
"bcachefs" => (Some(FsKind::Bcachefs), "ZOSDATA"),
"vfat" => (Some(FsKind::Vfat), "ZOSBOOT"),
_ => (None, ""),
};
if let (Some(kind), Some(u)) = (kind_opt, uuid) {
// Enforce reserved label semantics
if !expected_label.is_empty() && label != expected_label {
continue;
}
// Deduplicate multi-device filesystems by UUID; record first-seen device
results_by_uuid.entry(u.clone()).or_insert(FsResult {
kind,
devices: vec![dev_path.clone()],
uuid: u,
label: label.clone(),
});
}
}
Ok(results_by_uuid.into_values().collect())
}
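The de-duplication above relies on every member device of a multi-device filesystem reporting the same UUID, with the first-seen device winning via `entry().or_insert(...)`. A standalone sketch of that pattern:

```rust
use std::collections::HashMap;

// Sketch of the UUID de-duplication: for a multi-device filesystem every
// member device reports the same UUID, and the first-seen device wins.
fn dedup_by_uuid(probes: &[(&str, &str)]) -> HashMap<String, String> {
    let mut by_uuid: HashMap<String, String> = HashMap::new();
    for (uuid, dev) in probes {
        by_uuid.entry(uuid.to_string()).or_insert_with(|| dev.to_string());
    }
    by_uuid
}

fn main() {
    let probes = [("abcd-1234", "/dev/sda2"), ("abcd-1234", "/dev/sdb2"), ("ffff-0000", "/dev/sdc1")];
    let m = dedup_by_uuid(&probes);
    assert_eq!(m.len(), 2);
    assert_eq!(m["abcd-1234"], "/dev/sda2");
    println!("ok");
}
```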
#[cfg(test)]
mod tests_parse {
use super::parse_blkid_export;
#[test]
fn parse_export_ok() {
let s = "ID_FS_UUID=abcd-1234\nUUID=abcd-1234\nTYPE=btrfs\n";
let m = parse_blkid_export(s);
assert_eq!(m.get("ID_FS_UUID").unwrap(), "abcd-1234");
assert_eq!(m.get("TYPE").unwrap(), "btrfs");
}
}

View File

@@ -28,14 +28,14 @@
//! disks are empty before making any destructive changes.
use crate::{
Error, Result,
device::Disk,
report::{REPORT_VERSION, StateReport},
util::{run_cmd_capture, which_tool},
};
use humantime::format_rfc3339;
use serde_json::json;
use std::{collections::HashMap, fs, path::Path};
use tracing::{debug, warn};
/// Return existing state if system is already provisioned; otherwise None.
@@ -155,7 +155,10 @@ pub fn is_empty_disk(disk: &Disk) -> Result<bool> {
// Probe with blkid -p
let Some(blkid) = which_tool("blkid")? else {
warn!(
"blkid not found; conservatively treating {} as not empty",
disk.path
);
return Ok(false);
};
@@ -237,7 +240,11 @@ fn is_partition_of(base: &str, name: &str) -> bool {
if name == base {
return false;
}
let ends_with_digit = base
.chars()
.last()
.map(|c| c.is_ascii_digit())
.unwrap_or(false);
if ends_with_digit {
// nvme0n1 -> nvme0n1p1
if name.starts_with(base) {
@@ -281,4 +288,4 @@ mod tests {
assert!(!is_partition_of("nvme0n1", "nvme0n1"));
assert!(!is_partition_of("nvme0n1", "nvme0n2p1"));
}
}

View File

@@ -1,20 +1,20 @@
//! Crate root for zosstorage: one-shot disk provisioning utility for initramfs.
pub mod cli;
pub mod config;
pub mod device;
pub mod errors;
pub mod fs;
pub mod idempotency;
pub mod logging;
pub mod mount;
pub mod orchestrator;
pub mod partition;
pub mod report;
pub mod types; // top-level types (moved from config/types.rs for visibility)
pub mod util;
pub use errors::{Error, Result};
/// Crate version string from Cargo.
pub const VERSION: &str = env!("CARGO_PKG_VERSION");

View File

@@ -36,10 +36,10 @@ use std::fs::OpenOptions;
use std::io::{self};
use std::sync::OnceLock;
use tracing::Level;
use tracing_subscriber::filter::LevelFilter;
use tracing_subscriber::fmt;
use tracing_subscriber::prelude::*;
use tracing_subscriber::registry::Registry;
use tracing_subscriber::util::SubscriberInitExt;
/// Logging options resolved from CLI and/or config.
@@ -116,21 +116,27 @@ pub fn init_logging(opts: &LogOptions) -> Result<()> {
.with(stderr_layer)
.with(file_layer)
.try_init()
.map_err(|e| {
crate::Error::Other(anyhow::anyhow!("failed to set global logger: {}", e))
})?;
} else {
// Fall back to stderr-only if file cannot be opened
Registry::default()
.with(stderr_layer)
.try_init()
.map_err(|e| {
crate::Error::Other(anyhow::anyhow!("failed to set global logger: {}", e))
})?;
}
} else {
Registry::default()
.with(stderr_layer)
.try_init()
.map_err(|e| {
crate::Error::Other(anyhow::anyhow!("failed to set global logger: {}", e))
})?;
}
let _ = INIT_GUARD.set(());
Ok(())
}

View File

@@ -51,6 +51,13 @@ fn real_main() -> Result<()> {
let ctx = orchestrator::Context::new(cfg, log_opts)
.with_show(cli.show)
.with_apply(cli.apply)
.with_mount_existing(cli.mount_existing)
.with_report_current(cli.report_current)
.with_report_path(cli.report.clone())
.with_topology_from_cli(cli.topology.is_some())
.with_topology_from_cmdline(
config::loader::kernel_cmdline_topology().is_some() && cli.topology.is_none(),
);
orchestrator::run(&ctx)
}

View File

@@ -9,4 +9,4 @@
pub mod ops;
pub use ops::*;

View File

@@ -1,85 +1,548 @@
// REGION: API — one-liners for plan_mounts/apply_mounts/maybe_write_fstab and structs
// api: mount::MountPlan { root_mounts: Vec<PlannedMount>, subvol_mounts: Vec<PlannedSubvolMount>, primary_uuid: Option<String> }
// api: mount::MountResult { source: String, target: String, fstype: String, options: String }
// api: mount::plan_mounts(fs_results: &[crate::fs::FsResult], cfg: &crate::types::Config) -> crate::Result<MountPlan>
// api: mount::apply_mounts(plan: &MountPlan) -> crate::Result<Vec<MountResult>>
// api: mount::maybe_write_fstab(mounts: &[MountResult], cfg: &crate::types::Config) -> crate::Result<()>
// REGION: API-END
//
// REGION: RESPONSIBILITIES
// - Implement mount phase only: plan root mounts under /var/mounts/{UUID} for data, mount ESP at /boot, ensure/plan subvols, and mount subvols to /var/cache/*.
// - Use UUID= sources, deterministic primary selection (first FsResult) for dual_independent.
// - Generate fstab entries covering runtime roots (/var/mounts/{UUID}, /boot when present) followed by the four subvol targets.
// REGION: RESPONSIBILITIES-END
//
// REGION: EXTENSION_POINTS
// ext: support custom mount scheme mapping beyond per-UUID.
// ext: add configurable mount options per filesystem kind via Config.
// REGION: EXTENSION_POINTS-END
//
// REGION: SAFETY
// - Mount ESP (VFAT) read-write at /boot once; data roots use subvolid=5 (btrfs) or plain (bcachefs).
// - Create-if-missing subvolumes prior to subvol mounts; ensure directories exist.
// - Always use UUID= sources; no device paths.
// - Bcachefs subvolume mounts use option key 'X-mount.subdir={name}' (not 'subvol=').
// REGION: SAFETY-END
//
// REGION: ERROR_MAPPING
// - External tool failures map to Error::Tool via util::run_cmd/run_cmd_capture.
// - Missing required tools map to Error::Mount with clear explanation.
// REGION: ERROR_MAPPING-END
//
// REGION: TODO
// - Defer compression/SSD options; later map from Config into mount options.
// - Consider validating tool presence up-front for clearer early errors.
// REGION: TODO-END
//! Mount planning and application.
//!
//! Translates filesystem results into mount targets (data roots under /var/mounts/<UUID>,
//! subvolumes under /var/cache/*) and applies mounts via external mount tooling.
//!
//! See [fn plan_mounts()](src/mount/ops.rs:1), [fn apply_mounts()](src/mount/ops.rs:1),
//! and [fn maybe_write_fstab()](src/mount/ops.rs:1).
#![allow(dead_code)]
use crate::{
Error, Result,
fs::{FsKind, FsResult},
types::Config,
util::{run_cmd, run_cmd_capture, which_tool},
};
use std::collections::HashMap;
use std::fs::{File, create_dir_all};
use std::io::Write;
use std::path::Path;
use tracing::info;
const ROOT_BASE: &str = "/var/mounts";
const BOOT_TARGET: &str = "/boot";
const TARGET_SYSTEM: &str = "/var/cache/system";
const TARGET_ETC: &str = "/var/cache/etc";
const TARGET_MODULES: &str = "/var/cache/modules";
const TARGET_VM_META: &str = "/var/cache/vm-meta";
const SUBVOLS: &[&str] = &["system", "etc", "modules", "vm-meta"];
struct ExistingMount {
source: String,
fstype: String,
options: String,
}
fn current_mounts() -> HashMap<String, ExistingMount> {
let mut map = HashMap::new();
if let Ok(content) = std::fs::read_to_string("/proc/self/mountinfo") {
for line in content.lines() {
let parts: Vec<&str> = line.split_whitespace().collect();
if parts.len() < 7 {
continue;
}
let target = parts[4].to_string();
let mount_options = parts[5].to_string();
if let Some(idx) = parts.iter().position(|p| *p == "-") {
if idx + 2 < parts.len() {
let fstype = parts[idx + 1].to_string();
let source = parts[idx + 2].to_string();
let super_opts = if idx + 3 < parts.len() {
parts[idx + 3].to_string()
} else {
String::new()
};
let combined_options = if super_opts.is_empty() {
mount_options.clone()
} else {
format!("{mount_options},{super_opts}")
};
map.insert(
target,
ExistingMount {
source,
fstype,
options: combined_options,
},
);
}
}
}
}
map
}
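The mountinfo parsing above depends on the field layout documented in proc(5): field 5 is the mount target, and the fields after the "-" separator are fstype, source, and super-block options. A sketch of just that extraction, using a sample line adapted from the man page:

```rust
// Extract (target, fstype, source) from one /proc/self/mountinfo line.
fn parse_mountinfo_line(line: &str) -> Option<(String, String, String)> {
    let parts: Vec<&str> = line.split_whitespace().collect();
    let target = parts.get(4)?.to_string();
    let sep = parts.iter().position(|p| *p == "-")?;
    let fstype = parts.get(sep + 1)?.to_string();
    let source = parts.get(sep + 2)?.to_string();
    Some((target, fstype, source))
}

fn main() {
    let line = "36 35 98:0 /mnt1 /mnt2 rw,noatime master:1 - ext3 /dev/root rw,errors=continue";
    let (target, fstype, source) = parse_mountinfo_line(line).unwrap();
    assert_eq!(target, "/mnt2");
    assert_eq!(fstype, "ext3");
    assert_eq!(source, "/dev/root");
    println!("ok");
}
```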
fn source_matches_uuid(existing_source: &str, uuid: &str) -> bool {
if existing_source == format!("UUID={}", uuid) {
return true;
}
if let Some(existing_uuid) = existing_source.strip_prefix("UUID=") {
return existing_uuid == uuid;
}
if existing_source.starts_with("/dev/") {
let uuid_path = Path::new("/dev/disk/by-uuid").join(uuid);
if let (Ok(existing_canon), Ok(uuid_canon)) = (
std::fs::canonicalize(existing_source),
std::fs::canonicalize(&uuid_path),
) {
return existing_canon == uuid_canon;
}
}
false
}
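A sketch of the "UUID=" comparison above, covering only the prefix form; the `/dev/disk/by-uuid` canonicalization branch needs a live system and is omitted here:

```rust
// Accept sources of the form "UUID=<uuid>"; anything else does not match.
fn uuid_prefix_matches(existing_source: &str, uuid: &str) -> bool {
    existing_source
        .strip_prefix("UUID=")
        .map_or(false, |u| u == uuid)
}

fn main() {
    assert!(uuid_prefix_matches("UUID=abcd-1234", "abcd-1234"));
    assert!(!uuid_prefix_matches("UUID=abcd-1234", "ffff-0000"));
    assert!(!uuid_prefix_matches("/dev/sda2", "abcd-1234"));
    println!("ok");
}
```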
fn disk_of_device(dev: &str) -> Option<String> {
let path = Path::new(dev);
let name = path.file_name()?.to_str()?;
let mut cutoff = name.len();
while cutoff > 0 && name.as_bytes()[cutoff - 1].is_ascii_digit() {
cutoff -= 1;
}
if cutoff == name.len() {
return Some(dev.to_string());
}
let mut disk = name[..cutoff].to_string();
if disk.ends_with('p') {
disk.pop();
}
let parent = path.parent()?.to_str().unwrap_or("/dev");
Some(format!("{}/{}", parent, disk))
}
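The partition-to-disk normalization above can be exercised standalone, reimplemented here without the `Path` plumbing: strip trailing digits, then a trailing 'p' separator for nvme-style names.

```rust
// "nvme0n1p2" -> "nvme0n1", "sda3" -> "sda"; names without a trailing
// partition number pass through unchanged.
fn disk_of(name: &str) -> String {
    let mut cutoff = name.len();
    while cutoff > 0 && name.as_bytes()[cutoff - 1].is_ascii_digit() {
        cutoff -= 1;
    }
    if cutoff == name.len() {
        return name.to_string(); // no trailing partition number
    }
    let mut disk = name[..cutoff].to_string();
    if disk.ends_with('p') {
        disk.pop(); // drop the nvme-style 'p' separator
    }
    disk
}

fn main() {
    assert_eq!(disk_of("nvme0n1p2"), "nvme0n1");
    assert_eq!(disk_of("sda3"), "sda");
    assert_eq!(disk_of("sda"), "sda");
    println!("ok");
}
```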
#[derive(Debug, Clone)]
pub struct PlannedMount {
pub uuid: String, // UUID string without prefix
pub target: String, // absolute path
pub fstype: String, // "btrfs" | "bcachefs"
pub options: String, // e.g., "rw,noatime,subvolid=5"
}
#[derive(Debug, Clone)]
pub struct PlannedSubvolMount {
pub uuid: String, // UUID of primary FS
pub name: String, // subvol name (system/etc/modules/vm-meta)
pub target: String, // absolute final target
pub fstype: String, // "btrfs" | "bcachefs"
pub options: String, // e.g., "rw,noatime,subvol=system"
}
/// Mount plan per policy.
#[derive(Debug, Clone)]
pub struct MountPlan {
/// Root mounts under /var/mounts/{UUID} for all data filesystems.
pub root_mounts: Vec<PlannedMount>,
/// Four subvol mounts chosen from the primary FS only.
pub subvol_mounts: Vec<PlannedSubvolMount>,
/// Primary UUID selection (only data FS; for multiple pick first in input order).
pub primary_uuid: Option<String>,
}
/// Result of applying a mount (root or subvol).
#[derive(Debug, Clone)]
pub struct MountResult {
/// Source as "UUID=..." (never device paths).
pub source: String,
/// Target directory.
pub target: String,
/// Filesystem type string.
pub fstype: String,
/// Options used for the mount.
pub options: String,
}
fn fstype_str(kind: FsKind) -> &'static str {
match kind {
FsKind::Btrfs => "btrfs",
FsKind::Bcachefs => "bcachefs",
FsKind::Vfat => "vfat",
}
}
/// Build mount plan per policy.
pub fn plan_mounts(fs_results: &[FsResult], _cfg: &Config) -> Result<MountPlan> {
// Identify data filesystems (Btrfs/Bcachefs), ignore ESP (Vfat)
let data: Vec<&FsResult> = fs_results
.iter()
.filter(|r| matches!(r.kind, FsKind::Btrfs | FsKind::Bcachefs))
.collect();
if data.is_empty() {
return Err(Error::Mount(
"no data filesystems to mount (expected Btrfs or Bcachefs)".into(),
));
}
// Root mounts for all data filesystems
let mut root_mounts: Vec<PlannedMount> = Vec::new();
for r in &data {
let uuid = r.uuid.clone();
let fstype = fstype_str(r.kind).to_string();
let target = format!("{}/{}", ROOT_BASE, uuid);
let options = match r.kind {
FsKind::Btrfs => "rw,noatime,subvolid=5".to_string(),
FsKind::Bcachefs => "rw,noatime".to_string(),
FsKind::Vfat => continue,
};
root_mounts.push(PlannedMount {
uuid,
target,
fstype,
options,
});
}
let primary = data[0];
let primary_uuid = Some(primary.uuid.clone());
let primary_disk = primary.devices.first().and_then(|dev| disk_of_device(dev));
let mut chosen_esp: Option<&FsResult> = None;
let mut fallback_esp: Option<&FsResult> = None;
for esp in fs_results.iter().filter(|r| matches!(r.kind, FsKind::Vfat)) {
if fallback_esp.is_none() {
fallback_esp = Some(esp);
}
if let (Some(ref disk), Some(esp_disk)) = (
primary_disk.as_ref(),
esp.devices.first().and_then(|dev| disk_of_device(dev)),
) {
if esp_disk == **disk {
chosen_esp = Some(esp);
break;
}
}
}
if let Some(esp) = chosen_esp.or(fallback_esp) {
root_mounts.push(PlannedMount {
uuid: esp.uuid.clone(),
target: BOOT_TARGET.to_string(),
fstype: fstype_str(esp.kind).to_string(),
options: "rw".to_string(),
});
}
// Subvol mounts only from primary FS
let mut subvol_mounts: Vec<PlannedSubvolMount> = Vec::new();
let fstype = fstype_str(primary.kind).to_string();
// Option key differs per filesystem: btrfs uses subvol=, bcachefs uses X-mount.subdir=
let opt_key = match primary.kind {
FsKind::Btrfs => "subvol=",
FsKind::Bcachefs => "X-mount.subdir=",
FsKind::Vfat => "subvol=", // not used for Vfat (ESP ignored)
};
for name in SUBVOLS {
let target = match *name {
"system" => TARGET_SYSTEM.to_string(),
"etc" => TARGET_ETC.to_string(),
"modules" => TARGET_MODULES.to_string(),
"vm-meta" => TARGET_VM_META.to_string(),
_ => continue,
};
let options = format!("rw,noatime,{}{}", opt_key, name);
subvol_mounts.push(PlannedSubvolMount {
uuid: primary.uuid.clone(),
name: name.to_string(),
target,
fstype: fstype.clone(),
options,
});
}
Ok(MountPlan {
root_mounts,
subvol_mounts,
primary_uuid,
})
}
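For reference, the per-filesystem option key above drives the final mount option string. A minimal standalone sketch of that composition (function name hypothetical, mirroring the logic in `plan_mounts`):

```rust
/// Compose mount options for a planned subvolume mount.
/// Btrfs addresses subvolumes with `subvol=`, while bcachefs relies on
/// the generic `X-mount.subdir=` option understood by mount(8).
fn subvol_options(fstype: &str, name: &str) -> String {
    let opt_key = match fstype {
        "btrfs" => "subvol=",
        "bcachefs" => "X-mount.subdir=",
        other => panic!("unsupported fstype for subvolumes: {}", other),
    };
    format!("rw,noatime,{}{}", opt_key, name)
}
```

So a planned `etc` subvolume on btrfs yields `rw,noatime,subvol=etc`, and on bcachefs `rw,noatime,X-mount.subdir=etc`.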
/// Apply mounts: ensure dirs, mount roots, create subvols if missing, mount subvols.
pub fn apply_mounts(plan: &MountPlan) -> Result<Vec<MountResult>> {
// Tool discovery
let mount_tool = which_tool("mount")?
.ok_or_else(|| Error::Mount("required tool 'mount' not found in PATH".into()))?;
// Ensure target directories exist for root mounts
for pm in &plan.root_mounts {
create_dir_all(&pm.target)
.map_err(|e| Error::Mount(format!("failed to create dir {}: {}", pm.target, e)))?;
}
// Ensure final subvol targets exist
for sm in &plan.subvol_mounts {
create_dir_all(&sm.target)
.map_err(|e| Error::Mount(format!("failed to create dir {}: {}", sm.target, e)))?;
}
let mut results_map: HashMap<String, MountResult> = HashMap::new();
let mut existing_mounts = current_mounts();
// Root mounts
for pm in &plan.root_mounts {
let source = format!("UUID={}", pm.uuid);
if let Some(existing) = existing_mounts.get(pm.target.as_str()) {
if source_matches_uuid(&existing.source, &pm.uuid) {
info!(
"mount::apply_mounts: target {} already mounted; skipping",
pm.target
);
let existing_fstype = existing.fstype.clone();
let existing_options = existing.options.clone();
results_map
.entry(pm.target.clone())
.or_insert_with(|| MountResult {
source: source.clone(),
target: pm.target.clone(),
fstype: existing_fstype,
options: existing_options,
});
continue;
} else {
return Err(Error::Mount(format!(
"target {} already mounted by {} (expected UUID={})",
pm.target, existing.source, pm.uuid
)));
}
}
let args = [
mount_tool.as_str(),
"-t",
pm.fstype.as_str(),
"-o",
pm.options.as_str(),
source.as_str(),
pm.target.as_str(),
];
run_cmd(&args)?;
existing_mounts.insert(
pm.target.clone(),
ExistingMount {
source: source.clone(),
fstype: pm.fstype.clone(),
options: pm.options.clone(),
},
);
results_map.insert(
pm.target.clone(),
MountResult {
source,
target: pm.target.clone(),
fstype: pm.fstype.clone(),
options: pm.options.clone(),
},
);
}
// Subvolume creation (create-if-missing) and mounts for the primary
if let Some(primary_uuid) = &plan.primary_uuid {
// Determine primary fs kind from planned subvols (they all share fstype for primary)
let primary_kind = plan
.subvol_mounts
.get(0)
.map(|s| s.fstype.clone())
.unwrap_or_else(|| "btrfs".to_string());
let root = format!("{}/{}", ROOT_BASE, primary_uuid);
if primary_kind == "btrfs" {
let btrfs_tool = which_tool("btrfs")?
.ok_or_else(|| Error::Mount("required tool 'btrfs' not found in PATH".into()))?;
// List existing subvols under root
let out = run_cmd_capture(&[
btrfs_tool.as_str(),
"subvolume",
"list",
"-o",
root.as_str(),
])?;
for sm in &plan.subvol_mounts {
if &sm.uuid != primary_uuid {
continue;
}
// Check existence by scanning output for " path {name}"
let exists = out
.stdout
.lines()
.any(|l| l.contains(&format!(" path {}", sm.name)));
if !exists {
// Create subvolume
let subvol_path = format!("{}/{}", root, sm.name);
let args = [
btrfs_tool.as_str(),
"subvolume",
"create",
subvol_path.as_str(),
];
run_cmd(&args)?;
}
}
} else if primary_kind == "bcachefs" {
let bcachefs_tool = which_tool("bcachefs")?
.ok_or_else(|| Error::Mount("required tool 'bcachefs' not found in PATH".into()))?;
for sm in &plan.subvol_mounts {
if &sm.uuid != primary_uuid {
continue;
}
let subvol_path = format!("{}/{}", root, sm.name);
if !Path::new(&subvol_path).exists() {
let args = [
bcachefs_tool.as_str(),
"subvolume",
"create",
subvol_path.as_str(),
];
run_cmd(&args)?;
}
}
} else {
return Err(Error::Mount(format!(
"unsupported primary fstype for subvols: {}",
primary_kind
)));
}
}
// Subvol mounts
for sm in &plan.subvol_mounts {
let source = format!("UUID={}", sm.uuid);
if let Some(existing) = existing_mounts.get(sm.target.as_str()) {
if source_matches_uuid(&existing.source, &sm.uuid) {
info!(
"mount::apply_mounts: target {} already mounted; skipping",
sm.target
);
let existing_fstype = existing.fstype.clone();
let existing_options = existing.options.clone();
results_map
.entry(sm.target.clone())
.or_insert_with(|| MountResult {
source: source.clone(),
target: sm.target.clone(),
fstype: existing_fstype,
options: existing_options,
});
continue;
} else {
return Err(Error::Mount(format!(
"target {} already mounted by {} (expected UUID={})",
sm.target, existing.source, sm.uuid
)));
}
}
let args = [
mount_tool.as_str(),
"-t",
sm.fstype.as_str(),
"-o",
sm.options.as_str(),
source.as_str(),
sm.target.as_str(),
];
run_cmd(&args)?;
existing_mounts.insert(
sm.target.clone(),
ExistingMount {
source: source.clone(),
fstype: sm.fstype.clone(),
options: sm.options.clone(),
},
);
results_map.insert(
sm.target.clone(),
MountResult {
source,
target: sm.target.clone(),
fstype: sm.fstype.clone(),
options: sm.options.clone(),
},
);
}
let mut results: Vec<MountResult> = results_map.into_values().collect();
results.sort_by(|a, b| a.target.cmp(&b.target));
Ok(results)
}
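The idempotency checks above hinge on matching an existing mount's source against the expected UUID. The real `source_matches_uuid` helper lives elsewhere in the module; a plausible sketch of its shape (an assumption, not the actual implementation), accepting both the literal `UUID=` form and a resolved by-uuid path:

```rust
/// Hypothetical sketch: does an existing mount's source refer to `uuid`?
/// Accepts "UUID=<uuid>" as passed to mount(8), or a device path resolved
/// through /dev/disk/by-uuid/<uuid>.
fn source_matches_uuid(source: &str, uuid: &str) -> bool {
    source == format!("UUID={}", uuid)
        || source.ends_with(&format!("/by-uuid/{}", uuid))
}
```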
/// Optionally write fstab entries for subvol mounts only (deterministic order).
pub fn maybe_write_fstab(mounts: &[MountResult], cfg: &Config) -> Result<()> {
if !cfg.mount.fstab_enabled {
return Ok(());
}
// Partition mount results into runtime root mounts and final subvolume targets.
let mut root_entries: Vec<&MountResult> = mounts
.iter()
.filter(|m| m.target.starts_with(ROOT_BASE) || m.target == BOOT_TARGET)
.collect();
let wanted = [TARGET_ETC, TARGET_MODULES, TARGET_SYSTEM, TARGET_VM_META];
let mut subvol_entries: Vec<&MountResult> = mounts
.iter()
.filter(|m| wanted.contains(&m.target.as_str()))
.collect();
// Sort by target path ascending to be deterministic (roots before subvols).
root_entries.sort_by(|a, b| a.target.cmp(&b.target));
subvol_entries.sort_by(|a, b| a.target.cmp(&b.target));
// Compose lines: include all root mounts first, followed by the four subvol targets.
let mut lines: Vec<String> = Vec::new();
for m in root_entries.into_iter().chain(subvol_entries.into_iter()) {
// m.source already "UUID=..."
let line = format!("{} {} {} {} 0 0", m.source, m.target, m.fstype, m.options);
lines.push(line);
}
// Atomic write to /etc/fstab
let fstab_path = "/etc/fstab";
let tmp_path = "/etc/fstab.zosstorage.tmp";
if let Some(parent) = Path::new(fstab_path).parent() {
create_dir_all(parent)
.map_err(|e| Error::Mount(format!("failed to create {}: {}", parent.display(), e)))?;
}
{
let mut f = File::create(tmp_path)
.map_err(|e| Error::Mount(format!("failed to create {}: {}", tmp_path, e)))?;
for line in lines {
writeln!(f, "{}", line)
.map_err(|e| Error::Mount(format!("failed to write tmp fstab: {}", e)))?;
}
f.flush()
.map_err(|e| Error::Mount(format!("failed to flush tmp fstab: {}", e)))?;
}
std::fs::rename(tmp_path, fstab_path).map_err(|e| {
Error::Mount(format!(
"failed to replace {} atomically: {}",
fstab_path, e
))
})?;
Ok(())
}
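Each emitted entry uses the classic six-field fstab layout (`source target fstype options dump pass`), with dump and pass fixed to `0 0`. A small sketch of the line formatting, with hypothetical UUID and target values:

```rust
/// Render one fstab line in the same shape maybe_write_fstab emits:
/// "<source> <target> <fstype> <options> 0 0".
fn fstab_line(source: &str, target: &str, fstype: &str, options: &str) -> String {
    format!("{} {} {} {} 0 0", source, target, fstype, options)
}
```

For example, a btrfs root runtime mount would render as `UUID=abcd-1234 /var/mounts/abcd-1234 btrfs rw,noatime,subvolid=5 0 0`.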


@@ -3,4 +3,4 @@
//! Re-exports the concrete implementation from run.rs to avoid duplicating types/functions.
pub mod run;
pub use run::*;


@@ -43,12 +43,13 @@
//! - Report generation and write
use crate::{
device::{DeviceFilter, Disk, discover},
fs as zfs, idempotency,
logging::LogOptions,
partition,
report::StateReport,
types::{Config, Topology},
};
use humantime::format_rfc3339;
use regex::Regex;
@@ -66,8 +67,18 @@ pub struct Context {
pub log: LogOptions,
/// When true, print detection and planning summary to stdout (JSON).
pub show: bool,
/// When true, perform destructive actions (apply mode).
pub apply: bool,
/// When true, attempt to mount existing filesystems based on on-disk headers (non-destructive).
pub mount_existing: bool,
/// When true, emit a report of currently initialized filesystems and mounts (non-destructive).
pub report_current: bool,
/// Optional report path override (when provided by CLI --report).
pub report_path_override: Option<String>,
/// True when topology was provided via CLI (--topology), giving it precedence.
pub topo_from_cli: bool,
/// True when topology was provided via kernel cmdline, giving it precedence if CLI omitted it.
pub topo_from_cmdline: bool,
}
impl Context {
@@ -77,7 +88,12 @@ impl Context {
cfg,
log,
show: false,
apply: false,
mount_existing: false,
report_current: false,
report_path_override: None,
topo_from_cli: false,
topo_from_cmdline: false,
}
}
@@ -93,6 +109,16 @@ impl Context {
self
}
/// Enable or disable apply mode (destructive).
///
/// When set to true (e.g. via `--apply`), orchestrator:
/// - Enforces empty-disk policy (unless disabled in config)
/// - Applies partition plan, then (future) mkfs, mounts, and report
pub fn with_apply(mut self, apply: bool) -> Self {
self.apply = apply;
self
}
/// Override the report output path used by preview mode.
///
/// When provided (e.g. via `--report /path/file.json`), orchestrator:
@@ -104,6 +130,56 @@ impl Context {
self.report_path_override = path;
self
}
/// Enable or disable mount-existing mode (non-destructive).
pub fn with_mount_existing(mut self, mount_existing: bool) -> Self {
self.mount_existing = mount_existing;
self
}
/// Enable or disable reporting of current state (non-destructive).
pub fn with_report_current(mut self, report_current: bool) -> Self {
self.report_current = report_current;
self
}
/// Mark that topology was provided via CLI (--topology).
pub fn with_topology_from_cli(mut self, v: bool) -> Self {
self.topo_from_cli = v;
self
}
/// Mark that topology was provided via kernel cmdline (zosstorage.topology=).
pub fn with_topology_from_cmdline(mut self, v: bool) -> Self {
self.topo_from_cmdline = v;
self
}
}
#[derive(Debug, Clone, Copy)]
enum ProvisioningMode {
Apply,
Preview,
}
#[derive(Debug, Clone, Copy)]
enum AutoDecision {
Apply,
MountExisting,
}
#[derive(Debug)]
struct AutoSelection {
decision: AutoDecision,
fs_results: Option<Vec<zfs::FsResult>>,
state: Option<StateReport>,
}
#[derive(Debug, Clone, Copy)]
enum ExecutionMode {
ReportCurrent,
MountExisting,
Apply,
Preview,
Auto,
}
/// Run the one-shot provisioning flow.
@@ -111,57 +187,352 @@ impl Context {
/// Returns Ok(()) on success and also on success-noop when already provisioned.
/// Any validation or execution failure aborts with an error.
pub fn run(ctx: &Context) -> Result<()> {
info!("orchestrator: starting run()");
let selected_modes =
(ctx.mount_existing as u8) + (ctx.report_current as u8) + (ctx.apply as u8);
if selected_modes > 1 {
return Err(Error::Validation(
"choose only one mode: --mount-existing | --report-current | --apply".into(),
));
}
let preview_requested = ctx.show || ctx.report_path_override.is_some();
let initial_mode = if ctx.report_current {
ExecutionMode::ReportCurrent
} else if ctx.mount_existing {
ExecutionMode::MountExisting
} else if ctx.apply {
ExecutionMode::Apply
} else if preview_requested {
ExecutionMode::Preview
} else {
ExecutionMode::Auto
};
match initial_mode {
ExecutionMode::ReportCurrent => run_report_current(ctx),
ExecutionMode::MountExisting => run_mount_existing(ctx, None, None),
ExecutionMode::Apply => run_provisioning(ctx, ProvisioningMode::Apply, None),
ExecutionMode::Preview => run_provisioning(ctx, ProvisioningMode::Preview, None),
ExecutionMode::Auto => {
let selection = auto_select_mode(ctx)?;
match selection.decision {
AutoDecision::MountExisting => {
run_mount_existing(ctx, selection.fs_results, selection.state)
}
AutoDecision::Apply => {
run_provisioning(ctx, ProvisioningMode::Apply, selection.state)
}
}
}
}
}
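The mode precedence in `run()` above can be summarized as: `--report-current` wins, then `--mount-existing`, then `--apply`, then preview (`--show`/`--report`), falling back to auto-selection. A self-contained sketch of that decision ladder (enum and function names local to this example):

```rust
#[derive(Debug, PartialEq)]
enum Mode {
    ReportCurrent,
    MountExisting,
    Apply,
    Preview,
    Auto,
}

/// Mirror of the CLI mode precedence in run(), after the mutual-exclusion
/// check has rejected combinations of the three explicit mode flags.
fn select_mode(report_current: bool, mount_existing: bool, apply: bool, preview: bool) -> Mode {
    if report_current {
        Mode::ReportCurrent
    } else if mount_existing {
        Mode::MountExisting
    } else if apply {
        Mode::Apply
    } else if preview {
        Mode::Preview
    } else {
        Mode::Auto
    }
}
```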
fn auto_select_mode(ctx: &Context) -> Result<AutoSelection> {
info!("orchestrator: auto-selecting execution mode");
let state = idempotency::detect_existing_state()?;
let fs_results = zfs::probe_existing_filesystems()?;
if let Some(state) = state {
info!("orchestrator: provisioned state detected; attempting mount-existing flow");
return Ok(AutoSelection {
decision: AutoDecision::MountExisting,
fs_results: if fs_results.is_empty() {
None
} else {
Some(fs_results)
},
state: Some(state),
});
}
if !fs_results.is_empty() {
info!(
"orchestrator: detected {} filesystem(s) with reserved labels; selecting mount-existing",
fs_results.len()
);
return Ok(AutoSelection {
decision: AutoDecision::MountExisting,
fs_results: Some(fs_results),
state: None,
});
}
info!(
"orchestrator: no provisioned state or labeled filesystems detected; selecting apply mode (topology={:?})",
ctx.cfg.topology
);
Ok(AutoSelection {
decision: AutoDecision::Apply,
fs_results: None,
state: None,
})
}
fn run_report_current(ctx: &Context) -> Result<()> {
info!("orchestrator: report-current mode");
let fs_results = zfs::probe_existing_filesystems()?;
// Read all mounts, filtering common system/uninteresting ones
let mounts_content = fs::read_to_string("/proc/mounts").unwrap_or_default();
let mounts_json: Vec<serde_json::Value> = mounts_content
.lines()
.filter_map(|line| {
let mut it = line.split_whitespace();
let source = it.next()?;
let target = it.next()?;
let fstype = it.next()?;
let options = it.next().unwrap_or("");
// Skip common pseudo/virtual filesystems and system mounts
if source.starts_with("devtmpfs")
|| source.starts_with("tmpfs")
|| source.starts_with("proc")
|| source.starts_with("sysfs")
|| source.starts_with("cgroup")
|| source.starts_with("bpf")
|| source.starts_with("debugfs")
|| source.starts_with("securityfs")
|| source.starts_with("mqueue")
|| source.starts_with("pstore")
|| source.starts_with("tracefs")
|| source.starts_with("hugetlbfs")
|| source.starts_with("efivarfs")
|| source.starts_with("systemd-1")
|| target.starts_with("/proc")
|| target.starts_with("/sys")
|| target.starts_with("/dev")
|| target.starts_with("/run")
|| target.starts_with("/boot")
|| target.starts_with("/efi")
|| target.starts_with("/boot/efi")
{
return None;
}
// Include zosstorage target mounts and general data mounts
Some(json!({
"source": source,
"target": target,
"fstype": fstype,
"options": options
}))
})
.collect();
// Read partition information from /proc/partitions
let partitions_content = fs::read_to_string("/proc/partitions").unwrap_or_default();
let partitions_json: Vec<serde_json::Value> = partitions_content
.lines()
.filter_map(|line| {
let line = line.trim();
if line.is_empty() || line.starts_with("major") {
return None;
}
let parts: Vec<&str> = line.split_whitespace().collect();
if parts.len() < 4 {
return None;
}
let name = parts[3];
// Skip pseudo devices
if name.starts_with("loop")
|| name.starts_with("ram")
|| name.starts_with("zram")
|| name.starts_with("fd")
|| name.starts_with("dm-")
|| name.starts_with("md")
{
return None;
}
let major: u32 = parts[0].parse().ok()?;
let minor: u32 = parts[1].parse().ok()?;
let size_kb: u64 = parts[2].parse().ok()?;
Some(json!({
"name": name,
"major": major,
"minor": minor,
"size_kb": size_kb,
"size_gib": size_kb / (1024 * 1024)
}))
})
.collect();
let fs_json: Vec<serde_json::Value> = fs_results
.iter()
.map(|r| {
let kind_str = match r.kind {
zfs::FsKind::Vfat => "vfat",
zfs::FsKind::Btrfs => "btrfs",
zfs::FsKind::Bcachefs => "bcachefs",
};
json!({
"kind": kind_str,
"uuid": r.uuid,
"label": r.label,
"devices": r.devices
})
})
.collect();
let now = format_rfc3339(SystemTime::now()).to_string();
let summary = json!({
"version": "v1",
"timestamp": now,
"status": "observed",
"partitions": partitions_json,
"filesystems": fs_json,
"mounts": mounts_json
});
println!("{}", summary);
if let Some(path) = &ctx.report_path_override {
fs::write(path, summary.to_string())
.map_err(|e| Error::Report(format!("failed to write report to {}: {}", path, e)))?;
info!("orchestrator: wrote report-current to {}", path);
}
Ok(())
}
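The `/proc/partitions` scan above relies on the file's fixed four-column layout (`major minor #blocks name`, sizes in 1 KiB blocks). A standalone sketch of the per-line parse, factored out for illustration (the function name is hypothetical):

```rust
/// Parse one /proc/partitions data row ("major minor #blocks name").
/// Returns None for the header row, blank lines, or malformed rows.
fn parse_partition_line(line: &str) -> Option<(u32, u32, u64, String)> {
    let line = line.trim();
    if line.is_empty() || line.starts_with("major") {
        return None;
    }
    let parts: Vec<&str> = line.split_whitespace().collect();
    if parts.len() < 4 {
        return None;
    }
    let major = parts[0].parse().ok()?;
    let minor = parts[1].parse().ok()?;
    let size_kb = parts[2].parse().ok()?;
    Some((major, minor, size_kb, parts[3].to_string()))
}
```

Note the GiB conversion in the report divides `size_kb` by `1024 * 1024`, since 1 GiB is 2^20 KiB.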
fn run_mount_existing(
ctx: &Context,
fs_results_override: Option<Vec<zfs::FsResult>>,
state_hint: Option<StateReport>,
) -> Result<()> {
info!("orchestrator: mount-existing mode");
let fs_results = match fs_results_override {
Some(results) => results,
None => zfs::probe_existing_filesystems()?,
};
if fs_results.is_empty() {
return Err(Error::Mount(
"no existing filesystems with reserved labels (ZOSBOOT/ZOSDATA) were found".into(),
));
}
let mplan = crate::mount::plan_mounts(&fs_results, &ctx.cfg)?;
let mres = crate::mount::apply_mounts(&mplan)?;
crate::mount::maybe_write_fstab(&mres, &ctx.cfg)?;
if ctx.show || ctx.report_path_override.is_some() || ctx.report_current {
let now = format_rfc3339(SystemTime::now()).to_string();
let fs_json: Vec<serde_json::Value> = fs_results
.iter()
.map(|r| {
let kind_str = match r.kind {
zfs::FsKind::Vfat => "vfat",
zfs::FsKind::Btrfs => "btrfs",
zfs::FsKind::Bcachefs => "bcachefs",
};
json!({
"kind": kind_str,
"uuid": r.uuid,
"label": r.label,
"devices": r.devices,
})
})
.collect();
let mounts_json: Vec<serde_json::Value> = mres
.iter()
.map(|m| {
json!({
"source": m.source,
"target": m.target,
"fstype": m.fstype,
"options": m.options,
})
})
.collect();
let mut summary = json!({
"version": "v1",
"timestamp": now,
"status": "mounted_existing",
"filesystems": fs_json,
"mounts": mounts_json,
});
if let Some(state) = state_hint {
if let Ok(state_json) = to_value(&state) {
if let Some(obj) = summary.as_object_mut() {
obj.insert("state".to_string(), state_json);
}
}
}
if ctx.show || ctx.report_current {
println!("{}", summary);
}
if let Some(path) = &ctx.report_path_override {
fs::write(path, summary.to_string())
.map_err(|e| Error::Report(format!("failed to write report to {}: {}", path, e)))?;
info!("orchestrator: wrote mount-existing report to {}", path);
}
}
Ok(())
}
fn run_provisioning(
ctx: &Context,
mode: ProvisioningMode,
state_hint: Option<StateReport>,
) -> Result<()> {
let preview_outputs = ctx.show || ctx.report_path_override.is_some();
let mut state_opt = state_hint;
if state_opt.is_none() {
state_opt = idempotency::detect_existing_state()?;
}
if let Some(state) = state_opt {
info!("orchestrator: already provisioned; ensuring mounts are active");
return run_mount_existing(ctx, None, Some(state));
}
let filter = build_device_filter(&ctx.cfg)?;
let disks = discover(&filter)?;
info!("orchestrator: discovered {} eligible disk(s)", disks.len());
if ctx.cfg.partitioning.require_empty_disks {
if matches!(mode, ProvisioningMode::Apply) {
enforce_empty_disks(&disks)?;
info!("orchestrator: all target disks verified empty");
} else {
warn!(
"orchestrator: preview mode detected (--show/--report); skipping empty-disk enforcement"
);
}
} else if matches!(mode, ProvisioningMode::Apply) {
warn!("orchestrator: require_empty_disks=false; proceeding without emptiness enforcement");
}
let effective_cfg = {
let mut c = ctx.cfg.clone();
if !(ctx.topo_from_cli || ctx.topo_from_cmdline) {
let auto_topo = if disks.len() == 1 {
Topology::BtrfsSingle
} else if disks.len() == 2 {
Topology::DualIndependent
} else {
Topology::BtrfsRaid1
};
if c.topology != auto_topo {
info!("orchestrator: topology auto-selected {:?}", auto_topo);
c.topology = auto_topo;
} else {
info!("orchestrator: using configured topology {:?}", c.topology);
}
} else {
info!("orchestrator: using overridden topology {:?}", c.topology);
}
c
};
let plan = partition::plan_partitions(&disks, &effective_cfg)?;
debug!(
"orchestrator: partition plan ready (alignment={} MiB, disks={})",
plan.alignment_mib,
@@ -171,25 +542,41 @@ pub fn run(ctx: &Context) -> Result<()> {
debug!("plan for {}: {} part(s)", dp.disk.path, dp.parts.len());
}
if matches!(mode, ProvisioningMode::Apply) {
info!("orchestrator: apply mode enabled; applying partition plan");
let part_results = partition::apply_partitions(&plan)?;
info!(
"orchestrator: applied partitions on {} disk(s), total parts created: {}",
plan.disks.len(),
part_results.len()
);
let fs_plan = zfs::plan_filesystems(&part_results, &effective_cfg)?;
info!(
"orchestrator: filesystem plan contains {} spec(s)",
fs_plan.specs.len()
);
let fs_results = zfs::make_filesystems(&fs_plan, &effective_cfg)?;
info!("orchestrator: created {} filesystem(s)", fs_results.len());
let mplan = crate::mount::plan_mounts(&fs_results, &effective_cfg)?;
let mres = crate::mount::apply_mounts(&mplan)?;
crate::mount::maybe_write_fstab(&mres, &effective_cfg)?;
return Ok(());
}
info!(
"orchestrator: pre-flight complete (idempotency checked, devices discovered, plan computed)"
);
if preview_outputs {
let summary = build_summary_json(&disks, &plan, &effective_cfg)?;
if ctx.show {
// Print compact JSON to stdout
println!("{}", summary);
}
if let Some(path) = &ctx.report_path_override {
// Best-effort write (non-atomic for now, pending report::write_report implementation)
fs::write(path, summary.to_string())
.map_err(|e| Error::Report(format!("failed to write report to {}: {}", path, e)))?;
info!("orchestrator: wrote summary report to {}", path);
}
}
@@ -209,15 +596,13 @@ fn build_device_filter(cfg: &Config) -> Result<DeviceFilter> {
let mut exclude = Vec::new();
for pat in &cfg.device_selection.include_patterns {
let re = Regex::new(pat)
.map_err(|e| Error::Validation(format!("invalid include regex '{}': {}", pat, e)))?;
include.push(re);
}
for pat in &cfg.device_selection.exclude_patterns {
let re = Regex::new(pat)
.map_err(|e| Error::Validation(format!("invalid exclude regex '{}': {}", pat, e)))?;
exclude.push(re);
}
@@ -271,7 +656,11 @@ fn role_str(role: partition::PartRole) -> &'static str {
/// - mount: scheme summary and target template (e.g., "/var/cache/{UUID}")
///
/// This function is non-destructive and performs no probing beyond the provided inputs.
fn build_summary_json(
disks: &[Disk],
plan: &partition::PartitionPlan,
cfg: &Config,
) -> Result<serde_json::Value> {
// Disks summary
let disks_json: Vec<serde_json::Value> = disks
.iter()
@@ -307,12 +696,7 @@ fn build_summary_json(disks: &[Disk], plan: &partition::PartitionPlan, cfg: &Con
}
// Decide filesystem kinds and planned mountpoints (template) from plan + cfg.topology
let topo_str = cfg.topology.to_string();
// Count roles across plan to infer filesystems
let mut esp_count = 0usize;
@@ -408,4 +792,4 @@ fn build_summary_json(disks: &[Disk], plan: &partition::PartitionPlan, cfg: &Con
});
Ok(summary)
}


@@ -9,4 +9,4 @@
pub mod plan;
pub use plan::*;


@@ -21,6 +21,7 @@
//
// REGION: SAFETY
// safety: must verify require_empty_disks before any modification.
// safety: when UEFI-booted, suppress creating BIOS boot partition to avoid unnecessary ef02 on UEFI systems.
// safety: must ensure unique partition GUIDs; identical labels are allowed when expected (e.g., ESP ZOSBOOT).
// safety: must call udev settle after partition table writes.
// REGION: SAFETY-END
@@ -42,7 +43,14 @@
//! See [fn plan_partitions](plan.rs:1) and
//! [fn apply_partitions](plan.rs:1).
use crate::{
Error, Result,
device::Disk,
idempotency,
types::{Config, Topology},
util::{is_efi_boot, run_cmd, run_cmd_capture, udev_settle, which_tool},
};
use tracing::{debug, warn};
/// Partition roles supported by zosstorage.
#[derive(Debug, Clone, Copy)]
@@ -109,35 +117,39 @@ pub struct PartitionResult {
pub device_path: String,
}
/// Compute GPT-only plan per topology and constraints.
///
/// Layout defaults:
/// - BIOS boot: cfg.partitioning.bios_boot if enabled (size_mib)
/// - ESP: cfg.partitioning.esp.size_mib, GPT name cfg.partitioning.esp.gpt_name (typically "zosboot")
/// - Data: remainder, GPT name cfg.partitioning.data.gpt_name ("zosdata")
/// - Cache (only for SSD/HDD topology): remainder on SSD after boot/ESP, GPT name cfg.partitioning.cache.gpt_name ("zoscache")
///
/// Topology mapping:
/// - BtrfsSingle / BcachefsSingle: use first eligible disk; create BIOS (opt) + ESP + Data
/// - DualIndependent: need at least 2 disks; disk0: BIOS (opt) + ESP + Data, disk1: Data
/// - BtrfsRaid1: need at least 2 disks; disk0: BIOS (opt) + ESP + Data, disk1: Data
/// - Bcachefs2Copy: need at least 2 disks; disk0: BIOS (opt) + ESP + Data, disk1: Data
/// - SsdHddBcachefs: need >=1 SSD (rotational=false) and >=1 HDD (rotational=true);
/// SSD: BIOS (opt) + ESP + Cache; HDD: Data
pub fn plan_partitions(disks: &[Disk], cfg: &Config) -> Result<PartitionPlan> {
let align = cfg.partitioning.alignment_mib;
let require_empty = cfg.partitioning.require_empty_disks;
// If system booted via UEFI, suppress the BIOS boot partition even if enabled in config.
let add_bios = cfg.partitioning.bios_boot.enabled && !is_efi_boot();
if disks.is_empty() {
return Err(Error::Partition(
"no disks provided to partition planner".into(),
));
}
let mut plans: Vec<DiskPlan> = Vec::new();
match cfg.topology {
Topology::BtrfsSingle => {
let d0 = &disks[0];
let mut parts = Vec::new();
if add_bios {
parts.push(PartitionSpec {
role: PartRole::BiosBoot,
size_mib: Some(cfg.partitioning.bios_boot.size_mib),
@@ -154,18 +166,48 @@ pub fn plan_partitions(disks: &[Disk], cfg: &Config) -> Result<PartitionPlan> {
size_mib: None,
gpt_name: cfg.partitioning.data.gpt_name.clone(),
});
plans.push(DiskPlan {
disk: d0.clone(),
parts,
});
}
Topology::BcachefsSingle => {
let d0 = &disks[0];
let mut parts = Vec::new();
if add_bios {
parts.push(PartitionSpec {
role: PartRole::BiosBoot,
size_mib: Some(cfg.partitioning.bios_boot.size_mib),
gpt_name: cfg.partitioning.bios_boot.gpt_name.clone(),
});
}
parts.push(PartitionSpec {
role: PartRole::Esp,
size_mib: Some(cfg.partitioning.esp.size_mib),
gpt_name: cfg.partitioning.esp.gpt_name.clone(),
});
parts.push(PartitionSpec {
role: PartRole::Data,
size_mib: None,
gpt_name: cfg.partitioning.data.gpt_name.clone(),
});
plans.push(DiskPlan {
disk: d0.clone(),
parts,
});
}
Topology::DualIndependent => {
if disks.len() < 2 {
return Err(Error::Partition(
"DualIndependent topology requires at least 2 disks".into(),
));
}
let d0 = &disks[0];
let d1 = &disks[1];
// Disk 0: BIOS (opt) + ESP + Data
let mut parts0 = Vec::new();
if add_bios {
parts0.push(PartitionSpec {
role: PartRole::BiosBoot,
size_mib: Some(cfg.partitioning.bios_boot.size_mib),
@@ -182,7 +224,10 @@ pub fn plan_partitions(disks: &[Disk], cfg: &Config) -> Result<PartitionPlan> {
size_mib: None,
gpt_name: cfg.partitioning.data.gpt_name.clone(),
});
plans.push(DiskPlan {
disk: d0.clone(),
parts: parts0,
});
// Disk 1: Data only
let mut parts1 = Vec::new();
@@ -191,18 +236,23 @@ pub fn plan_partitions(disks: &[Disk], cfg: &Config) -> Result<PartitionPlan> {
size_mib: None,
gpt_name: cfg.partitioning.data.gpt_name.clone(),
});
plans.push(DiskPlan {
disk: d1.clone(),
parts: parts1,
});
}
Topology::BtrfsRaid1 => {
if disks.len() < 2 {
return Err(Error::Partition(
"BtrfsRaid1 topology requires at least 2 disks".into(),
));
}
let d0 = &disks[0];
let d1 = &disks[1];
// Disk 0: BIOS (opt) + ESP + Data
let mut parts0 = Vec::new();
if add_bios {
parts0.push(PartitionSpec {
role: PartRole::BiosBoot,
size_mib: Some(cfg.partitioning.bios_boot.size_mib),
@@ -219,7 +269,10 @@ pub fn plan_partitions(disks: &[Disk], cfg: &Config) -> Result<PartitionPlan> {
size_mib: None,
gpt_name: cfg.partitioning.data.gpt_name.clone(),
});
plans.push(DiskPlan {
disk: d0.clone(),
parts: parts0,
});
// Disk 1: Data only (for RAID1)
let mut parts1 = Vec::new();
@@ -228,18 +281,68 @@ pub fn plan_partitions(disks: &[Disk], cfg: &Config) -> Result<PartitionPlan> {
size_mib: None,
gpt_name: cfg.partitioning.data.gpt_name.clone(),
});
plans.push(DiskPlan {
disk: d1.clone(),
parts: parts1,
});
}
Topology::Bcachefs2Copy => {
if disks.len() < 2 {
return Err(Error::Partition(
"Bcachefs2Copy topology requires at least 2 disks".into(),
));
}
let d0 = &disks[0];
let d1 = &disks[1];
// Disk 0: BIOS (opt) + ESP + Data
let mut parts0 = Vec::new();
if add_bios {
parts0.push(PartitionSpec {
role: PartRole::BiosBoot,
size_mib: Some(cfg.partitioning.bios_boot.size_mib),
gpt_name: cfg.partitioning.bios_boot.gpt_name.clone(),
});
}
parts0.push(PartitionSpec {
role: PartRole::Esp,
size_mib: Some(cfg.partitioning.esp.size_mib),
gpt_name: cfg.partitioning.esp.gpt_name.clone(),
});
parts0.push(PartitionSpec {
role: PartRole::Data,
size_mib: None,
gpt_name: cfg.partitioning.data.gpt_name.clone(),
});
plans.push(DiskPlan {
disk: d0.clone(),
parts: parts0,
});
// Disk 1: Data only
let mut parts1 = Vec::new();
parts1.push(PartitionSpec {
role: PartRole::Data,
size_mib: None,
gpt_name: cfg.partitioning.data.gpt_name.clone(),
});
plans.push(DiskPlan {
disk: d1.clone(),
parts: parts1,
});
}
Topology::SsdHddBcachefs => {
// Choose SSD (rotational=false) and HDD (rotational=true)
let ssd = disks.iter().find(|d| !d.rotational).ok_or_else(|| {
Error::Partition("SsdHddBcachefs requires an SSD (non-rotational) disk".into())
})?;
let hdd = disks.iter().find(|d| d.rotational).ok_or_else(|| {
Error::Partition("SsdHddBcachefs requires an HDD (rotational) disk".into())
})?;
// SSD: BIOS (opt) + ESP + Cache remainder
let mut parts_ssd = Vec::new();
if add_bios {
parts_ssd.push(PartitionSpec {
role: PartRole::BiosBoot,
size_mib: Some(cfg.partitioning.bios_boot.size_mib),
@@ -256,7 +359,10 @@ pub fn plan_partitions(disks: &[Disk], cfg: &Config) -> Result<PartitionPlan> {
size_mib: None,
gpt_name: cfg.partitioning.cache.gpt_name.clone(),
});
plans.push(DiskPlan {
disk: ssd.clone(),
parts: parts_ssd,
});
// HDD: Data remainder
let mut parts_hdd = Vec::new();
@@ -265,7 +371,10 @@ pub fn plan_partitions(disks: &[Disk], cfg: &Config) -> Result<PartitionPlan> {
size_mib: None,
gpt_name: cfg.partitioning.data.gpt_name.clone(),
});
plans.push(DiskPlan {
disk: hdd.clone(),
parts: parts_hdd,
});
}
}
@@ -276,13 +385,195 @@ pub fn plan_partitions(disks: &[Disk], cfg: &Config) -> Result<PartitionPlan> {
})
}
/**
Apply the partition plan using system utilities (sgdisk) via util wrappers.
Safety:
- Verifies target disks are empty when required (defense-in-depth; orchestrator should also enforce).
- Ensures unique partition GUIDs by relying on sgdisk defaults.
- Calls udev settle after changes to ensure /dev nodes exist.
Notes:
- Uses sgdisk -og to create a new GPT on empty disks.
- Adds partitions in declared order using -n (auto-aligned), -t (type code), -c (GPT name).
- Derives partition device paths: NVMe uses "pN" suffix; others use trailing "N".
- Captures per-partition GUID and geometry via `sgdisk -i <N> <disk>`.
*/
pub fn apply_partitions(plan: &PartitionPlan) -> Result<Vec<PartitionResult>> {
// Locate required tools
let Some(sgdisk) = which_tool("sgdisk")? else {
return Err(Error::Partition("sgdisk not found in PATH".into()));
};
// Helper: map role to GPT type code (gdisk codes)
fn type_code(role: PartRole) -> &'static str {
match role {
PartRole::BiosBoot => "ef02", // BIOS boot partition (for GRUB BIOS on GPT)
PartRole::Esp => "ef00", // EFI System Partition
PartRole::Data => "8300", // Linux filesystem
PartRole::Cache => "8300", // Treat cache as Linux filesystem (bcachefs)
}
}
// Helper: build partition device path for a given disk and partition number
fn part_dev_path(disk_path: &str, part_number: u32) -> String {
if disk_path.starts_with("/dev/nvme") {
format!("{disk_path}p{part_number}")
} else {
format!("{disk_path}{part_number}")
}
}
// Helper: sector size in bytes for disk (fallback 512 with warning)
fn sector_size_bytes(disk_path: &str) -> Result<u64> {
if let Some(blockdev) = which_tool("blockdev")? {
let out = run_cmd_capture(&[blockdev.as_str(), "--getss", disk_path])?;
let s = out.stdout.trim();
return s.parse::<u64>().map_err(|e| {
Error::Partition(format!(
"failed to parse sector size from blockdev for {}: {}",
disk_path, e
))
});
}
warn!(
"blockdev not found; assuming 512-byte sectors for {}",
disk_path
);
Ok(512)
}
// Helper: parse sgdisk -i output to (unique_guid, first_sector, last_sector)
fn parse_sgdisk_info(info: &str) -> Result<(String, u64, u64)> {
let mut guid = String::new();
let mut first: Option<u64> = None;
let mut last: Option<u64> = None;
for line in info.lines() {
let line = line.trim();
if let Some(rest) = line.strip_prefix("Partition unique GUID:") {
guid = rest.trim().to_string();
} else if let Some(rest) = line.strip_prefix("First sector:") {
// Format: "First sector: 2048 (at 1024.0 KiB)"
let val = rest.trim().split_whitespace().next().unwrap_or("");
if !val.is_empty() {
first = Some(
val.parse::<u64>()
.map_err(|e| Error::Partition(format!("parse first sector: {}", e)))?,
);
}
} else if let Some(rest) = line.strip_prefix("Last sector:") {
let val = rest.trim().split_whitespace().next().unwrap_or("");
if !val.is_empty() {
last = Some(
val.parse::<u64>()
.map_err(|e| Error::Partition(format!("parse last sector: {}", e)))?,
);
}
}
}
let first =
first.ok_or_else(|| Error::Partition("sgdisk -i missing First sector".into()))?;
let last = last.ok_or_else(|| Error::Partition("sgdisk -i missing Last sector".into()))?;
if guid.is_empty() {
return Err(Error::Partition(
"sgdisk -i missing Partition unique GUID".into(),
));
}
Ok((guid, first, last))
}
let mut results: Vec<PartitionResult> = Vec::new();
for dp in &plan.disks {
let disk_path = dp.disk.path.as_str();
// Defense-in-depth: verify emptiness when required
if plan.require_empty_disks {
let empty = idempotency::is_empty_disk(&dp.disk)?;
if !empty {
return Err(Error::Validation(format!(
"target disk {} is not empty (partitions or signatures present)",
dp.disk.path
)));
}
}
debug!("apply_partitions: creating GPT on {}", disk_path);
// Initialize (or re-initialize) a new empty GPT; requires truly empty disks per policy
run_cmd(&[sgdisk.as_str(), "-og", disk_path])?;
// Create partitions in order
for (idx0, spec) in dp.parts.iter().enumerate() {
let part_num = (idx0 as u32) + 1;
let size_arg = match spec.size_mib {
Some(mib) => format!("+{}M", mib), // rely on sgdisk MiB suffix support
None => String::from("0"), // consume remainder
};
// Use automatic aligned start (0) and specified size
let n_arg = format!("{}:0:{}", part_num, size_arg);
let t_arg = format!("{}:{}", part_num, type_code(spec.role));
let c_arg = format!("{}:{}", part_num, spec.gpt_name);
debug!(
"apply_partitions: {} -n {} -t {} -c {} {}",
sgdisk, n_arg, t_arg, c_arg, disk_path
);
run_cmd(&[
sgdisk.as_str(),
"-n",
n_arg.as_str(),
"-t",
t_arg.as_str(),
"-c",
c_arg.as_str(),
disk_path,
])?;
}
// Settle udev so new partitions appear under /dev
udev_settle(5_000)?;
// Gather per-partition details and build results
let sector_bytes = sector_size_bytes(disk_path)?;
let mib_div: u64 = 1024 * 1024;
for (idx0, spec) in dp.parts.iter().enumerate() {
let part_num = (idx0 as u32) + 1;
// Query sgdisk for partition info
let i_arg = format!("{}", part_num);
let info_out = run_cmd_capture(&[sgdisk.as_str(), "-i", i_arg.as_str(), disk_path])?;
let (unique_guid, first_sector, last_sector) = parse_sgdisk_info(&info_out.stdout)?;
let sectors = if last_sector >= first_sector {
last_sector - first_sector + 1
} else {
0
};
let start_mib = (first_sector.saturating_mul(sector_bytes)) / mib_div;
let size_mib = (sectors.saturating_mul(sector_bytes)) / mib_div;
let dev_path = part_dev_path(disk_path, part_num);
results.push(PartitionResult {
disk: dp.disk.path.clone(),
part_number: part_num,
role: spec.role,
gpt_name: spec.gpt_name.clone(),
uuid: unique_guid,
start_mib,
size_mib,
device_path: dev_path,
});
}
}
debug!(
"apply_partitions: created {} partition entries",
results.len()
);
Ok(results)
}
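Two of the rules baked into apply_partitions above are easy to sanity-check in isolation: the partition device-path derivation (NVMe devices insert a "p" before the partition number) and the sector-to-MiB size math. A minimal standalone sketch, using hypothetical free functions mirroring the in-function helpers:

```rust
/// NVMe block devices name partitions with a "p" infix (/dev/nvme0n1p1);
/// other devices append the number directly (/dev/sda1).
fn part_dev_path(disk_path: &str, part_number: u32) -> String {
    if disk_path.starts_with("/dev/nvme") {
        format!("{disk_path}p{part_number}")
    } else {
        format!("{disk_path}{part_number}")
    }
}

/// Convert an inclusive sector range to a size in MiB, matching the
/// saturating arithmetic used when building PartitionResult entries.
fn size_mib(first_sector: u64, last_sector: u64, sector_bytes: u64) -> u64 {
    let sectors = last_sector.saturating_sub(first_sector) + 1;
    sectors.saturating_mul(sector_bytes) / (1024 * 1024)
}

fn main() {
    assert_eq!(part_dev_path("/dev/nvme0n1", 2), "/dev/nvme0n1p2");
    assert_eq!(part_dev_path("/dev/sda", 1), "/dev/sda1");
    // 2048..=1050623 at 512 B/sector = 1,048,576 sectors = 512 MiB
    assert_eq!(size_mib(2048, 1_050_623, 512), 512);
}
```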


@@ -9,4 +9,4 @@
pub mod state;
pub use state::*;


@@ -77,4 +77,4 @@ pub fn build_report(
/// Write the state report JSON to disk (default path in config: /run/zosstorage/state.json).
pub fn write_report(_report: &StateReport, _path: &str) -> Result<()> {
todo!("serialize to JSON and persist atomically via tempfile and rename")
}


@@ -2,15 +2,38 @@
//!
//! Mirrors docs in [docs/SCHEMA.md](docs/SCHEMA.md) and is loaded/validated by
//! [fn load_and_merge()](src/config/loader.rs:1) and [fn validate()](src/config/loader.rs:1).
//
// REGION: API
// api: types::Topology { BtrfsSingle, BcachefsSingle, DualIndependent, Bcachefs2Copy, SsdHddBcachefs, BtrfsRaid1 }
// api: types::Config { logging, device_selection, topology, partitioning, filesystem, mount, report }
// api: types::Partitioning { alignment_mib, require_empty_disks, bios_boot, esp, data, cache }
// api: types::FsOptions { btrfs, bcachefs, vfat }
// REGION: API-END
//
// REGION: RESPONSIBILITIES
// - Define serde-serializable configuration types and enums used across modules.
// - Keep field names and enums stable; update docs/SCHEMA.md when public surface changes.
// REGION: RESPONSIBILITIES-END
use clap::ValueEnum;
use serde::{Deserialize, Serialize};
/// Reserved filesystem labels.
pub const LABEL_ZOSBOOT: &str = "ZOSBOOT";
pub const LABEL_ZOSDATA: &str = "ZOSDATA";
pub const LABEL_ZOSCACHE: &str = "ZOSCACHE";
/// Reserved GPT partition names.
pub const GPT_NAME_ZOSBOOT: &str = "zosboot";
pub const GPT_NAME_ZOSDATA: &str = "zosdata";
pub const GPT_NAME_ZOSCACHE: &str = "zoscache";
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct LoggingConfig {
/// Log level: "error" | "warn" | "info" | "debug"
pub level: String, // default "info"
/// When true, also log to /run/zosstorage/zosstorage.log
pub to_file: bool, // default false
}
#[derive(Debug, Clone, Serialize, Deserialize)]
@@ -25,19 +48,47 @@ pub struct DeviceSelection {
pub min_size_gib: u64,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize, ValueEnum)]
#[serde(rename_all = "snake_case")]
#[value(rename_all = "snake_case")]
pub enum Topology {
/// Single eligible disk; btrfs on remainder.
#[value(alias = "btrfs-single")]
BtrfsSingle,
/// Single eligible disk; bcachefs on remainder.
#[value(alias = "bcachefs-single")]
BcachefsSingle,
/// Independent btrfs filesystems on each data partition (any number of disks).
#[value(alias = "dual-independent")]
DualIndependent,
/// SSD + HDD; bcachefs with SSD cache/promote and HDD backing.
#[value(alias = "ssd-hdd-bcachefs")]
SsdHddBcachefs,
/// Multi-device bcachefs with two replicas (data+metadata).
/// Canonical token: bcachefs-2copy
#[serde(rename = "bcachefs-2copy")]
#[value(alias = "bcachefs-2copy")]
Bcachefs2Copy,
/// Optional mirrored btrfs across two disks when explicitly requested.
#[value(alias = "btrfs-raid1")]
BtrfsRaid1,
}
impl std::fmt::Display for Topology {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let s = match self {
Topology::BtrfsSingle => "btrfs_single",
Topology::BcachefsSingle => "bcachefs_single",
Topology::DualIndependent => "dual_independent",
Topology::SsdHddBcachefs => "ssd_hdd_bcachefs",
// Canonical single notation for two-copy bcachefs topology
Topology::Bcachefs2Copy => "bcachefs-2copy",
Topology::BtrfsRaid1 => "btrfs_raid1",
};
f.write_str(s)
}
}
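The commit message notes that the parser was aligned with Display for the canonical "bcachefs-2copy" token. A simplified standalone sketch (only two variants shown; the real enum and parser live in config::types and the config loader, and the alias set here is illustrative) of how Display output round-trips through a FromStr that also accepts the snake_case spelling:

```rust
use std::str::FromStr;

// Simplified stand-in for config::types::Topology.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Topology {
    DualIndependent,
    Bcachefs2Copy,
}

impl std::fmt::Display for Topology {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.write_str(match self {
            Topology::DualIndependent => "dual_independent",
            // Canonical token is hyphenated, unlike the snake_case variants.
            Topology::Bcachefs2Copy => "bcachefs-2copy",
        })
    }
}

impl FromStr for Topology {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "dual_independent" | "dual-independent" => Ok(Topology::DualIndependent),
            "bcachefs-2copy" | "bcachefs_2copy" => Ok(Topology::Bcachefs2Copy),
            other => Err(format!("unknown topology: {other}")),
        }
    }
}

fn main() {
    // Whatever Display emits must parse back to the same variant.
    for t in [Topology::DualIndependent, Topology::Bcachefs2Copy] {
        assert_eq!(t.to_string().parse::<Topology>().unwrap(), t);
    }
}
```

This round-trip property is what keeps serialized state files readable after the rename.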
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct BiosBootSpec {
/// Whether to create a tiny BIOS boot partition.
@@ -167,4 +218,4 @@ pub struct Config {
pub mount: MountScheme,
/// Report output configuration.
pub report: ReportOptions,
}


@@ -4,11 +4,13 @@
// api: util::run_cmd(args: &[&str]) -> crate::Result<()>
// api: util::run_cmd_capture(args: &[&str]) -> crate::Result<CmdOutput>
// api: util::udev_settle(timeout_ms: u64) -> crate::Result<()>
// api: util::is_efi_boot() -> bool
// REGION: API-END
//
// REGION: RESPONSIBILITIES
// - Centralize external tool discovery and invocation (sgdisk, blkid, mkfs.*, udevadm).
// - Provide capture and error mapping to crate::Error consistently.
// - Provide environment helpers (udev settle, boot mode detection).
// Non-goals: business logic (planning/validation), direct parsing of complex outputs beyond what callers need.
// REGION: RESPONSIBILITIES-END
//
@@ -38,6 +40,7 @@
//! and consistent error handling.
use crate::{Error, Result};
use std::path::Path;
use std::process::Command;
use tracing::{debug, warn};
@@ -74,9 +77,10 @@ pub fn run_cmd(args: &[&str]) -> Result<()> {
)));
}
debug!(target: "util.run_cmd", "exec: {:?}", args);
let output = Command::new(args[0])
.args(&args[1..])
.output()
.map_err(|e| Error::Other(anyhow::anyhow!("failed to spawn {:?}: {}", args, e)))?;
let status_code = output.status.code().unwrap_or(-1);
if !output.status.success() {
@@ -100,9 +104,10 @@ pub fn run_cmd_capture(args: &[&str]) -> Result<CmdOutput> {
)));
}
debug!(target: "util.run_cmd_capture", "exec: {:?}", args);
let output = Command::new(args[0])
.args(&args[1..])
.output()
.map_err(|e| Error::Other(anyhow::anyhow!("failed to spawn {:?}: {}", args, e)))?;
let status_code = output.status.code().unwrap_or(-1);
let stdout = String::from_utf8_lossy(&output.stdout).to_string();
let stderr = String::from_utf8_lossy(&output.stderr).to_string();
@@ -147,6 +152,14 @@ pub fn udev_settle(timeout_ms: u64) -> Result<()> {
}
}
/// Detect whether the current system booted via UEFI (initramfs-friendly).
///
/// Returns true when /sys/firmware/efi exists (standard on UEFI boots).
/// Returns false on legacy BIOS boots where that path is absent.
pub fn is_efi_boot() -> bool {
Path::new("/sys/firmware/efi").exists()
}
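The same detection can be sketched from a shell, e.g. when debugging inside the initramfs (the path check is the standard one; the echoed labels are illustrative):

```shell
# Mirrors is_efi_boot(): /sys/firmware/efi exists only on UEFI boots.
if [ -d /sys/firmware/efi ]; then
    echo "UEFI"
else
    echo "BIOS"
fi
```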
#[cfg(test)]
mod tests {
use super::*;
@@ -194,4 +207,4 @@ mod tests {
// Should never fail even if udevadm is missing.
udev_settle(1000).expect("udev_settle should be non-fatal");
}
}